Instruction: Is language a barrier to the use of preventive services?
Abstracts:
abstract_id: PUBMED:9276652
Is language a barrier to the use of preventive services? Objective: To isolate the effect of spoken language from financial barriers to care, we examined the relation of language to use of preventive services in a system with universal access.
Design: Cross-sectional survey.
Setting: Household population of women living in Ontario, Canada, in 1990.
Participants: Subjects were 22,448 women completing the 1990 Ontario Health Survey, a population-based random sample of households.
Measurements And Main Results: We defined language as the language spoken in the home and assessed self-reported receipt of breast examination, mammogram and Pap testing. We used logistic regression to calculate odds ratios for each service adjusting for potential sources of confounding: socio-economic characteristics, contact with the health care system, and measures reflecting culture. Ten percent of the women spoke a non-English language at home (4% French, 6% other). After adjustment, compared with English speakers, French-speaking women were significantly less likely to receive breast exams or mammography, and other language speakers were less likely to receive Pap testing.
Conclusions: Women whose main spoken language was not English were less likely to receive important preventive services. Improving communication with patients with limited English may enhance participation in screening programs.
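As an aside, the adjusted odds ratios reported in the abstract above come from a multivariable logistic regression. Below is a minimal sketch of that kind of adjustment; the variable names, the toy data, and the use of statsmodels are illustrative assumptions, not the study's actual code or data.

```python
# Illustrative only: adjusted odds ratios from a multivariable logistic
# regression, in the spirit of the survey analysis summarised above.
# All variable names and the toy data frame are hypothetical assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "pap_test": rng.binomial(1, 0.7, n),                         # received Pap test (0/1)
    "home_language": rng.choice(["English", "French", "Other"], n),
    "age": rng.integers(20, 70, n),
    "low_income": rng.binomial(1, 0.3, n),
    "has_regular_md": rng.binomial(1, 0.8, n),
})

# Logistic regression adjusting for potential confounders; English is the
# reference category for language spoken at home.
model = smf.logit(
    "pap_test ~ C(home_language, Treatment('English')) + age + low_income + has_regular_md",
    data=df,
).fit(disp=False)

# Odds ratios are the exponentiated coefficients; conf_int() gives 95% CIs.
ci = model.conf_int()
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
})
print(odds_ratios.round(2))
```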
abstract_id: PUBMED:27606643
Validation of the Spanish-language version of the Rapid Assessment for Adolescent Preventive Services among Colombian adolescents. Seventy percent of adolescent morbidity and mortality is related to six risky behaviors. The Rapid Assessment for Adolescent Preventive Services is a screening questionnaire consisting of 21 questions, but there is no validated Spanish-language version. The objective of this study was to validate the Spanish-language version of the Rapid Assessment for Adolescent Preventive Services in two Colombian cities: Bucaramanga and Medellin. The questionnaire was administered to 270 randomly selected adolescent students aged between 11 and 19 years. Its internal consistency, measured using Cronbach's alpha, was 0.7207. The factor analysis showed that two factors accounted for 84.5% of variance, but factor loading indicates that only one of these is valid in Colombia: substance use (tobacco, alcohol, narcotics, and psychoactive substances).
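For readers unfamiliar with the internal-consistency figure quoted above, a minimal sketch of how Cronbach's alpha is computed follows. The response matrix is simulated and purely hypothetical; it is not the Colombian study data.

```python
# Minimal sketch: Cronbach's alpha for a questionnaire's internal consistency.
# The item matrix is a hypothetical stand-in (rows = respondents, columns = items).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                                  # number of items
    item_variances = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)      # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses: 270 adolescents answering 21 yes/no screening items.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(270, 21))
print(f"Cronbach's alpha = {cronbach_alpha(responses):.4f}")
```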
abstract_id: PUBMED:21665066
Impact of communication on preventive services among deaf American Sign Language users. Background: Deaf American Sign Language (ASL) users face communication and language barriers that limit healthcare communication with their providers. Prior research has not examined preventive services with ASL-skilled clinicians.
Purpose: The goal of this study was to determine whether provider language concordance is associated with improved receipt of preventive services among deaf respondents.
Methods: This cross-sectional study included 89 deaf respondents aged 50-75 years from the Deaf Health Survey (2008), a Behavioral Risk Factor Surveillance System survey adapted for use with deaf ASL users. Association between the respondent's communication method with the provider (i.e., categorized as either concordant-doctor signs or discordant-other) and preventive services use was assessed using logistic regression adjusting for race, gender, income, health status, health insurance, and education. Analyses were conducted in 2010.
Results: Deaf respondents who reported having a concordant provider were more likely to report a greater number of preventive services (OR=3.42, 95% CI=1.31, 8.93, p=0.0122) when compared to deaf respondents who reported having a discordant provider even after adjusting for race, gender, income, health status, health insurance, and education. In unadjusted analyses, deaf respondents who reported having a concordant provider were more likely to receive an influenza vaccination in the past year (OR=4.55, p=0.016) when compared to respondents who had a discordant provider.
Conclusions: Language-concordant patient-provider communication is associated with higher appropriate use of preventive services by deaf ASL users.
abstract_id: PUBMED:25729102
Patient-physician language concordance and use of preventive care services among limited English proficient Latinos and Asians. Objectives: Patient-physician language concordance among limited English proficient (LEP) patients is associated with better outcomes for specific clinical conditions. Whether or not language concordance contributes to use of specific preventive care services is unclear.
Methods: We pooled data from the 2007 and 2009 California Health Interview Surveys to examine mammography, colorectal cancer (CRC) screening, and influenza vaccination use among self-identified LEP Latino and Asian (i.e., Chinese, Korean, and Vietnamese) immigrants. We defined language concordance by respondents reporting that their physician spoke their non-English language. Analyses were completed in 2013-2014.
Results: Language concordance did not appear to facilitate mammography use among Latinas (adjusted odds ratio [AOR] = 1.02, 95% confidence interval [CI] 0.72, 1.45). Among Asian women, we could not definitively exclude a negative association of language concordance with mammography (AOR=0.55, 95% CI 0.27, 1.09). Patient-physician language concordance was associated with lower odds of CRC screening among Asians but not Latinos (Asian AOR=0.50, 95% CI 0.29, 0.86; Latino AOR=0.85, 95% CI 0.56, 1.28). Influenza vaccination did not differ by physician language use among either Latinos or Asians.
Conclusions: Patient-physician language concordance was not associated with higher use of mammography, CRC screening, or influenza vaccination. Language concordance was negatively associated with CRC screening among Asians for reasons that require further research. Future research should isolate the impact of language concordance on the use of preventive care services from health system factors.
abstract_id: PUBMED:9187576
Acculturation, access to care, and use of preventive services by Hispanics: findings from HHANES 1982-84. Use of preventive health services (physical, dental, and eye examinations, Pap smear and breast examinations) among Mexican American, Cuban American, and Puerto Rican adults (ages 20-74) was investigated with data from the HHANES. Analyses focused on the relative importance of two predictors of recency of screening: access to services (health insurance coverage, having a routine place for care, type of facility used, having a regular provider, travel time) and acculturation (spoken and written language, ethnic identification). Regression analyses controlling for age, education, and income indicated that utilization of the preventive services was predicted more strongly by access to care than by acculturation. For each Hispanic group, having a routine place for health care, health insurance coverage, and a regular provider were each significantly associated with greater recency of screening. Type of facility used and travel time produced less consistent effects. These results replicate past studies that have demonstrated the important link between institutional access and use of health services. Of the acculturation variables, language but not ethnic identification (which was measured only for the Mexican Americans) predicted use. This latter finding, which has been demonstrated in other studies as well, suggests that the effect of language on screening practices should not be interpreted as a cultural factor, but as an access factor, i.e. use of English favors access to services.
abstract_id: PUBMED:18799780
Language spoken and differences in health status, access to care, and receipt of preventive services among US Hispanics. Objectives: We examined self-reported health status, health behaviors, access to care, and use of preventive services of the US Hispanic adult population to identify language-associated disparities.
Methods: We analyzed 2003 to 2005 Behavioral Risk Factor Surveillance System data from 45 076 Hispanic adults in 23 states, who represented 90% of the US Hispanic population, and compared 25 health indicators between Spanish-speaking Hispanics and English-speaking Hispanics.
Results: Physical activity and rates of chronic disease, obesity, and smoking were significantly lower among Spanish-speaking Hispanics than among English-speaking Hispanics. Spanish-speaking Hispanics reported far worse health status and access to care than did English-speaking Hispanics (39% vs 17% in fair or poor health, 55% vs 23% uninsured, and 58% vs 29% without a personal doctor) and received less preventive care. Adjustment for demographic and socioeconomic factors did not mitigate the influence of language on these health indicators.
Conclusions: Spanish-language preference marks a particularly vulnerable subpopulation of US Hispanics who have less access to care and use of preventive services. Priority areas for Spanish-speaking adults include maintenance of healthy behaviors, promotion of physical activity and preventive health care, and increased access to care.
abstract_id: PUBMED:31318468
Factors associated with the intention to use adult preventive health services in Taiwan. Objective: This research aimed to examine the factors associated with the intention to use adult preventive health services in Taiwan.
Design And Sample: Using Andersen's behavioral model, we employed a cross-sectional descriptive design to investigate 500 samples from four communities in southern Taiwan.
Measures: We used a self-reported survey to assess participants' intention to use adult preventive health services, and the predisposing, enabling, and need factors influencing their intention.
Results: Intention to use adult preventive health services was more significantly explained by predisposing and enabling factors than by need factors. In addition, a lack of fixed medical facilities (enabling factor) and Taiwanese origin (predisposing factor) were associated with decreased odds of intention to use adult preventive health services. An educational level of high school or below (predisposing factor), higher amounts of exercise (predisposing factor), and lower barriers to use preventive health services (predisposing factor) were associated with increased odds of intention to use adult preventive health services.
Conclusion: The findings can assist public health nurses in identifying high-risk groups with lower intentions of using adult preventive health services. Additionally, community-based health education programs can be developed to increase people's intention to use adult preventive health services.
abstract_id: PUBMED:27778218
Preventive services use among female survivors of adolescent and young adult cancer. Purpose: Examine preventive services utilization among female survivors of adolescent and young adult (AYA) cancer compared with women without cancer in the USA.
Methods: A total of 1017 women diagnosed with cancer at AYA ages (15-39 years) who were at least 5 years since diagnosis were identified from the 2008 to 2012 Medical Expenditure Panel Surveys. A comparison group without cancer was matched on age and other characteristics. General preventive services included dental, medical, blood pressure, and cholesterol checkups, and flu shots in the previous year. Cancer-related services included Pap smear and mammography. Preventive services and covariates (demographics, socioeconomics, and health status) were compared between groups using χ2 tests. Ordinal logistic regression identified covariates associated with general preventive services use.
Results: Female survivors reported dental checkups less often (57.8 vs. 72.4 %, p < 0.001) than the comparison group and checked their blood pressure (90 vs. 86.7 %, p = 0.045) and cholesterol (67.5 vs. 61.7 %, p = 0.045) more often. No differences were found in flu shots, medical checkups, and cancer-related services. Survivors without insurance were less likely to use general preventive services (odds ratio [OR] = 0.2, 95 % confidence interval [CI] 0.12-0.35, p < 0.001). Older survivors (OR = 3.09, 95 % CI 1.69-5.62, p < 0.001) and those who speak Spanish or other languages at home (OR = 3.19, 95 % CI 1.33-7.67, p = 0.01) were more likely to use general prevention than their counterparts.
Conclusion: Overall, female survivors were as likely as the comparison group to use preventive services, except dental services, blood pressure, and cholesterol checks.
Implications For Cancer Survivors: Survivors may require support to use recommended preventive services more effectively, especially the younger and uninsured who may be at greater risk for underuse.
abstract_id: PUBMED:34433384
Receipt of preventive health services and current cannabis users. Substance use is associated with greater barriers and reduced access to care. Little research, however, has examined the relationship between cannabis use and receipt of preventive health services. Using data from the 2017 Behavioral Risk Factor Surveillance System, we examined the association between current cannabis use and receipt of 12 preventive health services, adjusting for sociodemographic characteristics and access to care. In analyses that adjusted for sociodemographic factors and access to care, participants with current cannabis use had lower odds of being vaccinated for influenza (AOR = 0.67, 95% CI = 0.54-0.83) and higher odds of ever receiving HPV vaccination (AOR = 1.77, 95% CI = 1.06-2.96) and HIV screening (AOR = 2.34, 95% CI = 1.88-2.92) compared with those without cannabis use. Among the 12 preventive services examined, we found three differences in receipt of preventive services by cannabis use status. Cannabis use does not appear to be associated with significant underuse of preventive services.
abstract_id: PUBMED:15191136
Utilization of preventive dental services by recent immigrants in Quebec. Objectives: Studies conducted in Canada show that recent immigrants tend to under-utilize preventive services provided by the health care system. The objective of our study was to learn whether this phenomenon also applies to dental care.
Methods: Our sample was composed of 5,795 women who live in Quebec and are between 30 and 44 years old. These women filled out a self-administered questionnaire regarding their immigration status and their habits concerning dental visits. In our analyses, we compared the use of dental services of recent immigrants (10 years or less in Canada), long-term immigrants (more than 10 years), and non-immigrants (Canadians of origin).
Results: 55% of recent immigrants are preventive service users compared to 69% of long-term immigrants and 76% of non-immigrants. The financial barrier partly explains this result: immigrants often have a modest income and rarely benefit from dental coverage. However, having private dental insurance does not in itself explain the gap between these groups. This suggests that there is a cultural barrier in dental services access.
Conclusions: Under-use of preventive services by immigrants is not limited to the medical field; it extends to dental services as well. In order to facilitate immigrants' adaptation to the health system, we recommend that the government provide them with free dental services that respect their culture.
Answer: Yes, language appears to be a barrier to the use of preventive services. Studies have shown that individuals who do not speak the dominant language of the country they reside in are less likely to receive important preventive services. For instance, in Ontario, Canada, women whose main spoken language was not English were less likely to receive breast exams, mammography, or Pap testing compared to English speakers (PUBMED:9276652). Similarly, deaf American Sign Language (ASL) users had better receipt of preventive services when they had language-concordant providers, indicating that communication barriers can limit access to preventive care (PUBMED:21665066).
Furthermore, among limited English proficient (LEP) Latinos and Asians in California, patient-physician language concordance did not necessarily lead to higher use of mammography, colorectal cancer screening, or influenza vaccination, and was negatively associated with colorectal cancer screening among Asians (PUBMED:25729102). This suggests that language concordance alone may not be sufficient to improve the use of preventive services and that other factors may also play a role.
Hispanics in the US who preferred speaking Spanish reported worse health status, access to care, and received less preventive care compared to their English-speaking counterparts (PUBMED:18799780). Additionally, language barriers have been shown to affect the use of preventive dental services by recent immigrants in Quebec, indicating that under-utilization of preventive services extends beyond medical care to dental services as well (PUBMED:15191136).
Overall, these findings suggest that language barriers contribute to disparities in the use of preventive health services, and improving communication between healthcare providers and patients with limited proficiency in the dominant language may enhance participation in screening programs and access to preventive care.
Instruction: Aging, health, and depressive symptoms: are women and men different?
Abstracts:
abstract_id: PUBMED:24533913
Comprehensive evaluation of androgen replacement therapy in aging Japanese men with late-onset hypogonadism. Objective: This study assessed the efficacy and safety of testosterone replacement therapy (TRT) in aging Japanese men with late-onset hypogonadism (LOH).
Methods: This study included 50 (median age: 57.7 years) Japanese men with LOH, who were consecutively enrolled and treated with TRT for at least six months at our institution. We evaluated the following measurements before and after six months of treatment with TRT as follows: blood tests, prostate volume, residual urine volume, self-ratings for International Index of Erectile Function 5 (IIEF-5), International Prostate Symptom Score (IPSS), Self-Rating Depression Scale (SDS), Aging Male Symptom (AMS) and the Medical Outcomes Study 8-item Short-Form health survey (SF-8).
Results: Following six months of TRT, the levels of testosterone, red blood cells, hemoglobin and hematocrit were significantly increased from baseline, while total cholesterol level was significantly decreased from baseline. Furthermore, TRT led to a significant increase in IIEF-5 score and a significant decrease in IPSS score. Of 30 men who were diagnosed with depression at baseline, only 11 men (36.7%) were still suffering from depression after TRT, and SDS scores were significantly decreased from baseline at month six. Treatment with TRT led to a significant decrease in all scores of the AMS scale as well as a significant improvement in all scores of the SF-8 survey, with the exception of the bodily pain score.
Conclusion: These findings suggest that TRT is an effective and safe treatment for aging Japanese men with LOH. TRT improved depressive symptoms as well as health-related quality of life.
abstract_id: PUBMED:28067548
The association between negative attitudes toward aging and mental health among middle-aged and older gay and heterosexual men in Israel. Objectives: The association between negative attitudes toward aging and mental health (indicated by depressive symptoms, neuroticism, and happiness) was explored among Israeli middle-aged and older gay and heterosexual men.
Method: In a community-dwelling sample, 152 middle-aged and older gay men and 120 middle-aged and older heterosexual men at the age range of 50-87 (M = 59.3, SD = 7.5) completed measures of negative attitudes toward aging, depressive symptoms, neuroticism, and happiness.
Results: After controlling for socio-demographic characteristics, the association between negative attitudes toward aging and mental health was moderated by sexual orientation, demonstrating that negative attitudes toward aging were more strongly associated with adverse mental health concomitants among middle-aged and older gay men compared to middle-aged and older heterosexual men.
Conclusions: The findings suggest vulnerability of middle-aged and older gay men to risks of aging, as their mental health is markedly linked with their negative attitudes toward aging. This vulnerability should be addressed by clinicians and counselors who work with middle-aged and older gay men.
abstract_id: PUBMED:33217248
Benefits and Risks of Testosterone Treatment in Men with Age-Related Decline in Testosterone. The substantial increase in life expectancy of men has focused growing attention on quality-of-life issues associated with reproductive aging. Serum total and free testosterone levels in men, after reaching a peak in the second and third decade of life, decline gradually with advancing age. The trajectory of age-related decline is affected by comorbid conditions, adiposity, medications, and genetic factors. Testosterone treatment of older men with low testosterone levels improves overall sexual activity, sexual desire, and erectile function; improves areal and volumetric bone density, as well as estimated bone strength in the spine and the hip; corrects unexplained anemia of aging; increases skeletal muscle mass, strength and power, self-reported mobility, and some measures of physical function; and modestly improves depressive symptoms. The long-term effects of testosterone on major cardiovascular events and prostate cancer risk remain unclear. The Endocrine Society recommends against testosterone therapy of all older men with low testosterone levels but suggests consideration of treatment on an individualized basis in men who have consistently low testosterone levels and symptoms or conditions suggestive of testosterone deficiency.
abstract_id: PUBMED:26071237
Depressive Symptom Trajectories, Aging-Related Stress, and Sexual Minority Stress Among Midlife and Older Gay Men: Linking Past and Present. We concatenate 28 years of historical depressive symptoms data from a longitudinal cohort study of U.S. gay men who are now midlife and older (n = 312), with newly collected survey data to analyze trajectories of depressive symptomatology over time and their impact on associations between current stress and depressive symptoms. Symptoms are high over time, on average, and follow multiple trajectories. Aging-related stress, persistent life-course sexual minority stress, and increasing sexual minority stress are positively associated with depressive symptoms, net of symptom trajectories. Men who had experienced elevated and increasing trajectories of depressive symptoms are less susceptible to the damaging effects of aging-related stress than those who experienced a decrease in symptoms over time. Intervention efforts aimed at assisting gay men as they age should take into account life-course depressive symptom histories to appropriately contextualize the health effects of current social stressors.
abstract_id: PUBMED:35935592
The Effect of Discrimination and Resilience on Depressive Symptoms among Middle-Aged and Older Men who have Sex with Men. This study investigated if homophobic and racist discrimination increased depressive symptoms among 960 middle-aged and older men who have sex with men (MSM) and how resilience moderated these relationships. We used five waves of longitudinal data from the Healthy Aging sub-study of the Multicenter AIDS Cohort Study (MACS). We used linear regression analyses to model depressive symptoms as a function of discrimination. We used linear mixed analyses to model changes in mean resilience scores across visits. We used linear regression analyses to model depressive symptoms as a function of changes in resilience and to test the moderation effects of resilience on the relationship between discrimination and depressive symptoms. The models accounted for repeated measures of resilience. Men who experienced external and internal homophobia had greater depressive symptoms (β: 2.08; 95% Confidence Interval: 0.65, 3.51; β: 1.60; 95% Confidence Interval: 0.76, 2.44). Men experienced significant changes in mean resilience levels across visits (F = 2.84, p = 0.02). Men with a greater positive change in resilience had lower depressive symptoms (β: -0.95; 95% Confidence Interval: -1.47, -0.43). Men with higher average resilience levels had lower depressive symptoms (β: -5.08; 95% Confidence Interval: -5.68, -4.49). Men's resilience did not moderate the relationship between homophobia and depressive symptoms. Significant associations of external and internal homophobia with greater depressive symptoms present targets for future research and interventions among middle-aged and older MSM. Significant associations of average and positive changes in resilience with lower depressive symptoms provide aims for future research and interventions with this population.
abstract_id: PUBMED:22378711
Grandfather Involvement and aging men's mental health. The mental health of aging men is an understudied social issue. Although it is widely accepted that meaningful family relationships are associated with fewer depressive symptoms and greater positive affect, scholars have largely overlooked relationships between grandfathers and grandchildren as being beneficial to men's mental health. This study investigates the differences in the depressive symptoms and positive affect of 351 grandfathers. Using a cluster analytic technique, participants were categorized as involved, passive, and disengaged based on their frequency of contact, level of commitment, and participation in activities with grandchildren. Comparative analyses indicate that involved grandfathers had fewer depressive symptoms than disengaged grandfathers. Involved grandfathers had significantly higher scores on positive affect than disengaged grandfathers, and passive grandfathers had significantly higher scores on positive affect than disengaged grandfathers. This study provides evidence that grandfather-grandchild relationships influence aging men's mental health. Implications for practitioners working with aging men are discussed.
abstract_id: PUBMED:26061865
Men's Sheds and the experience of depression in older Australian men. Background/aim: Men's Sheds are community spaces where, usually, older men can socialise as they participate in a range of woodwork and other activities. There is currently little research evidence supporting the anecdotally reported mental health and wellbeing benefits of Men's Sheds. This research project investigated how older men with self-reported symptoms of depression experience their participation in Men's Sheds.
Methods: This study included in-depth interviews and administration of the Beck Depression Inventory-II with 12 men from 3 Men's Sheds, triangulated with observation of the different shed environments. Interviews explored how participation in the Men's Shed, living in a regional area, and retirement intersected with experiences of depression. Participants had either self-reported symptoms of depression or a diagnosis of depression.
Results: The findings from this study support the notion that participation in Men's Sheds decreases self-reported symptoms of depression. Beck Depression Inventory-II scores showed that most participants were currently experiencing minimal depression. The Men's Sheds environment promoted a sense of purpose through relationships and in the sharing of skills, new routines, motivation, and enjoyment for its members. The shed encouraged increased physical activity and use of cognitive skills. Finally, participants reported feelings of pride and achievement which had an impact on their sense of self-worth.
Conclusion: Men's Sheds provide an opportunity to promote health and wellbeing among retired men. The shed's activity and social focus offers a way to help men rediscover purpose and self. Further research is required to measure symptoms of depression before and after participation in Men's Sheds.
abstract_id: PUBMED:28070358
Cholesterol and depressive symptoms in older men across time. This study aimed to examine reciprocal relations between cholesterol and depression. We assessed cholesterol and depressive symptoms twice over a 3-year interval, using 842 men from the Veterans Affairs Normative Aging Study (M = 64, standard deviation = 8). Because depressive symptoms were skewed, we used zero-inflated Poisson analyses. Cross-lagged models showed that cholesterol levels at T1 predicted the existence of depressive symptoms at T2, covarying T1 depressive symptoms, age, smoking status, body mass index, and medications. Depressive symptoms at T1 did not predict cholesterol at T2. Low cholesterol levels may be risk factors for development of depressive symptoms in late life.
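A brief sketch of the kind of zero-inflated Poisson regression described above (one cross-lagged arm: T2 symptom counts regressed on T1 cholesterol, adjusting for T1 symptoms and covariates) is given below. The data are simulated and the variable names are hypothetical; the availability of statsmodels' ZeroInflatedPoisson class (statsmodels >= 0.9) is an assumption.

```python
# Illustrative only: one cross-lagged arm as a zero-inflated Poisson model,
# i.e. T2 depressive-symptom counts regressed on T1 cholesterol while adjusting
# for T1 symptoms and covariates. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(1)
n = 842
df = pd.DataFrame({
    "chol_t1": rng.normal(210, 35, n),    # total cholesterol at T1 (mg/dL)
    "symptoms_t1": rng.poisson(3, n),     # depressive-symptom count at T1
    "age": rng.integers(50, 80, n),
    "bmi": rng.normal(27, 4, n),
    "smoker": rng.integers(0, 2, n),
})
# Simulated T2 outcome with an excess of zeros (hence the zero-inflated model).
df["symptoms_t2"] = np.where(rng.random(n) < 0.4, 0, rng.poisson(3, n))

exog = sm.add_constant(df[["chol_t1", "symptoms_t1", "age", "bmi", "smoker"]])
# The inflation (excess-zero) part defaults to an intercept-only logit model.
zip_fit = ZeroInflatedPoisson(df["symptoms_t2"], exog, inflation="logit").fit(
    disp=False, maxiter=200
)
print(zip_fit.summary())
```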
abstract_id: PUBMED:26337662
Relationships between nutritional status, depression and pleasure of eating in aging men and women. Nutritional health is an essential component of quality of life among older adults. The aim of this study was to identify the predictors of nutritional status in order to identify both common and sex specific predictive pathways in an aging population. A questionnaire was administered to 464 people living at home aged 65 years and above. Part of the questionnaire contained questions about nutritional status (MNA), depression (GDS), pleasure of eating and demographic characteristics. Structural equation modeling was used to examine relationships between the variables. For both sexes, results indicate that depression and pleasure of eating are related to nutritional status. In addition, different pathways were found between men and women. In particular, while pleasure of eating is affected by depression among aging women this is not the case for men. The implications of the findings for nutrition communication are discussed.
abstract_id: PUBMED:26398001
Discrimination, mental health, and leukocyte telomere length among African American men. African American men in the US experience disparities across multiple health outcomes. A common mechanism underlying premature declines in health may be accelerated biological aging, as reflected by leukocyte telomere length (LTL). Racial discrimination, a qualitatively unique source of social stress reported by African American men, in tandem with poor mental health, may negatively impact LTL in this population. The current study examined cross-sectional associations between LTL, self-reported racial discrimination, and symptoms of depression and anxiety among 92 African American men 30-50 years of age. LTL was measured in kilobase pairs using quantitative polymerase chain reaction assay. Controlling for sociodemographic factors, greater anxiety symptoms were associated with shorter LTL (b=-0.029, standard error [SE]=0.014; p<0.05). There were no main effects of racial discrimination or depressive symptoms on LTL, but we found evidence for a significant interaction between the two (b=0.011, SE=0.005; p<0.05). Racial discrimination was associated with shorter LTL among those with lower levels of depressive symptoms. Findings from this study highlight the role of social stressors and individual-level psychological factors for physiologic deterioration among African American men. Consistent with research on other populations, greater anxiety may reflect elevated stress associated with shorter LTL. Racial discrimination may represent an additional source of social stress among African American men that has detrimental consequences for cellular aging among those with lower levels of depression.
Answer: The abstracts provided offer insights into the relationship between aging, health, and depressive symptoms, with some focusing specifically on men. The studies suggest that there are indeed differences in how aging and health issues impact depressive symptoms in men and women, as well as within subgroups of men.
For instance, one study found that testosterone replacement therapy (TRT) improved depressive symptoms and health-related quality of life in aging Japanese men with late-onset hypogonadism (LOH) (PUBMED:24533913). Another study highlighted that negative attitudes toward aging were more strongly associated with adverse mental health outcomes among middle-aged and older gay men compared to their heterosexual counterparts (PUBMED:28067548). This suggests that sexual orientation can influence the relationship between aging perceptions and mental health in men.
In the context of sexual minority stress, one study showed that homophobic and racist discrimination increased depressive symptoms among middle-aged and older men who have sex with men (MSM), and resilience played a role in moderating these symptoms (PUBMED:35935592). Another study on gay men linked past depressive symptom trajectories with current stress and depressive symptoms, indicating that life-course experiences and historical stressors can impact current mental health (PUBMED:26071237).
Regarding the role of social engagement, one study found that involved grandfathers had fewer depressive symptoms and higher positive affect compared to disengaged grandfathers (PUBMED:22378711). Similarly, participation in Men's Sheds, which are social spaces for older men, was associated with decreased self-reported symptoms of depression (PUBMED:26061865).
In terms of physiological factors, one study suggested that low cholesterol levels might be a risk factor for the development of depressive symptoms in older men (PUBMED:28070358). Another study found that depression and pleasure of eating were related to nutritional status in both aging men and women, but the pathways differed by sex (PUBMED:26337662).
Lastly, a study on African American men indicated that anxiety symptoms were associated with shorter leukocyte telomere length (LTL), a marker of biological aging, and that racial discrimination in tandem with depressive symptoms could impact LTL (PUBMED:26398001).
In summary, the abstracts suggest that there are sex-specific pathways and factors that influence the relationship between aging, health, and depressive symptoms. These include hormonal changes, sexual orientation, discrimination, social engagement, nutritional status, and biological aging markers. Men may experience unique stressors such as discrimination and sexual minority stress that can exacerbate depressive symptoms, and social engagement through family relationships or community spaces like Men's Sheds can have a positive impact on their mental health. Additionally, physiological factors such as testosterone levels and cholesterol may play a role in the mental health of aging men. For women, while not directly addressed in these abstracts, it is implied that there may be different predictors of nutritional status and pleasure of eating that affect their mental health as they age (PUBMED:26337662). Overall, these findings underscore the importance of considering gender and other demographic factors when examining the interplay between aging, health, and depressive symptoms.
Instruction: Advanced pediatric myelodysplastic syndromes: can immunophenotypic characterization of blast cells be a diagnostic and prognostic tool?
Abstracts:
abstract_id: PUBMED:19061215
Advanced pediatric myelodysplastic syndromes: can immunophenotypic characterization of blast cells be a diagnostic and prognostic tool? Background: The diagnosis of myelodysplastic syndromes (MDS) is mainly based on morphology and cytogenetic analysis. Several efforts to analyze MDS by flow cytometry have been reported in adults. These studies have focused on the identification of abnormalities in the maturation pathway of antigen expression of myelo-monocytic cells, and characterization of blast populations. Therefore, phenotype has been proposed as a diagnostic and prognostic tool for adult MDS. The current article provides data concerning the blast phenotype in pediatric MDS.
Procedure: We evaluated by multiparameter flow cytometry 26 MDS pediatric patients with more than 2% of blast cells at bone marrow morphological examination (17 de novo MDS and 9 secondary MDS) and 145 pediatric de novo acute myeloid leukemia (AML) cases (M3 excluded). As control group, 12 healthy age-matched donors for allogenic bone marrow transplantation (BMD) and 6 regenerating bone marrow samples, collected from children with acute lymphoblastic leukemia (ALL) in remission after induction chemotherapy, were studied.
Results: We identified a blast immunophenotype typically expressed in most MDS cases and a strong correlation between CD7 expression and poor outcome. CD34+ compartment in MDS bone marrow was also analyzed: a significant decrease of B-cell precursors was detected in MDS patients independent of age.
Conclusions: Our data suggest that the blasts' phenotypic features can constitute a diagnostic and prognostic tool for pediatric MDS as well.
abstract_id: PUBMED:12525519
Additional prognostic value of bone marrow histology in patients subclassified according to the International Prognostic Scoring System for myelodysplastic syndromes. Purpose: The most recent and powerful prognostic instrument established for myelodysplastic syndromes (MDS) is the International Prognostic Scoring System (IPSS), which is primarily based on medullary blast cell count, number of cytopenias, and cytogenetics. Although this prognostic system has substantial predictive power in MDS, further refinement is necessary, especially as far as lower-risk patients are concerned. Histologic parameters, which have long proved to be associated with outcome, are promising candidates to improve the prognostic accuracy of the IPSS. Therefore, we assessed the additional predictive power of the presence of abnormally localized immature precursors (ALIPs) and CD34 immunoreactivity in bone marrow (BM) biopsies of MDS patients.
Patients And Methods: Cytogenetic, morphologic, and clinical data of 184 MDS patients, all from a single institution, were collected, with special emphasis on the determinants of the IPSS score. BM biopsies of 173 patients were analyzed for the presence of ALIP, and CD34 immunoreactivity was assessable in 119 patients. Forty-nine patients received intensive therapy.
Results: The presence of ALIP and CD34 immunoreactivity significantly improved the prognostic value of the IPSS, with respect to overall as well as leukemia-free survival, in particular within the lower-risk categories. In contrast to the IPSS, both histologic parameters also were predictive of outcome within the group of intensively treated MDS patients.
Conclusion: Our data confirm the importance of histopathologic evaluation in MDS and indicate that determining the presence of ALIP and an increase in CD34 immunostaining in addition to the IPSS score could lead to an improved prognostic subcategorization of MDS patients.
abstract_id: PUBMED:1732678
Prognostic factors in myelodysplastic syndromes. The extreme variability in prognosis among patients with myelodysplastic syndromes (MDS) complicates decision-making regarding their therapy. Several studies carried out in recent years have recognized the prognostic value of some clinical and biological characteristics. The percentage of blast cells in bone marrow, cytopenias, age and chromosome abnormalities are the most relevant factors affecting outcome. More importantly, some of these studies have resulted in the development of prognostic regression formulas and scoring systems for accurately estimating the individual prognosis of patients. The major aim of this review is to offer the clinician useful tools for treating MDS patients on a risk-fitted strategy.
abstract_id: PUBMED:12111649
The myelodysplastic syndromes: analysis of prognostic factors and comparison of prognostic systems in 128 Chinese patients from a single institution. Introduction: Although retrospective analysis were frequently undertaken, and many prognostic systems for myelodysplastic syndromes (MDS) have been proposed worldwide, few such studies have been performed and the effectiveness of different scoring systems have not yet been verified in independent patient populations in China. The aim of this single center study was to evaluate the prognostic factors and compare the prognostic scoring systems in Chinese patients with MDS.
Materials And Methods: One hundred and twenty-eight patients diagnosed as primary MDS in our Institution were studied retrospectively to identify significant prognostic factors and to assess the predictive value of 11 previously described prognostic systems, including French-American-British (FAB) classification, World Health Organization (WHO) classification, Mufti, Sanz, Morra, Aul, Oguma, Toyama, Morel and international prognostic scoring system (IPSS).
Results: The median age of the patients was 50 years (range 13-82). The 2- and 5-year survival rates were 55.22+/-4.90% and 26.09+/-6.36%, respectively, with a median survival of 31 months (range 1-127 months). Fifty patients (39.1%) had progressed to acute leukemia (AL), with a median time of 8 months (range 1-43 months). Major independent variables indicated by multivariate analysis were the percentage of bone marrow (BM) blast cells and complex karyotype aberrations for survival (P=0.042 and 0.042, respectively) and only the percentage of BM blast cells for AL transformation (P=0.023). All the systems except the Mufti score successfully discriminated risk groups for both survival and AL evolution, especially in the high-risk group, where these outcomes ranged from 10 to 20 months and from 4 to 7 months, respectively. The FAB and WHO classifications, as well as the Sanz, Oguma, Morel and IPSS systems, yielded lower P values (P<0.0001) than the remaining scoring systems.
Conclusion: The patients in our study were younger than these of the Western population, whereas the survival and AL transformation ratio were comparable to these previous studies. The BM blast proportion and complex chromosomal defects were highly significant for predicting outcome in MDS patients. Most investigated systems effectively stratified patients into groups with different life expectancies and identified a subset of patients with poor clinical outcome. The FAB, WHO classification, as well as Sanz, Oguma, Morel and IPSS scoring systems were more applicable for predicting survival and leukemia progression.
abstract_id: PUBMED:24507815
Chronic myelomonocytic leukemia: myelodysplastic or myeloproliferative? Chronic myelomonocytic leukemia (CMML) is a clonal disease of the hematopoietic stem cell that provokes a stable increase in peripheral blood monocyte count. The World Health Organisation classification appropriately underlines that the disease combines dysplastic and proliferative features. The percentage of blast cells in the blood and bone marrow distinguishes CMML-1 from CMML-2. The disease is usually diagnosed after the age of 50, with a strong male predominance. Inconstant and non-specific cytogenetic aberrations have a negative prognostic impact. Recurrent gene mutations affect mainly the TET2, SRSF2, and ASXL1 genes. Median survival is 3 years, with patients dying from progression to AML (20-30%) or from cytopenias. ASXL1 is the only gene whose mutation predicts outcome and can be included within a prognostic score. Allogeneic stem cell transplantation is possibly curative but rarely feasible. Hydroxyurea, which is the conventional cytoreductive agent, is used in myeloproliferative forms, and demethylating agents could be efficient in the most aggressive forms of the disease.
abstract_id: PUBMED:12931223
Is FISH a relevant prognostic tool in myelodysplastic syndromes with a normal chromosome pattern on conventional cytogenetics? A study on 57 patients. Conventional cytogenetics (CC) at clinical diagnosis shows a normal karyotype in 40-60% of de novo myelodysplastic syndromes (MDSs). Fluorescence in situ hybridization (FISH) might detect occult aberrations in these patients. Therefore, we have used FISH to check 57 MDS patients who were karyotypically normal on CC. At clinical diagnosis, FISH revealed a clonal abnormality in 18-28% of interphase cells from nine patients, five of whom also presented the same defect on metaphase FISH. In five out of nine patients, the occult defect effected a change in the international prognostic scoring system (IPSS). An abnormal FISH pattern was significantly correlated with marrow blast cell percentage (P<10(-3)) and IPSS (P<10(-3)). Patients with an occult abnormality showed an overall survival and event-free survival significantly inferior in comparison to those of patients with normal FISH (P<10(-3), P<10(-3)). Death and AML progression were 15- and eight-fold more frequent in FISH abnormal patients. In conclusion, occult defects (1) are revealed in about 15% of CC normal MDS patients, (2) are overlooked by CC either because of the poor quality of metaphases or their submicroscopic nature, (3) are clinically relevant as they may cause a change in the IPSS category and may identify a fraction of CC normal patients with an unfavorable clinical outcome.
abstract_id: PUBMED:21139142
Prognostic markers for myeloid neoplasms: a comparative review of the literature and goals for future investigation. Myeloid neoplasms include cancers associated with both rapid (acute myeloid leukemias) and gradual (myelodysplastic syndromes and myeloproliferative neoplasms) disease progression. Percentage of blast cells in marrow is used to separate acute (rapid) from chronic (gradual) and is the most consistently applied prognostic marker in veterinary medicine. However, since there is marked variation in tumor progression within groups, there is a need for more complex schemes to stratify animals into specific risk groups. In people with acute myeloid leukemia (AML), pretreatment karyotyping and molecular genetic analysis have greater utility as prognostic markers than morphologic and immunologic phenotypes. Karyotyping is not available as a prognostic marker for AML in dogs and cats, but progress in molecular genetics has created optimism about the eventual ability of veterinarians to discern conditions potentially responsive to medical intervention. In people with myelodysplastic syndromes (MDS), detailed prognostic scoring systems have been devised that use various combinations of blast cell percentage, hematocrit, platelet counts, unilineal versus multilineal cytopenias and dysplasia, karyotype, gender, age, immunophenotype, transfusion dependence, and colony-forming assays. Predictors of outcome for animals with MDS have been limited to blast cell percentage, anemia versus multilineal cytopenias, and morphologic phenotype. Prognostic markers for myeloproliferative neoplasms (eg, polycythemia vera, essential thrombocythemia) include clinical and hematological factors and in people also include cytogenetics and molecular genetics. Validation of prognostic markers for myeloid neoplasms in animals has been thwarted by the lack of a large case series that requires cooperation across institutions and veterinary specialties. Future progress requires overcoming these barriers.
abstract_id: PUBMED:8251411
The prognostic significance of Auer rods in myelodysplasia. The category of refractory anaemia with excess blasts in transformation (RAEBt) of the French-American-British (FAB) classification system comprises a heterogeneous group of patients: those with any combination of 5% or more blood blast cells, more than 20% but no more than 30% marrow blast cells, or the presence of Auer rods and 30% or less marrow blast cells. To determine the prognostic significance of Auer rods in RAEBt, we classified the 208 patients with RAEBt seen between 1973 and 1992 as (1) those having RAEBt solely on the basis of Auer rods (RAEBta, n = 29), (2) those meeting blood or marrow blast criteria for RAEBt and also having Auer rods (RAEBtpos, n = 40) or (3) those meeting blood or marrow blast criteria for RAEBt without having Auer rods (RAEBtneg, n = 139). The RAEBta group had a higher survival probability than either of the other two groups. Within RAEBta, those patients who, without Auer rods, would be considered RAEB by the FAB system (n = 19) had a higher probability of survival than patients with RAEB as conventionally defined. Furthermore, patients with RAEBtpos were more likely to live longer than those with RAEBtneg. The RAEBta, RAEBtpos and RAEBtneg groups were similar with regard to the usual haematologic parameters. However, patients with Auer rods were more likely to have a normal karyotype and less likely to have prognostically unfavourable cytogenetic abnormalities. When analysis was performed within cytogenetic groups, the favourable prognostic impact of Auer rods was still evident. Similarly, the favourable prognostic significance of Auer rods was discernible both among patients who did not receive intensive therapy and those who received induction chemotherapy. The complete remission rate in Auer rod positive patients was 77%, compared to 27% in those without Auer rods. There were no differences in remission duration. Our results suggest that: (1) patients with Auer rods without blood or bone marrow blast criteria for RAEBt should not be grouped with those patients with such criteria, and (2) patients with Auer rods and other criteria for RAEBt have a higher complete remission rate following induction therapy of the type frequently reserved for patients with acute myeloid leukaemia.
abstract_id: PUBMED:32337851
Identification of a metabolic gene panel to predict the prognosis of myelodysplastic syndrome. Myelodysplastic syndrome (MDS) is a clonal disease characterized by ineffective haematopoiesis and potential progression into acute myeloid leukaemia (AML). At present, the risk stratification and prognosis of MDS need to be further optimized. A prognostic model was constructed by least absolute shrinkage and selection operator (LASSO) regression analysis for MDS patients based on the identified metabolic gene panel in a training cohort, followed by external validation in an independent cohort. Patients with a lower risk score had a better prognosis than those with a higher risk score. The constructed model was verified as an independent prognostic factor for MDS patients, with hazard ratios of 3.721 (1.814-7.630) and 2.047 (1.013-4.138) in the training cohort and validation cohort, respectively. The AUC of 3-year overall survival was 0.846 and 0.743 in the training cohort and validation cohort, respectively. The high-risk score was significantly related to other clinical prognostic characteristics, including higher bone marrow blast cells and lower absolute neutrophil count. Moreover, gene set enrichment analyses (GSEA) showed several significantly enriched pathways, with potential implications for the pathogenesis. In this study, we identified a novel stable metabolic panel, which might not only reveal the dysregulated metabolic microenvironment but also be used to predict the prognosis of MDS.
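As a rough illustration of the modelling step described above, the sketch below fits an L1-penalised (LASSO-style) model to simulated gene-expression data and derives a risk score. Note the assumptions: the data are synthetic, the outcome is simplified to a binary 3-year-survival label rather than the censored survival times a Cox-LASSO would use, and scikit-learn is assumed to be available.

```python
# Illustrative only: a LASSO-style gene-panel risk score on simulated data.
# An L1-penalised logistic regression on a binary survival label stands in
# for the Cox-LASSO survival model used in the study above.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_patients, n_genes = 200, 500
X = rng.normal(size=(n_patients, n_genes))               # expression of metabolic genes
true_coefs = np.zeros(n_genes)
true_coefs[:8] = rng.normal(1.0, 0.3, 8)                 # only a small panel is informative
risk = X @ true_coefs
y = (risk + rng.normal(0, 1, n_patients) > 0).astype(int)  # 1 = death within 3 years (toy label)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# The L1 penalty drives most gene coefficients to exactly zero, leaving a sparse panel.
lasso_logit = LogisticRegressionCV(
    penalty="l1", solver="liblinear", Cs=10, cv=5, max_iter=5000
).fit(X_train, y_train)

selected = np.flatnonzero(lasso_logit.coef_[0])
risk_score = lasso_logit.decision_function(X_test)        # linear predictor used as risk score
print(f"genes retained in the panel: {selected.size}")
print(f"validation AUC: {roc_auc_score(y_test, risk_score):.3f}")
```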
abstract_id: PUBMED:1283545
The prognostic significance of the CD34 antigen in acute myeloid leukaemia. The prognostic value of CD34 expression on leukaemic blast cells was assessed in 38 patients with acute myeloid leukaemia. Nineteen patients had more than 10% CD34 positive blast cells. Median survival for the CD34 positive patients was 125 days and for the CD34 negative patients the median survival has not yet been reached at day 575 (p = 0.06). Of those patients who received intensive chemotherapy, CD34 positive patients (n = 13) had a median survival of 150 days while for CD34 negative patients (n = 14) the median survival has not yet been reached (p = 0.01). Adjustment for age and pre-existing myelodysplastic syndrome did not affect the correlation of CD34 positivity with survival (p = 0.02). Over the period of observation (median 10 months, range 2-19 months) the relative risk of death was 5 times greater for the CD34 positive patients. This study suggests that CD34 expression is an adverse prognostic marker, independent of age and pre-existing myelodysplasia.
Answer: Yes, immunophenotypic characterization of blast cells can be a diagnostic and prognostic tool in advanced pediatric myelodysplastic syndromes (MDS). The study by PUBMED:19061215 evaluated 26 pediatric MDS patients and found that a specific blast immunophenotype was typically expressed in most MDS cases. Additionally, there was a strong correlation between CD7 expression and poor outcome, suggesting that the phenotypic features of blasts can serve as both diagnostic and prognostic indicators in pediatric MDS. The study also noted a significant decrease in B-cell precursors in the CD34+ compartment of MDS bone marrow, which was independent of age. This supports the idea that flow cytometry and the analysis of blast cell immunophenotype can provide valuable information for the diagnosis and prognosis of pediatric MDS. |
Instruction: Does a myelomeningocele sac compared to no sac result in decreased postnatal leg function following maternal fetal surgery for spina bifida aperta?
Abstracts:
abstract_id: PUBMED:17556822
Does a myelomeningocele sac compared to no sac result in decreased postnatal leg function following maternal fetal surgery for spina bifida aperta? Objective: A fetus with large sac S1 myelomeningocele (MMC) but bilateral talipes prompted the question, 'Does the presence or size of an MMC sac affect postnatal leg function?'
Study Design: An MMC database with prenatal, birth, and a minimum of 1-year follow-up evaluation was reviewed. All fetuses had in-utero MMC repair at 20 + 0 to 25 + 6 weeks at a single institution. Fifty-four fetuses had prenatal evaluation, with 48 children completing a birth and a 1-year evaluation of leg function.
Results: An MMC sac was present in 38/54 (70%) of fetuses evaluated in-utero and had been present in 35/48 (73%) of children evaluated at 1 year of age. Although leg function evaluated at 1 year was better than expected in the 'no sac' group (p = 0.059), this did not reach statistical significance.
Conclusion: The presence of an MMC sac may increase postnatal lower limb morbidity.
abstract_id: PUBMED:37970655
Revisiting MOMS criteria for prenatal repair of spina bifida: upper gestational-age limit should be raised and assessment of prenatal motor function rather than anatomical level improves prediction of postnatal function. Objectives: To determine if the lower-extremity neurological motor function level in fetuses with open spina bifida deteriorates within the 4-week interval between a first prenatal motor assessment at around 22 weeks of gestation and a second evaluation, prior to 'late' prenatal surgery, defined as surgery at 26-28 weeks and, in certain situations, up to 30 weeks, and to assess the association between prenatal presurgical motor-function level, anatomical level of the lesion and postnatal motor-function level.
Methods: This was a two-center cohort study of 94 singleton fetuses with open spina bifida which underwent percutaneous repair using the skin-over-biocellulose for antenatal fetoscopic repair (SAFER) technique between December 2016 and January 2022. All women underwent two prenatal systematic ultrasound evaluations, approximately 4 weeks apart, with the second one being performed less than 1 week before surgery, and one postnatal evaluation via physical examination within 2 months of birth. Motor-function classification was from spinal level T12 to S1, according to key muscle function. Each leg was analyzed separately; in case of discrepancy between the two legs, the worst motor-function level was considered for analysis. Motor-function-level evaluations were compared with each other and with the anatomical level as observed on ultrasound. Independent predictors of a postnatal reduction in motor-function level were assessed using a logistic regression model.
Results: Prenatal motor-function level was assessed at a median gestational age of 22.5 (interquartile range (IQR), 20.7-24.3) and 26.7 (IQR, 25.4-27.3) weeks, with a median interval of 4.0 (IQR, 2.4-6.0) weeks. The median gestational age at surgery was 27.0 (IQR, 25.9-27.6) weeks and the postnatal examination was at median age of 0.8 (IQR, 0.3-5.4) months. There was no significant difference in motor-function level between the two prenatal evaluations (P = 0.861). We therefore decided to use the second prenatal evaluation for comparison with postnatal motor function and anatomical level. Overall, prenatal and postnatal motor function evaluations were significantly different from the anatomical level (preoperative assessment, P = 0.0015; postnatal assessment, P = 0.0333). Comparing prenatal with postnatal motor-function level, we found that 87.2% of babies had similar or improved motor function compared with that prior to prenatal surgery. On logistic regression analysis, lower anatomical level of defect and greater difference between anatomical level and prenatal motor-function level were identified as independent predictors of postnatal motor function (odds ratio, 0.237 (95% CI, 0.095-0.588) (P = 0.002) and 3.44 (95% CI, 1.738-6.813) (P < 0.001), respectively).
Conclusions: During a 4-week interval between first ultrasound evaluation and late fetal surgical repair of open spina bifida, motor function does not change significantly, suggesting that late repair, ≥ 26 weeks, does not impact negatively on motor-function outcome. Compared with the anatomical level of the lesion, preoperative neurological motor-function assessment via ultrasound is more predictive of postnatal motor function, and should be included in preoperative counseling. © 2023 The Authors. Ultrasound in Obstetrics & Gynecology published by John Wiley & Sons Ltd on behalf of International Society of Ultrasound in Obstetrics and Gynecology.
abstract_id: PUBMED:28536839
Maternal-fetal surgery for myelomeningocele: some thoughts on ethical, legal, and psychological issues in a Western European situation. Background: The results of the Management of Myelomeningocele Study (MOMS) randomized controlled trial have demonstrated that maternal-fetal surgery (MFS) for myelomeningocele (MMC) compared to postnatal MMC repair has clear neurological benefits for the child at 12 and 30 months of age. Level I evidence nevertheless does not provide answers to many questions in this delicate field. Since the beginning of 2012, our fetal center has been offering MFS for spina bifida aperta (SBA) to patients from different European and non-European countries, in a societal context where termination of pregnancy is the option chosen by most patients when being informed of this diagnosis.
Methods: We aim to explore in this text some of the ethical, legal, and psychological issues that we have encountered.
Results: For many of these questions, we do not have definite answers. A pregnant patient diagnosed with an MMC fetus is a vulnerable subject. She needs to be referred to a highly specialized center with sufficient expertise in diagnosis and in all therapeutic options. Objective but compassionate counseling is of paramount importance. It is required that a multidisciplinary professional team obtain full voluntary consent from the mother after providing appropriate information, including the diagnosis; the short-, medium-, and long-term prognosis; and the benefits and harms of fetal surgery.
Conclusion: The latter should be offered with full respect for maternal choice and for her individual assessment and perception of potential risks, taking into consideration the legislation of both the fetal center's country and the parents' country.
abstract_id: PUBMED:24582882
Maternal-fetal surgery for spina bifida: future perspectives. Open spina bifida or myelomeningocele (MMC) is a frequent congenital abnormality (450 cases per year in France) associated with high morbidity. Immediate postnatal surgery is aimed at covering the exposed spinal cord, preventing infection, and treating hydrocephalus with a ventricular shunt. MMC surgical techniques have seen no major progress in the past decades. Numerous experimental and clinical studies have demonstrated the hypothetical "two-hit" pathogenesis of MMC: a primary embryonic congenital abnormality of the nervous system due to a failure in the closure of the developing neural tube, followed by secondary damage to the spinal cord and nerves caused by long-term exposure to amniotic fluid. This malformation frequently leads to cranial consequences, i.e. hydrocephalus and Chiari II malformation, due to leakage of cerebrospinal fluid. After 30 years of research, a randomized trial published in February 2011 proved open maternal-fetal surgery (OMFS) for MMC to be a real therapeutic option. Comparing prenatal to postnatal surgery, it confirmed better outcomes for MMC children after a follow-up of 2.5 years: enhancement of lower limb motor function, and a decrease in the degree of hindbrain herniation associated with the Chiari II malformation and in the need for shunting. At 5 years of age, MMC children operated on prenatally seem to have better neurocognitive, motor and bladder-sphincter outcomes than those operated on postnatally. However, risks of OMFS exist: prematurity for the fetus and a double hysterotomy at an approximately 3-month interval for the mother. Nowadays, it seems crucial to inform parents of MMC patients about OMFS and to offer it in France. Future research will improve our understanding of MMC pathophysiology and evaluate long-term outcomes of OMFS. Tomorrow's prenatal surgery will be less invasive and performed earlier, using endoscopic, robotic or percutaneous techniques. Beforehand, the Achilles' heel of maternal-fetal surgery, i.e. preterm premature rupture of membranes, preterm labor and preterm birth, must be addressed.
abstract_id: PUBMED:34523765
Correlation of fetal ventricular size and need for postnatal cerebrospinal fluid diversion surgery in open spina bifida. Objectives: Open spina bifida is a common cause of hydrocephalus in the postnatal period. In-utero closure of the fetal spinal defect decreases the need for postnatal cerebrospinal fluid (CSF) diversion surgery. Good prenatal predictors of the need for postnatal CSF diversion surgery are currently lacking. In this study, we aimed to assess the association of fetal ventriculomegaly and its progression over the course of pregnancy with the rate of postnatal hydrocephalus requiring intervention.
Methods: In this retrospective study, fetuses with a prenatal diagnosis of open spina bifida were assessed longitudinally. Ventricular diameter, as well as other potential predictors of the need for postnatal CSF diversion surgery, were compared between fetuses undergoing prenatal closure and those undergoing postnatal repair.
Results: The diameter of the lateral ventricle increased significantly throughout gestation in both groups, but there was no difference in maximum ventricular diameter at first or last assessment between fetuses undergoing prenatal closure and those undergoing postnatal repair. There was no significant difference in the rate of progression of ventriculomegaly between the two groups, with a mean progression rate of 0.83 ± 0.5 mm/week in the prenatal-repair group and 0.6 ± 0.6 mm/week in the postnatal-repair group (P = 0.098). Fetal repair of open spina bifida was associated with a lower rate of postnatal CSF diversion surgery (P < 0.001). In all subjects, regardless of whether they had prenatal or postnatal surgery, the severity of ventriculomegaly at first and last assessments was associated independently with the need for postnatal CSF diversion surgery (P = 0.005 and P = 0.001, respectively), with a greater need for surgery in fetuses with larger ventricular size, even after controlling for gestational age at assessment.
Conclusions: In fetuses with open spina bifida, fetal ventricular size increases regardless of whether spina bifida closure is performed prenatally or postnatally, but the need for CSF diversion surgery is significantly lower in those undergoing prenatal repair. Ventriculomegaly is associated independently with the need for postnatal CSF diversion in fetuses with open spina bifida, irrespective of timing of closure. © 2021 International Society of Ultrasound in Obstetrics and Gynecology.
abstract_id: PUBMED:15286226
Neonatal loss of motor function in human spina bifida aperta. Objective: In neonates with spina bifida aperta (SBA), leg movements innervated by spinal segments located caudal to the meningomyelocele are transiently present. This study in neonates with SBA aimed to determine whether the presence of leg movements indicates functional integrity of neuronal innervation and whether these leg movements disappear as a result of dysfunction of upper motor neurons (axons originating cranial to the meningomyelocele) and/or of lower motor neurons (located caudal to the meningomyelocele).
Methods: Leg movements were investigated in neonates with SBA at postnatal day 1 (n = 18) and day 7 (n = 10). Upper and lower motor neuron dysfunction was assessed by neurologic examination (n = 18; disinhibition or inhibition of reflexes, respectively) and by electromyography (n = 12; absence or presence of denervation potentials, respectively).
Results: Movements, related to spinal segments caudal to the meningomyelocele, were present in all neonates at postnatal day 1. At day 1, leg movements were associated with signs of both upper (10 of 18) and lower (17 of 18) motor neuron dysfunction caudal to the meningomyelocele. In 7 of 10 neonates restudied after the first postnatal week, leg movements had disappeared. The absence of leg movements coincided with loss of relevant reflexes, which had been present at day 1, indicating progression of lower motor neuron dysfunction.
Conclusions: We conclude that the presence of neonatal leg movements does not indicate integrity of functional lower motor neuron innervation by spinal segments caudal to the meningomyelocele. Present observations could explain why fetal surgery at the level of the meningomyelocele does not prevent loss of leg movements.
abstract_id: PUBMED:19540177
Fetal myelomeningocele: natural history, pathophysiology, and in-utero intervention. Myelomeningocele (MMC) is a common birth defect that is associated with significant lifelong morbidity. Little progress has been made in the postnatal surgical management of the child with spina bifida. Postnatal surgery is aimed at covering the exposed spinal cord, preventing infection, and treating hydrocephalus with a ventricular shunt. In-utero repair of open spina bifida is now performed in selected patients and presents an additional therapeutic alternative for expectant mothers carrying a fetus with MMC. It is estimated that about 400 fetal operations have now been performed for MMC worldwide. Despite this large experience, the technique remains of unproven benefit. Preliminary results suggest that fetal surgery results in reversal of hindbrain herniation (the Chiari II malformation), a decrease in shunt-dependent hydrocephalus, and possibly improvement in leg function, but these findings might be explained by selection bias and changing management indications. A randomized prospective trial (the MOMS trial) is currently being conducted by three centers in the USA, and is estimated to be completed in 2010. Further research is needed to better understand the pathophysiology of MMC, the ideal timing and technique of repair, and the long-term impact of in-utero intervention.
abstract_id: PUBMED:32003478
Maternal low body mass index is a risk factor for fetal ductal constriction following indomethacin use among women undergoing fetal repair of spina bifida. Objectives: The objectives were to determine the prevalence of, and to identify risk factors associated with, constriction of the fetal ductus arteriosus (DA) following perioperative indomethacin use for fetal myelomeningocele (MMC) repair. Study Design: A retrospective chart review included 100 consecutive fetuses who underwent fetal MMC repair between 2011 and 2018. All patients had fetal echocardiography (FE) on postoperative days (POD) #1 and #2 to detect constriction of the DA. All patients received indomethacin for tocolysis using a standardized protocol. Multivariate regression analysis was carried out to identify predictors of fetal ductal constriction.
Results: Eighty patients met our study eligibility criteria. Median gestational age at time of surgery was 25 (24-25) weeks. Constriction of the DA was detected in 14 fetuses (17.5%). In five fetuses, this was observed on POD# 1, in seven on POD# 2, and in two on both days. The only independent risk factor for predicting DA constriction was maternal body mass index (BMI) <25 kg/m2 (P = .002).
Conclusion: Indomethacin therapy following fetal MMC surgery requires careful daily FE surveillance. The association of DA constriction and low BMI suggests that BMI-based dosing of indomethacin may be recommended for perioperative tocolysis in fetal MMC surgery.
abstract_id: PUBMED:22126123
Fetal endoscopic myelomeningocele closure preserves segmental neurological function. Aim: Our aim was to compare the effect of prenatal endoscopic versus postnatal myelomeningocele closure (fetally operated spina bifida aperta [fSBA] versus neonatally operated spina bifida aperta [nSBA]) on segmental neurological leg condition.
Method: Between 2003 and 2009, the fetal surgical team (Department of Obstetrics, University of Bonn, Germany) performed 19 fetal endoscopic procedures. Three procedures resulted in fetal death, three procedures were interrupted by iatrogenic hemorrhages and 13 procedures were successful. We matched each successfully treated fSBA infant with another nSBA infant of the same age and level of lesion, resulting in 13 matched pairs (mean age 14 mo; SD 16 mo; f/m=1.6; female-16, male-10). Matched fSBA and nSBA pairs were compared in terms of segmental neurological function and leg muscle ultrasound density (MUD). We also determined intraindividual difference in MUD (dMUD) between myotomes caudal and cranial to the myelomeningocele (reflecting neuromuscular damage by the myelomeningocele) and compared dMUD between fSBA and nSBA infants. Finally, we correlated dMUD with segmental neurological function.
Results: We found that, on average, the fSBA group were born at a lower gestational age than the nSBA group (median 32 wks [range 25-34 wks] vs 39 wks [34-41 wks]; p=0.001) and experienced more complications (chorioamnionitis, premature rupture of the amniotic membranes, oligohydramnios, and infant respiratory distress syndrome necessitating intermittent positive-pressure ventilation). Neurological function was better preserved after fSBA than after nSBA (median motor and sensory gain of two segments; better preserved knee-jerk [p=0.006] and anal [p=0.032] reflexes). The dMUD was smaller in fSBA than in nSBA infants (mean difference 24, 95% confidence interval [CI] 15-33; p<0.05), which was associated with better preserved segmental muscle function.
Interpretation: Fetal endoscopic surgery is associated with spinal segmental neuroprotection, but it results in more complications. Before considering clinical implementation of fetal endoscopic myelomeningocele closure as standard care, the frequency of complications should be appropriately reduced and results assessed in larger groups over a longer period of time.
abstract_id: PUBMED:26138563
Percutaneous minimally invasive fetoscopic surgery for spina bifida aperta. Part III: neurosurgical intervention in the first postnatal year. Objective: To evaluate the need for postnatal neurosurgical intervention after fetoscopic patch coverage of spina bifida aperta (SBA).
Methods: This was a retrospective analysis of a cohort of 71 fetuses which underwent minimally invasive fetoscopic patch coverage of SBA between 21 + 0 and 29 + 1 weeks of gestation. Postnatal neurosurgical procedures were classified into two types: re-coverage of the SBA within the first 3 months following birth, and shunt placement as treatment of associated hydrocephalus within the first year.
Results: Location of the SBA was lumbosacral in 59 cases, lumbar in seven, thoracic in three and sacral in two. In total, 20/71 (28%) patients underwent early postnatal neurosurgical intervention by means of re-coverage of the SBA. This was performed because of cerebrospinal fluid leakage in seven (35%), adhesions with functional deterioration in three (15%), incomplete coverage in five (25%) and skin defect in five (25%) cases. Ventriculoperitoneal shunt placement within 1 year was required in 32 (45%) cases and was preceded by ventriculostomy in two. Three (4%) infants needed Chiari decompression surgery in the first 12 months following birth, because of syringomyelia or gait disturbance.
Conclusions: Fetoscopic patch coverage of SBA may require postnatal re-coverage in some cases. In most cases, conservative wound treatment shows good results without requiring neurosurgical intervention. The low 1-year shunt rate is comparable to that reported in the Management of Myelomeningocele Study and lower than published rates for patients with postnatal-only coverage of SBA.
Answer: The presence of a myelomeningocele (MMC) sac may be associated with increased postnatal lower limb morbidity, as suggested by a study that found leg function evaluated at 1 year was better than expected in the 'no sac' group, although this did not reach statistical significance (PUBMED:17556822). This indicates that while there is a trend towards better leg function in the absence of an MMC sac, the evidence is not strong enough to conclusively state that a sac leads to decreased postnatal leg function. Further research would be needed to establish a definitive relationship between the presence of an MMC sac and postnatal leg function outcomes. |
Instruction: Does race influence age of diagnosis for children with developmental delay?
Abstracts:
abstract_id: PUBMED:21122725
Does race influence age of diagnosis for children with developmental delay? Background: Early identification of developmental delay is important for ensuring that children receive the early intervention services they need. Racial disparities exist for a number of childhood conditions, but it is not known whether there are racial disparities in the age of diagnosis with developmental delay.
Objective/hypothesis: This study aimed to determine the mean age of diagnosis with developmental delay for children insured by South Carolina Medicaid. We hypothesized that African American children would be diagnosed later than white children.
Methods: A retrospective cohort study design explored South Carolina Medicaid claim records to determine the age when 5358 children with developmental delay (DD) were first diagnosed and whether there were racial disparities in age of diagnosis.
Results: The mean age at diagnosis was 4.08 years for African American children and 4.27 years for white children. For children diagnosed with DD and mental retardation, the average age of first diagnosis was 2.6 years, and for children with DD plus cerebral palsy, the average age was 2.1 years. African American race was significantly associated with younger diagnosis with DD in a multivariable model, but the overall model explained little of the variation in age at diagnosis.
Conclusions: There were no clinically significant racial differences in the mean age of diagnosis with developmental delay. However, in general the age of diagnosis was undesirably late for both groups. Additional efforts are needed to ensure that children with DD, living in South Carolina, are identified near the beginning of early intervention services.
abstract_id: PUBMED:36606184
Age of Diagnosis and Demographic Factors Associated with Autism Spectrum Disorders in Chinese Children: A Multi-Center Survey. Purpose: The present study investigated the age of diagnosis, treatment and demographic factors of Chinese children with autism spectrum disorders (ASD), to provide a scientific basis for the early detection, diagnosis, and intervention of ASD.
Patients And Methods: A total of 1500 ASD children aged 2-7 years old from 13 cities in China were administered questionnaires to examine their diagnosis, treatment, and basic family information. The Childhood Autism Rating Scale (CARS) was used to measure the symptoms and severity of ASD children, and the Children Neuropsychological and Behavior Scale-Revision 2016 (CNBS-R2016) was utilized to measure neurodevelopmental levels of ASD children.
Results: We found that for children with ASD, the median (p25, p75) age for the initial detection of social behavioral developmental delay was 24 (18, 30) months, while the age for the initial diagnosis was 29 (24, 36) months and the age for the beginning of intervention was 33 (27, 42) months. Multiple linear regression (MLR) analysis suggested that in children with ASD whose parents were divorced, separated, or widowed, or whose mothers were engaged in physical work, the initial detection of social behavioral developmental delay happened later. For the children with ASD who lived in urban areas, had higher levels of ASD symptom severity, or whose parents were not divorced or separated, the age for the initial diagnosis was earlier. For the children with ASD who lived in urban areas or whose mothers had received a higher level of education, the age for the beginning of intervention was earlier, while for those with ASD whose mothers were engaged in physical work, the age for the beginning of training was later.
Conclusion: It is recommended to actively carry out health education of ASD and strengthen the support for ASD families to enhance their rehabilitation level.
abstract_id: PUBMED:17261988
Diagnosis of severe developmental disorders in children under three years of age. Background: Autism, intellectual disability, and specific language impairment (SLI) constitute three important forms of developmental disability that are often mistaken for each other, especially in very young children (under age 4). Diagnostic problems are caused by the fact that a fundamental problem in cognition, language, or behavior has secondary effects on the remaining areas, which makes it difficult to separate cause from effect. A wrong or absent diagnosis can be a major hindrance in providing properly targeted therapy for developmentally disabled children.
Material/methods: From a population of 667 children referred to a specialized outpatient clinic for developmentally disabled children, we identified 35 children in whom the fundamental diagnosis of autism, intellectual disability, or SLI was unambiguous, and then analyzed these children's scores on 7 subtests from the Munich Functional Developmental Diagnosis, in order to identify specific features of each of the three syndromes.
Results: The most reliable differentiating factor in our research group proved to be the MFDD subtest for self-reliance. A model was constructed to assist in analyzing the complex interactions of symptoms, which frequently overlap.
Conclusions: Cognitive and communicative limitations resulting from underlying perceptual dysfunctions can lead to inappropriate adaptive behavior in children with developmental disorders, such as autism, intellectual disability, and specific language impairment. Each of these syndromes has a specific profile in respect to measures of cognitive function, social skills, and verbal communication.
abstract_id: PUBMED:17683451
Variability in outcome for children with an ASD diagnosis at age 2. Background: Few studies have examined the variability in outcomes of children diagnosed with autism spectrum disorder (ASD) at age 2. Research is needed to understand the children whose symptoms - or diagnoses - change over time. The objectives of this study were to examine the behavioral and diagnostic outcomes of a carefully defined sample of 2-year-old children with ASD, and to identify child and environmental factors that contribute to variability in outcomes at age 4.
Methods: Forty-eight children diagnosed with autism or pervasive developmental disorder not otherwise specified (PDDNOS) at age 2 were followed to age 4. Diagnostic measures included the Autism Diagnostic Observation Schedule - Generic (ADOS-G) and clinical diagnosis at ages 2 and 4, and the ADI-R at age 4.
Results: Diagnostic stability for an ASD diagnosis (autism or PDDNOS) was 63%, and for an autism diagnosis was 68%. Children who failed to meet diagnostic criteria for ASD at follow-up were more likely to: 1) be 30 months or younger at initial evaluation; 2) have milder symptoms of autism, particularly in the social domain; and 3) have higher cognitive scores at age 2. No differences between children with stable and unstable diagnoses were found for amount of intervention services received. Among the children with unstable diagnoses, all but one continued to have developmental disorders, most commonly in the area of language.
Conclusions: The stability of ASD was lower in the present study than has been reported previously, a finding largely attributable to children who were diagnosed at 30 months or younger. Implications for clinical practice are discussed.
abstract_id: PUBMED:28857799
Age at Exposure to Surgery and Anesthesia in Children and Association With Mental Disorder Diagnosis. Background: Animals exposed to anesthetics during specific age periods of brain development experience neurotoxicity, with neurodevelopmental changes subsequently observed during adulthood. The corresponding vulnerable age in children, however, is unknown.
Methods: An observational cohort study was performed using a longitudinal dataset constructed by linking individual-level Medicaid claims from Texas and New York from 1999 to 2010. This dataset was evaluated to determine whether the timing of exposure to anesthesia ≤5 years of age for a single common procedure (pyloromyotomy, inguinal hernia, circumcision outside the perinatal period, or tonsillectomy and/or adenoidectomy) is associated with increased subsequent risk of diagnoses for any mental disorder, or specifically developmental delay (DD) such as reading and language disorders, and attention deficit hyperactivity disorder (ADHD). Exposure to anesthesia and surgery was evaluated in 11 separate age at exposure categories: ≤28 days old, >28 days and ≤6 months, >6 months and ≤1 year, and 6-month age intervals between >1 year old and ≤5 years old. For each exposed child, 5 children matched on propensity score calculated using sociodemographic and clinical covariates were selected for comparison. Cox proportional hazards models were used to measure the hazard ratio of a mental disorder diagnosis associated with exposure to surgery and anesthesia.
Results: A total of 38,493 children with a single exposure and 192,465 propensity score-matched children unexposed before 5 years of age were included in the analysis. Increased risk of mental disorder diagnosis was observed at all ages at exposure with an overall hazard ratio of 1.26 (95% confidence interval [CI], 1.22-1.30), which did not vary significantly with the timing of exposure. Analysis of DD and ADHD showed similar results, with elevated hazard ratios distributed evenly across all ages, and overall hazard ratios of 1.26 (95% CI, 1.20-1.32) for DD and 1.31 (95% CI, 1.25-1.37) for ADHD.
Conclusions: Children who undergo minor surgery requiring anesthesia under age 5 have a small but statistically significant increased risk of mental disorder diagnoses and DD and ADHD diagnoses, but the timing of the surgical procedure does not alter the elevated risks. Based on these findings, there is little support for the concept of delaying a minor procedure to reduce long-term neurodevelopmental risks of anesthesia in children. In evaluating the influence of age at exposure, the types of procedures included may need to be considered, as some procedures are associated with specific comorbid conditions and are only performed at certain ages.
abstract_id: PUBMED:2195972
Autism in infants and young children. The value of early diagnosis. Clinical studies of early symptoms are among the most significant advances achieved during recent years in the field of autism. The first descriptions of early symptoms were retrospective and lacked precision. Autistic children are now being seen increasingly early, and new insight is being gained into the initial manifestations and differences in clinical patterns. Similarly, differential diagnosis problems (borderline forms) are changing as clinicians evaluate children at younger ages. Although a definitive diagnosis is neither possible nor even desirable before the age of 18 to 24 months, detailed clinical data should be collected early. These data form the basis for differential diagnosis and will be needed later for differentiating various clinical patterns of developmental disorders, establishing a prognosis, and monitoring therapeutic effects. In prospective longitudinal studies of autistic children, comparison of recent observations with detailed early data is essential. Early initial evaluation includes videotape recordings, assessment of cognitive functions and communication skills, and use of a scale for autistic symptoms. Using the results of evaluations of very young children, a specific semiology (communication disorders) can be developed. Several symptoms probably denote neurophysiologic and/or perceptive disorders. Primary symptoms are now being better distinguished from secondary anomalies (behavior disorders) that may be avoidable or treatable.
abstract_id: PUBMED:38481459
Autism spectrum disorder: Comorbidity and demographics in a clinical sample. Objective: To determine the demographic and clinical characteristics of children followed up with the diagnosis of autism spectrum disorder (ASD) at a tertiary center in Southeast Turkey. Methods: Children followed up with the diagnosis of ASD at a university hospital child psychiatry clinic between June 2016 and June 2021 were evaluated retrospectively for comorbidities, intellectual functioning and age at diagnosis. Results: In the preschool group, females displayed significantly more frequent cognitive developmental delay. Median age at diagnosis was 36 months (IQR = 22) regardless of gender. Approximately three-fourths (73.7%) of the cases had at least one comorbid psychiatric disorder, while 22.8% had at least one medical diagnosis. Psychiatric comorbidity was found to be associated with later diagnosis. Conclusion: Although the age at first diagnosis in this study is earlier than that reported in the literature, most children with ASD are still diagnosed very late. Psychiatric comorbidities may lead to later diagnosis due to overshadowing. Training of educational and primary healthcare workers on symptoms of ASD may enable earlier diagnosis.
abstract_id: PUBMED:19581269
No change in the age of diagnosis for fragile x syndrome: findings from a national parent survey. Objective: To determine recent trends in the diagnosis of children with fragile X syndrome (FXS) and identify factors associated with the timing of diagnosis.
Methods: More than 1000 families of children with FXS participated in a national survey. Of these, 249 had their first child (213 boys, 36 girls) diagnosed between 2001 and 2007 and did not know about FXS in their family before diagnosis. These parents answered questions about the average age of first concerns, developmental delays, early intervention, and the FXS diagnosis. They also provided other information about their child and family, reported who made the diagnosis, and described ramifications for other children and extended family members.
Results: The average age of FXS diagnosis of boys remained relatively stable across the 7-year period at approximately 35 to 37 months. The 36 girls with full mutation were given the diagnosis at an average age of 41.6 months. A trend was noted in earlier diagnosis of developmental delay for boys in more recent years. Approximately 25% of the families of male children had a second child with the full mutation before the diagnosis was given to the first child; 14 (39%) of the 36 families of female children had a second child with the full mutation before the diagnosis.
Conclusions: Despite patient advocacy, professional recommendations regarding prompt referral for genetic testing, and increased exposure to information about FXS in the pediatric literature, no changes were detected in the age of diagnosis of FXS during the time period studied. Earlier identification in the absence of systematic screening will likely continue to be a challenge.
abstract_id: PUBMED:31318778
Children with Tourette Syndrome in the United States: Parent-Reported Diagnosis, Co-Occurring Disorders, Severity, and Influence of Activities on Tics. Objective: Describe the diagnostic process for Tourette syndrome (TS) based on parent report, as well as TS severity and associated impairment; the influence of common daily activities on tics; and the presence of co-occurring mental, behavioral, and developmental disorders among children in the United States.
Methods: Parent-report data from the 2014 National Survey of the Diagnosis and Treatment of ADHD and Tourette Syndrome on 115 children ever diagnosed with TS were analyzed. Descriptive, unweighted analyses included frequencies and percentages, and means and standard deviations. Fisher's exact test and t-tests were calculated to determine statistically significant differences.
Results: The mean age that tics were first noticed was 6.3 years, and, on average, TS was diagnosed at 7.7 years. The time from initially noticing tics to TS diagnosis averaged 1.7 years. The mean age when TS symptoms were most severe was 9.3 years. Tic severity was associated with impaired child functioning but not tic noticeability. Almost 70% of parents reported that fatigue and major transitions made their child's tics worse. Children with ever-diagnosed TS had a mean of 3.2 ever-diagnosed co-occurring mental, behavioral, or developmental disorders; a quarter (26.9%) had 5 or more co-occurring disorders.
Discussion: In this sample of children with TS, the time to diagnosis averaged less than 2 years from when tics were initially noticed. More severe TS was associated with greater functional impairment, and co-occurring disorders were common among children with TS. This study provides insight into the current experiences of children with TS in the United States and their families.
abstract_id: PUBMED:33392868
Factors Associated with a Delayed Autism Spectrum Disorder Diagnosis in Children Previously Assessed on Suspicion of Autism. This study aimed to investigate factors associated with a delayed autism spectrum disorder (ASD) diagnosis compared with children with either no or an early ASD diagnosis. Among 893 children assessed for ASD before age 8, 39% had no ASD at baseline, of whom 21% received a later ASD diagnosis. Autism symptoms, diagnostic history of other developmental disorders, cognitive ability, and socioeconomic factors were associated with delayed ASD. Autism Diagnostic Observation Schedule (ADOS) scores in delayed ASD fell between those of early and no ASD. Other developmental disorders, time and clinical trends such as ADOS use, and low parental education distinguished delayed from early ASD, whereas a higher frequency of IQ < 70 at baseline and a diagnosis of emotional disorders during follow-up distinguished delayed from no ASD.
Answer: Based on the information provided in the abstracts, there is evidence that race may influence the age of diagnosis for children with developmental delay, but the findings are not consistent across all studies. In a study examining children insured by South Carolina Medicaid, African American children were diagnosed with developmental delay (DD) at a slightly younger age than white children, with a mean age at diagnosis of 4.08 years for African American children and 4.27 years for white children (PUBMED:21122725). However, the study concluded that there were no clinically significant racial differences in the mean age of diagnosis with developmental delay, although the age of diagnosis was generally late for both groups.
In contrast, another study investigating autism spectrum disorders (ASD) in Chinese children found that demographic factors, including living in urban areas and parental education levels, influenced the age of initial diagnosis, with no specific mention of race as a factor (PUBMED:36606184). Similarly, other studies focused on the diagnosis of developmental disorders such as autism, intellectual disability, and specific language impairment (PUBMED:17261988), the variability in outcomes for children diagnosed with ASD at age 2 (PUBMED:17683451), and the association of exposure to surgery and anesthesia with mental disorder diagnosis (PUBMED:28857799), but did not specifically address the influence of race on the age of diagnosis.
A study on autism spectrum disorder in Southeast Turkey found that psychiatric comorbidity was associated with later diagnosis, but did not specifically mention race as a factor (PUBMED:38481459). Another study on fragile X syndrome (FXS) did not detect changes in the age of diagnosis over a 7-year period and did not report on the influence of race (PUBMED:19581269). Lastly, a study on Tourette syndrome in the United States did not find a significant influence of race on the age of diagnosis (PUBMED:31318778).
In summary, while one study (PUBMED:21122725) found a slight difference in the age of diagnosis between African American and white children, it did not find this difference to be clinically significant. Other studies did not specifically address race as a factor influencing the age of diagnosis for developmental delays. Therefore, based on the provided abstracts, it is not possible to conclusively state that race influences the age of diagnosis for children with developmental delay. |
Instruction: A 50-g glucose challenge test: is there any diagnostic cut-off?
Abstracts:
abstract_id: PUBMED:29694252
The Effect of the 50 g Glucose Challenge Test on The Thiol/Disulfide Homeostasis in Pregnancy. Aim: A 50 g glucose challenge test (GCT) is recommended for screening all pregnant women for gestational diabetes mellitus. In this study, the effect of GCT on the thiol/disulfide balance was investigated.
Methods: One hundred women who underwent a 50 g GCT at 24-28 weeks of gestation (63 positive and 37 negative results) were evaluated in terms of thiol/disulfide homeostasis in serum samples at test hours 0 and 1.
Results: Compared to the baseline values (hour 0), after the glucose load (hour 1), the thiol and native thiol/total thiol (p < 0.0001) of the GCT-positive women were reduced whereas the values of glucose, disulfide, disulfide/native thiol, disulfide/total thiol (p < 0.0001) and total thiol increased (p = 0.018).
Conclusion: In GCT-positive pregnant individuals, the glucose load increases oxidative stress by changing the thiol/disulfide homeostasis. Such an effect is not observed in healthy pregnancies.
abstract_id: PUBMED:27651570
50 Grams Oral Glucose Challenge Test: Is It an Effective Screening Test for Gestational Diabetes Mellitus? Aim: To find out whether 50 g oral glucose challenge test (OGCT) is an effective screening test for all pregnant women between 24 and 28 weeks gestation.
Method: A 50 g OGCT was administered to 307 unselected women at 24-28 weeks of gestation. When the venous plasma glucose (VPG) concentration after 1 h was >7.8 mmol/l, the OGCT was positive. Women with a positive OGCT underwent a 2 h 75 g oral glucose tolerance test (OGTT) for confirmatory diagnosis of GDM. When fasting and 2 h post 75 g OGTT values were >5.5 mmol/l and >8 mmol/l, respectively, women were considered diabetic.
Results: We screened 307 women for GDM by OGCT. Total number of women with positive OGCT was 83 (27.03 %). In the low-risk group, total number of women with GDM was 9/168 (5.35 %) while the total number of women with GDM in the high-risk group was 14/139 (10.07 %). There was no significant difference with respect to the total number of women with GDM in the groups.
Conclusions: A 50 g OGCT seems to be an effective screening test for both groups. More cases of GDM can be discovered when universal rather than risk-related screening is applied.
abstract_id: PUBMED:33350460
Different pre-analytical techniques and the results of 50 g oral glucose challenge tests. Objective: We aimed to analyse the pre-analytical process and its effect of 50 g of oral glucose challenge test results for screening gestational diabetes mellitus.
Research Design And Methods: The 50 g oral glucose challenge test was performed in 30 pregnant women, and blood was collected as two samples into each of three tube types: serum-separating gel (SSJ), sodium fluoride-potassium oxalate (NaF-KOx) and sodium citrate. The first samples from the three tubes were centrifuged within 30 minutes, and the second samples were centrifuged after 60 minutes and then analysed. One sample in an SSJ tube was analysed the same day according to the hospital's routine practice. The results were compared.
Results: Among the 30 samples, the mean decrease in glucose levels was highest in the SSJ tube (0.38 mmol/L), followed by 0.16 mmol/L in the Na citrate tube and 0.14 mmol/L in the NaF-KOx tube. The hospital routine assessment with SSJ was 6.36 ± 1.90 mmol/L. The <30 and >60 minutes glucose results were 6.80 ± 1.88 mmol/L vs 6.42 ± 1.97 mmol/L for SSJ, 5.95 ± 1.60 mmol/L vs 5.78 ± 1.51 mmol/L for Na citrate and 6.90 ± 1.86 mmol/L vs 6.75 ± 1.90 mmol/L for NaF-KOx, respectively; both the changes over time and the differences between the tubes were statistically significant (P < .001).
Conclusion: In cases with a longer time to assessment and with different blood sample tubes, the clinician should keep in mind that gestational diabetes might be underdiagnosed, especially when results fall below but close to the cut-off level.
abstract_id: PUBMED:35261106
Relationship of anthropometric measurements with glycated hemoglobin and 1-h blood glucose after 50 g glucose challenge test in pregnant women: A longitudinal cohort study in Southern Thailand. Aims: To assess correlations of anthropometric measurements with glycated hemoglobin (HbA1c) and 1-h blood glucose after a 50 g glucose challenge test during the first and late second trimesters and explore their relationships of anthropometric measurements with neonatal birth weight.
Methods: A longitudinal study was conducted among pregnant Thai women with gestational age ≤14 weeks. Anthropometric measurements, using body mass index, body compositions, and circumferences, and skinfold thickness, were measured at four-time points: ≤14, 18-22, 24-28, and 30-34 weeks of gestation. HbA1c and 1-h blood glucose were examined at ≤14 and 24-28 weeks. Neonatal birth weight was recorded.
Results: Of 312 women, HbA1c was more correlated with anthropometric measurements during pregnancy than 1-h blood glucose. At 24-28 weeks, women with high/very high body fat percentage were more likely to have higher HbA1c. Women with high subscapular skinfold thickness were more likely to have higher 1-h blood glucose at ≤14 and 24-28 weeks. High hip circumference significantly increased neonatal birth weights.
Conclusion: Anthropometric measurements were longitudinally correlated with HbA1c and 1-h blood glucose, with stronger correlations in the late second trimester than in the first, as well as with neonatal birth weight. The mechanisms underlying the relationships of the different anthropometric measurements require further study.
abstract_id: PUBMED:29323608
The Association Between Low 50 g Glucose Challenge Test Values and Adverse Pregnancy Outcomes. Background: The implications of low values on the 50 g glucose challenge test (GCT) in pregnancy are not clearly defined. Few studies have evaluated the influence of maternal low GCT values on obstetrical outcomes. This study aimed to compare pregnancy outcomes between women with low 50 g GCT values and those with normal values.
Materials And Methods: Women undergoing gestational diabetes mellitus screening at 24-28 weeks of gestational age between January 2010 and December 2016 were retrospectively evaluated. Women with multifetal pregnancies, prepregnancy type I or II diabetes, GCT performed before 24 or after 28 weeks of gestational age, and women undergoing multiple GCTs in the same pregnancy were excluded. Low GCT values and normal GCT values were defined as ≤85 mg/dL and 86-130 mg/dL, respectively.
Results: Of 3875 screened subjects, 519 (13.4%) women were included in the low GCT group and 3356 (86.6%) in the normal GCT group. Low GCT women had a significantly higher rate of small for gestational age (SGA) infants than normal GCT women (10.8% vs. 7.9%, p = 0.02). Cesarean section and postpartum hemorrhage (PPH) were less frequent in low GCT women than in normal women (32.6% vs. 42.8%, p < 0.01 and 0.2% vs. 1.2%, p = 0.03, respectively). Low GCT women had a 1.38-fold increased risk of bearing SGA infants (95% confidence intervals: 1.01-1.88, p = 0.04).
Conclusions: Rate of SGA infants was significantly higher and cesarean delivery and PPH rates were significantly lower in women with low GCT values. Low GCT values were independently associated with an increased risk of SGA.
abstract_id: PUBMED:33150244
A Retrospective Multicenter Study on the Usefulness of 50 g Glucose Challenge Test in Gestational Diabetes Mellitus Screening. Introduction: To clarify the usefulness of glucose challenge test (GCT), the rate of gestational diabetes mellitus (GDM) detection and perinatal outcomes were compared between the groups of random blood glucose level (RBG) and 50 g GCT in this study.
Methods: The first survey was conducted at 255 institutions registered by the Kanto Society of Obstetrics and Gynecology and clinical training institutions in the Kanto Area, followed by a second survey. The included women were broadly classified into the RBG and GCT groups, according to the mid-trimester blood glucose screening method, and the perinatal outcomes of the two groups were retrospectively compared. The primary outcomes were the proportion of infants weighing 3,500 g or more and birth weight ≥90th-percentile infants.
Results: The rate of GDM diagnosis was significantly higher in the GCT group (7.6%) than that in the RBG group (4.8%). However, no significant differences were observed in perinatal outcomes, i.e., the proportion of infants weighing 3,500 g or more or birth weight ≥90th percentile.
Conclusions: GCT is not superior for predicting infants weighing 3,500 g or more and birth weight ≥90th percentile, as compared with RBG.
abstract_id: PUBMED:30091446
Maternal hypoglycaemia on the 50 g oral glucose challenge test - evaluation of obstetric and neonatal outcomes. Objectives: To discuss obstetric and neonatal outcomes of maternal hypoglycaemia observed after the 50 g oral glucose challenge test.
Material And Methods: A retrospective evaluation was made of the results of patients with a live singleton pregnancy at 24-28 weeks of gestation who underwent a 50 g OGCT at the Health Sciences University Gazi Yaşargil Training and Research Hospital between September 2016 and August 2017. In the 50 g OGCT, 1-hour blood glucose results were divided into Low OGCT (< 90 mg/dL) and Normal OGCT (90-139 mg/dL). The groups were compared in respect of obstetric and neonatal outcomes.
Results: Of 2623 pregnant patients applied with the 50 g OGCT, blood glucose was < 140 mg/dL in 77.16% (n = 2024), with 11.9% (n = 312) in the Low OGCT group, and the remaining 65.26% (n = 1712) in the Normal OGCT group. Based on the comparison of the groups, the SGA rate was 7% in the Low OGCT group and 4% in the Normal OGCT group; the 5th minute APGAR score was < 7 in 2% of the Low OGCT group and in 1% of the Normal OGCT group, while caesarean section rates were 25% and 32% respectively (p < 0.05).
Conclusions: The results of the study showed a significant association between maternal hypoglycaemia and increased SGA rate, decreased 5-minute APGAR scores and reduced caesarean section rates, and this relationship should be confirmed with further comprehensive studies.
abstract_id: PUBMED:35931097
Screening Accuracy of the 50 g-Glucose Challenge Test in Twin Compared With Singleton Pregnancies. Context: The optimal 50 g-glucose challenge test (GCT) cutoff for the diagnosis of gestational diabetes mellitus (GDM) in twin pregnancies is unknown.
Objective: This work aimed to explore the screening accuracy of the 50 g-GCT and its correlation with the risk of large for gestational age (LGA) newborn in twin compared to singleton pregnancies. A population-based retrospective cohort study (2007-2017) was conducted in Ontario, Canada. Participants included patients with a singleton (n = 546 892 [98.4%]) or twin (n = 8832 [1.6%]) birth who underwent screening for GDM using the 50 g-GCT.
Methods: We compared the screening accuracy, risk of GDM, and risk of LGA between twin and singleton pregnancies using various 50 g-GCT cutoffs.
Results: For any given 50 g-GCT result, the probability of GDM was higher (P = 0.007), whereas the probability of LGA was considerably lower in the twin compared with the singleton group, even when a twin-specific growth chart was used to diagnose LGA in the twin group (P < .001). The estimated false-positive rate (FPR) for GDM was higher in twin compared with singleton pregnancies irrespective of the 50 g-GCT cutoff used. The cutoff of 8.2 mmol/L (148 mg/dL) in twin pregnancies was associated with an estimated FPR (10.7%-11.1%) that was similar to the FPR associated with the cutoff of 7.8 mmol/L (140 mg/dL) in singleton pregnancies (10.8%).
Conclusion: The screening performance of the 50 g-GCT for GDM and its correlation with LGA differ between twin and singleton pregnancies.
abstract_id: PUBMED:29637552
Use of the 50-g glucose challenge test to predict excess delivery weight. Objective: To identify a cut-off value for the 50-g glucose challenge test (GCT) that predicts excess delivery weight.
Methods: A retrospective study was conducted among pregnant women who underwent a 50-g GCT at Hacettepe University Hospital, Ankara, Turkey, between January 1, 2000, and December 31, 2016. Patients with singleton pregnancies who delivered live neonates after 28 weeks of pregnancy were included. Patients were classified according to their 50-g GCT values into group 1 (<7.770 mmol/L), group 2 (7.770 to <8.880 mmol/L), group 3 (8.880-9.990 mmol/L), or group 4 (>9.990 mmol/L). Classification and regression tree data mining was performed to identify the 50-g GCT cut-off value corresponding to a substantial increase in delivery weight.
Results: Median delivery weights were 3100 g in group 1 (n=352), 3200 g in group 2 (n=165), 3720 g in group 3 (n=47), and 3865 g in group 4 (n=20). Gravidity, 50-g GCT value, and pregnancy duration at delivery explained 30.6% of the observed variance in delivery weight. The maternal blood glucose cut-off required to predict excessive delivery weight was 8.741 mmol/L.
Conclusion: The 50-g GCT can be used to identify women at risk of delivering offspring with excessive delivery weight.
abstract_id: PUBMED:27512415
Response to fifty grams oral glucose challenge test and pattern of preceding fasting plasma glucose in normal pregnant Nigerians. Background: Diabetes mellitus in pregnancy has profound implications for the baby and mother and thus active screening for this is desirable.
Method: After obtaining consent, a 50 g oral glucose challenge test was administered, following an overnight fast, to 222 women in good health with singleton pregnancies and without diabetes mellitus at 24 to 28 weeks of gestation. A venous blood sample was obtained before and 1 hour after the glucose load. A diagnostic 3-hour 100 g oral glucose tolerance test was subsequently performed in all.
Results: Two hundred and ten women had a normal response to the oral glucose tolerance test, i.e. venous plasma glucose below the following cut-off levels: fasting 95 mg/dl (5.3 mmol/l), 1 hour 180 mg/dl (10.0 mmol/l), 2 hours 155 mg/dl (8.6 mmol/l) and 3 hours 140 mg/dl (7.8 mmol/l), while 12 were found to have gestational diabetes mellitus and were subsequently excluded from the study. They were appropriately managed. The mean maternal age was 30.9 ± 4.1 years (range 19 to 45 years) and the mean parity was 1.2 ± 1.1 (range 0 to 5). The mean fasting plasma glucose was 74.5 ± 11.5 mg/dl (range 42 to 117 mg/dl), while the mean plasma glucose 1 hour after the 50 g glucose challenge test was 115.3 ± 19.1 mg/dl (range 56 to 180 mg/dl).
Conclusions: The mean fasting plasma glucose in normal pregnant Nigerians was 74.5 ± 11.5 mg/dl (range 42 to 117 mg/dl). There is a need to re-appraise, and possibly revise downwards, the World Health Organization fasting plasma glucose diagnostic criteria in pregnant Nigerians for better detection of gestational diabetes mellitus. Pregnant women with venous plasma glucose greater than 153.5 mg/dl (8.5 mmol/l) 1 hour after the 50 g glucose challenge test are strongly recommended to undergo diagnostic testing for gestational diabetes mellitus.
Answer: The 50 g glucose challenge test (GCT) is a screening test for gestational diabetes mellitus (GDM) and is typically administered to pregnant women between 24 and 28 weeks of gestation. According to the abstracts provided, there are indeed diagnostic cutoff values used to interpret the results of the GCT.
In one study, a venous plasma glucose (VPG) concentration greater than 7.8 mmol/L (140 mg/dL) one hour after the glucose load was considered a positive GCT result, prompting further testing with a 75 g oral glucose tolerance test (OGTT) for a confirmatory diagnosis of GDM (PUBMED:27651570). Another study mentioned that a cutoff of 8.2 mmol/L (148 mg/dL) in twin pregnancies was associated with an estimated false-positive rate (FPR) similar to the FPR associated with the cutoff of 7.8 mmol/L (140 mg/dL) in singleton pregnancies (PUBMED:35931097).
Additionally, one study aimed to identify a cutoff value for the 50-g GCT that predicts excess delivery weight and found that a maternal blood glucose level of 8.741 mmol/L was the cutoff required to predict excessive delivery weight (PUBMED:29637552).
It is important to note that the cutoff values can vary depending on the population being studied and the specific outcomes being considered. For instance, the study conducted in Nigeria suggested that there might be a need to review the fasting plasma glucose diagnostic criteria downwards for better detection of GDM in that population (PUBMED:27512415).
In summary, while there are established cutoff values for the 50 g GCT, these values can vary, and further testing with an OGTT is often required for a definitive diagnosis of GDM. |
Instruction: Maternal mortality: only 42 days?
Abstracts:
abstract_id: PUBMED:26279311
Review of the importance of nutrition during the first 1000 days: maternal nutritional status and its associations with fetal growth and birth, neonatal and infant outcomes among African women. Maternal nutritional status (MNS) is a strong predictor of growth and development in the first 1000 days of life and may influence susceptibility to non-communicable diseases in adulthood. However, the role of nutrition during this window of developmental plasticity in Africa is unclear. This paper reviews published data to address whether maternal nutrition during the first 1000 days is important for Africa, with a focus on MNS and its associations with fetal growth and birth, neonatal and infant outcomes. A systematic approach was used to search the following databases: Medline, EMBASE, Web of Science, Google Scholar, ScienceDirect, SciSearch and Cochrane Library. In all, 26 studies met the inclusion criteria for the specific objectives. MNS in Africa showed features typical of the epidemiological transition: higher prevalences of maternal overweight and obesity, a lower prevalence of underweight, poor diet quality and a high prevalence of anaemia. Maternal body mass index and greater gestational weight gain (GWG) were positively associated with birth weight; however, maternal overweight and obesity were associated with increased risk of macrosomia and intrauterine growth restriction. Maternal anaemia was associated with lower birth weight. Macro- and micronutrient supplementation during pregnancy were associated with improvements in GWG, birth weight and mortality risk. Data suggest poor MNS in Africa and confirm the importance of the first 1000 days as a critical period for nutritional intervention to improve growth, birth outcomes and potential future health risk. However, there is a lack of data beyond birth and a need for longitudinal data through infancy to 2 years of age.
abstract_id: PUBMED:36381799
Importance of Maternal Nutrition in the First 1,000 Days of Life and Its Effects on Child Development: A Narrative Review. Maternal nutrition needs to be addressed during pregnancy for the child's first 1,000 days of life, or roughly between conception and a child's second birthday. The infant requires just breast milk for the first six months of life. The production of breastmilk and its nutritional value are essentially unaffected by maternal privation. The child's health suffers when the mother's diet and health are impaired. This review aims to discuss the importance of pregnant women's nutrition and how it impacts the growth and development of a child during this critical period, as supported by the most recent literature. Throughout the child's growth in the mother's womb and outside it, four distinct stages have been identified: (1) nine months to zero months: pregnancy; (2) zero to six months: breastfeeding; (3) six to 12 months: introduction of solid food; and (4) >12 months: transition to the family diet, with appreciation of the nutritious food offered within each period for the child's development. Moreover, there is a strong link between nutrition, well-being, and learning. The nutritional intake of infants, children, and adolescents maintains body weight and sustains their normal growth and development. One of the crucial factors influencing a child's development is nutrition. Rapid growth occurs during infancy. Compared to other growth phases, infancy has the largest energy and food needs relative to body size.
abstract_id: PUBMED:33023698
The missing focus on women's health in the First 1,000 days approach to nutrition. The First 1,000 Days approach highlights the time between conception and a child's second birthday as a critical period where adequate nutrition is essential for adequate development and growth throughout the child's life and potentially onto their own offspring. Based on a review of relevant literature, this commentary explores the First 1,000 Days approach with a maternal lens. While the primary objective of the First 1,000 Days approach to nutrition is to reduce child malnutrition rates, particularly chronic undernutrition in the form of stunting, interventions are facilitated through mothers in terms of promoting healthy behaviours such as exclusive breast-feeding and attention to her nutritional status during pregnancy and lactation. Though these interventions were facilitated through women, women's health indicators are rarely tracked and measured, which we argue represents a missed opportunity to strengthen the evidence base for associations between maternal nutrition and women's health outcomes. Limited evidence on the effects of dietary interventions with pregnant and lactating mothers on women's health outcomes hinders advocacy efforts, which then contributes to lower prioritisation and less research.
abstract_id: PUBMED:34143504
Does neonatal manipulation on continuous or alternate days change maternal behavior? Maternal separation and neonatal manipulation of pups produce changes in maternal behavior after the dam-pup reunion. Here, we examined whether continuous versus alternating days of neonatal manipulation during the first 8 postnatal days produces differential changes in maternal and non-maternal behaviors in rats. We found that both maternal separation protocols increased anogenital licking after dam-pup reunion, reflecting increased maternal care of pups.
abstract_id: PUBMED:34077076
Policy dialogue to support maternal newborn child health evidence use in policymaking: The lessons learnt from the Nigeria research days first edition. The use of evidence in decision-making and practice can be improved through diverse interventions, including policy dialogue. The Department of Family Health, Federal Ministry of Health of Nigeria initiated and organized the Nigeria Research Days (NRD), to serve as a platform for exchange between researchers and policymakers for improving maternal, new-born and child health. The study reports on the conceptualization, organization and lessons learned from the first edition. A cross-sectional study was designed to assess the effectiveness of a policy dialogue during the NRDs. Data were collected from the feasibility and workshop evaluation surveys. A descriptive analysis of data was performed. As a result, the Nigeria Research Days meets all the criteria for a successful policy dialogue. The participants positively rated the content and format of the meeting and made suggestions for improvement. They were willing to implement the recommendations of the final communiqué. The lessons learned from this first edition will be used to improve future editions.
abstract_id: PUBMED:38468697
Maternal periconception food insecurity and postpartum parenting stress and bonding outcomes. Food insecurity during pregnancy is associated with various adverse pregnancy outcomes for the mother and infant, but less is known about the role of periconception food insecurity and its links to maternal and child wellbeing in the postpartum period. In a sample of 115 diverse (41% white) and predominately low-income mothers, results of hierarchical regression analyses showed that periconception food insecurity was positively associated with parenting stress at 2 months postpartum. A negative association between food insecurity and maternal-infant bonding at 6 months postpartum was mediated after controlling for prenatal depression, social support, and demographic factors. Findings highlight the need for maternal linkage to effective food security programs, such as United States-based Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), for women during their childbearing years due to the critical importance of food security for maternal and infant well-being.
abstract_id: PUBMED:34408995
"The First Thousand Days" Define a Fetal/Neonatal Neurology Program. Gene-environment interactions begin at conception to influence maternal/placental/fetal triads, neonates, and children with short- and long-term effects on brain development. Life-long developmental neuroplasticity more likely results during critical/sensitive periods of brain maturation over these first 1,000 days. A fetal/neonatal program (FNNP) applying this perspective better identifies trimester-specific mechanisms affecting the maternal/placental/fetal (MPF) triad, expressed as brain malformations and destructive lesions. Maladaptive MPF triad interactions impair progenitor neuronal/glial populations within transient embryonic/fetal brain structures by processes such as maternal immune activation. Destructive fetal brain lesions later in pregnancy result from ischemic placental syndromes associated with the great obstetrical syndromes. Trimester-specific MPF triad diseases may negatively impact labor and delivery outcomes. Neonatal neurocritical care addresses the symptomatic minority who express the great neonatal neurological syndromes: encephalopathy, seizures, stroke, and encephalopathy of prematurity. The asymptomatic majority present with neurologic disorders before 2 years of age without prior detection. The developmental principle of ontogenetic adaptation helps guide the diagnostic process during the first 1,000 days to identify more phenotypes using systems-biology analyses. This strategy will foster innovative interdisciplinary diagnostic/therapeutic pathways, educational curricula, and research agenda among multiple FNNP. Effective early-life diagnostic/therapeutic programs will help reduce neurologic disease burden across the lifespan and successive generations.
abstract_id: PUBMED:34067735
Stakeholder Perspectives on Barriers and Facilitators on the Implementation of the 1000 Days Plus Nutrition Policy Activities in Ghana. Optimizing nutrition in the preconception and 1000 days periods have long-term benefits such as higher economic productivity, reduced risk of related non-communicable diseases and increased health and well-being. Despite Ghana's recent progress in reducing malnutrition, the situation is far from optimal. This qualitative study analyzed the maternal and child health nutrition policy framework in Ghana to identify the current barriers and facilitators to the implementation of nutrition policies and programs relating to the first 1000 days plus. Data analyzed included in-depth interviews and focus group discussions conducted in Ghana between March and April 2019. Participants were composed of experts from government agencies, civil society organizations, community-based organizations and international partners at national and subnational levels. Seven critical areas were identified: planning policy implementation, resources, leadership and stakeholders' engagement, implementation guidance and ongoing communication, organizational culture, accountability and governance and coverage. The study showed that, to eradicate malnutrition in Ghana, priorities of individual stakeholders have to be merged and aligned into a single 1000 days plus nutrition policy framework. Furthermore, this study may support stakeholders in implementing successfully the 1000 days plus nutrition policy activities in Ghana.
abstract_id: PUBMED:14592584
Maternal mortality: only 42 days? Objective: A maternal death is defined by WHO as 'the death of a woman while pregnant or within 42 days of termination of pregnancy …'. The origin of the 42 days is no longer clear. In developing countries, the burden imposed by pregnancy and birth on a woman's body may extend beyond 42 days as pregnancy-related anaemia can persist for longer and vaginal haemorrhaging and risk of infections are not necessarily over after six weeks. We therefore examined duration of excess mortality after delivery in rural Guinea-Bissau.
Design: In a prospective cohort study, we followed 15,844 women of childbearing age with biannual visits over a period of six years, resulting in a total of 60,192 person-years-at-risk. To establish cause and timing in relation to termination of pregnancy, verbal autopsy was carried out for all deaths. Mortality rates were calculated for short time intervals after each delivery or miscarriage.
Results: During the observation period we registered 14,257 pregnancies and 350 deaths. One hundred and ninety-four deaths followed termination of a registered pregnancy and thus were eligible for the analysis. Eighty-two deaths occurred during the first 42 days after delivery/miscarriage. A further 16 women died in the period from 43 to 91 days after parturition, 16 between 92 and 182 days and 18 between 183 and 365 days after delivery. Compared with baseline mortality 7-12 months after delivery, women who had recently delivered had 15.9 times higher mortality (95% CI 9.8-27.4). From days 43 to 91 the mortality was still significantly elevated (RR = 2.8 [1.4-5.4]).
Conclusion: Where living conditions are harsh, pregnancy and delivery affect the health of the woman for more than 42 days. Using the WHO definition may result in an under-estimation of the pregnancy-related part of the reproductive age mortality. Extending the definition of maternal death to include all deaths within three months of delivery may increase current estimates of maternal mortality by 10-15%.
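The rate ratios reported above (15.9 in the first 42 days and 2.8 for days 43-91, relative to baseline mortality 7-12 months after delivery) are ratios of interval-specific mortality rates per person-time. The abstract does not give the person-years at risk in each interval, so the sketch below uses hypothetical person-year figures chosen only to roughly reproduce the published ratios and illustrate how such rate ratios and approximate Poisson confidence intervals are computed; it is not a reconstruction of the study's analysis.

```python
import math

# Deaths per postpartum interval are taken from the abstract; person-years
# at risk are NOT reported there and are hypothetical values chosen only to
# roughly reproduce the published rate ratios.
intervals = {
    "0-42 days":    {"deaths": 82, "person_years": 2000},   # hypothetical PY
    "43-91 days":   {"deaths": 16, "person_years": 2200},   # hypothetical PY
    "183-365 days": {"deaths": 18, "person_years": 7000},   # hypothetical PY (baseline)
}

baseline = intervals["183-365 days"]
base_rate = baseline["deaths"] / baseline["person_years"]

for name, d in intervals.items():
    rate = d["deaths"] / d["person_years"]
    rr = rate / base_rate
    # Approximate 95% CI for a rate ratio assuming Poisson death counts:
    # log(RR) +/- 1.96 * sqrt(1/deaths_interval + 1/deaths_baseline)
    se_log = math.sqrt(1 / d["deaths"] + 1 / baseline["deaths"])
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    print(f"{name}: rate={rate:.4f}/PY, RR={rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```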
abstract_id: PUBMED:36552046
A Bio-Social Model during the First 1000 Days Optimizes Healthcare for Children with Developmental Disabilities. Most children with developmental disabilities (DD) live in resource-limited countries (LMIC) or high-income country medical deserts (HICMD). A social contract between healthcare providers and families advocates for accurate diagnoses and effective interventions to treat diseases and toxic stressors. This bio-social model emphasizes reproductive health of women with trimester-specific maternal and pediatric healthcare interactions. Lifelong neuronal connectivity is more likely established across 80% of brain circuitries during the first 1000 days. Maladaptive gene-environment (G x E) interactions begin before conception later presenting as maternal-placental-fetal (MPF) triad, neonatal, or childhood neurologic disorders. Synergy between obstetrical and pediatric healthcare providers can reduce neurologic morbidities. Partnerships between healthcare providers and families should begin during the first 1000 days to address diseases more effectively to moderate maternal and childhood adverse effects. This bio-social model lowers the incidence and lessens the severity of sequalae such as DD. Access to genetic-metabolomic, neurophysiologic and neuroimaging evaluations enhances clinical decision-making for more effective interventions before full expression of neurologic dysfunction. Diagnostic accuracy facilitates developmental interventions for effective preschool planning. A description of a mother-child pair in a HIC emphasizes the time-sensitive importance for early interventions that influenced brain health throughout childhood. Partnership by her parents with healthcare providers and educators provided effective healthcare and lessened adverse effects. Effective educational interventions were later offered through her high school graduation. Healthcare disparities in LMIC and HICMD require that this bio-social model of care begin before the first 1000 days to effectively treat the most vulnerable women and children. Prioritizing family planning followed by prenatal, neonatal and child healthcare improves wellness and brain health. Familiarity with educational neuroscience for teachers applies neurologic diagnoses for effective individual educational plans. Integrating diversity and inclusion into medical and educational services cross socioeconomic, ethnic, racial, and cultural barriers with life-course benefits. Families require knowledge to recognize risks for their children and motivation to sustain relationships with providers and educators for optimal outcomes. The WHO sustainable development goals promote brain health before conception through the first 1000 days. Improved education, employment, and social engagement for all persons will have intergenerational and transgenerational benefits for communities and nations.
Answer: The concept of maternal mortality traditionally encompasses the death of a woman while pregnant or within 42 days of termination of pregnancy, as defined by the World Health Organization (WHO) (PUBMED:14592584). However, the origin of the 42-day period is not entirely clear, and there is evidence suggesting that the burden imposed by pregnancy and birth on a woman's body may extend beyond this timeframe. In a study conducted in rural Guinea-Bissau, researchers found that women who had recently delivered experienced significantly elevated mortality rates not only during the first 42 days after delivery/miscarriage but also from 43 to 91 days and even up to 365 days postpartum (PUBMED:14592584). The study revealed that compared with baseline mortality 7-12 months after delivery, women had 15.9 times higher mortality in the first 42 days, and mortality remained significantly elevated (RR = 2.8) from days 43 to 91 (PUBMED:14592584). These findings suggest that in harsh living conditions, the health effects related to pregnancy and delivery can affect women for more than the 42-day period recognized by the WHO. Consequently, using the WHO definition may result in an underestimation of the pregnancy-related component of reproductive age mortality. The study proposes that extending the definition of maternal death to include all deaths within three months of delivery could increase current estimates of maternal mortality by 10-15% (PUBMED:14592584). |
Instruction: Can patients with brain herniation on cranial computed tomography have a normal neurologic exam?
Abstracts:
abstract_id: PUBMED:19076104
Can patients with brain herniation on cranial computed tomography have a normal neurologic exam? Objectives: Herniation of the brain outside of its normal intracranial spaces is assumed to be accompanied by clinically apparent neurologic dysfunction. The authors sought to determine if some patients with brain herniation or significant brain shift diagnosed by cranial computed tomography (CT) might have a normal neurologic examination.
Methods: This is a secondary analysis of the National Emergency X-Radiography Utilization Study (NEXUS) II cranial CT database compiled from a multicenter, prospective, observational study of all patients for whom cranial CT scanning was ordered in the emergency department (ED). Clinical information including neurologic examination was prospectively collected on all patients prior to CT scanning. Using the final cranial CT radiology reports from participating centers, all CT scans were classified into three categories: frank herniation, significant shift without frank herniation, and minimal or no shift, based on predetermined explicit criteria. These reports were concatenated with clinical information to form the final study database.
Results: A total of 161 patients had CT-diagnosed frank herniation; 3 (1.9%) had no neurologic deficit. Of 91 patients with significant brain shift but no herniation, 4 (4.4%) had no neurologic deficit.
Conclusions: A small number of patients may have normal neurologic status while harboring significant brain shift or brain herniation on cranial CT.
abstract_id: PUBMED:20370782
Prevalence of herniation and intracranial shift on cranial tomography in patients with subarachnoid hemorrhage and a normal neurologic examination. Objectives: Patients frequently present to the emergency department (ED) with headache. Those with sudden severe headache are often evaluated for spontaneous subarachnoid hemorrhage (SAH) with noncontrast cranial computed tomography (CT) followed by lumbar puncture (LP). The authors postulated that in patients without neurologic symptoms or signs, physicians could forgo noncontrast cranial CT and proceed directly to LP. The authors sought to define the safety of this option by having senior neuroradiologists rereview all cranial CTs in a group of such patients for evidence of brain herniation or midline shift.
Methods: This was a retrospective study that included all patients with a normal neurologic examination and nontraumatic SAH diagnosed by CT presenting to a tertiary care medical center from August 1, 2001, to December 31, 2004. Two neuroradiologists, blinded to clinical information and outcomes, rereviewed the initial ED head CT for evidence of herniation or midline shift.
Results: Of the 172 patients who presented to the ED with spontaneous SAH diagnosed by cranial CT, 78 had normal neurologic examinations. Of these, 73 had initial ED CTs available for review. Four of the 73 (5%; 95% confidence interval [CI] = 2% to 13%) had evidence of brain herniation or midline shift, including three (4%; 95% CI = 1% to 12%) with herniation. In only one of these patients was herniation or shift noted on the initial radiology report.
Conclusions: Awake and alert patients with a normal neurologic examination and SAH may have brain herniation and/or midline shift. Therefore, cranial CT should be obtained before LP in all patients with suspected SAH.
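The interval estimates quoted above (for example 4 of 73, 5%, 95% CI = 2% to 13%) are confidence intervals for small proportions. The paper does not state which interval method was used; the sketch below applies the Wilson score interval from statsmodels, which reproduces intervals of this magnitude and is offered only as an illustration.

```python
from statsmodels.stats.proportion import proportion_confint

# Counts taken from the abstract: 4 of 73 patients with herniation or
# midline shift, 3 of 73 with herniation.
for count, label in [(4, "herniation or midline shift"), (3, "herniation")]:
    low, high = proportion_confint(count, 73, alpha=0.05, method="wilson")
    print(f"{label}: {count}/73 = {count / 73:.1%} "
          f"(95% CI {low:.0%} to {high:.0%})")
```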
abstract_id: PUBMED:11742046
Computed tomography of the head before lumbar puncture in adults with suspected meningitis. Background: In adults with suspected meningitis clinicians routinely order computed tomography (CT) of the head before performing a lumbar puncture.
Methods: We prospectively studied 301 adults with suspected meningitis to determine whether clinical characteristics that were present before CT of the head was performed could be used to identify patients who were unlikely to have abnormalities on CT. The Modified National Institutes of Health Stroke Scale was used to identify neurologic abnormalities.
Results: Of the 301 patients with suspected meningitis, 235 (78 percent) underwent CT of the head before undergoing lumbar puncture. In 56 of the 235 patients (24 percent), the results of CT were abnormal; 11 patients (5 percent) had evidence of a mass effect. The clinical features at base line that were associated with an abnormal finding on CT of the head were an age of at least 60 years, immunocompromise, a history of central nervous system disease, and a history of seizure within one week before presentation, as well as the following neurologic abnormalities: an abnormal level of consciousness, an inability to answer two consecutive questions correctly or to follow two consecutive commands, gaze palsy, abnormal visual fields, facial palsy, arm drift, leg drift, and abnormal language (e.g., aphasia). None of these features were present at base line in 96 of the 235 patients who underwent CT scanning of the head (41 percent). The CT scan was normal in 93 of these 96 patients, yielding a negative predictive value of 97 percent. Of the three misclassified patients, only one had a mild mass effect on CT, and all three subsequently underwent lumbar puncture, with no evidence of brain herniation one week later.
Conclusions: In adults with suspected meningitis, clinical features can be used to identify those who are unlikely to have abnormal findings on CT of the head.
abstract_id: PUBMED:7775236
Glioma in a goat. An adult goat was examined because of behavioral changes and circling. Results of neurologic examination, CSF analysis, hematologic evaluation, and computed tomography of the brain were suggestive of an intra-axial mass. The goat was euthanatized because of worsening neurologic condition and poor prognosis. Necropsy revealed a large mass in the right cerebral hemisphere and caudal brain herniation through the foramen magnum. The mass was diagnosed as a glioma, with oligodendrocyte differentiation. Results of immunohistochemical evaluation were compatible with a malignant, poorly differentiated tumor.
abstract_id: PUBMED:33482817
The diagnostic value of intravenous contrast computed tomography in addition to plain computed tomography in dogs with head trauma. Background: The aim of this study is to evaluate additional findings which can be detected by post-contrast computed tomography (CCT) in relation to plain CT (PCT) findings in patients presented with head trauma. Medical records of canine patients with the history of head trauma from three institutions were reviewed. PCT- and CCT-anonymized images were evaluated by a veterinary radiologist separately. From the categorized findings the following conclusions were drawn as: abnormalities were identified on (A) PCT but missed on CCT, (B) CCT but missed on PCT, (C) both PCT and CCT.
Results: Thirty-two patients were included. The results showed that findings identified on CCT or PCT (categories A and B) but missed on the other series were limited to mild soft tissue and sinus changes. Overall, 61 different fracture areas, 6 injuries of the temporomandibular joint (TMJ), 4 orbital injuries, 14 nasal cavities with soft tissue density filling, 13 areas of emphysema, 4 symphysis separations, 12 intracranial hemorrhages, 6 cases of cerebral edema, 5 cerebral midline shifts, 3 intracranial aeroceles, 3 brain herniations and 6 intraparenchymal foreign bodies (defined as an abnormal structure located within the brain: e.g. bony fragments, bullet, teeth, etc.) were identified on both PCT and CCT separately (category C). Severity grading was different in 50% (3/6) of the reported cases of cerebral edema using PCT and CCT images.
Conclusion: The results showed that PCT is valuable to identify the presence of intracranial traumatic injuries and CCT is not always essential to evaluate vital traumatic changes.
abstract_id: PUBMED:12148847
Three-layer reconstruction for large defects of the anterior skull base. Objectives: To evaluate and discuss a three-layer rigid reconstruction technique for large anterior skull base defects.
Study Design: Prospective, nonrandomized, non-blinded.
Setting: Tertiary teaching medical center.
Methods: Twenty consecutive patients underwent craniofacial resection for a variety of pathology. All patients had large anterior cranial base defects involving the cribriform plate, fovea ethmoidalis, and medial portion of the roof of the orbit at least on one side. A few patients had more extensive defects involving both roof of the orbits, planum sphenoidale, and bones of the upper third of the face. The defects were reconstructed with a three-layer technique. A watertight seal was obtained with a pericranial flap separating the neurocranium from the viscerocranium. Rigid support was provided by bone grafts fixed to a titanium mesh, anchored laterally to the orbital roofs. All patients had a computed tomography scan of the skull on the first or second postoperative day. Patients were observed for immediate and long-term postoperative complications after such reconstruction.
Results: Postoperative computed tomography scans showed small pneumocephalus in all patients. It resolved spontaneously and did not produce neurologic deficits in any patient. There was no cerebrospinal fluid leak, hematoma, or infection. On long-term follow-up, exposures of bone graft or mesh, brain herniation, or transmission of brain pulsation to the eyes were not observed in any patient.
Conclusions: Three-layer reconstruction using bone grafts, titanium mesh, and pericranial flap provides an alternative technique for repair of large anterior cranial base defects. It is safe and effective, and provides rigid protection to the brain.
abstract_id: PUBMED:36751360
Brain herniation on computed tomography is a poor predictor of whether patients with a devastating brain injury can be confirmed dead using neurological criteria. Background: It is unclear if the presence of compartmental brain herniation on neuroimaging should be a prerequisite to the clinical confirmation of death using neurological criteria. The World Brain Death Project has posed this as a research question.
Methods: The final computed tomography of the head scans before death of 164 consecutive patients confirmed dead using neurological criteria and 41 patients with devastating brain injury who died following withdrawal of life sustaining treatment were assessed by a neuroradiologist to compare the incidence of herniation and other features of cerebral swelling.
Results: There was no difference in the incidence of herniation in patients confirmed dead using neurological criteria and those with devastating brain injury (79% vs 76%, OR 1.23, 95% CI 0.56-2.67). The sensitivity and specificity of brain herniation in patients confirmed dead using neurological criteria were 79% and 24%, respectively. The positive and negative predictive values were 81% and 23%, respectively. The most sensitive computed tomography of the head findings for death using neurological criteria were diffuse sulcal effacement (93%) and basal cistern effacement (91%), and the most specific finding was loss of grey-white differentiation (80%). The only features with a significantly different incidence between the death using neurological criteria group and the devastating brain injury group were loss of grey-white differentiation (46% vs 20%, OR 3.56, 95% CI 1.55-8.17) and presence of contralateral ventricular dilatation (24% vs 44%, OR 0.41, 95% CI 0.20-0.84).
Conclusions: Neuroimaging is essential in establishing the cause of death using neurological criteria. However, the presence of brain herniation or other signs of cerebral swelling are poor predictors of whether a patient will satisfy the clinical criteria for death using neurological criteria or not. The decision to test must remain a clinical one.
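The sensitivity, specificity and predictive values reported above follow from a standard two-by-two table. The sketch below reconstructs approximate cell counts from the percentages given in the abstract (herniation in 79% of 164 patients confirmed dead using neurological criteria and in 76% of 41 patients with devastating brain injury); the exact counts are not reported, so the table is an approximation rather than the authors' data.

```python
# Reconstruct an approximate 2x2 table from the abstract's percentages.
# Columns: death confirmed by neurological criteria (DNC) vs devastating
# brain injury (DBI) managed by withdrawal of life-sustaining treatment.
dnc_total, dbi_total = 164, 41
dnc_herniation = round(0.79 * dnc_total)   # ~130 with herniation
dbi_herniation = round(0.76 * dbi_total)   # ~31 with herniation

tp = dnc_herniation                  # herniation present, DNC confirmed
fn = dnc_total - dnc_herniation      # herniation absent, DNC confirmed
fp = dbi_herniation                  # herniation present, not DNC
tn = dbi_total - dbi_herniation      # herniation absent, not DNC

sensitivity = tp / (tp + fn)         # ~0.79
specificity = tn / (tn + fp)         # ~0.24
ppv = tp / (tp + fp)                 # ~0.81
npv = tn / (tn + fn)                 # ~0.23
print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```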
abstract_id: PUBMED:17663954
Prognostic capacity of brain herniation signs in patients with structural neurological injury. Objective: To determine whether the usual mortality prediction systems (APACHE and SAPS) can be complemented by cranial computed tomography (CT) brain herniation findings in patients with structural neurological involvement.
Design: Prospective cohort study.
Setting: Trauma ICU in university hospital.
Patients: One hundred and fifty five patients admitted to ICU in 2003 with cranial trauma or acute stroke.
Main Variables Of Interest: Data were collected on age, diagnosis, mortality, admission cranial CT findings and on APACHE II, APACHE III and SAPS II scores.
Results: Mean age was 47.8 +/- 19.4 years; APACHE II, 17.1 +/- 7.2 points; SAPS II, 43.7 +/- 17.7 points; and APACHE III, 55.8 +/- 29.7 points. Hospital mortality was 36% and mortality predicted by SAPS II was 38%, by APACHE II 30% and by APACHE III 36%. The 56 non-survivors showed greater midline shift on cranial CT scan versus survivors (4.2 +/- 5.5 vs. 1.6 +/- 3.22 mm, p = 0.002) and higher severity as assessed by SAPS II, APACHE II and APACHE III. The mortality rate was significantly higher in patients with subfalcial herniation (61% vs. 30%, p < 0.001). In the multivariate logistic regression analysis, hospital mortality was associated with the likelihood of death according to APACHE III (OR 1.07; 95% CI: 1.05-1.09) and with presence of subfalcial herniation (OR 3.15; 95% CI: 1.07-9.25).
Conclusions: In critical care patients with structural neurological involvement, cranial CT signs of subfalcial herniation complement the prognostic information given by the usual severity indexes.
abstract_id: PUBMED:28965126
Low-Dose CCT to Exclude Contraindications to Lumbar Puncture: Benefits and Limitations. Background: Low-dose cranial computed tomography (LD-CCT) based on iterative reconstruction has been shown to have sufficient image quality to assess cerebrospinal fluid (CSF) spaces and midline structures but not to exclude subtle parenchymal pathologies. Patients without focal neurological deficits often undergo CCT before lumbar puncture (LP) to exclude contraindications to LP including brain herniation or increased CSF pressure. We performed LD-CCT to assess if image quality is appropriate for this indication.
Methods: A total of 58 LD-CCT scans (220 mA/120 kV) of patients before LP were retrospectively evaluated and compared to 79 normal standard-dose cranial computed tomography (SD-CCT) scans (350 mA/120 kV). Iterative reconstruction used for both dose levels was increased by one factor for LD-CCT. We assessed the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and dose estimates, and diagnostic image quality was scored by two raters independently. The significance level was set at p < 0.05.
Results: The inner and outer CSF spaces except the sulci were equally well depicted by the LD-CCT and SD-CCT; however, depiction of the subtle density differences of the brain parenchyma and the sulci was significantly worse in the LD-CCT (p < 0.0001). The SNR in gray matter (9.35 vs. 10.61, p < 0.05) and white matter (7.23 vs. 8.15, p < 0.001) was significantly lower in LD-CCT than in SD-CCT, with significantly lower dose estimates (1.04 vs. 1.69 mSv, respectively; p < 0.0001).
Conclusion: The use of LD-CCT with a dose reduction of almost 50% is sufficient to exclude contraindications to LP; however, LD-CCT cannot exclude subtle parenchymal pathologies. Therefore, in patients with suspected parenchymal pathology, SD-CCT is still the method of choice.
abstract_id: PUBMED:37577024
The impact of oral anticoagulants on the characteristics of subdural hematomas and other brain lesions in patients with traumatic brain injury. Background And Aim: The aim of the study was to determine the impact of prior anticoagulant treatment on the characteristics of intracranial hematomas.
Methods: We included in this retrospective study 135 patients who were diagnosed with subdural hematoma in the context of traumatic brain injury. We recorded the demographic and clinical data, the paraclinical examinations and the characteristics of subdural hematoma evidenced by preoperative computed tomography (CT). We also reported the other brain injuries, entailed by primary and secondary lesions, as described by CT.
Results: Anticoagulation therapy was recorded in 35 patients at the moment of diagnosis. Acute subdural hematoma was recorded in 89 (65.9%) patients; 21 (60%) of these had anticoagulation therapy on admission. There were 46 (34.1%) patients with chronic subdural hematoma; 14 (40%) of these were on anticoagulant therapy. The midline shift was significantly greater in patients with anticoagulation therapy. The thickness of the subdural hematoma was significantly higher in patients with anticoagulation. We did not find any significant association between the presence of anticoagulation therapy and the other brain lesions (cranial fracture, extradural hematoma, intraparenchymal hematoma, intracranial hypertension, brain herniation, brain swelling).
Conclusion: The study showed that anticoagulants significantly influence some neuroimaging aspects of subdural hematoma in head trauma.
Answer: Yes, patients with brain herniation on cranial computed tomography (CT) can have a normal neurologic exam. A study analyzing the National Emergency X-Radiography Utilization Study (NEXUS) II cranial CT database found that out of 161 patients with CT-diagnosed frank herniation, 3 (1.9%) had no neurologic deficit. Additionally, among 91 patients with significant brain shift but no herniation, 4 (4.4%) had no neurologic deficit (PUBMED:19076104). Another study focusing on patients with subarachnoid hemorrhage (SAH) and a normal neurologic examination found that 4 out of 73 (5%) had evidence of brain herniation or midline shift on cranial CT, including three (4%) with herniation (PUBMED:20370782). These findings suggest that a small number of patients may present with significant brain shift or brain herniation on cranial CT while maintaining a normal neurologic status. |
Instruction: Does patient health and hysterectomy status influence cervical cancer screening in older women?
Abstracts:
abstract_id: PUBMED:18784967
Does patient health and hysterectomy status influence cervical cancer screening in older women? Background: Decisions to screen older patients for cancer are complicated by the fact that aging populations are heterogeneous with respect to life expectancy.
Objective: To examine national trends in the association between cervical cancer screening and age, health and hysterectomy status.
Design And Participants: Cross-sectional data from the 1993, 1998, 2000, and 2005 National Health Interview Surveys (NHIS) were used to examine trends in screening for women age 35-64 and 65+ years of age. We investigated whether health is associated with Pap testing among older women using the 2005 NHIS (N = 3,073). We excluded women with a history of cervical cancer or who had their last Pap because of a problem.
Measurements: The dependent variable was having a Pap test within the past 3 years. Independent variables included three measures of respondent health (the Charlson comorbidity index (CCI), general health status and having a chronic disability), hysterectomy status and sociodemographic factors.
Main Results: NHIS data showed a consistent pattern of lower Pap use among older women (65+) compared to younger women regardless of hysterectomy status. Screening also was lower among older women who reported being in fair/poor health, having a chronic disability, or a higher CCI score (4+). Multivariate models showed that over 50% of older women reporting poor health status or a chronic disability and 47% with a hysterectomy still had a recent Pap.
Conclusions: Though age, health and hysterectomy status appear to influence Pap test use, current national data suggest that there still may be overutilization and inappropriate screening of older women.
abstract_id: PUBMED:32155360
Prevalence of Inadequate Cervical Cancer Screening in Low-Income Older Women. Objective: At age 65 years, cervical cancer screening is not recommended in women with an adequate history of negative screening tests in the previous 10 years if they do not have other high-risk factors for cervical cancer. The purpose of this study was to assess the proportion of older low-income women at a safety net urban hospital system without other risk factors for cervical cancer who should have cervical cancer screening because of an inadequate screening history, and to evaluate if they were triaged appropriately. Materials and Methods: Medical records from 200 women 65 years and older at the Gynecology clinic of John H. Stroger Hospital of Cook County were evaluated for adequate cervical cancer screening or hysterectomy to see if they could stop screening. Charts were reviewed to see if a screen was performed, and the results of that test and associated biopsies. Data using cytology alone and the cytology/human papillomavirus cotest were compared. Chi-square test was used. Results: Of 200 women included, the median age was 68.5 years, range 65-93 years. Of these women, 81 (40.5%) did not need testing because of adequate screening or hysterectomy for benign indications. There were 119 (59.5%) women who needed to continue testing because of inadequate screening. Of these women, 46 (38.7%) did not have appropriate testing carried out. Of 73 correctly screened women, 16 (21.9%) required biopsies, of which 11 demonstrated high-grade lesions or cancers. Conclusions: Many older women, especially low-income women, need to continue screening for cervical cancer because of inadequate screening histories. This is a group at increased risk for cervical cancer, and it is imperative that clinicians evaluate previous test results before exiting a woman from screening at age 65 years.
abstract_id: PUBMED:31678587
Three large-scale surveys highlight the complexity of cervical cancer under-screening among women 45-65 years of age in the United States. Background: Large-scale United States (US) surveys guide efforts to maximize the health of its population. Cervical cancer screening is an effective preventive measure with a consistent question format among surveys. The aim of this study is to describe the predictors of cervical cancer screening in older women as reported by three national surveys.
Methods: The Behavioral Risk Factor Surveillance System (BRFSS 2016), the Health Information National Trends Survey (HINTS 2017), and the Health Center Patient Survey (HCPS 2014) were analyzed with univariate and multivariate analyses. We defined the cohort as women, without hysterectomy, who were 45-65 years old. The primary outcome was cytology within the last 3 years.
Results: Overall, Pap screening rates were 71% (BRFSS), 79% (HINTS) and 66% (HCPS), among 41,657, 740 and 1571 women, respectively. BRFSS showed that women 60-64 years old (aPR=0.88, 95% CI: 0.85, 0.91), and in rural locations (aPR=0.95, 95% CI: 0.92, 0.98) were significantly less likely to report cervical cancer screening than women 45-49 years old or in urban locations. Compared to less than high school, women with more education reported more screening (aPR=1.20, 95% CI: 1.13, 1.28), and those with insurance had higher screening rates than the uninsured (aPR=1.47, 95% CI: 1.33, 1.62). HINTS and HCPS also showed these trends.
Conclusions: All three surveys show that cervical cancer screening rates in women 45-65 years are insufficient to reduce cervical cancer incidence. Insurance is the major positive predictor of screening, followed by younger age and more education. Race/ethnicity are variable predictors depending on the survey.
abstract_id: PUBMED:15226035
Cervical cancer screening among U.S. women: analyses of the 2000 National Health Interview Survey. Background: Cervical cancer screening is not fully utilized among all groups of women in the United States, especially women without access to health care and older women.
Methods: Papanicolaou (Pap) test use among U.S. women age 18 and older is examined using data from the 2000 National Health Interview Survey (NHIS).
Results: Among women who had not had a hysterectomy (n = 13,745), 83% reported having had a Pap test within the past 3 years. Logistic regression analyses showed that women with no contact with a primary care provider in the past year were very unlikely to have reported a recent Pap test. Other characteristics associated with lower rates of Pap test use included lacking a usual source of care, low family income, low educational attainment, and being unmarried. Having no health insurance coverage was associated with lower Pap test use among women under 65. Despite higher insurance coverage, being age 65 and older was associated with low use. Rates of recent Pap test were higher among African-American women.
Conclusions: Policies to generalize insurance coverage and a usual source of health care would likely increase use of Pap testing. Also needed are health system changes such as automated reminders to assist health care providers implement appropriate screening. Renewed efforts by physicians and targeted public health messages are needed to improve screening among older women without a prior Pap test.
abstract_id: PUBMED:9275270
Cervical screening of Arabic-speaking women in Australian general practice. Objective: To determine recency and predictors of cervical screening among Arabic-speaking women in Sydney, Australia.
Method: A consecutive sample of Arabic-speaking women, attending 20 Arabic-speaking general practitioners, was asked to complete a self administered health risk questionnaire available in Arabic or English which included three questions about cervical screening knowledge and behaviour.
Results: Of 756 eligible women, 526 (70%) returned completed questionnaires. Of these, 69 (13%) did not know what a cervical smear was. Sixteen per cent of overseas-born compared with 2% of Australian-born women at risk had not heard of a cervical smear. Women were defined as being at risk of cervical cancer if they had both been married and not had a hysterectomy. Of 318 women at risk for cervical cancer who knew what a cervical smear was, 66% had had a smear in the last two years, a further 7% were attending for one that day, while 11% had not had a smear for at least two years, 9% had never had one and 7% did not answer/could not remember. Religion, age, and residence in Australia for more than 10 years were significant and independent predictors of screening after adjustment for other variables in a simultaneous logistic regression model (P = 0.002, P = 0.002, and P = 0.040 respectively).
Conclusion: As only 73% of women at risk had been screened in the last two years, including women attending on the day and 9% had never been screened, Arabic-speaking women should be a priority for public campaigns, particularly Muslim and older women. Studies to evaluate the effectiveness and acceptability of reminders by ethnic general practitioners are recommended.
abstract_id: PUBMED:23317023
Health maintenance in women. The health maintenance examination is an opportunity to focus on disease prevention and health promotion. The patient history should include screening for tobacco use, alcohol misuse, intimate partner violence, and depression. Premenopausal women should receive preconception counseling and contraception as needed, and all women planning or capable of pregnancy should take 400 to 800 mcg of folic acid per day. High-risk sexually active women should be counseled on reducing the risk of sexually transmitted infections, and screened for chlamydia, gonorrhea, and syphilis. All women should be screened for human immunodeficiency virus. Adults should be screened for obesity and elevated blood pressure. Women 20 years and older should be screened for dyslipidemia if they are at increased risk of coronary heart disease. Those with sustained blood pressure greater than 135/80 mm Hg should be screened for type 2 diabetes mellitus. Women 55 to 79 years of age should take 75 mg of aspirin per day when the benefits of stroke reduction outweigh the increased risk of gastrointestinal hemorrhage. Women should begin cervical cancer screening by Papanicolaou test at 21 years of age, and if results have been normal, screening may be discontinued at 65 years of age or after total hysterectomy. Breast cancer screening with mammography may be considered in women 40 to 49 years of age based on patients' values, and potential benefits and harms. Mammography is recommended biennially in women 50 to 74 years of age. Women should be screened for colorectal cancer from 50 to 75 years of age. Osteoporosis screening is recommended in women 65 years and older, and in younger women with a similar risk of fracture. Adults should be immunized at recommended intervals according to guidelines from the Centers for Disease Control and Prevention.
abstract_id: PUBMED:11817921
Breast and cervical cancer screening practices among Hispanic women in the United States and Puerto Rico, 1998-1999. Background: Results from recent studies suggest that Hispanic women in the United States may underuse cancer screening tests and face important barriers to screening.
Methods: We examined the breast and cervical cancer screening practices of Hispanic women in 50 states, the District of Columbia, and Puerto Rico from 1998 through 1999 by using data from the Behavioral Risk Factor Surveillance System.
Results: About 68.2% (95% confidence interval [CI] = 66.3 to 70.1%) of 7,253 women in this sample aged 40 years or older had received a mammogram in the past 2 years. About 81.4% (95% CI = 80.3 to 82.5%) of 12,350 women aged 18 years or older who had not undergone a hysterectomy had received a Papanicolaou test in the past 3 years. Women with lower incomes and those with less education were less likely to be screened. Women who had seen a physician in the past year and those with health insurance coverage were much more likely to have been screened. For example, among those Hispanic women aged 40 years or older who had any health insurance coverage (n = 6,063), 72.7% (95% CI 70.7-74.6%) had had a mammogram in the past 2 years compared with only 54.8% (95% CI 48.7-61.0%) of women without health insurance coverage (n = 1,184).
Conclusions: These results underscore the need for continued efforts to ensure that Hispanic women who are medically underserved have access to cancer screening services.
abstract_id: PUBMED:24447699
Predictors of non-participation in cervical screening in Denmark. Purpose: The aims of this study were to identify demographic and socio-economic predictors of non-participation in cervical screening in Denmark, and to evaluate the influence of health care use on screening participation.
Methods: A population based register study was undertaken using data from the Central Population Register, the national Patobank, and Statistics Denmark. The study included women aged 25-54 years on 1st of January 2002, living in Denmark during the next 5 years, and without a history of total hysterectomy, N=1,052,447. Independent variables included age, civil status, nationality, level of education, and use of health care. Associations with non-participation in screening were determined with logistic regression.
Results: Main predictors of non-participation were limited or no contact with dental services (odds ratio (OR)=2.36), general practitioners (OR=1.75), and high age (OR=1.98). Other important factors for non-participation were primary school education only (OR=1.53), not being married (OR=1.49), and foreign nationality (OR=1.32).
Conclusion: A 1.5- to 2-fold difference in non-participation in cervical screening in Denmark was found across various population sub-groups. Increased screening compliance among women with primary school education only, and limited or no use of primary health care services in general, could potentially diminish the current social inequalities in cervical cancer incidence, and thus decrease the overall high incidence of this disease in Denmark.
abstract_id: PUBMED:34019928
Impact of screening between the ages of 60 and 64 on cumulative rates of cervical cancer to age 84y by screening history at ages 50 to 59: A population-based case-control study. There is little empirical data on the absolute benefit of cervical screening between ages 60-64y on subsequent cancer risk. We estimate the incidence of cervical cancer up to age 84y in women with and without a cervical cytology test at age 60-64y, by screening histories aged 50-59y. The current study is a population-based case-control study of women born between 1928 and 1956 and aged 60-84y between 2007 and 2018. We included all such women diagnosed with cervical cancer in England and an age-matched random sample without cancer. Women with a hysterectomy were excluded. Exposure was cervical cytology between ages 50-64y. The main outcome was 25y cumulative risk of cervical cancer between ages 60-84y. We found that eight in every 1000 (8.40, 95%CI: 7.78 to 9.07) women without a screening test between age 50-64y develop cervical cancer between the ages of 60-84y. The risk is half: 3.46 per 1000 (95%CI: 2.75 to 4.36) among women with a test between age 60-64y but no cervical screening test at age 50-59y. The absolute difference in risk is equivalent to one fewer cancer for every 202 such women screened. The highest risk (10.01, 95%CI:6.70 to 14.95) was among women with abnormal screening at ages 50-59y and no tests 60-64y. The 25y risk among women with a screening test every five years between age 50-64y was just under two per 1000 (1.59, 95%CI:1.42 to 1.78). Results suggest the upper age of screening should be dependent on previous screening participation and results.
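The statement that screening is "equivalent to one fewer cancer for every 202 such women screened" is the reciprocal of the absolute risk difference between the two groups (8.40 versus 3.46 cancers per 1000 women over 25 years). A minimal check of that arithmetic:

```python
# 25-year cumulative risks quoted in the abstract, expressed as proportions.
risk_unscreened = 8.40 / 1000   # no cytology test at ages 50-64y
risk_screened   = 3.46 / 1000   # test at ages 60-64y only

risk_difference = risk_unscreened - risk_screened    # ~0.00494
number_needed_to_screen = 1 / risk_difference        # ~202
print(round(number_needed_to_screen))                # 202
```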
abstract_id: PUBMED:33323276
Differences in cervical cancer screening and follow-up for black and white women in the United States. Objective: To study differences in screening adherence and follow-up after an abnormal Pap test in Non-Hispanic Black (Black) and Non-Hispanic White (White) women.
Methods: An observational cohort study using 2010 National Health Interview Survey cancer module to examine HPV knowledge, screening behavior, and follow-up to abnormal Pap test in Black and White women 18 years of age or older without a hysterectomy. We fit logistic regression models to examine associations between race and primary outcome variables including: HPV awareness, Pap test in the last three years, provider recommended Pap test, received Pap test results, had an abnormal Pap test, recommended follow-up, and adhered to the recommendation for follow-up.
Results: Analyzing data for 7509 women, Black women had lower odds ratios [OR] for: 1) HPV awareness (71% vs 83%; OR = 0.42; 95% CI = 0.36-0.49); 2) reporting Pap screening was recommended (59% vs 64%; OR = 0.76; 95% CI = 0.66-0.88), and 3) acknowledging receipt of Pap results (92% vs 94%; OR = 0.64; 95% CI = 0.49-0.83). Group differences persisted after covariate adjustment. In adjusted models, Black women had higher odds of reporting recent Pap screening (84% vs 77%; OR = 1.7; 95% CI = 1.42-2.03), but reported lower odds of receiving a follow-up recommendation subsequent to an abnormal test (78% vs 87%; OR = 0.54; 95% CI = 0.31-0.95).
Conclusion: Black women reported higher cervical cancer screening adherence but lower rates of being informed of an abnormal Pap test and contacted for follow-up treatment. We recommend a multilevel approach to deliver culturally appropriate education and communication for patients, physicians, clinicians in training, and clinic level ancillary staff.
Answer: Yes, patient health and hysterectomy status do influence cervical cancer screening in older women. National Health Interview Surveys data showed a consistent pattern of lower Pap test use among older women (65+) compared to younger women regardless of hysterectomy status. Screening was also lower among older women who reported being in fair/poor health, having a chronic disability, or a higher Charlson comorbidity index (CCI) score. Despite this, over 50% of older women reporting poor health status or a chronic disability and 47% with a hysterectomy still had a recent Pap test, suggesting potential overutilization and inappropriate screening of older women (PUBMED:18784967).
Additionally, many older women, especially those with low income, need to continue screening for cervical cancer due to inadequate screening histories. This is a group at increased risk for cervical cancer, and it is imperative that clinicians evaluate previous test results before exiting a woman from screening at age 65 years (PUBMED:32155360).
Moreover, large scale surveys in the United States have highlighted the complexity of cervical cancer under-screening among women aged 45-65 years. Factors such as insurance, younger age, and more education were major positive predictors of screening, while race/ethnicity were variable predictors depending on the survey (PUBMED:31678587).
In summary, both health status and hysterectomy status are associated with the likelihood of older women receiving cervical cancer screening, with a trend towards lower screening rates among those with poorer health or those who have had a hysterectomy. However, there is also evidence of continued screening in some older women with poor health or post-hysterectomy, which may not always be appropriate according to guidelines. |
Instruction: Does breast size modify the association between mammographic density and breast cancer risk?
Abstracts:
abstract_id: PUBMED:18349279
Does breast size modify the association between mammographic density and breast cancer risk? Background: Both the absolute and the percent of mammographic density are strong and independent risk factors for breast cancer. Previously, we showed that the association between mammographic density and breast cancer risk tended to be weaker in African American than in White U.S. women. Because African American women have a larger breast size, we assessed whether the association between mammographic density and breast cancer was less apparent in large than in small breasts.
Methods: We assessed mammographic density on mammograms from 348 African American and 507 White women, 479 breast cancer patients and 376 control subjects, from a case-control study conducted in Los Angeles County. We estimated odds ratios (OR) for breast cancer with increasing mammographic density, and the analyses were stratified by mammographic breast area.
Results: Median breast size was 168.4 cm2 in African American women and 121.7 cm2 in White women (P for difference <0.001). For absolute density, adjusted ORs (95% confidence intervals) per increase of 10 cm2 were 1.32 (1.13-1.54), 1.14 (1.03-1.26), and 1.02 (0.98-1.07) in the first, second, and third tertiles of breast area, respectively (P for effect modification by breast area = 0.005). The results for percent density were similar although weaker; adjusted ORs per 10% increase (absolute value) in percent density were 1.22 (1.05-1.40), 1.22 (1.06-1.41), and 1.03 (0.90-1.18; P for effect modification by breast area = 0.34).
Conclusion: Our results indicate that the association between mammographic density and breast cancer may be weaker in women with larger breasts.
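The effect modification tested above (P for interaction by breast area = 0.005) is usually assessed by adding a product term between the density measure and breast-area tertile to the logistic model. The sketch below illustrates that approach; the column names (case, dense_area_cm2, breast_area_tert, age, bmi) and the file name are hypothetical, and this is not the authors' actual analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per woman; all column names below are hypothetical:
#   case              0/1 breast cancer status
#   dense_area_cm2    absolute mammographic density
#   breast_area_tert  tertile of mammographic breast area (categorical)
#   age, bmi          example adjustment covariates
df = pd.read_csv("density_case_control.csv")  # hypothetical file

# The product term tests whether the per-10 cm2 odds ratio for density
# differs across tertiles of breast area (effect modification).
model = smf.logit(
    "case ~ I(dense_area_cm2 / 10) * C(breast_area_tert) + age + bmi",
    data=df,
).fit()

print(np.exp(model.params))  # odds ratios, including the interaction terms
```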
abstract_id: PUBMED:32643449
Mammographic density and breast cancer screening. Mammographic density, which is determined by the relative amounts of fibroglandular tissue and fat in the breast, varies between women. Mammographic density is associated with a range of factors, including age and body mass index. The description of mammographic density has been transformed by the digitalization of mammography, which has allowed automation of the assessment of mammographic density, rather than using visual inspection by a radiologist. High mammographic density is important because it is associated with reduced sensitivity for the detection of breast cancer at the time of mammographic screening. High mammographic density is also associated with an elevated risk of developing breast cancer. Mammographic density appears to be on the causal pathway for some breast cancer risk factors, but not others. Mammographic density needs to be considered in the context of a woman's background risk of breast cancer. There is intense debate about the use of supplementary imaging for women with high mammographic density. Should supplementary imaging be used in women with high mammographic density and a clear mammogram? If so, what modalities of imaging should be used and in which women? Trials are underway to address the risks and benefits of supplementary imaging.
abstract_id: PUBMED:30977028
The association between mammographic density and breast cancer risk in Western Australian Aboriginal women. Purpose: Mammographic density is an established breast cancer risk factor within many ethnically different populations. The distribution of mammographic density has been shown to be significantly lower in Western Australian Aboriginal women compared to age- and screening location-matched non-Aboriginal women. Whether mammographic density is a predictor of breast cancer risk in Aboriginal women is unknown.
Methods: We measured mammographic density from 103 Aboriginal breast cancer cases and 327 Aboriginal controls, 341 non-Aboriginal cases, and 333 non-Aboriginal controls selected from the BreastScreen Western Australia database using the Cumulus software program. Logistic regression was used to examine the associations of percentage dense area and absolute dense area with breast cancer risk for Aboriginal and non-Aboriginal women separately, adjusting for covariates.
Results: Both percentage density and absolute dense area were strongly predictive of risk in Aboriginal women with odds per adjusted standard deviation (OPERAS) of 1.36 (95% CI 1.09, 1.69) and 1.36 (95% CI 1.08, 1.71), respectively. For non-Aboriginal women, the OPERAS were 1.22 (95% CI 1.03, 1.46) and 1.26 (95% CI 1.05, 1.50), respectively.
Conclusions: Whilst mean mammographic density for Aboriginal women is lower than non-Aboriginal women, density measures are still higher in Aboriginal women with breast cancer compared to Aboriginal women without breast cancer. Thus, mammographic density strongly predicts breast cancer risk in Aboriginal women. Future efforts to predict breast cancer risk using mammographic density or standardize risk-associated mammographic density measures should take into account Aboriginal status when applicable.
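The "odds per adjusted standard deviation" (OPERA) reported above is the odds ratio associated with a one-standard-deviation increase in the density measure after covariate adjustment. The sketch below shows a simplified version of that calculation with hypothetical column and file names; the published OPERA approach additionally standardises the risk factor after adjusting it for the same covariates, which is omitted here for brevity, and this is not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aboriginal_density_study.csv")  # hypothetical file

# Standardise percentage dense area so the logistic coefficient is
# interpretable per standard deviation of the density measure.
df["pd_z"] = (df["percent_density"] - df["percent_density"].mean()) / df["percent_density"].std()

# 'case', 'age' and 'bmi' are hypothetical column names for case status
# and adjustment covariates.
model = smf.logit("case ~ pd_z + age + bmi", data=df).fit()

opera = np.exp(model.params["pd_z"])
ci_low, ci_high = np.exp(model.conf_int().loc["pd_z"])
print(f"OR per SD of percent density: {opera:.2f} ({ci_low:.2f}-{ci_high:.2f})")
```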
abstract_id: PUBMED:35836268
The association of age at menarche and adult height with mammographic density in the International Consortium of Mammographic Density. Background: Early age at menarche and tall stature are associated with increased breast cancer risk. We examined whether these associations were also positively associated with mammographic density, a strong marker of breast cancer risk.
Methods: Participants were 10,681 breast-cancer-free women from 22 countries in the International Consortium of Mammographic Density, each with centrally assessed mammographic density and a common set of epidemiologic data. Study periods for the 27 studies ranged from 1987 to 2014. Multi-level linear regression models estimated changes in square-root per cent density (√PD) and dense area (√DA) associated with age at menarche and adult height in pooled analyses and population-specific meta-analyses. Models were adjusted for age at mammogram, body mass index, menopausal status, hormone therapy use, mammography view and type, mammographic density assessor, parity and height/age at menarche.
Results: In pooled analyses, later age at menarche was associated with higher per cent density (β√PD = 0.023, SE = 0.008, P = 0.003) and larger dense area (β√DA = 0.032, SE = 0.010, P = 0.002). Taller women had larger dense area (β√DA = 0.069, SE = 0.028, P = 0.012) and higher per cent density (β√PD = 0.044, SE = 0.023, P = 0.054), although the observed effect on per cent density depended upon the adjustment used for body size. Similar overall effect estimates were observed in meta-analyses across population groups.
Conclusions: In one of the largest international studies to date, later age at menarche was positively associated with mammographic density. This is in contrast to its association with breast cancer risk, providing little evidence of mediation. Increased height was also positively associated with mammographic density, particularly dense area. These results suggest a complex relationship between growth and development, mammographic density and breast cancer risk. Future studies should evaluate the potential mediation of the breast cancer effects of taller stature through absolute breast density.
abstract_id: PUBMED:36183671
Mammographic breast density and the risk of breast cancer: A systematic review and meta-analysis. Objectives: Mammographic density is a well-defined risk factor for breast cancer and having extremely dense breast tissue is associated with a one- to six-fold increased risk of breast cancer. However, it is questioned whether this increased risk estimate is applicable to current breast density classification methods. Therefore, the aim of this study was to further investigate and clarify the association between mammographic density and breast cancer risk based on current literature.
Methods: Medline, Embase and Web of Science were systematically searched for articles published since 2013 that used the BI-RADS lexicon 5th edition and incorporated data on digital mammography. Crude and maximally confounder-adjusted data were pooled in odds ratios (ORs) using random-effects models. Heterogeneity regarding breast cancer risks was investigated using the I2 statistic, stratified analyses and sensitivity analyses.
Results: Nine observational studies were included. Having extremely dense breast tissue (BI-RADS density D) resulted in a 2.11-fold (95% CI 1.84-2.42) increased breast cancer risk compared to having scattered dense breast tissue (BI-RADS density B). Sensitivity analysis showed that when only using data that had adjusted for age and BMI, the breast cancer risk was 1.83-fold (95% CI 1.52-2.21) increased. Both results were statistically significant and homogenous.
Conclusions: Mammographic breast density BI-RADS D is associated with an approximately two-fold increased risk of breast cancer compared to having BI-RADS density B in general population women. This is a novel and lower risk estimate compared to previously reported and might be explained due to the use of digital mammography and BI-RADS lexicon 5th edition.
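As background to the pooling described above, study-level odds ratios are usually combined on the log scale with a random-effects model. The sketch below (not from the review itself) uses the DerSimonian-Laird estimator, which is one common choice; the review does not state which estimator was used, and the function name and inputs are illustrative.

    import numpy as np

    def pool_random_effects(odds_ratios, ci_lower, ci_upper):
        log_or = np.log(odds_ratios)
        se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)   # SE recovered from the 95% CI width
        w = 1.0 / se ** 2                                         # inverse-variance weights
        fixed = np.sum(w * log_or) / np.sum(w)
        q = np.sum(w * (log_or - fixed) ** 2)                     # Cochran's Q
        df = len(log_or) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)                             # between-study variance
        w_re = 1.0 / (se ** 2 + tau2)                             # random-effects weights
        pooled = np.sum(w_re * log_or) / np.sum(w_re)
        pooled_se = np.sqrt(1.0 / np.sum(w_re))
        i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0     # I^2 heterogeneity statistic
        return (np.exp(pooled),
                np.exp(pooled - 1.96 * pooled_se),
                np.exp(pooled + 1.96 * pooled_se),
                i2)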
abstract_id: PUBMED:33312342
Mammographic density: intersection of advocacy, science, and clinical practice. Purpose: Here we aim to review the association between mammographic density, collagen structure and breast cancer risk.
Findings: While mammographic density is a strong predictor of breast cancer risk in populations, studies by Boyd show that mammographic density does not predict breast cancer risk in individuals. Mammographic density is affected by age, parity, menopausal status, race/ethnicity, and body mass index (BMI). New studies that normalize mammographic density to BMI may provide a more accurate way to compare mammographic density in women of diverse race and ethnicity. Preclinical and tissue-based studies have investigated the role of collagen composition and structure in predicting breast cancer risk. There is emerging evidence that collagen structure may activate signaling pathways associated with aggressive breast cancer biology.
Summary: Measurement of film mammographic density does not adequately capture the complex signaling that occurs in women with at-risk collagen. New ways to measure at-risk collagen potentially can provide a more accurate view of risk.
abstract_id: PUBMED:37558417
Risk Analysis of Breast Cancer by Using Bilateral Mammographic Density Differences: A Case-Control Study. The identification of risk factors helps radiologists assess the risk of breast cancer. Quantitative factors such as age and mammographic density are established risk factors for breast cancer. Asymmetric breast findings are frequently encountered during diagnostic mammography. The asymmetric area may indicate a developing mass in the early stage, causing a difference in mammographic density between the left and right sides. Therefore, this paper aims to propose a quantitative parameter named bilateral mammographic density difference (BMDD) for the quantification of breast asymmetry and to verify BMDD as a risk factor for breast cancer. To quantitatively evaluate breast asymmetry, we developed a semi-automatic method to estimate mammographic densities and calculate BMDD as the absolute difference between the left and right mammographic densities. A retrospective case-control study covering the period from July 2006 to October 2014 was then conducted to analyse breast cancer risk in association with BMDD. The study included 364 women diagnosed with breast cancer and 364 matched control patients. As a result, a significant difference in BMDD was found between cases and controls (P < 0.001) and the case-control study demonstrated that women with BMDD > 10% had a 2.4-fold higher risk of breast cancer (odds ratio, 2.4; 95% confidence interval, 1.3-4.5) than women with BMDD ≤ 10%. In addition, we demonstrated the positive association between BMDD and breast cancer risk among the subgroups with different ages and the Breast Imaging Reporting and Data System (BI-RADS) mammographic density categories. This study demonstrated that BMDD could be a potential risk factor for breast cancer.
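For illustration only (not part of the original study), the BMDD parameter defined above reduces to a simple absolute difference; the function name and the density values below are hypothetical.

    def bmdd(left_density_pct, right_density_pct):
        # Bilateral mammographic density difference: absolute difference between
        # the left and right percentage mammographic densities.
        return abs(left_density_pct - right_density_pct)

    # The study compared women above and below a 10% threshold.
    elevated_risk_group = bmdd(42.0, 28.5) > 10   # hypothetical values -> True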
abstract_id: PUBMED:34771552
Biological Mechanisms and Therapeutic Opportunities in Mammographic Density and Breast Cancer Risk. Mammographic density is an important risk factor for breast cancer; women with extremely dense breasts have a four to six fold increased risk of breast cancer compared to women with mostly fatty breasts, when matched with age and body mass index. High mammographic density is characterised by high proportions of stroma, containing fibroblasts, collagen and immune cells that suggest a pro-tumour inflammatory microenvironment. However, the biological mechanisms that drive increased mammographic density and the associated increased risk of breast cancer are not yet understood. Inflammatory factors such as monocyte chemotactic protein 1, peroxidase enzymes, transforming growth factor beta, and tumour necrosis factor alpha have been implicated in breast development as well as breast cancer risk, and also influence functions of stromal fibroblasts. Here, the current knowledge and understanding of the underlying biological mechanisms that lead to high mammographic density and the associated increased risk of breast cancer are reviewed, with particular consideration to potential immune factors that may contribute to this process.
abstract_id: PUBMED:28697417
Tumor characteristics and family history in relation to mammographic density and breast cancer: The French E3N cohort. Background: Mammographic density is a known heritable risk factor for breast cancer, but reports how tumor characteristics and family history may modify this association are inconsistent.
Methods: Dense and total breast areas were assessed using Cumulus™ from pre-diagnostic mammograms for 820 invasive breast cancer cases and 820 matched controls nested within the French E3N cohort study. To allow comparisons across models, percent mammographic density (PMD) was standardized to the distribution of the controls. Odds ratios (OR) and 95% confidence intervals (CI) of breast cancer risk for mammographic density were estimated by conditional logistic regression while adjusting for age and body mass index. Heterogeneity according to tumor characteristic and family history was assessed using stratified analyses.
Results: Overall, the OR per 1 SD for PMD was 1.50 (95% CI, 1.33-1.69). No evidence for significant heterogeneity by tumor size, lymph node status, grade, and hormone receptor status (estrogen, progesterone, and HER2) was detected. However, the association of PMD was stronger for women reporting a family history of breast cancer (FHBC) (OR per 1 SD = 2.25; 95% CI, 1.67-3.04) than in women reporting none (OR per 1 SD = 1.41; 95% CI, 1.24-1.60; P for heterogeneity = 0.002). Similarly, effect modification by FHBC was observed using categories of PMD (P for heterogeneity = 0.02), with respective ORs of 15.16 (95% CI, 4.23-54.28) vs. 3.14 (95% CI, 1.89-5.22) for ≥50% vs. <10% PMD.
Conclusions: The stronger association between mammographic density and breast cancer risk with a family history supports the hypothesis of shared genetic factors responsible for familial aggregation of breast cancer and the heritable component of mammographic density.
abstract_id: PUBMED:25258465
Percent Mammographic Density and Dense Area as Risk Factors for Breast Cancer. Purpose: Mammographic characteristics are known to be correlated to breast cancer risk. Percent mammographic density (PMD), as assessed by computer-assisted methods, is an established risk factor for breast cancer. Along with this assessment the absolute dense area (DA) of the breast is reported as well. The aim of this study was to assess the predictive value of DA concerning breast cancer risk in addition to other risk factors and in addition to PMD. Methods: We conducted a case-control study with hospital-based patients with a diagnosis of invasive breast cancer and healthy women as controls. A total of 561 patients and 376 controls with available mammographic density were included in this study. We describe the differences concerning the common risk factors BMI, parity status, use of hormone replacement therapy (HRT) and menopause between cases and controls and estimate the odds ratios for PMD and DA, adjusted for the mentioned risk factors. Furthermore, we compare the prediction models with each other to find out whether the addition of DA improves the model. Results: Mammographic density and DA were highly correlated with each other. Both variables were also correlated with the commonly known risk factors in the expected direction and strength; however, PMD (ρ = -0.56) was more strongly correlated with BMI than DA (ρ = -0.11). The group of women within the highest quartile of PMD had an OR of 2.12 (95 % CI: 1.25-3.62). This could not be seen for the fourth quartile concerning DA. However, the assessment of breast cancer risk could be improved by including DA in a prediction model in addition to common risk factors and PMD. Conclusions: The inclusion of the parameter DA into a prediction model for breast cancer in addition to established risk factors and PMD could improve the breast cancer risk assessment. As DA is measured together with PMD in the process of computer-assisted assessment of PMD, it might be considered as one additional breast cancer risk factor that is obtained from breast imaging.
Answer: Yes, breast size does modify the association between mammographic density and breast cancer risk. A study assessing mammographic density in African American and White women found that the association between mammographic density and breast cancer risk may be weaker in women with larger breasts. Specifically, the odds ratios for breast cancer with increasing mammographic density decreased across tertiles of breast area, indicating that the association was less apparent in larger breasts (PUBMED:18349279).
This finding is consistent with the understanding that mammographic density is a strong and independent risk factor for breast cancer, but its association with breast cancer risk can be influenced by various factors, including breast size. The study's results suggest that when assessing mammographic density as a risk factor for breast cancer, breast size should be taken into account, as the risk association may not be uniform across different breast sizes. |
Instruction: Is the APLS formula used to calculate weight-for-age applicable to a Trinidadian population?
Abstracts:
abstract_id: PUBMED:22856543
Is the APLS formula used to calculate weight-for-age applicable to a Trinidadian population? Background: In paediatric emergency medicine, estimation of weight in ill children can be performed in a variety of ways. Calculation using the 'APLS' formula (weight = [age + 4] × 2) is one very common method. Studies on its validity in developed countries suggest that it tends to under-estimate the weight of children, potentially leading to errors in drug and fluid administration. The formula is not validated in Trinidad and Tobago, where it is routinely used to calculate weight in paediatric resuscitation.
Methods: Over a six-week period in January 2009, all children one to five years old presenting to the Emergency Department were weighed. Their measured weights were compared to their estimated weights as calculated using the APLS formula, the Luscombe and Owens formula and a "best fit" formula derived (then simplified) from linear regression analysis of the measured weights.
Results: The APLS formula underestimated weight in all age groups with a mean difference of -1.4 kg (95% limits of agreement 5.0 to -7.8). The Luscombe and Owens formula was more accurate in predicting weight than the APLS formula, with a mean difference of -0.4 kg (95% limits of agreement 6.9 to -6.1%). Using linear regression analysis, and simplifying the derived equation, the best formula to describe weight and age was (weight = [2.5 x age] + 8). The percentage of children whose actual weight fell within 10% of the calculated weights using any of the three formulae was not significantly different.
Conclusions: The APLS formula slightly underestimates the weights of children in Trinidad, although this is less than in similar studies in developed countries. Both the Luscombe and Owens formula and the formula derived from the results of this study give a better estimate of the measured weight of children in Trinidad. However, the accuracy and precision of all three formulae were not significantly different from each other. It is recommended that the APLS formula should continue to be used to estimate the weight of children in resuscitation situations in Trinidad, as it is well known, easy to calculate and widely taught in this setting.
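For illustration only (not part of the original study), the three age-based estimates discussed above can be compared side by side. This is a minimal Python sketch; the function names are illustrative and age is assumed to be in whole years within the 1-5 year range studied.

    def apls_weight(age_years):
        # APLS formula: weight (kg) = (age + 4) x 2
        return (age_years + 4) * 2

    def luscombe_owens_weight(age_years):
        # Luscombe and Owens formula: weight (kg) = 3 x age + 7
        return 3 * age_years + 7

    def trinidad_best_fit_weight(age_years):
        # Simplified "best fit" formula from the Trinidad study: weight (kg) = 2.5 x age + 8
        return 2.5 * age_years + 8

    for age in range(1, 6):
        print(age, apls_weight(age), luscombe_owens_weight(age), trinidad_best_fit_weight(age))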
abstract_id: PUBMED:33371707
Estimation of weight based on age in Ecuadorian boys and girls: a validation of the APLS formula Introduction: In children, the use of therapeutic interventions, which include the administration of medications, is based on body weight. Objective: to validate the equations proposed by "Advanced Pediatric Life Support - APLS" in 2011 (APLS 1) and 2001 (APLS 2) to estimate weight in Ecuadorian girls and boys, considering their ethnic diversity and age groups. Methods: a cross-sectional study which included 21,735 girls and boys belonging to three ethnic groups: mestizo, indigenous, and other (white, black, and mulatto), with ages between 0 and 12 years, who participated in the ENSANUT-ECU study. Differences, Spearman's correlation, Bland-Altman graphs, and percentage error (PE) were calculated. Data were processed and analyzed using R. Results: APLS 1 tends to overestimate weight whereas APLS 2 underestimates it. The estimated weight bias was greater for the classical equation. The indigenous and "other" ethnic groups presented the highest differences with respect to measured weight. The differences between estimated weight and measured weight increased progressively with age. With APLS 1, the percentage of individuals with a PE > 10 % was greater than with APLS 2. Conclusions: APLS does not accurately estimate weight in the Ecuadorian pediatric population. The difference between estimated weight and measured weight is sensitive to ethnic and age differences.
abstract_id: PUBMED:20659877
Weight estimation in paediatrics: a comparison of the APLS formula and the formula 'Weight=3(age)+7'. Objectives: To gather data on the ages and weights of children aged between 1 and 16 years in order to assess the validity of the current weight estimation formula 'Weight(kg)=2(age+4)' and the newly derived formula 'Weight=3(age)+7'.
Design: Retrospective study using data collected from paediatric attendances at an emergency department (ED).
Setting: A large paediatric ED in a major UK city.
Patients: 93,827 children aged 1-16 years attending the ED between June 2003 and September 2008.
Main Outcome Measures: Percentage weight difference between the child's actual weight and the expected weight, the latter determined by 'Weight(kg)=2(age+4)' and by 'Weight(kg)=3(age)+7', in order to compare these two formulae.
Results: The weights of seriously ill children were recorded in only 20.5% of cases, necessitating a weight estimate in the remainder. The formula 'Weight=2(age+4)' underestimated children's weights by a mean of 33.4% (95% CI 33.2% to 33.6%) over the age range 1-16 years whereas the formula 'Weight=3(age)+7' provided a mean underestimate of 6.9% (95% CI 6.8% to 7.1%). The formula 'Weight=3(age)+7' remains applicable from 1 to 13 years inclusive.
Conclusions: Weight estimation is of paramount importance in paediatric resuscitation. This study shows that the current estimation formula provides a significant underestimate of children's weights. When used to calculate drug and fluid dosages, this may lead to the under-resuscitation of a critically ill child. The formula 'Weight=3(age)+7' can be used over a larger age range (from 1 year to puberty) and allows a safe and more accurate estimate of the weight of children today.
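As a sketch of the outcome measure described above, the percentage weight difference can be computed as follows. The abstract does not state the exact denominator, so using the estimated weight as the reference is an assumption here, and the example values are hypothetical.

    def percentage_weight_difference(actual_kg, estimated_kg):
        # Positive values indicate that the formula underestimates the child's actual weight.
        return (actual_kg - estimated_kg) / estimated_kg * 100

    # Hypothetical example: a 6-year-old weighing 25 kg
    print(percentage_weight_difference(25, 2 * (6 + 4)))   # vs Weight = 2(age + 4)
    print(percentage_weight_difference(25, 3 * 6 + 7))     # vs Weight = 3(age) + 7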
abstract_id: PUBMED:24727134
Are APLS formulae for estimating weight appropriate for use in children admitted to PICU? Aim: To determine if the revised APLS UK formulae for estimating weight are appropriate for use in the paediatric intensive care population in the United Kingdom.
Methods: A retrospective observational study involving 10,081 children (5622 male, 4459 female) between the age of term corrected and 15 years, who were admitted to Paediatric Intensive Care Units in the United Kingdom over a five year period between 2006 and 2010. Mean weight was calculated using retrospective data supplied by the 'Paediatric Intensive Care Audit Network' and this was compared to the estimated weight generated using age appropriate APLS UK formulae.
Results: The formula 'Weight=(0.5×age in months)+4' significantly overestimates the mean weight of children under 1 year admitted to PICU by between 10% and 25.4%. While the formula 'Weight=(2×age in years)+8' provides an accurate estimate for 1-year-olds, it significantly underestimates the mean weight of 2-5 year olds by between 2.8% and 4.9%. The formula 'Weight=(3×age in years)+7' significantly overestimates the mean weight of 6-11 year olds by between 8.6% and 20.7%. Simple linear regression was used to produce novel formulae for the prediction of the mean weight specifically for the PICU population.
Conclusions: The APLS UK formulae are not appropriate for estimating the weight of children admitted to PICU in the United Kingdom. Relying on mean weight alone will result in significant error as the standard deviation for all age groups are wide.
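A minimal sketch of the age-banded APLS UK estimates as they are described in this abstract; the band boundaries are reproduced from the text above (under 1 year, 1-5 years, 6-11 years) and are not a definitive statement of the APLS UK guidance, and the function name is illustrative.

    def apls_uk_weight(age_years=None, age_months=None):
        # Age-banded estimates as quoted in the abstract above.
        if age_months is not None and age_months < 12:
            return 0.5 * age_months + 4        # under 1 year
        if age_years is not None and 1 <= age_years <= 5:
            return 2 * age_years + 8           # 1-5 year olds
        if age_years is not None and 6 <= age_years <= 11:
            return 3 * age_years + 7           # 6-11 year olds
        raise ValueError("age outside the bands described above")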
abstract_id: PUBMED:30847868
Validation of APLS, Argall and Luscombe Formulae for Estimating Weight among Indian Children. Weight estimation in pediatric emergencies is often required for calculation of drug dosages, fluid therapy and defibrillation. The 'gold standard' of actually weighing the patient is not practically possible in emergency conditions. The aim of this study is to validate common age-based formulae (APLS, Luscombe and Argall's) and their accuracy in estimating weight of under 5-y-old Indian children by secondary data analysis from a cross-sectional study conducted by the National Nutrition Monitoring Bureau (NNMB), National Institute of Nutrition, Hyderabad, in 10 states of India in 2011-12 among under five-year-old children. Their measured weights were compared to their estimated weights as calculated using the APLS, Luscombe and Argall formulae. There is a need to adjust the formulae for accurate estimation of weight among Indian children, as all three age-based weight formulae, namely APLS, Argall and Luscombe, overestimated the weight of Indian children.
abstract_id: PUBMED:22289683
A new age-based formula for estimating weight of Korean children. Objectives: The objective of this study was to develop and validate a new age-based formula for estimating body weights of Korean children.
Methods: We obtained body weight and age data from a survey conducted in 2005 by the Korean Pediatric Society that was performed to establish normative values for Korean children. Children aged 0-14 were enrolled, and they were divided into three groups according to age: infants (<12 months), preschool-aged (1-4 years) and school-aged children (5-14 years). Seventy-five percent of all subjects were randomly selected to make a derivation set. Regression analysis was performed in order to produce equations that predict the weight from the age for each group. The linear equations derived from this analysis were simplified to create a weight estimating formula for Korean children. This formula was then validated using the remaining 25% of the study subjects with mean percentage error and absolute error. To determine whether a new formula accurately predicts actual weights of Korean children, we also compared this new formula to other weight estimation methods (APLS, Shann formula, Leffler formula, Nelson formula and Broselow tape).
Results: A total of 124,095 children's data were enrolled, and 19,854 (16.0%), 40,612 (32.7%) and 63,629 (51.3%) were classified as infants, preschool-aged and school-aged groups, respectively. Three equations, (age in months+9)/2, 2×(age in years)+9 and 4×(age in years)-1 were derived for infants, pre-school and school-aged groups, respectively. When these equations were applied to the validation set, the actual average weight of those children was 0.4kg heavier than our estimated weight (95% CI=0.37-0.43, p<0.001). The mean percentage error of our model (+0.9%) was lower than APLS (-11.5%), Shann formula (-8.6%), Leffler formula (-1.7%), Nelson formula (-10.0%), Best Guess formula (+5.0%) and Broselow tape (-4.8%) for all age groups.
Conclusion: We developed and validated a simple formula to estimate body weight from the age of Korean children and found that this new formula was more accurate than other weight estimating methods. However, care should be taken when applying this formula to older children because of a large standard deviation of estimated weight.
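The three derived equations can be written as a single helper; this is a minimal sketch, assuming the age-group boundaries given in the abstract (infants under 12 months, preschool-aged 1-4 years, school-aged 5-14 years), with an illustrative function name.

    def korean_weight(age_years=None, age_months=None):
        if age_months is not None and age_months < 12:
            return (age_months + 9) / 2        # infants (< 12 months)
        if age_years is not None and 1 <= age_years <= 4:
            return 2 * age_years + 9           # preschool-aged (1-4 years)
        if age_years is not None and 5 <= age_years <= 14:
            return 4 * age_years - 1           # school-aged (5-14 years)
        raise ValueError("age outside the study range")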
abstract_id: PUBMED:26714256
Review of commonly used age-based weight estimates for paediatric drug dosing in relation to the pharmacokinetic properties of resuscitation drugs. Aim: To study which weight estimate calculation used in paediatric resuscitation results in optimal drug dosing; Advanced Paediatric and Life Support (APLS) or the UK Resuscitation Council age-based formula.
Method: Commonly used drugs used in paediatric resuscitation were selected and a literature search conducted for each drug's pharmacokinetic properties, concentrating on the volume of distribution (Vd). Hydrophobic drugs have a higher Vd than hydrophilic drugs as they distribute preferentially to fat mass (FM). The larger the Vd, the higher the initial dose required to achieve therapeutic plasma concentrations. Actual body weight (ABW) estimates are a good indicator of Vd for hydrophobic drugs as they correlate well with FM. Ideal body weight (IBW) estimates may be a better indicator of Vd for hydrophilic drugs, as they correlate better with lean body mass. This highlights potential variation between ABW and IBW, which may result in toxic or sub-therapeutic dosing.
Results: The new APLS formulae give higher estimates of expected weight for a wider age range. This may be a more accurate reflection of ABW due to increasing prevalence of obesity in children. The UK Resuscitation Council's formula appears to result in a lower estimate of weight, which may relate more closely to IBW.
Conclusion: The main drugs used in paediatric resuscitation are hydrophilic, thus the APLS formulae may result in too much being given. Therefore the UK Resuscitation Council's single formula may be preferred. In addition, a single formula may minimize error in the context of a child of unknown weight requiring administration of emergency resuscitation drugs.
abstract_id: PUBMED:20943838
Validation of weight estimation by age and length based methods in the Western Cape, South Africa population. Objective: To evaluate four paediatric weight estimation methods (APLS, Luscombe and Owens, Best Guess and Broselow tape) in order to determine which are accurate for weight estimation in South African children.
Method: From a database of 2832 children aged 1-10 years seen at Red Cross Hospital in Cape Town, measured weight was compared to estimated weights from all four methods.
Results: APLS formula and the Broselow Tape showed the best correlation with measured weight. Mean error was 3.3% for APLS (for 1-10-year olds) and 0.9% for Broselow tape (children <145 cm length and <35 kg). Both the Best Guess and Luscombe and Owens formulae tended to overestimate weight (15.4% and 12.4%, respectively).
Conclusion: The Broselow tape and APLS estimation methods are most accurate in estimating weight in the Western Cape paediatric population, even though they have a small tendency to underestimate weight. Clinicians need to bear in mind that none of the formulae are infallible and constant reassessment and clinical judgement should be used, as well as a measured weight as soon as possible in an emergency situation.
abstract_id: PUBMED:21134132
Validation of the Luscombe weight formula for estimating children's weight. Objective: Several paediatric weight estimation methods have been described for use when direct weight measurement is not possible. A new age-based weight estimation method has recently been proposed. The Luscombe formula, applicable to children aged 1-10 years, is calculated as (3 × age in years) + 7. Our objective was to externally validate this formula using an existing database.
Method: Secondary analysis of a prospective observational cohort study. Data collected included height, age, ethnicity and measured weight. The outcome of interest was agreement between estimated weight using the Luscombe formula and measured weight. Secondary outcome was comparison with performance of Argall, APLS and Best Guess formulae. Accuracy of weight estimation methods was compared using mean difference (bias), 95% limits of agreement, root mean square error and proportion with agreement within 10%.
Results: Four hundred and ten children were studied. Median age was 4 years; 54.4% were boys. Mean body mass index was 17 kg/m(2) and mean measured weight was 21.2 kg. The Luscombe formula had a mean difference of 0.66 kg (95% limits of agreement -9.9 to +11.3 kg; root mean square error of 5.44 kg). 45.4% of estimates were within 10% of measured weight. The Best Guess and Luscombe formulae performed better than Argall or APLS formulae.
Conclusion: The Luscombe formula is among the more accurate age-based weight estimation formulae. When more accurate methods (e.g. parental estimation or the Broselow tape) are not available, it is an acceptable option for estimating children's weight.
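The accuracy metrics named above (mean difference or bias, 95% limits of agreement, root mean square error, proportion within 10%) can be computed as in the following sketch. The sign convention (estimated minus measured) is an assumption, as it is not stated in the abstract, and the function name is illustrative.

    import numpy as np

    def agreement_stats(measured_kg, estimated_kg):
        measured = np.asarray(measured_kg, dtype=float)
        estimated = np.asarray(estimated_kg, dtype=float)
        diff = estimated - measured
        bias = diff.mean()                                   # mean difference
        sd = diff.std(ddof=1)
        limits = (bias - 1.96 * sd, bias + 1.96 * sd)        # 95% limits of agreement
        rmse = float(np.sqrt(np.mean(diff ** 2)))            # root mean square error
        within_10 = float(np.mean(np.abs(diff) <= 0.10 * measured))  # proportion within 10% of measured weight
        return bias, limits, rmse, within_10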
abstract_id: PUBMED:27353069
A New Sonographic Weight Estimation Formula for Small-for-Gestational-Age Fetuses. Objectives: The purpose of this study was to develop a new specific weight estimation formula for small-for-gestational-age (SGA) fetuses that differentiated between symmetric and asymmetric growth patterns.
Methods: A statistical estimation technique known as component-wise gradient boosting was applied to a group of 898 SGA fetuses (symmetric, n = 750; asymmetric, n = 148). A new formula was derived from the data obtained and was then compared to other commonly used equations.
Results: The new formula derived is as follows: estimated fetal weight = e^[1.3734627 + 0.0057133 × biparietal diameter + 0.0011282 × head circumference + 0.0201147 × abdominal circumference + 0.0183081 × femur length - 0.0000177 × biparietal diameter(2) - 0.0000018 × head circumference(2) - 0.0000297 × abdominal circumference(2) -0.0001007 × femur length(2) + 0.0397563 × I(sex = male) + 0.0064505 × gestational age (days) + 0.0096528 × I(SGA = asymmetric)], where the function I denotes an indicator function, which is 1 if the expression is fulfilled (sex = male; SGA type = asymmetric) and otherwise 0. In the whole study group and the 2 subgroups, the new formula showed the lowest median absolute percentage error, mean percentage error, and random error and the best distribution of absolute percentage errors within prespecified error bounds.
Conclusions: The new formula substantially improves weight estimation in SGA fetuses.
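The published equation above can be transcribed directly; this is a sketch only. The abstract does not restate measurement units, so the assumption here is that biometric measurements are in millimetres and gestational age in days, which would yield an estimate in grams; the function name and argument names are illustrative.

    import math

    def sga_estimated_fetal_weight(bpd, hc, ac, fl, male, ga_days, asymmetric):
        # Direct transcription of the equation quoted above; unit assumptions are
        # noted in the text introducing this sketch.
        exponent = (1.3734627
                    + 0.0057133 * bpd + 0.0011282 * hc
                    + 0.0201147 * ac + 0.0183081 * fl
                    - 0.0000177 * bpd ** 2 - 0.0000018 * hc ** 2
                    - 0.0000297 * ac ** 2 - 0.0001007 * fl ** 2
                    + 0.0397563 * (1 if male else 0)
                    + 0.0064505 * ga_days
                    + 0.0096528 * (1 if asymmetric else 0))
        return math.exp(exponent)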
Answer: The APLS formula is commonly used to estimate the weight of children in paediatric emergency medicine. However, its applicability to specific populations, such as the Trinidadian population, has been questioned. A study conducted in Trinidad found that the APLS formula slightly underestimates the weights of children in all age groups, with a mean difference of -1.4 kg. Despite this, the study concluded that the APLS formula should continue to be used in Trinidad for estimating the weight of children in resuscitation situations because it is well known, easy to calculate, and widely taught in that setting (PUBMED:22856543).
The study also compared the APLS formula with the Luscombe and Owens formula and a "best fit" formula derived from linear regression analysis of the measured weights. While the Luscombe and Owens formula and the derived formula gave a better estimate of the measured weight of children in Trinidad, the accuracy and precision of all three formulae were not significantly different from each other. Therefore, the recommendation was made to continue using the APLS formula in Trinidadian paediatric resuscitation (PUBMED:22856543). |
Instruction: Government spending on dental care: is it a public priority?
Abstracts:
abstract_id: PUBMED:22574760
Government spending on dental care: is it a public priority? Objectives: The majority of Canadians believe that the government should play some role in providing dental care within Canada's health care system. However, it is unclear whether Canadians consider this as a top public priority. This study determines whether dental care is a public priority among Canadian adults relative to other policy concerns and identifies factors predictive of a first priority ranking for dental care.
Methods: Data were collected in 2008 from a national random sample of 1,005 Canadian adults through a telephone interview survey. Respondents were asked to rank five spending priorities (dental care, pharmacare, home care, vision care, and child care) in terms of preferences for new government spending. Simple descriptive analyses were undertaken based on sociodemographic characteristics. Logistic regression modeling was conducted to determine which factors are predictive of a first priority ranking for dental care.
Results: Comparatively, dental care stands as the third choice among the other spending priority areas. Approximately 21 percent of adults consider dental care a first priority for spending. First priority ranking of dental care appears to be linked to socioeconomic factors: household income, educational attainment, and dental insurance coverage.
Conclusions: As a public priority, a moderate level of demand exists for more government spending on dental care in Canada, specifically among those of low income, low educational attainment, and who lack dental insurance coverage. A sustained effort should be made to push forward public dental care policies that target priority population subgroups.
abstract_id: PUBMED:30775007
Trends and drivers of government health spending in sub-Saharan Africa, 1995-2015. Introduction: Government health spending is a primary source of funding in the health sector across the world. However, in sub-Saharan Africa, only about a third of all health spending is sourced from the government. The objectives of this study are to describe the growth in government health spending, examine its determinants and explain the variation in government health spending across sub-Saharan African countries.
Methods: We used panel data on domestic government health spending in 46 countries in sub-Saharan Africa from 1995 to 2015 from the Institute for Health Metrics and Evaluation. A regression model was used to examine the factors associated with government health spending, and Shapley decomposition was used to attribute the contributions of factors to the explained variance in government health spending.
Results: While the growth rate in government health spending in sub-Saharan Africa has been positive overall, there are variations across subgroups. Between 1995 and 2015, government health spending in West Africa grew by 6.7% (95% uncertainty intervals [UI]: 6.2% to 7.0%) each year, whereas in Southern Africa it grew by only 4.5% (UI: 4.5% to 4.5%) each year. Furthermore, per-person government health spending ranged from $651 (Namibia) in 2017 purchasing power parity dollars to $4 (Central African Republic) in 2015. Good governance, national income and the share of it that is government spending were positively associated with government health spending. The results from the decomposition, however, showed that individual country characteristics made up the highest percentage of the explained variation in government health spending across sub-Saharan African countries.
Conclusion: These findings highlight that a country's policy choices are important for how much the health sector receives. As the attention of the global health community focuses on ways to stimulate domestic government health spending, an understanding that individual country sociopolitical context is an important driver for success will be key.
abstract_id: PUBMED:37809751
Can aging population affect economic growth through the channel of government spending? The demographic transition toward an aging society is a global phenomenon. An increase in the aging population directly challenges the government positions and public expenditures as it directly affects a country's aggregate demand and, thus, the country's income level. This paper investigates the impact of an aging population on the size of government spending. Using an updated dataset of 87 countries from 1996 to 2017, we study the aggregate level and each composition of government expenditures. Furthermore, we investigate whether the aging population influences the allocation of government spending toward different categories and economic growth changes. The paper uses the generalized method of moment (GMM) model for the dynamic panel data analysis to address the endogeneity problem. Our main findings suggest that an increase in the old-age population significantly induces higher aggregate government spending but only in developed countries and in particular on the spending in the social protection and environment categories. However, the aging society leads to lower government expenditure on education. Other critical findings reveal that changes in some compositions of government spending toward cultural expenditures impact growth slowdown, while an allocation toward education spending positively impacts economic growth.
abstract_id: PUBMED:37455972
Government sectoral spending and human development in Nigeria: Is there a link? The continuous increase in government expenditure in the last three decades without a commensurate improvement in all known indicators of development has generated heated debates among scholars as to the justification for the persistent rise in the annual expenditure of the government. Therefore, this study examined the effects of government sectoral spending on human development in Nigeria using annual data spanning the period 1986-2021. This study contributed to the literature by examining the effects of government sectoral spending on human development using a robust human development index that captures the multifaceted state of economic development in terms of educational attainment, life expectancy and per capita income, unlike previous studies that concentrated on aggregate government spending and used the gross domestic product as an indicator of development. Surprisingly, however, results from the Autoregressive Distributed Lag (ARDL) model employed indicated that both in the short and long run, there is no link between government sectoral spending and human development in Nigeria. Nonetheless, outcomes from the error correction models (ECMs) suggest that government sectoral spending may affect human development in the long run.
abstract_id: PUBMED:23110682
Public preferences for government spending in Canada. This study considers three questions: 1. What are the Canadian public's prioritization preferences for new government spending on a range of public health-related goods outside the scope of the country's national system of health insurance? 2. How homogenous or heterogeneous is the Canadian public in terms of these preferences? 3. What factors are predictive of the Canadian public's preferences for new government spending? Data were collected in 2008 from a national random sample of Canadian adults through a telephone interview survey (n=1,005). Respondents were asked to rank five spending priorities in terms of their preference for new government spending. Bivariate and multivariable logistic regression analyses were conducted. As a first priority, Canadian adults prefer spending on child care (26.2%), followed by pharmacare (23.1%), dental care (20.8%), home care (17.2%), and vision care (12.7%). Sociodemographic characteristics predict spending preferences, based on the social position and needs of respondents. Policy leaders need to give fair consideration to public preferences in priority setting approaches in order to ensure that public health-related goods are distributed in a manner that best suits population needs.
abstract_id: PUBMED:30670219
State government public goods spending and citizens' quality of life. A growing literature across the social sciences uses individuals' self-assessments of their own well-being to evaluate the impact of public policy decisions on citizens' quality of life. To date, however, there has been no rigorous empirical investigation into how government spending specifically on public goods impacts well-being. Using individual-level data on respondents' self-reported happiness and detailed government spending data for the American states for 1976-2006, I find robust evidence that citizens report living happier lives when their state spends more (relative to the size of a state's economy) on providing public goods. As an important spuriousness check, I also show that this relationship does not hold for total government spending or for government spending on programs that are not (strictly speaking) public goods like education and welfare assistance to the poor. Moreover, the statistical relationship between public goods spending and happiness is substantively large and invariant across income, education, gender, and race/ethnicity lines - indicating that spending has broad benefits across society. These findings suggest that public goods spending can have important implications for the well-being of Americans and, more broadly, contribute to the growing literature on how government policy decisions concretely impact the quality of life that citizens experience.
abstract_id: PUBMED:32836679
Taking off into the wind: Unemployment risk and state-dependent government spending multipliers. We propose a model with involuntary unemployment, incomplete markets, and nominal rigidity, in which the effects of government spending are state-dependent. An increase in government purchases raises aggregate demand, tightens the labor market and reduces unemployment. This in turn lowers unemployment risk and thus precautionary saving, leading to a larger response of private consumption than in a model with perfect insurance. The output multiplier is further amplified through a composition effect, as the fraction of high-consumption households in total population increases in response to the spending shock. These features, along with the matching frictions in the labor market, generate significantly larger multipliers in recessions than in expansions. As the pool of job seekers is larger during downturns than during expansions, the concavity of the job-finding probability with respect to market tightness implies that an increase in government spending reduces unemployment risk more in the former case than in the latter, giving rise to countercyclical multipliers.
abstract_id: PUBMED:34020816
Tracking government spending on immunization: The joint reporting forms, national health accounts, comprehensive multi-year plans and co-financing data. Background: Coverage rates for immunization have dropped in lower income countries during the COVID-19 pandemic, raising concerns regarding potential outbreaks and premature death. In order to re-invigorate immunization service delivery, sufficient financing must be made available from all sources, and particularly from government resources. This study utilizes the most recent data available to provide an updated comparison of available data sources on government spending on immunization.
Methods: We examined data from WHO/UNICEF's Joint Reporting Form (JRF), country Comprehensive Multi-Year Plan (cMYP), country co-financing data for Gavi, and WHO National Health Accounts (NHA) on government spending on immunization for consistency by comparing routine and vaccine spending where both values were reported. We also examined spending trends across time, quantified underreporting and utilized concordance analyses to assess the magnitude of difference between the data sources.
Results: Routine immunization spending reported through the cMYP was nearly double that reported through the JRF (rho = 0.64, 95% CI 0.53 to 0.77) and almost four times higher than that reported through the NHA on average (rho = 3.71, 95% CI 1.00 to 13.87). Routine immunization spending from the JRF was comparable to spending reported in the NHA (rho = 1.30, 95% CI 0.97 to 1.75) and vaccine spending from the JRF was comparable to that from the cMYP data (rho = 0.97, 95% CI 0.84 to 1.12). Vaccine spending from both the JRF and cMYP was higher than Gavi co-financing by a factor of at least two (rho = 2.66, 95% CI 2.45 to 2.89 and rho = 2.66, 95% CI 2.15 to 3.30, respectively).
Implications: Overall, our comparative analysis provides a degree of confidence in the validity of existing reporting mechanisms for immunization spending while highlighting areas for potential improvements. Users of these data sources should factor these into consideration when utilizing the data. Additionally, partners should work with governments to encourage more reliable, comprehensive, and accurate reporting of vaccine and immunization spending.
abstract_id: PUBMED:32837050
Government Spending, GDP and Exchange Rate in Zero Lower Bound: Measuring Causality at Multiple Horizons. This paper assesses the Granger causality between government spending and gross domestic product (GDP) in the United States at multiple horizons. This paper also analyses the role the real exchange rate plays in the causality measure during the zero lower bound (ZLB) period. Many researchers using theoretical models built in a closed economy suggest that the elasticity between government spending and GDP is very large when the nominal interest rate is binding. Other researchers, also using theoretical models generally built in an open economy, suggest that the elasticity in the ZLB period is not large. The same conflicting results are reported in the empirical literature, mostly using vector autoregressions (VARs) with different restrictions. In this paper, we use a different approach to measure the link between the two variables. The new approach has the advantage of not relying on any restrictions, as is the case with VARs when dealing with causalities. Moreover, our approach is not related to the way the model is built, as is the case with dynamic stochastic general equilibrium (DSGE) models. In this paper, we use a Granger causality measure to compare the causality for normal periods with the causality for the ZLB period. We emphasize the role played by the real exchange rate. Our empirical results provide evidence that the causality measures between government spending and GDP are larger and persistent in the ZLB period, but only if the exchange rate is not taken into account. When the exchange rate is taken into account, our measure of causality becomes very small and non-persistent.
abstract_id: PUBMED:29763786
Born equal? The distribution of government Medicare spending for children. Providing equitable care is an objective of many national healthcare systems. Using the birth cohort of the nationally representative Longitudinal Study of Australian Children linked with the Medicare Benefits Scheme billing data who were recruited in 2004 at ages 0-1 years and assessed biennially for six waves, we assessed the distribution of out-of-hospital government Medicare spending by household income. 4853 children followed over 11 years were included in the study. Distributions of major spending components including general practitioner and specialist care were assessed using concentration indices. Trends in the inequalities as children grow were investigated. The results showed that after controlling for health care needs, total government Medicare spending over 0-11 years of age favoured the rich (concentration index 0.041). The Medicare spending for general practitioner care was equal (concentration index 0.005) while for specialist care and diagnostics and imaging were 'pro-rich' (concentration index 0.108 and 0.088 respectively). Children from poorer families were most disadvantaged when aged 0-1 years in specialist spending, and the disparity lessened as children approached adolescence. Our findings suggest that income-related inequalities exist in government Medicare spending particularly in the first few years of life. As early years of life are a critical window in childhood development and building block for future health, the results warrant further investigation and attention from policy makers.
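For reference, a concentration index like those reported above is commonly computed with the "convenient covariance" formula, 2 x cov(spending, fractional income rank) / mean(spending); the sketch below assumes that formulation, which may differ in detail from the authors' method, and the function name is illustrative.

    import numpy as np

    def concentration_index(spending, income):
        spending = np.asarray(spending, dtype=float)
        income = np.asarray(income, dtype=float)
        # Fractional rank of each observation in the income distribution (0..1).
        rank = (np.argsort(np.argsort(income)) + 0.5) / len(income)
        return 2.0 * np.cov(spending, rank, bias=True)[0, 1] / spending.mean()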
Answer: Based on the abstracts provided, government spending on dental care does not appear to be the top public priority when compared to other policy concerns. In Canada, dental care was ranked as the third choice among other spending priorities, with approximately 21 percent of adults considering dental care a first priority for government spending (PUBMED:22574760). The first priority ranking of dental care seems to be associated with socioeconomic factors such as household income, educational attainment, and dental insurance coverage. This suggests that there is a moderate level of demand for more government spending on dental care, particularly among those with lower income, lower educational attainment, and those lacking dental insurance coverage (PUBMED:22574760).
Furthermore, when looking at public preferences for government spending in Canada, dental care was ranked third (20.8%) after child care (26.2%) and pharmacare (23.1%) (PUBMED:23110682). These preferences were influenced by the social position and needs of respondents, indicating that while dental care is a concern, it is not the foremost priority for new government spending among the Canadian public.
The other abstracts discuss government spending in broader terms or in different contexts, such as health spending in sub-Saharan Africa (PUBMED:30775007), the impact of an aging population on government spending (PUBMED:37809751), government sectoral spending and human development in Nigeria (PUBMED:37455972), the impact of public goods spending on citizens' quality of life in the United States (PUBMED:30670219), unemployment risk and state-dependent government spending multipliers (PUBMED:32836679), tracking government spending on immunization (PUBMED:34020816), the relationship between government spending, GDP, and exchange rate during the zero lower bound period (PUBMED:32837050), and the distribution of government Medicare spending for children in Australia (PUBMED:29763786). These abstracts do not specifically address the public priority of dental care spending.
In summary, while dental care is considered important and there is some demand for increased government spending on it, it does not emerge as the top public priority in comparison to other areas of government spending, at least in the context of Canada (PUBMED:22574760; PUBMED:23110682). |
Instruction: Does pain relief with spinal cord stimulation for angina conceal myocardial infarction?
Abstracts:
abstract_id: PUBMED:8011404
Does pain relief with spinal cord stimulation for angina conceal myocardial infarction? Objective: To investigate the possibility that spinal cord stimulation (SCS) used for pain relief can conceal acute myocardial infarction (AMI).
Design: Prospective evaluation of patients treated with SCS.
Setting: University hospital.
Patients: 50 patients with coronary artery disease and severe, otherwise intractable angina treated with SCS for 1-57 months.
Main Outcome Measures: Necropsy findings, symptoms, serum enzyme concentrations, electrocardiographic changes.
Results: Ten patients were considered to have had AMI. In nine of these SCS did not conceal precordial pain and in one patient no information about precordial pain could be obtained.
Conclusion: There was no evidence that SCS concealed acute myocardial infarction.
abstract_id: PUBMED:17907475
Spinal cord stimulation for ischemic heart disease and peripheral vascular disease. Ischemic disease (ID) is now an important indication for electrical neuromodulation (NM), particularly in chronic pain conditions. NM is defined as a therapeutic modality that aims to restore functions of the nervous system or modulate neural structures involved in the dysfunction of organ systems. One of the NM methods used is chronic electrical stimulation of the spinal cord (spinal cord stimulation: SCS). SCS in ID, as applied to ischemic heart disease (IHD) and peripheral vascular disease (PVD), started in Europe in the 1970s and 1980s, respectively. Patients with ID are eligible for SCS when they experience disabling pain resulting from ischaemia. This pain should be considered therapeutically refractory to standard treatment intended to decrease metabolic demand or following revascularization procedures. Several studies have demonstrated the beneficial effect of SCS on IHD and PVD by improving the quality of life of this group of severely disabled patients, without adversely influencing mortality and morbidity. SCS used as additional treatment for IHD reduces angina pectoris (AP) in its frequency and intensity, increases exercise capacity, and does not seem to mask the warning signs of a myocardial infarction. Besides the analgesic effect, different studies have demonstrated an anti-ischemic effect, as expressed by different cardiac indices such as exercise duration, ambulatory ECG recording, coronary flow measurements, and PET scans. SCS can be considered as an alternative to open heart bypass grafting (CABG) for patients at high risk from surgical procedures. Moreover, SCS appears to be more efficacious than transcutaneous electrical nerve stimulation (TENS). The SCS implantation technique is relatively simple: implanting an epidural electrode under local anesthesia (supervised by the anesthetist) with the tip at T1, covering the painful area with paraesthesia by external stimulation (pulse width 210, rate 85 Hz), and connecting this electrode to a subcutaneously implanted pulse generator. In PVD the pain may manifest itself at rest or during walking (claudication), disabling the patient severely. Most of the patients suffer from atherosclerotic critical limb ischemia. All patients should be therapeutically refractory (medication and revascularization) to become eligible for SCS. Ulcers on the extremities should be minimal. In PVD the same implantation technique is used as in IHD except that the tip of the electrode is positioned at T10-11. In PVD the majority of the patients show significant reduction in pain and more than half of the patients show improvement of circulatory indices, as shown by Doppler, thermography, and oximetry studies. Limb salvage studies show variable results depending on the stage of the trophic changes. The underlying mechanisms of action of SCS in PVD require further elucidation.
abstract_id: PUBMED:19295969
Myocardial perfusion after one year of spinal cord stimulation in patients with refractory angina. Aim: Spinal cord stimulation (SCS) is recommended for patients with coronary artery disease (CAD) and refractory angina. We used positron emission tomography (PET) to investigate the long-term effect of SCS on regional myocardial perfusion in patients suffering from angina pectoris refractory to medical treatment and without option for coronary intervention.
Patients, Methods: We analyzed data of 44 patients with stable CAD (91% three vessel disease). At baseline, we determined coronary flow reserve (CFR) using 13N-ammonia-PET and myocardial viability with 18F-FDG. SCS was performed for one year (Medtronic Itrell III or Synergy, Düsseldorf, Germany). During follow-up, no cardiac interventions were necessary and no myocardial infarctions occurred. At one year follow-up, CFR was measured again.
Results: In the majority of patients (77%), SCS led to an improvement of clinical symptoms. CFR did not change significantly during follow-up. Subjective improvement did not correlate with an increase of CFR.
Conclusions: Despite its clinical effect, SCS does not have a direct impact on CFR in patients with stable CAD. According to our results, the pain relief is not due to an improvement of the myocardial blood supply.
abstract_id: PUBMED:8088270
Spinal electrical stimulation for intractable angina--long-term clinical outcome and safety. Spinal cord electrical stimulation is an alternative therapy for patients with chronic pain syndromes including angina. Although it has been shown to produce symptomatic relief and reduce ischaemia, doubts remain about its long-term safety. We report here for the first time the results of a follow-up study over a period of 62 months, mean 45 months (range 21-62), of 23 patients who had stimulator units implanted for intractable angina unresponsive to standard therapy. Symptomatic improvement was good and persisted in the majority with a mean (SD) change of NYHA grade from 3.1 (0.8) pre-operatively to 2.0 (0.9) (P < 0.01) immediately after operation and 2.1 (1.07) at the latest follow-up. GTN consumption fell markedly. Mean (SEM) treadmill exercise time increased from 407 (45) s with the stimulator off to 499 (46) s with the stimulator on (P < 0.01). Forty-eight hour ST segment monitoring in those with bipolar leads showed a reduction of total number and duration of ischaemic episodes. There were three deaths, none of which were sudden or unexplained and this mortality rate is acceptable for such a group of patients. Two patients had a myocardial infarction, which was associated with typical pain and not masked by the treatment. Complications related to earlier lead designs were frequent. This study confirms that spinal electrical stimulation is an effective and safe form of alternative therapy for the occasional patient whose angina is unresponsive to standard therapies.
abstract_id: PUBMED:11666098
Long-term effects of spinal cord stimulation on myocardial ischemia and heart rate variability: results of a 48-hour ambulatory electrocardiographic monitoring. Background: Spinal cord stimulation (SCS) has analgesic properties and may be used to treat pain in patients with therapeutically refractory angina who are unsuitable for myocardial revascularization. Some studies have also demonstrated an anti-ischemic effect. The aim of this study was to evaluate the long-term persistence of the effects of SCS on myocardial ischemia and on heart rate variability.
Methods: Fifteen patients (9 males, 6 females, mean age 76 +/- 8 years, range 58-90 years) with severe refractory angina pectoris (Canadian class III-IV), on optimal pharmacological therapy, unsuitable for myocardial revascularization and treated with SCS for a mean follow-up of 39 +/- 27 months (range 9-92 months) were studied. Eleven patients had had a previous myocardial infarction and 5 a coronary artery bypass graft. The mean ejection fraction was 54 +/- 7% (range 36-65%). All patients underwent 48-hour ambulatory ECG monitoring and were randomly assigned to 24 hours without SCS (off period) and 24 hours with SCS (on period). The primary endpoints were: number of ischemic episodes, total duration of ischemic episodes (min), and total ischemic burden (mV*min).
Results: The heart rate was not statistically different during the off and on SCS periods (median 64 and 67 b/min, respectively). The number of ischemic episodes decreased from a median of 6 (range 0-29) during the off period to 3 (range 0-24) during the on period (p < 0.05). The total duration of ischemic episodes decreased from a median of 29 min (range 0-186 min) during the off period to 16 min (range 0-123 min) during the on period (p < 0.05). The total ischemic burden decreased from a median of 2.5 mV*min (range 0-19.5 mV*min) during the off period to 0.8 mV*min (range 0-13 mV*min) during the on period (p = NS). The heart rate variability parameters were similar during the on and off periods.
Conclusions: SCS exerts long-term anti-ischemic effects.
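The off/on comparison above is a paired, within-patient contrast of 24-hour monitoring periods, summarized by medians and p-values. The abstract does not name the statistical test used, so the sketch below simply assumes a Wilcoxon signed-rank test on per-patient episode counts; the counts are placeholders chosen only to match the reported medians and ranges, not the study's actual data.

```python
# Hedged sketch: a paired nonparametric off/on comparison of ischemic episode
# counts, assuming a Wilcoxon signed-rank test (not named in the abstract).
from scipy.stats import wilcoxon

# Hypothetical per-patient 24-h episode counts (15 patients, illustrative only).
episodes_off = [6, 4, 12, 0, 29, 8, 3, 5, 7, 2, 10, 6, 1, 9, 4]
episodes_on  = [3, 2, 10, 0, 24, 5, 1, 4, 6, 1,  7, 3, 0, 6, 2]

stat, p_value = wilcoxon(episodes_off, episodes_on)
print(f"Wilcoxon signed-rank statistic = {stat:.1f}, p = {p_value:.3f}")
```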
abstract_id: PUBMED:21410637
24. Chronic refractory angina pectoris. Angina pectoris, cardiac pain associated with ischemia, is considered refractory when optimal anti-anginal therapy fails to resolve symptoms. It is associated with a decreased life expectancy and diminishes the quality of life. Spinal cord stimulation (SCS) may be considered for patients who have also undergone comprehensive interventions, such as coronary artery bypass graft (CABG) and percutaneous transluminal coronary angioplasty (PTCA) procedures. The mechanism of action of SCS is not entirely clear. Pain reduction is related to the increased release of inhibitory neuropeptides as well as normalization of the intrinsic nerve system of the heart muscle, and may have a protective myocardial effect. SCS in patients with refractory angina pectoris results in reduced anginal attacks as well as improved rate pressure product prior to the occurrence of ischemic events. This may be the result of reduced Myocardial Volume Oxygen (MVO(2) ) and possibly the redistribution of the coronary blood flow to ischemic areas. There are a number of studies that demonstrate that SCS does not mask acute myocardial infarction. The efficacy of the treatment has been investigated in two prospective, randomized studies. The long-term results showed an improvement of the symptoms and of the quality of life. SCS can be an alternative to surgical intervention in a selected patient population. In addition, SCS is a viable option in patients in whom surgery is not possible. SCS is recommended in patients with chronic refractory angina pectoris that does not respond to conventional treatment and in whom revascularization procedures have been attempted or not possible, and who are optimized from a medical perspective.
abstract_id: PUBMED:6766841
Relief of refractory angina with continuous intravenous infusion of nitroglycerin. Seventy-five patients with chest pain due to prolonged myocardial ischemia (group I, n=45) or acute myocardial infarction (group II, n=30) were treated with continuous intravenous infusion of nitroglycerin. Pain relief was achieved immediately or after titration in 40 of 45 group I patients and 22 of 30 group II patients. Of the 29 group I patients who received narcotic analgesics for pain relief prior to the nitroglycerin infusion, 20 experienced a decrease in narcotics required for pain relief while intravenously receiving nitroglycerin. Twenty-four of 28 group I patients and 14 of 19 group II patients who had angina refractory to multiple doses of sublingual nitroglycerin received relief with intravenous administration of nitroglycerin. These data suggest that intravenous administration of nitroglycerin is useful, adjunctive therapy for chest pain even when refractory to multiple doses of sublingual nitroglycerin.
abstract_id: PUBMED:64978
An appraisal of symptom relief after coronary bypass grafting. Subjective symptomatic improvement is experienced by 90% of patients after coronary bypass surgery. Objective exercise testing reduces this incidence to 70%. An analysis of the multifactorial genesis of pain relief based on data from non-randomized trials reveals that graft patency plays a dominant but not unique role in causing improved symptomatology. In a number of cases, intra-operative myocardial infarctions seem to explain the pain relief but may also have opposite effects. Changes in left ventricular function operate bidirectionally, but data on this variable in relation to changes in symptomatology are not amenable to detailed analysis. Progression in native vessel lesions apparently opposes pain relief and has its greatest impact in connection with graft closure. Residual post-operative angina is evidently related also to incomplete revascularization.
abstract_id: PUBMED:4544189
Carotid sinus nerve stimulation in the management of intractable angina pectoris: four-year follow-up. Since February 1969 carotid sinus nerve stimulators have been implanted in 13 patients with intractable, incapacitating angina pectoris, unrelieved by medical management and, in some cases, revascularization procedures. Four patients died, one on the third postoperative day, the others at 15, 31 and 49 months postoperatively. Two other patients sustained myocardial infarcts, at two weeks and two months postoperatively. Complications were few and transient. The condition of two patients is now deteriorating. In all cases there was relief of pain and a decrease in blood pressure and heart rate. Exercise could be performed at a heavier load or for a longer time. Use of the stimulator was both intermittent and continuous, proving especially valuable in the relief of nocturnal angina. All patients were markedly improved and able to leave hospital. Four patients underwent aortocoronary bypass 14, 15, 22 and 28 months after implantation of the device; three obtained good results and no longer require the CSNS although it remains in place. The fourth obtained little improvement and continues to use the stimulator.
abstract_id: PUBMED:8067543
Long-term home self-treatment with high thoracic epidural anesthesia in patients with severe coronary artery disease. Twenty patients with severe coronary artery disease (CAD) and refractory unstable angina who had sustained a recent myocardial infarction (n = 7) or who were considered as either inoperable (n = 10) or at severe operative risk (n = 3) were treated with high thoracic epidural anesthesia (TEA) via a tunneled epidural catheter for long-term self-use at home. They were instructed to use sublingual nitrates as their first treatment for angina and to inject 3-5 mL of bupivacaine (5 mg/mL) into the epidural catheter only for severe episodes. The epidural catheters were maintained for long periods (mean, 6 mo; longest treatment, 3.2 yr). Most patients expressed great satisfaction with the TEA treatment. The epidural bupivacaine injections induced complete pain relief within a short period. There were no severe adverse events induced by the treatment, and only one case of tachyphylaxis was documented. There was a decrease in the frequency of epidural injections with time from an average of 1.86 +/- 0.5 to 0.96 +/- 1.1 injections/d from the first to the last week of treatment (n = 18) (P < 0.001). This decrease was most evident when comparing the last week of TEA treatment in patients having TEA < 1 mo (n = 5) with patients having TEA > 6 mo (n = 5), an average of 2.00 +/- 0.47 vs 0.25 +/- 0.10 injections/d, respectively (P < 0.01). Seven patients had either coronary bypass surgery (n = 6) or spinal cord electrical stimulation (n = 1) after an average of 42 days of successful TEA treatment (range, 14-70 days).(ABSTRACT TRUNCATED AT 250 WORDS)
Answer: Based on the provided abstracts, there is no evidence that pain relief with spinal cord stimulation (SCS) for angina conceals myocardial infarction (MI). A prospective evaluation of patients treated with SCS found that in ten patients considered to have had an MI, SCS did not conceal precordial pain in nine, and in one patient, no information about precordial pain could be obtained (PUBMED:8011404). Additionally, studies have demonstrated that SCS used as additional treatment for ischemic heart disease does not seem to mask the warning signs of a myocardial infarction (PUBMED:17907475). Furthermore, there are a number of studies that demonstrate that SCS does not mask acute myocardial infarction (PUBMED:21410637). In a long-term clinical outcome and safety study, two patients had a myocardial infarction, which was associated with typical pain and not masked by the treatment (PUBMED:8088270). Therefore, the evidence suggests that SCS for angina does not conceal the occurrence of myocardial infarction. |
Instruction: A comparative study on nasal packing after septoplasty: does it matter in terms of patient comfort, bleeding, and crust or synechia formation?
Abstracts:
abstract_id: PUBMED:27107601
A comparative study on nasal packing after septoplasty: does it matter in terms of patient comfort, bleeding, and crust or synechia formation? Objectives: This study aims to compare pain, bleeding, nasal obstruction, crust and synechia formation, and anesthesia-related morbidity in patients with and without use of nasal packs after septoplasty.
Patients And Methods: A total of 66 patients (32 women, 34 men; mean age 24 years; range 18 to 48 years) who underwent Cottle's septoplasty under general anesthesia were randomly allocated to three groups in this prospective cohort. Telfa nasal packs were used in the sutures + Telfa group (n=22) and Merocel nasal packs in the Merocel alone group (n=22). No packs were administered in the sutures alone group (n=22). The three groups were compared in terms of nasal obstruction, bleeding, pain, crust and synechia formation, as well as the amount of secretion, the need for an oropharyngeal airway, the presence of laryngospasm, and effort for nasal breathing after anesthesia.
Results: The amount of bleeding was higher, and the degree of nasal obstruction lower, in the sutures alone group. Pain and secretion were more marked in the Merocel alone group. After the first week, these differences were no longer detectable among the groups. There were no differences between the three groups with respect to crust and synechia formation two weeks after septal surgery.
Conclusion: Nasal packs can be more useful in patients who suffer from bleeding-related morbidity, while septoplasty applied without nasal packs can be more suitable in patients with obstructive sleep apnea. The use of nasal packs in septoplasty should be determined on an individualized basis with respect to the characteristics of each patient.
abstract_id: PUBMED:24294575
Modified technique of anterior nasal packing: a comparative study report. Anterior nasal packing, a common procedure in otorhinolaryngology practice, has several complications. Pain during introduction and removal of the pack, bleeding after removal due to mucosal damage, and synechia formation are common among them. A continuous effort is under way worldwide to combat these by modifying the nature of the pack material or inventing new materials for nasal packing. In the present study, a new modification of the conventional gauze pack, using aluminum foil prepared from the cover of suture materials as a septal splint (to reduce mucosal damage), was compared with the conventional gauze pack and another, costlier material, the nasal tampon (Merocel). Comparisons were made in terms of cost, efficacy and complications. This was a prospective, hospital-based interventional study. Patients were distributed into three groups according to the material used for anterior nasal packing. Comparisons were made in terms of cost of the material used, pain during introduction of the pack, rise in systolic blood pressure, incidence of bleeding while the pack was in situ, incidence of bleeding after removal of the pack that required repacking, and incidence of synechia formation after pack removal. Episodes of bleeding while the pack was in situ, within the first 48 h, that necessitated repacking were significantly more prevalent in the nasal tampon group (12.5% of patients), but occurred in only 2.1% and 2.4% of patients with the conventional gauze pack and our modification, respectively. Regarding bleeding after removal of the pack, 10.6% of patients experienced bleeding with the conventional gauze pack, whereas with our modification it was only 2.4%. Synechia formation was highest among cases with the conventional gauze pack (14.9%), but with our modification it was only 2.4%. This study found that the use of aluminum foil prepared from the cover of suture materials can be a very useful and cost-effective method to reduce some of the complications of anterior nasal packing.
abstract_id: PUBMED:22754803
A comparative study of septoplasty with or without nasal packing. This study was conducted to compare the outcome of septoplasty with or without nasal packing. The study subjects were randomly allocated into two groups. There was a significant reduction in the frequency of postoperative pain, headache and discomfort, and in the duration of hospital stay, in patients who underwent septoplasty without nasal packing. However, there was no difference in postoperative bleeding or septal perforation between the two groups. Therefore, septoplasty without nasal packing is a preferred alternative to septoplasty with nasal packing.
abstract_id: PUBMED:31763241
Effectiveness of Nasal Packing in Trans-septal Suturing Technique in Septoplasty: A Randomized Comparative Study. Nasal packing is routinely used after septoplasty, but there are patient factors for which its use needs to be reconsidered. The aim was to assess the effectiveness of nasal packing alongside the trans-septal suturing technique in septoplasty. In this prospective, comparative study, patients submitted to septoplasty were randomized to receive or not receive nasal packing postoperatively. Postoperative status was compared for pain, headache, discomfort in swallowing, epiphora, bleeding, infection and pain on pack removal. The trans-septal suturing technique was used in all patients. The study group comprised 60 patients. Two groups were formed: group A, in whom nasal packing with Merocel was done postoperatively, and group B, in whom nasal packing was not done; in both groups quilting sutures were applied to the septum. All patients in group A had nasal pain and headache. Other symptoms in group A were epiphora and discomfort in swallowing due to ear discomfort, in addition to pain on removal of the packs. Routine use of nasal packing can be avoided; instead, sutures can be placed over the septum, which improves pain and the pack-related symptoms in the postoperative period.
abstract_id: PUBMED:32127341
Effect of infiltrating nasal packing with local anesthetics in postoperative pain and anxiety following sinonasal surgeries: a systemic review and meta-analysis. Introduction: Packing of the nasal cavity has traditionally been used for postoperative bleeding control and decreasing synechia formation in patients undergoing nasal surgeries. Although absorbable nasal packing has been gaining popularity in the recent years, nonabsorbable nasal packing is still often used in nasal surgeries in various parts of the world. It is known to be associated with pain and discomfort especially upon and during removal, and previous reviews have only evaluated the effects of local anesthetic infiltration of nasal packing in septal surgeries.
Objective: To evaluate the effect of infiltrating nasal packing with local anesthetics in postoperative pain and anxiety following sinonasal surgeries MATERIALS AND METHODS: We searched the PubMed and Embase databases from their earliest record to April 27, 2019, randomized controlled trials and prospective controlled trials for review, and included only randomized controlled trials for data analysis. We included studies using topical anesthetics-infiltrated nasal packing following sinonasal surgeries and evaluated the effectiveness compared to placebo packing in pain reduction during postoperative follow up, as well as the effectiveness in anxiety reduction.
Results: Among 15 studies included for review, 9 studies involving 765 participants contributed to the meta-analysis. In terms of pain reduction, our analysis showed significant standard mean differences regarding effectiveness at postoperative 1, 12, 24 h interval for all surgical groups combined, in the sinus surgery group, as well as during nasal packing removal. There was no consistent evidence to support the effectiveness in anxiety reduction.
Conclusions: Our study supports anesthetic infiltration of nasal packing as an effective method of managing pain in patients with nasal packing after sinonasal surgeries. However, the level of evidence is low. More high-quality randomized controlled trials are needed to establish its effectiveness in reducing anxiety. We believe this review is of great clinical significance due to the vast patient population undergoing sinonasal surgeries. Postoperative local hemorrhage remains the greatest concern for ear, nose and throat surgeons due to the rich vasculature of the nose and sinuses. Sinonasal packing provides structural support and serves as an important measure for hemostasis and the prevention of synechia formation. Although absorbable packing has been gaining popularity in recent years, nonabsorbable packing materials are still used in many countries due to lower cost. Infiltration of nasal packing with local anesthetic provides a solution to the discomfort, nasal pressure and nasal pain commonly experienced by patients, as evidenced by our analysis.
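The pain analysis in this meta-analysis pools standardized mean differences across trials. As a rough, hedged illustration of that kind of calculation, the sketch below computes Cohen's d per study and combines studies with simple inverse-variance weighting; the review itself used more complete random-effects machinery, and the study-level numbers here are invented placeholders, not extracted data.

```python
# Hedged sketch: pooling standardized mean differences (SMDs) across trials.
# Fixed-effect inverse-variance pooling is shown for brevity; the review used
# random-effects models. All arm-level values below are illustrative.
import math

def smd_and_variance(m1, s1, n1, m2, s2, n2):
    """Cohen's d for two arms plus its approximate sampling variance."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

# (mean, SD, n) for anesthetic-infiltrated packing vs placebo packing pain scores.
studies = [
    ((2.1, 1.0, 40), (3.4, 1.2, 40)),
    ((1.8, 0.9, 30), (2.9, 1.1, 32)),
    ((2.5, 1.3, 25), (3.1, 1.4, 27)),
]

weights, weighted_effects = [], []
for arm_treated, arm_control in studies:
    d, var = smd_and_variance(*arm_treated, *arm_control)
    weights.append(1 / var)
    weighted_effects.append(d / var)

pooled = sum(weighted_effects) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"Pooled SMD = {pooled:.2f} "
      f"(95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```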
abstract_id: PUBMED:36452783
The Efficacy of Septal Quilting Sutures Versus Nasal Packing in Septoplasty. Nasal packing is the classic method adopted by many otolaryngologists to stabilize the nasal septum and decrease the occurrence of postoperative bleeding and septal hematoma after septoplasty. However, because of its associated postoperative morbidity, many surgeons have started to adopt alternative methods. This study aimed to assess the outcome and benefits of septal quilting sutures in comparison to nasal packing after septoplasty. A prospective non-randomized comparative interventional study was carried out at two teaching hospitals in Mosul city from January 2020 to January 2021. A total of 60 patients who were candidates for septoplasty were included in the study. According to the surgeon's preference, 30 patients had placement of septal quilting sutures (group A), and in the other 30 patients nasal packing was performed (group B). Patients were assessed for postoperative morbidity and early outcome in the first 24 h, 1 week and 1 month postoperatively. In the first 24 h after septoplasty, patients in group B had significantly higher levels of nasal/facial pain, headache, sleep disturbance, breathing difficulties and swallowing difficulties compared to group A (p < 0.001). Over the follow-up period of 1 month, no significant differences were recorded regarding postoperative bleeding, hematoma, infection, adhesion formation and septal perforation between the two groups (p > 0.05). The septal quilting sutures technique is more favorable in the early period in terms of patient discomfort after septoplasty, with less nasal blockage and nasal/facial pain, absence of the misery of pack removal, and minimal bleeding after surgery.
abstract_id: PUBMED:27900768
Is nonabsorbable nasal packing after septoplasty essential? A meta-analysis. Objectives: Septoplasty is one of the most frequently performed rhinologic surgeries. Complications include nasal bleeding, pain, headache, septal hematoma, synechia, infection, residual septal deviation, and septal perforation. In this study, we aimed to compare complication rates among patients according to packing method.
Methods: We performed a literature search using PubMed, Embase, and the Cochrane Library through August 2016. Our systematic review followed Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines. Random effect models were used to calculate risk differences and risk ratio with 95% confidence intervals (CIs). Cases referred to the nonpacking group included patients treated with transseptal sutures or septal splints. Cases referred to as the packing group included patients treated with nonabsorbable packing such as Merocel or gauze.
Results: Our search included 20 randomized controlled trials (RCTs) with a total of 1,321 subjects in the nonpacking group and 1,247 subjects in the packing group. There were no significant differences between packing methods regarding bleeding, hematoma, perforation, infection, and residual septal deviation. The risk differences of postoperative pain, headache, and postoperative synechia were -0.50 [95% CI: -0.93 to -0.07, P = .02], -0.42 [95% CI: -0.66 to -0.19, P = .0004], and -0.03 [95% CI: -0.06 to -0.01, P = .01], respectively.
Conclusions: Nonabsorbable nasal packing is no more effective than treatments without packing after septoplasty. Septal splints and transseptal sutures reduce postoperative pain, headache, and synechia.
Level Of Evidence: 1B. Laryngoscope, 127:1026-1031, 2017.
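This meta-analysis reports pooled risk differences with 95% confidence intervals from random-effects models. The sketch below shows one common way such a pooled risk difference can be computed (a DerSimonian-Laird random-effects estimate); the event counts are invented placeholders, not data from the 20 included RCTs.

```python
# Hedged sketch: random-effects (DerSimonian-Laird) pooling of risk differences,
# e.g. for postoperative synechia in nonpacking vs packing arms. Illustrative data.
import math

def risk_difference(events1, n1, events2, n2):
    """Risk difference (nonpacking minus packing) and its sampling variance."""
    p1, p2 = events1 / n1, events2 / n2
    rd = p1 - p2
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
    return rd, var

# (events_nonpacking, n_nonpacking, events_packing, n_packing) per trial.
trials = [(2, 60, 5, 58), (1, 45, 4, 47), (3, 80, 7, 82)]

effects, variances = zip(*(risk_difference(*t) for t in trials))
w = [1 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
tau2 = max(0.0, (q - (len(trials) - 1)) /
           (sum(w) - sum(wi**2 for wi in w) / sum(w)))

w_star = [1 / (v + tau2) for v in variances]
pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
se = math.sqrt(1 / sum(w_star))
print(f"Pooled RD = {pooled:.3f} "
      f"(95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")
```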
abstract_id: PUBMED:24618630
Patient comfort following FESS and Nasopore® packing, a double blind, prospective, randomized trial. Background: The use of nasal packing after functional endoscopic sinus surgery (FESS) is often associated with pain and a feeling of pressure for patients. The aim of the present work was to investigate a modern wound dressing made of polyurethane (Nasopore®) that makes removal of the nasal packing unnecessary and is focussed on patient comfort.
Methodology: Following bilateral FESS, after randomisation, one side was packed with Nasopore® while the other side was without packing as a control. The following parameters from 47 patients were determined daily in two centres from post-operative day 1 for the duration of the inpatient stay in a double-blinded setting: side-specific post-operative bleeding, nasal breathing and feeling of pressure as well as the general parameters sleep disturbance, headaches and general well-being. Which side patients considered subjectively the better was also recorded.
Results: No significant differences were determined between the two sides in terms of the rates of post-operative bleeding and nasal breathing. The feeling of pressure was slightly less on the side packed with Nasopore® on post-operative days 2 and 3. No trend could be observed regarding which side patients described as being subjectively better.
Conclusion: There were only slight differences in patient comfort between the Nasopore® side and the control. Because the feeling of pressure in the midface was significantly less and there were no complications, this suggests there is greater patient comfort when using Nasopore® compared to using no nasal packing.
abstract_id: PUBMED:24427687
Comparison of septoplasty with and without nasal packing and review of literature. Septoplasty is routinely performed for a symptomatic deviated nasal septum. The most unpleasant part of this procedure is the pain during removal of the nasal pack. The objective of this study was to compare the results of septoplasty with and without postoperative nasal packing and thereby assess the necessity of nasal packing after septoplasty. This descriptive study was carried out in the ENT Department of D.Y. Patil Hospital, Kolhapur. Fifty patients aged 18-50 years with a symptomatic deviated septum were selected, of whom 25 underwent septoplasty with packing and 25 underwent septoplasty without packing, with quilting sutures taken on the septum. Patients who underwent septoplasty without packing experienced less pain, and bleeding was minimal. Only one patient developed a septal hematoma postoperatively. All the patients were satisfied at the end of 3 months. Simple deviated nasal septum (DNS) can be safely treated with septoplasty without anterior nasal packing and by taking quilting sutures on the septum. The sutures are also useful for closing any inadvertent tears of the septal mucosa and providing additional support for the cartilage pieces retained in septoplasty. Nasal packing should be reserved only for patients with an increased risk of bleeding.
abstract_id: PUBMED:23120133
Routine nasal packing following nasal surgery: is it necessary? The practice of routine nasal packing after nasal surgery is customary and not evidence based. Postoperative complications, while uncommon, are sometimes pack related. A retrospective analysis of 110 patients who underwent a variety of nasal operations was performed to determine the incidence of complications when nasal packs were not routinely inserted. Nine cases (8.2%) [6 of these were revision surgeries] needed nasal packing for haemostasis at the end of surgery. Four cases (3.6%) required packing in the immediate postoperative period. One patient who required nasal packing developed a unilateral adhesion. No patient developed a septal hematoma. The need for routine nasal packing is not supported. Packing should be indicated where there is continuous bleeding at the end of surgery.
Answer: The use of nasal packing after septoplasty has been a subject of debate, with various studies comparing the outcomes of septoplasty with and without nasal packing in terms of patient comfort, bleeding, and crust or synechia formation.
One study found that nasal packs can be more useful in patients who suffer from bleeding-related morbidity, while septoplasty without nasal packs may be more suitable for patients with obstructive sleep apnea. The study concluded that the use of nasal packs should be individualized based on patient characteristics (PUBMED:27107601). Another study suggested that a modified technique using aluminum foil as a septal splint with conventional gauze pack reduced complications such as mucosal damage and synechia formation compared to conventional gauze pack and nasal tampon (PUBMED:24294575).
A comparative study reported significant reductions in postoperative pain, headache, discomfort, and duration of hospital stay in patients who underwent septoplasty without nasal packing, with no difference in postoperative bleeding and septal perforation between the two groups (PUBMED:22754803). Similarly, another study found that routine use of nasal packing can be avoided by using trans-septal suturing technique, which benefits in improving pain and symptoms due to pack in the postoperative period (PUBMED:31763241).
Infiltrating nasal packing with local anesthetics has been shown to be an effective method in managing pain in patients with nasal packing after sinonasal surgeries, although more high-quality randomized controlled trials are needed to establish its effectiveness in reducing anxiety (PUBMED:32127341). The efficacy of septal quilting sutures versus nasal packing in septoplasty was also assessed, with results indicating that quilting sutures are more favorable in terms of patient discomfort after septoplasty, with minimal bleeding after surgery (PUBMED:36452783).
A meta-analysis concluded that nonabsorbable nasal packing is no more effective than treatments without packing after septoplasty, with septal splints and transseptal sutures reducing postoperative pain, headache, and synechia (PUBMED:27900768). Another study comparing septoplasty with and without nasal packing found that patients without packing experienced less pain, and bleeding was minimal, suggesting that nasal packing should be reserved only for patients with an increased risk of bleeding (PUBMED:24427687). |
Instruction: Surgical outreach in rural South Africa: are we managing to impart surgical skills?
Abstracts:
abstract_id: PUBMED:24388091
Surgical outreach in rural South Africa: are we managing to impart surgical skills? Background: The Department of Health in KwaZulu-Natal (KZN) has run a surgical outreach programme for over a decade. Objective: To quantify the impact of the outreach programme by analysing its effect on the operative capacity of a single rural health district.
Methods: During 2012, investigators visited each district hospital in Sisonke Health District (SHD), KZN to quantify surgery undertaken by resident staff between 1998 and 2013. Investigators also reviewed the operative registers of the four district hospitals in SHD for a 6-month period (March - August 2012) to document the surgery performed at each hospital. The number of staff who attended specialist-based teaching was recorded in an attempt to measure the impact of each visit.
Results: From 1998 to 2013, 35 385 patients were seen at 1 453 clinics, 5 199 operations were performed and 1 357 patients were referred to regional hospitals. A total of 3 027 staff attended teaching ward rounds and teaching sessions. In the four district hospitals, 2 160 operations were performed in the 6-month period. There were 653 non-obstetrical operations and the obstetric cases comprised 1 094 caesarean sections, 55 sterilisations and 370 evacuations of the uterus.
Conclusion: The infrastructure is well established and the outreach programme is well run and reliable. The clinical outputs of the programme are significant. However, the impact of this programme on specific outcomes is less certain. This raises the question of the future strategic choices that need to be made in our attempts to improve access to surgical care.
abstract_id: PUBMED:24299437
Outreach surgical consulting services in North East Victoria. Objective: There is a paucity of data regarding the provision of consultative outreach specialist surgical services to rural areas. This paper aims to describe a model of outreach consultative practice to deliver specialist surgical services to rural communities.
Design: Analysis of prospectively collected data for consultations in a three month period for two surgeons based in Wangaratta.
Setting: Two surgeons in regional Victoria based in Wangaratta, North East Victoria, conducting outreach consultations to Beechworth, Benalla, Bright and Mansfield.
Participants: All patients seen in consultations over a 3-month period.
Main Outcome Measures: Patient workload, casemix of presenting complaint, consultation outcome including plan for surgical procedure.
Results: Outreach surgical consulting was associated with a higher proportion of new consultations, and there was a trend towards consultations being more likely to result in a surgical procedure than those in the base rural centre.
Conclusions: Outreach surgical consulting provides surgeons with a larger referral base and provides communities with better access to local specialists. Outreach practice should be encouraged for surgeons in regional centres.
abstract_id: PUBMED:35802800
Documenting surgical triage in rural surgical networks: Formalising existing structures. Objective: It is essential that the embedded process of rural case selection be highlighted and documented to provide reassurance of rigour across rural surgical services supported by generalist surgeons, general practitioners with enhanced surgical skills and general practitioner anaesthetists. This enables feedback and improves the triage and case selection process to ensure the highest quality outcomes. Therefore, this research aims to explore participants' rational criteria for decision making around rural case selection.
Design: Participants took part in a series of semi-structured in-depth interviews, which were coded and underwent thematic analysis.
Setting: Six community hospitals in British Columbia, Canada.
Participants: General practitioners with enhanced surgical skills, general practitioner anaesthetists, local maternity care providers, and specialists.
Results: Based on participant accounts, rural surgical and obstetrical decision-making processes for local patient selection or regional referral had five major components: (1) Clinical Factors, (2) Physician Factors, (3) Patient Factors, (4) Consensus Between Providers and (5) the Availability of Local Resources.
Conclusion: Decision-making processes around rural surgical and obstetrical patient selection are complex and require comprehensive understanding of local capacity and resources. Current policies and guidelines fail to consider the varying capacities of each rural site and should be hospital specific.
abstract_id: PUBMED:9814737
The provision of general surgical services in rural South Australia: a new model for rural surgery. Background: Rural South Australia (SA), like other rural areas in Australia, faces a crisis in the medical workforce. It is also generally assumed that the same applies to rural surgical services but finding evidence to support this is scarce.
Methods: All hospitals situated outside the outer metropolitan area of SA were surveyed about surgical services (n = 57). Questions were asked about the frequency of emergency and elective theatre usage and which surgeons provided surgical services.
Results: Operating theatre facilities were in active use in 39 of the 57 hospitals studied. At the time of the study there were seven specialist general surgeons resident in rural SA. General practitioners continued to have a major input in the provision of surgical services, either by providing the general anaesthetic (34/39) or by performing the surgical procedures (26/39).
Conclusions: The Department of Surgery at the University of Adelaide is instituting various measures to counter the rural surgical workforce problem and is developing a model that serves either the individual or the two-person surgical practice. Metropolitan teaching hospitals can play an important role in supporting current rural surgeons and can foster an increased commitment to the future of rural general surgery.
abstract_id: PUBMED:22931424
Caseload of general surgeons working in a rural hospital with outreach practice. Background: There is little published data regarding the caseloads of general surgeons working in rural Australia conducting outreach services as part of their practice. It remains difficult to attract and retain surgeons in rural Australia. This study aims to describe the workload of surgeons working in a rural centre with outreach practices in order to determine the required skills mix for prospective surgeons.
Methods: A retrospective review of surgical procedures carried out by two surgeons over 5 years working from a base in Wangaratta, Victoria, with outreach services to Benalla, Bright and Mansfield was undertaken. Data were extracted from surgeon records using Medicare Benefits Schedule item numbers.
Results: A total of 18 029 procedures were performed over 5 years, with 15% of these performed in peripheral hospitals as part of an outreach service. A full range of general surgical procedures were undertaken, with endoscopies accounting for 32% of procedures. In addition, vascular procedures and emergency craniotomies were also performed. The majority of procedures undertaken at peripheral centres were minor procedures, with only two laparotomies performed at these centres over 5 years.
Conclusion: General surgeons working in rural centres are required to have broad skills and be able to undertake a large number of procedures. Trainees should be encouraged to consider rural practice, and those who are interested should consider the needs of the community in which they intend to practice. Outreach work to surrounding communities can be rewarding for both the surgeon and the community.
abstract_id: PUBMED:30502240
Improving Surgical Outreach in Palestine: Assessing Goals of Local and Visiting Surgeons. Background: Short-term surgical outreach is often criticized for lack of sustainability and partnership with local collaborators. As global surgical capability increases, there is increased focus on educating local providers. We sought to assess and compare the educational goals of local surgeons in the Palestinian territories with goals of visiting volunteer providers.
Methods: Electronic surveys were sent to Palestinian surgeons and compared with evaluation data collected from Palestine Children's Relief Fund volunteer providers.
Results: The response rate was 52% from Palestinian surgeons and 100% from volunteer providers, giving a combined response rate of 83%. Ninety-two percent of Palestinian surgeons desired protected time during each mission trip for formal didactic teaching and 92% learn new techniques best by performing skills on patients with expert surgeons observing and providing feedback. Most respondents requested the addition of case reviews or debriefing sessions after completion of surgical cases. Volunteer providers indicate that 86% of prior mission trips involved training of local surgeons and 100% plan to volunteer with the organization again in the future.
Conclusions: Surgical education is a vital component of any successful outreach program. Adult learning theory emphasizes the necessity of understanding the specific educational needs of participants to foster the most successful learning environment. This survey highlights the value of tailoring surgical specialty outreach to the explicit needs of local providers and patient populations, while also clearly demonstrating the importance of collaboration, feedback, and protected educational didactics as a critical focus of future surgical humanitarian endeavors.
abstract_id: PUBMED:31290468
Paediatric surgical outreach in central region of Ghana. Background: Conditions that are amenable to surgery are found globally. However, surgery is not easily accessible for most people in low- and middle-income countries due to physical and financial barriers, among others. One-way of mitigating against this situation is through surgical outreach programmes.
Patients And Methods: A paediatric surgical outreach in a teaching hospital in the Central Region of Ghana was carried out by a paediatric surgeon from Korle Bu Teaching Hospital. Data on the cases done from June 2011 to June 2014 were analysed.
Results: A total of 185 patients had surgery during the study. There were 153 males, and the mean age was 4.53 ± 3.67 years. Patients aged 1-5 years represented 51.9% of the patients. Twenty-four (13%) had major surgery and 161 (87%) had minor operations. The most common minor operation performed was inguinal herniotomy, representing 47.2% of the cases. None of the patients had any complications.
Conclusion: The need for paediatric surgical outreach programme has been shown in this paper as well as its cost-effectiveness. With the current rate of graduation of paediatric surgeons in Ghana, paediatric outreach programmes will be needed in Ghana in the foreseeable future. This outreach should be extended to other regions of the country to cover a larger percentage of children in Ghana.
abstract_id: PUBMED:24041561
What surgical skills rural surgeons need to master. Background: As new technology is developed and scientific evidence demonstrates strategies to improve the quality of care, it is essential that surgeons keep current with their skills. Rural surgeons need efficient and targeted continuing medical education that matches their broader scope of practice. Developing such a program begins with an assessment of the learning needs of the rural surgeon. The aim of this study was to assess the learning needs considered most important to surgeons practicing in rural areas.
Study Design: A needs assessment questionnaire was administered to surgeons practicing in rural areas. An additional gap analysis questionnaire was administered to registrants of a skills course for rural surgeons.
Results: Seventy-one needs assessment questionnaires were completed. The self-reported procedures most commonly performed included laparoscopic cholecystectomy (n = 44), hernia repair (n = 42), endoscopy (n = 43), breast surgery (n = 23), appendectomy (n = 20), and colon resection (n = 18). Respondents indicated that they would most like to learn more skills related to laparoscopic colon resection (n = 16), laparoscopic antireflux procedures (n = 6), laparoscopic common bile duct exploration/ERCP (n = 5), colonoscopy/advanced techniques and esophagogastroscopy (n = 4), and breast surgery (n = 4). Ultrasound, hand surgery, and leadership and communication were additional topics rated as useful by the respondents. Skills course participants indicated varying levels of experience and confidence with breast ultrasound, ultrasound for central line insertion, hand injury, and facial soft tissue injury.
Conclusions: Our results demonstrated that surgeons practicing in rural areas have a strong interest in acquiring additional skills in a variety of general and subspecialty surgical procedures. The information obtained in this study may be used to guide curriculum development of further postgraduate skills courses targeted to rural surgeons.
abstract_id: PUBMED:23317309
Surgical outreach program in poor rural Nigerian communities. Introduction: The majority of the world's population resides in rural areas without access to basic surgical care. Taraba State in North-Eastern Nigeria consists of rural communities where approximately 90% of the State's population resides.
Methods: This was a prospective study of patients whose surgical conditions were treated during surgical outreach program in rural North-Eastern Nigeria communities between February 2008 and July 2009.
Results: A total of 802 patients had 903 procedures due to the co-existence of multiple pathologies in 97 patients (12.1%). There were 506 males (63.1%) providing a male to female ratio of 1.7:1. Ages ranged between one month and 91 years (mean 35.2±18.8 SD). Hernia repair 404 (44.7%), hydrocelectomy±orchidectomy 133 (14.7%), lumps excision 143 (15.8%) and appendicectomy 66 (7.3%) were the most frequent procedures. The surgical conditions were frequently long in duration and huge in size. The duration of goitres ranged from 2 to 28 years (10.3±7.4) and 4 to 42 years (15±13.9) in patients with cleft lip and palate. The procedures were performed under spinal, general and local anaesthesia in 7.6%, 34.3% and 58.2% of patients, respectively. No mortality was recorded but the complications encountered included: surgical site infection 46 (5.1%), scrotal haematoma and oedema 6 (0.7%), haemorrhage 3 (0.3%) and partial wound dehiscence 3 (0.3%).
Conclusion: Surgical pathologies in rural communities are often multiple, wide ranging and of long duration. Such conditions can be successfully managed under local or spinal anaesthesia. Based on the authors' experience, a remarkable reduction in surgical disease burden is feasible in these communities using available, simple but effective options.
abstract_id: PUBMED:37300751
A surgical training system designed to help trainees acquire laparoscopic surgical skills at a rural Japanese institute. Purpose: The Endoscopic Surgical Skill Qualification System was established in Japan to evaluate safe endoscopic surgical techniques and teaching skills. Trainee surgeons obtaining this certification in rural hospitals are disadvantaged by the limited number of surgical opportunities. To address this problem, we established a surgical training system to educate trainee surgeons.
Methods: Eighteen certified expert surgeons affiliated with our department were classified into an experienced training system group (E group, n = 9) and a non-experienced group (NE group, n = 9). Results of the training system were then compared between the groups.
Results: The number of years required to become board certified was shorter in the E group (14 years) than in the NE group (18 years). Likewise, the number of surgical procedures performed before certification was lower in the E group (n = 30) than in the NE group (n = 50). An expert surgeon was involved in the creation of the certification video for all the E group participants. A questionnaire to board-certified surgeons revealed that guidance by a board-certified surgeon and trainee education (the surgical training system) were useful for obtaining certification.
Conclusions: Continuous surgical training, starting with trainee surgeons, appears useful for expediting their acquisition of technical certification in rural areas.
Answer: The surgical outreach programme in rural South Africa, specifically in KwaZulu-Natal (KZN), has shown significant clinical outputs, with a well-established infrastructure and a reliable programme that has seen a large number of patients and performed numerous operations. However, the impact of the programme on imparting surgical skills to resident staff is less certain. From 1998 to 2013, the programme in the Sisonke Health District (SHD) involved a substantial number of clinics, operations, and teaching sessions attended by staff, but the study raises questions about the strategic choices needed to improve access to surgical care and the specific outcomes of the programme (PUBMED:24388091).
In the context of surgical outreach, it is important to consider the broader landscape of rural surgical services. For instance, in North East Victoria, outreach surgical consulting has been associated with a higher proportion of new consultations and a trend towards more surgical procedures resulting from these consultations, suggesting that outreach can expand the referral base for surgeons and improve community access to specialist services (PUBMED:24299437). Additionally, in British Columbia, Canada, the decision-making processes around rural surgical and obstetrical patient selection are complex and require a comprehensive understanding of local capacity and resources, indicating that policies and guidelines should be hospital-specific (PUBMED:35802800).
In rural South Australia, general practitioners continue to play a major role in providing surgical services, and metropolitan teaching hospitals can support rural surgeons and foster a commitment to rural general surgery (PUBMED:9814737). Similarly, in rural Victoria, general surgeons are required to have broad skills to undertake a wide range of procedures, and outreach work can be rewarding for both the surgeon and the community (PUBMED:22931424).
Overall, while the surgical outreach programme in rural South Africa has been well run and has achieved significant clinical outputs, the specific impact on imparting surgical skills to resident staff remains an area for further exploration and strategic development. |
Instruction: Clinical trials in general surgical journals: are methods better reported?
Abstracts:
abstract_id: PUBMED:9889796
Clinical trials in general surgical journals: are methods better reported? Background: Reports of clinical trials often lack adequate descriptions of their design and analysis. Thus readers cannot properly assess the strength of the findings and are limited in their ability to draw their own conclusions. A review of 6 surgical journals in 1984 revealed that the frequency of reporting 11 basic elements of design and analysis in clinical trials was only 59%. This study attempted to identify areas that still need improvement.
Methods: Eligible studies published from July 1995 through June 1996 included all reports of comparative clinical trials on human subjects that were prospective and had at least 2 treatment arms. A total of 68 articles published in 6 general surgery journals were reviewed. The frequency that the previously identified 11 basic elements of design and analysis were reported was determined.
Results: Seventy-four percent of all items were reported accurately (a 15-percentage-point increase from the previous study), 4% were reported ambiguously, and 23% were not reported; improvement was seen in every journal. The reporting of eligibility criteria and statistical power improved the most. For 3 items, reporting was still not adequate: 32% of reports provided information about statistical power, 40% about the method of randomization, and 49% about whether the person assessing outcomes was blind to the treatment assignment.
Conclusions: Improvements have been made in reporting surgical clinical trials, but in general methodologic questions poorly answered in the 1980s continue to be answered poorly in the 1990s. Editors of surgical journals are urged to provide authors with guidelines on how to report clinical trial design and analysis.
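One small arithmetic point in these reporting audits: the move from 59% to 74% of items reported is a gain of 15 percentage points, which corresponds to roughly a 25% relative improvement. The sketch below only restates that distinction; the 59% and 74% figures come from the two reviews, while the item tallies are derived purely for illustration.

```python
# Minimal sketch of the reporting-completeness arithmetic described above.
items_per_article = 11
articles_reviewed = 68
total_items = items_per_article * articles_reviewed            # 748 item assessments

clearly_reported_1980s = 0.59   # 1981-1982 review
clearly_reported_1990s = 0.74   # 1995-1996 review
reported_accurately = round(clearly_reported_1990s * total_items)  # about 554 items

absolute_gain = clearly_reported_1990s - clearly_reported_1980s
relative_gain = absolute_gain / clearly_reported_1980s
print(f"~{reported_accurately} of {total_items} item assessments reported accurately")
print(f"Gain: {absolute_gain * 100:.0f} percentage points "
      f"({relative_gain:.0%} relative improvement)")
```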
abstract_id: PUBMED:6710354
Reporting clinical trials in general surgical journals. Readers need information about the design and analysis of a clinical trial to evaluate and interpret its findings. We reviewed 84 therapeutic trials appearing in six general surgical journals from July 1981 through June 1982 and assessed the reporting of 11 important aspects of design and analysis. Overall, 59% of the 11 items were clearly reported, 5% were ambiguously discussed, and 36% were not reported. The frequency of reporting in general surgical journals is thus similar to the 56% found by others for four general medical journals. Reporting was best for random allocation (89%), loss to follow-up (86%), and statistical analyses (85%). Reporting was most deficient for the method used to generate the treatment assignment (27%) and for the power of the investigation to detect treatment differences (5%). We recommend that clinical journals provide a list of important items to be included in reports on clinical trials.
abstract_id: PUBMED:34120618
Enabling patient-reported outcome measures in clinical trials, exemplified by cardiovascular trials. Objectives: There has been limited success in achieving integration of patient-reported outcomes (PROs) in clinical trials. We describe how stakeholders envision a solution to this challenge.
Methods: Stakeholders from academia, industry, non-profits, insurers, clinicians, and the Food and Drug Administration convened at a Think Tank meeting funded by the Duke Clinical Research Institute to discuss the challenges of incorporating PROs into clinical trials and how to address those challenges. Using examples from cardiovascular trials, this article describes a potential path forward with a focus on applications in the United States.
Results: Think Tank members identified one key challenge: a common understanding of the level of evidence that is necessary to support patient-reported outcome measures (PROMs) in trials. Think Tank participants discussed the possibility of creating general evidentiary standards depending upon contextual factors, but such guidelines could not be feasibly developed because many contextual factors are at play. The attendees posited that a more informative approach to PROM evidentiary standards would be to develop validity arguments akin to courtroom briefs, which would emphasize a compelling rationale (interpretation/use argument) to support a PROM within a specific context. Participants envisioned a future in which validity arguments would be publicly available via a repository, which would be indexed by contextual factors, clinical populations, and types of claims.
Conclusions: A publicly available repository would help stakeholders better understand what a community believes constitutes compelling support for a specific PROM in a trial. Our proposed strategy is expected to facilitate the incorporation of PROMs into cardiovascular clinical trials and trials in general.
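The repository envisioned by the Think Tank participants is described only at a conceptual level. Purely as a hypothetical illustration of the three index dimensions named above (contextual factors, clinical populations, and types of claims), the sketch below defines an invented entry structure; none of the field names or values come from the article.

```python
# Hypothetical sketch of how one entry in such a validity-argument repository
# might be indexed. All identifiers and example values are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ValidityArgumentEntry:
    prom_name: str                       # placeholder name of a PROM instrument
    clinical_population: str             # e.g. "chronic heart failure"
    claim_type: str                      # e.g. "symptom improvement"
    contextual_factors: List[str] = field(default_factory=list)
    rationale_url: str = ""              # link to the full interpretation/use argument

entry = ValidityArgumentEntry(
    prom_name="ExamplePROM",             # not a real instrument
    clinical_population="chronic heart failure",
    claim_type="symptom improvement",
    contextual_factors=["phase III trial", "open-label design"],
    rationale_url="https://example.org/validity-arguments/example",
)
print(entry.claim_type, entry.contextual_factors)
```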
abstract_id: PUBMED:34226803
Surgical Clinical Trials in India: Underutilized Opportunities. Clinical trials in surgery are central to research; however, very few surgical clinical trials are conducted in India. Such a paucity of surgical trials is a cause for concern and prompted us to explore the recent landscape of surgical trials in India. We reviewed all clinical trials from general surgery or subspecialties of general surgery registered with the Clinical Trials Registry of India website between 2018 and 15 May 2021. Specific details such as the surgical subspecialty, study design, multicentric or single-institution status, and funding were obtained. We found a total of 16,710 trials; of these, 4,119 (24.6%) were related to all surgical fields. Only 136 (0.8%) trials were from general surgery and its subspecialties. Most trials were registered from Central Government Institutions (48%), followed by State Government Medical Colleges (11%). The largest number of trials was registered from GI surgery (32%). Most (90.5%) trials were single-centre based. Common barriers to research are well known; if the State Government Medical Colleges can mentor a culture of research from an early stage of surgical training, research productivity can improve. Multicentre trials involving smaller hospitals from tier 2 and tier 3 cities are a potential solution to one of the major obstacles of surgical trials, i.e. the small number of patients, especially in this pandemic-induced drought of elective surgical operations. A positive change in the attitude of surgeons and provision of the necessary funding can encourage more surgical clinical trials in India.
abstract_id: PUBMED:29284407
Clinical drug trials in general practice: how well are external validity issues reported? Background: When reading a report of a clinical trial, it should be possible to judge whether the results are relevant for your patients. Issues affecting the external validity or generalizability of a trial should therefore be reported. Our aim was to determine whether articles with published results from a complete cohort of drug trials conducted entirely or partly in general practice reported sufficient information about the trials to consider the external validity.
Methods: A cohort of 196 drug trials in Norwegian general practice was previously identified from the Norwegian Medicines Agency archive with year of application for approval 1998-2007. After comprehensive literature searches, 134 journal articles reporting results published from 2000 to 2015 were identified. In these articles, we considered the reporting of the following issues relevant for external validity: reporting of the clinical setting; selection of patients before inclusion in a trial; reporting of patients' co-morbidity, co-medication or ethnicity; choice of primary outcome; and reporting of adverse events.
Results: Of these 134 articles, only 30 (22%) reported the clinical setting of the trial. The number of patients screened before enrolment was reported in 61 articles (46%). The primary outcome of the trial was a surrogate outcome for 60 trials (45%), a clinical outcome for 39 (29%) and a patient-reported outcome for 25 (19%). Clinical details of adverse events were reported in 124 (93%) articles. Co-morbidity of included participants was reported in 54 trials (40%), co-medication in 27 (20%) and race/ethnicity in 78 (58%).
Conclusions: The clinical setting of the trials, the selection of patients before enrolment, and co-morbidity or co-medication of participants was most commonly not reported, limiting the possibility to consider the generalizability of a trial. It may therefore be difficult for readers to judge whether drug trial results are applicable to clinical decision-making in general practice or when developing clinical guidelines.
abstract_id: PUBMED:23336898
Cooperative group clinical trials in general thoracic surgery: report from the 2012 Robert Ginsberg Clinical Trials Meeting of the General Thoracic Surgical Club. At the 25th Annual General Thoracic Surgical Club meeting in March 2012, the major cooperative groups presented updates on clinical trials at the Robert Ginsberg Clinical Trials Meeting. There were 57 members in attendance. Representatives from the Radiation Treatment Oncology Group (RTOG), American College of Surgeons Oncology Group (ACOSOG), Cancer and Leukemia Group B (CALGB), Southwest Oncology Group (SWOG), National Cancer Institute of Canada Clinical Trials Group (NCIC), and the Eastern Cooperative Oncology Group (ECOG) presented an overview of trials currently accruing or in development. These include oncologic trials that thoracic surgeons are currently accruing patients to in North America. The purpose of this review is to centralize the information to assist surgeons enrolling patients into oncologic clinical trials in thoracic surgery.
abstract_id: PUBMED:32757118
How to Include Patient-Reported Outcome Measures in Clinical Trials. Purpose Of Review: Patient-reported outcome measures are increasingly important measures of patient experience, which can increase research robustness, maximise economic value and improve patient outcomes. This review outlines the benefits, challenges and practicalities of incorporating patient-reported outcome measures in clinical trials.
Recent Findings: Patient-reported outcome measures are often the best way of measuring patient symptoms and quality of life. They can help reduce observer bias, engage patients in the research process, and inform health service resource planning. A range of tools exists to help clinicians and researchers select and use patient-reported outcome measures; key issues to consider when selecting an appropriate tool include the development, format and psychometric properties of the measure. The use of patient-reported outcome measures allows us to better understand the patient experience and patients' values. This article outlines how patient-reported outcome measures can be incorporated in clinical trials.
abstract_id: PUBMED:35094586
The PROTEUS-Trials Consortium: Optimizing the use of patient-reported outcomes in clinical trials. Background: The assessment of patient-reported outcomes in clinical trials has enormous potential to promote patient-centred care, but for this potential to be realized, the patient-reported outcomes must be captured effectively and communicated clearly. Over the past decade, methodologic tools have been developed to inform the design, analysis, reporting, and interpretation of patient-reported outcome data from clinical trials. We formed the PROTEUS-Trials Consortium (Patient-Reported Outcomes Tools: Engaging Users and Stakeholders) to disseminate and implement these methodologic tools.
Methods: PROTEUS-Trials are engaging with patient, clinician, research, and regulatory stakeholders from 27 organizations in the United States, Canada, Australia, the United Kingdom, and Europe to develop both organization-specific and cross-cutting strategies for implementing and disseminating the methodologic tools. Guided by the Knowledge-to-Action framework, we conducted consortium-wide webinars and meetings, as well as individual calls with participating organizations, to develop a workplan, which we are currently executing.
Results: Six methodologic tools serve as the foundation for PROTEUS-Trials dissemination and implementation efforts: the Standard Protocol Items: Recommendations for Interventional Trials-patient-reported outcome extension for writing protocols with patient-reported outcomes, the International Society for Quality of Life Research Minimum Standards for selecting a patient-reported outcome measure, Setting International Standards in Analysing Patient-Reported Outcomes and Quality of Life Endpoints Data Consortium recommendations for patient-reported outcome data analysis, the Consolidated Standards of Reporting Trials-patient-reported outcome extension for reporting clinical trials with patient-reported outcomes, recommendations for the graphic display of patient-reported outcome data, and a Clinician's Checklist for reading and using an article about patient-reported outcomes. The PROTEUS-Trials website (www.TheProteusConsortium.org) serves as a central repository for the methodologic tools and associated resources. To date, we have developed (1) a roadmap to visually display where each of the six methodologic tools applies along the clinical trial trajectory, (2) web tutorials that provide guidance on the methodologic tools at different levels of detail, (3) checklists to provide brief summaries of each tool's recommendations, (4) a handbook to provide a self-guided approach to learning about the tools and recommendations, and (5) publications that address key topics related to patient-reported outcomes in clinical trials. We are also conducting organization-specific activities, including meetings, presentations, workshops, and webinars to publicize the existence of the methodologic tools and the PROTEUS-Trials resources. Work is ongoing to develop communication strategies that ensure PROTEUS-Trials reaches key audiences with relevant information about patient-reported outcomes in clinical trials and about the PROTEUS-Trials resources.
Discussion: The PROTEUS-Trials Consortium aims to help researchers generate patient-reported outcome data from clinical trials to (1) enable investigators, regulators, and policy-makers to take the patient perspective into account when conducting research and making decisions; (2) help patients understand treatment options and make treatment decisions; and (3) inform clinicians' discussions with patients regarding treatment options. In these ways, the PROTEUS Consortium promotes patient-centred research and care.
abstract_id: PUBMED:24265408
Patient-reported outcomes in ovarian cancer clinical trials. There is general acceptance of the importance of incorporating patient-reported outcome (PRO) measures including health-related quality of life (HRQOL) into clinical trials, and there are now a number of guidance documents available on how to use PROs for regulatory authorities and in comparative effectiveness research. The methods used to collect, analyse and report PRO data in clinical trials have received considerable scrutiny, revealing many shortcomings in the standard of reporting of HRQOL in clinical trials as well as in how PROs have been selected and analysed in clinical trials. This has led to the recent Consolidated Standards of Reporting Trials (CONSORT)-PRO extension statement, which lays down a framework for the selection, analysis and reporting of PROs, either as primary or secondary trial end points, thus ensuring scientific rigour. Adherence to these guidelines can only improve the conduct of clinical trials and interpretation of their results, which may help avoid missing out on opportunities as in the past. We review pertinent literature on PRO measures and discuss how various recent PRO guidance documents should be applied to ovarian cancer clinical trials.
abstract_id: PUBMED:33458920
Patient-reported Outcomes (PROs) in clinical trials in paediatric dentistry. Patient-reported outcomes (PROs) are reports directly from patients without interpretation by clinicians or others and captured using validated patient-reported outcome measures (PROMs). These measures are increasingly employed in clinical practice and can be incorporated into clinical trials. Benefits of using PROs include reducing observer bias; eliciting unique views on aspects important to patients and increasing public accountability. Despite inclusion in clinical trials PRO data is often under-reported and the results may not be adopted into clinical practice due to concerns about the data generated. This review discusses what PROs are and how to measure them; the benefits of using PROs; how to choose an appropriate PROM to answer the research question; considerations for using PROs in paediatric dentistry and reporting guidelines. Finally, some examples of how PROs have been included in paediatric dentistry trials are given along with discussion of the development of core outcome sets and how these may improve reporting of PROs in the future.
Answer: Based on the provided abstracts, it appears that there has been some improvement in the reporting of methods in clinical trials published in general surgical journals, but there are still areas that need further enhancement.
A study that reviewed articles published from July 1995 through June 1996 found that 74% of all items related to the design and analysis of clinical trials were reported accurately, which was a 15-percentage-point increase from a previous study conducted in 1984. However, the reporting of statistical power, the method of randomization, and blinding of outcome assessors was still inadequate, with only 32%, 40%, and 49% of reports providing this information, respectively (PUBMED:9889796).
This finding is consistent with an earlier review of 84 therapeutic trials published in six general surgical journals from July 1981 through June 1982, which found that only 59% of important aspects of design and analysis were clearly reported. The most deficient areas were the method used to generate the treatment assignment (27%) and the power of the investigation to detect treatment differences (5%) (PUBMED:6710354).
The abstracts also highlight the importance of including patient-reported outcome measures (PROMs) in clinical trials to enhance the relevance and patient-centeredness of the research. However, challenges remain in integrating PROMs effectively, and there is a need for better understanding and dissemination of methodological tools to support their use (PUBMED:34120618, PUBMED:32757118, PUBMED:35094586).
In summary, while there has been progress in the reporting of methods in clinical trials in general surgical journals, there is still a need for continued improvement, particularly in the areas of statistical power, randomization methods, and blinding. Additionally, the integration of PROMs into clinical trials is recognized as important but requires further methodological support and standardization (PUBMED:9889796, PUBMED:6710354, PUBMED:34120618, PUBMED:32757118, PUBMED:35094586). |
Instruction: Comparison of 1997 Medicare relative value units for gender-specific procedures: is Adam still worth more than Eve?
Abstracts:
abstract_id: PUBMED:20169672
Medicare program; payment policies under the physician fee schedule and other revisions to Part B for CY 2010. Final rule with comment period. This final rule with comment period implements changes to the physician fee schedule and other Medicare Part B payment policies to ensure that our payment systems are updated to reflect changes in medical practice and the relative value of services. It also implements or discusses certain provisions of the Medicare Improvements for Patients and Providers Act of 2008. (See the Table of Contents for a listing of the specific issues addressed in this rule.) This final rule with comment period also finalizes the calendar year (CY) 2009 interim relative value units (RVUs) and issues interim RVUs for new and revised codes for CY 2010. In addition, in accordance with the statute, it announces that the update to the physician fee schedule conversion factor is -21.2 percent for CY 2010, the preliminary estimate for the sustainable growth rate for CY 2010 is -8.8 percent, and the conversion factor (CF) for CY 2010 is $28.4061.
abstract_id: PUBMED:12120662
Medicare program; criteria for submitting supplemental practice expense survey data under the physician fee schedule. Interim final rule with comment period. This interim final rule revises criteria that we apply to supplemental survey information supplied by physician, non-physician, and supplier groups for use in determining practice expense relative value units under the physician fee schedule. This interim final rule solicits public comments on the revised criteria for supplemental surveys.
abstract_id: PUBMED:22145186
Medicare program; payment policies under the physician fee schedule, five-year review of work relative value units, clinical laboratory fee schedule: signature on requisition, and other revisions to part B for CY 2012. Final rule with comment period. This final rule with comment period addresses changes to the physician fee schedule and other Medicare Part B payment policies to ensure that our payment systems are updated to reflect changes in medical practice and the relative value of services. It also addresses, implements or discusses certain statutory provisions including provisions of the Patient Protection and Affordable Care Act, as amended by the Health Care and Education Reconciliation Act of 2010 (collectively known as the Affordable Care Act) and the Medicare Improvements for Patients and Providers Act (MIPPA) of 2008. In addition, this final rule with comment period discusses payments for Part B drugs; Clinical Laboratory Fee Schedule: Signature on Requisition; Physician Quality Reporting System; the Electronic Prescribing (eRx) Incentive Program; the Physician Resource-Use Feedback Program and the value modifier; productivity adjustment for ambulatory surgical center payment system and the ambulance, clinical laboratory, and durable medical equipment prosthetics orthotics and supplies (DMEPOS) fee schedules; and other Part B related issues.
abstract_id: PUBMED:17171850
Medicare program; revisions to payment policies, five-year review of work relative value units, changes to the practice expense methodology under the physician fee schedule, and other changes to payment under part B; revisions to the payment policies of ambulance services under the fee schedule for ambulance services; and ambulance inflation factor update for CY 2007. Final rule with comment period. This final rule with comment period addresses certain provisions of the Deficit Reduction Act of 2005, as well as making other changes to Medicare Part B payment policy. These changes are intended to ensure that our payment systems are updated to reflect changes in medical practice and the relative value of services. This final rule with comment period also discusses geographic practice cost indices (GPCI) changes; requests for additions to the list of telehealth services; payment for covered outpatient drugs and biologicals; payment for renal dialysis services; policies related to private contracts and opt-out; policies related to bone mass measurement (BMM) services, independent diagnostic testing facilities (IDTFs), the physician self-referral prohibition; laboratory billing for the technical component (TC) of physician pathology services; the clinical laboratory fee schedule; certification of advanced practice nurses; health information technology, the health care information transparency initiative; updates the list of certain services subject to the physician self-referral prohibitions, finalizes ASP reporting requirements, and codifies Medicare's longstanding policy that payment of bad debts associated with services paid under a fee schedule/charge-based system are not allowable. We are also finalizing the calendar year (CY) 2006 interim RVUs and are issuing interim RVUs for new and revised procedure codes for CY 2007. In addition, this rule includes revisions to payment policies under the fee schedule for ambulance services and the ambulance inflation factor update for CY 2007. As required by the statute, we are announcing that the physician fee schedule update for CY 2007 is -5.0 percent, the initial estimate for the sustainable growth rate for CY 2007 is 2.0 percent and the CF for CY 2007 is $35.9848.
abstract_id: PUBMED:18435218
Multifactor productivity in health care. The following overview introduces a series of articles that focuses on multifactor productivity (MFP) growth in health care. This edition of the Health Care Financing Review begins with a theoretical discussion of the Medicare Economic Index (MEI) and the conceptual reasons for the MFP adjustment incorporated into the Medicare physician fee schedule (MPFS). The issue then moves on to an exploratory data-driven analysis of MFP growth in physicians' offices, and an evaluation of that exploration. Finally, the edition concludes with an empirically-based analysis of MFP growth in the hospital sector, as well as a study related to Medicare physician payment that looks at the individual contributors to recent growth in relative value units (RVUs).
abstract_id: PUBMED:21121181
Medicare program; payment policies under the physician fee schedule and other revisions to Part B for CY 2011. Final rule with comment period. This final rule with comment period addresses changes to the physician fee schedule and other Medicare Part B payment policies to ensure that our payment systems are updated to reflect changes in medical practice and the relative value of services. It finalizes the calendar year (CY) 2010 interim relative value units (RVUs) and issues interim RVUs for new and revised procedure codes for CY 2011. It also addresses, implements, or discusses certain provisions of both the Affordable Care Act (ACA) and the Medicare Improvements for Patients and Providers Act of 2008 (MIPPA). In addition, this final rule with comment period discusses payments under the Ambulance Fee Schedule (AFS), the Ambulatory Surgical Center (ASC) payment system, and the Clinical Laboratory Fee Schedule (CLFS), payments to end-stage renal disease (ESRD) facilities, and payments for Part B drugs. Finally, this final rule with comment period also includes a discussion regarding the Chiropractic Services Demonstration program, the Competitive Bidding Program for durable medical equipment, prosthetics, orthotics, and supplies (CBP DMEPOS), and provider and supplier enrollment issues associated with air ambulances.
abstract_id: PUBMED:30338194
Gender-specific Knowledge of Diabetes and Its Management Among Patients Visiting Outpatient Clinics in Faisalabad, Pakistan. Introduction: Diabetes mellitus is an emerging public health concern. The aim of this study was to assess the gender-specific knowledge of patients about diabetes mellitus, its complications, and its management.
Methods: A cross-sectional study was conducted in outpatient clinics of Faisalabad, Pakistan, from November 2017 to March 2018. Consecutive patients with diabetes, aged >18 years, were administered a validated questionnaire related to knowledge of diabetes, its complications, and its management. An analysis was conducted using IBM SPSS Statistics for Windows, Version 19.0 software (IBM Corp., Armonk, NY). Results were stratified on the basis of gender and were compared using chi-square tests.
Results: Of the 840 patients recruited, 76.4% were aged >50 years. About 57% were women, and 43% were men. Most men (89.4%) and women (91.7%) were aware that the management of diabetes requires cutting down on the consumption of refined sugar, and 64.6% and 50.4%, respectively, reported that they exercise regularly to control their glucose levels. Moreover, 14% of the men and 25% of the women responded that they knew neuropathy is a complication of diabetes.
Conclusion: Diabetes mellitus has debilitating effects on patients and communities. To effectively manage diabetes and to delay the development of complications, there is a dire need to educate patients, families, and communities.
abstract_id: PUBMED:14610760
Medicare program; revisions to payment policies under the physician fee schedule for calendar year 2004. Final rule with comment period. This final rule will refine the resource-based practice expense relative value units (RVUs) and make other changes to Medicare Part B payment policy. The policy changes concern: Medicare Economic Index, practice expense for professional component services, definition of diabetes for diabetes self-management training, supplemental survey data for practice expense, geographic practice cost indices, and several coding issues. In addition, this rule updates the codes subject to the physician self-referral prohibition. We also make revisions to the sustainable growth rate and the anesthesia conversion factor. These changes will ensure that our payment systems are updated to reflect changes in medical practice and the relative value of services. We are also finalizing the calendar year (CY) 2003 interim RVUs and are issuing interim RVUs for new and revised procedure codes for CY 2004. As required by the statute, we are announcing that the physician fee schedule update for CY 2004 is -4.5 percent, the initial estimate of the sustainable growth rate for CY 2004 is 7.4 percent, and the conversion factor for CY 2004 is $35.1339. We published a proposed rule (68 FR 50428) in the Federal Register on Part B drug payment reform on August 20, 2003. This proposed rule would also make changes to Medicare payment for furnishing or administering certain drugs and biologicals. We have not finalized these proposals to take into account that the Congress is considering legislation that would address these issues. We will continue to monitor legislative activity that would reform the Medicare Part B drug payment system. If legislation is not enacted soon on this issue, we remain committed to completing the regulatory process.
abstract_id: PUBMED:16299947
Medicare program; revisions to payment policies under the physician fee schedule for calendar year 2006 and certain provisions related to the Competitive Acquisitions Program of outpatient drugs and biologicals under Part B. Final rule with comment. This rule addresses Medicare Part B payment policy, including the physician fee schedule that are applicable for calendar year (CY) 2006; and finalizes certain provisions of the interim final rule to implement the Competitive Acquisition Program (CAP) for Part B Drugs. It also revises Medicare Part B payment and related policies regarding: Physician work; practice expense (PE) and malpractice relative value units (RVUs); Medicare telehealth services; multiple diagnostic imaging procedures; covered outpatient drugs and biologicals; supplemental payments to Federally Qualified Health Centers (FQHCs); renal dialysis services; coverage for glaucoma screening services; National Coverage Decision (NCD) timeframes; and physician referrals for nuclear medicine services and supplies to health care entities with which they have financial relationships. In addition, the rule finalizes the interim RVUs for CY 2005 and issues interim RVUs for new and revised procedure codes for CY 2006. This rule also updates the codes subject to the physician self-referral prohibition and discusses payment policies relating to teaching anesthesia services, therapy caps, private contracts and opt-out, and chiropractic and oncology demonstrations. As required by the statute, it also announces that the physician fee schedule update for CY 2006 is -4.4 percent, the initial estimate for the sustainable growth rate for CY 2006 is 1.7 percent and the conversion factor for CY 2006 is $36.1770.
abstract_id: PUBMED:26571548
Medicare Program; Revisions to Payment Policies Under the Physician Fee Schedule and Other Revisions to Part B for CY 2016. Final rule with comment period. This major final rule with comment period addresses changes to the physician fee schedule, and other Medicare Part B payment policies to ensure that our payment systems are updated to reflect changes in medical practice and the relative value of services, as well as changes in the statute.
Answer: The provided abstracts do not contain any direct information regarding a comparison of 1997 Medicare relative value units (RVUs) for gender-specific procedures or an evaluation of whether procedures typically associated with male patients ("Adam") were valued higher than those associated with female patients ("Eve"). The abstracts focus on various updates and revisions to the Medicare physician fee schedule, payment policies, and relative value units over different calendar years, but none of them specifically address the comparison of RVUs for gender-specific procedures in 1997 or at any other time.
To answer the question, one would need to look at historical data and analyses from 1997 that specifically compare RVUs for procedures commonly performed on male patients versus those commonly performed on female patients. This information would likely be found in policy analyses, research studies, or Medicare payment databases from that time period, none of which are provided in the abstracts here. |
Instruction: Do HEDIS measures reflect cost-effective practices?
Abstracts:
abstract_id: PUBMED:12406482
Do HEDIS measures reflect cost-effective practices? Purpose: Whether the Health Plan Employer Data and Information Set (HEDIS) performance measures for managed care plans encourage a cost-effective use of society's resources has not been quantified. Our study objectives were to examine the cost-effectiveness evidence for the clinical practices underlying HEDIS 2000 measures and to develop a list of practices not reflected in HEDIS that have evidence of cost effectiveness.
Data Sources: Two databases of economic evaluations (Harvard School of Public Health Cost-Utility Registry and the Health Economics Evaluation Database) and two published lists of cost-effectiveness ratios in health and medicine.
Study Selection: For each of the 15 "effectiveness of care" measures in HEDIS 2000, we searched the data through 1998 for cost-effectiveness ratios of similar interventions and target populations. We also searched for important interventions with evidence of cost-effectiveness (<$20,000 per life-year [LY] or quality-adjusted life year [QALY] gained), which are not included in HEDIS. All ratios were standardized to 1998 dollars. The data were collected and analyzed during fall 2000 to summer 2001.
Data Extraction: Cost-effectiveness ratios reporting outcomes in terms of cost/LY or cost/QALY gained were included if they matched the intervention and population covered by the HEDIS measure.
Data Synthesis: Evidence was available for 11 of the 15 HEDIS measures. Cost-effectiveness ranges from cost saving to $660,000/LY gained. There are numerous non-HEDIS interventions with some evidence of cost effectiveness, particularly interventions to promote healthy behaviors.
Conclusions: HEDIS measures generally reflect cost-effective practices; however, in a number of cases, practices may not be cost effective for certain subgroups. Data quality and availability as well as study perspective remain key challenges in judging cost effectiveness. Opportunities exist to refine existing measures and to develop additional measures, which may promote a more efficient use of societal resources, although more research is needed on whether these measures would also satisfy other desirable attributes of HEDIS.
abstract_id: PUBMED:18559792
Cost sharing and HEDIS performance. Physicians, health plans, and health systems are increasingly evaluated and rewarded based on Healthcare Effectiveness Data and Information Set (HEDIS) and HEDIS-like performance measures. Concurrently, employers and health plans continue to try to control expenditures by increasing out-of-pocket costs for patients. The authors use fixed-effect logit models to assess how rising copayment rates for physician office visits and prescription drugs affect performance on HEDIS measures. Findings suggest that the increase in copayment rates lowers performance scores, demonstrating the connection between financial aspects of plan design and quality performance, and highlighting the potential weakness of holding plans and providers responsible for performance when payers and benefit plan managers also influence performance. Yet the effects are not consistent across all domains and, in many cases, are relatively modest in magnitude. This may reflect the HEDIS definitions and suggests that more sensitive measures may capture the impact of benefit design changes on performance.
abstract_id: PUBMED:32562878
Clinical and Economic Outcomes in Patients with Persistent Asthma Who Attain Healthcare Effectiveness Data and Information Set Measures. Background: Attainment of asthma-specific US Healthcare Effectiveness Data and Information Set (HEDIS) quality measures may be associated with improved clinical outcomes and reduced economic burden.
Objective: We examined the relationship between the attainment of HEDIS measures asthma medication ratio (AMR) and medication management for people with asthma (MMA) on clinical and economic outcomes.
Methods: This retrospective claims database analysis linked to ambulatory electronic medical records enrolled US patients aged ≥5 years with persistent asthma between May 2015 and April 2017. The attainment of AMR ≥0.5 and MMA ≥75% was determined over a 1-year premeasurement period. Asthma exacerbations and asthma-related health care costs were evaluated during the subsequent 12-month measurement period, comparing patients attaining 1 or both measures with those not attaining either.
Results: In total, 32,748 patients were included, 75.2% of whom attained AMR (n = 24,388) and/or MMA (n = 12,042) during the premeasurement period. Fewer attainers of 1 or more HEDIS measures had ≥1 asthma-related hospitalizations, emergency department visit, corticosteroid burst, or exacerbation (4.9% vs 7.3%; 9.6% vs 18.2%; 43.8% vs 51.6%; 14.3% vs 23.3%, respectively; all P < .001) compared with nonattainers. In adjusted analyses, HEDIS attainment was associated with a lower likelihood of exacerbations (odds ratio: 0.63, [95% confidence interval: 0.60-0.67]; P < .001). The attainment of ≥1 HEDIS measures lowered total and asthma-related costs, and asthma exacerbation-related health care costs per patient relative to nonattainers (cost ratio: 0.87, P < .001; 0.96, P = .02; and 0.59, P < .001, respectively). Overall and asthma-specific costs were lower for patients attaining AMR, but not MMA.
Conclusions: HEDIS attainment was associated with significantly improved asthma outcomes and lower asthma-specific costs.
abstract_id: PUBMED:31974901
Association Between HEDIS Performance and Primary Care Physician Age, Group Affiliation, Training, and Participation in ACA Exchanges. Background: There are a limited number of studies investigating the relationship between primary care physician (PCP) characteristics and the quality of care they deliver.
Objective: To examine the association between PCP performance and physician age, solo versus group affiliation, training, and participation in California's Affordable Care Act (ACA) exchange.
Design: Observational study of 2013-2014 data from Healthcare Effectiveness Data and Information Set (HEDIS) measures and select physician characteristics.
Participants: PCPs in California HMO and PPO practices (n = 5053) with part of their patient panel covered by a large commercial health insurance company.
Main Measures: Hemoglobin A1c testing; medical attention nephropathy; appropriate treatment hypertension (ACE/ARB); breast cancer screening; proportion days covered by statins; monitoring ACE/ARBs; monitoring diuretics. A composite performance measure also was constructed.
Key Results: For the average 35- versus 75-year-old PCP, regression-adjusted mean composite relative performance scores were at the 60th versus 47th percentile (89% vs. 86% composite absolute HEDIS scores; p < .001). For group versus solo PCPs, scores were at the 55th versus 50th percentiles (88% vs. 87% composite absolute HEDIS scores; p < .001). The effect of age on performance was greater for group versus solo PCPs. There was no association between scores and participation in ACA exchanges.
Conclusions: The associations between population-based care performance measures and PCP age, solo versus group affiliation, training, and participation in ACA exchanges, while statistically significant in some cases, were small. Understanding how to help older PCPs excel equally well in group practice compared with younger PCPs may be a fruitful avenue of future research.
abstract_id: PUBMED:10327824
HEDIS measures and managed care enrollment. This article examines the relationship between 1996 health plan enrollment and both HEDIS-based plan performance ratings and individual HEDIS measures. Data were obtained from a large firm that collected, aggregated, and disseminated plan performance ratings to its employees. Plan market share regressions are estimated controlling for out-of-pocket price and model type in addition to the plan ratings and HEDIS measures. The results suggest that employees did not respond strongly to the provided ratings. There are several potential explanations for the lack of response, including difficulty understanding the ratings and never having seen them. In addition, employees may base their plan choices on information that is obtained from their own past experience, friends, family, and colleagues. The pattern of results suggests that such information is important. Counterintuitive signs most likely reflect an inverse correlation between some HEDIS ratings (or measures) and attributes employees observe informally.
abstract_id: PUBMED:25758917
The HEDIS Medication Management for People with Asthma Measure is Not Related to Improved Asthma Outcomes. Background: A new Healthcare Effectiveness Data and Information Set (HEDIS) asthma quality-of-care measure designed to quantify patient adherence to asthma controller medication has been implemented. The relationship between this measure and asthma outcomes is unknown.
Objective: To examine the relationship between the HEDIS Medication Management for people with Asthma (MMA) measure and asthma outcomes.
Methods: Administrative data identified 30,040 patients who met HEDIS criteria for persistent asthma during 2012. These patients were classified as compliant or noncompliant with the MMA measure at the 75% and 50% threshold, respectively. The association between MMA compliance in 2012 and asthma outcomes in 2013 was determined.
Results: Patients who were 75% or 50% MMA compliant in 2012 showed no clinically meaningful difference in asthma-related hospitalizations, emergency department visits, or rescue inhaler dispensing in 2013 compared with those who were noncompliant. Stepwise comparison of patients who were 75% or more, 50% to 74%, and less than 50% MMA compliant showed no meaningful difference in asthma outcomes between groups.
Conclusions: Compliance with the HEDIS MMA measure is not related to improvement in the asthma outcomes assessed (rescue inhaler dispensing, asthma-coded hospitalizations, or asthma-coded emergency department visits).
abstract_id: PUBMED:12369232
Cost-benefit analysis of a new HEDIS performance measure for pneumococcal vaccination. Objectives: Measurement of the quality of care provided by managed care organizations (MCOs) has achieved national prominence, though there is controversy regarding its value. This article assesses the economic implications of a new Health Plan Employer Data and Information Set (HEDIS) measure for pneumococcal vaccination.
Methods: A Markov decision model, with Monte Carlo simulations, was utilized to conduct a cost-benefit analysis of annual HEDIS-associated interventions, which were repeated for 5 consecutive years, in an average Medicare MCO, using a societal perspective and a 3% annual discount rate.
Results: Compared with the status quo, the HEDIS intervention will be cost saving 99.8% of the time, with an average net savings of $3.80 per enrollee (95% probability interval: $0.73-$6.87).
Conclusions: The new HEDIS measure will save societal dollars. This type of analysis is essential if performance measurement is to become a legitimate part of our health care landscape.
abstract_id: PUBMED:28693351
Associations Between Community Sociodemographics and Performance in HEDIS Quality Measures: A Study of 22 Medical Centers in a Primary Care Network. Evaluation and payment for health plans and providers have been increasingly tied to their performance on quality metrics, which can be influenced by patient- and community-level sociodemographic factors. The aim of this study was to examine whether performance on Healthcare Effectiveness Data and Information Set (HEDIS) measures varied as a function of community sociodemographic characteristics at the primary care clinic level. Twenty-two primary care sites of a large multispecialty group practice were studied during the period of April 2013 to June 2016. Significant associations were found between sites' performance on selected HEDIS measures and their neighborhood sociodemographic characteristics. Outcome measures had stronger associations with sociodemographic factors than did process measures, with a range of significant correlation coefficients (absolute value, regardless of sign) from 0.44 to 0.72. Sociodemographic factors accounted for as much as 25% to 50% of the observed variance in measures such as HbA1c or blood pressure control.
abstract_id: PUBMED:30142069
Rates for HEDIS Screening for Diabetic Nephropathy Quality Measure May Be Overstated. The Healthcare Effectiveness Data and Information Set (HEDIS) is used by health plans to measure and report on quality and performance. This study evaluated the appropriateness of the prescription drug compliance step for the Medical Attention for Nephropathy quality measure for patients with diabetes. Data from national commercial claims for 28,348,363 persons were reviewed. The study applied the standard HEDIS specifications for compliance in medical attention for nephropathy for diabetic patients. Evaluation of the third and final process (evidence of angiotensin-converting enzyme [ACE] inhibitors or angiotensin II receptor blockers [ARBs]) found that the addition of this step contributed 14% to 16% of the numerator, bringing the final rate to the >80% range. Yet, presence of a prescription for an ACE inhibitor or ARB did not confirm microalbuminuria. Only 1% of the persons satisfying Step 3 had evidence of microalbuminuria in years prior and none in the reporting year. Use of these medications does not obviate the need for a nephropathy screening in diabetics. Inclusion of these medications as numerator compliance leads to overreporting and may contribute to underscreening of a population at risk.
abstract_id: PUBMED:22451701
Accuracy and usefulness of the HEDIS childhood immunization measures. Objective: With the use of Centers for Disease Control and Prevention (CDC) immunization recommendations as the gold standard, our objectives were to measure the accuracy ("is this child up-to-date on immunizations?") and usefulness ("is this child due for catch-up immunizations?") of the Healthcare Effectiveness Data and Information Set (HEDIS) childhood immunization measures.
Methods: For children aged 24 to 35 months from the 2009 National Immunization Survey, we assessed the accuracy and usefulness of the HEDIS childhood immunization measures for 6 individual immunizations and a composite.
Results: A total of 12 096 children met all inclusion criteria and composed the study sample. The HEDIS measures had >90% accuracy when compared with the CDC gold standard for each of the 6 immunizations (range, 94.3%-99.7%) and the composite (93.8%). The HEDIS measure was least accurate for hepatitis B and pneumococcal conjugate immunizations. The proportion of children for which the HEDIS measure yielded a nonuseful result (ie, an incorrect answer to the question, "is this child due for catch-up immunization?") ranged from 0.33% (varicella) to 5.96% (pneumococcal conjugate). The most important predictor of HEDIS measure accuracy and usefulness was the CDC-recommended number of immunizations due at age 2 years; children with zero or all immunizations due were the most likely to be correctly classified.
Conclusions: HEDIS childhood immunization measures are, on the whole, accurate and useful. Certain immunizations (eg, hepatitis B, pneumococcal conjugate) and children (eg, those with a single overdue immunization), however, are more prone to HEDIS misclassification.
Answer: HEDIS measures generally reflect cost-effective practices. Evidence was available for 11 of the 15 HEDIS measures examined in a study, with cost-effectiveness ranging from cost-saving to $660,000 per life-year (LY) gained. However, there are cases where practices may not be cost-effective for certain subgroups. Additionally, there are numerous non-HEDIS interventions with some evidence of cost-effectiveness, particularly those that promote healthy behaviors (PUBMED:12406482). Despite this, the quality of data and study perspective remain challenges in judging cost-effectiveness, and there is room to refine existing measures and develop additional ones to promote a more efficient use of societal resources (PUBMED:12406482).
Moreover, the attainment of HEDIS measures for asthma, such as the asthma medication ratio (AMR) and medication management for people with asthma (MMA), has been associated with improved clinical outcomes and reduced economic burden, indicating that HEDIS measures can lead to cost-effective healthcare practices (PUBMED:32562878). However, it is important to note that compliance with the HEDIS MMA measure was not related to improvement in certain asthma outcomes assessed, suggesting that the relationship between HEDIS measures and cost-effectiveness may vary depending on the specific measure and outcomes considered (PUBMED:25758917).
In summary, while HEDIS measures generally align with cost-effective practices, the relationship is not uniform across all measures and patient subgroups, and there is potential for further refinement to enhance their cost-effectiveness (PUBMED:12406482). |
Instruction: A non-toxic analogue of a coeliac-activating gliadin peptide: a basis for immunomodulation?
Abstracts:
abstract_id: PUBMED:10383530
A non-toxic analogue of a coeliac-activating gliadin peptide: a basis for immunomodulation? Background: A-gliadin residues 31-49 (peptide A) binds to HLA-DQ2 and is toxic to coeliac small bowel. Analogues of this peptide, which bind to DQ2 molecules but are non-toxic, may be a potential route to inducing tolerance to gliadin in patients with coeliac disease.
Methods: Toxicity was investigated with small bowel organ culture in six patients with untreated coeliac disease, four with treated coeliac disease and six controls. Analogue peptides comprised alanine substituted variants of peptide A at L31 (peptide D), P36 (E), P38 (F), P39 (G) and P42 (H).
Results: Peptides D and E were toxic in biopsies from some patients. Peptides F, G and H were not toxic.
Conclusions: Peptide F, which binds to DQ2 more strongly than peptide A, is not toxic in patients with coeliac disease in vitro; this could be an initial step towards investigating the induction of tolerance to gliadin in patients affected by coeliac disease.
abstract_id: PUBMED:22253984
Immunogenicity characterization of two ancient wheat α-gliadin peptides related to coeliac disease. The immunogenic potential of α-gliadin protein from two ancient wheats was studied with reference to coeliac disease. To this aim, we investigated Graziella Ra® and Kamut® (the latter is considered an ancient relative of modern durum wheat) in comparison to four durum wheat accessions (Senatore Cappelli, Flaminio, Grazia and Svevo). ELISA and Western Blot analyses - carried out with two monoclonal antibodies raised against the α-gliadin peptides p31-49 (LGQQQPFPQQPYPQPQPF) and p56-75 (LQLQPFPQPQLPYPQPQLPY), each containing a core region reported to be toxic for coeliac patients - consistently showed a positive antibody-antigen reaction. For all accessions, an α-gliadin gene has also been cloned and sequenced. Deduced amino acid sequences consistently showed the toxic motifs. In conclusion, we strongly recommend that coeliac patients should avoid consuming Graziella Ra® or Kamut®. In fact, their α-gliadin is not only as toxic as that of the other wheat accessions but also occurs in greater amounts, in line with the higher protein levels of ancient wheats compared with modern varieties.
abstract_id: PUBMED:7518270
Demonstration of the presence of coeliac-activating gliadin-like epitopes in malted barley. A peptide B3144, derived after peptic tryptic digestion of alpha-gliadin and corresponding to residues 3-56 from the coeliac-activating domain I, was previously used to produce monoclonal antibodies. A dot immunobinding assay was developed using these antibodies to detect gluten in wheat, rye, barley and oats. The limit of sensitivity of the assay was 1 microgram/ml for unfractionated wheat gliadin and rye prolamins, and 5 micrograms/ml for barley and oat prolamins. Extracts of flours from coeliac non-toxic rice, maize, millet and sorghum gave negative results. Malt, which represents a partial hydrolysate of barley prolamins, was shown to contain the equivalent of 100-200 mg of barley prolamins/100 g of malt. The assay demonstrates the presence of intact epitopes from the coeliac-activating domain I of alpha-gliadins in malted barley, suggesting toxicity.
abstract_id: PUBMED:10189843
Measurement of gluten using a monoclonal antibody to a coeliac toxic peptide of A-gliadin. Background: Future European Community regulations will require a sensitive and specific assay for measurement of coeliac toxic gluten proteins in foods marketed as gluten-free. To avoid spurious cross reactions with non-toxic proteins, specific antibodies and target antigens are required. A synthetic 19 amino acid peptide of A gliadin has been shown to cause deterioration in the morphology of small intestinal biopsy specimens of coeliac patients in remission.
Aims: To develop an assay for detection of gluten in foods, based on measurement of a known toxic peptide.
Methods: A monoclonal antibody raised against the toxic A gliadin peptide, with a polyclonal anti-unfractionated gliadin capture antibody, was used to develop a double sandwich enzyme linked immunosorbent assay (ELISA) for the measurement of gluten in foods.
Results: Standard curves for gliadin and for rye, barley, and oat prolamins were produced. The sensitivity of the assay was 4 ng/ml of gliadin, 500 ng/ml for rye prolamins, and 1000 ng/ml for oat and barley prolamins. The assay could detect gluten in cooked foods, although at reduced sensitivity. Prolamins from coeliac non-toxic rice, maize, millet, and sorghum did not cross react in the assay. A variety of commercially available gluten-free foods were analysed; small quantities of gluten were detected in some products.
Conclusion: The assay may form the basis of a sensitive method for measurement of gluten in foods for consumption by patients with coeliac disease.
abstract_id: PUBMED:7512105
Measurement of gluten using a monoclonal antibody to a sequenced peptide of alpha-gliadin from the coeliac-activating domain I. A monoclonal antibody, raised against a sequenced 54 amino-acid peptide from the coeliac-activating N-terminal region of alpha-gliadin, was used in an assay for the measurement of gluten in foods. A double-sandwich ELISA using a polyclonal capture antibody produced standard curves for unfractionated gliadin and its alpha, beta, gamma and omega subfractions, and for rye, barley and oat prolamins. The sensitivity of the assay for unfractionated gliadin and rye prolamins was 15 ng/ml, for barley and oat prolamins 125 and 250 ng/ml, respectively. Prolamins from coeliac non-toxic rice, maize, millet and sorghum did not cross-react in the assay.
abstract_id: PUBMED:8783747
Relation between gliadin structure and coeliac toxicity. Gliadin, the alcohol-soluble protein fraction of wheat, contains the factor toxic for coeliac patients. The numerous components of gliadin can be classified according to their primary structure into omega 5-, omega 1,2-, alpha- and gamma-type. Both omega-types have almost entirely repetitive amino acid sequences consisting of glutamine, proline and phenylalanine. alpha- and gamma-type gliadins contain four and five different domains, respectively, and are homologous within the domains III and V. Unique for each alpha- and gamma-type is domain I, which consists mostly of repetitive sequences rich in glutamine, proline and aromatic amino acids. Coeliac toxicity of gliadin is not destroyed by digestion with gastropancreatic enzymes. In vivo testing established the toxicity of alpha-type gliadins and in vitro testing of gliadin peptides revealed that domain I of alpha-type gliadins is involved in activating coeliac disease. The sequences -Pro-Ser-Gln-Gln- and -Gln-Gln-Gln-Pro- were demonstrated to be common for toxic gliadin peptides. Most of the in vivo and in vitro studies of synthetic peptides confirmed the importance of one or both of these sequences. Cultivated hexaploid, tetraploid and diploid wheat species do not differ significantly in potential toxic sequences of alpha-type gliadins.
abstract_id: PUBMED:2752594
Anti-gliadin antibody specificity for gluten-derived peptides toxic to coeliac patients. The specificities of serum and intestinal antibodies from coeliac and normal individuals towards gluten-derived peptides, known to be toxic in coeliac disease, have been investigated. Though untreated coeliacs had high serum antibody levels towards gliadin and some gluten-derived peptides, antibody specificities to various toxic gluten-derived peptides were similar to those of normal patients. Further, no significant binding in any patient group was found to the alpha-gliadin-derived peptide B1342 (Wieser, Belitz & Ashkenazi, 1984) or the 12 amino-acid A-gliadin peptide (Kagnoff, 1985). There appears to be no direct relationship between the toxicities and the antigenic reactivity of gluten-derived peptides. Thus, the intestinal damage in coeliac disease is probably not primarily caused by antibody-dependent mechanisms. The specificities of several monoclonal antibodies which bound to wheat prolamins as well as prolamins from other coeliac-toxic cereals have also been investigated with these toxic gluten-derived peptides, in order to identify possible common epitopes. No monoclonal antibody tested bound either the B1342 or the 12-amino-acid A-gliadin peptide. However, the monoclonal antibodies which were specific for the coeliac-toxic cereal prolamins did show the strongest binding to other coeliac-toxic gluten-derived peptides.
abstract_id: PUBMED:1720424
Identification of reactive synthetic gliadin peptides specific for coeliac disease. Gluten intolerance (coeliac disease) is characterised by the development of a small intestinal lesion following exposure to the gliadin fraction after consumption of wheat and related cereals. Cellular immune mechanisms are thought to be responsible for gliadin toxicity, but the toxic sequence/s within gliadin have not been clearly established. A panel of synthetic gliadin peptides was tested using peripheral blood mononuclear cells from coeliac patients and two assays for cell-mediated immunity. Using the indirect leucocyte migration inhibition factor and the macrophage procoagulant activity assays, gliadin peptides which were located in the amino-terminal or the proline-rich domain of the alpha/beta gliadin molecule were coeliac-active. Peptides predicted by T cell algorithms or on the basis of homology to adenovirus Ad12 E1b protein and which were located in the proline-poor gliadin domains were inactive. Protein sequence studies which indicate significant homology in the proline-poor gliadin domains with a number of non-coeliac-toxic seed proteins also supported the hypothesis that the proline-rich domains may be more important in the pathogenesis of coeliac disease.
abstract_id: PUBMED:6692666
Clinical testing of gliadin fractions in coeliac patients. Since the toxic fraction of cereal flour which damages the small bowel mucosa of patients with coeliac disease has not been fully defined in vivo, we studied the effect of intraduodenal infusions of different doses of unfractionated gliadin and of alpha-, beta-, gamma- and omega-gliadin subfractions on the morphology of multiple jejunal biopsies taken from two patients with treated coeliac disease. A dose-response study with increasing quantities of unfractionated gliadin in one coeliac patient showed that 1000 mg produced marked damage in serial jejunal biopsies taken 2-3 h after commencing the infusion and that the changes had almost completely disappeared 72 h later. alpha-, beta-, gamma- and omega-gliadin were prepared, checked for purity and investigated for toxicity in two coeliac patients. After an intraduodenal challenge with 1000 mg of the four gliadin subfractions, these were shown to have induced damage in the mucosa of jejunal biopsies taken 6 h later. These observations confirm the results of studies in vitro, which suggest that not only alpha- but also beta-, gamma- and omega-gliadins are enterotoxic in coeliac disease.
abstract_id: PUBMED:6811164
Amino acid composition of gliadin fractions which may be toxic to individuals with coeliac disease. Fraction 9, prepared by chromatography of a peptic-tryptic-pancreatinic digest of gliadin on S.P. Sephadex C-25, was re-chromatographed on Q.A.E. Sephadex A-25, and subfractions 9-1 and 9-2 were further purified on S.P. Sephadex C-25. Sub-fractions 9-1 and 9-2 and the purified sub-fractions 9-1B and 9-2B appeared to be toxic to patients with coeliac disease on the basis of causing a reduction in D-xylose absorption. Amino acid analysis of undigested residues from sub-fractions 9-1B and 9-2B, obtained after 'in vitro' digestion with remission coeliac mucosa, showed that they contained mainly glutamine/glutamic acid and proline, with some serine, leucine, phenylalanine and glycine. Another fraction (fraction 3) of wheat gliadin prepared by peptic-tryptic digestion and ion-exchange chromatography on S.P. Sephadex, previously shown to produce a skin-reaction in adults with coeliac disease, has been further purified by ion exchange chromatography, isoelectric focusing and gel filtration. The sub-fractions were submitted to amino acid analysis and the results compared with those from the undigested residues above. Isoelectric focusing of fraction 3 of the P.T. digest and its sub-fractions showed the presence of peptides of pI approximately 4.8 and 5.6 with only small amounts of peptides on either side of this region. Mucosal digestion of fractions of peptic-tryptic-pancreatinic gliadin digests 'in vitro' appears to be a promising method for the elucidation of the primary structure of that section of the gliadin which may be responsible for the lesion in coeliac disease. The evaluation of higher molecular mass peptic-tryptic digests by intradermal skin tests could also be useful for the preliminary screening of fractions for feeding tests, but this approach seems less likely to indicate the toxic region of the gliadin molecule.
Answer: The research described in abstract PUBMED:10383530 explores the potential for immunomodulation in coeliac disease by using non-toxic analogues of a coeliac-activating gliadin peptide. The study focused on a gliadin peptide known as peptide A, which comprises residues 31-49 of A-gliadin and is recognized for its ability to bind to HLA-DQ2 and its toxicity to the small bowel of coeliac patients. The researchers tested various analogue peptides with alanine substitutions at different positions of peptide A to assess their toxicity in small bowel organ cultures from patients with both untreated and treated coeliac disease, as well as from control individuals.
The results showed that some of the analogue peptides, specifically peptides D and E, were toxic to biopsies from certain patients. However, peptides F, G, and H were found to be non-toxic. Notably, peptide F, which binds more strongly to DQ2 than the original peptide A, was not toxic to patients with coeliac disease in vitro. This finding suggests that peptide F could be a starting point for further investigation into the induction of tolerance to gliadin in coeliac disease patients, potentially leading to an immunomodulatory treatment approach.
The concept of using non-toxic analogues of toxic gliadin peptides is based on the idea that these analogues could bind to the same HLA-DQ2 molecules without triggering the harmful immune response that leads to the intestinal damage characteristic of coeliac disease. If successful, this strategy could help in developing therapies that modulate the immune system's response to gliadin, ultimately improving the quality of life for individuals with coeliac disease. |
Instruction: Is the learning curve for laparoscopic fundoplication determined by the teacher or the pupil?
Abstracts:
abstract_id: PUBMED:15720987
Is the learning curve for laparoscopic fundoplication determined by the teacher or the pupil? Background: For all surgical procedures, a surgeon's learning curve can be anticipated during which complication rates are increased. The aims of this study were to evaluate individual learning curves for a group of surgeons performing laparoscopic fundoplication and to assess whether the Procedicus MIST simulator (Mentice Inc., Göteborg, Sweden) accurately predicts surgical performance.
Methods: Twelve Nordic centers participated, each contributing a "master" and a "pupil" surgeon. The pupils were tested in the simulator and thereafter performed their first 20 supervised operations. All procedures were videotaped and evaluated by 3 independent reviewers.
Results: A significant decrease in operative time (P < 0.001) and a trend (P = 0.12) toward improved scores were seen during the series. The master significantly affected the pupil's score (P = 0.0137). The simulator test showed no correlation with the operative score.
Conclusions: Individual learning curves varied, and the teacher was shown to be the most important factor influencing the pupil's performance score. The correlation between assessed performance and patient outcome will be further investigated.
abstract_id: PUBMED:28686640
Learning curve for laparoscopic Heller myotomy and Dor fundoplication for achalasia. Purpose: Although laparoscopic Heller myotomy and Dor fundoplication (LHD) is widely performed to address achalasia, little is known about the learning curve for this technique. We assessed the learning curve for performing LHD.
Methods: Of the 514 cases with LHD performed between August 1994 and March 2016, the surgical outcomes of 463 cases were evaluated after excluding 50 cases with reduced port surgery and one case with the simultaneous performance of laparoscopic distal partial gastrectomy. A receiver operating characteristic (ROC) curve analysis was used to identify the cut-off value for the number of surgical experiences necessary to become proficient with LHD, which was defined as the completion of the learning curve.
Results: We defined the completion of the learning curve when the following 3 conditions were satisfied. 1) The operation time was less than 165 minutes. 2) There was no blood loss. 3) There was no intraoperative complication. In order to establish the appropriate number of surgical experiences required to complete the learning curve, the cut-off value was evaluated by using a ROC curve (AUC 0.717, p < 0.001). Finally, we identified the cut-off value as 16 surgical cases (sensitivity 0.706, specificity 0.646).
Conclusion: The learning curve appears to be completed after performing 16 cases.
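The cut-off analysis described in this abstract (16 cases, AUC 0.717, sensitivity 0.706, specificity 0.646) can be illustrated with a small sketch. The code below is not the authors' analysis: the per-operation data are fabricated, and the threshold is simply chosen by maximizing the Youden index (sensitivity + specificity - 1) on an ROC curve relating accumulated case number to whether an operation met the stated proficiency criteria.

```python
# Minimal sketch (hypothetical data): finding a case-count cut-off with an ROC curve,
# in the spirit of the Heller myotomy learning-curve analysis described above.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-operation data: case number within the surgeon's experience (1..120)
# and whether that operation met all three proficiency criteria
# (time < 165 min, no blood loss, no intraoperative complication).
case_number = np.arange(1, 121)
p_proficient = 1 / (1 + np.exp(-(case_number - 16) / 6))   # proficiency becomes likely after ~16 cases
proficient = rng.random(case_number.size) < p_proficient

# ROC analysis: how well does accumulated experience (case number) predict proficiency?
fpr, tpr, thresholds = roc_curve(proficient, case_number)
auc = roc_auc_score(proficient, case_number)

# Choose the cut-off that maximizes the Youden index (sensitivity + specificity - 1).
youden = tpr - fpr
best = np.argmax(youden)
print(f"AUC = {auc:.3f}")
print(f"cut-off ~ {thresholds[best]:.0f} cases, "
      f"sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")
```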
abstract_id: PUBMED:8757384
A learning curve for laparoscopic fundoplication. Definable, avoidable, or a waste of time? Objective: The objective of this study was to determine whether a learning curve for laparoscopic fundoplication can be defined, and whether steps can be taken to avoid any difficulties associated with it.
Summary Background Data: Although early outcomes after laparoscopic fundoplication have been promising, complications unique to the procedure have been described. Learning curve problems may contribute to these difficulties. Although training recommendations have been published by some professional bodies, there is disagreement about what constitutes adequate supervised experience before the solo performance of laparoscopic antireflux surgery, and the true length of the learning curve.
Methods: The outcome of 280 laparoscopic fundoplications undertaken by 11 surgeons during a 46-month period was assessed prospectively. The experience was analyzed in three different ways: 1) by an assessment of the overall learning experience within chronologically arranged groups, 2) by an assessment of all individual experiences grouped according to the experience of individual surgeons, and 3) by a comparison of early outcomes of operations performed by the surgeons who initiated laparoscopic fundoplication with the early experience of surgeons beginning laparoscopic fundoplication later in the overall institutional experience.
Results: The complication, reoperation, and laparoscopic to open conversion rates all were higher in the first 50 cases performed by the overall group, and in the first 20 cases performed by each individual surgeon. These rates were even higher in the initial first 20 cases, and the first 5 individual cases. However, adverse outcomes were less likely when surgeons began fundoplication later in the overall experience, when experienced supervision could be provided.
Conclusions: A learning curve for laparoscopic fundoplication can be defined. Experienced supervision should be sought by surgeons beginning laparoscopic fundoplication during their first 20 procedures. This should minimize adverse outcomes associated with an individual's learning curve.
abstract_id: PUBMED:17436134
The extended learning curve for laparoscopic fundoplication: a cohort analysis of 400 consecutive cases. Many studies have looked at the learning curve associated with laparoscopic Nissen fundoplication (LNF) in a given institution. This study looks at the learning curve of a single surgeon with a large cohort of patients over a 10-year period. Prospective data were collected on 400 patients undergoing laparoscopic fundoplication for over 10 years. The patients were grouped consecutively into cohorts of 50 patients. The operating time, the length of postoperative hospital stay, the conversion rate to open operation, the postoperative dilatation rate, and the reoperation rate were analyzed. Results showed that the mean length of operative time decreased from 143 min in the first 50 patients to 86 min in the last 50 patients. The mean postoperative length of hospital stay decreased from 3.7 days initially to 1.2 days latterly. There was a 14% conversion to open operation rate in the first cohort compared with a 2% rate in the last cohort. Fourteen percent of patients required reoperation in the first cohort and 6% in the last cohort. Sixteen percent required postoperative dilatation in the first cohort. None of the last 150 patients required dilatation. In conclusion, laparoscopic fundoplication is a safe and effective operation for patients with gastroesophageal reflux disease. New techniques and better instrumentation were introduced in the early era of LNF. The learning curve, however, continues well beyond the first 20 patients.
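A learning curve of this kind is often summarised by fitting a decreasing curve to operative time against case number. The sketch below fits an exponential decay to fabricated data that roughly mimic the trend reported in the abstract above (mean times falling from about 143 to 86 minutes over 400 cases); the data and fitted parameters are illustrative only and are not the study's.

```python
# Illustrative learning-curve fit: exponential decay of operative time with experience.
# The data are fabricated to mimic the reported trend (~143 min early, ~86 min late).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
case = np.arange(1, 401)
true_time = 86 + (143 - 86) * np.exp(-case / 120)          # hypothetical underlying curve
observed = true_time + rng.normal(0, 15, case.size)        # add per-case variability

def model(x, plateau, drop, rate):
    """Operative time = plateau + drop * exp(-case / rate)."""
    return plateau + drop * np.exp(-x / rate)

(plateau, drop, rate), _ = curve_fit(model, case, observed, p0=(90, 60, 100))
print(f"plateau ~ {plateau:.0f} min, initial excess ~ {drop:.0f} min, "
      f"time constant ~ {rate:.0f} cases")
```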
abstract_id: PUBMED:9475521
Early experience and learning curve associated with laparoscopic Nissen fundoplication. Background: Laparoscopic approach for hiatal hernia repair is relatively new. Information on the learning curve is limited.
Methods: From January 1994 to September 1996, 280 patients underwent antireflux surgery at our institution. A laparoscopic repair was attempted in 60 patients (21.4%). There were 38 men and 22 women. Median age was 49 years (range 21 to 78 years). Indications for operation were gastroesophageal reflux in 59 patients and a large paraesophageal hernia in one. A Nissen fundoplication was performed in all patients; 53 (88.3%) had concomitant hiatal hernia repair.
Results: In eight patients (13.3%) the operation was converted to an open procedure. Median operative time for the 52 patients who had laparoscopic repair was 215 minutes (range 104 to 320 minutes). There were no deaths. Complications occurred in five patients (9.6%). Median hospitalization was 2 days (range 1 to 5 days). Median operative time and median hospitalization were significantly longer in the first 26 patients than in the subsequent 25 patients (248 vs 203 minutes and 2 days vs 1 day, respectively; p = 0.03). Seven of the first 30 patients (23.3%) required laparotomy as compared with two of the second 30 (6.7%) (p = 0.07). Follow-up in the 51 patients who had laparoscopic fundoplication for reflux was complete in 50 (98.0%) and ranged from 7 to 38 months (median 13 months). Functional results were classified as excellent in 34 patients (68.0%), good in 6 (12.0%), fair in 7 (14.0%), and poor in 3 (6.0%). Three patients were reoperated on for recurrent reflux symptoms at 5, 5, and 11 months.
Conclusions: We conclude that laparoscopic Nissen fundoplication can be performed safely. The operative time, hospitalization, and conversion rate to laparotomy are higher during the early part of the experience, but all are reduced after the learning curve.
abstract_id: PUBMED:10088568
Transition from open to laparoscopic fundoplication: the learning curve. Background: Two of us (B.C.S. and C.W.D.) began performing laparoscopic fundoplication in 1992. We have always designated the resident as the operating surgeon.
Objective: To determine the time necessary for both experienced surgeons and residents to become proficient in laparoscopic fundoplication.
Design: The medical records of 241 consecutive patients undergoing laparoscopic fundoplication were reviewed. This period started with the implementation of the procedure in January 1992 and ended in March 1998. For 3 consecutive years, residents were given a questionnaire regarding their confidence in performing laparoscopic fundoplication.
Results: Laparoscopic fundoplication was attempted in 241 patients and completed in 203 patients (84%). Comparing the first 25 attempted laparoscopic fundoplications with the second 25, there were 14 conversions (56%) vs 4 conversions (16%) (P<.01). Average operative times decreased from 236 to 199 minutes (P<.05), and the intraoperative complication rates were 5 (20%) and 1 (4%), respectively. Subsequently, the conversion rate stabilized at 2%. The operative time continued to decline to an average of 99 minutes for the last 25 laparoscopies. Senior residents and recent graduates returning the questionnaire performed an average of 112 laparoscopic procedures, including 15.7 laparoscopic fundoplications. They felt comfortable with the procedure after performing an average of 10.6 operations.
Conclusions: The learning curve is very steep for the first 25 laparoscopic fundoplications for experienced surgeons. However, improvements, as judged by decreases in operative time, conversion rate, and intraoperative complications, continue to occur after 100 cases. Under supervision, residents can become comfortable with this procedure after about 10 to 15 procedures.
abstract_id: PUBMED:25392613
Single-site Nissen fundoplication versus laparoscopic Nissen fundoplication. Background: Advances in minimally invasive surgery have led to the emergence of single-incision laparoscopic surgery (SILS). The purpose of this study is to assess the feasibility of SILS Nissen fundoplication and compare its outcomes with traditional laparoscopic Nissen fundoplication.
Methods: This is a retrospective study of 33 patients who underwent Nissen fundoplication between January 2009 and September 2010.
Results: There were 15 SILS and 18 traditional laparoscopic Nissen fundoplication procedures performed. The mean operative time was 129 and 182 minutes in the traditional laparoscopic and single-incision groups, respectively (P=.019). There were no conversions in the traditional laparoscopic group, whereas 6 of the 15 patients in the SILS group required conversion by insertion of 2 to 4 additional ports (P=.0004). At short-term follow-up, recurrence rates were similar between both groups. To date, there have been no reoperations.
Conclusions: SILS Nissen fundoplication is both safe and feasible. Short-term outcomes are comparable with standard laparoscopic Nissen fundoplication. Challenges related to the single-incision Nissen fundoplication include overcoming the lengthy learning curve and decreasing the need for additional trocars.
abstract_id: PUBMED:10414540
Factors contributing to laparoscopic failure during the learning curve for laparoscopic Nissen fundoplication in a community hospital. This study was done to determine the factors contributing to laparoscopic failure (conversion to open surgery or early reoperation) during the learning curve for laparoscopic Nissen fundoplication in a 228-bed nonteaching community hospital. Data were gathered prospectively for the first 100 consecutive patients booked for elective laparoscopic Nissen fundoplication by the four general surgeons at the hospital. All complications were recorded contemporaneously, and particular note was taken of the factors surrounding conversion to open surgery and reoperation within 100 days of surgery. There were no deaths. The conversion rate was 20% and the early reoperation rate 6%. There were two late recurrences. The average operative time was 117 minutes and the average length of stay 1.8 days; 37 operations were performed on outpatients. The laparoscopic failure rate was 26% (18/68) during a surgeon's first 20 operations and 11% (3/28) thereafter (P < 0.09); the corresponding conversion rates were 22% and 4% (P < 0.05). During a surgeon's first 20 operations, the laparoscopic failure rate rose from 21% (12/57) to 55% (6/11) (P < 0.04) if a second surgeon did not assist. After 20 operations, this difference lost its significance. Intrathoracic herniation of the stomach was found preoperatively in 11 (44%) of 25 operations followed by laparoscopic failure and (8%) 6 of 75 without (P < 0.0002). Laparoscopic failure had no correlation with patient age, sex, ASA classification, duration of symptoms, or referring physician's specialty. The individual learning curve for laparoscopic Nissen fundoplication requires about 20 operations to surmount. Factors leading to laparoscopic failure during the learning curve are the surgeon's inexperience, absence of experienced help, and the presence of intrathoracic herniation.
abstract_id: PUBMED:24018072
The learning curve of laparoendoscopic single-Site (LESS) fundoplication: definable, short, and safe. Background And Objectives: This study of laparoendoscopic single-site (LESS) fundoplication for gastroesophageal reflux disease was undertaken to determine the "learning curve" for implementing LESS fundoplication.
Methods: One hundred patients, 38% men, with a median age of 61 years and median body mass index of 26 kg/m(2) , underwent LESS fundoplications. The operative times, placement of additional trocars, conversions to "open" operations, and complications were compared among patient quartiles to establish a learning curve. Median data are reported.
Results: The median operative times and complications did not differ among 25-patient cohorts. Additional trocars were placed in 27% of patients, 67% of whom were in the first 25-patient cohort. Patients undergoing LESS fundoplication had a dramatic relief in the frequency and severity of all symptoms of reflux across all cohorts equally (P < .05), particularly for heartburn and regurgitation, without causing dysphagia.
Conclusion: LESS fundoplication ameliorates symptoms of gastroesophageal reflux disease without apparent scarring. Notably, few operations required additional trocars after the first 25-patient cohort. Patient selection became more inclusive (eg, more "redo" fundoplications) with increasing experience, whereas operative times and complications remained relatively unchanged. The learning curve of LESS fundoplication is definable, short, and safe. We believe that patients will seek LESS fundoplication because of the efficacy and superior cosmetic outcomes; surgeons will need to meet this demand.
abstract_id: PUBMED:19789668
Laparoscopic redo fundoplication for intrathoracic migration of wrap. Laparoscopic fundoplication is fast emerging as the treatment of choice of gastro-esophageal reflux disease. However, a complication peculiar to laparoscopic surgery for this disease is the intrathoracic migration of the wrap. This article describes a case of a male patient who developed this particular complication after laparoscopic total fundoplication. Following a trauma, wrap migration occurred. The typical history and symptomatology is described. The classical Barium swallow picture is enclosed. Laparoscopic redo fundoplication was carried out. The difficulties encountered are described. Postoperative wrap migration can be suspected clinically by the presence of a precipitating event and typical symptomatology. Confirmation is by a Barium swallow. Treatment is by redo surgery.
Answer: The learning curve for laparoscopic fundoplication appears to be influenced more significantly by the teacher than the pupil. A study evaluating individual learning curves for surgeons performing laparoscopic fundoplication found that the teacher had a significant effect on the pupil's performance score (P = 0.0137), while the simulator-test showed no correlation with the operative score. This suggests that the teacher's role is crucial in shaping the pupil's surgical skills during the initial learning phase (PUBMED:15720987).
Additionally, the presence of experienced supervision has been shown to minimize adverse outcomes associated with an individual's learning curve. Surgeons who began laparoscopic fundoplication later in the overall experience, when experienced supervision could be provided, had less likelihood of adverse outcomes (PUBMED:8757384). This further emphasizes the importance of the teacher in the learning process.
Moreover, the transition from open to laparoscopic fundoplication highlighted that under supervision, residents could become comfortable with the procedure after about 10 to 15 procedures, indicating that the learning curve is steep initially but can be overcome with proper guidance (PUBMED:10088568).
In conclusion, while individual aptitude and effort are undoubtedly important, the available evidence suggests that the learning curve for laparoscopic fundoplication is determined more by the quality of teaching and supervision provided to the pupil than by the pupil's innate abilities or performance on simulators. |
Instruction: High-resolution sonography of the rib: can fracture and metastasis be differentiated?
Abstracts:
abstract_id: PUBMED:15728626
High-resolution sonography of the rib: can fracture and metastasis be differentiated? Objective: Our aim was to evaluate whether high-resolution sonography can provide additional information concerning rib lesions compared with radiography or bone scintigraphy.
Materials And Methods: Fifty-eight patients with high-uptake rib lesions seen on bone scintigraphy were selected. Radiography and rib high-resolution sonography were performed on these patients. High-resolution sonography was performed using a linear 5-12 MHz transducer. By means of clinical history, histopathologic examination, and follow-up observation, these patients were classified into rib fracture (n = 37), rib metastasis (n = 18), or unknown (n = 3) groups. High-resolution sonography images of the 55 proven cases were reviewed for the presence of five representative findings: cortical disruption, callus formation, cortical deformity, mass, or bone destruction. The frequencies of these findings were compared between the groups with fracture and metastasis.
Results: Rib lesions were matched by bone scintigraphy and high-resolution sonography in 53 (96%) of 55 patients and by bone scintigraphy and plain radiography in 23 (42%) of 55 patients. High-resolution sonography revealed 17 (94%) of 18 patients with metastasis and 36 (97%) of 37 patients with rib fractures. Metastatic lesions were seen as mass formation (n = 13) and irregular bone destruction (n = 7) on high-resolution sonography. Fracture was seen as cortical disruption with or without hematoma (n = 17), callus formation (n = 9), or cortical deformity, such as angling or stepping (n = 12).
Conclusion: High-resolution sonography of the ribs is a useful method of characterizing rib lesions in patients who have hot-uptake lesions on bone scintigraphy.
abstract_id: PUBMED:2924091
Radiological evaluation of temporal bone disease: high-resolution computed tomography versus conventional X-ray diagnosis. Sixty-two patients with different temporal bone lesions were prospectively examined by high-resolution computed tomography (CT) and conventional plain radiography, including pluridirectional tomography. High-resolution CT enabled a clear diagnosis in 80% of cases, conventional radiology in 63%; 1.6-times more bone information was recorded by high-resolution CT which is clearly superior for imaging cholesteatomas, metastases and inflammatory processes and for evaluating osseous destruction. With regard to pathological soft tissue or effusions filling the tympanic cavities, conventional radiology shows poor sensitivity (0.61). High-resolution CT is the most sensitive method for the imaging and classification of temporal bone fractures, including labyrinthine damage and ossicular chain injuries. Only in cases of atypical fractures with an unfavourable relationship to the CT planes, can carefully directed tomography be more effective. In most cases high-resolution CT replaces conventional radiology and should be the method of choice for comprehensive radiological examination of the temporal bone.
abstract_id: PUBMED:23481369
High resolution ultrasound features of prostatic rib metastasis: a prospective feasibility study with implication in the high-risk prostate cancer patient. Objective: In a prior study, high resolution ultrasound (US) was shown to be accurate for evaluating rib metastasis detected on bone scan. However, that study did not address the specific US appearance typical of osteoblastic rib metastasis. Our objective was to determine the specific US imaging appearance of osteoblastic prostate carcinoma rib metastasis using osteolytic renal cell carcinoma rib metastasis as a comparison group.
Materials And Methods: The Institutional Review Board approval and informed consent were obtained for this prospective feasibility study. We performed high resolution US of 16 rib metastases in 4 patients with prostate carcinoma metastases and compared them to 8 rib metastases in 3 male patients with renal cell carcinoma. All patients had rib metastases proven by radiographs and computed tomography (CT). High resolution US scanning was performed by a musculoskeletal radiologist using a 12-5 MHz linear-array transducer. Transverse and longitudinal scans were obtained of each rib metastasis.
Results: All 16 prostate carcinoma metastases demonstrated mild cortical irregularity of the superficial surface of the rib without associated soft tissue mass, cortical disruption, or bone destruction. 7 of 8 (88%) renal cell carcinoma rib metastases demonstrated cortical disruption or extensive bone destruction without soft tissue mass. One of 8 (12%) renal cell carcinoma rib metastases demonstrated only minimal superficial cortical irregularity at the site of a healed metastasis.
Conclusion: Osteoblastic prostate carcinoma rib metastases have a distinctive appearance on US. Our success in visualizing these lesions suggests that US may be a useful tool to characterize isolated rib abnormalities seen on a bone scan in high-risk prostate cancer patients who are being evaluated for curative surgery or radiation treatment.
abstract_id: PUBMED:15108850
Thoraxsonography--Part 1: Chest wall and pleura. The detection of lymph nodes and careful assessment of their dignity (benign versus malignant nature) is one of the main indications for chest wall sonography. Palpable, unclear masses of the chest wall can easily undergo US-guided puncture. Both rib and sternum fractures are detected with high sensitivity. In addition to being more sensitive than plain X-ray, sonography also visualises accompanying soft-tissue processes, pleural effusions and haematomas. Despite the physical laws of ultrasound, approximately 70% of the pleural surface can be accessed by sonography. The normal parietal pleura can be visualized and delineated from circumscribed as well as diffuse pathological thickening. The normal visceral pleura is hidden by the total reflection at the surface of the aerated lung. Sonographic evidence of a pleural effusion can be obtained from 5 ml onwards and is much more sensitive than a chest X-ray, especially in the supine position; false-positive findings are not encountered. A more accurate estimation of the quantity of effusion can be made. Exudates are echogenic in one third of cases. Pleural thickening, nodular changes in the pleura and septa point to an exudate, whereas transudates are always anechoic. Pleural metastases are characteristically hypoechoic and nodular-polypoid. Pleural mesotheliomas can be delineated nearly equally well on sonograms and on computed tomography. Morphological and, in particular, functional diagnosis of the diaphragm by means of sonography is both reliable and convincing.
abstract_id: PUBMED:19094628
Resection and reconstruction of upper thoracic tumor by high transthoracic approach Objectives: To define the role of high transthoracic approach in the treatment of cervicothoracic and high thoracic tumor, and analyze the problem encountered during tumor resection and reconstruction of this technique and oncological results of patients who received this type of surgery.
Methods: Twenty-one patients with cervicothoracic and high thoracic tumor (T1-4) were treated with the high transthoracic approach. This series included metastatic tumor in 11 patients, eosinophilic granuloma of bone in 2, osteosarcoma in 1, Ewing's sarcoma in 2, chondrosarcoma in 2, giant cell tumor in 2, and lymphoma in 1. The high transthoracic approach was used in these patients for tumor resection and spinal cord decompression. Reconstruction methods included artificial vertebra implantation or bone graft implantation combined with anterior internal fixation.
Results: Chest-back pain was relieved significantly in all patients after the operation. Paraplegia improved from Frankel grade A to grade D in 3 patients, and the other 2 patients recovered completely. Pulmonary infection and pulmonary atelectasis occurred in 2 patients, cerebrospinal fluid leakage in 1 patient, and thoracic aorta rupture in 1 patient. The follow-up period was 11-58 months; 9 patients died, including 7 with metastatic cancer, 1 with Ewing's sarcoma, and 1 with osteosarcoma.
Conclusions: The high transthoracic approach is a satisfactory method for dealing with lesions of the cervicothoracic and high thoracic vertebrae, especially lesions involving the vertebral body and a single vertebral arch. The thoracic spinal canal can be decompressed effectively by this approach.
abstract_id: PUBMED:8192113
Radiation osteitis and insufficiency fractures after pelvic irradiation for gynecologic malignancies. Damage to the pelvic bones after radiotherapy for gynecological malignancies is uncommon with megavoltage radiotherapy. It can be misdiagnosed as bony metastases and is a diagnosis of exclusion. We report 12 women, who were treated for endometrial or cervical carcinoma who developed osteitis, femoral head or neck necrosis, or insufficiency fractures of the acetabulum, pubic symphysis or sacroiliac bones after radiotherapy. Many had multiple areas of bone damage. The prescribed external beam dose ranged from 40.0 to 61.2 Gy. All but one patient developed bony discomfort or pain as a symptom. Bony changes of the pelvic girdle appeared between 6 months and 8 years after irradiation. Radiographic studies including plain films, CT or bone scans were performed in these patients and showed correlative changes. Bone scans showed increased radionuclide uptake in affected bones. The subsequent favorable clinical course and outcome with resolution of symptoms confirmed the diagnosis of radiation osteitis. Therapy recommendations are conservative with avoidance of weight-bearing, use of analgesics and physical therapy. Femoral head necrosis/fractures required arthroplasty. Proper shielding, use of multifield technique, treatment of all fields per day, and awareness of tolerance doses are recommended.
abstract_id: PUBMED:11041590
Osteoarticular allograft in surgery for high-grade malignant tumours of bone. We assessed the results of 17 limb-salvage procedures using osteoarticular allografts after wide resection of high-grade malignant bone tumours. All patients received chemotherapy. At the five-year follow-up, three patients had died from metastases. The allografts survived for five years in only seven patients all of whom had good function, ranging from 73% to 90% of normal. The allografts were removed because of fracture in seven patients and infection in one, and in all of these a second limb-salvage procedure was undertaken. With such a low rate of survival of osteoarticular allografts, we believe that their use in the management of high-grade malignant bone tumours should, at best, be considered a temporary solution.
abstract_id: PUBMED:15890486
Femur window--a new approach to microcirculation of living bone in situ. Background: The processes of osteogenesis, bone remodelling, fracture repair and metastasis to bone are determined by complex sequential interactions involving cellular and microcirculatory parameters. Consequently studies targeting the analysis of microcirculatory parameters on such processes should mostly respect these complex conditions. However these conditions could not yet be achieved in vitro and therefore techniques that allow a long-term observation of functional and structural parameters of microcirculation in bone in vivo at a high spatial resolution are needed to monitor dynamic events, such as fracture healing, bone remodelling and tumor metastasis.
Methods: We developed a bone chamber implant (femur window) for long-term intravital microscopy of pre-existing bone and its microcirculation at an orthotopic site in mice preserving the mechanical properties of bone. After bone chamber implantation vascular density, vessel diameter, vessel perfusion, vascular permeability and leukocyte-endothelial interactions (LEIs) in femoral bone tissue of c57-black mice (n=11) were measured quantitatively over 12 days using intravital fluorescence microscopy. Furthermore a model for bone defect healing and bone metastasis in the femur window was tested.
Results: Microvascular permeability and LEIs initially showed high values after chamber implantation, followed by a significant decrease to a steady state at days 6 and 12, whereas structural parameters remained unaltered. Bone defect healing and tumor growth were observed over 12 and 90 days, respectively.
Conclusion: The new femur window design allows a long-term analysis of structural and functional properties of bone and its microcirculation quantitatively at a high spatial resolution. Altered functional parameters of microcirculation after surgical procedures and their time dependent return to a steady state underline the necessity of long-term observations to achieve unaltered microcirculatory parameters. Dissection of the complex interactions between bone and microcirculation enables us to evaluate physiological and pathological processes of bone and may give new insights especially in dynamic events e.g. fracture healing, bone remodeling and tumor metastasis.
abstract_id: PUBMED:32517398
Radiculopathy Following Vertebral Body Compression Fracture: The Role of Percutaneous Cement Augmentation. Background: Vertebral cement augmentation is a commonly used procedure in patients with vertebral body compression fractures from primary or secondary osteoporosis, metastatic disease, or trauma. Many of these patients present with radiculopathy as a presenting symptom, and can experience symptomatic relief following the procedure.
Objectives: To determine the incidence of preprocedural radiculopathy in patients with vertebral body compression fractures presenting for cement augmentation, and present their postoperative outcomes.
Study Design: Retrospective cohort study.
Setting: Interventional pain practice in a tertiary care university hospital.
Methods: In this cohort study, all patients who underwent kyphoplasty (KP) or vertebroplasty (VP) procedures in a 7-year period within our practice were evaluated through a search of the electronic medical records. The primary endpoint was to evaluate the prevalence of noncompressive preprocedural radiculopathy in our patients. Evaluation of each patient's relative improvement following the procedure, with respect to the initial presence or absence of radicular symptoms (including and above T10, above and below T10, and below T10), was included as a secondary endpoint. Additional subanalysis was performed with respect to patient demographics, fracture location, and primary indication for the procedure (osteoporosis, trauma, etc.).
Results: A total of 302 procedures were performed during this time period, encompassing 544 total vertebral body levels. After exclusion criteria were applied to this cohort, 31.6% of patients demonstrated radiculopathy prior to the procedure that could not be explained by nerve impingement. Nearly half of patients demonstrated an optimal clinical outcome (48.5% nearly complete/complete resolution of symptoms, 40.1% partial resolution of symptoms, 11.4% little to no resolution of symptoms). Patients with fractures above T10 were more likely to see complete resolution, whereas patients with fractures above and below T10 were likely to not see any resolution. Men and women without initial radiculopathy symptoms were more likely to see little to no resolution, regardless of fracture location.
Limitations: This retrospective study used an electronic chart review of clinicians' notes to determine the presence of radiculopathy and their relative improvement following the procedure.
Conclusions: Preprocedural radiculopathy is a common symptom of patients presenting for the evaluation of VP or KP. The presence of radiculopathy in the absence of nerve impingement may be an important marker for those patients who may experience greater benefit from the procedure.
Key Words: Radiculopathy, kyphoplasty, vertebroplasty, osteoporosis, compression fracture, spine, cement augmentation.
abstract_id: PUBMED:21720990
Primary radiotherapy versus radical prostatectomy for high-risk prostate cancer: a decision analysis. Background: Two evidence-based therapies exist for the treatment of high-risk prostate cancer (PCA): external-beam radiotherapy (RT) with hormone therapy (H) (RT + H) and radical prostatectomy (S) with adjuvant radiotherapy (S + RT). Each of these strategies is associated with different rates of local control, distant metastasis (DM), and toxicity. By using decision analysis, the authors of this report compared the quality-adjusted life expectancy (QALE) between men with high-risk PCA who received RT + H versus S + RT versus a hypothetical trimodality therapy (S + RT + H).
Methods: The authors developed a Markov model to describe lifetime health states after treatment for high-risk PCA. Probabilities and utilities were extrapolated from the literature. Toxicities after radiotherapy were based on intensity-modulated radiotherapy series, and patients were exposed to risks of diabetes, cardiovascular disease, and fracture for 5 years after completing H. Deterministic and probabilistic sensitivity analyses were performed to model uncertainty in outcome rates, toxicities, and utilities.
Results: RT + H resulted in a higher QALE compared with S + RT over a wide range of assumptions, nearly always resulting in an increase of >1 quality-adjusted life year with outcomes highly sensitive to the risk of increased all-cause mortality from H. S + RT + H typically was superior to RT + H, albeit by small margins (<0.5 quality-adjusted life year), with results sensitive to assumptions about toxicity and radiotherapy efficacy.
Conclusions: For men with high-risk PCA, RT + H was superior to S + RT, and the result was sensitive to the risk of all-cause mortality from H. Moreover, trimodality therapy may offer local and distant control benefits that lead to optimal outcomes in a meaningful population of men.
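The decision analysis above rests on a Markov model of post-treatment health states. The snippet below is a deliberately simplified cohort simulation, not the published model: the states, transition probabilities, and utilities are invented placeholders, intended only to show how quality-adjusted life expectancy (QALE) is accumulated over yearly cycles for one treatment strategy.

```python
# Simplified Markov cohort sketch (hypothetical numbers) of the kind of model used to
# compare treatment strategies by quality-adjusted life expectancy (QALE).
import numpy as np

states = ["disease-free", "metastatic", "dead"]

# Hypothetical yearly transition matrix for one strategy (rows sum to 1).
P = np.array([
    [0.92, 0.05, 0.03],   # disease-free -> {disease-free, metastatic, dead}
    [0.00, 0.75, 0.25],   # metastatic   -> {metastatic, dead}
    [0.00, 0.00, 1.00],   # dead is absorbing
])

# Hypothetical utilities (quality weights) per year spent in each state.
utility = np.array([0.85, 0.60, 0.0])

def qale(P, utility, years=40):
    """Accumulate quality-adjusted life years for a cohort starting disease-free."""
    dist = np.array([1.0, 0.0, 0.0])   # whole cohort starts disease-free
    total = 0.0
    for _ in range(years):
        total += dist @ utility        # QALYs gained during this cycle
        dist = dist @ P                # advance the cohort one year
    return total

print(f"QALE ~ {qale(P, utility):.2f} quality-adjusted life years")
```

Comparing strategies would mean building one such matrix and utility set per strategy (with toxicity and mortality differences) and contrasting the resulting QALE values, with sensitivity analyses varying the assumed probabilities.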
Answer: High-resolution sonography of the rib can indeed differentiate between fracture and metastasis. A study aimed to evaluate whether high-resolution sonography could provide additional information concerning rib lesions compared with radiography or bone scintigraphy. Fifty-eight patients with high-uptake rib lesions seen on bone scintigraphy were selected and underwent both radiography and high-resolution sonography using a linear 5-12 MHz transducer. The patients were classified into rib fracture, rib metastasis, or unknown groups based on clinical history, histopathologic examination, and follow-up observation. The high-resolution sonography images of the 55 proven cases were reviewed for the presence of five representative findings: cortical disruption, callus formation, cortical deformity, mass, or bone destruction. The frequencies of these findings were compared between the groups with fracture and metastasis. The results showed that high-resolution sonography revealed 94% of patients with metastasis and 97% of patients with rib fractures. Metastatic lesions were seen as mass formation and irregular bone destruction, while fractures were seen as cortical disruption with or without hematoma, callus formation, or cortical deformity such as angling or stepping. The study concluded that high-resolution sonography of the ribs is a useful method for characterizing rib lesions in patients who have hot-uptake lesions on bone scintigraphy (PUBMED:15728626). |
Instruction: Are older adults who volunteer to participate in an exercise study fitter and healthier than nonvolunteers?
Abstracts:
abstract_id: PUBMED:23619184
Are older adults who volunteer to participate in an exercise study fitter and healthier than nonvolunteers? The participation bias of the study population. Background: Participation bias in exercise studies is poorly understood among older adults. This study was aimed at looking into whether older persons who volunteer to participate in an exercise study differ from nonvolunteers.
Methods: A self-reported questionnaire on physical activity and general health was mailed out to 1000 persons, aged 60 or over, who were covered by the medical insurance of the French National Education System. Among them, 535 answered it and sent it back. Two hundred and thirty-three persons (age 69.7 ±7.6, 65.7% women) said they would volunteer to participate in an exercise study and 270 (age 71.7 ±8.8, 62.2% women) did not.
Results: Volunteers were younger and more educated than nonvolunteers, but they did not differ in sex. They had less physical function decline and higher volumes of physical activity than nonvolunteers. Compared with volunteers, nonvolunteers had a worse self-reported health and suffered more frequently from chronic pain. Multiple logistic regressions showed that good self-reported health, absence of chronic pain, and lower levels of physical function decline were associated with volunteering to participate in an exercise study.
Conclusions: Volunteers were fitter and healthier than nonvolunteers. Therefore, caution must be taken when generalizing the results of exercise intervention studies.
abstract_id: PUBMED:22820215
Are Older Adults Who Volunteer to Participate in an Exercise Study Fitter and Healthier than Non-Volunteers? The participation bias of the study population. BACKGROUND: Participation bias in exercise studies is poorly understood among older adults. This study was aimed at looking into whether older persons who volunteer to participate in an exercise study differ from non-volunteers. METHODS: A self-reported questionnaire on physical activity and general health was mailed out to 1000 persons, aged 60 or over, who were covered by the medical insurance of the French National Education System. Among them, 535 answered it and sent it back. Two hundred and thirty-three persons (age 69.7 ±7.6, 65.7% women) said they would volunteer to participate in an exercise study and 270 (age 71.7 ±8.8, 62.2% women) did not. RESULTS: Volunteers were younger and more educated than non-volunteers, but they did not differ in sex. They had less physical function decline and higher volumes of physical activity than non-volunteers. Compared to volunteers, non-volunteers had a worse self-reported health and suffered more frequently from chronic pain. Multiple logistic regressions showed that good self-reported health, absence of chronic pain, and lower levels of physical function decline were associated with volunteering to participate in an exercise study. CONCLUSIONS: Volunteers were fitter and healthier than non-volunteers. Therefore, caution must be taken when generalizing the results of exercise intervention studies.
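The association analysis in the two abstracts above (good self-reported health, absence of chronic pain, and less physical function decline predicting volunteering) is a standard multiple logistic regression. The sketch below shows the general form of such a model on fabricated data; the variable names, effect sizes, and sample are illustrative assumptions and do not reproduce the study's results.

```python
# Illustrative multiple logistic regression on fabricated data, mirroring the kind of
# model used to relate health factors to volunteering for an exercise study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500

df = pd.DataFrame({
    "age": rng.normal(70, 8, n),
    "good_self_rated_health": rng.integers(0, 2, n),
    "chronic_pain": rng.integers(0, 2, n),
    "function_decline": rng.normal(0, 1, n),
})

# Fabricated outcome: volunteering is more likely with good health, no pain, less decline.
logit = (-0.02 * (df.age - 70) + 0.8 * df.good_self_rated_health
         - 0.7 * df.chronic_pain - 0.5 * df.function_decline)
df["volunteer"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit(
    "volunteer ~ age + good_self_rated_health + chronic_pain + function_decline",
    data=df,
).fit(disp=False)

# Odds ratios with 95% confidence intervals for each predictor.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```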
abstract_id: PUBMED:37507667
Volunteer-led online group exercise for community-dwelling older people: a feasibility and acceptability study. Background: Despite the clear benefits of physical activity in healthy ageing, engagement in regular physical activity among community-dwelling older adults remains low, with common barriers including exertional discomfort, concerns with falling, and access difficulties. The recent rise of the use of technology and the internet among older adults presents an opportunity to engage with older people online to promote increased physical activity. This study aims to determine the feasibility and acceptability of training volunteers to deliver online group exercises for older adults attending community social clubs.
Methods: This was a pre-post mixed-methods study. Older adults aged ≥ 65 years attending community social clubs who provided written consent and were not actively participating in exercise classes took part in the feasibility study. Older adults, volunteers, and staff were interviewed to determine the acceptability of the intervention. The intervention was a once weekly volunteer-led online group seated strength exercises using resistance bands. The duration of the intervention was 6 months. The primary outcome measures were the feasibility of the intervention (determined by the number of volunteers recruited, trained, and retained, participant recruitment and intervention adherence) and its acceptability to key stakeholders. Secondary outcome measures included physical activity levels (Community Health Model Activities Programme for Seniors (CHAMPS) questionnaire), modified Barthel Index, Health-related quality of life (EQ-5D-5L), frailty (PRISMA-7) and sarcopenia (SARC-F), at baseline and 6 months.
Results: Nineteen volunteers were recruited, 15 (78.9%) completed training and 9 (47.3%) were retained after 1 year (mean age 68 years). Thirty older adults (mean age 77 years, 27 female) participated, attending 54% (IQR 37-67) of exercise sessions. Participants had no significant changes in secondary outcome measures, with a trend towards improvement in physical activity levels (physical activity in minutes per week at baseline was 1770 min, and 1909 min at six months, p = 0.13). Twenty volunteers, older adults, and staff were interviewed and found the intervention acceptable. The seated exercises were perceived as safe, manageable, and enjoyable.
Conclusions: Trained volunteers can safely deliver online group exercise for community-dwelling older adults which was acceptable to older adults, volunteers, and club staff.
Trials Registration: NCT04672200.
abstract_id: PUBMED:34831701
Relationships between Participation in Volunteer-Managed Exercises, Distance to Exercise Facilities, and Interpersonal Social Networks in Older Adults: A Cross-Sectional Study in Japan. This study aimed to examine the factors related to participation in volunteer-managed preventive care exercises by focusing on the distance to exercise facilities and interpersonal social networks. A postal mail survey was conducted in 2013 in Kasama City in a rural region of Japan. Older adults (aged ≥ 65 years) who were living independently (n = 16,870) were targeted. Potential participants who were aware of silver-rehabili taisou exercise (SRTE) and/or square-stepping exercise (SSE) were included in the analysis (n = 4005). A multiple logistic regression analysis revealed that social and environmental factors were associated with participation in SRTE and SSE. After adjusting for confounding variables, exercise participation was negatively associated with an extensive distance from an exercise facility in both sexes for SRTE and SSE. Among women, participation in SRTE was negatively associated with weak interpersonal social networks (odds ratio (OR) = 0.57), and participation in SRTE and SSE was negatively associated with being a car passenger (SRTE, OR = 0.76; SSE, OR = 0.60). However, there were no significant interactions between sex and social and environmental factors. Our findings suggest the importance of considering location and transportation to promote participation in preventive care exercise.
abstract_id: PUBMED:28138501
Promoting Retention: African American Older Adults in a Research Volunteer Registry. Objectives: The objectives of this study were to evaluate the capability of a research volunteer registry to retain community-dwelling African American older adults, and to explore demographic and health factors associated with retention. Method: A logistic regression model was used to determine the influence of demographics, health factors, and registry logic model activities on retention in a sample of 1,730 older African American adults. Results: Almost 80% of participants active in the volunteer research registry between January 2012 and June 2015 were retained. Employment, being referred to research studies, a higher number of medical conditions, and more follow-up contacts were associated with an increased likelihood of retention. Older age, more months in the registry, and more mobility problems decreased the likelihood of retention. Discussion: These results suggest the Michigan Center for Urban African American Aging Research logic model promotes retention through involving older African American adults in research through study referrals and intensive follow-up. The loss of participants due to age- and mobility-related issues indicate the registry may be losing its most vulnerable participants.
abstract_id: PUBMED:34084693
Home Exercise Interventions in Frail Older Adults. Purpose Of Review: Frailty is characterized by decreased physiological reserve and increased risk of falls, disability, hospitalization, and mortality. Frail older adults may benefit from exercise interventions targeting their multiple problems and functional deficits; however, most research focuses on center-based interventions, which may present accessibility challenges for frail older adults. Therefore, the purpose of this review is to summarize the most recently published home-based exercise interventions for frail older adults living at home.
Recent Findings: Eight manuscripts met inclusion criteria. Research interventions consisted of a variety of modes (strength, strength/nutrition, strength/flexibility/balance/endurance), durations (12 weeks to 6 months), frequencies (2-7 days/week), and delivery methods (volunteer-led, videos on a tablet, manuals/brochures). Investigators examined the effects of home-based exercise on a variety of outcomes, including feasibility, frailty status, physical performance, lean body mass, skeletal muscle mass, other physiological outcomes, mental health, nutritional status, and incidence of falls in frail older adults.
Summary: This review demonstrates the feasibility and effectiveness of home-based exercise interventions to improve frailty, functional performance, nutritional status, and incidence of falls in frail older adults. However, the limited literature available provides conflicting reports regarding benefits for mental health outcomes and no evidence of a beneficial effect on skeletal muscle or lean mass. Future research is needed to shed light on the optimal components of home exercise programs most important for maximizing benefits for frail older adults, as well as the most effective delivery method.
abstract_id: PUBMED:28138396
How much will older adults exercise? A feasibility study of aerobic training combined with resistance training. Background: Both aerobic training (AT) and resistance training (RT) have multidimensional health benefits for older adults including increased life expectancy and decreased risk of chronic diseases. However, the volume (i.e., frequency*time) of AT combined with RT in which untrained older adults can feasibly and safely participate remains unclear. Thus, our primary objective was to investigate the feasibility and safety of a high-volume exercise program consisting of twice weekly AT combined with twice weekly RT (i.e., four times weekly exercise) on a group of untrained older adults. In addition, we investigated the effects of the program on physical function, aerobic capacity, muscular strength, and explored factors related to participant adherence.
Methods: We recruited eight inactive older adults (65+ years) to participate in a 6-week, single-group pre-post exercise intervention, consisting of 2 days/week of AT plus 2 days/week of progressive RT for 6 weeks. We recorded program attendance and monitored for adverse events during the course of the program. Participants were tested at both baseline and follow-up on the following: (1) physical function (i.e., timed-up-and-go test (TUG) and short physical performance battery (SPPB)), (2) aerobic capacity (VO2max) using the modified Bruce protocol; and (3) muscular strength on the leg press and lat pull-down. Post intervention, we performed qualitative semi-structured interviews of all participants regarding their experiences in the exercise program. We used these responses to examine themes that may affect continued program adherence to a high-volume exercise program.
Results: We recorded an average attendance rate of 83.3% with the lowest attendance for one session being five out of eight participants; no significant adverse events occurred. Significant improvements were observed for SPPB score (1.6; 95% CI: [0.3, 2.9]), VO2max (8.8 ml/kg/min; 95% CI: [2.8, 14.8]), and lat pull-down strength (11.8 lbs; 95% CI: [3.3, 20.2]). Qualitative results revealed two themes that promote older adults' adherence: (1) convenience of the program and (2) the social benefits of exercise.
Conclusions: Our findings suggest untrained older adults can be successful at completing twice weekly AT combined with twice weekly progressive RT; however, these exercise programs should be group-based in order to maintain high adherence.
abstract_id: PUBMED:38432891
Challenges and suggestions for exergaming program in exercise among older adults. Engaging in physical activity and exercise is an important way to promote health. However, older adults are often physically inactive or sedentary and show poor compliance with physical activity. Exergaming programs, with their unique advantages, can make physical activity a more enjoyable experience and motivate older adults to participate. Promoting older adults' health through engagement in exergaming programs is still at an early stage and faces many challenges. Analyzing these challenges and difficulties, and exploring in-depth strategies to promote implementation, is of great significance for the design and delivery of exergaming programs for older adults.
abstract_id: PUBMED:34889743
Technology Support Challenges and Recommendations for Adapting an Evidence-Based Exercise Program for Remote Delivery to Older Adults: Exploratory Mixed Methods Study. Background: Tele-exercise has emerged as a means for older adults to participate in group exercise during the COVID-19 pandemic. However, little is known about the technology support needs of older adults for accessing tele-exercise.
Objective: This study aims to examine the needs of older adults for transition to tele-exercise, identify barriers to and facilitators of tele-exercise uptake and continued participation, and describe technology support challenges and successes encountered by older adults starting tele-exercise.
Methods: We used an exploratory, sequential mixed methods study design. Participants were older adults with symptomatic knee osteoarthritis (N=44) who started participating in a remotely delivered program called Enhance Fitness. Before the start of the classes, a subsample of the participants (n=10) completed semistructured phone interviews about their technology support needs and the barriers to and facilitators for technology adoption. All of the participants completed the surveys including the Senior Technology Acceptance Model scale and a technology needs assessment. The study team recorded the technology challenges encountered and the attendance rates for 48 sessions delivered over 16 weeks.
Results: Four themes emerged from the interviews: participants desire features in a tele-exercise program that foster accountability, direct access to helpful people who can troubleshoot and provide guidance with technology is important, opportunities to participate in high-value activities motivate willingness to persevere through the technology concerns, and belief in the ability to learn new things supersedes technology-related frustration. Among the participants in the tele-exercise classes (mean age 74, SD 6.3 years; 38/44, 86% female; mean 2.5, SD 0.9 chronic conditions), 71% (31/44) had a computer with a webcam, but 41% (18/44) had little or no experience with videoconferencing. The initial technology orientation sessions lasted on average 19.3 (SD 10.3) minutes, and 24% (11/44) required a follow-up assistance call. During the first 2 weeks of tele-exercise, 47% of participants (21/44) required technical assistance, which decreased to 12% (5/44) during weeks 3 to 16. The median attendance was 100% for the first 6 sessions and 93% for the subsequent 42 sessions.
Conclusions: With appropriate support, older adults can successfully participate in tele-exercise. Recommendations include individualized technology orientation sessions, experiential learning, and availability of standby technical assistance, particularly during the first 2 weeks of classes. Continued development of best practices in this area may allow previously hard-to-reach populations of older adults to participate in health-enhancing, evidence-based exercise programs.
abstract_id: PUBMED:24150179
Volunteer transitions and physical and psychological health among older adults in Taiwan. Objectives: This study examines the relationship between transitions in volunteering activity and physical and psychological health outcomes among older adults in Taiwan.
Method: We used 3 waves of a longitudinal survey from Taiwan (1999, 2003, and 2007) including 1,847 older adults aged 58-74. We categorized volunteer activity over time into 5 categories and used generalized estimating equations to examine the relationship between volunteer transition group membership and health outcomes.
Results: Approximately 3% of older volunteers kept volunteering and 7% stopped or started their volunteering over time. Engaging in any volunteering, including continuous volunteering, discrete volunteering, active-to-inactive volunteering, and inactive-to-active volunteering, was significantly associated with better self-rated health and higher life satisfaction when compared with the nonvolunteering group. Only those in the continuous volunteer group and those who shifted from inactive-to-active volunteering showed a positive association with better physical function and less depression, compared with the nonvolunteering group.
Discussion: The results suggest that the positive relationship between volunteering and health may decline when older adults discontinue or withdraw from volunteering. More research is needed to explore the mechanism behind older people's desistance from, persistence in, and resumption of volunteer engagement.
Answer: Yes, older adults who volunteer to participate in an exercise study are generally fitter and healthier than nonvolunteers. This is evidenced by a study that found volunteers were younger, more educated, had less physical function decline, and higher volumes of physical activity than nonvolunteers. Nonvolunteers had a worse self-reported health and suffered more frequently from chronic pain. Good self-reported health, absence of chronic pain, and lower levels of physical function decline were associated with volunteering to participate in an exercise study (PUBMED:23619184; PUBMED:22820215). Therefore, caution must be taken when generalizing the results of exercise intervention studies due to this participation bias. |
Instruction: Is there any relationship between expressions of minor blood group antigens with HTLV-I infection?
Abstracts:
abstract_id: PUBMED:22858444
Is there any relationship between expressions of minor blood group antigens with HTLV-I infection? Background: The frequency of Human T lymphotropic Virus-1 (HTLV 1) is 2-3% in the general population and 0.7% in blood donors in northeast Iran. It is very important that we recognize the contributing factors in the pathogenesis of this virus. There are many reports that show that susceptibility to some infections is closely linked to the expression of certain blood group antigens. This study was performed to evaluate any association between minor blood group antigens and HTLV-I infection in northeast Iran.
Methods: In this case and control study major and minor blood group antigens were typed by commercial antibodies in 100 HTLV-I infected individuals and 332 healthy blood donors in Mashhad, Iran, from 2009-2010. Blood group antigens were determined by tube method less than 24h after blood collection. Finally, the results of HTLV-I positive subjects and control groups were compared by using SPSS software.
Results: The prevalences of the Le(a), Le(b), P1, Fy(a), Fy(b), M, N, Jka, Jkb, K and k antigens in the case group were 39.0%, 56.0%, 72.0%, 67.0%, 52.0%, 90.0%, 57.0%, 79.0%, 71.0%, 10.0% and 96.0%, respectively, and the frequencies of these blood group antigens in the control group were 38.8%, 55.8%, 66.2%, 72.0%, 58.7%, 87.0%, 56.7%, 79.8%, 63.0%, 10.6% and 97.0%, respectively. We did not find any significant differences between the case and control groups in the frequency of minor blood group antigens.
Conclusion: Our study showed that minor blood group antigens are not associated with an increased risk of HTLV-1 infection in northeast Iran.
abstract_id: PUBMED:23289207
Expression of Rhesus blood group antigens in HTLV-I infection in northeast Iran. Background: Human T-cell lymphoma/leukemia virus type 1 (HTLV-1) infection is relatively common in northeast Iran. It is important to understand which factors play a role in the pathogenesis of this virus. Blood group antigens may act as receptors for various infectious agents. This study was performed to detect any association between Rh blood group antigens and HTLV-1 infection in northeast Iran.
Methods: In this case-control study, Rhesus blood group antigens (D, C, c, E and e) were determined within 24 hours of blood collection using commercial antibodies in 100 HTLV-I-infected individuals and 332 healthy blood donors at the Khorasan Blood Transfusion Center, Mashhad, Iran, in 2011. The results for HTLV-I-positive subjects and the control group were compared using SPSS software.
Results: The frequencies of Rh blood group antigens in the case group were D in 88%, C in 72%, c in 68%, E in 27%, and e in 94%. In the control group the frequencies were D in 91%, C in 75.5%, c in 72.9%, E in 28.6% and e in 98.2%. Chi-square test showed a significant difference between the two groups for the frequency of e antigen (p = 0.03).
Conclusions: Our study showed that e antigen expression is associated with a decreased risk of HTLV-I infection in northeast Iran.
abstract_id: PUBMED:2990523
Antibodies to human adult T cell leukaemia virus type I associated antigens in Swedish leukaemia patients and blood donors. Antibodies to antigens associated with human T cell leukaemia virus type I (HTLV I) in Swedish adult leukaemia patients and blood donors were sought with a sensitive screening test using membrane antigen prepared from virus producing cells (MA-ELISA). Four persons (one ALL, one AML and two healthy blood donors) out of 483 persons tested reacted in the test. However, they were negative in the more specific anti-p19 and anti-whole virion ELISA tests. The prevalence of sera with definite anti-HTLV I activity seems to be very low in Sweden. The finding of four MA-ELISA positive persons needs further investigation.
abstract_id: PUBMED:30175867
Morphological alterations in minor salivary glands of HTLV1+ patients: A pilot study. Background: Among the complex of HTLV-associated diseases, Sjögren's syndrome (SS) is one of the most controversial. This work aims to detect morphological and inflammatory alterations, including clues of the presence of HTLV-1, in minor salivary glands of patients with dryness symptoms.
Methods: We have assessed HTLV-1-seropositive patients (HTLV-1 group) and patients with SS (SS group). We used formalin-fixed, paraffin-embedded minor salivary gland tissue to evaluate the morphological aspects and, by means of immunohistochemistry, the presence of Tax protein, CD4, CD8 and CD20 cells. Additionally, viral particles and proviral load were analysed by PCR.
Results: The HTLV-1 group had the highest prevalence of non-specific chronic sialadenitis (85.71%; P = 0.017) and a greater number of CD8+ T cells. In the SS group, focal lymphocytic sialadenitis (80%; P = 0.017) prevailed, with a greater number of CD20+ B cells. Both immunohistochemistry and PCR identified the Tax protein and its gene in the salivary glands of both groups, in similar proportions.
Conclusion: The results indicate that HTLV-1-seropositive patients have different patterns of morphological/inflammatory alterations, suggesting a likely difference in the process of immune activation.
abstract_id: PUBMED:2855405
Relationship of human cord blood mononuclear cell lines, established by HTLV-I integration, to expression of T-cell phenotypes and T-cell receptor gene rearrangement. HTLV-I-integrated cell lines derived from human cord blood mononuclear cells were examined for T-cell receptor beta chain gene rearrangement in conjunction with immunological phenotype analysis. Five of seven cell lines contained the rearranged T-cell receptor beta chain gene. Interestingly, most of the rearranged lines failed to express the T-cell differentiation antigens OKT4 and OKT8, though their positive expression of the Leu 1 antigen indicated commitment to the T-cell lineage. On the other hand, two cell lines retained the germ-line configuration of the beta chain gene, while phenotypical examination revealed that these two cell lines had distinct T-cell antigens. The non-rearranged cell lines proved to be nearly monoclonal on the basis of HTLV-I integration pattern and/or surface marker clonality. Thus, the relationship between immunological phenotypes and T-cell receptor gene rearrangement observed in the T-cell lineage is not strict and often dissociates. A possible involvement of HTLV-I infection in the dissociation between phenotype and genotype is discussed from the viewpoint of phenotypic modulation.
abstract_id: PUBMED:36787548
The Prevalence of Transfusion-Transmitted Infection Markers among Blood Donors at Saudi Hospital, Makkah. Background: Testing of blood donors for markers of transfusion-transmitted infections (TTIs) such as HBV, HCV, HIV, HTLV, syphilis, and malaria is mandatory in Saudi Arabia. This study determined the prevalence of all tested TTIs among blood donors in the western region of Saudi Arabia.
Methods: This retrospective study included 5,473 blood donors who attended the blood donation center at the Security Force Hospital (SFH) located in the western region of Saudi Arabia from January 1, 2015 to December 31, 2018. The prevalence of TTIs was determined and classified as per year, gender, age, type of donors (first-time vs. returned donors), category of donation (replacement vs. volunteer), and blood group.
Results: All donors (100%) were screened for TTIs by serological assays and nucleic acid tests (NATs). "Reactive" samples to serological assays were as follows: 57 (1.07%) HBsAg, 292 (5.34%) HBsAb, 388 (7.1%) HBcAbs, 13 (0.24%) HCV, 5 (0.09%) HIV, 8 (0.15%) HTLV-I and -II, 21 (0.83%) syphilis, and 0 (0%) malaria. The NAT results for HBV, HCV, and HIV revealed 50 (0.91%), 1 (0.0002%), and 3 (0.05%) reactive samples, respectively. Reactive donations to screening serology tests of syphilis and HTLV-I/-II were neither confirmed nor declined by their corresponding confirmatory assays. Most "reactive" samples to TTI tests were associated with male gender, first-time donor, replacement donation, and O+ blood group.
Conclusions: This study highlights the strong adherence to TTI testing policy and low prevalence of TTI markers among blood donors in the western region of Saudi Arabia.
abstract_id: PUBMED:11716116
Skin test antigens for immediate hypersensitivity prepared from infective larvae of Strongyloides stercoralis. More rapid and simplified diagnostic procedures are needed for the diagnosis of strongyloidiasis. One approach is the use of an immediate hypersensitivity skin test that would reliably identify infected people. Accordingly, somatic and excretion/secretion (E/S) antigens were prepared from filariform larvae of Strongyloides stercoralis and were treated to remove possible adventitious agents. By use of a quantitative method for measurement of skin reactions, several preparations of the 2 antigens were tested in uninfected controls and in various groups of patients. Doses of 0.35 microg of E/S and 4 microg of somatic antigens elicited positive skin tests in 82-100% of infected people, depending on clinical status. A lower frequency of positive skin tests was found in strongyloidiasis patients also infected with human T-cell lymphotropic virus type 1. Cross-reactions, especially to somatic antigens, were frequently found in patients with filarial infections. Despite these limitations and the need for further study of specificity, these results provide a basis for future development of a diagnostic skin test antigen for strongyloidiasis.
abstract_id: PUBMED:9666513
Relationship between antibody-dependent cell-mediated cytotoxicity due to anti-HTLV-1 and negative signal of major histocompatibility complex class I antigens on adult T-cell leukemia cell lines. Natural killer (NK) cells possess two types of cytotoxic activity: natural killer cytotoxicity (NKC) and antibody-dependent cell-mediated cytotoxicity (ADCC). The NKC is regulated by the negative signal of the NK receptor, which recognizes major histocompatibility complex (MHC) class I antigens. However, it is not known whether or not the negative signal influences the ADCC. In this study, the relationship between the ADCC and the negative signal was investigated. As target cells, adult T-cell leukemia (ATL) cell lines were used. When the target cells were treated with an anti-human T-lymphotropic virus type-1 (anti-HTLV-1) antibody, they were killed by the NK cells by means of the ADCC (ADCC/anti-HTLV-1). The killing levels were parallel with the cell surface HTLV-1 antigenicity. However, when these cell lines were treated with an anti-HLA antibody, the ADCC (ADCC/anti-HLA) showed an inverse correlation with the HLA antigenicity. Furthermore, when HLA polymorphic and monomorphic determinants of these target cells were blocked by F(ab')2 fragments of the anti-HLA and W6/32, the ADCC/anti-HLA was enhanced, but the ADCC/anti-HTLV-1 was not enhanced. These results suggest that the ADCC/anti-HLA may have an intimate relationship with the MHC class I antigens. The ADCC/anti-HLA may be suppressed by the negative signal. On the other hand, the ADCC/anti-HTLV-1 may have no relationship with the class I antigens, and the negative signal may have no influence on the ADCC/anti-HTLV-1. The biological mechanism of this difference remains to be investigated.
abstract_id: PUBMED:18671455
Association between ABO and Rhesus blood group systems among confirmed human T lymphotropic virus type 1-infected patients in Northeast Iran. The distribution of ABO and Rhesus blood group types was investigated in 984 randomly selected human T lymphotropic virus-1 (HTLV-1)-infected blood donors from April 2004 to March 2007. A total of 1081 healthy controls admitted for blood donation in this period were enrolled in this study. Infected and control individuals were from the same region, and their ABO/Rhesus blood group types were determined by the standard tube test technique. All blood samples were screened for HTLV-1 using an enzyme-linked immunosorbent assay (ELISA) and positive samples were confirmed by Western blot (WB). The unmatched analyses showed significant differences in the frequency of the A+ blood group between healthy controls and HTLV-1-infected individuals (OR = 0.8, 95% CI = 0.66-0.97), and a significant association was also observed between these two groups (OR = 1.42, 95% CI = 1.1-1.99, p = 0.021). No significant difference in the other blood groups (A-, B+, B-, O+, O-, and AB-) was observed between cases and controls. Based on our literature review, this is the first report of an association between HTLV-1 infection and ABO/Rh blood groups. Our results might suggest that the A+ blood group decreases the risk of HTLV-1 infection in healthy controls, while the AB+ blood group is more frequent in HTLV-1 carriers and increases the risk of HTLV-1 infection.
abstract_id: PUBMED:3005178
Transmission of human T-cell leukemia virus (HTLV-I) by blood transfusion: demonstration of proviral DNA in recipients' blood lymphocytes. Human T-cell clones bearing antigens encoded by human T-cell leukemia/lymphoma virus (HTLV-I) were isolated from 6 patients who produced antibodies against HTLV-I after having received anti-HTLV-I-positive blood units containing cell components. On the other hand, it was not possible to isolate clonal cells carrying viral antigens from the recipients who did not produce antibodies. The clonal cell lines had the same surface markers as neoplastic cells of adult T-cell leukemia and had the HLA phenotype of the recipients themselves. Proviral DNA of HTLV-I was demonstrated in each of the clonal cell lines. The site of integration was different in each case even if the clones were derived from the same recipient. These results indicate that blood transfusion can cause persistent HTLV-I infection.
Answer: Based on the provided abstracts, there does not appear to be a significant relationship between the expression of minor blood group antigens and HTLV-I infection. A study conducted in northeast Iran evaluated the association between minor blood group antigens and HTLV-I infection and found no significant differences in the frequency of these antigens between the HTLV-I infected group and the healthy control group (PUBMED:22858444). This suggests that minor blood group antigens are not associated with an increased risk of HTLV-I infection in the studied population.
Instruction: Is suspicion of genital herpes infection associated with avoiding sex?
Abstracts:
abstract_id: PUBMED:18771644
Is suspicion of genital herpes infection associated with avoiding sex? A clinic-based study. Objective: The present study tested the research hypothesis that sexually transmissible disease (STD) clinic patients suspecting genital herpes infection would be more likely than their 'non-suspecting' counterparts to abstain from sex to avoid transmission of a perceived STD.
Methods: Recruitment (n = 351) occurred in a publicly-funded STD clinic located in a metropolitan area of the southern USA. Participants were tested for herpes simplex virus type 2 (HSV-2) using a rapid test manufactured by Biokit (Lexington, MA, USA) and they completed a self-administered questionnaire (using a 3-month recall period).
Results: Well over one-third (38.7%) of those indicating suspicion also indicated avoiding sex with steady partners because of concerns about STDs as compared with 28.0% among those not indicating suspicion (prevalence ratio = 1.38; 95% CI = 1.02-1.87, P = 0.036). The relationship between suspicion and avoiding sex with non-steady partners was not significant (P = 0.720). The relationship with steady partners only applied to people who were female (P = 0.013), single (P = 0.017), reported symptoms of genital herpes (P = 0.003), perceived that genital herpes would have a strong negative influence on their sex life (P = 0.0001), and who subsequently tested positive for HSV-2 (P = 0.012).
Conclusions: Among STD clinic attendees, suspicion of genital herpes infection may translate into partner protective behaviour, but only for a minority of people and only with respect to sex with steady partners. Clinic-based and community-based education programs may benefit public health by teaching people (especially single women) how to effectively recognise symptoms of primary genital herpes infections. Reversing the often prevailing ethic of genital herpes as a 'community secret' will clearly be a challenge to these education programs.
abstract_id: PUBMED:10701208
Chronic genital ulcerations and HIV infection: 29 cases. Genital ulcers are common manifestations of infectious disease. The incidence of genital ulcers featuring a chronic course has increased since the beginning of the AIDS epidemic. The purpose of this 18-month cross-sectional study was to determine the main infectious causes of chronic genital ulcers (CGU) and their correlation with HIV infection. A total of 29 patients with CGU, defined as an ulcer showing no sign of healing after more than one month, were studied. Ages ranged from 24 to 54 years. The male-to-female sex ratio was 1:5. The etiology was herpes in 19 cases (65.5%), chancroid in 6 cases (20.6%), streptococcal infection in 2 cases (6.8%), Pseudomonas aeruginosa infection in 1 case (3.4%) and cutaneous amoebiasis in 1 case (3.4%). Twenty-two patients (75.8%) presented with HIV infection, including 16 with HIV-1 and 6 with both HIV-1 and HIV-2. All patients with herpes were HIV-positive. Eighteen of these patients were in stage C3 of HIV infection. Genital herpes was the main etiology of CGU in patients with HIV infection (p < 0.001). Conversely, chancroid was the main etiology in patients without HIV infection (p < 0.05). This finding suggests that herpetic CGU is highly suggestive of AIDS whereas chancroid CGU is not. Although syphilis is widespread in Africa, it was not a cause of CGU in this study. Searching for herpes simplex virus or Haemophilus ducreyi in patients with CGU is an important criterion for the presumptive diagnosis of AIDS in Africa.
abstract_id: PUBMED:34671875
Genital Herpes Disclosure Attitudes Among Men Who Have Sex with Men. An abundance of literature exists on disclosure attitudes related to sexually transmitted infections among MSM (men who have sex with men). However, comparatively few studies have examined these attitudes with respect to genital herpes. This cross-sectional study examined attitudes about herpes-related disclosure among Houston MSM. Convenience sampling at Houston-based MSM venues and events was conducted during December 2018 and January 2019, with 302 participants recruited. Participants were asked if an individual with genital herpes should disclose to others and if they would disclose to others if they had/have genital herpes. Factors associated with decreased belief that someone should disclose a genital herpes infection to others were history of genital herpes (OR 0.14, 95% CI [0.04, 0.55]) and race other than white, black, or Hispanic/Latino (OR 0.34, 95% CI [0.15, 0.77]). History of 0 to 1 sexual partner(s) in the past year was associated with increased belief that an individual should disclose (OR 2.43, 95% CI [1.19, 4.98]), while self-reported history of genital herpes was associated with decreased intent to disclose one's own infection to potential partners (OR 0.30, 95% CI [0.10, 0.91]). Self-reported history of genital herpes was associated with decreased belief that someone with genital herpes should tell others and with decreased likelihood to disclose one's own status. Lastly, race other than white, black, or Hispanic/Latino was associated with increased belief that someone with genital herpes should not tell others. Normalization of genital herpes could bolster intent to disclose genital herpes infection and improve sexual outcomes.
abstract_id: PUBMED:25248731
A neonate with acute liver failure caused by herpes simplex infection. Background: Herpes simplex virus (HSV) infection in neonates is rare. This infection is generally considered when there is a history of contact with a person who has a cold sore or when the mother has active genital herpes.
Case Description: A 7-day-old female was admitted to hospital on suspicion of sepsis. Aciclovir was considered, but ultimately not given due to low suspicion of herpes infection. Her clinical situation deteriorated within a couple of days. Due to the advanced liver damage, the disease had a fulminant course and the patient died.
Conclusion: Even without evidence of contact with HSV in the history, it is important to start aciclovir early in neonates with fever of unknown origin. A rare clinical presentation is acute liver failure. HSV infection can run a serious course in neonates; therefore, aciclovir should be started immediately if there is any suspicion.
abstract_id: PUBMED:8244358
The epidemiology of herpes simplex types 1 and 2 infection of the genital tract in Edinburgh 1978-1991. Introduction: The changing epidemiology of genital herpes in Edinburgh is described in relation to herpes simplex virus (HSV) Type 1 and herpes simplex virus Type 2 infection over a period of 14 years.
Methods: 2018 episodes of genital herpes in 1794 patients over a 14 year period were assessed. Data on age, sex, sexual orientation, geographical origin and herpes antibodies were also analysed.
Results: The proportion of cases that were HSV Type 1 increased over the period from approximately 20% to over 40%. Type 1 infection is more common in the young, in women and as a primary infection.
Conclusions: HSV Type 1 is of increasing importance as a cause of genital herpes in our population. This may reflect changes in sexual attitudes and practices over the past decade.
abstract_id: PUBMED:11269641
Management of neonatal herpes simplex virus infection. Herpes simplex viruses (HSV) are ubiquitous pathogens which can be transmitted vertically causing significant morbidity and mortality in neonates. Neonatal HSV infection is infrequent with an incidence ranging from 1 in 3,500 to 1 in 20,000, depending on the population. Neonatal HSV infection is much more frequent in infants born to mothers experiencing a primary HSV infection with an incidence approaching 50%, while infants born to mothers experiencing recurrent HSV infection have an incidence of less than 3%. Neonatal infections are clinically categorised according to the extent of the disease. They are: (i) skin, eye and mouth (SEM) infections; (ii) central nervous system infection (encephalitis)--neonatal encephalitis can include SEM infections; and (iii) disseminated infection involving several organs, including the liver, lung, skin and/or adrenals. The central nervous system may also be involved in disseminated infections. Caesarean section, where the amniotic membranes are intact or have been ruptured for less than 4 hours, is recommended for those women who have clinical evidence of active herpes lesions on the cervix or vulva at the time of labour. This procedure significantly decreases the risk of transmission to the infant. Diagnosis of neonatal infection requires a very high level of clinical awareness as only a minority of mothers will have a history of genital HSV infection even though they are infected. Careful physical examination and appropriate investigations of the infant should accurately identify the infection in the majority of cases. Treatment is recommended where diagnosis is confirmed or there is a high level of suspicion. The current recommendation for treatment is aciclovir 20 mg/kg 3 times daily by intravenous infusion. Careful monitoring of hydration and renal function as well as meticulous supportive care of a very sick infant is also required. The newer anti-herpes agents, valaciclovir and famciclovir, offer no advantage over aciclovir and are not recommended for neonatal HSV infection. Prognosis is dependent upon the extent of disease and the efficacy of treatment, with highest rates of morbidity and mortality in disseminated infections, followed by central nervous system infection and the least in SEM infection. However, SEM infection is associated with poor developmental outcome even in infants who do not have encephalitis. Studies to improve the outcome of SEM infection are in progress. Neonatal HSV infections, although being relatively uncommon, are associated with significant morbidity and mortality if unrecognised and specific treatment is delayed. Diagnosis relies on a high level of clinical suspicion and appropriate investigation. With early therapy, the prognosis for this infection is considerably improved.
abstract_id: PUBMED:12508142
Herpes simplex virus type 2 infection as a risk factor for human immunodeficiency virus acquisition in men who have sex with men. The association of human immunodeficiency virus (HIV) acquisition with herpes simplex virus type 2 (HSV-2) was assessed among men who have sex with men (MSM) in a nested case-control study of 116 case subjects who seroconverted to HIV during follow-up and 342 control subjects who remained HIV seronegative, frequency-matched by follow-up duration and report of HIV-infected sex partner and unprotected anal sex. The baseline HSV-2 seroprevalence was higher among case (46%) than control (34%) subjects (P=.03); the HSV-2 seroincidence was 7% versus 4% (P=.3). Only 15% of HSV-2-infected MSM reported herpes outbreaks in the past year. HIV acquisition was associated with prior HSV-2 infection (odds ratio [OR], 1.8; 95% confidence interval [CI], 1.1-2.9), reporting >12 sex partners (OR, 2.9; 95% CI, 1.4-6.3), and reporting fewer herpes outbreaks in the past year (OR, 0.3; 95% CI, 0.1-0.8). HSV-2 increases the risk of HIV acquisition, independent of recognized herpes lesions and behaviors reflecting potential HIV exposure. HSV-2 suppression with antiviral therapy should be evaluated as an HIV prevention strategy among MSM.
abstract_id: PUBMED:24015854
Sex differences in health care provider communication during genital herpes care and patients' health outcomes. Research in primary care medicine demonstrates that health care providers' communication varies depending on their sex, and that these sex differences in communication can influence patients' health outcomes. The present study aimed to examine the extent to which sex differences in primary care providers' communication extend to the sensitive context of gynecological care for genital herpes and whether these potential sex differences in communication influence patients' herpes transmission prevention behaviors and herpes-related quality of life. Women (N = 123) from the United States recently diagnosed with genital herpes anonymously completed established measures in which they rated (a) their health care providers' communication, (b) their herpes transmission prevention behaviors, and (c) their herpes-related quality of life. The authors found significant sex differences in health care providers' communication; this finding supports that sex differences in primary care providers' communication extend to gynecological care for herpes. Specifically, patients with female health care providers indicated that their providers engaged in more patient-centered communication and were more satisfied with their providers' communication. However, health care providers' sex did not predict women's quality of life, a finding that suggests that health care providers' sex alone is of little importance in patients' health outcomes. Patient-centered communication was significantly associated with greater quality-of-life scores and may provide a promising avenue for intervention.
abstract_id: PUBMED:34765942
Bipolar herpes simplex infection in a human immunodeficiency virus-infected individual. Herpes simplex infection is the most common infection among human immunodeficiency virus-infected individuals. However, the atypical manifestations of herpes simplex virus may confound even an astute physician. Hand involvement is rarely associated with genital herpes infection, and the involvement of widespread areas healing with debilitating scarring is uncommon.
abstract_id: PUBMED:34936238
Guinea Pig and Mouse Models for Genital Herpes Infection. This article describes procedures for two preclinical animal models for genital herpes infection. The guinea pig model shares many features of genital herpes in humans, including a natural route of inoculation, self-limiting primary vulvovaginitis, spontaneous recurrences, symptomatic and subclinical shedding of HSV-2, and latent infection of the associated sensory ganglia (lumbosacral dorsal root ganglia, DRG). Many humoral and cytokine responses to HSV-2 infection in the guinea pig have been characterized; however, due to the limited availability of immunological reagents, assessments of cellular immune responses are lacking. In contrast, the mouse model has been important in assessing cellular immune responses to herpes infection. Both the mouse and guinea pig models have been extremely useful for evaluating preventative and immunotherapeutic approaches for controlling HSV infection and recurrent disease. In this article, we describe procedures for infecting guinea pigs and mice with HSV-2, scoring subsequent genital disease, and measuring replicating virus to confirm infection. We also provide detailed protocols for dissecting and isolating DRG (the site of HSV-2 latency), quantifying HSV-2 genomic copies in DRG, and assessing symptomatic and subclinical shedding of HSV-2 in the vagina. © 2021 Wiley Periodicals LLC. Basic Protocol 1: Primary and recurrent genital herpes infection in the guinea pig model Support Protocol 1: Blood collection via lateral saphenous vein or by cardiac puncture after euthanasia Support Protocol 2: Dissection and isolation of dorsal root ganglia from guinea pigs Support Protocol 3: PCR amplification and quantification of HSV-2 genomic DNA from samples Basic Protocol 2: Primary genital herpes infection in the mouse model Alternate Protocol: Flank infection with HSV-2 in the mouse model Support Protocol 4: Dissection and isolation of mouse dorsal root ganglia.
Answer: Yes, suspicion of genital herpes infection is associated with avoiding sex. A study conducted in a publicly-funded STD clinic in the southern USA found that patients who suspected they had genital herpes were more likely to abstain from sex with steady partners due to concerns about transmitting a perceived STD. Specifically, 38.7% of those who suspected they had genital herpes reported avoiding sex with steady partners, compared to 28.0% among those who did not suspect infection (PUBMED:18771644).
Instruction: Is the performance of MRI in preoperative staging of breast cancer independent of clinical and histological factors?
Abstracts:
abstract_id: PUBMED:24836350
Is the performance of MRI in preoperative staging of breast cancer independent of clinical and histological factors? A subgroup analysis. Objective: To determine whether preoperative breast MRI is more useful in patients according to their breast density, age, menopausal status, and biopsy findings of carcinoma in situ.
Material And Methods: We retrospectively studied 264 patients treated for breast cancer who had undergone mammography, ultrasonography, and MRI. We compared the size of the tumor on the three techniques and the sensitivity of the techniques for detecting additional lesions both in the overall group and in subgroups of patients classified according to their breast density, age, menopausal status, and histological findings of intraductal carcinoma. The definitive histological diagnosis was used as the gold standard.
Results: MRI was the technique that was most concordant with the histological findings for the size of the lesion, and it was also the technique that detected the most additional lesions. With MRI, we observed no differences in lesion size between the overall group and the subgroups in which MRI provided added value. Likewise, we observed no differences in the number of additional lesions detected in the overall group except for multicentric lesions, the number of which was higher in older patients (P=.02). In the subgroup of patients in which MRI provided added value, the sensitivity for bilateral lesions was higher in patients with fatty breasts (P=.04). Multifocal lesions were detected significantly better in premenopausal patients (P=.03).
Conclusions: MRI is better than mammography and better than ultrasonography for establishing the size of the tumor and for detecting additional lesions. Our results did not identify any subgroups in which the technique was more useful.
abstract_id: PUBMED:34159733
The selective use of preoperative MRI in the staging of breast cancer: a single-institution experience. Introduction: Routine use of preoperative breast magnetic resonance imaging (MRI) for loco-regional staging of breast cancer remains controversial. At Counties Manukau District Health Board (CMDHB), preoperative breast MRI is used selectively within a multidisciplinary setting. The purpose of this study is to determine the accuracy of selective use of preoperative MRI in staging loco-regional disease and how it has impacted our clinical practice.
Methods: Patients who received preoperative MRI at CMDHB between October 2015 and October 2018 were identified on a prospective database. The decision to offer MRI was made by multidisciplinary consensus. Patient data were collected retrospectively from clinical, imaging and histology records. The accuracy of MRI was determined by comparing it against histology as gold standard, and its potential contribution to treatment decisions and treatment delay was determined by clinical record review.
Results: Ninety-two patients received preoperative MRI. Additional foci of cancer were identified in ten patients (11%). Sixteen patients (17%) required additional biopsies. In fourteen patients (15%), MRI identified more extensive disease than conventional imaging prompting a change of surgical management. This 'upstaging' was confirmed histologically in twelve (13%). In one (1%) patient, MRI incorrectly 'downstaged' disease, but it did not alter the management. No patients experienced a delay in treatment due to MRI.
Conclusion: A selective, considered use of preoperative MRI within a multidisciplinary setting at our local institution results in more biopsies but with an acceptable risk-benefit ratio. It provides accurate staging to aid treatment decisions without resulting in a delay in treatment.
abstract_id: PUBMED:28647387
Practical Considerations for the Use of Breast MRI for Breast Cancer Evaluation in the Preoperative Setting. Preoperative contrast-enhanced (CE) breast magnetic resonance imaging (MRI) remains controversial in the newly diagnosed breast cancer patient. Additional lesions are frequently discovered in these patients with CE breast MRI. As staging and treatment planning evolve to include more information on tumor biology and aggression, so should our consideration of extent of disease. Directing CE breast MRI to those patients most likely to have additional disease may be beneficial. We sought to develop practical guidance for the use of preoperative CE breast MRI in the newly diagnosed breast cancer patient based on recent scientific data. Our review suggests several populations for whom preoperative breast MRI is most likely to find additional disease beyond that seen on conventional imaging. These can be viewed in three categories: (1) tumor biology: patients with invasive lobular carcinoma or aggressive tumors such as triple-negative breast cancer (estrogen receptor negative, progesterone receptor negative, and human epidermal growth factor receptor 2 (HER2) negative) and HER2-positive tumors; (2) patient characteristics: dense breast tissue or younger age, especially those aged <60; and (3) clinical scenarios: patients with more sonographic disease than expected or those who are node-positive at initial diagnosis. Focusing breast MRI on patients with any of the aforementioned characteristics may help utilize preoperative breast MRI where it is likely to have the most impact.
abstract_id: PUBMED:18369582
Magnetic resonance imaging in preoperative staging for breast cancer: pros and cons. In oncologic patients, staging of the disease extent is of paramount importance. Imaging studies are used to decide whether the patient is a surgical candidate; if this is the case, imaging is used for detailed planning of the surgical procedure itself. Even in patients with a limited prognosis, the first priority is always to achieve clear margins. Due to the widespread use of screening mammography, breast cancers are among the few cancers that are almost always diagnosed in an operable stage and are operated on with curative intention. It is well established that magnetic resonance imaging (MRI) is far superior to mammography (with and without concomitant ultrasound) for mapping the local extent of breast cancer. Accordingly, there is good reason to suggest that preoperative breast MRI should be considered an integral part of breast-conserving treatment. Still, it is only rarely used in clinical practice. Arguments against its use are its high costs, an allegedly high number of false-positive findings, a lack of MR-guided breast biopsy facilities, a lack of evidence from randomized prospective trials and, notably, fear of "overtreatment". This paper discusses the reservations against staging MRI and weighs them against its clinical advantages. The point is made that radiologists as well as breast surgeons should be aware of the possibility of overtreatment, i.e. unnecessary mastectomy for very small, "MRI-only" multicentric cancer foci that would indeed be sufficiently treated by radiation therapy. There is a clear need to adapt the guidelines established for treatment of mammography-diagnosed multicentric breast cancer to account for the additional use of MRI for staging. Until these guidelines are available, the management of additional small multicentric cancer manifestations diagnosed by MRI alone must be decided on wisely and with caution. MRI for staging may only be done in institutions that can also offer MR-guided tissue sampling, preferably MR-guided vacuum-assisted biopsy, to provide preoperative histological proof of lesions visible by breast MRI alone.
abstract_id: PUBMED:26323190
Who may benefit from preoperative breast MRI? A single-center analysis of 1102 consecutive patients with primary breast cancer. Several authors question the potential benefit of preoperative magnetic resonance imaging (MRI) against the background of possible overdiagnosis, false-positive findings, and unnecessary resections in patients with newly diagnosed breast cancer. In order to enable better selection of patients who should undergo preoperative MRI after histologically confirmed breast cancer, the present analysis was performed. We aimed to evaluate the influence of preoperative breast MRI in patients with newly diagnosed breast cancer to find subgroups of patients that are most likely to benefit from preoperative MRI through the detection of occult malignant foci. A total of 1102 consecutive patients who underwent treatment for primary breast cancer between 2002 and 2013 were retrospectively analyzed. All patients underwent triple assessment by breast ultrasound, mammography, and bilateral breast MRI. MRI findings not seen on conventional imaging that suggested additional malignant disease were found in 344 cases (31.2 %). Histologically confirmed malignant foci were found in 223 patients (20.2 %) within the index breast and in 28 patients (2.5 %) in the contralateral breast. The corresponding rates of false-negative biopsies were 31 (2.8 %) and 62 (5.6 %), respectively. Premenopausal women (p = 0.024), lobular invasive breast cancer (p = 0.02) as well as patients with high breast density [American College of Radiology (ACR) 3 + 4; p = 0.01] were significantly associated with additional malignant foci in the index breast. Multivariate analysis confirmed lobular histology (p = 0.041) as well as the co-factors "premenopausal stage" and "high breast density (ACR 3+4)" (p = 0.044) to be independently significant. Previous studies revealed that breast MRI is a reliable tool for predicting tumor extension as well as for the detection of additional ipsilateral and contralateral tumor foci in histologically confirmed breast cancer. In the present study, we demonstrate that especially premenopausal patients with high breast density as well as patients with lobular histology seem to benefit from preoperative MRI.
abstract_id: PUBMED:26477029
Clinical utility of 18F-FDG-PET/MR for preoperative breast cancer staging. Objective: To evaluate the performance of 18F-fluorodeoxyglucose (FDG) positron emission tomography magnetic resonance imaging (PET/MR) for preoperative breast cancer staging.
Methods: Preoperative PET/MR exams of 58 consecutive women with breast cancer were retrospectively reviewed. Histology and mean follow-up of 26 months served as gold standard. Four experienced readers evaluated primary lesions, lymph nodes and distant metastases with contrast-enhanced MRI, qualitative/quantitative PET, and combined PET/MR. ROC curves were calculated for all modalities and their combinations.
Results: The study included 101 breast lesions (83 malignant, 18 benign) and 198 lymph node groups, (34 malignant, 164 benign). Two patients had distant metastases. Areas under the curve (AUC) for breast cancer were 0.9558, 0.8347 and 0.8855 with MRI, and with qualitative and quantitative PET/MR, respectively (p = 0.066). Sensitivity for primary cancers with MRI and quantitative PET/MR was 100 % and 77 % (p = 0.004), and for lymph nodes 88 % and 79 % (p = 0.25), respectively. Specificity for MRI and PET/MR for primary cancers was 67 % and 100 % (p = 0.03) and for lymph nodes 98 % and 100 % (p = 0.25).
Conclusions: In breast cancer patients, MRI alone has the highest sensitivity for primary tumours. For nodal metastases, both MRI and PET/MR are highly specific.
Key Points: • MRI alone and PET/MR have a similar overall diagnostic performance. • MRI alone has a higher sensitivity than PET/MR for local tumour assessment. • Both MRI and PET/MR have a limited sensitivity for nodal metastases. • Positive lymph nodes on MRI or PET/MR do not require presurgical biopsy.
abstract_id: PUBMED:34017683
Preoperative Staging in Breast Cancer: Intraindividual Comparison of Unenhanced MRI Combined With Digital Breast Tomosynthesis and Dynamic Contrast Enhanced-MRI. Objectives: To evaluate the accuracy in lesion detection and size assessment of Unenhanced Magnetic Resonance Imaging combined with Digital Breast Tomosynthesis (UE-MRI+DBT) and Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI), in women with known breast cancer.
Methods: A retrospective analysis was performed on 84 patients with a histological diagnosis of breast cancer who underwent MRI on a 3T scanner and DBT at our institution during 2018-2019. Two radiologists, with 15 and 7 years of experience in breast imaging respectively, reviewed DCE-MRI and UE-MRI (including DWI and T2-w) + DBT images in separate reading sessions, unaware of the final histological examination. DCE-MRI and UE-MRI+DBT sensitivity, positive predictive value (PPV) and accuracy were calculated, using histology as the gold standard. Spearman correlation and regression analyses were performed to evaluate lesion size agreement between DCE-MRI vs Histology, UE-MRI+DBT vs Histology, and DCE-MRI vs UE-MRI+DBT. Inter-reader agreement was evaluated using Cohen's κ coefficient. The McNemar test was used to identify differences in terms of detection rate between the two methodological approaches. Spearman's correlation analysis was also performed to evaluate the correlation between ADC values and histological features.
Results: 109 lesions were confirmed on histological examination. DCE-MRI showed high sensitivity (100% Reader 1, 98% Reader 2), good PPV (89% Reader 1, 90% Reader 2) and accuracy (90% for both readers). UE-MRI+DBT showed 97% sensitivity, 91% PPV and 92% accuracy, for both readers. Lesion size Spearman coefficients were 0.94 (Reader 1) and 0.91 (Reader 2) for DCE-MRI vs Histology, and 0.91 (Reader 1) and 0.90 (Reader 2) for UE-MRI+DBT vs Histology (p-value <0.001). The DCE-MRI vs UE-MRI+DBT regression coefficient was 0.96 for Reader 1 and 0.94 for Reader 2. Inter-reader agreement was 0.79 for DCE-MRI and 0.94 for UE-MRI+DBT. The McNemar test did not show a statistically significant difference between DCE-MRI and UE-MRI+DBT (p-value >0.05). Spearman analyses showed an inverse correlation between ADC values and histological grade (p-value <0.001).
Conclusions: DCE-MRI was the most sensitive imaging technique in breast cancer preoperative staging. However, UE-MRI+DBT demonstrated good sensitivity and accuracy in lesion detection and tumor size assessment. Thus, UE-MRI could be a valid alternative when patients have already undergone DBT.
abstract_id: PUBMED:35258769
Additional Workups Recommended During Preoperative Breast MRI: Methods to Gain Efficiency and Limit Confusion. Background: Preoperative breast MRI is indicated for staging but can lead to complex imaging workups. This study reviewed imaging recommendations made on preoperative MRI exams, to simplify management approaches for patients with newly diagnosed breast cancer.
Methods: This retrospective single-institution review was restricted to women with breast cancer who underwent staging MRI. Additional breast lesions, separate from index tumors, recommended for additional workup or surveillance were assessed to see which were detected and which characteristics predicted success in detection. Univariate mixed-effects logistic modeling predicted the likelihood of finding lesions using MRI-directed ultrasound (US), with odds ratios reported. Tests were two-sided, with a p value lower than 0.05 considered significant.
Results: In this study, 534 (39.6%) patients had recommendations for additional workup after preoperative MRI. MRI detected additional malignancy in 178 patients (33.3%). Half of the 66 patients who refused an additional workup and opted for mastectomy had additional malignancies at mastectomy. MRI-directed US was 14 times more likely to detect masses than nonmass enhancement (NME) (p < 0.001). NME was detected on US in only 16% of cases, with one third of subsequent biopsy results considered discordant. Probably benign assessments were given to 35 patients, with 23% not returning for follow-up evaluation and 7% returning at least 6 months later than recommended.
Conclusion: Use of preoperative breast MRI has increased. Although it can add value, institutions should establish indications and expectations to prevent unnecessary workups. Limiting MRI-directed US to masses, avoiding probably benign assessments, and consulting with patients after MRI but prior to workups can prevent unnecessary exams and confusion.
abstract_id: PUBMED:26374470
Role of MRI in the staging of breast cancer patients: does histological type and molecular subtype matter? Objective: To assess the role of MRI in the pre-operative staging of patients with different histological types and molecular subtypes of breast cancer, by the assessment of the dimensions of the main tumour and identification of multifocal and/or multicentric disease.
Methods: The study included 160 females diagnosed with breast cancer who underwent breast MRI for pre-operative staging. The size of the primary tumour evaluated by MRI was compared with the pathology (gold standard) using the Pearson's correlation coefficient (r). The presence of multifocal and/or multicentric disease was also evaluated.
Results: The mean age of patients was 52.6 years (range 30-81 years). Correlation between the largest dimension of the main tumour measured by MRI and pathology was worse for non-special type/invasive ductal carcinoma than for other histological types and was better for luminal A and triple-negative than for luminal B and Her-2 molecular subtypes. Multifocal and/or multicentric disease was present in 48 patients (30.0%), and it was more common in breast carcinomas classified as Her-2 molecular subtype. There was no statistically significant difference in the frequency of multifocal and/or multicentric tumours identified only by MRI in relation to histological type or molecular subtype.
Conclusion: The results of this retrospective study demonstrated that histological types and molecular subtypes might influence the MRI assessment of breast cancers, especially in the evaluation of tumour size.
Advances In Knowledge: The real benefit of MRI for treatment planning in patients with breast cancer may be different according to the histological type and molecular subtype.
abstract_id: PUBMED:37201248
The performance of contrast-enhanced mammography and breast MRI in local preoperative staging of invasive lobular breast cancer. Purpose: Breast MRI is considered the best modality for preoperative staging of invasive lobular carcinoma (ILC). However, contrast-enhanced mammography (CEM) shows comparable diagnostic performance to MRI, but evidence of CEM's accuracy in women diagnosed with ILC is scant. We aimed to retrospectively evaluate CEM and MRI accuracy in preoperative staging of ILC.
Methods: ILC cases diagnosed between 2013 and 2021 were collected. For both modalities, tumour diameter was extracted from the reports. Bland-Altman plots were used to assess discrepancies between size measurements according to imaging and histopathological findings. CEM and MRI's ability to detect multifocal/contralateral cancer was expressed as sensitivity, specificity, and diagnostic odds ratios (DORs). Pairwise comparison of women undergoing both CEM and MRI was not performed.
Results: 305 ILC-cases fulfilled preset inclusion criteria. Mean age was 63.7 years. Preoperative staging was performed using MRI or CEM in 266 (87.2%) and 77 (25.2%) cases, respectively. MRI and CEM overestimated tumour size by 1.5 and 2.1 mm, respectively. Sensitivity to detect multifocal disease was higher for MRI than for CEM (86% versus 78%), but specificity was lower for MRI (79% versus 92%). For detection of contralateral breast cancer, sensitivity for MRI was 96% versus 88% for CEM, and specificity was 92% and 99%, respectively. For both indications, DOR was higher for CEM, but differences were non-significant (p = 0.56 and p = 0.78).
Conclusion: CEM and MRI overestimate ILC size with comparable systematic and random errors. MRI's higher sensitivity for detection of multifocal/contralateral cancers is accompanied by lower specificity, but the difference in discriminative ability between the two modalities was not significant.
Answer: The performance of MRI in preoperative staging of breast cancer appears to be influenced by certain clinical and histological factors. Studies have shown that MRI is generally better than mammography and ultrasonography for establishing the size of the tumor and for detecting additional lesions (PUBMED:24836350). However, the utility of MRI may vary depending on specific patient characteristics and tumor biology.
For instance, premenopausal patients with high breast density and patients with lobular histology seem to benefit more from preoperative MRI due to a higher likelihood of detecting additional malignant foci (PUBMED:26323190). Additionally, MRI's accuracy in assessing tumor size and the presence of multifocal and/or multicentric disease may differ according to histological types and molecular subtypes of breast cancer (PUBMED:26374470).
Furthermore, MRI is more likely to detect additional disease in patients with invasive lobular carcinoma or aggressive tumors such as triple-negative and HER2-positive tumors, as well as in patients with dense breast tissue or younger age, especially those under 60 (PUBMED:28647387). In contrast, the performance of MRI in detecting multifocal lesions was significantly better in premenopausal patients (PUBMED:24836350).
The selective use of preoperative MRI within a multidisciplinary setting has been shown to provide accurate staging to aid treatment decisions without resulting in a delay in treatment, suggesting that MRI can be particularly useful when used judiciously (PUBMED:34159733).
In summary, while MRI is a valuable tool in the preoperative staging of breast cancer, its performance is not entirely independent of clinical and histological factors. Certain subgroups of patients, such as those with specific histological types, molecular subtypes, or breast density, may derive more benefit from MRI in the staging process. |
Instruction: Is the value of oral health related to culture and environment, or function and aesthetics?
Abstracts:
abstract_id: PUBMED:26738216
Is the value of oral health related to culture and environment, or function and aesthetics? Objective: To examine the disutility of tooth loss. It compared how people value their teeth in two countries which are culturally similar in order to explore the effect of culture on self-perceptions of oral health.
Basic Research Design: Cross sectional study.
Participants: Participants were recruited from subjects attending two hospitals in Turkey and in Iran.
Interventions: Nineteen descriptions of mouths with varying degrees and types of tooth loss were presented to the participants. They were shown mouth models of partially edentate dentitions, and the missing teeth were explained in relation to the participant's own mouth. The participants were specifically asked to consider the role their teeth played in function (chewing), communication (speech) and aesthetics (looks), along with "all the other things that make your mouth important".
Main Outcome Measures: The participants were asked to indicate on a visual analogue scale how they would value the health of their mouth if they lost the tooth/teeth described and the resultant space was left unrestored.
Results: Overall, 152 subjects participated: 78 in Turkey and 74 in Iran; 83 were female and 69 male. Their mean age was 29.5 years (SD 9.3); 62.5% had experienced tooth loss and 37.5% had complete (or completely restored) dentitions. Although there were no differences between the two countries in the degree of utility people attached to anterior teeth, Turkish participants attached significantly more disutility than Iranians to the loss of premolar and molar teeth (p < 0.003).
Conclusion: Country of origin had an influence on the value placed on certain parts of the dentition and this effect is independent of the number of missing teeth, gender and age. This implies that attitudes to oral health are influenced by prevalent cultural attitudes more than by function.
abstract_id: PUBMED:31989112
Oral health related quality of life and self-esteem in a general population. Background And Aims: Interest in research on both oral health-related quality of life and dental aesthetics has increased in recent years. The aim of the current study was to evaluate the perception of oral health, dental aesthetics and self-esteem in a general population.
Methods: A group of students of the Faculty of Dental Medicine, Cluj-Napoca, were trained in questionnaire interviewing. The students were asked to administer the following questionnaires to a maximum of five close persons each: the OHIP-14Aesthetic questionnaire, the Rosenberg self-esteem scale and a questionnaire collecting demographic data. Each interviewed subject provided informed consent. The sample included 97 subjects with an age range of 18-75 years. For each of the three questionnaires, overall scores were computed and used to calculate Pearson correlations and to perform inferential statistical tests (t-tests).
Results: For the complete sample (N=97), the highest OHIP-14Aesthetic scores were obtained for the functional limitation (mean score of 2.22), physical pain (mean score of 2.72) and psychological discomfort (mean score of 1.37) subscales. The highest Rosenberg self-esteem scale scores were obtained for the following questions: "I think I am no good at all" (mean score of 3.50), "feel useless at times" (mean score of 3.53), "inclined to feel that I am a failure" (mean score 3.77), "positive attitude toward myself" (mean score of 3.50). Statistically significant correlations were registered between the overall Rosenberg self-esteem scale score and the scores of the following OHIP-14Aesthetic subscales: psychological discomfort (r = -0.201, p = 0.49), physical disability (r = -0.219, p = 0.031), psychological disability (r = -0.218, p = 0.032), social disability (r = -0.203, p = 0.046). The t-test revealed statistically significant gender differences with regard to the OHIP-14Aesthetic overall score (t(95) = -2.820, p = 0.006).
Conclusions: The current study indicates the existence of statistically significant gender differences in the perception of oral health and a series of dental aesthetics elements in a general population. Moreover, statistically significant correlations were obtained between the perception of oral health and the perception of self-esteem.
abstract_id: PUBMED:34224662
Comparing oral health-related quality of life, oral function and orofacial aesthetics among a group of adolescents with and without malocclusions. Objective: The aim was to analyze how malocclusion relates to perception of oral health-related quality of life (OHRQOL), oral function and orofacial aesthetics among a group of adolescents in Sweden.
Material And Methods: Thirty patients with a need for orthodontic treatment (IOTN-DHC grades 4 and 5) and 30 patients with normal occlusion (IOTN-DHC grade 1), aged 13-17 years, were included in the study. A questionnaire containing three parts was used: the Oral Health Impact Profile (OHIP-S14), the Jaw Functional Limitation Scale (JFLS-20) and the Orofacial Aesthetic Scale (OES). Malocclusions, orthodontic treatment need and confounders, such as earlier dental treatment and temporomandibular disorders, were registered.
Results: Adolescents with malocclusions were more often embarrassed by their mouth and teeth compared to controls (p < .05). Aesthetically, adolescents with malocclusions were more negatively affected by the appearance of the mouth and teeth as well as the over-all facial appearance (p < .05).
Conclusions: Malocclusions clearly affect the adolescents with a need for orthodontic treatment in this study. They influence their OHRQOL in the psychosocial impact dimension. Aesthetically, these adolescents perceive their oral and facial appearance as worse compared to controls. Although embarrassed and displeased with their oral appearance, they still rate themselves as having good oral health with low levels of jaw functional limitation.
abstract_id: PUBMED:27746332
Validation of the "Quality of Life related to function, aesthetics, socialization, and thoughts about health-behavioural habits (QoLFAST-10)" scale for wearers of implant-supported fixed partial dentures. Objectives: To validate the 'Quality of Life related to function, aesthetics, socialization, and thoughts about health-behavioural habits (QoLFAST-10)' questionnaire for assessing the whole concept of oral health-related quality of life (OHRQoL) of implant-supported fixed partial denture (FPD) wearers.
Methods: 107 patients were assigned to: Group 1 (HP; n=37): fixed-detachable hybrid prostheses (control); Group 2 (C-PD, n=35): cemented partial dentures; and Group 3 (S-PD, n=35): screwed partial dentures. Patients answered the QoLFAST-10 and the Oral Health Impact Profile (OHIP-14sp) scales. Information on global oral satisfaction, socio-demographic, prosthetic, and clinical data was gathered. The psychometric capacity of the QoLFAST-10 was investigated. The correlations between both indices were explored by the Spearman's rank test. The effect of the study variables on the OHRQoL was evaluated by descriptive and non-parametric probes (α=0.05).
Results: The QoLFAST-10 was reliable and valid for implant-supported FPD wearers, who attained comparable results regardless of the connection system being cement or screws. Both fixed partial groups demonstrated significantly better social, functional, and total satisfaction than did HP wearers with this index. All groups revealed similar aesthetic-related well-being and consciousness about the importance of health-behavioural habits. Several study variables modulated the QoLFAST-10 scores.
Conclusions: Hybrid prostheses represent the least predictable treatment option, while cemented and screwed FPDs supplied equal OHRQoL as estimated by the QoLFAST-10 scale.
Clinical Significance: The selection of cemented or screwed FPDs should mainly rely on clinical factors, since no differences in patient satisfaction may be expected between both types of implant rehabilitations.
abstract_id: PUBMED:31013356
Oral health-related quality of life, oral aesthetics and oral function in head and neck cancer patients after oral rehabilitation. Head and neck cancer (HNC) is diagnosed in more than 500 000 patients every year worldwide with increasing prevalence. Oral rehabilitation is often needed after HNC treatment to regain oral function, aesthetics and oral health-related quality of life (OHRQoL). The objectives were to evaluate OHRQoL, oral aesthetics and oral function after oral rehabilitation in HNC patients and compare it to that of non-HNC patients. Eighteen patients treated for HNC who subsequently had oral rehabilitation (2014-2017), and a control group of eighteen age- and gender-matched non-HNC patients treated with removable prostheses (2014-2018) were included in a cross-sectional study. The OHRQoL was assessed by the Oral Health Impact Profile 49 questionnaire (OHIP-49), the oral aesthetics by the Prosthetic Esthetic Index (PEI) and the Orofacial Esthetic Scale (OES), and the oral function by the Nordic Orofacial Test-Screening (NOT-S). The HNC patients had worse oral function and OHRQoL than the control patients (mean NOT-S score 4.56 vs 0.56, P < 0.01 and mean OHIP-49 score 42.50 vs 20.94, P = 0.050). When including number of replaced teeth and type of prosthesis in the tests, no significant difference in OHRQoL was found between the groups. No difference was found in the overall aesthetic outcomes (mean PEI total score 32.28 vs 30.67, P = 0.367 and mean OES total score 48.78 vs 53.56, P = 0.321). Multiple regression analyses showed that being HNC patient compared to control patient impaired the oral function. Oral function is significantly impaired in HNC patients compared to non-HNC patients after oral rehabilitation.
abstract_id: PUBMED:37351386
Assessment of the psychological impact of dental aesthetics among undergraduate university students in Iraq. Aim: This study aimed to assess Iraqi university students' oral health-related quality of life (OHRQoL) according to sociodemographic variables and compare dental and non-dental students.
Methods: A cross-sectional study was carried out for students in multiple Iraqi universities from June 15, 2022, to July 15, 2022. A total of 771 individuals participated in the study using an online questionnaire. A pre-tested and validated Arabic version of the Psychosocial Impact of Dental Aesthetics Questionnaire (PIDAQ) was adopted as an evaluation tool. A P value of less than 0.05 was considered statistically significant. Reliability analysis was conducted using Cronbach's alpha.
Result: Cronbach's alpha score for the overall scales was 0.942, indicating excellent internal consistency. There were 69.8% (n = 538) dental students in the total sample. A significant difference was found between dental and non-dental students in the total PIDAQ scores and the other subscale domains (P < 0.05). Statistically significant differences in means were also noted by residency (P = 0.005) and household income of students (P < 0.001).
Conclusions: This study shows the reliability of the PIDAQ scale for assessing the psychological impact of dental aesthetics on undergraduate Iraqis. It was found that the perception of OHRQoL varies between dental and non-dental university students, and according to socioeconomic status and residency.
abstract_id: PUBMED:37473829
Perception of oral health related quality of life and orofacial aesthetics following restorative treatment of tooth wear: A five-year follow-up. Objectives: Non-carious tooth wear often has a multifactorial etiology and may lead to functional or aesthetically related problems. The most common complaints associated with tooth wear are dissatisfaction with dental appearance and a negative impact on the experienced Oral Health Related Quality of Life (OHRQoL). The aim of this study was to investigate the change in OHRQoL and the perception of aesthetics, following restorative treatment of moderate to severe tooth wear patients, with a five-year follow-up.
Methods: An explorative study, based on prospective data, was performed. OHRQoL and the perception of aesthetics were measured with the OHIP-NL and OES-NL. These questionnaires were completed before treatment, one month after treatment, and at 1-, 3- and 5-years post-treatment. Treatment involved full mouth reconstruction with composite resin restorations. The data was analysed as repeated measures by using a linear mixed-effects model.
Results: One hundred and twenty-three tooth wear patients who received restorative rehabilitation were included (97 males, 26 females, 37.5 ± 8.8 years old). Data showed a statistically significant increase in both experienced OHRQoL and orofacial appearance after restorative treatment. The OHIP scores remained stable over time, while the OES scores slightly decreased during the years after treatment. Regarding the seven domains of the OHIP, the largest difference in OHIP score was found in the domain of 'Psychological Discomfort'. The mean overall OHIP score was 1.8 at baseline and 1.3 at the 5-year recall. The mean OES score increased from 41.8 at baseline to 66.1 at the 5-year follow-up.
Conclusions: Tooth wear patients reported significant improvements in their OHRQoL and their perception of orofacial aesthetics after restorative treatment. This increase remained at least five years post-treatment.
Clinical Significance: The clinical impact of restorative treatment for tooth wear patients is considerable. This paper emphasizes the need to include a discussion of the patient related outcome measures when planning care.
abstract_id: PUBMED:36689299
The associations of dental aesthetics, oral health-related quality of life and satisfaction with aesthetics in an adult population. Aim: The aim of this study was to investigate the gender-specific associations between dental aesthetics, oral health-related quality of life (OHRQoL), and satisfaction with dental aesthetics in an adult population.
Materials And Methods: The study population consisted of 1780 individuals (822 males and 958 females) from the Northern Finland Birth Cohort 1966 (NFBC1966). Dental aesthetics were evaluated from digital 3D dental models using the Aesthetic Component (AC) of the Index of Orthodontic Treatment Need (IOTN). Layperson and orthodontist panels evaluated the dental aesthetics of a smaller sample (n = 100). OHRQoL was measured using the Oral Health Impact Profile (OHIP-14) questionnaire. Satisfaction with dental aesthetics was asked with one separate question. Gender-specific analyses consisted of Mann-Whitney U-tests and Spearman's correlation coefficients.
Results: More than half of the population had an aesthetically acceptable occlusion, and most of the individuals were satisfied with the aesthetics. The most severe aesthetic impairments were associated with the psychological dimensions of the OHIP-14. There were significant but weak associations between AC and satisfaction with aesthetics, and between satisfaction with aesthetics and OHRQoL. Significant gender differences were found, with men having higher mean AC scores but women reporting lower OHRQoL.
Conclusion: At the population level, most of the individuals were satisfied with their aesthetics, despite different dental aesthetic conditions. The most severe aesthetic impairments were associated with decreased psychological well-being, women reporting more impacts compared to men.
abstract_id: PUBMED:31287185
Dental-related function and oral health in relation to eating performance in assisted living residents with and without cognitive impairment. Aims: Despite the physiologic relationship, there is a lack of evidence on how dental-related function and oral health impact eating performance. This study aims to examine the association of eating performance with dental-related function and oral health among assisted living residents.
Methods And Results: This study was a secondary analysis of observational data collected from an instrument development study. Participants included 90 residents with normal to severely impaired cognition from three assisted livings. Multilevel mixed-effects ordered logistic models were used. The dependent variable was eating performance measured by the single "eating" item (scored from 0 to 4 on level of dependence). Independent variables were resident age, gender, dental-related function, and oral health. The resident and facility clustering effects accounted for 88% of variance in eating performance, among which 84% was explained by dental-related function and oral health. Eating performance was associated with dental-related function (coefficient = -0.10, 95% CI = -0.19, -0.01, P = .025), and was not associated with other characteristics.
Conclusion: Eating performance is influenced by the complex relationship with dental-related function and oral health. Novel interventions using interdisciplinary partnerships are needed to maintain dental-related function and oral health to optimize eating performance.
abstract_id: PUBMED:32228603
A biopsychosocial-cultural model for understanding oral-health-related quality of life among adolescent orthodontic patients. Background: Based on previous theoretical oral-health-related quality of life (OHRQoL) models and most recently framework, as well as sociocultural model of body image dissatisfaction, the current study aimed to investigate the effect of individual (dental aesthetics and dental appearance social comparison) and sociocultural factors (social reinforcement from parents, peers and mass media on dental aesthetics) as well as their interaction on psychosocial dimension of OHRQoL among adolescent orthodontic patients.
Methods: In this cross-sectional study comprising 427 adolescent orthodontic patients (151 boys and 276 girls) aged between 11 and 16 years old, the psychosocial dimension of OHRQoL was measured by Psychosocial Impact of Dental Aesthetics Questionnaire. Individual predictor of dental aesthetics was defined by the Aesthetic Component of the Index of Orthodontic Treatment Need, and dental appearance social comparison was assessed by four items adapted from Physical Appearance Comparison Scale. Sociocultural predictor of social reinforcement was measured by six items adapted from Perceived Sociocultural Pressure Scale. Spearman correlations, path analyses, and structural equation modeling were used to build up several predictive models.
Results: As hypothesized, two direct pathways were observed: patients' dental aesthetics and all three sources of social reinforcement directly predicted the psychosocial dimension of OHRQoL. Meanwhile, we observed one indirect pathway, in which the three sources of social reinforcement predicted the psychosocial dimension of OHRQoL, in part, through dental appearance social comparison.
Conclusions: This study provides preliminary evidence indicating that dental aesthetics, social reinforcement and dental appearance comparison are reliable predictors of psychosocial dimension of OHRQoL among adolescent orthodontic patients.
Answer: The value of oral health appears to be related to both culture and environment as well as function and aesthetics. A cross-sectional study comparing participants from Turkey and Iran found that although there were no differences between the two countries in the degree of utility people attached to anterior teeth, Turkish participants attached significantly more disutility than Iranians to the loss of premolar and molar teeth. This suggests that country of origin influences the value placed on certain parts of the dentition, indicating that attitudes to oral health are influenced by prevalent cultural attitudes more than by function (PUBMED:26738216).
Additionally, research on oral health-related quality of life (OHRQoL) and self-esteem in a general population found statistically significant gender differences in the perception of oral health and dental aesthetics elements. This study also found significant correlations between the perception of oral health and self-esteem (PUBMED:31989112). Furthermore, a study on adolescents with and without malocclusions in Sweden showed that malocclusions affected adolescents' OHRQoL, particularly in the psychosocial impact dimension, and their perception of oral and facial appearance (PUBMED:34224662).
The validation of the QoLFAST-10 scale for wearers of implant-supported fixed partial dentures also highlighted the importance of function, aesthetics, socialization, and thoughts about health-behavioral habits in assessing OHRQoL (PUBMED:27746332). Similarly, a study on head and neck cancer patients after oral rehabilitation emphasized the impact of oral rehabilitation on OHRQoL, oral aesthetics, and oral function (PUBMED:31013356).
Moreover, an assessment of the psychological impact of dental aesthetics among undergraduate university students in Iraq revealed that the perception of OHRQoL varies between dental and non-dental university students, and according to socioeconomic status and residency (PUBMED:37351386). A five-year follow-up study on patients with tooth wear showed significant improvements in OHRQoL and perception of orofacial aesthetics after restorative treatment (PUBMED:37473829).
Lastly, a study investigating the associations between dental aesthetics, OHRQoL, and satisfaction with dental aesthetics in an adult population found that severe aesthetic impairments were associated with decreased psychological well-being, with women reporting more impacts compared to men (PUBMED:36689299). |
Instruction: Sudden infant death syndrome in New Zealand: are risk scores useful?
Abstracts:
abstract_id: PUBMED:7707014
Sudden infant death syndrome in New Zealand: are risk scores useful? New Zealand National Cot Death Study Group. Study Objective: To evaluate the Christchurch, Invercargill, Dunedin (CID) and Oxford record linkage study (ORLS) risk scores in five regions of New Zealand and examine the effect of risk factors for sudden infant death syndrome (SIDS), such as prone sleeping position, maternal smoking, breast feeding, measures of illness, the use of antenatal classes, community health care, and medical services on a high and low risk group delineated by the CID score.
Design: This was a case-control study of infants dying of SIDS.
Setting: Both the cases and controls were born in one of five health districts in New Zealand and their parents were interviewed between 1 November 1987 and 31 October 1990.
Participants: The cases were 485 infants who died of SIDS. The controls were a random sample drawn from the same five regions in which the cases were born, chosen so that their age on the day on which they were interviewed was similar to the age at death of the cases. Risk scores were calculated for 387 cases and 1579 controls.
Measurements And Main Results: Using the recommended cut off points the sensitivity and specificity of the CID and ORLS were found to be similar to those described for other samples. The differences among the regions were significant. There was, however, no evidence that the association between SIDS and the risk factors considered was different in the high and low risk groups delineated by the CID score. The relative attributable risk for smoking was 32.3% in the high risk group. The excess risk that could be attributed to a different prevalence of any of the other risk factors in the high risk group was small when compared with the low risk group.
Conclusions: Health care resources should be spent on promoting and evaluating good child care practices for all, rather than identifying and promoting special interventions for those in the high risk category.
abstract_id: PUBMED:1888562
Two decades of change: cot deaths and birth score items in Canterbury, New Zealand. The individual items of two birth scores, and the scores themselves, were examined for a sample of 2065 mother-infant pairs for any changes between 1968 and 1986. About 100 births were randomly sampled for each year. The scoring systems used were the Sheffield risk score and the Christchurch-Invercargill-Dunedin score. A total of eight items from these scores was examined. The purpose was to discover whether the increasing cot death rate in Canterbury could be attributed to an increasing proportion of vulnerable infants, as determined by the items in the risk scores. No such relationship was uniformly found.
abstract_id: PUBMED:1436729
Sudden infant death syndrome in New Jersey: 1991. Sudden infant death syndrome (SIDS) is the leading cause of death in infants one month to one year of age. The New Jersey Sudden Infant Death Syndrome Resource Center gathers epidemiological data on all SIDS deaths in New Jersey, noting differences in population and countries.
abstract_id: PUBMED:18157197
Ten citation classics from the New Zealand Medical Journal. Although their contribution may go unrecognised at the time, if journal citations are indeed the "currency" of science, then citation classics could justifiably be regarded as the "gold bullion". This article examines the 10 most highly-cited articles published by the New Zealand Medical Journal (NZMJ), as of August 2007. By topic, the top cited article described a study of risk factors for sudden infant death syndrome among New Zealand infants, while 3 of the remaining 9 articles focused on asthma. Most citation classics from the NZMJ were comparatively recent, with the top cited article being published in 1991, 7 having been published in the 1980s, and 2 in the 1970s. Overall, this study clearly demonstrates the international relevance of New Zealand medical researchers, and the significant global impact of their findings for human health.
abstract_id: PUBMED:1480443
Evaluation of the Oxford and Sheffield SIDS risk prediction scores. Study Objective: To evaluate the clinical usefulness (sensitivity and specificity) of the Oxford and Sheffield birth scores for prospective identification of infants at high risk of SIDS.
Design: Retrospective medical record reviews of prospectively identified, autopsy-validated SIDS and living control infants.
Study Subjects: Consecutive sample of 140 infants, born between 1/1/83 and 12/31/87, who died suddenly and unexpectedly in the Avon Area Health Authority in southwest England between 1/1/84 and 12/31/88. Seventeen of the cases were excluded: 6 because they lacked adequate clinical records and 11 because they were not SIDS. The 637 control infants comprised every 80th delivery between 1/1/83 and 12/31/87 in the three major hospitals in the area.
Results: SIDS incidence was 2.85/1,000 live births. Using standard cut scores to define high SIDS risk (2.0 for Oxford and 500 for Sheffield), sensitivities were 0.55 and 0.35 and specificities were 0.78 and 0.89 for the Oxford and Sheffield scores, respectively. SIDS risk for infants in the high risk group was 7.3/1,000 (Oxford) and 9.3/1,000 (Sheffield).
Conclusions: Since there is no intervention with proven efficacy for SIDS prevention, and since approximately one half of SIDS cases occur in low risk groups, clinical use of these scoring systems for allocation of health care resources or personnel for the sole purpose of SIDS prevention is not justified.
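As a rough cross-check that is not stated in the abstract itself, the reported absolute risks in the high-risk groups can be approximately reproduced from the quoted sensitivity, specificity and incidence using PPV = (sensitivity × incidence) / [sensitivity × incidence + (1 − specificity) × (1 − incidence)]. With an incidence of 2.85/1,000, the Oxford score (sensitivity 0.55, specificity 0.78) gives 0.55 × 0.00285 / (0.55 × 0.00285 + 0.22 × 0.99715) ≈ 7.1/1,000, and the Sheffield score (sensitivity 0.35, specificity 0.89) gives ≈ 9.0/1,000, close to the 7.3/1,000 and 9.3/1,000 quoted above; the small differences presumably reflect rounding in the published figures.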
abstract_id: PUBMED:8819556
Postmortem examinations in a statewide audit of neonatal intensive care unit admissions in Australia in 1992. New South Wales Neonatal Intensive Care Unit Study Group. The objectives of the investigation were (i) to study infants who were registered in a statewide audit of tertiary neonatal intensive care units in New South Wales, Australia, in 1992 and who died, and (ii) to examine postmortem rates and the quality of postmortem reports, and to compare the clinical cause of death with the postmortem report. Death rates, data on clinical cause of death and postmortem status were collected prospectively as part of the routine audit. Postmortem reports were examined by LS. Fifteen percent of the cohort died and 43% had a postmortem examination. The postmortem rate was highest in the 28-36 week gestation group and in babies dying of pulmonary haemorrhage, intracranial haemorrhage or sudden infant death syndrome. Fewer than 50% of babies with a major congenital anomaly had a postmortem. The postmortem changed the major diagnosis in 10% of cases and added useful information in 17%. We conclude that postmortem examination should be an essential part of any audit of neonatal intensive care unit outcomes.
abstract_id: PUBMED:18181428
SIDS among Pacificans in New Zealand: an ecological perspective. Sudden Infant Death Syndrome (SIDS), or cot death, has affected ethnic groups unevenly in New Zealand. This paper examines risk factors for SIDS from a political ecology perspective. The New Zealand Cot-Death Study (1987-1990) identified four modifiable risk factors of major concern. These became the targets of a national prevention campaign. The four modifiable risk factors were: prone sleeping position of the baby, lack of breast feeding, maternal smoking and bed sharing. These four risk factors are more prevalent amongst Pacific and Māori people than others in New Zealand, and are influenced by cultural and other factors. This paper discusses these from a Pacific perspective. Through a discussion of the socio-economic situation of Pacific people in New Zealand and drawing on political ecology theory, it also challenges the classification of some risk factors as 'unmodifiable'. It argues that, through addressing the low socio-economic status of Pacificans, the so-called 'unmodifiable' risk factors are modifiable. Addressing these wider inequalities would contribute to the government's aims of closing the social and economic gaps affecting Pacificans' health status and reduce the risk of SIDS among Pacific infants.
abstract_id: PUBMED:6340467
Medicolegal investigation in New York City. History and activities 1918-1978. Medicolegal investigation in America can truly be said to have begun in an organized manner in 1918. The Massachusetts medical examiner system, which began in 1877, never developed with the central control and the completeness that characterizes the New York Office of the Chief Medical Examiner, nor did it influence the spread of this form of medicolegal investigation. An overview of the period before the establishment of the New York Office in 1918 and early experiences in coroner's investigation in New York is presented. The roots of the development of the office are discussed, as are the early days of the office under Dr. Charles Norris, whose influence on the spread of knowledge and on the provision of an important service to the community is detailed. The contributions of Alexander Gettler, the father of forensic toxicology in America, are also discussed. The contributions of Gonzales, Vance, Helpern, Umberger, and Wiener are also included. Special problems of New York City are described, including narcotic deaths, gas refrigerator deaths, malaria in addicts, plastic bag hazards, sudden infant deaths, operative deaths, as well as many famous cases involving murder, disasters, and unusual deaths over a period of 60 years. Milestones in the Office of the Chief Medical Examiner of New York City are listed, as are chronological details of major cases and problems. Several comparative figures of the workload and frequency of various types of death are also included. A relationship of deaths to different life-styles is noted.
abstract_id: PUBMED:9484426
Potential for prevention of premature death and disease in New Zealand. Aim: To assess the potential for preventing major causes of premature death, disease and injury in New Zealand.
Methods: Population attributable risks for major modifiable risk factors for important causes of death and disease in New Zealand were calculated using available national and international data on the relative risk of disease and the prevalence of risk factors in the relevant New Zealand population. Attainable changes in risk factor prevalences were used to model population attributable risks over the next five years. These estimates were then used to estimate potential reductions in absolute numbers of deaths from major diseases.
Results: High population attributable risks were found for several disease/risk factor combinations: smoking and lung cancer (81% in Maori), smoking and coronary heart disease (44% in Maori), smoking and sudden infant death syndrome (49% in Maori); raised serum cholesterol and coronary heart disease (58%); physical inactivity and coronary heart disease (35%), physical inactivity and diabetes (30%), physical inactivity and colorectal cancer (33%), physical inactivity and fractured neck of femur (65%); obesity and hypertension (66%), obesity and diabetes (46%); lack of fruits and vegetables and stomach cancer (46%), and colorectal cancer (34%). The estimated, readily attainable reduction in absolute numbers of annual deaths due to decrease in risk factor prevalence was greatest for smoking (457 deaths), followed by hypertension (326), physical inactivity (303) and raised serum cholesterol (142).
Conclusion: There is significant scope for reducing mortality from major non-communicable diseases although for some diseases such as the cancers, there will be a time lag of many years before the full benefits are realised. Together, reducing the prevalence of smoking, hypertension, physical inactivity and raised serum cholesterol would result in 1228 fewer deaths per year.
abstract_id: PUBMED:1391157
The national cot death prevention program in New Zealand. A case-control study examining the risk factors for sudden infant death syndrome (SIDS) in New Zealand identified three risk factors that are potentially amenable to modification: prone sleeping position of the infant, maternal smoking and lack of breastfeeding. In total these three risk factors may account for 79 per cent of deaths from SIDS in New Zealand. We describe the planning and implementation of the cot death prevention program, which has involved a wide range of groups and different strategies. The outcome of the prevention program is being evaluated.
Answer: The utility of risk scores for sudden infant death syndrome (SIDS) in New Zealand has been evaluated in several studies. The Christchurch, Invercargill, Dunedin (CID) and Oxford record linkage study (ORLS) risk scores were specifically assessed to determine their effectiveness in five regions of New Zealand. The study found that while the sensitivity and specificity of the CID and ORLS were similar to those described for other samples, there was no evidence that the association between SIDS and the risk factors considered was different in the high and low risk groups delineated by the CID score. The study concluded that health care resources should be focused on promoting good child care practices for all, rather than identifying and promoting special interventions for those in the high risk category (PUBMED:7707014).
Another study evaluated the Sheffield risk score and the Christchurch-Invercargill-Dunedin score and found no uniform relationship between the increasing cot death rate in Canterbury and an increasing proportion of vulnerable infants as determined by the items in the risk scores (PUBMED:1888562).
The evaluation of the Oxford and Sheffield SIDS risk prediction scores also indicated that their clinical use for allocation of health care resources or personnel for the sole purpose of SIDS prevention is not justified, as there is no intervention with proven efficacy for SIDS prevention, and approximately half of SIDS cases occur in low risk groups (PUBMED:1480443).
In summary, while risk scores have been developed and used in New Zealand to identify infants at high risk for SIDS, the evidence suggests that their utility may be limited. The focus has shifted towards promoting universal good child care practices and addressing modifiable risk factors such as prone sleeping position, maternal smoking, and lack of breastfeeding, which have been identified as major concerns and targets for national prevention campaigns (PUBMED:1391157). |
Instruction: Do recovery expectations change over time?
Abstracts:
abstract_id: PUBMED:23179742
Exploring the distribution and determinants of a change in recovery expectations following traumatic injury to Victorian workers. Purpose: To examine the frequency, distribution and determinants of a change in recovery expectations following non-life-threatening acute orthopaedic trauma in Victorian workers. It is proposed that interventions to modify recovery expectations may reduce the burden associated with injury. However, it is not known whether recovery expectations change over time or which factors are associated with change.
Methods: A prospective inception cohort study was carried out in which participants were recruited following presentation to hospital for treatment of their injury and followed for 6 months post-injury. Baseline data was obtained by survey and medical record review. Binary logistic regression was used to examine factors associated with a change in recovery expectations between week 2 and week 12 post-injury.
Results: The cohort comprised injured workers (n = 145) who had sustained non-life-threatening acute orthopaedic trauma. Factors associated with an improvement in recovery expectations or recovery timeframe included more years of education and higher social functioning. Participants whose injury involved a perception of responsibility by a third party were 7.18 (95% CI 1.86-27.68) times more likely to change to more negative recovery expectations and were less likely to change to an earlier recovery timeframe. Participants with more severe injuries were more likely to change their recovery timeframe to a longer timeframe.
Conclusion: A change in recovery expectations provides some information on injured workers who may benefit from targeted interventions to improve or maintain recovery expectations. The post-injury time-point at which recovery expectations are measured is important if recovery expectations are to inform long-term outcomes.
abstract_id: PUBMED:38484281
Patient and therapist change process expectations: Independent and dyadic associations with psychotherapy outcomes. Objective: Patients and therapists possess psychotherapy-related expectations, such as their forecast of what processes will promote improvement. Yet, there remains limited research on such change process expectations, including their independent and dyadic associations with psychotherapy outcome. In this study, we explored the predictive influence of participants' change process expectations, and their level of congruence, on therapeutic outcomes.
Methods: Patients (N = 75) and therapists (N = 17) rated their change process expectations at baseline, and patients rated their psychological distress at baseline and three months into treatment.
Results: Multilevel models indicated that patients' expectations for therapy to work through sharing sensitive contents openly and securely were positively related to subsequent improvement (B = -1.097; p = .007). On the other hand, patients' expectations for therapy to work through the exploration of unexpressed contents were negatively related to improvement (B = 1.388; p = .049). When patients rated the sharing of sensitive contents openly and securely higher than their therapists, they reported better outcomes (B = -16.528; p = .035).
Conclusion: These findings suggest that patients' expectations produce diverse effects during early stages of treatment, and that patients' belief in their ability to share sensitive contents may constitute a potential target to improve therapy effectiveness.
abstract_id: PUBMED:30129892
Caregiver expectations of recovery among persons with spinal cord injury at three and six months post-injury: A brief report. Objective: Caregivers of patients with spinal cord injury (SCI) have increased risk of depression, anxiety, and diminished quality of life. Unmet expectations for recovery may contribute to poorer outcomes.Design: Prospective, longitudinal observation study.Settings: Trauma/Critical care ICU at baseline, telephone for follow-ups.Participants: Caregivers of patients with SCI (n = 13).Interventions: None.Outcome Measures: Expectations for recovery were assessed across four primary domains identified in a review of the literature including: pain severity, level of engagement in social/recreational activities, sleep quality, and ability to return to work/school. Caregivers' forecasts of future recovery were compared to later perceived actual recovery.Results: At three months, 75% of caregivers had unmet expectations for social engagement recovery, 50% had unmet expectations for pain decrease, and 42% had unmet expectations for sleep improvement and resuming work. Rates of unmet expectations were similar at six months, with 70% of caregivers reporting unmet expectations for social engagement recovery, 50% with unmet expectations for pain decrease, and 40% with unmet expectations for sleep improvement.Conclusion: Unmet caregiver expectations for recovery could pose a risk for caregiver recovery and adjustment. Our results show that caregiver expectations merit further investigation for their link with caregiver mental health.
abstract_id: PUBMED:24913213
Do recovery expectations change over time? Purpose: While a considerable body of research has explored the relationship between patient expectations and clinical outcomes, few studies investigate the extent to which patient expectations change over time. Further, the temporal relationship between expectations and symptoms is not well researched.
Methods: We conducted a latent class growth analysis on patients (n = 874) with back pain. Patients were categorised in latent profile clusters according to the course of their expectations over 3 months.
Results: Nearly 80% of participants showed a pattern of stable expectation levels, these patients had either high, medium or low levels of expectations for the whole study period. While baseline levels of symptom severity did not discriminate between the three clusters, those in the groups with higher expectations experienced better outcome at 3 months. Approximately 15% of patients showed decrease in expectation levels over the study period and the remainder were categorised in a group with increasingly positive expectations. In the former clusters, decrease in expectations appeared to be concordant with a plateau in symptom improvement, and in the latter, increase in expectations occurred alongside an increase in symptom improvement rate.
Conclusions: The expectations of most people presenting to primary care with low back pain do not change over the first 3 months of their condition. People with very positive, stable expectations generally experience a good outcome. While we attempted to identify a causal influence of expectations on symptom severity, or vice versa, we were unable to demonstrate either conclusively.
abstract_id: PUBMED:36469842
The Role of Subjective Expectations for Exhaustion and Recovery: The Sample Case of Work and Leisure. We propose a new model of exhaustion and recovery that posits that people evaluate an activity as exhausting or recovering on the basis of the subjective expectation about how exhausting or recovering activities related to a certain life domain are. To exemplify the model, we focus as a first step on the widely shared expectations that work is exhausting and leisure is recovering. We assume that the association of an activity related to a life domain associated with exhaustion (e.g., work) leads people to monitor their experiences and selectively attend to signs of exhaustion; in contrast, while pursuing an activity related to a life domain associated with recovery (e.g., leisure), people preferentially process signs of recovery. We further posit that the preferential processing of signs of exhaustion (vs. recovery) leads to experiencing more exhaustion when pursuing activities expected to be exhausting (e.g., work activities) and more recovery when pursuing activities expected to be recovering (e.g., leisure activities). This motivational process model of exhaustion and recovery provides new testable hypotheses that differ from predictions derived from limited-resource models.
abstract_id: PUBMED:36879419
The role of expectations, subjective experience, and pain in the recovery from an elective and emergency caesarean section: A structural equation model. Background: Rapid return to mobilisation and daily function is essential for recovery after an elective and emergency caesarean section, prevention of short- and long-term complications, and mothers' well-being. High pain levels may delay recovery. Considering the biopsychosocial model, recovery is additionally complex and comprises social and psychological aspects.
Objective: This study examined the relationships between preoperative expectations, perioperative subjective experience, postoperative pain levels, and postoperative interruption of functioning and recovery.
Methods: Overall, 306 women completed a set of questionnaires on the fourth day after a caesarean section regarding their demographic information, levels of expectation matching the caesarean section and the perioperative subjective experience, and the pain levels and interruption to daily activities 24 hours postpartum.
Results: A structural equation model showed that a gap between preoperative expectations and the perioperative experience was related to a poorer perioperative subjective experience. This was associated with higher postoperative pain levels, which were directly and indirectly related to the interruption of various functions and activities during the initial 24 hours postpartum. The model explained 58% of the variance in postpartum functioning and had good goodness-of-fit (χ2 = 242.74, df = 112, χ2/df = 2.17, NFI = 0.93, CFI = 0.96, TLI = 0.94, RMSEA = 0.06). Additionally, pain levels were higher and daily activities were more severely impaired for women who had undergone emergency caesarean section compared to those who had undergone elective caesarean section.
Conclusion: The need for preoperative preparation and setting expectations, perioperative emotional support, continuous communication with the mother, and an efficient postoperative pain management was highlighted.
abstract_id: PUBMED:35473657
Client Versus Clinicians' Standards of Clinically Meaningful Change and the Effects of Treatment Expectations on Therapeutic Outcomes in Individuals With Posttraumatic Stress Disorder. There is limited research on the concordance between client perceptions and clinician standards of the degree of symptom change required to achieve meaningful therapeutic improvement. This was investigated in an adult sample (N = 147) who received trauma-focused cognitive-behavioral therapies for posttraumatic stress disorder (PTSD). We examined whether clients' benchmarks of change were related to actual outcomes and the relationship between client expectations and their treatment outcomes. Clients completed measures indexing the level of symptom reduction required (in their view) to reflect a benefit or recovery from treatment and treatment expectations. Actual PTSD severity was indexed pre- and posttreatment via self-report and clinician-administered interview. Results demonstrated that the amount of change clients said they required to experience a benefit or recovery was significantly larger than typical clinical research standards. Nonetheless, the majority of client benchmarks of change (79.7-81.8%) were consistent with clinical research standards of what constitutes benefit or recovery. Client benchmarks were generally positively correlated with their actual outcomes. Clients' belief that treatment would be successful was associated with greater reductions in PTSD symptoms. These findings provide preliminary evidence that the standards used to determine clinically significant change are somewhat consistent with clients' own perceptions of required symptom change.
abstract_id: PUBMED:25684135
Patients' expectations about total knee arthroplasty outcomes. Objective: The aim of this study was to ascertain patients' pre-operative expectations of total knee arthroplasty (TKA) recovery.
Methods: Two hundred and thirty-six patients with knee osteoarthritis (OA) who underwent TKA completed self-administered questionnaires before their surgery. Patients' expectations of time to functional recovery were measured using an ordinal time-response scale to indicate expected time to recovery for each of 10 functional activities. Expected time to recovery was dichotomized into short- and long-term expectations for recovery of each activity using median responses. Knee pain and function were ascertained using the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC). Other measures included the SF-36, the Depression, Anxiety and Stress Scale (DASS) and the Medical Outcomes Study Social Support Survey (MOS-SSS). Multivariate logistic regression was used to identify pre-operative characteristics associated with short- vs. long-term expectations.
Results: Sixty-five percent of the patients were females and 70% Whites; mean age was 65 years. Patients were optimistic about their time to functional recovery: over 65% of patients expected functional recovery within 3 months. Over 80% of the patients expected to perform 8 of the 10 activities within 3 months. Patients who expected to be able to perform the functional activities in <6 weeks were more likely to be younger, male, and have lower self-reported pain and better general health before surgery compared to those who expected to be able to perform the activities 3 months post-surgery or later.
Conclusion: Pre-operative patient characteristics may be important to evaluate when considering individual patients' expectations of post-operative outcomes.
abstract_id: PUBMED:33537663
Variability in Surgeon Approaches to Emotional Recovery and Expectation Setting After Adult Traumatic Brachial Plexus Injury. Purpose: Increasing emphasis has been placed on multidisciplinary care for patients with traumatic brachial plexus injury (BPI), and there has been a growing appreciation for the impact of psychological and emotional components of recovery. Because surgeons are typically charged with leading the recovery phase of BPI, our objective was to build a greater understanding of surgeons' perspectives on the care of BPI patients and potential areas for improvement in care delivery.
Methods: We conducted semistructured qualitative interviews with 14 surgeons with expertise in BPI reconstruction. The interview guide contained questions regarding the surgeons' practice and care team structure, their attitudes and approaches to psychological and emotional aspects of recovery, and their preferences for setting patient expectations. We used inductive thematic analysis to identify themes.
Results: There was a high degree of variability in how surgeons addressed emotional and psychological aspects of recovery. Whereas some surgeons embraced the practice of addressing these components of care, others felt strongly that BPI surgeons should remain focused on technical aspects of care. Several participants described the emotional toll that caring for BPI patients can have on surgeons and how this concern has affected their approach to care. Surgeons also recognized the importance of setting preoperative expectations. There was an emphasis on setting low expectations in an attempt to minimize the risk for dissatisfaction. Surgeons described the challenges in effectively counseling patients about a condition that is prone to substantial injury heterogeneity and variability in functional outcomes.
Conclusions: Our results demonstrate wide variability in how surgeons address emotional, psychological, and social barriers to recovery for BPI patients.
Clinical Relevance: Best practices for BPI care are difficult to establish because of the relative heterogeneity of neurologic injury, the unpredictable impact and recovery of the patient, and the substantial variability in physician approach to the care of these patients.
abstract_id: PUBMED:23824730
When the wind goes out of the sail - declining recovery expectations in the first weeks of back pain. Background: Expectations for recovery are a known predictor for returning to work. Most studies seem to conclude that the higher the expectancy the better the outcome. However, the development of expectations over time is rarely researched and experimental studies show that realistic expectations rather than high expectancies are the most adaptive. This study aims to explore patterns of stability and change in expectations for recovery during the first weeks of a back-pain episode and how these patterns relate to other psychological variables and outcome.
Methods: The study included 496 volunteer patients seeking treatment for work-related, acute back pain. The participants were measured with self-report scales of depression, fear of pain, life impact of pain, catastrophizing and expectations for recovery at two time points. A follow-up focusing on recovery and return to work was conducted 3 months later. A cluster analysis was conducted, categorizing the data on the trajectories of recovery expectations.
Results: Cluster analysis revealed four clusters regarding the development of expectations for recovery during a 2-week period after pain onset. Three out of four clusters showed stability in their expectations as well as corresponding levels of proximal psychological factors. The fourth cluster showed increases in distress and a decrease in expectations for recovery. This cluster also has poor odds ratios for returning to work and recovery.
Conclusion: Decreases in expectancies for recovery seem as important as baseline values in terms of outcome, which has clinical and theoretical implications.
Answer: Yes, recovery expectations can change over time. Studies have shown that while a considerable portion of patients maintain stable levels of recovery expectations, there are individuals whose expectations can either improve or decline over time. For instance, a study on patients with back pain found that nearly 80% of participants had stable expectation levels over a 3-month period, but approximately 15% showed a decrease in expectation levels, and the remainder experienced increasingly positive expectations (PUBMED:24913213). Similarly, a study on injured workers revealed that factors such as education level, social functioning, and injury severity could influence changes in recovery expectations over time (PUBMED:23179742). Another study on caregivers of patients with spinal cord injury indicated that unmet expectations for recovery were common at both three and six months post-injury, suggesting that expectations can persist or change as the recovery process unfolds (PUBMED:30129892). Additionally, a study on the early weeks of a back-pain episode found that decreases in recovery expectations were associated with poorer outcomes, including lower odds of returning to work and recovery (PUBMED:23824730). These findings suggest that recovery expectations are not static and can be influenced by various factors, including the progression of recovery, the individual's psychological state, and the severity of the injury or condition. |
Instruction: Is it possible to exclude a diagnosis of myocardial damage within six hours of admission to an emergency department?
Abstracts:
abstract_id: PUBMED:11509427
Is it possible to exclude a diagnosis of myocardial damage within six hours of admission to an emergency department? Diagnostic cohort study. Objective: To assess the clinical efficacy and accuracy of an emergency department based six hour rule-out protocol for myocardial damage.
Design: Diagnostic cohort study.
Setting: Emergency department of an inner city university hospital.
Participants: 383 consecutive patients aged over 25 years with chest pain of less than 12 hours' duration who were at low to moderate risk of acute myocardial infarction.
Intervention: Serial measurements of creatine kinase MB mass and continuous ST segment monitoring for six hours with 12 leads.
Main Outcome Measure: Performance of the diagnostic test against a gold standard consisting of either a 48 hour measurement of troponin T concentration or screening for myocardial infarction according to the World Health Organization's criteria.
Results: Outcome of the gold standard test was available for 292 patients. On the diagnostic test for the protocol, 53 patients had positive results and 239 patients had negative results. There were 18 false positive results and one false negative result. Sensitivity was 97.2% (95% confidence interval 95.0% to 99.0%), specificity 93.0% (90.0% to 96.0%), the negative predictive value 99.6%, and the positive predictive value 66.0%. The positive likelihood ratio was 13.9 and the negative likelihood ratio 0.03.
Conclusions: The six hour rule-out protocol for myocardial infarction is accurate and efficacious. It can be used in patients presenting to emergency departments with chest pain indicating a low to moderate risk of myocardial infarction.
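As a worked illustration derived from the counts quoted above (not stated separately in the abstract), the 53 positive and 239 negative protocol results, with 18 false positives and 1 false negative, imply 35 true positives and 238 true negatives. From these, sensitivity = 35/36 ≈ 97.2%, specificity = 238/256 ≈ 93.0%, negative predictive value = 238/239 ≈ 99.6%, positive predictive value = 35/53 ≈ 66.0%, positive likelihood ratio = 0.972/(1 − 0.930) ≈ 13.9 and negative likelihood ratio = (1 − 0.972)/0.930 ≈ 0.03, matching the figures reported.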
abstract_id: PUBMED:12661459
The troponin assay in a cardiac emergency unit: especially to exclude severe cardiac risk Objective: To determine the value in daily practice of a troponin assay for triage of patients with chest pain.
Design: Retrospective and descriptive.
Method: All patients in whom troponin T was measured at least six hours after the onset of complaints, during the first three months of 2001 in the cardiac emergency unit of the Isala Clinics, location Weezenlanden, Zwolle, the Netherlands, were included. Cardiac events occurring within 30 days after the troponin assay were recorded retrospectively.
Results: All 350 included patients were followed for 30 days. An elevated troponin level was found in 51 patients (15%). At 30 days, 27 of these 51 patients had had a myocardial infarction or had died. Apart from these 27 patients, a revascularisation procedure was performed in nine patients. In the remaining 15 patients with an elevated troponin level, another reason for myocardial damage was found. In 40 patients in whom the troponin assay was negative, coronary artery disease was diagnosed later. The negative predictive value for myocardial infarction or death within 30 days was 98%.
Conclusion: A troponin assay, performed six hours or more after the onset of cardiac symptoms, appears to be a safe method to exclude patients with severe coronary artery disease resulting in myocardial necrosis and an elevated risk of death. An elevated troponin level was always associated with myocardial damage, but not always with coronary artery disease. Therefore, there must be a clear indication for requesting a troponin assay, and one should always keep in mind that a normal troponin level does not exclude coronary artery disease.
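A back-of-the-envelope reading of these figures (an inference, not a number given in the abstract): with 51 of 350 patients showing an elevated troponin level, roughly 299 patients had a normal result, and a 98% negative predictive value for myocardial infarction or death at 30 days implies that about 293 of them remained event-free (0.98 × 299 ≈ 293), i.e. on the order of six events occurred despite a negative assay taken at least six hours after symptom onset.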
abstract_id: PUBMED:21448539
The use of cardiac marker bedside tests in acute coronary syndrome in the emergency department. Aims: The evaluation of the patient with chest pain in the emergency department is one of the most common situations the doctor has to face. The diagnostic work-up requires an observation period of at least 6-12 hours, well-organized medical facilities and the identification of all cases of acute coronary syndrome (ACS) in order to reduce inappropriate admissions.
Materials And Methods: In our study we estimated the utility of marker assays combined with the use of risk scores (the TIMI and GRACE risk scores) to obtain an indication of the most appropriate level of care. In particular, we used the assay of necrosis markers to highlight myocardial damage, together with the assay of natriuretic peptides for their role in the diagnosis and monitoring of patients with cardiac damage.
Results: C-reactive protein (CRP) also has an important role as a marker of plaque stability and of inflammation. Combined with the necrosis markers, these markers could provide important and independent clinical information.
Discussion: The sensitivity of laboratory markers is low in the absence of significant necrosis, and it is not possible to exclude an ACS within a short time. There is now an alternative strategy: early risk stratification. Using clinical criteria it is possible to make a first evaluation of the probability of ACS and of its complications.
abstract_id: PUBMED:30663291
DIRECT ADMISSION OF STEMI PATIENTS TO THE CARDIAC CARE UNIT VERSUS ADMISSION VIA THE EMERGENCY DEPARTMENT FOR PRIMARY CORONARY INTERVENTION IMPROVES SHORT AND LONG-TERM SURVIVAL Introduction: Shortening door-to-balloon time intervals in ST-elevation myocardial infarction (STEMI) patients treated by primary percutaneous coronary intervention (PPCI) is necessary in order to limit myocardial damage. Direct admission to the cardiac care unit (CCU) facilitates this goal. We compared characteristics and short- and long-term mortality of PPCI-treated STEMI patients admitted directly to the CCU with those admitted via the emergency department (ED).
Methods: We compared 303 patients admitted directly to the CCU (42%) with 427 admitted via the ED (58%), all included in the current registry comprising 730 consecutive PPCI-treated STEMI patients.
Results: Groups were similar regarding demographics, medical history and risk factors. Pain-to-CCU time was 151±164 minutes (median 94) for patients admitted directly and 242±226 minutes (median 160) for those admitted via the ED, while door-to-balloon intervals were 69±42 minutes (median 61) and 133±102 minutes (median 111), respectively. LVEF evaluated during admission (48.3±13% [47.5%] vs. 47.7±13.7% [47.5%]) and mean CK level (893±1157 [527] vs. 891±1255 [507], p=0.45) were similar between groups. Mortality was 4.2% vs. 10.3% at 30 days (p<0.002), 7.6% and 14.3% at one year (p<0.01), reaching 12.2% and 21.9% at 3.9±2.3 years (median 3.5, p<0.004) among directly admitted patients vs. those admitted via the ED, respectively. Long-term mortality was 4.1%, 9.4%, 21.4%, and 16% for pain-to-balloon quartiles of <140 min, 141-207 min, 208-330 min, and >330 min, respectively (p=0.026).
Conclusions: Direct admission of STEMI patients to the CCU for PPCI facilitated the attainment of guidelines-dictated door-to-balloon time intervals and yielded improved short- and long-term mortality. Longer pain-to-balloon time was associated with higher long-term mortality.
abstract_id: PUBMED:11911914
Prognosis and risk indicators of death during a period of 10 years for women admitted to the emergency department with a suspected acute coronary syndrome. Aim: To describe the 10-year prognosis and risk indicators of death in women admitted to the emergency department with acute chest pain or other symptoms raising a suspicion of acute myocardial infarction (AMI). Particular interest was paid to women of ≤75 years of age surviving 1 month after admission, who were judged to have suffered a possible or confirmed acute ischemic event with signs of either minor or no myocardial damage.
Patients: All women admitted to the emergency department at Sahlgrenska University Hospital, Göteborg, during a period of 21 months, due to acute chest pain or other symptoms raising a suspicion of AMI.
Methods: All the women were followed prospectively for 10 years. The subset described previously underwent a bicycle exercise tolerance test and metabolic screening 3 and 4 weeks, respectively, after admission to the emergency department.
Results: In all, 5362 patients were admitted to the emergency department on 7157 occasions during the time of the survey and 2387 (45%) of them were women. Of these women, 61% were hospitalised and 39% were sent home directly. The overall 10-year mortality for women was 42.5% (55.5% among those hospitalised and 21.8% among those not hospitalised). Of the variables recorded at the emergency department, the following were independently associated with 10-year mortality: age, history of angina pectoris, history of hypertension, history of diabetes, history of congestive heart failure, pathological ECG on admission, degree of initial suspicion of AMI on admission, symptoms of congestive heart failure on admission and other non-specific symptoms on admission. The majority of these risk factors were more markedly associated with prognosis in women discharged directly from the emergency department than in those hospitalised. In the subset aged ≤75 years defined above (n=241), the following were independent predictors of death: a history of AMI and working capacity in a bicycle exercise tolerance test.
Conclusion: Among women admitted to hospital due to chest pain or other symptoms raising a suspicion of AMI, 42.5% had died after 10 years. Major risk indicators of death were age, history of cardiovascular disease, pathological ECG on admission and symptoms of congestive heart failure on admission. In women presenting with an acute coronary syndrome but minimal myocardial damage, a history of AMI and working capacity in a bicycle exercise tolerance test were independent predictors of death.
abstract_id: PUBMED:11719119
Clinical application of rapid quantitative determination of cardiac troponin-T in an emergency department setting. Objectives: We analysed the clinical use of Troponin-T compared to creatine kinase MB in a non-trauma emergency department setting.
Background: A newly established single-specimen quantitative Troponin T assay allows the clinical application of this parameter. Methods: Five hundred Troponin T tests were provided for use by emergency physicians, who could combine them with the routine laboratory tests without restriction as to the indication or number of tests per patient. The number of tests per patient, time frame, final diagnosis and additional clinical information gained were recorded. All patients were followed for at least 6 months to verify the diagnosis and to assess the occurrence of cardiac events (nonfatal AMI or cardiac death). The ability of Troponin T and creatine kinase MB tests to predict cardiac events within 6 months was compared.
Results: The 500 Troponin T tests were used in 249 patients (median two tests per patient; range 1-5) within 41 days. The final diagnosis revealed coronary heart disease in 85, non-coronary heart disease in 39, non-cardiac chest pain in 86 and other diagnoses in 39 of the patients. In 14 patients with an elevated creatine kinase MB, myocardial damage could safely be ruled out by a negative Troponin T; in six patients with a normal creatine kinase MB, minor myocardial damage could be detected by a positive Troponin T. During follow-up 28 cardiac events were recorded. Troponin T had a significantly higher specificity, positive predictive value and proportion of correct predictions for cardiac events within 6 months compared with creatine kinase MB.
Conclusions: Troponin T has proved to be a useful method for diagnosing myocardial damage in routine clinical use in the non-trauma emergency department.
abstract_id: PUBMED:33345559
Direct Admission of Patients With ST-Segment-Elevation Myocardial Infarction to the Catheterization Laboratory Shortens Pain-to-Balloon and Door-to-Balloon Time Intervals but Only the Pain-to-Balloon Interval Impacts Short- and Long-Term Mortality. Background Shortening the pain-to-balloon (P2B) and door-to-balloon (D2B) intervals in patients with ST-segment-elevation myocardial infarction (STEMI) treated by primary percutaneous coronary intervention (PPCI) is essential in order to limit myocardial damage. We investigated whether direct admission of PPCI-treated patients with STEMI to the catheterization laboratory, bypassing the emergency department, expedites reperfusion and improves prognosis. Methods and Results Consecutive PPCI-treated patients with STEMI included in the ACSIS (Acute Coronary Syndrome in Israel Survey), a prospective nationwide multicenter registry, were divided into patients admitted directly or via the emergency department. The impact of the P2B and D2B intervals on mortality was compared between groups by logistic regression and propensity score matching. Of the 4839 PPCI-treated patients with STEMI, 1174 were admitted directly and 3665 via the emergency department. Respective median P2B and D2B were shorter among the directly admitted patients with STEMI (160 and 35 minutes) compared with those admitted via the emergency department (210 and 75 minutes, P<0.001). Decreased mortality was observed with direct admission at 1 and 2 years and at the end of follow-up (median 6.4 years, P<0.001). Survival advantage persisted after adjustment by logistic regression and propensity matching. P2B, but not D2B, impacted survival (P<0.001). Conclusions Direct admission of PPCI-treated patients with STEMI decreased mortality by shortening P2B and D2B intervals considerably. However, P2B, but not D2B, impacted mortality. It seems that the D2B interval has reached its limit of effect. Thus, all efforts should be extended to shorten P2B by educating the public to activate early the emergency medical services to bypass the emergency department and allow timely PPCI for the best outcome.
abstract_id: PUBMED:10864545
Prospective audit of incidence of prognostically important myocardial damage in patients discharged from emergency department. Objective: To assess the incidence of prognostically important myocardial damage in patients with chest pain discharged from the emergency department.
Design: Prospective observational study.
Setting: District general hospital emergency department.
Participants: 110 patients presenting with chest pain of unknown cause who were subsequently discharged home after cardiac causes of chest pain were ruled out by clinical and electrocardiographic investigation.
Interventions: Patients were reviewed 12-48 hours after presentation by repeat electrocardiography and measurement of cardiac troponin T.
Main Outcome Measures: Incidence of missed myocardial damage.
Results: Eight (7%) patients had detectable cardiac troponin T on review and seven had concentrations ≥0.1 µg/l. The repeat electrocardiogram showed no abnormality in any patient.
Conclusion: 6% of the patients discharged from the emergency department had missed prognostically important myocardial damage. Follow up measurement of cardiac troponin T allows convenient audit of clinical performance in the emergency department.
abstract_id: PUBMED:12204983
ROMEO: a rapid rule out strategy for low risk chest pain. Does it work in a UK emergency department? Aims: To examine the feasibility of using the ROMEO (rule out myocardial events on "obs" ward) pathway for low risk patients with chest pain in a UK emergency department.
Methods: A prospective study was undertaken to determine outcomes for the first 100 patients entering the pathway (from May to Oct 1999). Serum troponin levels, serial ECG recordings, exercise test result, total length of stay, and final diagnoses were reviewed. Patients were telephoned after discharge to inquire about persisting or recurrent pain, and further investigations after completing the ROMEO pathway.
Results: 82 of 100 (82%) had myocardial damage excluded by serum troponin assay. Sixty-two of these 82 (76%) completed exercise tolerance testing (ETT). Fifty-seven of 62 (92%) ETTs were negative. Twenty of 82 (26%) did not undergo ETT because of mobility problems, a recent ETT, or because the probability of cardiac pain was considered very low on consultant review. Five of 100 (5%) had an increased initial troponin and five of 100 (5%) had an increased 12-hour troponin. These patients were referred for admission under the general physicians. Seven of 100 (7%) were referred for other reasons (late ECG changes, continuing or worsening pain). One patient self-discharged. Length of stay varied because of changes to arrangements for ETT. The median time for all patients over the period studied was 23 hours. All patients were discharged within an hour of a negative ETT. Follow-up results: 67 of 74 (91%) eligible patients were contacted by telephone. Forty-six of 67 (69%) had no further pain, attendances, or GP consultations. Six of 67 (9%) had further cardiological investigation or treatment.
Conclusions: A rapid rule out strategy such as the ROMEO pathway is feasible in the UK healthcare setting and provides standardised and consistent evaluation.
abstract_id: PUBMED:9623165
Early exclusion of ischemic myocardial damage made possible by new biochemical cardiac markers New immunochemical assay methods for the cardiac markers creatine kinase (CK) MB mass, myoglobin, troponin T (TnT), troponin I (TnI), and fatty acid binding protein (FABP) necessitate a re-evaluation of their usefulness in the early diagnosis of myocardial damage. Cardiac markers play an important part in the exclusion of myocardial damage in patients with chest pain and an inconclusive ECG at admission. A serial CK-MB mass determination is particularly suitable for this exclusion of myocardial damage. The sensitivity of TnT shortly after an infarction is comparable with that of CK-MB mass; a single determination on admission is insufficient. The marker myoglobin is of limited value owing to the brief duration of the myoglobin rise and insufficient heart specificity. An increased TnT or TnI value in patients with unstable angina pectoris is a prognostically negative sign. It is still not clear how this prognosis can be improved. A negative troponin finding appears not to exclude early complications. The Department of Emergency Cardiac Care of the Academic Medical Centre, Amsterdam, the Netherlands, currently prefers the serial measurement of the CK-MB mass, possibly to be supplemented in the future by a troponin determination.
Answer: Yes, it is possible to exclude a diagnosis of myocardial damage within six hours of admission to an emergency department. A study assessing the clinical efficacy and accuracy of an emergency department-based six-hour rule-out protocol for myocardial damage found that the protocol had a high sensitivity of 97.2% and a negative predictive value of 99.6%, indicating that it can accurately exclude myocardial infarction in patients presenting with chest pain at low to moderate risk of myocardial infarction (PUBMED:11509427). Another study showed that a troponin assay performed six hours or more after the onset of cardiac symptoms is a safe method to exclude patients with severe coronary artery disease resulting in myocardial necrosis and an elevated risk of death, with a negative predictive value for myocardial infarction or death within 30 days of 98% (PUBMED:12661459). Additionally, a prospective observational study found that 6% of patients discharged from the emergency department had missed prognostically important myocardial damage, but follow-up measurement of cardiac troponin T allows for the convenient audit of clinical performance in the emergency department (PUBMED:10864545). These findings suggest that with the appropriate use of diagnostic tests such as serial measurements of creatine kinase MB mass, continuous ST segment monitoring, and troponin assays, it is feasible to rule out myocardial damage within a six-hour window in the emergency department setting.
Instruction: Does managed care restrictiveness affect the perceived quality of primary care?
Abstracts:
abstract_id: PUBMED:12224673
Does managed care restrictiveness affect the perceived quality of primary care? A report from ASPN. Ambulatory Sentinel Practice Network. Background: The competitive managed care marketplace is causing increased restrictiveness in the structure of health plans. The effect of plan restrictiveness on the delivery of primary care is unknown. Our purpose was to examine the association of the organizational and financial restrictiveness of managed care plans with important elements of primary care, the patient-clinician relationship, and patient satisfaction.
Methods: We conducted a cross-sectional study of 15 member practices of the Ambulatory Sentinel Practice Network selected to represent diverse health care markets. Each practice completed a Managed Care Survey to characterize the degree of organizational and financial restrictiveness for each individual health care plan. A total of 199 managed care plans were characterized. Then, 1475 consecutive outpatients completed a patient survey that included: the Components of Primary Care Instrument as a measure of attributes of primary care; a measure of the amount of inconvenience involved with using the health care plan; and the Medical Outcomes Study Visit Rating Form for assessing patient satisfaction.
Results: Clinicians' reports of inconvenience were significantly associated (P < .001) with the financial and organizational restrictiveness scores of the plan. There was no association between plan restrictiveness and patient report of multiple aspects of the delivery of primary care or patient satisfaction with the visit.
Conclusions: Plan restrictiveness is associated with greater perceived hassle for clinicians but not for patients. Plan restrictiveness seems to be creating great pressures for clinicians, but is not affecting patients' reports of the quality of important attributes of primary care or satisfaction with the visit. Physicians and their staffs appear to be buffering patients from the potentially negative effects of plan restrictiveness.
abstract_id: PUBMED:8723812
Managed care, primary care, and quality for children. In an effort to provide medical care that is both more effective and less costly, the new variants of managed care organizations have instituted a variety of incentives and administrative controls that impact on the types and quantity of care provided to patients. Evidence suggests that the early forms of managed care, namely prepaid group practices, showed particular promise in improving the primary care delivered to children, ie, care that is accessible, person-focused in the long term, comprehensive, coordinated, and oriented toward achieving better outcomes. However, recent evidence concerning the quality of care delivered to children in the newer variants of managed care is mixed and scant; the newer organizational forms may not facilitate and may even have a negative impact on the attainment of primary care. Managed care can have a positive effect on first contact care, because it contractually defines a primary care provider and reduces use of the emergency room as a source of care. It may, however, have mixed effects on other aspects of access and use, depending on the plan's particular characteristics. Longitudinality is threatened by the disruption of prior relationships with out-of-plan providers and by the instability of both enrollees and providers in managed care plans. Children's benefits in managed care arrangements tend to include more preventive services, but access to specialty services has generally been found to be more restrictive. Coordination of care is not inherent to managed care, and many plans are no more likely to foster communication than are traditional indemnity plans. Evidence for the superior clinical quality afforded to children by new variants of managed care is lacking. Because managed care arrangements are proliferating rapidly, better studies are needed to prove or refute the contention that managed care has a significant positive effect on quality of care.
abstract_id: PUBMED:7941522
Primary and managed care. Ingredients for health care reform. The use of primary and managed care is likely to increase under proposed federal health care reform. I review the definition of primary care and primary care physicians and show that this delivery model can affect access to medical care, the cost of treatment, and the quality of services. Because the use of primary care is often greater in managed care than in fee-for-service, I compare the two insurance systems to further understand the delivery of primary care. Research suggests that primary care can help meet the goal of providing accessible, cost-effective, and high-quality care, but that changes in medical education and marketplace incentives will be needed to encourage students and trained physicians to enter this field.
abstract_id: PUBMED:12891474
Opportunities and risks of managed care Aim: The present paper aims at analysing and discussing managed care with its opportunities and risks. This analysis should be a further basis for a reasonable discussion concerning the implementation of managed care elements in the German Health Care system.
Method: On the basis of an updated literature review, the relevant international experiences with managed care in several health care systems--especially in the United States and Switzerland--are analysed and described. The most relevant opportunities and risks of managed care are deduced from this analysis.
Results: The most important opportunities of managed care are the stabilisation of health care costs, an improvement of health care processes and quality, and a stronger consideration of preventive measures. The possibility of choosing between several health care models and more favourable health insurance fees are opportunities for patients. Relevant risks of managed care include the potential withholding of medical care, which makes comprehensive quality assurance necessary, and negative influences on physicians' autonomy. Furthermore, managed care may have negative effects on the relationship between patients and physicians or between general practitioners and medical specialists.
Conclusions: Managed care has proven advantages with respect to cost stabilisation and quality improvement compared with traditional health care systems. If the risks and known problems of managed care are recognised and avoided, the available opportunities could be an important option for reforming the German health care system with respect to costs and quality.
abstract_id: PUBMED:10575393
Quality of care for primary care patients with depression in managed care. Objective: To evaluate the process and quality of care for primary care patients with depression under managed care organizations.
Method: Surveys of 1204 outpatients with depression at the time of and after a visit to 1 of 181 primary care clinicians from 46 primary care clinics in 7 managed care organizations. Patients had depressive symptoms in the previous 30 days, with or without a 12-month depressive disorder by diagnostic interview. Process indicators were depression counseling, mental health referral, or psychotropic medication management at index visit and the use of appropriate antidepressant medication during the last 6 months.
Results: Of patients with depressive disorder and recent symptoms, 29% to 43% reported a depression-specific process of care in the index visit, and 35% to 42% used antidepressant medication in appropriate dosages in the prior 6 months. Patients with depressive disorders rather than symptoms only and those with comorbid anxiety had higher rates of depression-specific processes and quality of care (P < .005). Recurrent depression, suicidal ideation, and alcohol abuse were not uniquely associated with such rates. Patients visiting for old problems or checkups received more depression-specific care than those with new problems or unscheduled visits. The 7 managed care organizations varied by a factor of 2-fold in rates of depression counseling and appropriate anti-depressant use.
Conclusions: Rates of process and quality of care for depression as reported by patients are moderate to low in managed primary care practices. Such rates are higher for patients with more severe forms of depression or with comorbid anxiety, but not for those with severe but "silent" symptoms like suicide ideation. Visit context factors, such as whether the visit is scheduled, affect rates of depression-specific care. Rates of care for depression are highly variable among managed care organizations, emphasizing the need for process monitoring and quality improvement for depression at the organizational level.
abstract_id: PUBMED:15643028
Managed care and patient-rated quality of care from primary physicians. The aim is to determine the associations between managed care controls and patient-rated quality of care from primary physicians. In a prospective cohort study, 17,187 patients were screened in the waiting rooms of 261 primary care physicians in the Seattle metropolitan area (1996-1997) to identify 2,850 English-speaking adult patients with depressive symptoms and/or selected pain problems. Patients completed 6-month follow-ups to rate the quality of care from their primary physicians. The intensity of managed care was measured for each patient's health plan, primary care office, and physician. Regression analyses revealed that patients in more managed plans and offices had lower ratings of the quality of care from their primary physicians. Managed care controls targeting physicians were generally not associated with patient ratings.
abstract_id: PUBMED:12050942
Managed care and the under-privileged in the United States It has been said that "vulnerable populations" (elderly people, chronically ill patients,...) have been strongly affected by managed care. The following article sheds light on the quality of care delivered to Medicaid patients under managed care. After a short description of the US managed care experience, the specific issues and difficulties (quality, adverse selection, non-compliance...) facing Medicaid patients under capitation plans are described.
abstract_id: PUBMED:11400398
Managed care, control of costs and the patient's health The following article reviews the main historical developments of managed care in the United States of America. We also try to gain a better understanding of the main ethical issues raised by physicians in the managed care context, issues that have caused considerable concern among physicians. Furthermore, we describe some of the mechanisms that have been developed by HMOs to reduce the costs of care in the U.S.A. Finally, we examine the side effects of the measures adopted to reduce the cost of care and how they affect the quality of care delivered to patients.
abstract_id: PUBMED:9816764
Do we really want managed care? In spite of the mounting criticism levelled at Managed Care, it is still being discussed in Germany as a promising concept expected to improve both the quality as well as the cost effectiveness of health care. However, this discussion focuses largely on the theoretical advantages of Managed Care in comparison with the German health care system. To examine whether these advantages of Managed Care are actually realised once the concept is broadly implemented, we analysed the contemporary effects of managed care on patients and physicians in the US. This revealed that while Managed Care has for the time being slowed the rise in health services expenditures, major shortcomings of the concept are evident: Patients express a loss of trust in the health care system and complain about decreased choice, poor continuity of care, and persistent high personal health care cost. Physicians complain about increasing financial risks imposed on them, about curtailed therapeutic freedom, and generally the way they must now conduct their medical practice. Law-makers, in turn, react to the defects of the Managed Care industry by imposing increasing external controls and regulations. All in all, Managed Care as implemented by many of the US systems does not appear to fulfill the positive expectations of the two principal stakeholders--patients and physicians. Instead, a large number of side effects are in evidence that appear to make a transfer of the American concept to the German health care system far from desirable.
abstract_id: PUBMED:9444812
The provision of primary care: does managed care make a difference? Drawing on a nationally representative survey of physicians, this paper examines the extent to which physicians split their time between primary care and non-primary care activities and whether managed care makes a difference. We find that managed care does matter: Physicians have been narrowing their scope of practice, and the trend is linked to their involvement with managed care plans. These findings have implications for workforce policy and training, regulation of managed care contracting, and the quality of care.
Answer: The perceived quality of primary care in the context of managed care restrictiveness has been examined in various studies, with mixed results. According to a cross-sectional study conducted by the Ambulatory Sentinel Practice Network, clinicians reported significant inconvenience associated with the financial and organizational restrictiveness of managed care plans, but there was no association between plan restrictiveness and patient reports of the delivery of primary care or patient satisfaction with the visit. This suggests that while clinicians feel the pressure of plan restrictiveness, patients do not perceive a decline in the quality of important attributes of primary care or satisfaction with their visits, possibly because physicians and their staffs are buffering patients from the negative effects of plan restrictiveness (PUBMED:12224673).
However, other research indicates that managed care can have mixed effects on the quality of primary care. For example, managed care may positively affect first contact care by defining a primary care provider and reducing emergency room use, but it may also threaten longitudinal care due to disruptions in provider relationships and plan instability. Access to specialty services in managed care is generally more restrictive, and coordination of care is not inherently better than in traditional indemnity plans. The evidence for superior clinical quality for children in newer variants of managed care is lacking, and better studies are needed to confirm or refute the positive effects of managed care on quality of care (PUBMED:8723812).
Furthermore, a study on primary care patients with depression in managed care organizations found moderate to low rates of process and quality of care as reported by patients. The variability in care for depression among managed care organizations highlights the need for process monitoring and quality improvement at the organizational level (PUBMED:10575393).
In summary, while managed care restrictiveness may not directly affect patients' perceptions of primary care quality, it does create challenges for clinicians and may have mixed effects on various aspects of primary care quality. The evidence suggests that the impact of managed care on the perceived quality of primary care is complex and may vary depending on specific patient populations, conditions, and organizational practices. |
Instruction: Is visual field constriction in epilepsy patients treated with vigabatrin reversible?
Abstracts:
abstract_id: PUBMED:12195456
Is visual field constriction in epilepsy patients treated with vigabatrin reversible? Objective: To evaluate the reversibility of vigabatrin associated visual field constriction.
Background: Visual field constriction (VFC) occurs in approximately 40 % of epilepsy patients under treatment with vigabatrin (VGB). There is still controversy about whether VGB-associated VFC is reversible. From a cross-sectional study there is evidence that VFC does not reverse three to six months after stopping VGB treatment. So far, there are no long term studies on this subject.
Methods: We performed a follow-up study on 15 epilepsy patients (eight women, seven men, median age 45 (21-58) years) with VGB-associated VFC but an otherwise normal ophthalmological examination. Kinetic and static perimetry was performed one and two years after VFC was diagnosed (baseline examination). Visual field size at the first- and second-year follow-up was compared with the baseline examination. Because discontinuation of VGB treatment was dependent on clinical needs, patients stopped VGB treatment either before or after VFC was diagnosed. In a small group of patients, VGB treatment was continued despite the VFC.
Results: There was no statistically significant difference in visual field size comparing baseline values with first year and second year follow-up examinations either in patients who stopped VGB treatment (n = 11) or in patients who continued VGB treatment on a reduced dosage (n = 4).
Conclusion: Although our data are based on a relatively small group of patients, there is evidence that VFC resulting from VGB treatment is not reversible in epilepsy patients after stopping the drug.
abstract_id: PUBMED:10880291
Recovery of visual field constriction following discontinuation of vigabatrin. Epilepsy patients treated with vigabatrin may develop symptomatic or asymptomatic concentric visual field constriction due to GABA-associated retinal dysfunction. The prevalence and course of this side effect are not established yet; in previously reported adult patients the visual disturbances seem to be irreversible. We present two patients with a significant improvement of visual field constriction and retinal function after the discontinuation of vigabatrin. These findings suggest that vigabatrin-associated retinal changes are at least partly reversible in some patients, and that these patients may benefit significantly from a withdrawal of vigabatrin. Larger scale clinical studies are needed to identify predictive factors both for the occurrence and reversibility of vigabatrin-associated visual field defects.
abstract_id: PUBMED:11077455
Electro-oculography, electroretinography, visual evoked potentials, and multifocal electroretinography in patients with vigabatrin-attributed visual field constriction. Purpose: Symptomatic visual field constriction thought to be associated with vigabatrin has been reported. The current study investigated the visual fields and visual electrophysiology of eight patients with known vigabatrin-attributed visual field loss, three of whom were reported previously. Six of the patients were no longer receiving vigabatrin.
Methods: The central and peripheral fields were examined with the Humphrey Visual Field Analyzer. Full visual electrophysiology, including flash electroretinography (ERG), pattern electroretinography, multifocal ERG using the VERIS system, electro-oculography, and flash and pattern visual evoked potentials, was undertaken.
Results: Seven patients showed marked visual field constriction with some sparing of the temporal visual field. The eighth exhibited concentric constriction. Most electrophysiological responses were usually just within normal limits; two patients had subnormal Arden electro-oculography indices; and one patient showed an abnormally delayed photopic b wave. However, five patients showed delayed 30-Hz flicker b waves, and seven patients showed delayed oscillatory potentials. Multifocal ERG showed abnormalities that sometimes correlated with the visual field appearance and confirmed that the deficit occurs at the retinal level.
Conclusion: Marked visual field constriction appears to be associated with vigabatrin therapy. The field defects and some electrophysiological abnormalities persist when vigabatrin therapy is withdrawn.
abstract_id: PUBMED:11503799
Visual field constriction on vigabatrin. (1) Vigabatrin carries a high risk of concentric visual field constriction, sometimes associated with a drop in visual acuity. The changes in the visual field appear to be irreversible. (2) Consequently, vigabatrin can be considered only as a last resort for infantile spasms refractory to steroid therapy and for partial epilepsy refractory to other anticonvulsants. (3) Patients treated with vigabatrin must have their visual fields monitored regularly.
abstract_id: PUBMED:11967655
Visual field constriction in epilepsy patients treated with vigabatrin and other antiepileptic drugs: a prospective study. Background: Visual field constriction (VFC) has been described in about 30 % to 50 % of patients treated with the antiepileptic drug (AED) Vigabatrin (GVG). The exact incidence of VFC related to GVG exposure is unknown. Risk factors other than medication have not been identified as yet, and it is unclear whether the occurrence of VFC is restricted to the use of GVG.
Methods: In a longitudinal study, we investigated 60 epilepsy patients who received GVG and other AEDs. Patients underwent full ophthalmological examination including perimetry.
Results: 16 of 60 patients exposed to different AEDs developed VFC, which was judged as clinically relevant by an experienced neuro-ophthalmologist. VFC was observed significantly more often in patients treated with GVG as add-on therapy or monotherapy than in patients who had never been exposed to GVG (13/29 versus 3/31). Within the subgroup of 23 patients who received GVG as add-on therapy, those who developed VFC had been exposed to GVG for significantly longer than patients without VFC. The only non-treatment-related feature associated with VFC was older age. Type and severity of epilepsy or type and number of concomitant AEDs were not related to the occurrence of VFC.
Conclusions: The findings of an overrepresentation of VFC in patients receiving GVG and of a correlation between duration of GVG treatment and occurrence of VFC support the causal role of GVG treatment in the development of VFC. Old age is a possible risk factor for the development of VFC associated with GVG in epilepsy patients.
abstract_id: PUBMED:11015531
Visual field constriction in children with epilepsy on vigabatrin treatment. Vigabatrin is considered the drug of choice for infantile spasms and simple and complex partial epilepsy in childhood. Its mechanism of action relies on the irreversible inhibition of gamma-aminobutyric acid (GABA) transaminase. Since June 1997 several articles have been published reporting visual field constriction in adult patients on vigabatrin therapy. Recently, 7 pediatric patients, 1 on vigabatrin monotherapy and 6 on add-on therapy with visual field constriction have been described. We have observed 30 pediatric patients with epilepsy (14 boys and 16 girls), ages ranging from 4 to 20 years (mean: 11 years and 2 months) treated with vigabatrin for infantile spasms, simple and complex partial epilepsy, who had never complained of ophthalmologic disturbances. Twenty-one patients underwent complete routine ophthalmologic examination (fundus oculi, visual acuity, intraocular pressure, and visual field tests); 9 children (<6 years old) underwent only fundus examination, because collaboration was lacking. We report on 4 children showing constriction of visual field, prevailing in nasal hemifield. In 1 child, visual abnormalities were stable even 10 months after vigabatrin discontinuation, while in another a greater improvement was observed 5 months after discontinuation. The possible mechanisms have been discussed and the cone dysfunction, connected with GABA augmentation in the outer retina, has been outlined. We suggest a possible protocol to control visual abnormalities in epileptic children.
abstract_id: PUBMED:18188629
Full-field ERG and visual fields in patients 5 years after discontinuing vigabatrin therapy. Purpose: In numerous studies vigabatrin medication has been associated with visual field constriction and alterations in the full-field electroretinogram (ff-ERG), but it is not clear whether these changes are reversible or not. The purpose of this study was to examine patients with visual field loss and reduced ff-ERG several years after discontinuing vigabatrin therapy, in order to investigate reversibility.
Methods: Eight patients with visual field constriction and reduced cone responses measured by 30 Hz flicker ERG were examined with Goldmann perimetry and ff-ERG 4-6 years after discontinuing medication. The results were compared with investigations conducted during medication, 4-6 years previously. Statistical analysis was also used to compare the ff-ERG results of the patients, during treatment and at follow-up, with a group of 70 healthy subjects.
Results: Visual field constriction remained 4-6 years after discontinuing vigabatrin therapy. The amplitude of the 30 Hz flicker response also remained reduced on follow-up both compared with the results during treatment and with the control group. Moreover, the amplitude of the isolated rod response and the combined rod-cone response were decreased in the patients compared with the control group, during vigabatrin treatment as well as on follow-up. On follow-up, oscillatory potentials (OPs) were also registered, showing reduced amplitudes in patients compared with controls. The within-subject comparison showed no significant changes.
Conclusion: Vigabatrin attributed visual field constriction and reduced ff-ERG responses remain several years after discontinuing vigabatrin therapy, indicating drug-induced permanent retinal damage.
abstract_id: PUBMED:12631018
Vigabatrin-associated visual field constriction in a longitudinal series. Reversibility suggested after drug withdrawal. Purpose: To evaluate through a longitudinal study the effects on visual fields of long-term vigabatrin medication in patients with partial epilepsy and to discuss visual field screening strategies.
Methods: A total of 26 patients aged 14-68 years with a mean history of vigabatrin medication of 8.5 years (range 2-14 years) were followed by manual kinetic Goldmann perimetry (objects IV,4 and I,4) for 6-26 months (mean value 12.3 months). At time zero and at follow-up, each patient was assigned a "pooled" averaged value, as a linear percentage of normal isopter position, for the two objects as tested nasally and temporally in the five most horizontal meridians on the Goldmann chart. Twelve eyes from nine adults (age 24-60 years) served as controls.
Results: Constrictions were recorded in 24 of 26 patients at baseline. Averaged isopters ranged from 8% to 96% of the controls' averaged isopter positions. Median values of 71.5% and 60.5% for large and small objects, respectively, indicated that the smaller object was more sensitive to visual field constriction. There was no difference in the degree of constriction between nasal and temporal hemifields. Significant improvement in the visual field (mean gain 13.6% units) was seen in the eight patients who underwent full drug withdrawal. No similar improvement was seen in the 12 patients still on full dose or the six with reduced intake.
Conclusions: Most Danish patients on long-term vigabatrin medication have suffered some visual field loss. Contrary to most clinical evidence so far, the present follow-up study indicates some reversibility of visual field loss after drug withdrawal. Kinetic Goldmann perimetry appears to be a fair alternative to computerized static perimetry techniques for screening and following vigabatrin-treated patients.
abstract_id: PUBMED:12102679
Visual field constriction in 91 Finnish children treated with vigabatrin. Purpose: To study the prevalence and features of visual field constrictions (VFCs) associated with vigabatrin (VGB) in children.
Methods: A systematic collection of all children with any history of VGB treatment in fifteen Finnish neuropediatric units was performed, and children were included after being able to cooperate reliably in repeated visual field tests by Goldmann kinetic perimetry. This inclusion criterion yielded 91 children (45 boys; 46 girls) between ages 5.6 and 17.9 years. Visual field extent <70 degrees in the temporal meridian was considered abnormal VFC.
Results: There was a notable variation in visual field extents between successive test sessions and between different individuals. VFCs <70 degrees were found in repeated test sessions in 17 (18.7%) of 91 children. There was no difference in the ages at the study, the ages at the beginning of treatment, the total duration of the treatment, general cognitive performance, or neuroradiologic findings between the patients with normal visual fields and those with VFC, but the patients with VFC had received a higher total dose of VGB. In linear regression analysis, there were statistically significant inverse correlations between the temporal extent of the visual fields and the total dose and the duration of VGB treatment. The shortest duration of VGB treatment associated with VFC was 15 months, and the lowest total dose 914 g.
Conclusions: Because of a wide variation in normal visual-field test results in children, the prevalence figures of VFCs are highly dependent on the definition of normality. Although our results confirm the previous findings that VFC may occur in children treated with VGB, our study points out the need to reevaluate critically any suspected VFC to avoid misdiagnosis. Nevertheless, our study suggests that the prevalence of VFC may be lower in children than in adults, and that the cumulative dose of VGB or length of VGB therapy may add to the personal predisposition for developing VFC.
abstract_id: PUBMED:11944216
Visual field disturbances in epileptic patients treated with vigabatrin (sevril) Vigabatrin, one of the newer anti-epileptic drugs (AED), whose effect is mediated via elevated levels of brain GABA, has proved to be effective in drug resistant partial seizures and infantile spasms. Recently, visual field constriction was found in up to 30% of adults, whereas information for the pediatric age group is sparse. We examined 24 visually-asymptomatic children, ages 3.5-18 years, treated for 3.01.6 years at doses of 25-90 mg/kg. These children underwent an ophthalmologic examination, visual evoked potentials, electroretinogram and when possible, perimetry. Over half of the children had at least one abnormal test and 11/17, who were able to undergo perimetric studies, had symmetrical, nasal visual field constriction. In view of the gravity and prevalence of visual field constriction, use of the AED vigabatrin should be weighed against its clinical benefits.
Answer: The reversibility of visual field constriction (VFC) in epilepsy patients treated with vigabatrin (VGB) is a subject of some debate and appears to vary among individuals. Some studies suggest that VFC associated with VGB treatment is not reversible after stopping the drug. A follow-up study on 15 epilepsy patients with VGB-associated VFC found no statistically significant difference in visual field size comparing baseline values with first and second-year follow-up examinations, regardless of whether patients stopped or continued VGB treatment on a reduced dosage (PUBMED:12195456). Another study also reported that the changes in the visual field appear to be irreversible (PUBMED:11503799).
However, there are reports of patients experiencing significant improvement in visual field constriction and retinal function after discontinuing VGB, suggesting that VGB-associated retinal changes are at least partly reversible in some patients (PUBMED:10880291). A longitudinal study indicated some reversibility of visual field loss after drug withdrawal, with significant improvement seen in patients who underwent full drug withdrawal (PUBMED:12631018).
Despite these findings, other studies have shown that VFC and reduced full-field electroretinogram (ff-ERG) responses remain several years after discontinuing VGB therapy, indicating possible permanent retinal damage (PUBMED:18188629). In children, there is also evidence of VFC persisting after VGB discontinuation, although some improvement was observed in one case (PUBMED:11015531).
In conclusion, while there is evidence that VFC resulting from VGB treatment may not be reversible in some patients, there are also cases where partial or significant reversibility has been observed after discontinuation of the drug. The variability in outcomes suggests that individual factors may play a role in the potential for recovery, and larger scale clinical studies are needed to identify predictive factors for the occurrence and reversibility of VGB-associated visual field defects (PUBMED:10880291). |
Instruction: Are breast cancer subtypes prognostic for nodal involvement and associated with clinicopathologic features at presentation in early-stage breast cancer?
Abstracts:
abstract_id: PUBMED:23661183
Are breast cancer subtypes prognostic for nodal involvement and associated with clinicopathologic features at presentation in early-stage breast cancer? Background: Breast cancer subtypes (BCS) determined from immunohistochemical staining have been correlated with molecular subtypes and associated with prognosis and outcomes, but there are limited data correlating these BCS and axillary node involvement. This study was conducted to assess whether BCS predicted for nodal metastasis or was associated with other clinicopathologic features at presentation.
Methods: Patients with stage I/II disease who underwent breast-conserving surgery and axillary surgical assessment, and who had available tissue blocks, underwent an institutional pathological review and construction of a tissue microarray. The slides were stained for estrogen receptor, progesterone receptor, and HER-2/neu (HER-2) for classification into BCS. Nodal involvement and other clinicopathologic features were analyzed to assess associations between BCS and patient and tumor characteristics. Outcomes were calculated as a function of BCS.
Results: The study cohort consisted of 453 patients (luminal A 48.6%, luminal B 16.1%, HER-2 11.0%, triple negative 24.2%), of whom 22% (n=113) were node positive. There were no significant associations between BCS and pN stage, node positivity, or absolute number of nodes involved (p>0.05 for all). However, there were significant associations with subtype and age at presentation (p<0.001), method of detection (p=0.049), tumor histology (p<0.001), race (p=0.041), and tumor size (pT stage, p<0.001) by univariate and multivariate analysis. As expected, 10-year outcomes differed by BCS, with triple negative and HER-2 subtypes having the worst overall (p=0.03), disease-free (p=0.03), and distant metastasis-free survival (p<0.01).
Conclusions: There is a significant association between BCS and age, T stage, histology, method of detection, and race, but no associations to predict nodal involvement. If additionally validated, these findings suggest that BCS may not be a useful prognostic variable for influencing regional management considerations.
abstract_id: PUBMED:27544822
The St. Gallen surrogate classification for breast cancer subtypes successfully predicts tumor presenting features, nodal involvement, recurrence patterns and disease free survival. Aims: To evaluate how the St. Gallen intrinsic subtype classification for breast cancer surrogates predicts disease features, recurrence patterns and disease free survival.
Materials And Methods: Subtypes were classified by immunohistochemical staining according to the St. Gallen subtype classification in a 5-tier system: luminal A, luminal B HER2-neu negative, luminal B HER2-neu positive, non-luminal HER2-neu positive, or basal-like. Data were obtained from the records of patients with invasive breast cancer treated at our institution. Recurrence data and site of first recurrence were recorded. The chi-squared test, analysis of variance, and multivariate logistic regression analysis were used to determine associations between surrogates and clinicopathologic variables.
Results: A total of 2.984 tumors were classifiable into surrogate subtypes. Significant differences in age, tumor size, nodal involvement, nuclear grade, multicentric/multifocal disease (MF/MC), lymphovascular invasion, and extensive intraductal component (EIC) were observed among surrogates (p < 0.0001). After adjusting for confounding factors surrogates remained predictive of nodal involvement (luminal B HER2-neu pos. OR = 1.49 p = 0.009, non-luminal HER2-neu pos. OR = 1.61 p = 0.015 and basal-like OR = 0.60, p = 0.002) while HER2-neu positivity remained predictive of EIC (OR = 3.10, p < 0.0001) and MF/MC (OR = 1.45, p = 0.02). Recurrence rates differed among the surrogates and were time-dependent (p = 0.001) and site-specific (p < 0.0001).
Conclusion: The St. Gallen 5-tier surrogate classification for breast cancer subtypes accurately predicts breast cancer presenting features (with emphasis on prediction of nodal involvement), recurrence patterns and disease free survival.
abstract_id: PUBMED:33418983
The Prognostic Value of Lymph Node Involvement after Neoadjuvant Chemotherapy Is Different among Breast Cancer Subtypes. Introduction: The three different breast cancer subtypes (Luminal, HER2-positive, and triple negative (TNBCs) display different natural history and sensitivity to treatment, but little is known about whether residual axillary disease after neoadjuvant chemotherapy (NAC) carries a different prognostic value by BC subtype.
Methods: We retrospectively evaluated the axillary involvement (0, 1 to 3 positive nodes, ≥4 positive nodes) on surgical specimens from a cohort of T1-T3NxM0 BC patients treated with NAC between 2002 and 2012. We analyzed the association between nodal involvement (ypN) binned into three classes (0; 1 to 3; 4 or more), relapse-free survival (RFS) and overall survival (OS) among the global population, and according to BC subtypes.
Results: 1197 patients were included in the analysis (luminal (n = 526, 43.9%), TNBCs (n = 376, 31.4%), HER2-positive BCs (n = 295, 24.6%)). After a median follow-up of 110.5 months, ypN was significantly associated with RFS, but this effect differed by BC subtype (P for interaction = 0.004) and was nonlinear. In the luminal subgroup, RFS was impaired in patients with 4 or more nodes involved (HR 2.8; 95% CI [1.93; 4.06], p < 0.001) when compared with ypN0, while it was not in patients with 1 to 3 nodes (HR = 1.24, 95% CI = [0.86; 1.79]). In patients with TNBC, both the 1-3 N+ and ≥4 N+ classes were associated with a decreased RFS (HR = 3.19, 95% CI = [2.05; 4.98] and HR = 4.83, 95% CI = [3.06; 7.63], respectively, versus ypN0, p < 0.001). A similarly decreased prognosis was observed among patients with HER2-positive BC (1-3 N+: HR = 2.7, 95% CI = [1.64; 4.43] and ≥4 N+: HR = 2.69, 95% CI = [1.24; 5.8], respectively, p = 0.003).
Conclusion: The prognostic value of residual axillary disease should be considered differently in the 3 BC subtypes to accurately stratify patients with a high risk of recurrence after NAC who should be offered second line therapies.
abstract_id: PUBMED:36654951
The ctDNA-based postoperative molecular residual disease status in different subtypes of early-stage breast cancer. Background: Breast cancer is a highly heterogeneous disease. Early-stage, non-metastatic breast cancer is considered curable after definitive treatment. Early detection of tumor recurrence and metastasis through sensitive biomarkers is helpful for guiding clinical decision-making and early intervention in second-line treatment, which could improve patient prognosis and survival.
Methods: In this real-world study, we retrospectively analyzed 82 patients with stages I to III breast cancer who had been analyzed by molecular residual disease (MRD) assay. A total of 82 tumor tissues and 224 peripheral blood samples were collected and detected by next-generation sequencing (NGS) based on a 1,021-gene panel in this study.
Results: MRD positivity was detected in 18 of 82 patients (22.0%). The hormone receptor-/human epidermal growth factor receptor 2+ (HR-/HER2+) subgroup had the highest postoperative MRD detection rate at 30.8% (4/13). The BRCA2 and SLX4 genes were significantly enriched in all patients in the MRD positive group and FGFR1 amplification was significantly enriched in the MRD negative group with HR+/HER2-. The number of single nucleotide variants (SNVs) in tissue samples of MRD-positive patients was higher than that of MRD-negative patients (11.94 vs. 8.50 SNVs/sample). Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis showed that there was a similar biological function of the tumor-mutated genes in the 2 MRD status groups.
Conclusions: This real-world study confirmed that patient samples of primary tumor tissue with different MRD status and molecular subtypes had differential genetic features, which may be used to predict patients at high risk for recurrence.
abstract_id: PUBMED:36143189
Clustering Molecular Subtypes in Breast Cancer, Immunohistochemical Parameters and Risk of Axillary Nodal Involvement. (1) Background: To establish similarities in the risk of axillary lymph node metastasis between different groups of women with breast cancer according to immunohistochemical (IHC) parameters. (2) Methods: Data was collected retrospectively, from 2000 to 2013, of 1058 node-positive breast tumours. All patients were divided according to the St Gallen 2013 criteria and IHC features. The proportion of axillary involvement (pN > pN0; pN > pN1mi; pN > pN1) was calculated for each group. Similarities in axillary nodal dissemination were explored by cluster analysis and association between IHC and risk of axillary disease was studied with multivariate analysis. (3) Results: Among clinico-pathological surrogates of intrinsic subtypes, axillary involvement was more frequent in Luminal-B like HER2 negative (45.8%) and less frequent in Luminal-B HER2 positive (33.8%; p = 0.044). Axillary macroscopic involvement was more frequent in Luminal-B like HER2 negative (37.9%) and HER2 positive (37.8%) and less frequent in Luminal-B HER2 positive (25.5%) and Luminal-A like (25.6%; p = 0.002). Axillary involvement ≥pN2 was significantly less frequent in Luminal-A like (7.4%; p < 0.001). Luminal-A with Luminal-B HER2 positive, and triple-negative with Erb-B2 overexpressing tumours were clustered together regarding any axillary involvement, macroscopic disease or ≥pN2. Among the defined subgroups, axillary metastases were more frequent when Ki67 was higher. In a multivariate analysis, Ki67>14% were associated with a risk of axillary metastases (HR: 1.31; 95% CI, 1.51−6.80; p < 0.037). (4) Conclusions: there are two lymphatic drainage pathways of the breast according to the expression of hormone receptor-related genes. Positive-ER tumors are associated with lower axillary involvement and negative-ER tumors and Ki67 > 14% with higher nodal involvement.
abstract_id: PUBMED:27770782
Triple negative breast cancer in North of Morocco: clinicopathologic and prognostic features. Background: Triple Negative Breast Cancer (TNBC) is defined by a lack of estrogen and progesterone receptor gene expression and by the absence of overexpression on HER2. It is associated to a poor prognosis. We propose to analyze the clinicopathologic and prognostic characteristics of this breast cancer subtype in a Mediterranean population originated or resident in the North of Morocco.
Methods: We conducted a retrospective study of 279 patients diagnosed with breast cancer between January 2010 and January 2015. Clinicopathologic and prognostic features have been analyzed. Disease-Free Survival (DFS) and Overall Survival (OS) have been estimated.
Results: Of all cases, forty-nine (17.6 %) were identified as having triple negative breast cancer with a median age of 46 years. The average tumor size was 3.6 cm. The majority of patients had invasive ductal carcinoma (91.8 %) and 40.4 % of them were grade III SBR. Nodal metastasis was detected in 38.9 % of the patients and vascular invasion was found in 36.6 % of them. About half of the patients had early-stage disease (53.1 %) and 46.9 % were diagnosed at an advanced stage. Patients with operable tumors (61.2 %) underwent primary surgery and adjuvant chemotherapy. Patients with inoperable tumors (26.5 %) received neoadjuvant chemotherapy followed by surgery, and patients with metastatic disease (12.2 %) were treated by palliative chemotherapy. DFS and OS at 5 years were 83.7 % and 71.4 %, respectively. Of the 49 patients, twelve had recurrences, detected either at diagnosis or during follow-up. Local relapse occurred in 6.1 %. Lung and liver metastases accounted for 8.2 % and 10.2 %, respectively. Bone metastases were found in 4.1 % and brain metastases in 2.1 % of the cases.
Conclusion: Our results are in accordance with the published literature, particularly concerning young age and poor prognosis in the TNBC phenotype. Therefore, the identification of BRCA mutations in our population seems essential in order to better adapt management options for this aggressive form of breast cancer.
abstract_id: PUBMED:38129804
A multivariable model of ultrasound and clinicopathological features for predicting axillary nodal burden of breast cancer: potential to prevent unnecessary axillary lymph node dissection. Background: To develop a clinical model for predicting high axillary nodal burden in patients with early breast cancer by integrating ultrasound (US) and clinicopathological features.
Methods And Materials: Patients with breast cancer who underwent preoperative US examination and breast surgery at the Affiliated Hospital of Nantong University (centre 1, n = 250) and at the Affiliated Hospital of Jiangsu University (centre 2, n = 97) between January 2012 and December 2016 and between January 2020 and March 2022, respectively, were deemed eligible for this study (n = 347). According to the number of lymph node (LN) metastasis based on pathology, patients were divided into two groups: limited nodal burden (0-2 metastatic LNs) and heavy nodal burden (≥ 3 metastatic LNs). In addition, US features combined with clinicopathological variables were compared between these two groups. Univariate and multivariate logistic regression analysis were conducted to identify the most valuable variables for predicting ≥ 3 LNs in breast cancer. A nomogram was then developed based on these independent factors.
Results: Univariate logistic regression analysis revealed that the cortical thickness (p < 0.001), longitudinal to transverse ratio (p = 0.001), absence of hilum (p < 0.001), T stage (p = 0.002) and Ki-67 (p = 0.039) were significantly associated with heavy nodal burden. In the multivariate logistic regression analysis, cortical thickness (p = 0.001), absence of hilum (p = 0.042) and T stage (p = 0.012) were considered independent predictors of high-burden node. The area under curve (AUC) of the nomogram was 0.749.
Conclusion: Our model based on US variables and clinicopathological characteristics can help select patients with ≥ 3 metastatic LNs, which in turn can help predict high axillary nodal burden in early breast cancer patients and prevent unnecessary axillary lymph node dissection.
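To make the modelling workflow in the abstract above more concrete, here is a minimal, hypothetical Python sketch of a multivariate logistic regression for heavy nodal burden followed by an apparent AUC, in the spirit of the nomogram described in PUBMED:38129804. The data file and column names (e.g., cortical_thickness, hilum_absent, t_stage) are assumptions for illustration, not the study's actual variables or code.

```python
# Hedged sketch: multivariate logistic regression for heavy axillary nodal burden
# (>= 3 metastatic LNs) and an apparent AUC, loosely mirroring PUBMED:38129804.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

df = pd.read_csv("axillary_cohort.csv")           # hypothetical dataset
predictors = ["cortical_thickness", "hilum_absent", "t_stage"]

X = sm.add_constant(df[predictors])               # add intercept term
y = df["heavy_nodal_burden"]                      # 1 = >= 3 metastatic LNs, 0 = 0-2

model = sm.Logit(y, X).fit()
print(model.summary())                            # coefficients; exponentiate for odds ratios

auc = roc_auc_score(y, model.predict(X))          # apparent (non-validated) AUC
print(f"Apparent AUC: {auc:.3f}")
```

A nomogram is essentially a graphical rendering of such a fitted model; a reported AUC like 0.749 would ideally come from internal or external validation rather than the training data alone.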
abstract_id: PUBMED:33898305
Tumor Size Still Impacts Prognosis in Breast Cancer With Extensive Nodal Involvement. Background And Purpose: Although tumor size and nodal status are the most important prognostic factors, it is believed that nodal status outperforms tumor size as a prognostic factor. In particular, when patients have a nodal stage greater than N2 (more than nine positive lymph nodes), it is well accepted that tumor size does not retain its prognostic value. Even in the newest American Joint Committee on Cancer (AJCC) prognostic staging system, which includes molecular subtype as an important prognostic factor, T1-3N2 patients are categorized as the same population. The same is true for T1-4N3 patients. Moreover, some physicians have speculated that for tumors staged N2 or greater, the smaller the tumor is, the more aggressive the tumor. Thus, this study aims to investigate the prognostic value of tumor stage (T stage) in patients with extensive nodal involvement and to compare the survival of T4NxM0 and TxN3M0 patients.
Patients And Methods: Female breast cancer patients with nine or more positive lymph nodes or with T4 tumors were identified in the SEER registry between 2010 and 2015. The effect of T stage on breast cancer-specific survival (BCSS) was assessed using the Kaplan-Meier survival curve method and risk-adjusted Cox proportional hazard regression modeling. Survival comparison of T4NxM0 and TxN3M0 patients was also achieved using the Kaplan-Meier survival curve method and risk-adjusted Cox proportional hazard regression model.
Results: Overall, 21,696 women with N2-3 tumors were included from 284,073 patients. T stage, nodal stage (N stage), ER, PR, HER2 and grade were all independent prognostic factors (p < 0.001). HRs for ER, PR, HER2, grade, and N stage were 0.662 (0.595-0.738), 0.488 (0.438-0.543), 0.541 (0.489-0.598), 1.534 (1.293-1.418) and 1.551 (1.435-1.676), respectively. Notably, HER2 positivity was correlated with better BCSS, possibly due to the wide adoption of anti-HER2 therapy. Using T1 as a reference, HRs of T2, T3, and T4 were 1.363 (1.200-1.548), 2.092 (1.824-2.399) and 3.497 (3.045-4.017), respectively. The same results held true when subgroup analyses based on N stage were conducted. In the two subgroups, namely women staged as T1-3N2 and women staged as T1-4N3, T stage was also a significant negative prognostic factor independent of ER, PR, HER2 and grade. Moreover, 8,328 women staged as T4 with different nodal statuses were also identified from the whole database. When we compared T4Nx with TxN3, it was found that T4 tumors exhibited worse outcomes than N3 tumors independent of other prognostic factors. When molecular subtype was included in the subgroup analysis, survival could not be distinguished between T4 and N3 only in TNBC.
Conclusions: In patients with extensive nodal status, tumor stage remains a prognostic factor independent of other factors, such as ER, PR, HER2, and grade. In patients with T4Nx or TxN3 tumors, T4 tumors exhibit worse outcomes than N3 tumors independent of other prognostic factors. The AJCC staging system should be modified based on these findings.
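For readers unfamiliar with the survival methods cited above, a minimal sketch of Kaplan-Meier estimation and a risk-adjusted Cox proportional hazards model (as used for BCSS in PUBMED:33898305) might look like the following. The data file and column names are hypothetical, and categorical covariates would normally be dummy-coded rather than entered as numeric codes.

```python
# Hedged sketch: Kaplan-Meier curves and a Cox proportional hazards model,
# analogous to the breast cancer-specific survival analysis in PUBMED:33898305.
# Column names ("time", "event", "t_stage", ...) are hypothetical placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("seer_subset.csv")               # hypothetical extract

# Kaplan-Meier estimates stratified by T stage
kmf = KaplanMeierFitter()
for t_stage, grp in df.groupby("t_stage"):
    kmf.fit(grp["time"], event_observed=grp["event"], label=f"T{t_stage}")
    print(t_stage, kmf.median_survival_time_)

# Risk-adjusted Cox model: hazard ratios are exp(coef) in the summary output
cph = CoxPHFitter()
cph.fit(df[["time", "event", "t_stage", "n_stage", "er", "pr", "her2", "grade"]],
        duration_col="time", event_col="event")
cph.print_summary()
```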
abstract_id: PUBMED:17845761
Nodal versus extranodal diffuse large B-cell lymphoma: comparison of clinicopathologic features, immunophenotype and prognosis Objective: To study the clinicopathologic features and outcome of patients with diffuse large B-cell lymphoma (DLBCL), and to compare the differences between DLBCL of nodal and extranodal origins.
Methods: One hundred and forty-two cases of de novo DLBCL collected during a 10-year period were reviewed. The clinicopathologic features and follow-up (2 - 108 months) data were analyzed. Tissue microarray blocks were performed and immunohistochemical studies using antibodies against CD10, bcl-6 and MUM1 were carried out. The cases were then further categorized into germinal center B cell-like (GCB) and non-GCB subtypes.
Results: Primary gastrointestinal DLBCL often presented as early-stage disease (stage I or II) and was associated with low international prognostic index. They showed better prognosis than DLBCL of nodal and other extranodal origins. The positivity rates of CD10, bcl-6 and MUM1 were 19%, 51% and 58%, respectively. 36% of the cases belonged to GCB, while the remaining 64% were non-GCB. In general, DLBCL of extranodal origin showed more frequent bcl-6 expression than nodal DLBCL. As for extranodal DLBCL, GCB immunophenotype was often seen in thyroid and breast tumors, while testicular DLBCL usually carried a non-GCB immunophenotype.
Conclusions: DLBCL of various origins show a diversified GCB and non-GCB differentiation. Nodal and extranodal DLBCL, as well as extranodal DLBCL from different primary sites, carry different biologic characteristics and prognostic implications.
abstract_id: PUBMED:25767340
Can molecular subtyping replace axillary nodal status as prognostic marker in breast cancer? Subtypes are an established prognostic factor of BC in western population but its significance in Indian BC patients has not been evaluated. Thus this study provides an insight into the prognostic significance of molecular classification and its effect on the survival of BC patients in Eastern India. In this hospital based study 242 BC patients attending a Comprehensive Breast Service Clinic of a reputed institute in Eastern India and having IDC were studied over a period of 7 years (January 2007 to October 2013). Nonluminal HER-2-positive and Triple negative tumors were associated with advanced stage of disease, metastatic lymph nodes and NPI ≥5.4, whereas Luminal 1 and Luminal 2 tumors were associated with early stage, uninvolved lymph nodes and NPI <5.4. Better survival was observed for the patients with Luminal 1 [OS = 57.1 % (n = 36)] and Luminal 2 [OS = 60.0 % (n = 6)], compared to Triple negative [OS = 33.6 % (n = 38)] and nonluminal HER-2-positive tumors [OS = 32.1 % (n = 18)]. This study provided some idea about the pattern of BC on the basis of classification by molecular profiling. Our study indicated that Triple negative and nonluminal HER-2-positive tumors have reduced DFS and OS compared with luminal 1 and 2 subtypes. In our patients, Triple negative and nonluminal HER-2-positive tumors were associated with established unfavorable prognostic indicators and this reflects the data in the western literature. The results suggest that the molecular subtypes are an independent prognostic and predictive marker in Indian BC patients. Whether or not molecular subtyping of breast cancer can replace axillary lymph nodes as the standard in prognosis remains to be seen, but if molecular subtyping can provide more information than the axilla about the prognosis and treatment option, it may well be the future of prognostication.
Answer: Breast cancer subtypes are indeed associated with clinicopathologic features at presentation in early-stage breast cancer, but their prognostic value for nodal involvement is variable according to different studies.
One study found no significant associations between breast cancer subtypes (BCS) and nodal involvement, including pN stage, node positivity, or the absolute number of nodes involved. However, significant associations were observed between BCS and age at presentation, method of detection, tumor histology, race, and tumor size (pT stage) (PUBMED:23661183).
In contrast, another study using the St. Gallen surrogate classification for breast cancer subtypes reported that these subtypes successfully predict tumor presenting features, nodal involvement, recurrence patterns, and disease-free survival. After adjusting for confounding factors, surrogates remained predictive of nodal involvement (PUBMED:27544822).
Further research indicated that the prognostic value of lymph node involvement after neoadjuvant chemotherapy differs among breast cancer subtypes. For example, in luminal subtypes, a higher number of involved nodes was associated with impaired relapse-free survival, while in triple-negative and HER2-positive breast cancers, even 1 to 3 positive nodes were associated with decreased relapse-free survival (PUBMED:33418983).
Another study found that the hormone receptor-/human epidermal growth factor receptor 2+ (HR-/HER2+) subgroup had the highest postoperative molecular residual disease detection rate, suggesting that different molecular subtypes may have different risks for recurrence (PUBMED:36654951).
Additionally, clustering of molecular subtypes based on immunohistochemical parameters revealed two distinct lymphatic drainage pathways, with positive-ER tumors associated with lower axillary involvement and negative-ER tumors and Ki67 > 14% with higher nodal involvement (PUBMED:36143189).
In the context of triple-negative breast cancer (TNBC), a study in North Morocco found that TNBC was associated with a poor prognosis, with a significant number of patients presenting with nodal metastasis and vascular invasion (PUBMED:27770782).
A clinical model integrating ultrasound and clinicopathological features was developed to predict high axillary nodal burden, which could potentially prevent unnecessary axillary lymph node dissection (PUBMED:38129804).
Instruction: Laparoscopic Distal Pancreatectomy for Pancreatic Tumors: Does Size Matter?
Abstracts:
abstract_id: PUBMED:27688035
Current status of laparoscopic pancreaticoduodenectomy and pancreatectomy. This review describes the recent advances in, and current status of, minimally invasive pancreatic surgery (MIPS). Typical MIPS procedures are laparoscopic pancreaticoduodenectomy (LPD), laparoscopic distal pancreatectomy (LDP), laparoscopic central pancreatectomy (LCP), and laparoscopic total pancreatectomy (LTP). Some retrospective studies comparing LPD or LDP and open procedures have demonstrated the safety and feasibility as well as the intraoperative outcomes and postoperative recovery of these procedures. In contrast, LCP and LTP have not been widely accepted as common laparoscopic procedures owing to their complicated reconstruction and limited indications. Nevertheless, our concise review reveals that LCP and LTP performed by expert laparoscopic surgeons can result in good short-term and long-term outcomes. Moreover, as surgeons' experience with laparoscopic techniques continues to grow around the world, new innovations and breakthroughs in MIPS will evolve. Well-designed and suitably powered randomized controlled trials of LPD, LDP, LCP, and LTP are now warranted to demonstrate the superiority of these procedures.
abstract_id: PUBMED:24450349
Laparoscopic total remnant pancreatectomy after laparoscopic pancreaticoduodenectomy. Total remnant pancreatectomy after pancreaticoduodenectomy (PD) is a difficult procedure. Recently, distal pancreatectomy and PD have been performed laparoscopically. Herein, we present the first case report of a laparoscopic total remnant pancreatectomy. A 72-year-old woman underwent a totally laparoscopic pylorus-preserving PD for inferior bile duct cancer. The tumor was composed of moderately differentiated tubular adenocarcinoma and was diagnosed as pStage III according to the UICC-TNM classification. Eighteen months later, CT showed a low-density mass in the remnant pancreas. We conducted a total resection of the remnant pancreas laparoscopically. Histologically, it was diagnosed as a primary pancreatic cancer. The patient's postoperative course was uneventful. She was discharged on postoperative day 14. When an initial PD is performed laparoscopically, laparoscopic total remnant pancreatectomy is technically feasible and safe in selected patients.
abstract_id: PUBMED:26722207
Laparoscopic Pancreatic Resections for Cancer: Pushing the Boundaries. Laparoscopic pancreatic resection (distal pancreatectomy, pancreaticoduodenectomy, including pancreaticoduodenectomy with vascular resections) for pancreatic ductal adenocarcinoma is a technically demanding procedure, and the available evidence from selected high-volume centers suggests that it is well within the oncological principles of surgery for cancer, though this remains to be proven in a randomized study. This review summarizes the present status of laparoscopic resections for pancreatic cancer.
abstract_id: PUBMED:24550177
Tips on laparoscopic distal pancreatectomy. An increasing number of laparoscopic pancreatic procedures are currently carried out worldwide. Laparoscopic distal pancreatectomy (LDP) appears to be technically and oncologically promising in selected patients with benign tumors and low-grade malignancies of the pancreatic body/tail, and is now widely adopted. Here, we described our standard procedures of LDP and some tips on LDP. Recent important insights into some variations/options of LDP including spleen preservation, hand-assisted procedure, and single-incision surgery are also reviewed in this article.
abstract_id: PUBMED:25339812
Laparoscopic resection of pancreatic adenocarcinoma: dream or reality? Laparoscopic pancreatic surgery is in its infancy despite initial procedures reported two decades ago. Both laparoscopic distal pancreatectomy (LDP) and laparoscopic pancreaticoduodenectomy (LPD) can be performed competently; however, when minimally invasive surgical (MIS) approaches are implemented, the indication is often a benign or low-grade malignant pathology. Nonetheless, LDP and LPD afford improved perioperative outcomes, similar to those observed when MIS is utilized for other purposes. This includes decreased blood loss, shorter length of hospital stay, reduced post-operative pain, and expedited time to functional recovery. What then is its role for resection of pancreatic adenocarcinoma? The biology of this aggressive cancer and the inherent challenge of pancreatic surgery have slowed MIS progress in this field. In general, the overall quality of evidence is low, with a lack of randomized controlled trials, a preponderance of uncontrolled series, short follow-up intervals, and small sample sizes in the studies available. Available evidence compiles heterogeneous pathologic diagnoses and is limited by case-by-case follow-up, which makes extrapolation of results difficult. Nonetheless, short-term surrogate markers of oncologic success, such as margin status and lymph node harvest, are comparable to open procedures. Unfortunately, disease recurrence and long-term survival data are lacking. In this review we explore the evidence available regarding laparoscopic resection of pancreatic adenocarcinoma, a promising approach for future widespread application.
abstract_id: PUBMED:7932341
Laparoscopic surgery of the pancreas. Diagnostic laparoscopy provides useful information in patients with pancreatic disease and is the most reliable technique for the staging of patients with pancreatic cancer. The advent of laparoscopic contact ultrasonography has enhanced the diagnostic and staging potential of laparoscopy. In addition to laparoscopic cholecystectomy for acute gallstone-associated pancreatitis, the following operations have been performed laparoscopically: bilio-enteric bypass and gastrojejunostomy in patients with advanced pancreatic cancer; internal drainage for pseudocysts; resection of insulinomas; distal resection for chronic pancreatitis; and pancreaticoduodenectomy for pancreatic cancer. Aside from cholecystectomy, there is as yet, insufficient information to conclude on the advantages of these laparoscopic approaches although the early results, particularly in the palliation of patients with malignant jaundice, are promising. Bilateral thoracoscopic splanchnicectomy for the relief of intractable pancreatic pain is also under current evaluation.
abstract_id: PUBMED:30843350
Laparoscopic pancreaticoduodenectomy for remnant pancreatic recurrence after laparoscopic distal pancreatectomy and hepatectomy for greater omentum leiomyosarcoma. Laparoscopic pancreatic surgery is one of the most difficult procedures, and the adoption of laparoscopic pancreaticoduodenectomy has been limited. The application of laparoscopic surgery has extended to advanced cancer, but there have been no reports of laparoscopic pancreaticoduodenectomy after laparoscopic liver resection and distal pancreatectomy. In the present case, a 67-year-old woman was diagnosed with remnant pancreatic recurrence of metastatic greater omentum leiomyosarcoma. She had previously undergone laparoscopic distal pancreatectomy and left lateral liver sectionectomy in 2016. We performed laparoscopic subtotal stomach-preserving pancreaticoduodenectomy in June 2017. The operation time was 274 minutes, and the estimated blood loss was 50 mL. There were no postoperative complications. In summary, laparoscopic pancreaticoduodenectomy is a safe and feasible procedure for a patient who had previously undergone pancreas and liver surgery.
abstract_id: PUBMED:33218187
Is Laparoscopic Pancreaticoduodenectomy Feasible for Pancreatic Ductal Adenocarcinoma? Margin-negative radical pancreatectomy is the essential condition for obtaining long-term survival of patients with pancreatic cancer. With the investigation of early diagnosis, introduction of potent chemotherapeutic agents, application of neoadjuvant chemotherapy, advancement of open and laparoscopic surgical techniques, mature perioperative management, and patients' improved general conditions, survival in resected pancreatic cancer is expected to improve further. According to the literature, laparoscopic pancreaticoduodenectomy (LPD) is also thought to be a good alternative strategy in managing well-selected resectable pancreatic cancer. LPD with combined vascular resection is also feasible, but only expert surgeons should handle these challenging cases. LPD for pancreatic cancer should be determined based on surgeons' proficiency to fulfil the goals of patient safety and oncologic principles.
abstract_id: PUBMED:23890145
Laparoscopic left pancreatectomy: current concepts. The minimally invasive approach has been slow to gain acceptance in the field of pancreatic surgery even though its advantages over the open approach have been extensively documented in the medical literature. The reasons for the reluctant use of the technique are manifold. Laparoscopic distal or left sided pancreatic resections have slowly become the standard approach to lesions of the pancreatic body and tail as a result of evolution in technology and experience. A number of studies have shown the potential advantages of the technique in terms of safety, blood loss, oncological and economic feasibility, hospital stay and time to recovery from surgery. This review aims to provide an overview of the recent advances in the field of laparoscopic left pancreatectomy (LLP) and discuss potential future developments.
abstract_id: PUBMED:24862672
Laparoscopic spleen-preserving distal pancreatectomy for insulinoma: experience of a single center. Background: Laparoscopic spleen-preserving distal pancreatectomy is gaining acceptance for the treatment of insulinomas of the pancreatic body and tail. The aim of this report is to evaluate the feasibility, safety and outcomes of this procedure in a retrospective series.
Methods: From May 2004 to November 2013, 9 patients underwent laparoscopic spleen-preserving distal pancreatectomy for benign insulinomas in our department. Tumors were single and sporadic in eight patients, while the remaining patient had insulinomas in the setting of multiple endocrine neoplasia type 1. Tumors were located by preoperative imaging in all cases. Laparoscopic ultrasound was always performed to guide the surgical procedure.
Results: All the operations were carried out laparoscopically with a mean operative time of 110 min (range 90-210 min) and a mean blood loss of 50 ml (range 30-120 ml). One patient (11.1%) died on the 22nd post-operative day from massive intra-abdominal bleeding associated with pancreatitis of the stump. Two patients (22.2%) developed a pancreatic fistula that healed conservatively. Mean postoperative hospital stay was 7.1 days (range 5-18 days). All surviving patients were free from recurrence after a mean follow-up of 45 months (range 11-72 months).
Conclusion: Laparoscopic spleen-preserving distal pancreatectomy is safe and feasible for the management of benign insulinomas. Definition of the tumor with preoperative imaging and laparoscopic ultrasound is essential to achieve high cure rate with minimal conversion.
Answer: Laparoscopic distal pancreatectomy (LDP) is considered technically and oncologically promising for selected patients with benign tumors and low-grade malignancies of the pancreatic body/tail, and it is now widely adopted (PUBMED:24550177). The procedure has been shown to be safe and feasible for the management of benign insulinomas, with the size of the tumors being successfully defined by preoperative imaging and laparoscopic ultrasound to achieve a high cure rate with minimal conversion (PUBMED:24862672).
However, the abstracts provided do not explicitly discuss the impact of tumor size on the feasibility or outcomes of LDP. They do emphasize the importance of careful patient selection and the technical expertise of the surgeon (PUBMED:27688035, PUBMED:26722207, PUBMED:25339812, PUBMED:30843350, PUBMED:33218187, PUBMED:23890145).
Given the complexity of pancreatic surgeries and the need for meticulous surgical technique, it is reasonable to infer that larger tumors might present more challenges during LDP, potentially affecting the ease of resection and the ability to achieve clear margins. However, without specific data on the impact of tumor size on LDP outcomes, it is not possible to make definitive statements based on the provided abstracts.
For a more conclusive answer, studies that directly compare the outcomes of LDP in relation to tumor size would be needed. These would ideally include assessments of operative time, blood loss, margin status, postoperative complications, and long-term oncological outcomes such as recurrence and survival rates.
Instruction: Seeing it from both sides: do approaches to involving patients in improving their safety risk damaging the trust between patients and healthcare professionals?
Abstracts:
abstract_id: PUBMED:34656110
Quality of care and patient safety at healthcare institutions in Oman: quantitative study of the perspectives of patients and healthcare professionals. Background: Oman's healthcare system has rapidly transformed in recent years. A recent Report of Quality and Patient Safety has nevertheless highlighted decreasing levels of patient safety and quality culture among healthcare professionals. This indicates the need to assess the quality of care and patient safety from the perspectives of both patients and healthcare professionals.
Objectives: This study aimed to examine (1) patients' and healthcare professionals' perspectives on overall quality of care and patient safety standards at two tertiary hospitals in Oman and (2) which demographic characteristics are related to the overall quality of care and patient safety.
Methods: A cross-sectional study design was employed. Data were collected by two items: overall quality of care and patient safety, incorporated in the Revised Humane Caring Scale, and Healthcare Professional Core Competency Instrument. Questionnaires were distributed to (1) patients (n = 600) and (2) healthcare professionals (nurses and physicians) (n = 246) in three departments (medical, surgical and obstetrics and gynaecology) at two tertiary hospitals in Oman towards the end of 2018 and the beginning of 2019. Descriptive statistics and binary logistic regression were used for data analysis.
Results: A total of 367 patients and 140 healthcare professionals completed the questionnaires, representing response rates of 61.2% and 56.9%, respectively. Overall, quality of care and patient safety were perceived as high, with the healthcare professionals rating quality of care (M = 4.36; SD = 0.720) and patient safety (M = 4.39; SD = 0.675) slightly higher than the patients did (M = 4.23; SD = 0.706), (M = 4.22; SD = 0.709). The findings indicated an association between hospital variables and overall quality of care (OR = 0.095; 95% CI = 0.016-0.551; p = 0.009) and patient safety (OR = 0.153; 95% CI = 0.027-0.854; p = 0.032) among healthcare professionals. Additionally, an association between the admission/work area and participants' perspectives on the quality of care (patients, OR = 0.257; 95% CI = 0.072-0.916; p = 0.036; professionals, OR = 0.093; 95% CI = 0.009-0.959; p = 0.046) was found.
Conclusions: The perspectives of both patients and healthcare professionals showed that they viewed both quality of care and patient safety as excellent, with slight differences, indicating a high level of patient satisfaction and competent healthcare delivery professionals. Such perspectives can provide meaningful and complementary insights on improving the overall standards of healthcare delivery systems.
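As an aside on the statistics reported above, odds ratios with 95% confidence intervals of the kind quoted in this abstract are typically obtained by exponentiating logistic regression coefficients. A minimal, hypothetical Python sketch (the survey file and variable names are assumptions, not the study's data) is:

```python
# Hedged sketch: odds ratios and 95% CIs from a binary logistic regression,
# of the kind reported in PUBMED:34656110. File and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")          # hypothetical survey extract

# Outcome: 1 = rated overall quality of care as high, 0 = otherwise
model = smf.logit("high_quality ~ C(hospital) + C(department) + age + C(gender)",
                  data=df).fit()

odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_lower": np.exp(model.conf_int()[0]),
    "CI_upper": np.exp(model.conf_int()[1]),
    "p_value": model.pvalues,
})
print(odds_ratios)
```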
abstract_id: PUBMED:24223230
Seeing it from both sides: do approaches to involving patients in improving their safety risk damaging the trust between patients and healthcare professionals? An interview study. Objective: Encouraging patients to be more vigilant about their care challenges the traditional dynamics of patient-healthcare professional interactions. This study aimed to explore, from the perspectives of both patients and frontline healthcare staff, the potential consequences of patient-mediated intervention as a way of pushing safety improvement through the involvement of patients.
Design: Qualitative study, using purposive sampling and semi-structured interviews with patients, their relatives and healthcare professionals. Emergent themes were identified using grounded theory, with data coded using NVIVO 8.
Participants: 16 patients, 4 relatives (mean age (SD) 60 years (15); 12 female, 8 male) and 39 healthcare professionals (9 pharmacists, 11 doctors, 12 nurses, 7 health care assistants).
Setting: Participants were sampled from general medical and surgical wards, taking acute and elective admissions, in two hospitals in north east England.
Results: Positive consequences were identified but some actions encouraged by current patient-mediated approaches elicited feelings of suspicion and mistrust. For example, patients felt speaking up might appear rude or disrespectful, were concerned about upsetting staff and worried that their care might be compromised. Staff, whilst apparently welcoming patient questions, appeared uncertain about patients' motives for questioning and believed that patients who asked many questions and/or who wrote things down were preparing to complain. Behavioural implications were identified that could serve to exacerbate patient safety problems (e.g. staff avoiding contact with inquisitive patients or relatives; patients avoiding contact with unreceptive staff).
Conclusions: Approaches that aim to push improvement in patient safety through the involvement of patients could engender mistrust and create negative tensions in the patient-provider relationship. A more collaborative approach, that encourages patients and healthcare staff to work together, is needed. Future initiatives should aim to shift the current focus away from "checking up" on individual healthcare professionals to one that engages both parties in the common goal of enhancing safety.
abstract_id: PUBMED:37889675
Trust in healthcare professionals of people with chronic cardiovascular disease. Background: Trust is an essential phenomenon of relationship between patients and healthcare professionals and can be described as an accepted vulnerability to the power of another person over something that one cares about in virtue of goodwill toward the trustor. This characterization of interpersonal trust appears to be adequate for patients suffering from chronic illness. Trust is especially important in the context of chronic cardiovascular diseases as one of the main global health problems.
Research Aim: The purpose of the qualitative study was to gain a deeper understanding of how people with chronic cardiovascular disease experience and make sense of trust in healthcare professionals.
Research Design: Eleven semi-structured interviews with participants were analysed using interpretative phenomenological analysis to explore in detail their lived experience of trust as a relational phenomenon.
Participants And Research Context: Participants with chronic cardiovascular disease were purposively recruited from inpatients on the cardiology ward of the university hospital located in central Slovakia.
Ethical Considerations: The study was approved by the faculty ethics committee. Participants gave their written informed consent.
Findings: Four interrelated group experiential themes with eight subthemes were identified: Sense of co-existence; Belief in competence; Will to help; and Ontological security. The findings describe the participants' experience of trust in healthcare professionals as a phenomenon of close co-existence, rooted in the participants' vulnerability and dependence on the goodwill and competence of health professionals to help, with the consequence of (re)establishing a sense of ontological security in the situation of chronic illness.
Conclusion: Findings will contribute to an in-depth understanding of trust as an existential dimension of human co-existence and an ethical requirement of healthcare practice, inspire patient empowerment interventions, support adherence to treatment, and person-centred care.
abstract_id: PUBMED:36292561
Impacts of Internet Use on Chinese Patients' Trust-Related Primary Healthcare Utilization. Background: The internet has greatly improved the availability of medical knowledge and may be an important avenue to improve patients' trust in physicians and promote primary healthcare seeking by reducing information asymmetry. However, very few studies have addressed the interactive impacts of both patients' internet use and trust on primary healthcare-seeking decisions. Objective: To explore the impact of internet use on the relationship between patients' trust in physicians and primary healthcare seeking among Chinese adults 18 years of age and older, and to understand how these effects vary across cities. Methods: Generalized linear mixed models were applied to investigate the interactive impacts of internet use and patients' trust in physicians on primary healthcare seeking using pooled data from the China Family Panel Study of 2014 to 2018. We also compared these effects based on different levels of urbanization, aging, and PHC services. Results: Overall, a higher degree of patients' trust (p < 0.001) directly predicted better primary healthcare seeking, and internet use significantly increased the positive effect of patients' trust on primary healthcare seeking (p < 0.001). However, the marginal effect analysis showed that this effect was related to the level of patients' trust, and that internet use could reduce the positive effect of patients' trust on primary healthcare seeking when the individual had a low level of trust (≤ 3 units). Further, the heterogeneity analysis indicated that the benefits from internet use were higher in cities with high urbanization, high aging, and high PHC service levels compared to cities with low levels of these factors. Conclusions: Internet use may enhance patients' trust-related PHC utilization. However, this impact is effective only if patients' baseline trust remains at a relatively high level. Comparatively, internet use is more effective in areas with high urbanization, high aging and high PHC levels. Thus, with increasing accessibility to the internet, the internet should be regulated to disseminate correct healthcare information. Moreover, in-depth integration of the internet and PHC should be promoted to provide excellent opportunities for patient participation, and different strategies should be set according to each city's characteristics.
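The interaction effect described above (internet use modifying the effect of trust on primary-care seeking) can be illustrated with a simplified model. The sketch below uses a plain logistic regression with an interaction term and average marginal effects rather than the generalized linear mixed models of the original panel analysis, and all file and variable names are hypothetical.

```python
# Hedged sketch: interaction between trust and internet use on primary healthcare
# seeking, loosely following PUBMED:36292561. This simplification omits the random
# effects of the original mixed models; names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cfps_panel.csv")                # hypothetical panel extract

model = smf.logit("seeks_phc ~ trust * internet_use + age + C(city_tier)",
                  data=df).fit()
print(model.summary())                            # trust:internet_use is the interaction term

# Average marginal effects help interpret how internet use shifts the trust effect
print(model.get_margeff(at="overall").summary())
```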
abstract_id: PUBMED:34553446
Framing healthcare professionals in written adverse events: A discourse analysis. Healthcare professionals have a major responsibility to protect patients from harm. Despite vast efforts to decrease the number of adverse events, the progression of patient safety has internationally been acknowledged as slow. From a social construction perspective, it has been argued that the understanding of patient safety is contextual, based on historical and structural rules, and that this meaning construction points out different directions of possible patient safety actions. By focusing on fact construction and its productive and limiting effect on how something can be understood, we explored the discourses about healthcare professionals in 29 written reports of adverse events as reported by patients, relatives, and healthcare professionals. Through the analysis, a discourse about the healthcare professionals as experts was found. The expert role most dominantly included an understanding that adverse events were identified through physical signs and that adverse events could be prevented by more strictly following routines and work procedures. We concluded that these regimes of truth brought power to the expert discourse, to the point that it became difficult for patients and relatives to engage in patient safety actions on their own terms.
abstract_id: PUBMED:30192694
Examining factors affecting patients trust in online healthcare services in China: The moderating role of the purpose of use. With the development of Web 2.0 technologies, an increasing number of websites are providing online healthcare services, and they have potential to alleviate problems of overloaded medical resources in China. However, some patients are reluctant to trust and continue using online healthcare services, partly due to the immature development of healthcare websites. Previous research has argued that online trust is significantly associated with the risk or benefit perceived by users. This study aims to extend prior research and examine how perceptual factors influence patients' online trust and intention to continue using online healthcare services. We developed a model with the moderating role of purpose of use and tested it with data collected from 283 participants. The results support the validity of the model and most hypotheses. The moderating role of purpose of use between the perceived benefits/risks and patients' online trust is also highlighted. Theoretical and practical implications are also discussed.
abstract_id: PUBMED:33235457
Healthcare Professionals' Experiences of Assessing, Treating and Preventing Constipation Among Older Patients During Hospitalization: An Interview Study. Purpose: Constipation is a common and troublesome condition among older patients and can result in a variety of negative health consequences. It is often undiagnosed or undertreated. Healthcare professionals have a responsibility to understand and address patients' overall healthcare needs; so exploring their experiences is, therefore, highly relevant. The purpose of the study was to explore healthcare professionals' experiences of assessing, treating and preventing constipation among older patients.
Methods: A qualitative design with an exploratory approach was used. The participants (registered nurses and physicians) were purposively sampled from three wards in a geriatric department in a medium-sized hospital in Sweden. Data were collected through focus group discussions and individual interviews, and analyzed using content analysis.
Results: Three categories were generated: Reasons for suboptimal management of constipation, Strategies for management, and Approaching the patients' needs. In the care of older patients at risk of or with constipation, decisions were made based on personal knowledge, personal experience and clinical reasoning. A person-centered approach was highlighted but was not always possible to incorporate.
Conclusion: Different strategies for preventing and treating constipation were believed to be important, as was person-centered care, but were found to be challenging in the complexity of the care situation. It is important that healthcare professionals reflect on their own knowledge and clinical practice. There is a need for more support, information and specific guidance for healthcare professionals caring for older patients during hospitalization. Overall, this study underscores the importance of adequate access to resources and education in constipation management and that clinical guidelines, such as the Swedish Handbook for Healthcare, could be used as a guide for delivering high-quality care in hospitals.
abstract_id: PUBMED:37340472
Patient safety and sense of security when telemonitoring chronic conditions at home: the views of patients and healthcare professionals - a qualitative study. Background: Chronic diseases are increasing worldwide, and the complexity of disease management is putting new demands on safe healthcare. Telemonitoring technology has the potential to improve self-care management with the support of healthcare professionals for people with chronic diseases living at home. Patient safety threats related to telemonitoring and how they may affect patients' and healthcare professionals' sense of security need attention. This study aimed to explore patients' and healthcare professionals' experiences of safety and sense of security when using telemonitoring of chronic conditions at home.
Methods: Semi-structured interviews were conducted with twenty patients and nine healthcare professionals (nurses and physicians), recruited from four primary healthcare centers and one medical department in a region in southern Sweden using telemonitoring service for chronic conditions in home healthcare.
Results: The main theme was that experiences of safety and a sense of security were intertwined and relied on patients´ and healthcare professionals´ mutual engagement in telemonitoring and managing symptoms together. Telemonitoring was perceived to increase symptom awareness and promote early detection of deterioration promoting patient safety. A sense of security emerged through having someone keeping track of symptoms and comprised aspects of availability, shared responsibility, technical confidence, and empowering patients in self-management. The meeting with technology changed healthcare professionals' work processes, and patients' daily routines, creating patient safety risks if combined with low health- and digital literacy and a naïve reliance on technology. Empowering patients' self-management ability and improving shared understanding of the patient's health status and symptom management were prerequisites for safe care and the patient´s sense of security.
Conclusions: Telemonitoring chronic conditions in the homecare context can promote a sense of security when care is co-created in a mutual understanding and responsibility. Attentiveness to the patient's health literacy, symptom management, and health-related safety behavior when using eHealth technology may enlighten and mitigate latent patient safety risks. A systems approach indicates that patient safety risks related to telemonitoring are not only associated with the patient's and healthcare professionals functioning and behavior or the human-technology interaction. Mitigating patient safety risks are likely also dependent on the complex management of home health and social care service.
abstract_id: PUBMED:37393250
Trusting relationships between patients with non-curative cancer and healthcare professionals create ethical obstacles for informed consent in clinical trials: a grounded theory study. Background: Clinical trial participation for patients with non-curative cancer is unlikely to present personal clinical benefit, which raises the bar for informed consent. Previous work demonstrates that decisions by patients in this setting are made within a 'trusting relationship' with healthcare professionals. The current study aimed to further illuminate the nuances of this relationship from both the patients' and healthcare professionals' perspectives.
Methods: Face-to-face interviews using a grounded theory approach were conducted at a regional Cancer Centre in the United Kingdom. Interviews were performed with 34 participants (patients with non-curative cancer, number (n) = 16; healthcare professionals involved in the consent process, n = 18). Data analysis was performed after each interview using open, selective, and theoretical coding.
Results: The 'Trusting relationship' with healthcare professionals underpinned patient motivation to participate, with many patients 'feeling lucky' and articulating an unrealistic hope that a clinical trial could provide a cure. Patients adopted the attitude of 'What the doctor thinks is best' and placed significant trust in healthcare professionals, focusing on mainly positive aspects of the information provided. Healthcare professionals recognised that trial information was not received neutrally by patients, with some expressing concerns that patients would consent to 'please' them. This raises the question: Within the trusting relationship between patients and healthcare professionals, 'Is it possible to provide balanced information?'. The theoretical model identified in this study is central to understanding how the trusting professional-patient relationship influences the decision-making process.
Conclusion: The significant trust placed on healthcare professionals by patients presented an obstacle to delivering balanced trial information, with patients sometimes participating to please the 'experts'. In this high-stakes scenario, it may be pertinent to consider strategies, such as separation of the clinician-researcher roles and enabling patients to articulate their care priorities and preferences within the informed consent process. Further research is needed to expand on these ethical conundrums and ensure patient choice and autonomy in trial participation are prioritised, particularly when the patient's life is limited.
abstract_id: PUBMED:30504947
Healthcare professionals awareness of the consequences of medical error on patients. Objective: To ascertain the awareness level and demographic differences of the consequences of medical errors on patients' health, safety, resources and survival by healthcare professionals.
Methods: The descriptive study was conducted at five different public hospitals in Nigeria from August to October 2017, and comprised healthcare professionals who were permanent staff members. An awareness of medical errors questionnaire was used for data collection. Dimensions assessed were safety, health, resources and survival. A mean score of >2.50 was taken as the cut-off value for an acceptable level of awareness. SPSS 20 was used for data analysis.
Results: Of the 200 participants initially enrolled, 186 (93%) completed the questionnaire in full. Of them, 98 (53%) were females, 92 (49%) were aged 30-49 years, 98 (53%) were staff nurses, 24 (13%) were doctors and 64 (34%) were healthcare assistants. The overall mean questionnaire score was 2.60±0.05, indicating that the participants were aware of the consequences of medical error on patients. In terms of individual dimensions, the scores were acceptable for safety, health and resources (>2.50), but the mean score on patients' survival was 2.34±0.08.
Conclusions: There appeared to be a need for the government to raise the awareness level for healthcare professionals regarding consequences of medical errors on patients in public hospitals.
Answer: The question of whether involving patients in improving their safety risks damaging the trust between patients and healthcare professionals is complex and multifaceted. The study by PUBMED:24223230 explores this issue and suggests that while there are positive consequences to patient involvement in safety, there are also potential risks. The study found that some patient-mediated approaches could elicit feelings of suspicion and mistrust. Patients may worry about appearing rude or disrespectful by speaking up, and they may fear that their care could be compromised if they upset staff. Healthcare professionals, while outwardly welcoming patient questions, may be uncertain about patients' motives and may perceive inquisitive patients as preparing to complain. This could lead to behavioral implications that exacerbate patient safety problems, such as staff avoiding contact with inquisitive patients or patients avoiding contact with unreceptive staff.
The study concludes that approaches to improving patient safety through patient involvement could potentially engender mistrust and create negative tensions in the patient-provider relationship. It suggests that a more collaborative approach is needed, one that encourages patients and healthcare staff to work together towards the common goal of enhancing safety, rather than focusing on "checking up" on individual healthcare professionals.
In contrast, the study by PUBMED:34656110 indicates that both patients and healthcare professionals in Oman perceive the quality of care and patient safety as high, suggesting a level of trust and satisfaction with the healthcare system. However, this study does not specifically address the impact of patient involvement in safety on the trust relationship.
The study by PUBMED:37889675 highlights the importance of trust in the context of chronic cardiovascular diseases, emphasizing that trust is crucial for patients suffering from chronic illness. It does not directly address the impact of patient involvement in safety on trust but underscores the significance of trust in healthcare relationships.
The study by PUBMED:37393250 explores the trusting relationship between patients with non-curative cancer and healthcare professionals in the context of clinical trials. It finds that the significant trust placed on healthcare professionals by patients can present an obstacle to delivering balanced trial information, with patients sometimes participating to please the "experts." This suggests that while trust is essential, it can also complicate the informed consent process and potentially impact patient autonomy.
In summary, while trust is a critical component of the patient-healthcare professional relationship, approaches to involving patients in improving their safety must be carefully considered to avoid damaging this trust.
Instruction: Is gaze-dependent tonometry a useful tool in the differential diagnosis of Graves' ophthalmopathy?
Abstracts:
abstract_id: PUBMED:18751718
Is gaze-dependent tonometry a useful tool in the differential diagnosis of Graves' ophthalmopathy? Background: A rise in intraocular pressure (IOP) in upgaze is regarded as a diagnostic sign in Graves' ophthalmopathy (GO). However, the question of erroneous IOP measurement due to applanation carried out on the peripheral cornea has never been addressed.
Methods: In 22 healthy volunteers, as well as in 51 GO patients, applanation tonometry was performed in the primary position of gaze and at 20 degrees of upgaze. In addition, applanation tonometry was repeated using a flexible chin rest to incline the head and produce 20 degrees upgaze. This enabled applanation on the central cornea.
Results: In healthy controls, mean IOP in conventional upgaze showed a significant rise compared to primary position (p < 0.0001). IOP measurements in 20 degrees upgaze/head inclination were significantly lower compared to conventional upgaze tonometry (p < 0.0001) and comparable to mean IOP in primary position (p = 0.7930). Mean IOP in GO patients was also significantly higher in conventional upgaze compared to primary position (p < 0.0001). The upgaze measurements obtained by head inclination were significantly lower than those from conventional upgaze tonometry (p < 0.0001), but showed a statistically significant rise compared to mean IOP in primary position (p < 0.0001). The overlap of IOP readings in upgaze between normal individuals and GO patients was considerable, even in patients with severely impaired ocular motility.
Conclusion: In both normal volunteers and patients suffering from GO, a rise in IOP was observed in conventional upgaze tonometry. However, this increase in IOP was partially due to applanation on the peripheral cornea. Measurements in upgaze by head inclination on the central cornea led to a significant lowering of the gaze-dependent IOP change. The discriminating power of the IOP difference between upgaze and primary position to diagnose GO was found to be limited. The broad overlap of IOP between normal individuals and GO patients as detected by conventionally performed upgaze tonometry leads us to conclude that this sign may not be of relevant differential diagnostic value in patients with a clinically undetermined diagnosis.
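The core comparison in this study is a within-eye (paired) contrast of IOP across gaze conditions. As a purely illustrative sketch of such a paired analysis (the values below are invented, not the study's measurements), one might write:

```python
# Hedged sketch: paired comparison of IOP across gaze conditions, analogous to the
# analysis in PUBMED:18751718. The arrays are illustrative values, not study data.
import numpy as np
from scipy import stats

iop_primary  = np.array([14.0, 15.5, 13.0, 16.0, 14.5])  # mmHg, primary position (hypothetical)
iop_upgaze   = np.array([17.0, 18.5, 15.0, 19.0, 17.5])  # conventional 20-degree upgaze
iop_inclined = np.array([14.5, 15.0, 13.5, 16.5, 15.0])  # upgaze produced by head inclination

# Paired tests: does conventional upgaze raise IOP relative to primary position,
# and does measuring on the central cornea (head inclination) reduce that rise?
t1, p1 = stats.ttest_rel(iop_upgaze, iop_primary)
t2, p2 = stats.ttest_rel(iop_upgaze, iop_inclined)
print(f"upgaze vs primary position: t = {t1:.2f}, p = {p1:.4f}")
print(f"upgaze vs head inclination: t = {t2:.2f}, p = {p2:.4f}")
```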
abstract_id: PUBMED:3586534
Gaze-fixation tonometry or tonography in endocrine ophthalmopathy. In patients with Graves' ophthalmopathy there is often a rise in intraocular pressure (dIOP) on changing from straight-ahead gaze to upgaze. If dIOP is measured using the applanation tonometer of the slit lamp, errors occur in pressure recording. In order to check these errors, the authors compared tonometric with tonographic records of dIOP in 90 eyes of 45 patients. Tonographic values were found to be twice as high as tonometric values. Therefore, tonography furnishes more information about the activity of Graves' disease than tonometry; the measurement errors associated with tonometry are also avoided.
abstract_id: PUBMED:22622414
Diagnosis and differential diagnosis of Graves' orbitopathy in MRI. Imaging of Graves' orbitopathy (GO) includes radiological and nuclear medicine procedures. Depending on the method used, they provide information about the distribution and activity of the disease. Magnetic resonance imaging (MRI) is not only a helpful tool for making the diagnosis, but also enables differentiation of the active and inactive forms of GO on the basis of intramuscular edema. The modality is therefore appropriate for evaluating disease activity and the course of therapy. The disease leads to the typical enlargement of the muscle bodies of the extraocular muscles. The inferior rectus, medial rectus and levator palpebrae muscles are most often involved. Signal changes of the intraconal and extraconal fat tissue are possible, and a bilateral manifestation is common. The differential diagnosis includes inflammatory diseases and tumors, of which orbital pseudotumor (idiopathic, unspecific orbital inflammation), ocular myositis and orbital lymphoma are the most important. The specific patterns (localization, involvement of orbital structures and signal changes) can be differentiated by MRI.
abstract_id: PUBMED:36928323
The Isabel Differential Diagnosis Generator for Orbital Diagnosis. Purpose: The Isabel differential diagnosis generator is one of the most widely known electronic diagnosis decision support tools. The authors prospectively evaluated the utility of Isabel for orbital disease differential diagnosis.
Methods: The terms "proptosis," "lid retraction," "orbit inflammation," "orbit tumour," "orbit tumor, infiltrative" and "orbital tumor, well-circumscribed" were separately input into Isabel and the results were tabulated. Then the clinical details (patient age, gender, signs, symptoms, and imaging findings) of 25 orbital cases from a textbook of orbital surgery were entered into Isabel. The top 10 differential diagnoses generated by Isabel were compared with the correct diagnosis.
Results: Isabel identified hyperthyroidism and Graves ophthalmopathy as the leading causes of lid retraction, but many common causes of proptosis and orbital tumors were not correctly elucidated. Of the textbook cases, Isabel correctly identified 4/25 (16%) of orbital cases as one of its top 10 differential diagnoses, and the median rank of the correct diagnosis was 6/10. Thirty-two percent of the output diagnoses were unlikely to cause orbital disease.
Conclusion: Isabel is currently of limited value in the mainstream orbital differential diagnosis. The incorporation of anatomic localizations and imaging findings may help increase the accuracy of orbital diagnosis.
abstract_id: PUBMED:8719687
The differential diagnosis and classification of eyelid retraction. Purpose: Classification schemes are useful in the formulation of differential diagnoses. Thoughtful commentary has been devoted to the classification of blepharoptosis, but the causes of eyelid retraction have received less attention in published reports. Although eyelid retraction most frequently is associated with Graves' ophthalmopathy, numerous other entities may cause the sign. This study was undertaken to provide a more comprehensive differential diagnosis and classification of eyelid retraction.
Methods: A series of patients with eyelid retraction was studied, and pertinent published reports were reviewed.
Results: Forty-four patients with different causes for eyelid retraction are described. Normal thyroid function and regulation were confirmed in all patients in whom Graves' ophthalmopathy could not be excluded by clinical, biochemical, or historical criteria.
Conclusion: Based on a series of patients and reported cases, a differential diagnosis for eyelid retraction is proposed using a classification system comprising three categories (neurogenic, myogenic, and mechanistic).
abstract_id: PUBMED:8628549
The differential diagnosis and classification of eyelid retraction. Background: Classification schemes are useful in the formulation of differential diagnoses. Thoughtful commentary has been devoted to the classification of blepharoptosis, but the causes of eyelid retraction have received less attention in published reports. Although eyelid retraction most frequently is associated with Graves ophthalmopathy, numerous other entities may cause the sign. This study was undertaken to provide a more comprehensive differential diagnosis and classification of eyelid retraction.
Methods: A series of patients with eyelid retraction was studied, and pertinent published reports were reviewed.
Results: Forty-four patients with different causes for eyelid retraction are described. Normal thyroid function and regulation were confirmed in all patients in whom Graves ophthalmopathy could not be excluded by clinical, biochemical, or historical criteria.
Conclusion: Based on a series of patients and reported cases, a differential diagnosis for eyelid retraction is proposed using a classification system comprising three categories (neurogenic, myogenic, and mechanistic).
abstract_id: PUBMED:32202741
IgG4-associated disease in the differential diagnosis of inflammatory orbitopathy. IgG4-associated disease (IgG4-RD) is a systemic inflammatory disease characterized by tumor-like sclerosing masses in different organs. The differential diagnosis of orbital IgG4-RD includes a range of conditions, such as thyroid eye disease (TED), sarcoidosis, granulomatosis with polyangiitis (GPA), idiopathic orbital inflammation, lymphoproliferative diseases and others. A case of IgG4-RD with multi-organ involvement and a complicated differential diagnosis is presented. This case demonstrates a very uncommon manifestation of IgG4-RD in which the orbital involvement closely resembled TED. The systemic process went unrecognized for a long period of time, and the diagnosis of IgG4-RD was established only after biopsy of an abnormally enlarged lacrimal gland. The differential diagnosis included other systemic diseases, above all sarcoidosis, GPA, and lymphoma. The biopsy results met the diagnostic gold standard, i.e. more than 40% of plasma cells were IgG4 positive. This case demonstrates the necessity of orbital biopsy before starting immunosuppression in order to avoid an inappropriate treatment strategy.
abstract_id: PUBMED:38497384
Differential diagnosis of thyroid orbitopathy - diseases mimicking the presentation or activity of thyroid orbitopathy. Thyroid orbitopathy (TO) is the most common cause of orbital tissue inflammation, accounting for about 60% of all orbital inflammations. The inflammatory activity and severity of TO should be diagnosed based on personal experience and according to standard diagnostic criteria. Magnetic resonance imaging (MRI) of the orbit is used not only to identify swelling and to differentiate inflammatory active from non-active TO, but also to exclude other pathologies, such as orbital tumours or vascular lesions. However, a group of diseases can mimic the clinical manifestations of TO, leading to serious diagnostic difficulties, especially when the patient has previously been diagnosed with a thyroid disorder. Diagnostic problems can arise in cases of unilateral TO; unilateral or bilateral TO in patients with no previous or concomitant symptoms of thyroid disorders; absence of eyelid retraction; divergent strabismus; diplopia as the only symptom of the disease; and a history of diplopia that increases towards the end of the day. An apparent lack of efficacy of ongoing immunosuppressive treatment should also raise caution and prompt a differential diagnosis of TO. The differential diagnosis of TO and the evaluation of its activity include conditions leading to redness and/or swelling of the conjunctiva and/or eyelids, as well as other causes of ocular motility and eye alignment disorders. In this paper, the authors review the most common diseases that can mimic TO or distort the assessment of its inflammatory activity.
abstract_id: PUBMED:24398487
Intraocular pressure change with eye positions before and after orbital decompression for thyroid eye disease. Purpose: To examine intraocular pressure (IOP) changes in primary and upward gazes before and after orbital decompression in patients with thyroid eye disease.
Methods: Seventy-eight orbits of 40 patients who underwent orbital decompression between June 2010 and September 2012 were retrospectively reviewed. Subjects were divided in 2 groups according to the number of orbital walls removed: deep lateral orbital wall decompression group (Group A) or balanced decompression group (Group B). IOP was measured using Goldmann applanation tonometry in primary gaze and a 20° upward gaze before and 3 months after surgery.
Results: Preoperative IOP in upward gaze (18.7 mm Hg) was higher than in primary gaze (15.7 mm Hg, p < 0.001). Postoperative IOP reduction in upward gaze (3.8 mm Hg) was greater than in primary gaze (1.7 mm Hg, p < 0.001). Although the overall postoperative IOP in upward gaze (14.9 mm Hg) remained higher than in primary gaze (14.0 mm Hg, p = 0.038), the gaze-related IOP demonstrated no significant difference in all subgroups (Group A, p = 0.091; Group B, p = 0.332).
Conclusions: IOP in upward gaze was higher than in primary gaze before orbital decompression; postoperatively, the reduction in IOP was greater in upward gaze, so that upward-gaze IOP approximated primary-gaze IOP.
abstract_id: PUBMED:38086704
Comparison of Three Methods of Tonometry in Patients with Inactive Thyroid-Associated Orbitopathy. Introduction: Intraocular pressure (IOP) measurement in patients with thyroid-associated orbitopathy (TAO) can be difficult and misleading, particularly in patients with diplopia and eye deviation (esotropia or hypotropia). However, when measuring IOP, it is also necessary to pay sufficient attention to TAO patients without diplopia in primary gaze direction and without motility disorder that might not be readily apparent.
Purpose: The aim of this study was to evaluate the accuracy of measurement of intraocular pressure (IOP) using three different types of tonometers: the rebound tonometer (iCARE), the Goldmann applanation tonometer (GAT) and the non-contact airpuff tonometer (NCT) in patients with inactive TAO. Materials and Methods: A total of 98 eyes of 49 adult patients with TAO were examined. The study group included 36 females and 13 males, with an age range of 19-70 years and a median age of 55.0. All the patients had evidence of thyroid disease, a history of mild to moderate TAO, no clinical signs or symptoms of active disease, and no diplopia in direct gaze direction. In addition to a comprehensive eye examination, all the patients underwent measurement of intraocular pressure with three tonometers: NCT, iCARE, and GAT. The measurements with these three devices were compared.
Results: The mean IOP was 18.1 ± 2.4 mmHg (range 13-25 mmHg) with GAT, 22.3 ± 5.0 mmHg (range 13-35 mmHg) with NCT, and 18.0 ± 2.4 mmHg (range 13.3-26 mmHg) with iCARE. The mean difference between the GAT and iCARE measurements (using the Bland-Altman analysis) was -0.1 ± 1.16 mmHg (limits of agreement -2.4 to 2.1). The mean difference between the GAT and NCT measurements was 4.2 ± 3.6 mmHg (limits of agreement -2.8 to 11.2). The mean difference between the iCARE and NCT measurements was -4.3 ± 3.7 mmHg (limits of agreement -11.6 to 2.9). No significant difference was found between GAT and iCARE (p = 1.000). However, there was a significant difference between GAT and NCT (p < 0.0001), as well as between iCARE and NCT (p < 0.0001). Conclusions: In patients with TAO, NCT significantly overestimates IOP values compared to GAT and iCARE. By contrast, the iCARE rebound tonometer provides IOP measurements comparable to the gold standard GAT in these patients.
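A note on the Bland-Altman figures above (this uses the standard convention, which the abstract does not spell out): the limits of agreement are the mean difference ± 1.96 standard deviations of the paired differences, and applying that convention reproduces the reported intervals (values rounded):
GAT vs NCT: 4.2 ± 1.96 × 3.6 ≈ -2.9 to 11.3 mmHg (reported -2.8 to 11.2); GAT vs iCARE: -0.1 ± 1.96 × 1.16 ≈ -2.4 to 2.2 mmHg (reported -2.4 to 2.1).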
Answer: Gaze-dependent tonometry, which involves measuring intraocular pressure (IOP) in different gaze positions, has been considered in the differential diagnosis of Graves' ophthalmopathy (GO). However, the usefulness of this method is questionable based on the available evidence.
A study found that in both healthy volunteers and patients with GO, IOP increased significantly in conventional upgaze tonometry compared to the primary position. This increase was partially due to applanation on the peripheral cornea. When the head was inclined to allow applanation on the central cornea, the IOP measurements in upgaze were significantly lower and comparable to those in the primary position. The overlap of IOP readings between normal individuals and GO patients was considerable, even in patients with severely impaired ocular motility. This led to the conclusion that the discriminating power of IOP difference between upgaze and primary position is limited, and gaze-dependent tonometry may not be of relevant differential diagnostic value in patients with a clinically undetermined diagnosis (PUBMED:18751718).
Another study comparing tonometric with tonographic records of IOP changes in patients with Graves' ophthalmopathy found that tonographic values were twice as high as tonometric values. This suggests that tonography may provide more information about the activity of Graves' disease than tonometry and avoid the errors associated with tonometry (PUBMED:3586534).
Furthermore, other diagnostic methods such as MRI have been shown to be helpful in diagnosing GO and differentiating between active and inactive forms of the disease based on intramuscular edema (PUBMED:22622414). Additionally, the Isabel differential diagnosis generator, an electronic diagnostic decision support tool, was found to be of limited value in mainstream orbital differential diagnosis (PUBMED:36928323), underscoring that no single tool, including gaze-dependent tonometry, can be relied on in isolation for orbital diagnosis.
In conclusion, while gaze-dependent tonometry may show an increase in IOP in patients with GO, the method has limited diagnostic value due to the significant overlap with normal individuals and the potential for measurement errors. Other diagnostic tools and methods should be considered for a more accurate differential diagnosis of Graves' ophthalmopathy. |
Instruction: Community or patient preferences for cost-effectiveness of cardiac rehabilitation: does it matter?
Abstracts:
abstract_id: PUBMED:18800005
Community or patient preferences for cost-effectiveness of cardiac rehabilitation: does it matter? Background: Few healthcare economic evaluations, and none in cardiac rehabilitation, report results based on both community and patient preferences for health outcomes. We published the results of a randomized trial of cardiac rehabilitation after myocardial infarction in 1994 in which preferences were measured using both perspectives but only patient preferences were reported. This secondary analysis uses both types of preference measurements.
Methods: We collected community Quality of Well-Being (QWB) and patient Time Trade-off (TTO) preference scores from 188 patients (rehabilitation, n=93; usual care, n=95) on entry into the trial, at 2 months (end of the intervention) and again at 4, 8, and 12 months. Mean preference scores over the 12-month follow-up study period, estimates of quality-adjusted life years (QALYs) gained per patient, incremental cost-effectiveness ratios [costs inflated to 2006 US dollars] and probabilities of the cost-effectiveness of rehabilitation for costs per QALY up to USD100,000 are reported.
Results: Mean QWB preference scores were lower (P<0.01) than the corresponding mean TTO preference scores at each assessment point. The 12-month changes in mean QWB and TTO preference scores were large and positive (P<0.001), with rehabilitation patients gaining a mean of 0.011 (95% confidence interval, -0.030 to +0.052) more QWB-derived QALYs, and 0.040 (-0.026, 0.107) more TTO-derived QALYs, per patient than usual care patients. The incremental cost-effectiveness ratio for QWB-derived QALYs was estimated at $60,270/QALY (about €50,600/QALY) and at $16,580/QALY (about €13,900/QALY) with TTO-derived QALYs. With a willingness to spend $100,000/QALY, the probability of rehabilitation being cost-effective is 0.58 for QWB-derived QALYs and 0.83 for TTO-derived QALYs.
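As a point of clarification (a back-calculation from the figures quoted above, not numbers reported separately in the abstract), the incremental cost-effectiveness ratio is the incremental cost divided by the incremental QALYs, and both reported ratios are consistent with an incremental cost of roughly $660 per patient over the 12-month follow-up:
ICER = ΔCost / ΔQALYs
QWB: 0.011 QALYs × $60,270/QALY ≈ $660 per patient; TTO: 0.040 QALYs × $16,580/QALY ≈ $660 per patient.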
Conclusion: This secondary analysis of data from a randomized trial indicates that cardiac rehabilitation is cost-effective from a community perspective and highly cost-effective from the perspective of patients.
abstract_id: PUBMED:27786569
Individualized cost-effectiveness analysis of patient-centered care: a case series of hospitalized patient preferences departing from practice-based guidelines. Objective: To develop cases of preference-sensitive care and analyze the individualized cost-effectiveness of respecting patient preference compared to guidelines.
Methods: Four cases were analyzed comparing patient preference to guidelines: (a) high-risk cancer patient preferring to forgo colonoscopy; (b) decubitus patient preferring to forgo air-fluidized bed use; (c) anemic patient preferring to forgo transfusion; (d) end-of-life patient requesting all resuscitative measures. Decision trees were modeled to analyze cost-effectiveness of alternative treatments that respect preference compared to guidelines in USD per quality-adjusted life year (QALY) at a $100,000/QALY willingness-to-pay threshold from patient, provider and societal perspectives.
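A brief note on the decision rule implied by this design (standard cost-effectiveness conventions, not details reported by the authors): at a willingness-to-pay threshold λ, an alternative is considered cost-effective when its incremental cost-effectiveness ratio falls below λ, or equivalently when its incremental net monetary benefit is positive; an alternative is said to dominate when it is both less costly and at least as effective.
incremental net monetary benefit = λ × ΔQALY - ΔCost > 0, with λ = $100,000/QALY in this study.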
Results: Forgoing colonoscopy dominates colonoscopy from patient, provider, and societal perspectives. Forgoing transfusion and air-fluidized bed are cost-effective from all three perspectives. Palliative care is cost-effective from provider and societal perspectives, but not from the patient perspective.
Conclusion: Incorporating patient preferences within guidelines holds good value and should be prioritized when developing new guidelines.
abstract_id: PUBMED:26297374
Community IntraVenous Antibiotic Study (CIVAS): protocol for an evaluation of patient preferences for and cost-effectiveness of community intravenous antibiotic services. Introduction: Outpatient parenteral antimicrobial therapy (OPAT) is used to treat a wide range of infections, and is common practice in countries such as the USA and Australia. In the UK, national guidelines (standards of care) for OPAT services have been developed to act as a benchmark for clinical monitoring and quality. However, the availability of OPAT services in the UK is still patchy and until quite recently was available only in specialist centres. Over time, National Health Service (NHS) Trusts have developed OPAT services in response to local needs, which has resulted in different service configurations and models of care. However, there has been no robust examination comparing the cost-effectiveness of each service type, or any systematic examination of patient preferences for services on which to base any business case decision.
Methods And Analysis: The study will use a mixed methods approach, to evaluate patient preferences for and the cost-effectiveness of OPAT service models. The study includes seven NHS Trusts located in four counties. There are five inter-related work packages: a systematic review of the published research on the safety, efficacy and cost-effectiveness of intravenous antibiotic delivery services; a qualitative study to explore existing OPAT services and perceived barriers to future development; an economic model to estimate the comparative value of four different community intravenous antibiotic services; a discrete choice experiment to assess patient preferences for services, and an expert panel to agree which service models may constitute the optimal service model(s) of community intravenous antibiotics delivery.
Ethics And Dissemination: The study has been approved by the NRES Committee, South West-Frenchay using the Proportionate Review Service (ref 13/SW/0060). The results of the study will be disseminated at national and international conferences, and in international journals.
abstract_id: PUBMED:19757864
The role of patient preferences in cost-effectiveness analysis: a conflict of values? This paper reviews the role of patient preferences within the framework of cost-effectiveness analysis (CEA). CEA typically adopts a system-wide perspective by focusing upon efficiency across groups in the allocation of scarce healthcare resources, whereas treatment decisions are made over individuals. However, patient preferences have been shown to have a direct impact on the outcome of an intervention via psychological factors or indirectly via patient adherence/compliance rates. Patient values may also be in conflict with the results of CEA through the valuation of benefits. CEA relies heavily on the QALY model to reflect individual preferences, although the healthy year equivalent offers an alternative measure that may be better at taking individual preferences into account. However, both measures typically use mean general population or mean patient values and therefore create conflict with individual-level preferences. For CEA to reflect practice, it must take into account the impact of individual patient preferences even where general population preferences are used to value the benefits of interventions. Patient preferences have implications for cost effectiveness through costs and outcomes, and it is important that cost-effectiveness models incorporate these through its structure (e.g. allowing for differing compliance rates) and parameter values, including clinical effectiveness. It will also be necessary to try to predict patient preferences in order to estimate any impact on cost effectiveness through analyses of revealed and stated preference data. It is recognized that policy makers are concerned with making interventions available to patients and not forcing them to consume healthcare. One way of moving towards this would be to adopt a two-part decision process: the identification of the most cost-effective therapy using mean general population values (i.e. the current rule), then also making available those treatments that are cheaper than the most cost-effective therapy.
abstract_id: PUBMED:38322307
The Effectiveness and Cost-Effectiveness of Community Diagnostic Centres: A Rapid Review. Objectives: To examine the effectiveness of community diagnostic centres as a potential solution to increasing capacity and reducing pressure on secondary care in the UK. Methods: A comprehensive search for relevant primary studies was conducted in a range of electronic sources in August 2022. Screening and critical appraisal were undertaken by two independent reviewers. There were no geographical restrictions or limits to year of publication. A narrative synthesis approach was used to analyse data and present findings. Results: Twenty primary studies evaluating twelve individual diagnostic centres were included. Most studies were specific to cancer diagnosis and evaluated diagnostic centres located within hospitals. The evidence of effectiveness appeared mixed. There is evidence to suggest diagnostic centres can reduce various waiting times and reduce pressure on secondary care. However, cost-effectiveness may depend on whether the diagnostic centre is running at full capacity. Most included studies used weak methodologies that may be inadequate to infer effectiveness. Conclusion: Further well-designed, quality research is needed to better understand the effectiveness and cost-effectiveness of community diagnostic centres.
abstract_id: PUBMED:29387175
Patient preferences for types of community-based cardiac rehabilitation programme. Introduction: Cardiac rehabilitation (CR) improves mortality, morbidity and quality of life of cardiovascular patients. However, its uptake is poor especially in the hospitals due to long travel distances and office hours constraints. Community-based CR is a possible solution.
Objectives: To understand the type of community-based CR preferred and identify patient characteristics associated with certain programme combinations.
Methods: A cross-sectional survey was administered to a randomised list of patients at risk for or with cardiovascular diseases at two community-based CR centres. Participants were presented with nine hypothetical choice sets and asked to choose only one of the two alternative programme combinations in each choice set. Attributes include support group presence, cash incentives, upfront deposit and out-of-pocket cost. The counts for each combination were tallied and corrected for repeats. Chi-square test and logistic regression were performed to understand the characteristics associated with the preferred CR combination.
Results: After correcting for repeats, most patients (85.2%) prefer CR programme combinations with new group activities, a support group, cash rewards, a deposit and out-of-pocket cost, and little exercise equipment but with a physiotherapist present and no need for monitoring equipment. Patients with more than three bedrooms in their house are less likely (OR 0.367; CI 0.17 to 0.80; P=0.011) to choose the option with no physiotherapist and little equipment available.
Conclusion: This is the first study to explore patients' preferences for different types of community CR. Higher income patients prefer physiotherapist presence and are willing to settle for less equipment. Our study serves as a guide for designing future community-based CR programmes.
abstract_id: PUBMED:7808213
A method for estimating the cost-effectiveness of incorporating patient preferences into practice guidelines. Many clinical practice guidelines fail to account for the preferences of the individual patient. Approaches that seek to include the preferences of the individual patient in the decision-making process (e.g., interactive videodisks for patient education), however, may incur substantial incremental costs. Developers of clinical practice guidelines must therefore determine whether it is appropriate to make their guidelines flexible with regard to patient preferences. The authors present a formal method for determining the cost-effectiveness of incorporating the preferences of individual patients into clinical practice guidelines. Based on utilities assessed from 37 patients, they apply the method in the setting of mild hypertension. In this example, they estimate that the cost-effectiveness ratio for individualized utility assessment is $48,565 per quality-adjusted year of life, a ratio that compares favorably with other health interventions that are promoted actively. This approach, which can be applied to any clinical domain, offers a formal method for determining whether the incorporation of individual patient preferences is important clinically and is justified economically.
abstract_id: PUBMED:37208666
Hybrid cardiac telerehabilitation for coronary artery disease in Australia: a cost-effectiveness analysis. Background: Traditional cardiac rehabilitation programs are centre-based and clinically supervised, with their safety and effectiveness well established. Notwithstanding the established benefits, cardiac rehabilitation remains underutilised. A possible alternative would be a hybrid approach where both centre-based and tele-based methods are combined to deliver cardiac rehabilitation to eligible patients. The objective of this study was to determine the long-term cost-effectiveness of a hybrid cardiac telerehabilitation and if it should be recommended to be implemented in the Australian context.
Methods: Following a comprehensive literature search, we chose the Telerehab III trial intervention that investigated the effectiveness of a long-term hybrid cardiac telerehabilitation program. We developed a decision analytic model to estimate the cost-effectiveness of the Telerehab III trial using a Markov process. The model included stable cardiac disease and hospitalisation health states and simulations were run using one-month cycles over a five-year time horizon. The threshold for cost-effectiveness was set at $AU 28,000 per quality-adjusted life-year (QALY). For the base analysis, we assumed that 80% completed the programme. We tested the robustness of the results using probabilistic sensitivity and scenario analyses.
Results: Telerehab III intervention was more effective but more costly and was not cost-effective, at a threshold of $28,000 per QALY. For every 1,000 patients who undergo cardiac rehabilitation, employing the telerehabilitation intervention would cost $650,000 more, and 5.7 QALYs would be gained, over five years, compared to current practice. Under probabilistic sensitivity analysis, the intervention was cost-effective in only 18% of simulations. Similarly, if the intervention compliance was increased to 90%, it was still unlikely to be cost-effective.
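These figures imply, by simple division of the numbers above (the abstract does not quote the ratio directly), an incremental cost-effectiveness ratio well above the stated threshold:
ICER ≈ $650,000 / 5.7 QALYs ≈ $114,000 per QALY, compared with the AU$28,000 per QALY willingness-to-pay threshold.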
Conclusion: Hybrid cardiac telerehabilitation is highly unlikely to be cost-effective compared to the current practice in Australia. Exploration of alternative models of delivering cardiac telerehabilitation is still required. The results presented in this study are useful for policymakers wanting to make informed decisions about investment in hybrid cardiac telerehabilitation programs.
abstract_id: PUBMED:26208507
Cardiac rehabilitation in low- and middle-income countries: a review on cost and cost-effectiveness. Background: By 2030, more than 80% of cardiovascular disease-related deaths and disability-adjusted life years will occur in the 139 low- and middle-income (LMIC) countries. Cardiac rehabilitation (CR) has been demonstrated to be effective and cost-effective mainly based on data from high-income countries. The purpose of this paper was to review the literature for cost and cost-effectiveness data on CR in LMICs.
Methods: MEDLINE (Ovid) and EMBASE (Ovid) electronic databases were searched for CR 'cost' and 'cost-effectiveness' data in LMICs.
Results: Five CR publications with cost and cost-effectiveness data from middle-income countries were identified with none from low-income countries. Studies from Brazil demonstrated mean monthly savings of US$190 for CR, with a US$48 increase in a control group with mean costs of US$503 for a 3-month CR program. Mean costs to the public health care system of US$360 and US$540 when paid out-of-pocket were reported for a 3-month CR program in seven Latin American middle-income countries. Cardiac rehabilitation is reported to be cost-effective in both Brazil and Colombia.
Conclusions: Cardiac rehabilitation for patients with heart failure in Brazil and Colombia was estimated to be cost-effective. However, given the limited health care budgets in many LMICs, affordable CR models will need to be developed for LMICs, particularly for low-income countries.
abstract_id: PUBMED:37487337
Cost effectiveness and long-term outcomes of dexamethasone administration in major non-cardiac surgery. Study Objectives: Postoperative administration of dexamethasone has been proposed to reduce morbidity and mortality in patients undergoing major non-cardiac surgery. In this ancillary study of the PACMAN trial, we aimed to evaluate the cost effectiveness of dexamethasone in patients undergoing major non-cardiac surgery.
Methods: Patients included in the multicentric randomized double-blind, placebo-controlled PACMAN trial were followed up for 12 months after their surgical procedure. Patients were randomized to receive either dexamethasone (0.2 mg/kg immediately after the surgical procedure, and on day 1) or placebo. Cost effectiveness between the dexamethasone and placebo groups was assessed for the 12-month postoperative period from a health payer perspective.
Results: Of 1222 randomized patients in PACMAN, 137 patients (11%) were followed up until 12 months after major surgery (71 in the DXM group and 66 in the placebo group). Postoperative dexamethasone administration reduced costs per patient at 1 year by €358.06 (95%CI -€1519.99 to €803.87). The probability of dexamethasone being cost effective was between 12% and 22% for a willingness to pay of €100,000 to €150,000 per life-year, which is the threshold that is usually used in France and was 52% for willingness to pay of €50,000 per life-year (threshold in USA). At 12 months, 9 patients (13.2%) in the DXM group and 10 patients (16.1%) in the placebo group had died. In conclusion, our study does not demonstrate the cost effectiveness of perioperative administration of DXM in major non-cardiac surgery.
Answer: Community and patient preferences do indeed matter when considering the cost-effectiveness of cardiac rehabilitation. A secondary analysis of data from a randomized trial indicated that cardiac rehabilitation is cost-effective from a community perspective and highly cost-effective from the perspective of patients (PUBMED:18800005). This study found that rehabilitation patients gained more quality-adjusted life years (QALYs) than usual care patients, with the incremental cost-effectiveness ratio being markedly more favorable when outcomes were valued using patient Time Trade-off (TTO) preferences ($16,580/QALY) than when they were valued using community Quality of Well-Being (QWB) scores ($60,270/QALY).
Moreover, individualized cost-effectiveness analysis that respects patient preferences compared to guidelines has shown that prioritizing the incorporation of patient preferences within guidelines holds good value and should be prioritized when developing new guidelines (PUBMED:27786569). This suggests that patient preferences can significantly influence the perceived value and cost-effectiveness of healthcare interventions.
Additionally, the role of patient preferences in cost-effectiveness analysis is recognized as important, as these preferences can directly impact the outcome of an intervention through psychological factors or indirectly via patient adherence/compliance rates (PUBMED:19757864). This highlights the need for cost-effectiveness models to incorporate patient preferences to better reflect practice.
Furthermore, a study on community-based cardiac rehabilitation programs found that patients have specific preferences for the type of program they would like to participate in, which can influence their willingness to engage with such programs (PUBMED:29387175). This indicates that understanding and catering to patient preferences can be crucial for the success and cost-effectiveness of community-based interventions.
In summary, both community and patient preferences play a significant role in determining the cost-effectiveness of cardiac rehabilitation. Accounting for these preferences can lead to more favorable cost-effectiveness ratios and better alignment with patient values and needs, ultimately influencing the uptake and success of rehabilitation programs. |
Instruction: Can chronic multimorbidity explain the age-related differences in strength, speed and balance in older adults?
Abstracts:
abstract_id: PUBMED:22961066
Can chronic multimorbidity explain the age-related differences in strength, speed and balance in older adults? Background And Aims: It is known that physical performance declines with age in general, however there remains much to be understood in terms of age-related differences amongst older adults across a variety of physical components (such as speed, strength and balance), and particularly in terms of the role played by multimorbidity of chronic diseases. We aimed to detect the age-related differences across four components of physical performance and to explore to what extent chronic diseases and multimorbidity may explain such differences.
Methods: We analyzed cross-sectional data from a population-based sample of 3323 people, aged 60 years and older from the SNAC-K study, Stockholm, Sweden. Physical performance was assessed by trained nurses using several tests (grip strength, walking speed, balance and chair stands). Clinical diagnoses were made by the examining physician based on clinical history and examination.
Results: Censored normal regression analyses showed that the 72-90+ year-old persons had 17-40% worse grip strength, 44-86% worse balance, 30-86% worse chair stand score, and 21-59% worse walking speed, compared with the 60-66 year-old persons. Chronic diseases were strongly associated with physical impairment, and this association was particularly strong among the younger men. However, chronic diseases explained only some of the age-related differences in physical performance. When controlling for chronic diseases in the analyses, the age-related differences in physical performance changed by only 1-11%.
Conclusion: In spite of the strong association between multimorbidity and physical impairment, chronic morbidities explained only a small part of the age-related differences in physical performance.
abstract_id: PUBMED:32599778
Associations between Multimorbidity and Physical Performance in Older Chinese Adults. Background: Evidence on the association between physical performance and multimorbidity is scarce in Asia. This study aimed to identify multimorbidity patterns and their association with physical performance among older Chinese adults. Methods: Individuals aged ≥60 years from the China Health and Retirement Longitudinal Study 2011-2015 (N = 10,112) were included. Physical performance was measured by maximum grip strength (kg) and average gait speed (m/s) categorized as fast (>0.8 m/s), median (>0.6-0.8 m/s), and slow (≤0.6 m/s). Multimorbidity patterns were explored using exploratory factor analysis. Generalized estimating equation analyses were conducted. Results: Four multimorbidity patterns were identified: cardio-metabolic, respiratory, mental-sensory, and visceral-arthritic. An increased number of chronic conditions was associated with decreased normalized grip strength (NGS). Additionally, the highest quartile of factor scores for cardio-metabolic (β = -0.06; 95% confidence interval (CI) = -0.07, -0.05), respiratory (β = -0.03; 95% CI = -0.05, -0.02), mental-sensory (β = -0.04; 95% CI = -0.05, -0.03), and visceral-arthritic (β = -0.04; 95% CI = -0.05, -0.02) patterns were associated with lower NGS compared with the lowest quartile. Participants with ≥4 chronic conditions were 2.06 times more likely to have a slow gait speed. Furthermore, the odds ratios for the highest quartile of factor scores of the four patterns with slow gait speed, compared with the lowest quartile, ranged from 1.26 to 2.01. Conclusion: Multimorbidity was related to worse physical performance, and multimorbidity patterns were differentially associated with physical performance. A shift of focus from single conditions to the requirements of a complex multimorbid population is needed in research, clinical guidelines, and health-care services. Grip strength and gait speed could be routinely measured to assess physical performance among older adults with multimorbidity, especially mental-sensory disorders, in clinical settings.
abstract_id: PUBMED:29076226
Effect of multimorbidity on gait speed in well-functioning older people: A population-based study in Peru. Aim: To determine the association between multimorbidity and gait speed in a population-based sample of older people without functional dependency.
Methods: Data were obtained from a previously made cross-sectional population-based study of individuals aged >60 years carried out in San Martin de Porres, the second most populous district in Lima, Peru. We included well-functioning, independent older people. Exclusion criteria emphasized removing conditions that would impair gait. The exposure of interest was non-communicable chronic disease multimorbidity, and the outcome was gait speed determined by the time required for the participant to walk a distance of 8 m out of a total distance of 10 m. Generalized linear models were used to estimate adjusted gait speed by multimorbidity status.
Results: Data from 265 older adults with a median age of 68 years (IQR 63-75 years) and 54% women were analyzed. The median gait speed was 1.06 m/s (SD 0.27) and the mean number of chronic conditions per adult was 1.1 (SD ±1). The difference in mean gait speed between older adults without a chronic condition and those with ≥3 chronic conditions was 0.24 m/s. In crude models, coefficients decreased by a significant exponential factor for every increase in the number of chronic conditions. Further adjustment attenuated these estimates.
Conclusions: Slower speed gaits are observed across the spectrum of multimorbidity in older adults without functional dependency. The role of gait speed as a simple indicator to evaluate and monitor general health status in older populations is expanded to include older adults without dependency. Geriatr Gerontol Int 2018; 18: 293-300.
abstract_id: PUBMED:36632782
Comparison of gait speed, dynamic balance, and dual-task balance performance according to kinesiophobia level in older adults. Purpose: The presence of kinesiophobia was identified in older adults. Studies have examined the effects of kinesiophobia in older adults with chronic pain. Studies examining the effect of kinesiophobia on gait and balance performance in older adults without pain are insufficient. The aim of this study was to compare gait speed, dynamic balance, dual-task balance performance according to kinesiophobia level in community dwelling older adults without pain.
Materials And Methods: Seventy-five older adults were included. Socio-demographic data (age, height, weight, fall history, etc.) were recorded. Older adults were divided into two groups based on Tampa Scale of Kinesiophobia scores: scores below 37 were grouped as low level and scores above 37 as high level. The Mini-Mental State Examination (MMSE), gait speed test, modified Four Square Step Test (mFSST) and Five Times Sit-to-Stand Test were applied, and the dual-mFSST (with an additional cognitive or motor task) was used to assess dual-task balance performance.
Results: Thirty-six participants (mean age 70.58 ± 5.59 years) had low kinesiophobia and the other 39 individuals (mean age 70.94 ± 7.45 years) had high kinesiophobia. Age, gender, body mass index, cognitive status, and fall history were similar between groups (p > 0.05). Participants with low kinesiophobia were found to have better gait speed, dynamic balance, and dual-task balance performance (p < 0.001).
Conclusion: This study results showed that the presence of high level of kinesiophobia affects gait speed, dynamic balance, dual-task balance performance, and dual-task cost in older adults. Therefore, a high level of kinesiophobia can lead to falls. It may be important to investigate the effects of kinesiophobia in older adults.
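The dual-task cost mentioned in this conclusion is conventionally the relative performance decrement under dual-task conditions; for a timed test such as the mFSST it is usually expressed as follows (a standard definition rather than a formula given in the abstract):
dual-task cost (%) = (dual-task time - single-task time) / single-task time × 100.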
abstract_id: PUBMED:37661607
Associations of cardiometabolic multimorbidity with grip strength and gait speed among older Chinese adults. Objective: To investigate the associations of cardiometabolic multimorbidity (CMM) with grip strength and gait speed among older Chinese adults. Methods: This study included participants aged ≥60 years from the China Health and Retirement Longitudinal Survey during 2011-2015. Generalized estimating equation models were employed to estimate the associations of CMM with grip strength and gait speed. Results: A total of 6 357 participants were included for the measurement of grip strength and 6 250 participants for the measurement of gait speed. Compared with no cardiometabolic disease, participants with 1 (β=-0.018, 95% CI: -0.026 to -0.010), 2 (β=-0.029, 95% CI: -0.041 to -0.018), and ≥3 (β=-0.050, 95% CI: -0.063 to -0.037) cardiometabolic diseases had decreased grip strength. The associations between cardiometabolic disease counts (1: β=-0.052, 95% CI: -0.326 to 0.222; 2: β=-0.083, 95% CI: -0.506 to 0.340; ≥3: β=-0.186, 95% CI: -0.730 to 0.358) and gait speed were not statistically significant. The predicted gait speeds of participants with 0, 1, 2, and ≥3 cardiometabolic diseases were 1.98 (95% CI: 1.38-2.58), 1.93 (95% CI: 1.34-2.51), 1.89 (95% CI: 1.18-2.61), and 1.79 (95% CI: 1.10-2.48) m/s, respectively, and the magnitude of the decrease was clinically meaningful. Cardiometabolic combinations carrying a higher risk of decreased grip strength and gait speed were mainly those involving diabetes. Conclusions: Cardiometabolic disease counts and combinations were associated with grip strength and gait speed. Grip strength and gait speed can be used to measure CMM severity.
abstract_id: PUBMED:38353890
Multimorbidity patterns and health-related quality of life among community-dwelling older adults: evidence from a rural town in Suzhou, China. Purpose: The high prevalence of multimorbidity in aging societies has posed tremendous challenges to the healthcare system. The aim of our study was to comprehensively assess the association of multimorbidity patterns and health-related quality of life (HRQOL) among rural Chinese older adults.
Methods: This was a cross-sectional study. Data from 4,579 community-dwelling older adults aged 60 years and above was collected by the clinical examination and questionnaire survey. Information on 10 chronic conditions was collected and the 3-Level EQ-5D (EQ-5D-3L) was adopted to measure the HRQOL of older adults. An exploratory factor analysis was performed to determine multimorbidity patterns. Regression models were fitted to explore the associations of multimorbidity patterns with specific health dimensions and overall HRQOL.
Results: A total of 2,503 (54.7%) participants suffered from multimorbidity, and they reported lower HRQOL compared to those without multimorbidity. Three kinds of multimorbidity patterns were identified including cardiovascular-metabolic diseases, psycho-cognitive diseases and organic diseases. The associations between psycho-cognitive diseases/organic diseases and overall HRQOL assessed by EQ-5D-3L index score were found to be significant (β = - 0.097, 95% CI - 0.110, - 0.084; β = - 0.030, 95% CI - 0.038, - 0.021, respectively), and psycho-cognitive diseases affected more health dimensions. The impact of cardiovascular-metabolic diseases on HRQOL was largely non-significant.
Conclusion: Multimorbidity was negatively associated with HRQOL among older adults from rural China. The presence of the psycho-cognitive diseases pattern or the organic diseases pattern contributed to worse HRQOL. The remarkable negative impact of psycho-cognitive diseases on HRQOL necessitates more attention and relevant medical assistance for older rural adults.
abstract_id: PUBMED:33829701
Longitudinal Study of the Association Between Handgrip Strength and Chronic Disease Multimorbidity among Middle-aged and Older Adults. Objective: To investigate the potential association between multimorbidity and the handgrip strength of middle-aged and older adults.
Methods: The baseline (2011) and second-round follow-up (2015) data of the China Health and Retirement Longitudinal Study (CHARLS) were used. Adults aged ≥40 years were selected as the subjects of the study. Variables incorporated in the study included handgrip strength, chronic disease prevalence, demographic variables, and health behavior variables. Generalized estimating equations were used to analyze the longitudinal association between handgrip strength and multimorbidity.
Results: A total of 28 368 middle-aged and older adults were included in the baseline and follow-up samples, with an average age of 59.1 ± 9.7 years; the oldest was 96 and the youngest 40 years old. Among them, 6 239 were male, accounting for 47.3%. In the second-round follow-up, 9 186 baseline respondents and 5 994 new respondents were covered, reaching a total of 15 180 respondents. Compared with the baseline, a higher proportion of the second-round follow-up respondents were female (P=0.033) and were older (P<0.001). From the baseline to the second-round follow-up, Q1, the lowest grip strength category, increased from 23.4% to 26.6%, while Q4, the highest grip strength category, decreased from 26.5% to 21.2%. The prevalence of having more than three chronic diseases increased from 18.2% to 24.2%, and the prevalence of having more than five chronic diseases increased from 3.3% to 6.2%. After adjusting for confounding variables, the interaction terms of handgrip strength and time were statistically significant. After stratification by gender, the interaction terms of male handgrip strength and follow-up time were statistically significant in both models (P<0.05). The marginal effect plot of the interaction term showed that the multimorbidity prevalence of respondents with lower handgrip levels grew faster with age. Individual effect analysis showed that the correlation between handgrip strength and multimorbidity was not statistically significant at baseline, but the follow-up four years later showed a statistically significant correlation between handgrip strength and multimorbidity.
Conclusion: Respondents with lower baseline handgrip strength had an increasingly higher risk of multimorbidity over time. Handgrip strength can be used as an effective screening tool for middle-aged and older adults in China to identify those at higher risk of chronic disease multimorbidity.
abstract_id: PUBMED:37915295
Association of possible sarcopenia with major chronic diseases and multimorbidity among middle-aged and older adults: Findings from a national cross-sectional study in China. Aim: This study investigated the prevalence of possible sarcopenia (PSA) in a large sample of middle-aged and older adults, and determined the association between PSA, major chronic diseases and the number of chronic diseases.
Methods: A total of 14 917 adults aged ≥40 years were included in the analysis. The handgrip strength and the five-time chair stand test were used to assess PSA. The participants' major chronic diseases were divided into 14 categories. Four categories were created based on the participants' number of chronic illnesses: 0, 1, 2 and ≥3.
Results: The present study found an overall prevalence of PSA of 23.6% among Chinese middle-aged and older adults aged ≥40 years, with the risk increasing with advancing age. PSA was significantly associated with most categories of chronic diseases and with multimorbidity. The closest independent associations were observed for stroke; emotional, nervous or psychiatric problems; chronic lung disease; asthma; heart disease; hypertension; and arthritis or rheumatism. Compared with participants with 0 chronic diseases, those with two or more chronic diseases had higher odds of PSA. However, the association between PSA and the number of chronic diseases varied across sex and age groups.
Conclusions: The findings suggest that PSA is associated with major chronic diseases among middle-aged and older adults. People with two or more chronic diseases have a greater likelihood of PSA compared with those without chronic diseases, and the association between PSA and the number of chronic diseases largely depended on sex and age. Geriatr Gerontol Int 2023; 23: 925-931.
abstract_id: PUBMED:34299990
Relationship of Multimorbidity, Obesity Status, and Grip Strength among Older Adults in Taiwan. Background: The combination of multiple disease statuses, muscle weakness, and sarcopenia among older adults is an important public health concern and a health burden worldwide. This study evaluates the association between chronic disease statuses, obesity, and grip strength (GS) among older adults in Taiwan. Methods: A community-based survey was conducted every 3 years among older adults over age 65 living in Chiayi County, Taiwan. Demographic data and several disease statuses, such as diabetes mellitus, hypertension, cerebrovascular disease, cardiovascular disease, and certain cancers, were collected using a questionnaire. Anthropometric characteristics were measured using standard methods. Grip strength was measured using a digital dynamometer (TKK5101). Results: A total of 3739 older individuals were recruited (1600 males and 2139 females) with a mean age of 72.9 years. The mean GS was 32.8 ± 7.1 kg for males and 21.6 ± 4.8 kg for females. GS decreased most significantly in males with cerebrovascular disease (from 33.0 to 29.5 kg, p < 0.001) and in females with diabetes mellitus (from 21.8 to 21.0 kg, p < 0.01). GS was highest in older adults with obesity (body mass index ≥ 27 kg/m2); however, there was no significant change of GS as the disease number increased. Conclusion: Older adults who have two chronic diseases, rather than one or more than three, have significantly lower GS than those who are healthy. Stroke and CKD for males, and hypertension and diabetes for females, are important chronic diseases that are significantly associated with GS. Furthermore, being overweight may be a protective factor for GS in older adults of both sexes.
abstract_id: PUBMED:27068303
Are there different factors affecting walking speed and gait cycle variability between men and women in community-dwelling older adults? Background: Few studies have examined the relationships between walking speed and gait cycle variability, and muscle strength and postural stability, with a focus on gender differences.
Aim: The aim of this study was to examine whether there are different factors affecting walking speed and gait cycle variability between men and women in community-dwelling older adults.
Methods: The subjects comprised 712 community-dwelling older adults (252 men, 460 women, aged 68.7 ± 4.8 years). Walking speed and coefficient of variation (CV) of step time at a comfortable walking pace were measured. The maximal isometric strength of six lower limb muscles and postural stability were evaluated. Stepwise regression analysis was performed, using lower limb muscle strength and postural stability as independent variables, to investigate the association with walking speed or CV.
Results: For older men, age, body mass index (BMI) and quadriceps setting (QS) strength were significant and independent determinants of walking speed. No variables were identified as significant determinants of CV. For older women, BMI and hip flexion, hip abduction, QS muscle strength were significant determinants of walking speed. Only hip abduction strength was a significant determinant of CV.
Discussion: The results of this study suggest that QS strength is related to walking speed in both men and women, whereas in older women hip flexion and hip abduction muscle strength are also related to walking speed, and hip abduction muscle strength is related to gait cycle variability.
Conclusion: Gender differences exist in factors affecting walking speed and gait cycle variability in community-dwelling older adults.
Answer: Chronic multimorbidity is strongly associated with physical impairment in older adults, but it explains only a small part of the age-related differences in physical performance such as strength, speed, and balance. Studies have shown that older individuals with multiple chronic conditions exhibit worse physical performance compared to those with fewer or no chronic conditions. For instance, older persons aged 72-90+ years were found to have significantly worse grip strength, balance, chair stand score, and walking speed compared to those aged 60-66 years, and chronic diseases were strongly associated with these impairments, particularly among younger men. However, when controlling for chronic diseases, the age-related differences in physical performance changed only marginally by 1-11% (PUBMED:22961066).
In a study of older Chinese adults, multimorbidity patterns were identified, and an increased number of chronic conditions was associated with decreased normalized grip strength and a higher likelihood of having a slow gait speed. The study suggested that multimorbidity was related to worse physical performance and that different multimorbidity patterns were differentially associated with physical performance (PUBMED:32599778).
Similarly, in Peru, slower gait speeds were observed across the spectrum of multimorbidity in older adults without functional dependency, indicating that multimorbidity can affect gait speed even in well-functioning older individuals (PUBMED:29076226).
Moreover, the presence of kinesiophobia, or fear of movement, was found to affect gait speed, dynamic balance, and dual-task balance performance in older adults, suggesting that psychological factors can also influence physical performance (PUBMED:36632782).
In summary, while chronic multimorbidity is associated with declines in strength, speed, and balance in older adults, it does not fully explain the age-related differences in these physical performance components. Other factors, including psychological aspects like kinesiophobia, may also play a role. Therefore, age-related declines in physical performance are likely due to a combination of factors, including but not limited to multimorbidity (PUBMED:22961066; PUBMED:32599778; PUBMED:29076226; PUBMED:36632782). |
Instruction: A physiologic clinical study of achalasia: should Dor fundoplication be added to Heller myotomy?
Abstracts:
abstract_id: PUBMED:25610181
Fundoplication after heller myotomy: a retrospective comparison between nissen and dor. Objective: A retrospective comparison between Nissen and Dor fundoplication after laparoscopic Heller myotomy for achalasia.
Materials And Methods: From 1998 to 2004 a first group of 48 patients underwent Heller myotomy and Nissen fundoplication for idiopathic achalasia (H+N group). From 2004 to 2010 a second group of 40 patients underwent Heller myotomy followed by Dor fundoplication (H+D group). Some patients received a previous endoscopic treatment with pneumatic dilatation or endoscopic injection of botulinum toxin that provided them only a temporary clinical benefit. Changes in clinical and instrumental examinations from before to after surgery were evaluated in all patients. Clinical evaluation was carried out using a modified DeMeester symptom score system.
Results: Dor fundoplication treatment reduced both dysphagia and regurgitation severity scores significantly more than Nissen fundoplication (p<0.0001). Indeed, the incidence of dysphagia was significantly higher in patients treated with floppy-Nissen than in those treated with Dor fundoplication: by defining dysphagia as a DeMeester score equal to 3 (arbitrary cut-off), at the end of follow-up dysphagia occurred in 17.65% and 0% (p=0.037) of patients belonging to the H+N and H+D groups, respectively.
Conclusion: Heller myotomy followed by Dor fundoplication is a safe and valuable treatment. The procedure showed a lower incidence of postoperative dysphagia versus Nissen fundoplication and a negligible incidence of postoperative GERD in a long-term postoperative follow-up.
abstract_id: PUBMED:34803368
To Wrap or Not to Wrap After Heller Myotomy. Background And Objectives: The primary aim of this study is to assess the necessity of fundoplication for reflux in patients undergoing Heller myotomy for achalasia. The secondary aim is to assess the safety of the robotic approach to Heller myotomy.
Methods: This is a single institution, retrospective analysis of 61 patients who underwent robotic Heller myotomy with or without fundoplication over a 4-year period (January 1, 2015 - December 31, 2019). Symptoms were evaluated using pre-operative and postoperative Eckardt scores at < 2 weeks (short-term) and 4 - 55 months (long-term) postoperatively. Incidence of gastroesophageal reflux and use of antacids postoperatively were assessed. Long-term patient satisfaction and quality of life (QOL) were assessed with a phone survey. Finally, the perioperative safety profile of robotic Heller myotomy was evaluated.
Results: The long-term average Eckardt score in patients undergoing Heller myotomy without fundoplication was notably lower than in patients with a fundoplication (0.72 vs 2.44). Gastroesophageal reflux rates were lower in patients without a fundoplication (16.0% vs 33.3%). Additionally, dysphagia rates were lower in patients without a fundoplication (32.0% vs 44.4%). Only 34.8% (8/25) of patients without fundoplication continued antacid use in the long term. There were no mortalities, and there was a 4.2% complication rate with two delayed leaks.
Conclusion: Robotic Heller myotomy without fundoplication is safe and effective for achalasia. The rate of reflux symptoms and overall Eckardt scores were low postoperatively. Great patient satisfaction and QOL were observed in the long term. Our results suggest that fundoplication is unnecessary when performing Heller myotomy.
abstract_id: PUBMED:32392447
Laparoscopic Heller Myotomy and Toupet Fundoplication for Achalasia. Achalasia manifests as failure of relaxation of the lower esophageal sphincter resulting in dysphagia. Although there are several medical and endoscopic treatment options, laparoscopic Heller myotomy has excellent short- and long-term outcomes. This article describes in detail our surgical approach to this operation. Key steps include extensive esophageal mobilization, division of the short gastric vessels, mobilization of the anterior vagus nerve, an extended gastric myotomy (3 cm as opposed to the conventional 1-2 cm gastric myotomy), a minimum 6 cm esophageal myotomy through circular and longitudinal muscle layers, and a Toupet partial fundoplication. We routinely use intraoperative endoscopy both to check for inadvertent full-thickness injury and to assess completeness of the myotomy and the geometry of the anti-reflux wrap.
abstract_id: PUBMED:37308447
Laparoscopic Heller myotomy with Dor fundoplication for esophageal achalasia treatment after distal gastrectomy and Billroth-II reconstruction. Laparoscopic Heller myotomy with Dor fundoplication is the standard surgical treatment for esophageal achalasia. However, there are few reports on the use of this method after gastric surgery. We report a case of a 78-year-old man who underwent laparoscopic Heller myotomy with Dor fundoplication for achalasia after distal gastrectomy and Billroth-II reconstruction. After the intraabdominal adhesion was sharply dissected using an ultrasonic coagulation incision device (UCID), Heller myotomy was performed 5 cm above and 2 cm below the esophagogastric junction using the UCID. To prevent postoperative gastroesophageal reflux (GER), Dor fundoplication was performed without cutting the short gastric artery and vein. The postoperative course was uneventful, and the patient is in good health without symptoms of dysphagia or GER. Although per-oral endoscopic myotomy is becoming the mainstay of treatment for achalasia after gastric surgery, laparoscopic Heller myotomy with Dor fundoplication is also an effective strategy.
abstract_id: PUBMED:34154022
Laparoscopic Heller Myotomy in the Treatment of Achalasia. Background: Achalasia refers to a primary oesophageal motility disorder characterised by the absence of peristalsis and incomplete or complete lack of relaxation of the lower oesophageal sphincter. The cardinal symptom is dysphagia. The therapeutic goal is surgical or interventional repair of the oesophageal outflow tract at the level of the oesophagogastric junction.
Indication: We present the case of a 24-year-old patient with dysphagia accompanied by regurgitation and odynophagia, as well as unintentional weight loss over two years.
Methods: The video describes the preoperative imaging as well as endoscopic findings and demonstrates the technique of laparoscopic Heller myotomy followed by Dor fundoplication.
Conclusions: Concerning the therapy of classic achalasia, laparoscopic Heller myotomy followed by Dor fundoplication - despite controversies regarding peroral endoscopic myotomy as an alternative therapeutic option - can be considered as an established standard procedure.
abstract_id: PUBMED:32311278
Laparoscopic Heller Myotomy and Dor Fundoplication: How I Do It? Achalasia is a primary esophageal motility disorder characterized by lack of esophageal peristalsis and partial or absent relaxation of the lower esophageal sphincter in response to swallowing. Available treatment modalities are not curative but rather intend to relieve patients' symptoms. A laparoscopic Heller myotomy with Dor fundoplication is associated with high clinical success rates and a low incidence of postoperative reflux. A properly executed operation following critical surgical steps is key to the success of the operation.
abstract_id: PUBMED:33948715
Intraoperative diagnosis and treatment of Achalasia using EndoFLIP during Heller Myotomy and Dor fundoplication. Background: Manometry is the gold standard diagnostic test for achalasia. However, there are instances in which manometry cannot be obtained preoperatively, or the results of manometry are inconsistent with the patient's symptomatology. We aim to determine whether intraoperative use of EndoFLIP can provide a diagnosis of achalasia and provide objective information during Heller myotomy and Dor fundoplication.
Methods: To determine the intraoperative diagnostic EndoFLIP values for patients with achalasia, we determined the optimal cut-off points of the distensibility index (DI) between patients with a diagnosis of achalasia and patients with a diagnosis of hiatal hernia. To evaluate the usefulness of EndoFLIP values during Heller myotomy and Dor fundoplication, we obtained a cohort of patients with EndoFLIP values obtained after Heller myotomy and after Dor fundoplication as well as Eckardt score before and after surgery.
Results: Our analysis of 169 patients (133 hiatal hernia and 36 achalasia) showed that patients with DI < 0.8 have a >99% probability of having achalasia, while DI > 2.3 have a >99% probability of having hiatal hernia. Patients with a DI 0.8-1.3 have a 95% probability of having achalasia, and patients with a DI of 1.4-2.2 have a 94% probability of having a hiatal hernia. There were 40 patients in the cohort to determine objective data during Heller myotomy and Dor fundoplication. The DI increased from a median of 0.7 to 3.2 after myotomy and decreased to 2.2 after Dor fundoplication (p < 0.001). The median Eckardt score went down from a median of 4.5 to 0 (p < 0.001).
Conclusions: Our study shows that intraoperative use of EndoFLIP can facilitate the diagnosis of achalasia and is used as an adjunct to diagnose achalasia when symptoms are inconsistent. The routine use of EndoFLIP during Heller myotomy and Dor fundoplication provides objective data during the operation in a group of patients with excellent short-term outcomes.
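To make the reported cut-offs easier to follow, here is a minimal, hypothetical Python sketch of the DI-based rule described in PUBMED:33948715; the function name is illustrative, the probability labels simply restate the published figures, and this is not a validated clinical tool.

```python
def classify_by_di(di: float) -> str:
    """Map an intraoperative EndoFLIP distensibility index (DI) to the
    most likely diagnosis, using the cut-offs reported in PUBMED:33948715."""
    if di < 0.8:
        return "achalasia (>99% probability)"
    elif di <= 1.3:          # published band: 0.8-1.3
        return "achalasia (~95% probability)"
    elif di <= 2.2:          # published band: 1.4-2.2
        return "hiatal hernia (~94% probability)"
    else:                    # published band: >2.3
        return "hiatal hernia (>99% probability)"

# Example: the pre-myotomy median DI of 0.7 reported in the study
print(classify_by_di(0.7))  # -> achalasia (>99% probability)
```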
abstract_id: PUBMED:37950218
Single-center experience of transitioning from video-assisted laparoscopic to robotic Heller myotomy with Dor fundoplication for esophageal motility disorders. Background: Video-assisted laparoscopic Heller myotomy (LHM) has become the standard treatment option for achalasia. Robotic surgery offers specific advantages such as better three-dimensional (3D) stereoscopic vision, hand-eye consistency, and flexibility and stability with the endowrist, so its learning curve is expected to be shorter than that of LHM for surgeons who are already proficient in LHM. The aim of this study was to describe a single surgeon's experience related to the transition from video-assisted laparoscopic to robotic Heller myotomy with Dor fundoplication.
Methods: We conducted a retrospective observational study based on the recorded data of the first 66 Heller myotomies with Dor fundoplication performed by the same surgeon in the Department of Thoracic Surgery of The First Affiliated Hospital of Nanchang University in China: 26 laparoscopic (LHMD) and 40 robotic (RHMD) cases. The operation time and intraoperative blood loss were analyzed using the cumulative sum (CUSUM) method. Corresponding statistical tests were used to compare outcomes of both series of cases.
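For readers unfamiliar with CUSUM-based learning-curve analysis, the following is a minimal, hypothetical sketch of the generic technique (cumulative sum of deviations from the overall mean); the exact CUSUM variant used in the study above is not specified, and the sample data here are invented for illustration only.

```python
def cusum(values):
    """Cumulative sum of deviations from the overall mean, as commonly used
    for learning-curve analysis (e.g., operation time per consecutive case)."""
    mean = sum(values) / len(values)
    out, running = [], 0.0
    for v in values:
        running += v - mean
        out.append(running)
    return out

# Hypothetical operation times (minutes) for consecutive cases
times = [180, 175, 170, 168, 150, 140, 135, 130, 128, 126]
print(cusum(times))  # the curve peaks at the case after which times fall below the mean
```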
Results: The median operation time was shorter in the RHMD group compared to the LHMD group (130 [IQR 123-141] minutes vs. 163 [IQR 153-169] minutes, p < 0.001). In the RHMD group, one patient (2.5%) experienced mucosal perforation, whereas, in the LHMD group, the incidence of this complication was significantly higher at 19.2% (5 patients) (p = 0.031). Based on cumulative sum analyses, operation time decreased starting with case 20 in the LHMD group and with case 18 in the RHMD group. Intraoperative blood loss tended to decline starting with case 19 in the LHMD group and with case 16 in the RHMD group.
Conclusions: Both RHMD and LHMD are effective surgical procedures for symptom relief of achalasia patients. RHMD demonstrates superior outcomes in terms of operation time and mucosal perforation during surgery compared to LHMD. Proficiency with RHMD can be achieved after approximately 16-18 cases, while that of LHMD can be obtained after around 19-20 cases.
abstract_id: PUBMED:35967962
Esophageal Achalasia: From Laparoscopic to Robotic Heller Myotomy and Dor Fundoplication. Objective: Laparoscopic Heller myotomy and Dor fundoplication have become the gold standard in treating esophageal achalasia, and the robotic surgical platform represents their natural evolution. The objective of our study was to assess durable long-term clinical outcomes in our cohort.
Methods And Procedures: Between June 1, 1999 and June 30, 2019, 111 patients underwent minimally invasive treatment for achalasia (96 laparoscopically and 15 robotically). Fifty-two were males. Mean age was 49 years (20 - 96). Esophageal manometry confirmed the diagnosis. Fifty patients underwent pH monitoring study, with pathologic reflux in 18. Preoperative esophageal dilation was performed in 76 patients and 21 patients received botulin injection. Dysphagia was universally present, and mean duration was 96 months (5 - 480).
Results: Median operative time was 144 minutes (90 - 200). One patient required conversion to open approach. Four mucosal perforations occurred in the laparoscopic group and were repaired intraoperatively. Seven patients underwent completion esophageal myotomy and added Dor fundoplication. Upper gastrointestinal series was performed before discharge. Median hospital stay was 39 hours (24 - 312). Median follow up was 157 months (6 - 240), and dysphagia was resolved in 94% of patients. Seven patients required postoperative esophageal dilation.
Conclusions: Minimally invasive Heller myotomy and Dor fundoplication are feasible. The operation is challenging, but excellent results hinge on the operative techniques and experience. The high dexterity, three-dimensional view, and the ergonomic movements of robotic surgery allow application of all the technical elements, achieving the best durable outcome for the patient. Robotic surgery is the natural evolution of minimally invasive treatment of esophageal achalasia.
abstract_id: PUBMED:35523962
Laparoscopic Heller Myotomy with Dor Fundoplication: An Operation that has Withstood the Test of Time. Background: Achalasia is a primary esophageal motility disorder characterized by aperistalsis and defective relaxation of the lower esophageal sphincter in response to swallowing. Patients' symptoms include dysphagia, regurgitation, weight loss, chest pain and aspiration. The disease is idiopathic, and the goal of treatment is to eliminate the resistance caused by the abnormal lower esophageal sphincter, thereby allowing passage of the ingested food from the esophagus into the stomach. Three effective treatment modalities are available today: pneumatic dilatation, peroral endoscopic myotomy, and laparoscopic Heller myotomy with a partial fundoplication.
Methods: We described the technique to perform a laparoscopic Heller myotomy and a Dor fundoplication.
Results: Five steps to perform a laparoscopic Heller myotomy and a Dor fundoplication were described.
Conclusions: The surgical approach is favored in many centers as it is very effective in relieving symptoms, while avoiding pathologic gastroesophageal reflux in most patients.
Answer: The question of whether Dor fundoplication should be added to Heller myotomy in the treatment of achalasia has been addressed in several studies. The evidence from these studies suggests that adding Dor fundoplication to Heller myotomy is beneficial in reducing postoperative symptoms and complications.
One study retrospectively compared Nissen and Dor fundoplication after laparoscopic Heller myotomy for achalasia and found that Dor fundoplication significantly reduced both dysphagia and regurgitation severity scores more than Nissen fundoplication. The incidence of dysphagia was significantly higher in patients treated with Nissen fundoplication compared to those treated with Dor fundoplication. The study concluded that Heller myotomy followed by Dor fundoplication is a safe and valuable treatment, showing a lower incidence of postoperative dysphagia and a negligible incidence of postoperative GERD in long-term follow-up (PUBMED:25610181).
Another study described the surgical approach to laparoscopic Heller myotomy and Toupet fundoplication for achalasia, emphasizing the importance of an extended gastric myotomy and a partial fundoplication (Toupet) to prevent postoperative reflux (PUBMED:32392447).
A case report also supported the use of laparoscopic Heller myotomy with Dor fundoplication for achalasia treatment after distal gastrectomy and Billroth-II reconstruction, indicating that this approach is effective even after previous gastric surgery (PUBMED:37308447).
Furthermore, a study on the intraoperative use of EndoFLIP during Heller myotomy and Dor fundoplication demonstrated that this technique provides objective data during the operation and is associated with excellent short-term outcomes, suggesting its utility in confirming the adequacy of myotomy and fundoplication (PUBMED:33948715).
In contrast, one study suggested that fundoplication might be unnecessary when performing Heller myotomy, as their results showed lower rates of reflux symptoms and dysphagia in patients who did not undergo fundoplication (PUBMED:34803368). However, this finding seems to be an outlier compared to the other studies that support the addition of Dor fundoplication. |
Instruction: Elective management of transitional cell carcinoma of the distal ureter: can kidney-sparing surgery be advised?
Abstracts:
abstract_id: PUBMED:17532855
Elective management of transitional cell carcinoma of the distal ureter: can kidney-sparing surgery be advised? Objective: To determine the long-term oncological outcome of patients with primary transitional cell carcinoma (TCC) of the distal ureter electively treated with either kidney-sparing surgery (KSS) or radical nephroureterectomy (RNU) in a retrospective, non-randomized, single-centre study.
Patients And Methods: Of 43 consecutive patients with a primary solitary distal ureter TCC, 19 had KSS, consisting of distal ureter resection with bladder cuff excision and ureter reimplantation, and 24 had RNU with bladder cuff excision.
Results: The median (range) age at surgery was 69 (31-86) years for the KSS group and 73 (59-87) years for the RNU group, with patients in the latter group having more severe hydronephrosis. The median (range) follow-up was 58 (3-260) months. A recurrent bladder tumour was diagnosed after a median of 15 months in five of the 19 patients treated by KSS and after a median of 5.5 months in eight of the 24 treated by RNU. Five of the 19 patients treated by KSS and six of the 24 treated by RNU died from metastatic disease despite chemotherapy. Recurrence-free, cancer-specific and overall survival were comparable in the two groups. In two patients (11%) treated by KSS an ipsilateral upper urinary tract TCC recurred after 42 and 105 months, respectively.
Conclusion: Treatment by distal ureteric resection is feasible in patients with primary TCC of the distal ureter. The long-term oncological outcome seems to be comparable with that of patients treated by RNU. Furthermore, kidney preservation is advantageous if adjuvant or salvage chemotherapy is required.
abstract_id: PUBMED:31352580
Organ-sparing procedures in GU cancer: part 3-organ-sparing procedures in urothelial cancer of upper tract, bladder and urethra. Purpose: Radical surgery for urothelial carcinoma has a significant impact on patients' quality of life. Organ-sparing surgery (OSS) can provide comparable oncological outcomes with improved quality of life. In this review, we summarize the indications, techniques and outcomes of OSS for these tumors.
Methods: PubMed® was searched for relevant articles. Keywords used were: for upper tract urothelial carcinoma (UTUC): endoscopic, ureteroscopic/percutaneous management, laser ablation; for urothelial bladder cancer: bladder preservation, trimodal therapy, muscle invasive bladder cancer (MIBC); for urethral cancer: urethra/penile-sparing, urethral carcinoma.
Results: Kidney-sparing surgery is an option in patients with low-risk UTUC with better renal function preservation and comparable oncological control to radical nephroureterectomy. In select patients with MIBC, trimodal therapy has better quality of life and comparable oncological control to radical cystectomy. In distal male urethral cancer, penile conserving surgery is feasible and offers acceptable survival outcomes. In female urethral cancer, organ preservation can be achieved, in addition to OSS, through radiation.
Conclusions: In the appropriately selected patient, OSS in upper tract, bladder and urethral carcinoma has comparable oncological outcomes to radical surgery and with the additional benefit of improved quality of life.
abstract_id: PUBMED:38347103
Analysis of progression after elective distal ureterectomy and effects of salvage radical nephroureterectomy in patients with distal ureteral urothelial carcinoma. We compared the progression patterns after radical nephroureterectomy (RNU) and elective distal ureterectomy (DU) in patients with urothelial carcinoma of the distal ureter. Between Jan 2011 and Dec 2020, 127 patients who underwent RNU and 46 who underwent elective DU for distal ureteral cancer were enrolled in this study. The patterns of progression and upper tract recurrence were compared between the two groups. Progression was defined as a local recurrence and/or distant metastasis after surgery. Upper tract recurrence and subsequent treatment in patients with DU were analyzed. Progression occurred in 35 (27.6%) and 10 (21.7%) patients in the RNU and DU groups, respectively. The progression pattern was not significantly different (p = 0.441), and the most common progression site was the lymph nodes in both groups. Multivariate logistic regression analysis revealed that pT2 stage, concomitant lymphovascular invasion, and nodal stage were significant predictors of disease progression. Upper tract recurrence was observed in nine (19.6%) patients with DU, and six (66.7%) patients had a prior history of bladder tumor. All patients with upper tract recurrence after DU were managed with salvage RNU. Elective DU with or without salvage treatment was not a risk factor for disease progression (p = 0.736), overall survival (p = 0.457), cancer-specific survival (p = 0.169), or intravesical recurrence-free survival (p = 0.921). In terms of progression patterns and oncological outcomes, there was no difference between patients who underwent RNU and elective DU with/without salvage treatment. Elective DU should be considered as a therapeutic option for distal ureter tumor.
abstract_id: PUBMED:26612196
Oncologic Outcomes of Kidney Sparing Surgery versus Radical Nephroureterectomy for the Elective Treatment of Clinically Organ Confined Upper Tract Urothelial Carcinoma of the Distal Ureter. Purpose: We compared the oncologic outcomes of radical nephroureterectomy, distal ureterectomy and endoscopic surgery for elective treatment of clinically organ confined upper tract urothelial carcinoma of the distal ureter.
Materials And Methods: From a multi-institutional collaborative database we identified 304 patients with unifocal, clinically organ confined urothelial carcinoma of the distal ureter and bilateral functional kidneys. Rates of overall, cancer specific, local recurrence-free and intravesical recurrence-free survival according to surgery type were compared using Kaplan-Meier statistics. Univariable and multivariable Cox regression analyses were performed to assess the adjusted outcomes of radical nephroureterectomy, distal ureterectomy and endoscopic surgery.
Results: Overall 128 (42.1%), 134 (44.1%) and 42 patients (13.8%) were treated with radical nephroureterectomy, distal ureterectomy and endoscopic surgery, respectively. Although rates of overall, cancer specific and intravesical recurrence-free survival were equivalent among the 3 surgical procedures, 5-year local recurrence-free survival was lower for endoscopic surgery (35.7%) than for nephroureterectomy (95.0%, p <0.001) or ureterectomy (85.5%, p = 0.01) with no significant difference between nephroureterectomy and distal ureterectomy. On multivariable analyses only endoscopic surgery was an independent predictor of decreased local recurrence-free survival compared to nephroureterectomy (HR 1.27, p = 0.001) or distal ureterectomy (HR 1.14, p = 0.01). Distal ureterectomy and endoscopic surgery did not significantly correlate to cancer specific or intravesical recurrence-free survival. However, when adjustment was made for ASA(®) (American Society of Anesthesiologists(®)) score, distal ureterectomy (HR 0.80, p = 0.01) and endoscopic surgery (HR 0.84, p = 0.02) were independent predictors of increased overall survival, although no significant difference was found between them.
Conclusions: Because of better oncologic outcomes, distal ureterectomy could be considered the elective first line treatment of clinically organ confined urothelial carcinoma of the distal ureter.
abstract_id: PUBMED:25708579
Risk-adapted strategy for the kidney-sparing management of upper tract tumours. The conservative management of upper tract urothelial carcinoma (UTUC) was traditionally restricted to patients with imperative indications only. However, current recommendations suggest selected patients with normal, functioning contralateral kidneys should also be considered for such an approach. A risk-adapted strategy to accurately select patients who could benefit from kidney-sparing surgery without compromising their oncological safety has been advocated. A number of kidney-sparing surgical procedures are available. Despite the advent of ureteroscopic management, segmental ureterectomy and the percutaneous approach both have specific indications for use that predominantly depend on the tumour location and progression risk. These kidney-sparing procedures are cost-effective, and when used to treat patients with low-risk UTUC, are associated with oncological outcomes similar to radical nephroureterectomy. Systematic second-look endoscopy combined with upper tract instillations of topical chemotherapeutic agents after ureteroscopic or percutaneous surgery and a single early intravesical instillation of mitomycin C after any kidney-sparing procedure might decrease the risks of local recurrence and progression. Meticulous and stringent endoscopic monitoring of the upper and lower urinary tract is a key component of the conservative management of UTUC. Local recurrences are often suitable for repeat conservative therapy, whereas disease progression should be treated with delayed radical nephroureterectomy.
abstract_id: PUBMED:37760465
Modern Kidney-Sparing Management of Upper Tract Urothelial Carcinoma. Purpose: To review the latest evidence on the modern techniques and outcomes of kidney-sparing surgeries (KSS) in patients with upper tract urothelial carcinoma (UTUC).
Methods: A comprehensive literature search on the study topic was conducted before 30 April 2023 using electronic databases including PubMed, MEDLINE, and EMBASE. A narrative overview of the literature was then provided based on the extracted data and a qualitative synthesis of the findings.
Results: KSS is recommended for low- as well as select high-risk UTUCs who are not eligible for radical treatments. Endoscopic ablation is a KSS option that is associated with similar oncological outcomes compared with radical treatments while preserving renal function in well-selected patients. The other option in this setting is distal ureterectomy, which has the advantage of providing a definitive pathological stage and grade. Data from retrospective studies support the superiority of this approach over radical treatment with similar oncological outcomes, albeit in select cases. Novel chemoablation agents have also been studied in the past few years, of which mitomycin gel has received FDA approval for use in low-risk UTUCs.
Conclusion: KSSs are acceptable approaches for patients with low- and select high-risk UTUCs, which preserve renal function without compromising the oncological outcomes.
abstract_id: PUBMED:36533431
Kidney-sparing surgery for distal high-risk ureteral carcinoma: Clinical efficacy and preliminary experiences in 22 patients. Background: Several groups have shown that kidney-sparing surgery (KSS) has oncological outcomes equivalent to radical nephroureterectomy (RNU) in patients with low-risk upper urinary tract urothelial carcinoma (UTUC). However, the clinical efficacy of KSS for high-risk UTUC, especially for distal high-risk ureteral carcinoma, remains unclear.
Objective: To evaluate the feasibility of KSS for patients with distal high-risk ureter cancer.
Materials And Methods: Our study included 22 patients diagnosed with distal high-risk ureteral cancer who underwent KSS between May 2012 and July 2021 in the First Affiliated Hospital of Chongqing Medical University. Overall survival (OS), defined as the primary endpoint of the present study, was assessed by a blinded independent review committee (BIRC). The secondary endpoints included the postoperative SF-36 (the short form 36 health survey questionnaire) score, progression-free survival (PFS), postoperative complications, and so on.
Results: Overall, 17 (77.3%) and 5 (22.7%) patients underwent segmental ureterectomy (SU) and endoscopic ablation (EA), respectively. By the cut-off date, the mean OS was 76.3 months (95% CI: 51.3-101.1 months) and the mean PFS was 47.0 months (95% CI: 31.1-62.8 months). The SF-36 score was >300 in the majority of patients (90.9%).
Conclusion: This is an early attempt to explore the clinical efficacy of KSS in distal high-risk ureteral cancer based on the high-risk UTUC criteria, and it shows satisfactory results in long-term prognosis and operation-associated outcomes. However, future randomized or prospective multicenter studies are necessary to validate our conclusions.
abstract_id: PUBMED:36437423
Robot-assisted ipsilateral partial nephrectomy with distal ureterectomy for synchronous renal and ureteric tumors-a case report. Background: Ipsilateral synchronous renal and ureteric tumors are uncommon. Nephron-sparing surgery is the standard for small renal masses. Ureteric tumors can be selectively managed with nephron-sparing surgery, especially in patients with renal dysfunction. This case report details the management of a double malignancy by nephron-sparing surgery with a robot-assisted approach.
Case Report: A 63-year-old gentleman with diabetes presented with a 2-week history of intermittent gross hematuria. He was clinically normal. On evaluation, he had grade 4 renal dysfunction (serum creatinine 4.5 mg%) with mild proteinuria. Magnetic resonance imaging revealed a right upper polar Bosniak III renal lesion and right hydroureteronephrosis due to a 2 cm ureteric tumor near the vessel crossing. Renogram showed an overall GFR of 22 ml/min with 31% (6 ml/min) contribution from the right side. He underwent robot-assisted right partial nephrectomy with distal ureterectomy and Boari flap ureteric reimplantation. Histopathology revealed margin-free T2 clear cell carcinoma (kidney) and high-grade T3 transitional cell carcinoma (ureter). His nadir creatinine at 1-year follow-up was 3.3 mg%, with no recurrence on MRI, cystoscopy, and ureteroscopy at 1 year.
Conclusion: Minimally invasive nephron sparing surgery is feasible and reasonable option with satisfactory oncological control even in ipsilateral synchronous renal and ureteric tumors in selected patients with renal dysfunction.
abstract_id: PUBMED:25610045
Endoscopic versus open approach of bladder cuff and distal ureter in the management of upper urinary tract transitional cell carcinoma. Objective: Nephroureterectomy with the removal of the ipsilateral ureteral orifice and bladder cuff en bloc remains the gold standard treatment for upper urinary tract urothelial cancer. The distal ureter can be removed with the open surgical technique or endoscopic approach. We compared the outcomes of the endoscopic approach with those of conventional open surgery on the distal ureter.
Materials And Methods: We collected data from the charts of 30 patients who underwent radical nephroureterectomy at our clinic from January 1997 to January 2007 for upper urinary tract urothelial carcinoma. The patients were divided into two groups according to procedure performed on the distal ureter. Group I (n:12) was comprised of patients who underwent an open surgical procedure, and group II (n:18), was comprised of patients who underwent an endoscopic approach. Both groups were compared in terms of operative time, blood loss, transurethral catheter duration and duration of hospital stay.
Results: Patient age and tumor location showed no significant differences between the two groups. The operative time was significantly longer in group I than group II (181 versus 128 minutes; p<0.05). On the other hand, the estimated blood loss, transurethral catheter duration and duration of hospital stay were significantly lower in group II (205 mL versus 435 mL, 5 versus 8.5 days and 5.6 versus 9.2 days, respectively; p <0.05).
Conclusion: The results of our study show that the endoscopic approach is less invasive than open surgery on the distal ureter. This procedure can easily be performed in the management of upper urinary tract urothelial carcinoma.
abstract_id: PUBMED:31237445
Management of distal ureter and bladder cuff at the time of nephroureterectomy: surgical techniques and predictors of outcome. Open radical nephroureterectomy (NU) with removal of the ureter and bladder cuff is the 'gold standard' in the treatment of high-grade urothelial cancers of the upper urinary tract. A salient issue is the management of the distal ureter and bladder cuff at time of surgery. Which technique confers superior oncologic benefit is of particular interest since this disease process is notoriously plagued with high intravesical recurrence rates. Although open radical NU is the 'gold standard', the maturation of minimally invasive surgery formidably challenges approaches considered 'gold standard'. We thus sought to critically review the literature comparing perioperative and oncologic outcomes in the approaches used to manage the distal ureter and bladder cuff in patients undergoing radical NU.
Answer: Based on the information provided in the abstracts, kidney-sparing surgery (KSS) can be advised for the elective management of transitional cell carcinoma (TCC) of the distal ureter in certain cases. Several studies have shown that KSS, such as distal ureterectomy with bladder cuff excision and ureter reimplantation, can have comparable long-term oncological outcomes to radical nephroureterectomy (RNU) (PUBMED:17532855, PUBMED:38347103, PUBMED:26612196). Furthermore, KSS has the advantage of preserving renal function, which can be beneficial if adjuvant or salvage chemotherapy is required (PUBMED:17532855).
Organ-sparing surgery (OSS) has been reported to provide comparable oncological outcomes to radical surgery, with the additional benefit of improved quality of life (PUBMED:31352580). In the case of distal ureteral urothelial carcinoma, elective distal ureterectomy (DU) has been shown not to be a risk factor for disease progression, overall survival, cancer-specific survival, or intravesical recurrence-free survival when compared with RNU (PUBMED:38347103). Moreover, in patients with low-risk upper tract urothelial carcinoma (UTUC), KSS has been associated with oncological outcomes similar to radical nephroureterectomy (PUBMED:25708579).
The latest evidence suggests that KSS is recommended for low-risk as well as select high-risk UTUCs, and modern techniques and outcomes of KSS in patients with UTUC have been found to preserve renal function without compromising oncological outcomes (PUBMED:37760465). Even in the setting of high-risk UTUC, preliminary experiences with KSS for distal high-risk ureteral carcinoma have shown satisfactory long-term prognosis and operation-associated outcomes (PUBMED:36533431).
In conclusion, KSS can be considered a therapeutic option for distal ureter tumors, and in appropriately selected patients, it can offer a balance between oncological control and preservation of renal function. However, meticulous and stringent monitoring is essential, and disease progression should be treated with delayed radical nephroureterectomy if necessary (PUBMED:25708579). |
Instruction: Is open pyeloplasty still justified?
Abstracts:
abstract_id: PUBMED:38304363
COMPARISON OF LAPAROSCOPIC AND OPEN PYELOPLASTY IMPACT ON COMFORT AND SUCCESS: A RETROSPECTIVE, SINGLE CENTER STUDY. Ureteropelvic junction obstruction causes hydronephrosis and may lead to renal parenchymal damage unless timely diagnosed and treated. Although open pyeloplasty is still the gold standard, it needs to be compared with new techniques. In this study, we compared laparoscopic and open pyeloplasty. Data on 113 patients who had undergone surgery between 2008 and 2014 were evaluated retrospectively. Thirty-nine patients had undergone laparoscopic pyeloplasty, and 74 had undergone open pyeloplasty. Ultrasonography was performed at 3 months and scintigraphy at 6 months postoperatively. Parameters such as the length of surgery, need for analgesics, length of hospital stay, complications, and success rates were compared. The need for analgesics was significantly lower in the laparoscopic pyeloplasty group (mean 4.5 doses of paracetamol 15 mg/kg IV) than in the open pyeloplasty group (mean 9.8 doses of dexketoprofen 50 mg IV) (p<0.05). The length of hospital stay was also shorter in the laparoscopic pyeloplasty group (mean 4.0 days) than in the open pyeloplasty group (mean 7.3 days) (p<0.05). This study demonstrated that laparoscopic pyeloplasty could be safely used in the treatment of ureteropelvic junction obstruction with a lower need for analgesics and a shorter length of hospital stay than with open pyeloplasty.
abstract_id: PUBMED:27614698
Laparoscopic pyeloplasty versus open pyeloplasty for recurrent ureteropelvic junction obstruction in children. Introduction And Objectives: Recurrent ureteropelvic junction obstruction (UPJO) in children is an operative challenge. Minimally invasive endourological treatment options for secondary UPJO have suboptimal success rates; hence, there is a re-emergence of interest about redo pyeloplasty. The present study presented experience with laparoscopic management of previously failed pyeloplasty compared with open redo pyeloplasty in children.
Study Design: Twenty-four children with recurrent UPJO who underwent transperitoneal dismembered laparoscopic pyeloplasty were studied. Operative, postoperative, and follow-up functional details were recorded and compared with those of open pyeloplasty (n = 15) carried out for recurrent UPJO by the same surgeon during the same study period.
Results: Demographic data were comparable in the laparoscopic and open groups, except for a significantly lower GFR in the open group (24.8 vs 38.2 ml/min, P = 0.0001). Mean time to failure of the original repair was 20.2 months (23.6 months for redo laparoscopic pyeloplasty, 18.8 months for redo open). The success rate was 91.7% for laparoscopic redo pyeloplasty vs 100% for open redo pyeloplasty. Compared with redo open pyeloplasty, the mean operative time was longer (211.4 ± 32.2 vs 148.8 ± 16.6, P = 0.002) and estimated blood loss was higher (102 vs 75 ml, P = 0.06) in the laparoscopic group, while hospital stay was shorter and pain score was lower in the laparoscopic group (P = 0.02). There were no intraoperative complications, while the postoperative complication rate was similar in the two groups (20.8 vs 20.0%).
Discussion: Before the laparoscopic approach became a viable option, endopyelotomy was widely used for managing recurrent UPJO. However, the success rate of endopyelotomy for secondary UPJO was approximately 10-25% lower than that of open pyeloplasty. Redo pyeloplasty had excellent results, with reported success rates of 77.8-100%. Laparoscopic redo pyeloplasty is becoming a viable alternative to open redo pyeloplasty in many centers with experience in minimally invasive techniques. The present study revealed that redo laparoscopic pyeloplasty appeared to have advantages over redo open surgery, in that it was associated with a shorter hospital stay (4 vs 6 days, P = 0.046), a reduced postoperative pain score (P = 0.02), and less need for postoperative analgesia (P = 0.001), with comparable success and patient safety. However, the procedure had longer operative times and more blood loss.
Conclusion: Laparoscopic pyeloplasty is a viable alternative to open pyeloplasty in children with recurrent UPJO, with shorter hospital stays and less postoperative pain. However, the procedure is technically demanding and should be attempted in high-volume centers by laparoscopists with considerable experience in laparoscopic reconstructive procedures.
abstract_id: PUBMED:34912393
A comparative study on the Efficacy of Retroperitoneoscopic Pyeloplasty and Open Surgery for Ureteropelvic Junction Obstruction in Children. Objectives: To compare the therapeutic effect of retroperitoneoscopic dismembered pyeloplasty and open pyeloplasty on ureteropelvic junction obstruction (UPJO) in children.
Methods: After retrospective analysis of clinical data, 78 children with ureteropelvic junction stenosis treated from January 2012 to June 2018 were divided into two groups according to the surgical method: the OP (open pyeloplasty) group (38 cases) and the LP (laparoscopic dismembered pyeloplasty) group (40 cases). The operation time, intraoperative bleeding volume, postoperative length of stay (LOS), postoperative complication rate, postoperative hydronephrosis improvement and other indicators were compared between the two groups.
Results: All patients underwent surgery successfully, without conversion to open surgery in the LP group. The incidence of postoperative urine leakage and the recovery of hydronephrosis 12 months after operation showed no statistically significant difference between the LP and OP groups (P>0.05). The intraoperative bleeding volume, the incidence of postoperative retroperitoneal hematoma, and the postoperative LOS in the LP group were lower than those in the OP group, while the operation time was longer than that in the OP group, with statistically significant differences (P<0.05).
Conclusion: Retroperitoneoscopic dismembered pyeloplasty had an effect similar to that of open dismembered pyeloplasty, but with faster recovery and fewer complications, so it has become the preferred treatment method for UPJO in children.
abstract_id: PUBMED:9313652
Is open pyeloplasty still justified? Objective: To determine the success rate, complications and morbidity from open pyeloplasty.
Patients And Methods: The study included 63 patients with confirmed pelvi-ureteric junction (PUJ) obstruction who underwent 66 pyeloplasties. Their records were analysed retrospectively for age, clinical presentation, serum creatinine level, presence of infection, surgical technique, and pre- and post-operative isotopic renography. The mean (range) follow-up was 15.5 (3-60) months.
Results: Pain was the most common presenting symptom; most pyeloplasties were dismembered and 77% of the procedures were performed by urological trainees. Retrograde pyelography did not alter the management in any patient. The complications were persisting PUJ obstruction in four, urinary leakage in two, transient vesico-ureteric obstruction in two and meatal stenosis in one. There were no complications in non-intubated pyeloplasties. Pain was successfully relieved in 98% of patients; renal function improved or remained stable in 92% and deteriorated in 7.7%. One patient underwent a revision pyeloplasty and another required nephrectomy. Younger age, absence of urinary tract infection and absence of a palpable mass were favourable factors.
Conclusion: Pyeloplasty is the most effective and permanent treatment for PUJ obstruction. Newer endoscopic procedures currently used must be carefully assessed against this 'gold standard' before becoming widespread.
abstract_id: PUBMED:37751052
Laparoscopic assisted dismembered pyeloplasty versus open pyeloplasty in UPJO with poorly function kidney in pediatrics. Background: The management of UPJO with a poorly functioning kidney (less than 10% of function) has been the subject of debate for more than a decade. Some authors have recommended nephrectomy, while others favor renal salvage (pyeloplasty). We report our experience with laparoscopic assisted pyeloplasty in pediatric patients with poorly functioning kidneys in comparison with an open approach.
Materials And Methods: A retrospective study was conducted to review 65 patients who were diagnosed with hydronephrosis and had impaired renal function due to UPJO. The study was conducted in the pediatric surgery departments of Al-Azhar University Hospital and Fattouma Bourguiba University Hospital of Monastir over a period of 20 years. It was limited to pediatric patients with UPJO with grade III or higher hydronephrosis, an antero-posterior pelvic diameter ≥ 20 mm, and renal function equal to or less than 10%, corrected by laparoscopic assisted or open pyeloplasty.
Results: There were 40 cases in group A who underwent laparoscopic assisted pyeloplasty, and 25 cases in group B who underwent open pyeloplasty. There were no complications or difficulties during the operations. The mean operative time in group A was 90 ± 12 min, while in group B it was 120 ± 11 min. The renal assessment parameters significantly improved in both groups. In group A, the mean split renal function was 7.9 ± 1.3% and increased to 22.2 ± 6.3%. In group B, the mean split renal function was 8.1 ± 1.1% and increased to 24.2 ± 5.1%. However, the differences between the two groups in pre-operative and post-operative renal function were not statistically significant.
Conclusion: Laparoscopic assisted pyeloplasty is an effective treatment for patients with poorly functioning kidneys, especially those with less than 10% function. While this surgical procedure requires shorter operative times, it yields functional outcomes comparable to those of the open approach.
abstract_id: PUBMED:27936909
Variation in the Use of Open Pyeloplasty, Minimally Invasive Pyeloplasty, and Endopyelotomy for the Treatment of Ureteropelvic Junction Obstruction in Adults. Background And Purpose: Ureteropelvic junction obstruction is a common condition that can be treated with open pyeloplasty, minimally invasive pyeloplasty, and endopyelotomy. While all these treatments are effective, the extent to which they are used is unclear. We sought to examine the dissemination of these treatments.
Patients And Methods: Using the MarketScan® database, we identified adults 18 to 64 years old who underwent treatment for ureteropelvic junction obstruction between 2002 and 2010. Our primary outcome was ureteropelvic junction obstruction treatment (i.e., open pyeloplasty, minimally invasive pyeloplasty, endopyelotomy). We fit a multilevel multinomial logistic regression model accounting for patients nested within providers to examine several factors associated with treatment.
Results: Rates of minimally invasive pyeloplasty increased 10-fold, while rates of open pyeloplasty decreased by over 40%, and rates of endopyelotomy were relatively stable. Factors associated with receiving an open vs a minimally invasive pyeloplasty were largely similar. Compared with endopyelotomy, patients receiving minimally invasive pyeloplasty were less likely to be older (odds ratio [OR] 0.96; 95% confidence interval [CI], 0.95, 0.97) and live in the south (OR 0.52; 95% CI, 0.33, 0.81) and west regions (OR 0.57; 95% CI 0.33, 0.98) compared with the northeast and were more likely to live in metropolitan statistical areas (OR 1.52; 95% CI 1.08, 2.13).
Conclusions: Over this 9-year period, the landscape of ureteropelvic junction obstruction treatment has changed dramatically. Further research is needed to understand why geographic factors were associated with receiving a minimally invasive pyeloplasty or an endopyelotomy.
abstract_id: PUBMED:37048622
Robotic versus Open Pyeloplasty: Perioperative and Functional Outcomes. We designed a retrospective study to assess the surgical and economic outcomes of robot-assisted laparoscopic pyeloplasty (RALP) compared with open pyeloplasty (OP), including consecutive patients suffering from ureteropelvic junction obstruction and operated on from January 2012 to January 2022 at a single center. Preoperative, intraoperative, and postoperative outcomes, including costs, were comparatively analyzed. The primary outcome was 3-month success, defined as symptom resolution and no obstruction upon diuretic renal scintigraphy. Overall, 91 patients were included (48 OP and 43 RALP). The success rate at 3 months was 93.0% and 83.3% in the RALP and OP group, respectively (p = 0.178), and the results remained stable at the last follow-up (35.4 ± 22.8 months and 56.0 ± 28.1 months, respectively). Intraoperative blood loss (p < 0.001), need for postoperative analgesics (p = 0.019) and antibiotics (p = 0.004), and early postoperative complication rate (p = 0.009) were significantly lower in the RALP group. None of the assessed variables were a predictor for failure. The mean total direct cost per surgical procedure and related hospital stay was 2373 € higher in the RALP group. RALP is an effective and safe treatment for ureteropelvic junction obstruction; however, further studies are needed to evaluate the cost-effectiveness of RALP, accounting for indirect costs and cost-saving with new surgical platforms.
abstract_id: PUBMED:33235822
Laparoscopic Versus Open Pyeloplasty for Primary Pelvic Ureteric Junction Obstruction: A Prospective Single Centre Study. Introduction: The aim of the study was to compare the clinical and patient-reported outcomes among open pyeloplasty (OP) and laparoscopic pyeloplasty (LP) patients. Materials and methods: This was a prospective, single centre, case-cohort study conducted in a tertiary care hospital with 62 patients. In both techniques, dismembered Anderson-Hynes pyeloplasty was undertaken. Postoperatively, patients underwent visual analogue scale (VAS) assessment for pain and recording of days to ambulation, and the short- and long-term outcomes of the two procedures were compared. Results: There was no difference in the physical and functional outcomes between the two surgical approaches at 12 months after surgery. However, patients in the laparoscopic group did report a higher rate of satisfaction at six weeks and six months postoperatively. Likewise, patients in the LP group experienced less pain during the postoperative period (p-value <0.001), with decreased analgesic requirements. This translated into earlier patient ambulation (p-value <0.001) and a shorter hospital stay (p-value <0.001) in the laparoscopic group. Moreover, follow-up ultrasound showed equal improvement of hydronephrosis in the two groups. Conclusion: Laparoscopic and open pyeloplasty are equally effective in treating pelvic ureteric junction obstruction (PUJO), with comparable patient-reported outcomes at 12-month follow-up. However, the laparoscopic technique merits over open surgery with faster rehabilitation, decreased postoperative pain and a shorter hospital stay.
abstract_id: PUBMED:33413037
Functional, morphological and operative outcome after pyeloplasty in adult patients: Laparoscopic versus open. Objective: The aim of this study is to determine and compare improvement of hydronephrosis, renal function, and operative outcome between laparoscopic and open pyeloplasty in adults.
Material And Methods: Sixty-five adult patients with primary ureteropelvic junction obstruction (UPJO) underwent pyeloplasty between January 2014 and September 2020. Thirty-four patients had laparoscopic pyeloplasty (LP) and 31 patients had open pyeloplasty (OP). In this retrospective study, demographics, differential renal function (DRF), hydronephrosis, anteroposterior diameter of the renal pelvis (APD) and operative outcomes (operation time, blood loss, complications, hospital stay, etiology, analgesic requirement, and success rates) were compared between the two groups.
Results: Improvement of APD was greater in the OP group (p: 0.001). Improvement of DRF (p: 0.713) and hydronephrosis (p = 1.000), success (p: 0.407) and complication rates (p: 0.661) were comparable between the two groups. Median hospital stay, postoperative analgesia requirement and blood loss were lower in the LP group, while mean operative time was shorter in the OP group (p: 0.001).
Conclusion: Pain complaints are greatly reduced after pyeloplasty in adult patients, but kidney drainage, DRF and hydronephrosis do not improve as much as desired. In our study, improvement of APD was better and median operative time was shorter with OP, while intraoperative blood loss, hospital stay, and analgesic requirement were lower in the LP group.
abstract_id: PUBMED:38065760
Comparison between open and minimally invasive pyeloplasty in infants: A systematic review and meta-analysis. Introduction: Ureteropelvic junction obstruction (UPJO) is the most common cause of congenital hydronephrosis. Techniques such as laparoscopic pyeloplasty (LP) have gained in popularity over recent years. Although some retrospective studies have compared minimally invasive reconstructive techniques with open surgery for treatment of UPJO in infants, results remain controversial due to the small sample size in most of these studies.
Objective: To verify whether the benefits of minimally invasive pyeloplasty (MIP) observed in adults and children over 2 years of age also apply to infants.
Methods: A systematic review of the literature was performed according to PRISMA recommendations. We searched databases of MEDLINE, EMBASE, Web of Science, and Cochrane Central Register of Controlled Trials. We excluded studies in which patient cohorts were outside the age range between 1 and 23 months of age (infants). Studies should evaluate at least one of the following outcomes: average hospital stay, operative time, follow-up time, complications, post-surgical catheter use, success rate and reintervention rate. The quality of the evidence was assessed with the ROBINS-I tool.
Results: In total, 13 studies were selected. 3494 patients were included in the meta-analysis, of whom 3054 underwent OP, while the remaining 440 underwent MIP. The mean hospital stay was 1.16 days shorter in the MIP group (95% CI; -1.78, -0.53; p = 0.0003). Our analysis also showed a significantly shorter surgical time in the OP group, with a mean operative time of 119.92 min, compared to 137.63 min in the MIP group (95% CI; -31.76, -6.27; p = 0.003). No statistically significant between-group differences were found with respect to follow-up time, complications, post-surgical catheter use, success rate or reintervention rate.
Conclusion: This systematic review with meta-analysis has shown that laparoscopic/robotic pyeloplasty in infants is a safe technique with similar success rates to open surgery. Nonetheless, randomized clinical trials with longer follow-up are needed to consolidate these results with more robust scientific evidence.
Answer: Open pyeloplasty has been considered the gold standard for the treatment of ureteropelvic junction obstruction (UPJO) and has demonstrated high success rates in relieving pain, improving or stabilizing renal function, and resolving obstruction (PUBMED:9313652). However, with the advent of minimally invasive techniques, the role of open pyeloplasty has been re-evaluated.
Laparoscopic pyeloplasty (LP) and robot-assisted laparoscopic pyeloplasty (RALP) have emerged as viable alternatives to open pyeloplasty (OP), offering several advantages. Studies have shown that LP and RALP are associated with shorter hospital stays, less postoperative pain, reduced need for analgesics, and fewer complications compared to OP (PUBMED:38304363; PUBMED:27614698; PUBMED:34912393; PUBMED:37751052; PUBMED:33413037). Additionally, laparoscopic techniques have been found to be effective in treating recurrent UPJO in children, with success rates comparable to OP (PUBMED:27614698).
Despite the longer operative times and potentially higher blood loss associated with LP, the postoperative recovery is faster, and the cosmetic results are better compared to OP (PUBMED:27614698; PUBMED:33413037). Furthermore, the success rates of LP and RALP are comparable to those of OP, even in cases of poorly functioning kidneys (PUBMED:37751052).
The use of minimally invasive techniques for UPJO treatment has increased over time, with a corresponding decrease in the rates of OP (PUBMED:27936909). This shift suggests that the benefits of minimally invasive pyeloplasty are being recognized and utilized in clinical practice.
In conclusion, while open pyeloplasty remains a highly effective treatment for UPJO, the evidence suggests that minimally invasive techniques such as LP and RALP are justified alternatives that offer several advantages, including less morbidity and quicker recovery. These minimally invasive approaches are increasingly becoming the preferred methods for treating UPJO, although OP may still be justified in certain clinical scenarios or in centers where minimally invasive expertise is not available (PUBMED:9313652; PUBMED:27936909). |
Instruction: Is admission medically justified for all patients with acute stroke or transient ischemic attack?
Abstracts:
abstract_id: PUBMED:7710148
Is admission medically justified for all patients with acute stroke or transient ischemic attack? Study Objectives: To determine whether admission to an acute care hospital is medically justified for all patients with acute stroke or transient ischemic attack (TIA) and whether those patients for whom admission is justified can be identified in the emergency department.
Design: Retrospective descriptive study.
Setting: Urban county teaching hospital.
Participants: Consecutive adult patients seen in an ED with nonhemorrhagic stroke, TIA, or hemorrhagic stroke.
Methods: Admission to an acute care hospital was deemed medically justified when the patient had any of the following criteria: another diagnosis that warranted admission, an inadequate home situation, altered mental status, or an adverse event during hospitalization or if they underwent hospital-based treatment that could not be provided on an outpatient basis.
Results: One hundred sixty-eight patients were seen during a 1-year period: 120 had an ED diagnosis of nonhemorrhagic stroke, 22 had a diagnosis of TIA, and 26 had a diagnosis of hemorrhagic stroke. One hundred sixty-one patients (96%) were admitted to our hospital. Sixty-three of the 161 admissions (39%) were retrospectively categorized as medically justified. Seventeen of the 63 patients (27%) whose admissions were medically justified developed the criteria justifying their admission after leaving the ED.
Conclusion: The practice of admitting all patients with nonhemorrhagic stroke, TIA, or hemorrhagic stroke to an acute care hospital is medically justified because the ED evaluation cannot reliably identify patients whose condition will worsen.
abstract_id: PUBMED:24209902
Predicting the need for hospital admission of TIA patients. Background: It is unknown which patients will benefit most from hospital admission after transient ischemic attack (TIA). Our aim was to define predictors of a positive hospital outcome.
Methods: We used two cohorts of TIA patients: the University of Texas at Houston Stroke Center (UTH) cohort, and the Tel-Aviv Sourasky Medical Center (TASMC) cohort in Israel for external validation. We retrospectively reviewed medical records and imaging data. We defined positive yield (PY) of the hospital admission as identification of stroke etiologies that profoundly change clinical management.
Results: The UTH cohort included 178 patients, of whom 24.7% had PY. In the multivariate analysis, the following were associated with PY: coronary artery disease (CAD), age, and acute infarct on DWI. We then derived a composite score, termed the PY score, to predict PY. One point is scored for each of: age >60, CAD, and acute infarct on DWI. The proportion of PY by PY score was as follows: score 0, 6%; score 1, 22%; score 2, 47%; score 3, 67% (p<0.001). In the validation cohort the PY score was highly predictive of PY and performed in a very similar manner.
Conclusions: Our data suggest that the PY score may enable physicians to make better admission decisions and result in better, safer and more economical care for TIA patients.
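As an illustration of the three-item composite score described above, the following is a minimal, hypothetical Python sketch; the function and variable names are ours, the yield proportions simply restate the figures reported in PUBMED:24209902, and this is not a clinical decision tool.

```python
def py_score(age: int, has_cad: bool, acute_infarct_on_dwi: bool) -> int:
    """Composite PY score (PUBMED:24209902): one point each for
    age > 60, coronary artery disease (CAD), and acute infarct on DWI."""
    return int(age > 60) + int(has_cad) + int(acute_infarct_on_dwi)

# Reported proportion of admissions with a positive yield (PY), by score
PY_PROPORTION = {0: 0.06, 1: 0.22, 2: 0.47, 3: 0.67}

score = py_score(age=72, has_cad=True, acute_infarct_on_dwi=False)
print(score, PY_PROPORTION[score])  # 2 0.47
```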
abstract_id: PUBMED:8362433
The prognostic value of admission blood pressure in patients with acute stroke. Background And Purpose: Patients with acute stroke are often found to have high blood pressures at hospital admission. Previous studies have shown variable results regarding the prognostic value of high blood pressure in acute stroke. The aim of this study was to investigate the prognostic value of admission blood pressure in a population-based sample of patients with acute stroke.
Methods: Eighty-five patients with intracerebral hemorrhage and 831 with ischemic disease were included in the study. The relations between admission blood pressure and 30-day mortality were studied by logistic regression analyses.
Results: High blood pressure in patients with impaired consciousness on hospital admission was significantly related to 30-day mortality in patients with intracerebral hemorrhage (P = .037) and in patients with ischemic disease (P < .0001). In patients without impaired consciousness, high blood pressure at time of admission was not related to increased mortality at 30 days.
Conclusions: High admission blood pressure in alert stroke patients was not related to increased mortality. Stroke patients with impaired consciousness showed higher mortality rates with increasing blood pressure. However, this does not provide a basis for recommending antihypertensive therapy for such patients.
abstract_id: PUBMED:12555704
Factors of importance for early and late admission of patients with stroke and transient cerebral ischemia. Introduction: Early admission after stroke and TIA is important in modern stroke treatment. We studied the time delay to admission and explored predictive factors of early/late admission.
Material And Methods: The study was prospective and community-based comprising all patients with stroke or TIA admitted to a Copenhagen hospital from 1 September 1999 to 30 April 2000. The catchment area is well defined with 283,000 inhabitants. All had a neurological examination and a structured interview within three days with registration of age, gender, premorbid Rankin, Scandinavian stroke scale score, time of onset, knowledge of the cause of symptoms, cohabitation, alone at onset, whether admitted by a general practitioner (GP) or by ambulance after calling emergency, and relevant stroke risk factors. Univariate and multivariate statistics were used.
Results: Altogether 494 patients with stroke and 63 with TIA were entered; 49% were admitted by a GP, 38% by ambulance after calling emergency, 13% via other routes. Time from onset to hospital admission could be assessed reliably in 374 patients (67%) and was a median of 2.6 hours; 37% arrived within 0-3 hours, 55% within 0-6 hours. Patients calling an ambulance over emergency arrived at a median of 1.0 hour after the stroke; those calling the GP a median of 6.0 hours after the stroke. In a multivariate analysis only admission by ambulance after calling emergency (OR 5.7), TIA (OR 5.6), or the patient's knowledge of the cause of the symptoms (OR 2.2) were predictors of early admission.
Discussion: Patients with stroke or TIA in a Danish metropolitan area arrive at hospital a median of 2.6 hours after the stroke. Admission by ambulance after calling emergency was associated with the shortest onset to admission time.
abstract_id: PUBMED:23906624
Admission rates of ED patients with transient ischemic attack have increased since 2000. Objective: A study published in December 2000 showed that 5% of patients presenting with transient ischemic attacks (TIAs) developed a stroke within 48 hours. This finding has been corroborated in several other studies. We hypothesize that, influenced by this, emergency department (ED) physicians have been more reluctant to discharge TIA patients resulting in an increase in the percentage of TIA patients admitted.
Methods: This is a retrospective cohort of consecutive ED visits. The study was conducted in 6 New Jersey EDs with annual ED volumes ranging from 25,000 to 65,000 visits. Consecutive patients seen by ED physicians between January 1, 2000, and December 31, 2010, were included. We identified TIA visits using the International Classification of Diseases, Ninth Revision, code. We analyzed the admission rates for TIA, testing for significant differences using the Student t test, and calculated 95% confidence intervals.
Results: Of the 2,622,659 visits in the database, 8,216 (0.3%) were for TIA. Females comprised 57%. There was a statistically significant increase in the annual admission rate for TIA patients from 2000 to 2010, from 70% to 91%, respectively (difference, 22%; 95% confidence interval, 18%-26% [P < .001]). Separate analysis by sex showed similar increased admission rates for females and males.
Conclusions: We found that the admission rate for TIAs increased significantly from 2001 to 2010. This change in physicians' practice may be due to the body of evidence that TIA patients have a significant short-term risk of stroke.
abstract_id: PUBMED:28645805
Pre-admission CHA2DS2-VASc score and outcome of patients with acute cerebrovascular events. Background: The CHA2DS2-VASc score has been recommended for the assessment of thromboembolic risk in patients with atrial fibrillation. Data regarding the association between the pre-admission CHA2DS2-VASc score and the outcome of patients with stroke and TIA are scarce. We aimed to assess the predictive value of pre-admission CHA2DS2-VASc score for early risk stratification of patients with acute cerebrovascular event.
Methods: The study group consisted of 8309 patients (53% males, mean age 70±13.3 years) with acute stroke and TIA included in the prospective National Acute Stroke Israeli (NASIS) registry. The two primary end-points were in-hospital mortality and severe disability at discharge. We divided the study population into 4 groups according to their pre-admission CHA2DS2-VASc score (0-1, 2-3, 4-5, >5).
Results: Following multivariate analysis, odds ratios (OR) for all-cause mortality increased for CHA2DS2-VASc scores >1 (OR=2.1, 95% CI 1.2-3.6; OR=1.8, 95% CI 1.1-3.2; and OR=1.8, 95% CI 1.1-3.3 for patients with CHA2DS2-VASc scores of 2-3, 4-5 and >5, respectively; p<0.001). The OR for severe disability (mRS 4-5) at discharge increased significantly in direct association with the CHA2DS2-VASc score (OR=1.55, 95% CI 1.14-2.12; OR=2.42, 95% CI 1.8-3.3; and OR=3, 95% CI 2.19-4.27 for patients with CHA2DS2-VASc scores of 2-3, 4-5 and >5, respectively, compared with 0-1; p<0.001). Each 1-point increase in the CHA2DS2-VASc score was associated with a 21% increase in the risk for severe disability.
Conclusions: High-risk pre-admission CHA2DS2-VASc score among patients with acute cerebrovascular events is associated with higher in-hospital mortality and severe disability at discharge.
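As background for the score strata used above, the sketch below applies the conventional CHA2DS2-VASc point assignments (congestive heart failure 1; hypertension 1; age ≥75 years 2; diabetes 1; prior stroke/TIA/thromboembolism 2; vascular disease 1; age 65-74 years 1; female sex 1) and buckets the result into the four groups reported in the abstract. The scoring rule is the standard published one rather than anything taken from the study's own code, and the function names are illustrative.

```python
# Conventional CHA2DS2-VASc scoring and the 0-1 / 2-3 / 4-5 / >5 strata
# described in the abstract; not taken from the study's own code.

def cha2ds2_vasc(age: int, female: bool, chf: bool, hypertension: bool,
                 diabetes: bool, prior_stroke_tia: bool, vascular_disease: bool) -> int:
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score

def risk_stratum(score: int) -> str:
    """Group a score into the four strata used in the NASIS analysis above."""
    if score <= 1:
        return "0-1"
    if score <= 3:
        return "2-3"
    if score <= 5:
        return "4-5"
    return ">5"

if __name__ == "__main__":
    s = cha2ds2_vasc(age=78, female=True, chf=False, hypertension=True,
                     diabetes=True, prior_stroke_tia=True, vascular_disease=False)
    print(s, risk_stratum(s))  # prints: 7 >5
```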
abstract_id: PUBMED:26739868
Association of age and admission mean arterial blood pressure in patients with stroke-data from a national stroke registry. Elevated blood pressure (BP) upon admission is common in patients with ischemic stroke (IS) and intracerebral hemorrhage (ICH). Older patients have a higher prevalence of stroke, but data on admission mean arterial pressure (MAP) patterns in older patients with stroke are scarce. All 6060 patients with IS (72%), ICH (8%) and transient ischemic attack (TIA; 20%) with data on BP and hypertension status on admission in the National Acute Stroke Israeli Registry were included. Admission MAP in the emergency department was studied by age group (<60, 60-74 and ⩾75 years) and stroke type. Linear regression models for admission MAP were produced, including age group, gender, hypertension status and stroke severity as covariates. Interactions between hypertension and age were assessed. Lower MAP (s.d.) was associated with older ages in hypertensive patients (113 (18) mm Hg for age <60 years, 109 (17) for age 60-74 years and 108 (19) for age ⩾75 years, P<0.0001) but not in non-hypertensive IS patients. Among patients with ICH and TIA, a significant negative association of MAP with age was observed for hypertensive patients (P=0.015 and P=0.023, respectively), whereas a significant positive association with age was found in non-hypertensive patients (P=0.023 and P=0.038, respectively). In adjusted regression models, MAP was significantly associated with hypertension in IS, ICH and TIA patients. The interaction between hypertension and age was significantly associated with MAP in IS and ICH patients. In hypertensive patients, the average admission MAP was lower in persons at older ages.
abstract_id: PUBMED:24142684
Resting heart rate at hospital admission and its relation to hospital outcome in patients with heart failure. Background: Resting heart rate (HR) has been proven to influence long-term prognosis in patients with chronic heart failure (HF). The aim of this study was to assess the relationship between resting HR at hospital admission and hospital outcome in patients with HF.
Methods: The study included Polish patients admitted to hospital due to HF who agreed to participate in Heart Failure Pilot Survey of the European Society of Cardiology.
Results: The final analysis included 598 patients. Median HR at hospital admission was 80 bpm. In univariate analyses, higher HR at admission was associated with more frequent use of inotropic support (p = 0.0462) and diuretics (p = 0.0426), worse clinical (New York Heart Association - NYHA) status at discharge (p = 0.0483), longer hospital stay (p = 0.0303) and higher in-hospital mortality (p = 0.003). Compared to patients who survived, patients who died during hospitalization (n = 21; 3.5%) were older, more often had a history of stroke or transient ischemic attack and were characterized by higher NYHA class, higher HR at admission, lower systolic and diastolic blood pressure at admission, lower ejection fraction, lower glomerular filtration rate, and lower sodium and hemoglobin concentrations at hospital admission. In multivariate analysis, higher HR at admission (OR 1.594 [per 10 bpm]; 95% CI 1.061-2.395; p = 0.0248) and lower sodium concentration at admission (OR 0.767 [per 1 mmol/L]; 95% CI 0.618-0.952; p = 0.0162) were the only independent predictors of in-hospital mortality.
Conclusions: In patients with HF, higher resting HR at hospital admission is associated with increased in-hospital mortality.
abstract_id: PUBMED:12865612
The impact of the day of the week and month of admission on the length of hospital stay in stroke patients. Background: Length of hospital stay (LOS) in many diseases is determined by patient-related sociodemographic and clinical factors but also by the management of treatment and care. We examined the influence of the day of the week (DW) and month of admission on LOS in stroke patients.
Methods: We used data from a large regional stroke registry in the northwest of Germany for the years 2000 and 2001. The registry is based on a standardised data assessment in participating hospitals, including patient-related sociodemographic and clinical items, diagnostic and treatment procedures, stroke classification, as well as admission and discharge dates. Data management and analysis were done centrally.
Results: A total of 11,942 patients from 42 hospital departments were included. Median LOS was 13 days for ischemic stroke and 8 days for TIA. Patients with ischaemic stroke admitted on Tuesdays or Wednesdays had the longest LOS. We observed a significantly shorter LOS in patients discharged towards the weekend, both for ischaemic stroke and TIA, independent of age, gender and symptom severity. Discharges were predominantly done on Fridays (25%), transferrals to rehabilitation centres most often on Thursdays (28%) in neurologic departments but on Tuesdays in departments of internal medicine. LOS according to the month of admission revealed a biphasic curve for both ischaemic stroke and TIA, with shortest LOS in July and December.
Conclusions: Discharge patterns on weekends contribute to differences in LOS for the various DWs of admission. Changes in discharge organisation and improved co-operation with rehabilitation centres may shorten LOS. Differences in LOS between the months show considerable seasonal variability.
abstract_id: PUBMED:12670275
Time to hospital admission for acute stroke: an observational study. Objectives: To determine the time from symptom onset to hospital admission of patients with suspected acute stroke, final diagnoses and patient eligibility for thrombolytic therapy.
Design: Hospital-based, prospective, observational study.
Setting: Royal Adelaide Hospital Stroke Unit, South Australia.
Patients: All patients admitted to the unit with suspected acute stroke over 11 months (11 April to 10 October 2000 and 20 August 2001 to 19 January 2002).
Main Outcome Measures: Time from symptom onset to admission; final diagnosis.
Results: Of 284 patients admitted, 39 (14%) had diagnoses other than stroke (including eight with transient ischaemic attacks), 42 (15%) had haemorrhagic stroke and 203 (71%) had ischaemic stroke. Median time to admission after symptom onset was 6 hours (range, 30 min to 13 days), with 100 patients admitted within 3 hours of symptom onset (35%), and 80 within 2 hours (28%). Thirty-seven patients (13%) could have been considered for thrombolysis (diagnosis of non-severe but disabling ischaemic stroke and admission time < 3 hours). Location at stroke onset was the only independent predictor of time to admission.
Conclusions: Most patients with stroke do not present urgently to the emergency department, rendering them less likely to be considered for thrombolytic therapy.
Answer: Admission to an acute care hospital for patients with acute stroke or transient ischemic attack (TIA) is not always medically justified for all patients. A retrospective study found that only 39% of admissions were retrospectively categorized as medically justified, with criteria including another diagnosis warranting admission, an inadequate home situation, altered mental status, an adverse event during hospitalization, or the need for hospital-based treatment that could not be provided on an outpatient basis. However, 27% of those with medically justified admissions developed criteria justifying their admission after leaving the emergency department (ED), suggesting that ED evaluation cannot reliably identify patients whose condition will worsen (PUBMED:7710148).
Predictors of a positive hospital outcome, defined as identification of stroke etiologies that profoundly change clinical management, include coronary disease, age, and acute infarct on diffusion-weighted imaging (DWI). A composite score (PY score) based on these factors may help physicians make better admission decisions for TIA patients (PUBMED:24209902).
Other studies have focused on factors such as admission blood pressure, which has shown variable prognostic value. High blood pressure in patients with impaired consciousness on hospital admission was significantly related to 30-day mortality in patients with intracerebral hemorrhage and ischemic disease. However, high admission blood pressure in alert stroke patients was not related to increased mortality (PUBMED:8362433).
The time delay to admission and factors predicting early or late admission have also been studied. Factors such as admission by ambulance after calling emergency, TIA, or the patient's knowledge of the cause of the symptoms were predictors of early admission (PUBMED:12555704).
Admission rates for TIA patients have increased over time, possibly influenced by evidence that TIA patients have a significant short-term risk of stroke (PUBMED:23906624). Additionally, the pre-admission CHA2DS2-VASc score has been associated with higher in-hospital mortality and severe disability at discharge for patients with acute cerebrovascular events (PUBMED:28645805).
In conclusion, while admission is often the default approach, it may not be medically justified for all patients with acute stroke or TIA. Predictive tools and individual patient assessments are necessary to determine the need for hospital admission on a case-by-case basis. |
Instruction: Is the OSCE more stressful?
Abstracts:
abstract_id: PUBMED:36854090
Adapting the OSCE to the times of the COVID-19 pandemic. A look at the e-OSCE from the students' perspective. Background: At the Department of Medical Education, Centre for Innovative Medical Education at Jagiellonian University Medical College, a completely remote OSCE (e-OSCE) was conducted for the first time using the Microsoft Teams platform. 255 test takers were tasked with presenting their communication and clinical skills in order to assess clinical reasoning. Aim: Analysis of the assessment of the OSCE adaptation to the requirements of the COVID-19 pandemic at the Department of Medical Education in the form of the e-OSCE from the students' perspective. Methods: Discussion of the OSCE modification was carried out among 6th-year medical students and graduates undergoing validation of their foreign medical degrees. In order to assess students' opinions of the e-OSCE, we used questionnaires. The Statistica 12.0 program was used to analyse the results. Results: According to 91.57% of respondents, the e-OSCE was well-prepared. 60% of students strongly agree and 29.47% rather agree that the order of the stations was appropriate and clear. A majority of respondents rated the e-OSCE as fair. 66.32% of respondents strongly agree and rather agree that the proportions of communication and clinical skills were appropriate. The vast majority of the participants of the exam (81.05%) had enough time for individual stations. A statistically significant (p <0.0001) correlation was found between the type of classes and preparation for the e-OSCE. For 61.05% of respondents, the Laboratory Training of Clinical Skills course was the best preparation for students taking the e-OSCE. Taking into account the stressfulness of the OSCE, only 15.96% of students found the online form more stressful than the traditional (in-person) exam. Conclusions: The e-OSCE in students' opinions was well-organized. Informing test-takers prior to the e-OSCE about the role of invigilators assessing individual stations should be improved. The e-OSCE has been proven to be suitable for assessing a wide range of material and validating communication and clinical skills in appropriate proportions. The e-OSCE is fair according to examinees' opinion. The study proves that even in a pandemic, it is possible to prepare an online exam without exposing examiners and examinees to the dangers posed by COVID-19.
abstract_id: PUBMED:23683814
'I found the OSCE very stressful': student midwives' attitudes towards an objective structured clinical examination (OSCE). Background: The Objective Structured Clinical Examination (OSCE) has become widely accepted as a strategy for assessing clinical competence in nursing and midwifery education and training. There is a dearth of information, however, on the OSCE procedure from the perspective of midwifery students. In particular, there is an absence of an objective quantification of midwifery students' attitudes towards the OSCE.
Objectives: The objective of this study is to report the conduct and findings of a survey of midwifery students' attitudes towards a Lactation and Infant Feeding OSCE and to consider these attitudes in the context of the international literature and the empirical evidence base.
Methods: A descriptive survey design using an 18-item Likert (1 to 5 point) scale was used to capture the relevant data. Potential participants were 3rd year midwifery students who had undertaken a Lactation and Infant Feeding OSCE (n=35) in one School of Nursing & Midwifery in the Republic of Ireland. Survey responses were analysed using the Statistical Package for the Social Sciences Version 18.
Results: Thirty-three students completed the survey, providing a 94% response rate. Midwifery students' attitudes towards individual aspects of the OSCE varied. Overall, midwifery students were neutral/unsure of the OSCE as a strategy for assessing clinical competence (mean 3.3). Most agreed that the examiner made them feel at ease (mean 3.94). In contrast, this did not appear to ease student nerves and stress, as the majority agreed that the OSCE evokes nervousness (mean 4.27) and stress (mean 4.30). Midwifery students, overall, disagreed that the OSCE reflected real-life clinical situations (mean 2.48). Midwifery students were neutral/unsure that the OSCE provided an opportunity to show their practical skills (mean 3.36).
Conclusion: The findings of this study identified that midwifery students were neutral/unsure of the OSCE as a strategy for assessing clinical competence. This has relevance for OSCE development at the authors' institution. The results suggest the need to explore further why students responded in this way. This will assist to develop this OSCE further to ensure that it becomes a positive assessment process for midwifery students and for student learning as they progress through their midwifery education and training.
abstract_id: PUBMED:31239801
An evaluative study of objective structured clinical examination (OSCE): students and examiners perspectives. Background: The objective structured clinical examination (OSCE) is the gold standard and universal format to assess the clinical competence of medical students in a comprehensive, reliable and valid manner. The clinical competence is assessed by a team of many examiners on various stations of the examination. Therefore, it is found to be a more complex, resource- and time-intensive assessment exercise compared to the traditional examinations. Purpose: The objective of this study was to determine the final year MBBS students' and OSCE examiners' perception on the attributes, quality, validity, reliability and organization of the Medicine and Therapeutics exit OSCE held at the University of the West Indies (Cave Hill) in June 2017. Methods: At the end of the OSCE, students and examiners were provided with a questionnaire to obtain their views and comments about the OSCE. Due to the ordinal level of data produced by the Likert scale survey, statistical analysis was performed using the median, IQR and chi-square. Results: A total of 52 students and 22 examiners completed the questionnaire. The majority of the students provided positive views regarding the attributes (eg, fairness, administration, structure, sequence, and coverage of knowledge/clinical skills), quality (eg, awareness, instructions, tasks, and sequence of stations), validity and reliability (eg, true measure of essential clinical skills, standardized, practical and useful experiences), and organization (eg, orientation, timetable, announcements and quality of examination rooms) of the OSCE. Similarly, majority of the examiners expressed their satisfaction with organization, administration and process of OSCE. However, students expressed certain concerns such as stressful environment and difficulty level of OSCE. Conclusion: Overall, the OSCE was perceived very positively and welcomed by both the students and examiners. The concerns and challenges regarding OSCE can be overcome through better orientation of the faculty and preparation of the students for the OSCE.
abstract_id: PUBMED:38486530
Development and Implementation of an Objective Structured Clinical Examination (OSCE) of the Subject of Surgery for Undergraduate Students in an Institution with Limited Resources. Aim: To develop and to test the feasibility of conducting an objective structured clinical examination (OSCE) of the subject of surgery for third-year medical students in a limited-resources institution. Methods: The OSCE was planned following the Kane validity framework. A blueprint based on the curriculum was developed to design stations. A specific checklist/rubric (using Google Forms) was elaborated for each station. The pass score was determined using the Modified Angoff Approach. Cronbach's alpha was used to determine the reliability. The whole process was evaluated by assessing students' and professors' satisfaction using a survey. Results: It was feasible to develop and implement an OSCE in an institution with limited resources. 28 students and 10 examiners participated. Both considered that the OSCE allows evaluation of the clinical competencies of the subject. They consider that this kind of assessment changed their way of studying, placing more emphasis on clinical skills. In the same way, they consider that it is more objective and less stressful when compared to other traditional methods. Similarly, the implementation of this strategy encourages teachers to improve teaching strategies. Conclusion: It is possible to implement an OSCE in an institution with limited resources. The incorporation of this tool has a positive impact on learning.
abstract_id: PUBMED:30899701
OSCE: DESIGN, DEVELOPMENT AND DEPLOYMENT. Background: OSCE - Objective Structured Clinical Examination - as an examination format was developed by Harden and colleagues in 1975 as an answer to the oft-criticised traditional long case clinical examination which was judged to have low psychometric properties. Since then it has received wide acceptance globally as an objective form of assessing clinical competences both at the undergraduate and post graduate levels. Despite this wide acceptability and usage, many medical institutions in the West African sub region are yet to embrace this reality. However there has been a renaissance of interest in the past decade within the sub region such that medical assessments at both undergraduate and postgraduate levels are increasingly adopting the OSCE system. A lot of training and capacity building need to be done. It is in the light of this that a comprehensible and moderately comprehensive document has been developed for the benefit of medical teachers and examiners in the West African sub region.
Aim: This document aims to provide the medical teachers and examiners in the West African sub region a valuable, easily understood OSCE document that will facilitate their understanding and use of OSCE as an assessment tool, based on wide experience of use, capacity building and establishment of the format in medical schools and postgraduate institutions in the sub region.
Methodology: A widespread relevant literature search using different search platforms was conducted to identify published works, monograms and workshop manuals that met the aim and objectives targeted.
Results: Numerous publications were identified: most highlighted the work of the original authors of the OSCE, others qualitatively compared the OSCE with traditional examinations, a few quantitatively compared the OSCE with the traditional examination, and yet others examined aspects of cost and security. This document is a distillate of all of the above, giving the reader a balanced coverage of the subject.
Conclusion: An OSCE document comprehensively but compactly presented is made available for trainers and examiners in West African sub region and which easily serves as a reference document to facilitate and improve the quality of OSCE assessments in the sub region.
abstract_id: PUBMED:33564195
Changes in the Objective Structured Clinical Examination (OSCE) of University Schools of Medicine during COVID-19. Experience with a computer-based case simulation OSCE (CCS-OSCE). Background And Objectives: The COVID-19 pandemic has forced universities to move the completion of university studies online. Spain's National Conference of Medical School Deans coordinates an objective, structured clinical competency assessment called the Objective Structured Clinical Examination (OSCE), which consists of 20 face-to-face test sections for students in their sixth year of study. As a result of the pandemic, a computer-based case simulation OSCE (CCS-OSCE) has been designed. The objective of this article is to describe the creation, administration, and development of the test.
Materials And Methods: This work is a descriptive study of the CCS-OSCE from its planning stages in April 2020 to its administration in June 2020.
Results: The CCS-OSCE evaluated the competences of anamnesis, exploration, clinical judgment, ethical aspects, interprofessional relations, prevention, and health promotion. No technical or communication skills were evaluated. The CCS-OSCE consisted of ten test sections, each of which had a 12-minute time limit and ranged from six to 21 questions (mean: 1.1 minutes/question). The CCS-OSCE used the virtual campus platform of each of the 16 participating medical schools, which had a total of 2,829 students in their sixth year of study. It was jointly held on two dates in June 2020.
Conclusions: The CCS-OSCE made it possible to bring together the various medical schools and carry out interdisciplinary work. The CCS-OSCE conducted may be similar to Step 3 of the United States Medical Licensing Examination.
abstract_id: PUBMED:23386897
Rasch analysis on OSCE data : An illustrative example. Background: The Objective Structured Clinical Examination (OSCE) is a widely used tool for the assessment of clinical competence in health professional education. The goal of the OSCE is to make reproducible decisions on pass/fail status as well as students' levels of clinical competence according to their demonstrated abilities based on the scores. This paper explores the use of the polytomous Rasch model in evaluating the psychometric properties of OSCE scores through a case study.
Method: The authors analysed an OSCE data set (comprised of 11 stations) for 80 fourth year medical students based on the polytomous Rasch model in an effort to answer two research questions: (1) Do the clinical tasks assessed in the 11 OSCE stations map on to a common underlying construct, namely clinical competence? (2) What other insights can Rasch analysis offer in terms of scaling, item analysis and instrument validation over and above the conventional analysis based on classical test theory?
Results: The OSCE data set has demonstrated a sufficient degree of fit to the Rasch model (χ² = 17.060, df = 22, p = 0.76) indicating that the 11 OSCE station scores have sufficient psychometric properties to form a measure for a common underlying construct, i.e. clinical competence. Individual OSCE station scores with good fit to the Rasch model (p > 0.1 for all χ² statistics) further corroborated the characteristic of unidimensionality of the OSCE scale for clinical competence. A Person Separation Index (PSI) of 0.704 indicates sufficient level of reliability for the OSCE scores. Other useful findings from the Rasch analysis that provide insights, over and above the analysis based on classical test theory, are also exemplified using the data set.
Conclusion: The polytomous Rasch model provides a useful and supplementary approach to the calibration and analysis of OSCE examination data.
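To make the model referenced above concrete, the snippet below computes category probabilities under the partial credit (polytomous Rasch) formulation, where the probability of scoring k on a station is proportional to the exponent of the sum of (ability minus step difficulty) over the first k steps. It is a generic sketch of the model, not the authors' analysis (which would typically be run in dedicated Rasch software); the threshold values in the example are invented for illustration.

```python
import math

def pcm_category_probs(theta, thresholds):
    """Category probabilities for one polytomously scored OSCE station under the
    partial credit (polytomous Rasch) model.

    theta      -- person ability (latent clinical competence)
    thresholds -- list of step difficulties delta_1..delta_K for the station
    Returns a list of probabilities for scores 0..K.
    """
    # Score 0 corresponds to an empty sum; each further score adds (theta - delta_j).
    cumulative = [0.0]
    for delta in thresholds:
        cumulative.append(cumulative[-1] + (theta - delta))
    numerators = [math.exp(c) for c in cumulative]
    total = sum(numerators)
    return [n / total for n in numerators]

if __name__ == "__main__":
    # Invented step difficulties for a station scored 0-3.
    probs = pcm_category_probs(theta=0.5, thresholds=[-1.0, 0.0, 1.5])
    print([round(p, 3) for p in probs])  # probabilities sum to 1
```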
abstract_id: PUBMED:32058920
The usefulness and acceptance of the OSCE in nursing schools. This qualitative study explores the usefulness and acceptance attributed by students and faculty members to an Objective Structured Clinical Evaluation (OSCE) administered to nursing undergraduates in Catalonia (Spain) for 10 years. Seventy undergraduate nursing students and twelve faculty members participated in the study. The data collection techniques included an open-ended questionnaire, a student focus group, and individualized faculty interviews. The students experienced the OSCE positively as a learning event that offered an opportunity for feedback that could help them master the required competencies. The OSCE increased students' responsibility by presenting them with a set of challenges that they had to tackle individually. Moreover, it reaffirmed their confidence in situations that closely resembled professional practice. Faculty members valued the ability of the OSCE to integrate and assess competencies, its objectivity, and the indirect information it provided on the effectiveness of the curriculum. The educational impact attributed to the OSCE and its acceptance among faculty and students suggest that it would be useful to re-implement it in the Bachelor's of Nursing in Catalan universities. Our findings may be of use to other nursing programs considering how to assess competency-based education, especially in the context of the European Higher Education Area.
abstract_id: PUBMED:32874085
Experience and Challenges of Objective Structured Clinical Examination (OSCE): Perspective of Students and Examiners in a Clinical Department of Ethiopian University. Background: Invented nearly half a century ago, the Objective Structured Clinical Examination (OSCE) is an overwhelmingly accepted clinical skills assessment tool and has been used worldwide for evaluating and teaching learners' competences in health care disciplines. Regardless of factors affecting its attributes, the OSCE is considered a reliable and powerful tool with certain validity evidence. In spite of its advantages and various promotion efforts, the progress of OSCE implementation in Ethiopian public universities has not been satisfactory. Therefore, the objective of this study was to explore the experience and challenges of OSCE implementation from the perspective of clinical year-II medical students and their examiners in the Ob-Gyn Department of Jimma University.
Methods: Forty-nine students and seven examiners voluntarily participated in Ob-Gyn Department where OSCE has been used as one of summative assessment methods. Qualitative study design using structured open-ended questionnaire as a tool and descriptive phenomenology as underpinning method were employed. Collaizzi's descriptive analysis was used as phenomenological analysis approach.
Result: Poor organization, inadequate student preparation time, and inadequate number and duration of stations emerged as overarching themes of factors negatively affecting OSCE implementation. Satisfaction with the OSCE was the only theme with findings that encourage OSCE implementation.
Conclusion: There should be a team approach, shared responsibility and proper planning among faculty to minimize factors hindering OSCE implementation. Besides faculty development on the OSCE, the department should improve skills lab utilization by arranging schedules for both students and faculty members to increase students' guided exposure to simulation-based learning and ultimately enhance OSCE implementation.
abstract_id: PUBMED:25202170
Reliability of the OSCE for Physical and Occupational Therapists. [Purpose] To examine agreement rates between faculty members and clinical supervisors as OSCE examiners. [Subjects] The study involved physical and occupational therapists who had been working in clinical environments for 1 to 5 years after graduating from training schools as OSCE examinees, with a physical or occupational therapy faculty member and a clinical supervisor as examiners. Another clinical supervisor acted as a simulated patient. [Methods] The agreement rate between the examiners for each OSCE item was calculated based on Cohen's kappa coefficient to confirm inter-rater reliability. [Results] The agreement rates for the behavioral aspects of the items were higher in the second than in the first examination. Similar increases were also observed in the agreement rates for the technical aspects until the initiation of each activity; however, the rates decreased during the middle to terminal stages of continuous movements. [Conclusion] The results may reflect the recent implementation of measures for the integration of therapist education in training schools and clinical training facilities.
Answer: The perception of stress associated with the Objective Structured Clinical Examination (OSCE) varies among students. According to the abstracts provided:
- In the context of the COVID-19 pandemic, only 15.96% of students found the online form of the OSCE (e-OSCE) more stressful than the traditional in-person exam (PUBMED:36854090).
- A study on midwifery students' attitudes towards the OSCE found that the majority agreed that the OSCE evokes nervousness (mean 4.27) and stress (mean 4.30), indicating a high level of stress associated with the examination (PUBMED:23683814).
- Another study reported that students expressed concerns such as a stressful environment and the difficulty level of the OSCE, suggesting that the OSCE can be perceived as stressful (PUBMED:31239801).
- Conversely, an institution with limited resources found that students and examiners considered the OSCE to be more objective and less stressful compared to other traditional methods (PUBMED:38486530).
- A study exploring the experience and challenges of OSCE implementation in an Ethiopian university did not explicitly mention stress but highlighted poor organization and inadequate preparation time as factors that could potentially contribute to a stressful experience (PUBMED:32874085).
Overall, while some students find the OSCE to be a stressful experience, others may perceive it as less stressful compared to traditional examination methods. Factors such as the organization of the OSCE, the preparation provided, and the examination environment can influence the level of stress experienced by students. |
Instruction: Cross-matched blood in colorectal surgery: a clinical waste?
Abstracts:
abstract_id: PUBMED:21176060
Cross-matched blood in colorectal surgery: a clinical waste? Aim: This study was carried out to determine the rate of perioperative blood transfusion and to create an evidence-based approach to requesting blood for elective colorectal surgery.
Method: A comparative cohort study was carried out of 164 patients (107 men, 57 women, median age 68 years) who underwent major colorectal surgery. Details obtained included demographic and operative information, the number of units of blood cross-matched, units used, the reasons for transfusion and patient suitability for electronic issue (EI). The cross-match to transfusion ratio (C:T ratio) was calculated for each procedure and for the whole group of colorectal procedures.
Results: Some 162 units of blood were cross-matched for 76 (46%) patients, with the remaining 88 (54%) being grouped with serum saved. Twenty-one (13%) were transfused with a total of 48 units of blood. The C:T ratio for all procedures was 3.4:1. The commonest indication for transfusion was anaemia. One patient required an emergency transfusion. The majority (78%) of patients were suitable for EI. There were no significant differences between the transfused and nontransfused groups with regard to age, diagnosis (malignant vs benign) and laparoscopic or open colorectal procedure.
Conclusion: Only a small proportion of patients undergoing elective major colorectal surgery require perioperative blood transfusions, most of which are nonurgent. Blood should not be routinely cross-matched in patients who are suitable for EI.
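The headline figure above is simple arithmetic: the cross-match to transfusion ratio is the number of units cross-matched divided by the number of units transfused. A minimal sketch using the abstract's own figures (162 units cross-matched, 48 transfused) is shown below; the function name is illustrative, not part of the study.

```python
# Cross-match to transfusion (C:T) ratio: units cross-matched / units transfused.

def ct_ratio(units_crossmatched: int, units_transfused: int) -> float:
    return units_crossmatched / units_transfused

if __name__ == "__main__":
    # Figures reported in the abstract above: 162 units cross-matched, 48 transfused.
    print(round(ct_ratio(162, 48), 1))  # prints 3.4, i.e. a C:T ratio of about 3.4:1
```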
abstract_id: PUBMED:30804680
Utilization of cross-matched blood in elective thyroid and parathyroid surgeries: a single-center retrospective study. Background: Hospital blood banks face the common challenge of maintaining an adequate supply of blood products to serve all potential patients while minimizing the need to discard expired blood products. This study aimed to determine the risk of blood transfusion during elective thyroid and parathyroid surgery and potential factors related to blood loss and risk of transfusion in these cases.
Methods: The study included all thyroid and parathyroid surgeries performed at King Abdulaziz University Hospital between January 2015 and December 2017. After exclusion of patients with incomplete data, 179 patients with complete data who had undergone thyroid and parathyroid surgery were analyzed.
Results: Of the179 patients included in this study, 33 (18.4%) were male. Overall, patients had a mean age and body-mass index of 44.55±13.67 years and 27.66±5.38 kg/m2, respectively. The mean duration of surgery was 168.48±90.69 minutes. None of the patients had a history of previous radiotherapy, bleeding disorder, or blood transfusion. Benign goiter was the most common finding (n=78, 43.6%), followed by papillary carcinoma (n=49, 27.4%). During surgery, most patients (n=136, 76.0%) experienced minimal blood loss. None of the patients in our cohort (n= 179) required any blood transfusion or products.
Conclusion: In this study, we aimed to audit the surgical blood-ordering and -transfusion practices associated with elective thyroid and parathyroid surgeries at our institution. These practices are intended to balance the availability of blood products with the avoidance of unnecessary wastage. In our study of patients who underwent elective thyroid and parathyroid surgeries, none required blood transfusion.
abstract_id: PUBMED:33077398
Survey of Maximum Blood Ordering for Surgery (MSBOS) in elective general surgery, neurosurgery and orthopedic surgery at the Poursina Hospital in Rasht, Iran, 2017. Introduction: Blood is a valuable life resource that depends on the donation of blood by the community. As a result, it is crucial that the manner in which this expensive resource is used be correct and reasonable.
Objective: The purpose of this study was to investigate the Maximum Blood Ordering for Surgery (MSBOS) in general, orthopedic and neurosurgical elective surgeries at the Poursina Hospital in Rasht in 2017.
Methods: According to the patient file number, information such as gender, age, type of surgery, number of blood units requested, number of cross-matched blood units, number of blood units transfused, number of patients undergoing transfusion, number of patients who were cross-matched, initial hemoglobin and the underlying disease was extracted from the HIS (Hospital Information System). Based on the collected data, a descriptive report of the cross-match to transfusion ratio (C/T), transfusion index (TI) and transfusion probability (%T) was produced, using means and standard deviations, in SPSS 16.
Results: In the present study, 914 patients from the neurosurgery, orthopedic and general surgery wards of the Poursina Hospital were studied. Of these, 544 were male (59.5%) and 370 were female (40.5%), aged 1-99 years, with a mean age of 43 years. The frequency distribution of C/T in this study was 1.29 in neurosurgery, 1.95 in orthopedic surgery and 1.96 in general surgery. This study indicated that the C/T index was above the normal standard level in four different kinds of surgery, including leg fracture (2.71), cholecystectomy (2.71), forearm fracture (2.70), and skin graft (2.62). The C/T index was at the maximum normal level in thyroidectomy surgery (2.5). The other surgeries had the normal C/T index.
Conclusion: Overall, all of the MSBOS indices were at the standard level in this study, although C/T indices were higher than the standard level in the surgeries for cholecystectomy, leg fracture, forearm fracture, hand fracture and skin graft.
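The abstract does not spell out the formulas behind C/T, TI and %T, so the sketch below uses the conventional blood-utilization definitions (C/T = units cross-matched per unit transfused, TI = units transfused per cross-matched patient, %T = percentage of cross-matched patients actually transfused) together with commonly cited acceptability thresholds. Both the definitions and the example numbers are assumptions for illustration, not figures from the study.

```python
# Conventional blood-utilization indices used in MSBOS audits; these are the
# commonly used definitions, not formulas taken from the study itself.

def msbos_indices(units_crossmatched, units_transfused,
                  patients_crossmatched, patients_transfused):
    return {
        # Cross-match to transfusion ratio; values below ~2.5 are usually taken
        # to indicate efficient ordering.
        "C/T": units_crossmatched / units_transfused,
        # Transfusion index: units transfused per cross-matched patient
        # (values above ~0.5 are often considered acceptable).
        "TI": units_transfused / patients_crossmatched,
        # Transfusion probability: share of cross-matched patients transfused
        # (values above ~30% are often considered acceptable).
        "%T": 100.0 * patients_transfused / patients_crossmatched,
    }

if __name__ == "__main__":
    # Purely illustrative numbers, not data from the abstract.
    print(msbos_indices(units_crossmatched=200, units_transfused=110,
                        patients_crossmatched=120, patients_transfused=70))
```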
abstract_id: PUBMED:24340234
Primary elective spine arthrodesis: Audit of institutional cross matched to transfused (C/T) ratio to develop blood product ordering guidelines. Background: Currently, there are no uniform guidelines regarding the appropriate amount of blood products ordered prior to spine surgery. Here, we audited our own institution's practices along with preoperative variables that contributed to perioperative transfusion requirements for elective spinal arthrodesis.
Methods: This study utilized a single institution retrospective chart review of patients undergoing elective spinal fusion over a 2 year period. The cross matched to transfused (C/T) ratio was utilized to compare different patient groups. Sub-group multivariate analysis enabled us to identify possible predictors of transfusion for this patient population.
Results: Eighty-five patients were included in the study. Of the 292 units of packed red blood cells ordered preoperatively, only 66 were transfused (C/T ratio 4.4:1). Those undergoing arthrodesis for degenerative disease (6.9:1) or cervical spine arthrodesis (23:1) had the highest C/T ratios. Univariate analysis revealed several factors contributing to a relatively high probability of perioperative transfusions, while multivariate analysis showed that the indication for surgery was the only factor independently associated with the requirement for transfusion.
Conclusion: We found an unacceptably high C/T ratio at our institution. Based on the results of our univariate analysis, we recommend that two units of packed cells be arranged for patients with preoperative hemoglobin levels <9 g/dl, trauma, and Adult Idiopathic Scoliosis (AIS) cases, or where more than two levels were being decompressed and/or arthrodesed. For the remainder of the cases, a group and hold policy should be sufficient.
abstract_id: PUBMED:8292241
Requirements for cross-matched blood in burns surgery. This prospective study of operative blood loss during burns surgery in 142 patients demonstrated no significant difference between the loss seen during layered excision and grafting up to 7 days after thermal injury and that observed during formal excision and grafting from 8 days onward. There was no significant difference between the blood loss in infants under 3 years old and that seen in older children and adults. From the data obtained, a graphical 'tariff' has been constructed for both children up to 20 kg and older children and adults over 20 kg in weight for the ordering of cross-matched blood for perioperative transfusion. It is suggested that the use of this tariff will help to strike a balance between the wasteful overprovision and dangerous underprovision of cross-matched blood for burns surgery.
abstract_id: PUBMED:30044166
Blood cardioplegia benefits only patients with a long cross-clamp time. Introduction: A clear advantage of blood versus crystalloid cardioplegia has not yet been observed in smaller population studies. The purpose of this article was to further investigate the clinical outcomes of blood versus crystalloid cardioplegia in a large propensity-matched cohort of patients who underwent cardiac surgery.
Methods: This was a single-centre study. Data were drawn from the Western Denmark Heart Registry, which comprises a perfusion section for each procedure. A total of 4,852 patients were propensity matched into crystalloid (CC) vs blood cardioplegia (BC) groups. The primary end points were creatine kinase-MB (CKMB) elevation, acute myocardial infarction (AMI), stroke, dialysis, coronary angiography (CAG) and mortality (30 days and 6 months).
Results: We found a lower odds ratio for 30-day mortality in the BC group (OR 0.21; CI 0.06-0.68), but no difference in overall 6-month mortality. There was no difference in CKMB elevation, AMI, dialysis or stroke. Several end points were further analysed for different cross-clamp times. In the CC group, ventilation time above 600 minutes was seen more often in almost all cross-clamp time intervals (23.5% vs 12.2%; p<0.0001; χ2-test) and 6-month mortality was significantly higher when the cross-clamp time exceeded 210 minutes (64.3 vs 23.8; p=0.018; χ2-test).
Conclusions: We did not find clear evidence of superiority of either type in the uncomplicated patient. When prolonged cross-clamp time or postoperative ventilation is expected, this study indicates that blood cardioplegia might be preferable.
abstract_id: PUBMED:9643594
Cross-matched blood for major head and neck surgery: an analysis of requirements. We retrospectively analysed our blood ordering practice; the number of units of cross-matched blood requested was compared with the number transfused, in 70 patients undergoing a total of 82 ablative operations for malignant disease. Patients undergoing neck dissection alone, or excision of tumour with free revascularized flap reconstruction without neck dissection, are unlikely to require blood transfusion. Operations that include excision of tumour with primary closure and neck dissection, excision of tumour with pedicled flap reconstruction and excision of tumour with any form of flap reconstruction and neck dissection in continuity, will probably require transfusion. If atypical antibodies are present in the patient's serum on screening, cross-matched blood should always be available preoperatively. Provided that atypical antibodies are not present and that blood is available within 40 minutes from the blood bank, our results show that it is safe to adopt a policy of blood grouping and saving serum, for patients undergoing neck dissection alone, but cross-matching two or more units of blood for patients who are to have more extensive operations.
abstract_id: PUBMED:16551419
Audit on the efficient use of cross-matched blood in elective total hip and total knee replacement. Introduction: This prospective audit studies the use of cross-matched blood in 301 patients over a 1-year period undergoing total knee (TKR) and total hip replacement (THR) surgery in an orthopaedic unit.
Patients And Methods: Analysis over the first 6 months revealed a high level of unnecessary cross-matched blood. The following interventions were introduced: (i) to cease routine cross-matching for THR; (ii) all patients to have a full blood count check on day 2 after surgery; and (iii) Hb < 8 g/dl to be considered as the trigger for transfusion in patients over 65 years and free from significant co-morbidity. These changes are in accordance with published national guidelines [Anon. Guidelines for the clinical use of red cell transfusions. Br J Haematol 2001; 113: 24-31].
Results: In the next 6 months, the number of units cross-matched but not transfused fell by 96% for THR, and the cross-match transfusion (C:T) ratio reduced from 3.21 to 1.62. Reductions were also observed for the TKR cohort. These results provide evidence of a substantial risk and cost benefit in the use of this limited resource. A telephone survey of 44 hospitals revealed that 20 hospitals routinely cross-matched blood for THR and 11 do so for TKR.
Conclusions: Changes can be made to the Maximum Surgical Blood Ordering Schedules (MSBOS) in other orthopaedic units according to national guidelines.
abstract_id: PUBMED:9153839
Audit of waste collected over one week from ten dental practices. A pilot study. An audit of the waste practices of ten general dental surgeries identified problems that have occurred due to the lack of specific dental guidelines or codes of practice in this area. Occupational health and safety requirements for types and locations of sharps containers, and lack of consensus on what constitutes a sharp, were identified as areas needing attention. Cross-infection control items, such as gloves, masks, single-use cups, and protective coverings, were found to constitute up to 91 per cent of total waste. When infectious waste was reclassified by the audit team as 'that waste which was visibly blood stained,' a reduction in waste in this category was made, during the audit, at each practice. The practice of disposing of radiographic fixer and developer into the sewerage system occurred in three out of the ten practices, even though the Australian Dental Association Inc. has discouraged this practice.
abstract_id: PUBMED:28596662
Rh and Kell Phenotype Matched Blood Versus Randomly Selected and Conventionally Cross Matched Blood on Incidence of Alloimmunization. There is paucity of literature regarding efficacy of transfusion of Rh and Kell matched blood in reducing alloimmunization risk among non-chronically transfused patients. A prospective study to compare efficacy of Rh and Kell phenotype matched blood over randomly selected and conventionally cross-matched blood on the incidence of alloimmunization in patients undergoing cardiac surgery was carried out in the Department of Transfusion Medicine at Indraprastha Apollo Hospitals, New Delhi, from 1st September, 2013 to 31st December, 2014. Two groups, A and B of 250 each were studied. Group A received ABO, Rh and Kell phenotype matched units. Group B received units matched only for ABO and Rh D. Retrospective analysis for antigenic exposures was done. Alloimmunization rate was evaluated for both groups after 72 h and 4 weeks and compared. A p value ≤0.05 was considered statistically significant. None of the patients in Group A were alloimmunized. In Group B, 119 patients received antigenic stimulus (single antigen stimuli- 93; multiple- 26). The probability of a patient being exposed was 52.4 %. At 6 weeks post transfusion, one patient developed 'Anti-E' and had received 'E' stimulus once. The rate of alloimmunization was 0.4 % in group B, 0.8 % overall and the risk of alloimmunization per unit transfused was 0.17 %. Non responders were 99.16 %. The study did not reach statistical significance (p = 0.238). Majority of our population are non-responders therefore, the resources and time can be reserved for providing Rh and Kell matched units for multiply transfused patients.
Answer: The study identified in abstract PUBMED:21176060 investigated the necessity of cross-matching blood for elective colorectal surgery. The results showed that only a small proportion (13%) of patients required perioperative blood transfusions, and most of these were nonurgent. The study concluded that routine cross-matching of blood is not necessary for patients who are suitable for electronic issue (EI), suggesting that the practice of routinely cross-matching blood for these surgeries may indeed be a clinical waste. The cross-match to transfusion (C:T) ratio for all procedures was 3.4:1, indicating that more blood was cross-matched than actually used, supporting the notion that the process could be optimized to reduce waste. |
Instruction: Does PTSD moderate the relationship between social support and suicide risk in Iraq and Afghanistan War Veterans seeking mental health treatment?
Abstracts:
abstract_id: PUBMED:20721901
Does PTSD moderate the relationship between social support and suicide risk in Iraq and Afghanistan War Veterans seeking mental health treatment? Objective: This study examined posttraumatic stress disorder (PTSD) as a potential moderating variable in the relationship between social support and elevated suicide risk in a sample of treatment-seeking Iraq and Afghanistan War Veterans.
Method: As part of routine care, self-reported marital status, satisfaction with social networks, PTSD, and recent suicidality were assessed in Veterans (N=431) referred for mental health services at a large Veteran Affairs Medical Center. Logistic regression analyses were conducted using this cross-sectional data sample to test predictions of diminished influence of social support on suicide risk in Veterans reporting PTSD.
Results: Thirteen percent of Veterans were classified as being at elevated risk for suicide. Married Veterans were less likely to be at elevated suicide risk relative to unmarried Veterans and Veterans reporting greater satisfaction with their social networks were less likely to be at elevated risk relative to Veterans reporting lower satisfaction. Satisfaction with social networks was protective for suicide risk in PTSD and non-PTSD cases, but was significantly less protective for veterans reporting PTSD.
Conclusions: Veterans who are married and Veterans who report greater satisfaction with social networks are less likely to endorse suicidal thoughts or behaviors suggestive of elevated suicide risk. However, the presence of PTSD may diminish the protective influence of social networks among treatment-seeking Veterans.
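As a rough sketch of how a moderation test like the one described above might be set up, the code below fits a logistic regression of elevated suicide risk on satisfaction with social networks, PTSD status, and their interaction; the interaction term is what carries the moderation claim. The data frame, column names and synthetic values are hypothetical, and this is not the study's own code or data.

```python
# Hedged sketch of a moderation analysis: logistic regression of elevated
# suicide risk on social-network satisfaction, PTSD status, and their
# interaction. All column names and values below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_moderation_model(df: pd.DataFrame):
    """df needs a binary 'elevated_risk' column, a binary 'ptsd' column, and a
    numeric 'satisfaction' column (satisfaction with social networks)."""
    # The satisfaction:ptsd interaction tests whether PTSD weakens (moderates)
    # the protective association between satisfaction and risk.
    model = smf.logit("elevated_risk ~ satisfaction * ptsd", data=df)
    return model.fit(disp=False)

if __name__ == "__main__":
    # Tiny synthetic example purely to show the call pattern.
    df = pd.DataFrame({
        "elevated_risk": [1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0],
        "satisfaction":  [1, 2, 3, 4, 5, 2, 1, 2, 3, 3, 4, 5],
        "ptsd":          [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
    })
    result = fit_moderation_model(df)
    print(result.params)
```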
abstract_id: PUBMED:19626682
Posttraumatic stress disorder as a risk factor for suicidal ideation in Iraq and Afghanistan War veterans. Posttraumatic stress disorder (PTSD) was examined as a risk factor for suicidal ideation in Iraq and Afghanistan War veterans (N = 407) referred to Veterans Affairs mental health care. The authors also examined if risk for suicidal ideation was increased by the presence of comorbid mental disorders in veterans with PTSD. Veterans who screened positive for PTSD were more than 4 times as likely to endorse suicidal ideation relative to non-PTSD veterans. Among veterans who screened positive for PTSD (n = 202), the risk for suicidal ideation was 5.7 times greater in veterans who screened positive for two or more comorbid disorders relative to veterans with PTSD only. Findings are relevant to identifying risk for suicide behaviors in Iraq and Afghanistan War veterans.
abstract_id: PUBMED:34914414
Suicidal ideation in Iraq and Afghanistan veterans with mental health conditions at risk for homelessness. Suicide prevention among Veterans is a national priority. Overlap exists between conditions that may increase risk for suicide (e.g., mental health conditions, financial stressors, lack of social support) and homelessness among Veterans. We examined predictors of variance in suicidal ideation (SI) among 58 Iraq/Afghanistan Veterans at risk for homelessness who were receiving residential mental health treatment. Participants were classified as SI nonendorsers (n = 36) or SI endorsers (n = 22), based on their Patient Health Questionnaire-9 (PHQ-9) responses. Independent t tests and chi-square tests were used to examine group differences on baseline demographic variables, neuropsychological measures, and emotional/physical health symptom measures. Compared to nonendorsers, SI endorsers were significantly younger and reported less Veterans Affairs (VA) disability income, less total monthly income, less physical pain, lower quality of life overall and in the psychological health domain, lower community reintegration satisfaction, and more severe anxiety. Groups did not significantly differ on cognitive measures. A subsequent logistic regression revealed that only younger age uniquely predicted variance in SI endorsement. Younger age may be a particularly important factor to consider when assessing suicide risk in Veterans at risk for homelessness. Identifying predictors of variance in SI may help inform future treatment and suicide prevention efforts for Veterans at risk for homelessness. Future longitudinal research examining predictors of suicidality is warranted.
abstract_id: PUBMED:21451353
Hopelessness and suicidal ideation in Iraq and Afghanistan War Veterans reporting subthreshold and threshold posttraumatic stress disorder. We examined hopelessness and suicidal ideation in association with subthreshold and threshold posttraumatic stress disorder (PTSD) in a sample of Iraq and Afghanistan War Veterans (U.S., N = 275) assessed within a specialty VA postdeployment health clinic. Veterans completed paper-and-pencil questionnaires at intake. The military version of the PTSD Checklist was used to determine PTSD levels (No PTSD; subthreshold PTSD; PTSD), and endorsement of hopelessness or suicidal ideation were used as markers of elevated suicide risk. Veterans were also asked if they received mental health treatment in the prior 6 months. Veterans reporting subthreshold PTSD were 3 times more likely to endorse these markers of elevated suicide risk relative to the Veterans without PTSD. We found no significant differences in likelihood of endorsing hopelessness or suicidal ideation comparing subthreshold and threshold PTSD groups, although the subthreshold PTSD group was less likely to report prior mental health treatment. Clinicians should be attentive to suicide risk in returned Veterans reporting both subthreshold and threshold PTSD.
abstract_id: PUBMED:25876141
Prevalence of, risk factors for, and consequences of posttraumatic stress disorder and other mental health problems in military populations deployed to Iraq and Afghanistan. This review summarizes the epidemiology of posttraumatic stress disorder (PTSD) and related mental health problems among persons who served in the armed forces during the Iraq and Afghanistan conflicts, as reflected in the literature published between 2009 and 2014. One-hundred and sixteen research studies are reviewed, most of which are among non-treatment-seeking US service members or treatment-seeking US veterans. Evidence is provided for demographic, military, and deployment-related risk factors for PTSD, though most derive from cross-sectional studies and few control for combat exposure, which is a primary risk factor for mental health problems in this cohort. Evidence is also provided linking PTSD with outcomes in the following domains: physical health, suicide, housing and homelessness, employment and economic well-being, social well-being, and aggression, violence, and criminality. Also included is evidence about the prevalence of mental health service use in this cohort. In many instances, the current suite of studies replicates findings observed in civilian samples, but new findings emerge of relevance to both military and civilian populations, such as the link between PTSD and suicide. Future research should make effort to control for combat exposure and use longitudinal study designs; promising areas for investigation are in non-treatment-seeking samples of US veterans and the role of social support in preventing or mitigating mental health problems in this group.
abstract_id: PUBMED:32248664
Religion, spirituality, and suicide risk in Iraq and Afghanistan era veterans. Background: United States military veterans experience disproportionate rates of suicide relative to the general population. Evidence suggests religion and spirituality may impact suicide risk, but less is known about which religious/spiritual factors are most salient. The present study sought to identify the religious/spiritual factors most associated with the likelihood of having experienced suicidal ideation and attempting suicide in a sample of recent veterans.
Methods: Data were collected from 1002 Iraq/Afghanistan-era veterans (Mage = 37.68; 79.6% male; 54.1% non-Hispanic White) enrolled in the ongoing Veterans Affairs Mid-Atlantic Mental Illness Research, Education and Clinical Center multi-site Study of Post-Deployment Mental Health.
Results: In multiple regression models with stepwise deletion (p < .05), after controlling for depression and posttraumatic stress disorder (PTSD) diagnoses, independent variables that demonstrated a significant effect on suicidal ideation were perceived lack of control and problems with self-forgiveness. After controlling for age, PTSD diagnosis, and substance use problems, independent variables that demonstrated a significant effect on suicide attempt history were perceived punishment by God and lack of meaning/purpose.
Conclusions: Clinical screening for spiritual difficulties may improve detection of suicidality risk factors and refine treatment planning. Collaboration with spiritual care providers, such as chaplains, may enhance suicide prevention efforts.
abstract_id: PUBMED:27419652
Nonsuicidal self-injury and suicide attempts in Iraq/Afghanistan war veterans. The present study examined the association between history of nonsuicidal self-injury (NSSI) and history of suicide attempts (SA) among 292 Iraq/Afghanistan veterans, half of whom carried a lifetime diagnosis of posttraumatic stress disorder (PTSD). Consistent with hypotheses, veterans who reported a history of NSSI were significantly more likely to report a history of SA than veterans without a history of NSSI. In addition, logistic regression demonstrated that NSSI remained a significant predictor of SA even after a wide range of covariates (i.e., combat exposure, traumatic brain injury, PTSD, depression, alcohol dependence) were considered. Taken together, these findings suggest that clinicians working with veterans should include NSSI history as part of their standard risk assessment battery.
abstract_id: PUBMED:24612971
Combined PTSD and depressive symptoms interact with post-deployment social support to predict suicidal ideation in Operation Enduring Freedom and Operation Iraqi Freedom veterans. Rates of suicide are alarmingly high in military and veteran samples. Suicide rates are particularly elevated among those with post-traumatic stress disorder (PTSD) and depression, which share overlapping symptoms and frequently co-occur. Identifying and confirming factors that reduce suicide risk among veterans with PTSD and depression is imperative. The proposed study evaluated whether post-deployment social support moderated the influence of PTSD-depression symptoms on suicidal ideation among Veterans returning from Iraq and Afghanistan using state-of-the-art clinical diagnostic interviews and self-report measures. Operations Enduring and Iraqi Freedom (OEF/OIF) Veterans (n=145) were invited to participate in a study evaluating returning Veterans' experiences. As predicted, PTSD-depression symptoms had almost no effect on suicidal ideation (SI) when post-deployment social support was high; however, when post-deployment social support was low, PTSD-depression symptoms were positively associated with SI. Thus, social support may be an important factor for clinicians to assess in the context of PTSD and depressive symptoms. Future research is needed to prospectively examine the inter-relationship between PTSD/depression and social support on suicidal risk, as well as whether interventions to improve social support result in decreased suicidality.
abstract_id: PUBMED:28080076
Adverse childhood experiences and risk for suicidal behavior in male Iraq and Afghanistan veterans seeking PTSD treatment. Objective: Adverse childhood experiences (ACEs) are associated with increased risk for suicide and appear to occur in disproportionately high rates among men who served in the U.S. military. However, research has yet to examine a comprehensive range of ACEs among Iraq/Afghanistan veterans with combat-related posttraumatic stress disorder (PTSD) or whether these premilitary stressors may contribute to suicidal behavior in this highly vulnerable population.
Method: A sample of 217 men entering a residential program for combat-related PTSD completed measures for ACEs, combat exposure, and lifetime suicidal ideation and attempts.
Results: The majority of patients had experienced multiple types of adversity or traumas during childhood/adolescence. In particular, 83.4% endorsed at least 1 ACE category and 41.5% reported experiencing 4 or more ACEs. When accounting for effects of deployment-related stressors, we further found that accumulation of ACEs was uniquely linked with thoughts of suicide or attempts among these patients. Namely, for every 1-point increase on the ACE Questionnaire, veterans' risk of suicidal ideation and attempts increased by 23% and 24%, respectively.
Conclusion: This brief report provides initial evidence that veterans seeking treatment for combat-related PTSD often have extensive histories of premilitary stressors that may increase suicide risk beyond probable deployment-related traumas.
abstract_id: PUBMED:32663949
Comorbid PTSD and Depression Diagnoses Mediate the Association of Military Sexual Trauma and Suicide and Intentional Self-Inflicted Injury in VHA-Enrolled Iraq/Afghanistan Veterans, 2004-2014. Background: Exposure to military sexual trauma (MST) in veterans is associated with suicidal ideation. Previous research suggests there are mechanisms of this association, including posttraumatic stress disorder (PTSD) and depression. Research has yet to examine whether comorbid PTSD and depression mediate the association of MST and suicide and intentional self-inflicted injury, and whether this comorbidity confers a greater risk for suicide relative to PTSD-only and depression-only. The current study addressed this gap in our knowledge.
Methods: Screening results identifying MST exposure, PTSD and depression diagnoses, suicide and intentional self-inflicted injury, and demographic covariates in 435,690 Iraq/Afghanistan veterans were extracted from Veterans Health Administration (VHA) medical records. Veterans were included if they attended VHA from 2004-2014. Mediation was tested with path analyses.
Results: Suicide and intentional self-inflicted injury was observed in 16,149 (3.71%) veterans. The indirect effect of suicide and intentional self-inflicted injury, given a positive screen for MST, was highest among veterans with comorbid PTSD and depression diagnoses (indirect effect=3.18%, 95% confidence interval [CI] [3.01%, 3.32%]), with smaller probabilities observed for both PTSD-only (indirect effect=-0.18%, 95% CI [-0.20%, -0.14%]) and depression-only (indirect effect=0.56%, 95% CI [0.51%, 0.62%]; ps<.05).
Limitations: Data were limited to VHA-enrolled Iraq/Afghanistan veterans.
Conclusions: To reduce suicide risk among veterans with a history of MST, treatments may be most effective if they target comorbid PTSD and depression. Future research should examine the mechanisms through which comorbid PTSD and depression result in heightened risk for suicide and intentional self-inflicted injury.
Answer: Yes, PTSD does moderate the relationship between social support and suicide risk in Iraq and Afghanistan War Veterans seeking mental health treatment. A study examining this relationship found that while married veterans and those reporting greater satisfaction with their social networks were less likely to endorse suicidal thoughts or behaviors, the protective influence of social support was significantly less for veterans reporting PTSD (PUBMED:20721901). This suggests that PTSD may diminish the protective effects of social support on suicide risk among veterans. Additionally, veterans with PTSD were found to be more than four times as likely to endorse suicidal ideation compared to non-PTSD veterans, and the risk for suicidal ideation was even greater in veterans with PTSD who also had comorbid mental disorders (PUBMED:19626682).
Furthermore, research indicates that the presence of comorbid PTSD and depression diagnoses mediates the association between military sexual trauma (MST) and suicide, with a higher risk for suicide among veterans with both conditions compared to those with only one or neither condition (PUBMED:32663949). Another study found that post-deployment social support could moderate the influence of combined PTSD and depressive symptoms on suicidal ideation, with high social support reducing the effect of these symptoms on suicidal ideation (PUBMED:24612971).
These findings underscore the complex interplay between PTSD, social support, and suicide risk in veterans, highlighting the importance of considering PTSD in the assessment and treatment of suicide risk among this population. |
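To make the moderation analyses behind these findings concrete, the sketch below shows how a PTSD-by-social-support interaction term can be tested in a logistic regression. It is a minimal illustration only: the data are synthetic, the variable names and coefficients are assumptions, and the use of Python's statsmodels library is an assumption for illustration rather than anything reported by the cited studies.

# Minimal illustrative sketch (synthetic data, assumed variable names).
# Shows how a PTSD x social-support interaction (moderation) can be tested
# with logistic regression, mirroring the analyses summarized above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 431  # sample size borrowed from the first abstract, for scale only

df = pd.DataFrame({
    "ptsd": rng.integers(0, 2, size=n),        # 1 = positive PTSD screen
    "support": rng.normal(0.0, 1.0, size=n),   # satisfaction with social network (z-scored)
    "married": rng.integers(0, 2, size=n),     # 1 = married
})

# Synthetic outcome: support is protective overall, but less so when ptsd = 1
lin_pred = (-1.5 - 0.8 * df["support"] + 0.9 * df["ptsd"]
            + 0.6 * df["ptsd"] * df["support"] - 0.4 * df["married"])
df["elevated_risk"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin_pred)))

# 'support * ptsd' expands to support + ptsd + support:ptsd (the moderation term)
model = smf.logit("elevated_risk ~ support * ptsd + married", data=df).fit(disp=False)
print(model.summary2().tables[1])
# A positive, significant 'support:ptsd' coefficient indicates that the protective
# association of social support is weaker among respondents screening positive for PTSD.

In the studies above, it is this kind of interaction term that supports the conclusion that social support is significantly less protective among Veterans reporting PTSD.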
Instruction: Is fixation failure after plate fixation of the symphysis pubis clinically important?
Abstracts:
abstract_id: PUBMED:22707071
Is fixation failure after plate fixation of the symphysis pubis clinically important? Background: Plate fixation is a recognized treatment for pelvic ring injuries involving disruption of the pubic symphysis. Although fixation failure is well known, it is unclear whether early or late fixation failure is clinically important.
Questions/purposes: We therefore determined (1) the incidence and mode of failure of anterior plate fixation for traumatic pubic symphysis disruption; (2) whether failure of fixation was associated with the types of pelvic ring injury or pelvic fixation used; (3) the complications, including the requirement for reoperation or hardware removal; and (4) whether radiographic followup of greater than 1 year alters subsequent management.
Methods: We retrospectively reviewed 148 of 178 (83%) patients with traumatic symphysis pubis diastasis treated by plate fixation between 1994 and 2008. Routine radiographic review, pelvic fracture classification, method of fixation, incidence of fixation failure, timing and mode of failure, and the complications were recorded after a minimum followup of 12 months (mean, 45 months; range, 1-14 years).
Results: Hardware breakage occurred in 63 patients (43%), of which 61 were asymptomatic. Breakage was not related to type of plate, fracture classification, or posterior pelvic fixation. Five patients (3%) required revision surgery for failure of fixation or symptomatic instability of the symphysis pubis, and seven patients (5%) had removal of hardware for other reasons, including late deep infection in three (2%). Routine radiographic screening as part of annual followup after 1 year did not alter management.
Conclusions: Our observations suggest the high rate of late fixation failure after plate fixation of the symphysis pubis is not clinically important.
abstract_id: PUBMED:27282685
Early failure of symphysis pubis plating. Introduction: Operative fixation of a disrupted symphysis pubis helps return alignment and stability to a traumatized pelvic ring. Implant loosening or failure has been demonstrated to commonly occur at some subacute point during the postoperative period. The purpose of this study is to report on a series of patients with traumatic pelvic ring disruptions to determine the incidence and common factors associated with early postoperative symphyseal plate failure before 7 weeks.
Materials And Methods: 126 patients retrospectively identified with unstable pelvic injuries treated with open reduction and plate fixation of the symphysis pubis and iliosacral screw fixation. Preoperative and postoperative radiographs, computed tomography (CT) images, and medical chart were reviewed to determine symphyseal displacement preoperatively and postoperatively, time until plate failure, patient symptoms and symphyseal displacement at failure, subsequent symphyseal displacement, incidence of additional surgery, and patient weight bearing compliance.
Results: 14 patients (11.1%) sustained premature postoperative fixation failure. 13 patients had anteroposterior compression (APC)-II injuries and 1 patient had an APC-III injury. Preoperative symphyseal displacement was 35.6 millimeters (mm) (20.8-52.9). Postoperative symphyseal space measurement was 6.3 mm (4.7-9.3). Time until plate failure was 29 days (5-47). Nine patients (64.2%) noted a pop surrounding the time of failure. Symphyseal space measurement at failure was 12.4 mm (5.6-20.5). All patients demonstrated additional symphyseal displacement averaging 2.6 mm (0.2-9.4). Two patients (14.2%) underwent revision. Four patients (28.5%) were non-compliant.
Conclusion: Premature failure of symphysis pubis plating is not uncommon. In this series, further symphyseal displacement after plate failure was not substantial. The presence of acute symphyseal plate failure alone may not be an absolute indication for revision surgery. Making patient education a priority could lead to decreased postoperative non-compliance and potentially a decreased incidence of implant failure. Posterior pelvic ring fixation aids overall pelvic ring stability and may help minimize further displacement after early postoperative symphyseal plate failure. Further functional outcome and biomechanical studies surrounding early symphyseal plate failure are needed.
abstract_id: PUBMED:28122599
Is there a clinical benefit of additional tension band wiring in plate fixation of the symphysis? Background: The purpose of this study was to determine whether additional tension band wiring in the plate for traumatic disruption of symphysis pubis has clinical benefits. Therefore, outcomes and complications were compared between a plate fixation group and a plate with tension band wiring group.
Methods: We retrospectively evaluated 64 consecutive patients who underwent open reduction and internal fixation of the symphysis pubis by using a plate alone (n = 39) or a plate with tension band wiring (n = 25). All the patients were followed up for a minimum of 24 months (mean, 34.4 months; range, 26-39 months). Demographic characteristics, outcomes, movement of the metal works, complications, revision surgery, and Majeed functional score were compared.
Results: Significant screw pullout was found significantly more frequently in the plate fixation group than in the plate with tension band wiring group (P = 0.009). In terms of the overall rate of all-cause revision surgery, including significant loosening, symptomatic hardware, and patient-requested hardware removal during the follow-up period, the plate with tension band wiring group showed a significantly lower rate.
Conclusion: Tension band wiring in combination with a symphyseal plate showed better radiological outcomes, a lower incidence of hardware loosening, and a lower rate of revision surgery than plate fixation alone. This technique would have some potential advantages in terms of avoiding significant movement of plate, symptomatic hardware failure, and revision surgery.
abstract_id: PUBMED:22139387
A comparison of percutaneous reduction and screw fixation versus open reduction and plate fixation of traumatic symphysis pubis diastasis. Purpose: Plate fixation, the conventional treatment for traumatic symphysis pubis diastasis, carries the risk of extensive exposure, blood loss and postoperative infections. Percutaneous screw fixation is a minimally invasive treatment. The goal of the present study was to compare the outcome of plate fixation and percutaneous screw technique in the treatment of traumatic pubic symphysis diastasis.
Methods: Ninety patients with traumatic symphysis pubis diastasis were treated from January 2003 to December 2009 at two level 1 regional trauma centers. The mean time of follow-up was 21 months (18 to 26). Forty-five patients were treated by percutaneous screw fixation. Forty-five patients were treated by plate and screws fixation. The demographic, distribution of fracture patterns, blood loss, incision length, fixation failure, malunion, revision surgery and functional scores were compared.
Results: Seven cases were lost during follow-up. Demographics (age and gender), fracture classification and Injury Severity Score were comparable in the two groups (P > 0.05). Blood loss and extensive exposure were much less in screw group (P < 0.01). Patients in screw group achieved better functional performance (P = 0.01). There were no significant differences favoring plate fixation in reduction quality (P = 0.32), implant failure (P = 0.39), malunion (P = 0.15), revision surgery rates (P = 0.27), percentage of impotence in the male patients (P = 0.2) and implant removal time (P = 0.12) between the two groups.
Conclusions: Our results indicate that besides lower rate of iatrogenic injuries and better functional outcome, percutaneous screw fixation of the pubic symphysis is as strong as plate fixation.
abstract_id: PUBMED:30241733
A biomechanical cadaver comparison of suture button fixation to plate fixation for pubic symphysis diastasis. Objectives: To determine whether suture button fixation of the pubic symphysis is biomechanically similar to plate fixation in the treatment of partially stable pelvic ring injuries.
Methods: Twelve pelvis specimens were harvested from fresh frozen cadavers. Dual-x-ray-absorptiometry (DXA) scans were obtained for all specimens. The pubic symphysis of each specimen was sectioned to simulate a partially stable pelvic ring injury. Six of the pelvises were instrumented using a 6 hole, 3.5 mm low profile pelvis plate and six of the pelvises were instrumented with two suture button devices. Biomechanical testing was performed on a pneumatic testing apparatus in a manner that simulates vertical stance. Displacement measurements of the superior, middle, and inferior pubic symphysis were obtained prior to loading, after an initial 440 N load, and after 30,000 and 60,000 rounds of cyclic loading. Statistical analysis was performed using Wilcoxon-Mann-Whitney tests, Fisher's exact test, and Cohen's d to calculate effect size. Significance was set at p < 0.05.
Results: There was no difference between groups for DXA T scores (p = 0.749). Between group differences in clinical load to failure (p = 0.65) and ultimate load to failure (p = 0.52) were not statistically significant. For symphysis displacement, the change in fixation strength and displacement with progressive cyclic loading was not significant when comparing fixation types (superior: p = 0.174; middle: p = 0.382; inferior: p = 0.120).
Conclusion: Suture button fixation of the pubic symphysis is biomechanically similar to plate fixation in the management of partially stable pelvic ring injuries.
abstract_id: PUBMED:3385825
Two-hole plate fixation for traumatic symphysis pubis diastasis. Techniques for managing traumatic diastasis of the pubic symphysis include bed rest, hip spica casting, pelvic slings, external fixation, and internal fixation. We report herein our experience with 14 consecutively managed patients in whom we successfully stabilized traumatic pubic diastasis with a single two-hole plate fixation. The average age of the 13 men and one woman was 30 years; followup averaged 17 months. Most of the patients had associated injuries (Injury Severity Score average, 19). Nine patients had concomitant disruption of the sacroiliac joint requiring either delayed open reduction and internal fixation or prolonged skeletal traction; among the five remaining patients, time to mobilization (bed to chair) averaged 1 day. There were no complications attributable to the procedure; i.e., no infections, and no failures of fixation. In this small series of patients early two-hole plate fixation of the traumatic diastasis of the pubis satisfactorily restored the disrupted anterior pelvic ring, contributed to early mobilization of the patients, and made reduction of a concomitantly disrupted sacroiliac joint easier, whether accomplished by skeletal traction or open reduction and internal fixation during a second procedure.
abstract_id: PUBMED:28611968
Outcome of Internal Fixation and Corticocancellous Grafting of Symphysis Pubis Diastasis Which Developed after Malunion of Pubic Rami Fracture. We report a case of pubic symphysis diastasis, which was initially asymptomatic; however, it became symptomatic with urinary incontinence during pregnancy. The patient was treated with open reduction and internal fixation of the symphysis pubis. A corticocancellous autograft was used for filling the gap which remained despite bilateral compression of the iliac bones. We obtained satisfactory outcome in terms of symptoms at the 3 years' follow-up; however, there was instability findings in the X-rays with broken screws. We conclude that asymptomatic pubic symphysis diastasis might be symptomatic after additional trauma (such as pregnancy) in the following days, if it was unstable in the very beginning of injury. Fixation of old pubic symphysis diastasis with reconstruction plate by filling the gap by using corticocancellous autograft, might not prevent ultimate implant failure if the symphysis pubis diastasis is part of an unstable pelvic fracture in the very beginning.
abstract_id: PUBMED:15949483
Implant retention and removal after internal fixation of the symphysis pubis. Although internal fixation of diastasis of the symphysis pubis is commonly performed, there are no clear guidelines regarding the indications for removal of these implants. The long-term physiologic effects of retaining these internal fixation devices are not well described. We surveyed the literature to assess the current thinking and recommendations regarding implant retention and removal. Twenty-four case series and two case reports were found, for a total of 482 cases. Complications arose as a result of implant retention in 7.5% of patients, with infection the most common complication. There is no consensus in the literature regarding implant retention and removal after internal fixation of diastasis of the symphysis pubis.
abstract_id: PUBMED:22183198
Failure of locked design-specific plate fixation of the pubic symphysis: a report of six cases. Objectives: Physiological pelvic motion has been known to lead to eventual loosening of screws, screw breakage, and plate breakage in conventional plate fixation of the disrupted pubic symphysis. Locked plating has been shown to have advantages for fracture fixation, especially in osteoporotic bone. Although design-specific locked symphyseal plates are now available, to our knowledge, their clinical use has not been evaluated and there exists a general concern that common modes of failure of the locked plate construct (such as pullout of the entire plate and screws) could result in complete and abrupt loss of fixation. The purpose of this study was to describe fixation failure of this implant in the acute clinical setting.
Design: Retrospective analysis of multicenter case series.
Setting: Multiple trauma centers.
Patients: Six cases with failed fixation, all stainless steel locked symphyseal plates and screws manufactured by Synthes (Paoli, PA) and specifically designed for the pubic symphysis, were obtained from requests for information sent to orthopaedic surgeons at 10 trauma centers. A four-hole plate with all screws locked was used in 5 cases. A six-hole plate with 4 screws locked (two in each pubic body) was used in one.
Intervention: Fixation for disruption of the pubic symphysis using an implant specifically designed for this purpose.
Main Outcome Measurements: Radiographic appearance of implant failure.
Results: Magnitude of failure ranged from implant loosening (3 cases), resulting in 10-mm to 12-mm gapping of the symphyseal reduction, to early failure (range, 1-12 weeks), resulting in complete loss of reduction (3 cases). Failure mechanism included construct pullout, breakage of screws at the screw/plate interface, and loosening of the locked screws from the plate and/or bone. Backing out of the locking screws resulting from inaccurate insertion technique was also observed.
Conclusions: Failure mechanisms of locked design-specific plate fixation of the pubic symphysis include those seen with conventional uniplanar fixation as well as those common to locked plate technology. Specific indications for the use of these implants remain to be determined.
Level Of Evidence: Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence.
abstract_id: PUBMED:34895919
A biomechanical in-vitro study on an alternative fixation technique of the pubic symphysis for open book injuries of the pelvis. Purpose: Implant failure rates remain high after plate fixation in pelvic ring injuries. The aim of this study was to compare an alternative fixation technique with suture-button devices and anterior plate fixation in partially stable open-book injuries.
Material And Methods: We acquired 16 human fresh frozen anatomic pelvic specimens. The sacrospinous, sacrotuberous, and anterior sacroiliac ligaments were bilaterally released, and the pubic symphysis transected to simulate a partially stable open-book (AO/OTA 61-B3.1) injury. The specimens were randomly assigned to the two fixation groups. In the first group, two suture-button devices were placed in a criss-crossed position through the symphysis. In the second group, a six-hole plate with standard 3.5 unlocked bicortical screws was used for fixation. Biomechanical testing was performed on a servo-hydraulic apparatus simulating bilateral stance, as described by Hearn and Varga. Cyclic compression loading with a progressively increasing peak load (0.5 N/cycle) was applied until failure. The failure mode, the load and number of cycles at failure, and the proximal and distal distance of the symphysis during testing were compared.
Results: There was no implant failure in either of the two groups. Failures occurred in nine pelvises (56.2%) at the fixation between the sacrum and the mounting jig and in seven pelvises (43.8%) in the sacroiliac joint. Neither the ultimate load nor the number of cycles at failure differed between the surgical techniques (p = 0.772; p = 0.788, respectively). In the suture button group the mean ultimate load was 874.5 N and the number of cycles at failure was 1907.9. In the plate group the values were 826.1 N and 1805.6 cycles, respectively. No significant differences in proximal or distal diastasis of the symphysis were observed during the loading process.
Conclusion: The fixation with suture button implants showed comparable results to anterior plate fixation in open-book injuries of the pelvis.
Answer: Fixation failure after plate fixation of the symphysis pubis is a recognized issue, but its clinical importance seems to vary. According to one study, a high rate of late fixation failure after plate fixation of the symphysis pubis was observed, but this was not deemed clinically important as most hardware breakages were asymptomatic and did not alter management after a 1-year follow-up (PUBMED:22707071). Another study reported that early postoperative symphyseal plate failure is not uncommon, but further symphyseal displacement after plate failure was not substantial, suggesting that acute symphyseal plate failure alone may not be an absolute indication for revision surgery (PUBMED:27282685).
Additional tension band wiring in plate fixation of the symphysis pubis has shown better radiological outcomes, a lower incidence of hardware loosening, and a lower rate of revision surgery than plate fixation alone, indicating some clinical benefits (PUBMED:28122599). Comparatively, percutaneous screw fixation of the pubic symphysis has been found to be as strong as plate fixation with a lower rate of iatrogenic injuries and better functional outcome (PUBMED:22139387).
Suture button fixation has been biomechanically compared to plate fixation and found to be similar in the management of partially stable pelvic ring injuries (PUBMED:30241733). However, there are reports of implant failure with both conventional uniplanar fixation and locked plate technology, suggesting that the specific indications for the use of these implants remain to be determined (PUBMED:22183198). An alternative fixation technique using suture-button devices showed comparable results to anterior plate fixation in open-book injuries of the pelvis (PUBMED:34895919).
In summary, while fixation failure after plate fixation of the symphysis pubis does occur, its clinical significance seems to depend on the extent of the failure, the presence of symptoms, and the specific circumstances of the injury and treatment. Some fixation failures may not require revision surgery, and alternative fixation methods may offer similar or improved outcomes. |
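The clinical-importance argument in PUBMED:22707071 rests on the gap between how often hardware breaks (63 of 148 patients) and how often that breakage leads to reoperation (5 of 148). As a purely illustrative sketch of that arithmetic, the snippet below computes 95% confidence intervals for the two proportions; the choice of Python's statsmodels and the Wilson interval is an assumption made for this example, not the study's own analysis.

# Illustrative sketch: Wilson 95% confidence intervals for the proportions
# reported in PUBMED:22707071 (hardware breakage vs revision surgery).
from statsmodels.stats.proportion import proportion_confint

n_patients = 148
outcomes = {
    "hardware breakage": 63,   # about 43% of patients, mostly asymptomatic
    "revision surgery": 5,     # about 3% of patients
}

for label, events in outcomes.items():
    low, high = proportion_confint(events, n_patients, alpha=0.05, method="wilson")
    print(f"{label}: {events}/{n_patients} = {events / n_patients:.1%} "
          f"(95% CI {low:.1%} to {high:.1%})")

Even at the upper bound of its confidence interval, the revision rate remains far below the breakage rate, which is the quantitative core of the conclusion that late fixation failure is usually not clinically important.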
Instruction: Differential stigmatizing attitudes of healthcare professionals towards psychiatry and patients with mental health problems: something to worry about?
Abstracts:
abstract_id: PUBMED:25123701
Differential stigmatizing attitudes of healthcare professionals towards psychiatry and patients with mental health problems: something to worry about? A pilot study. Purpose: This study compares stigmatizing attitudes of different healthcare professionals towards psychiatry and patients with mental health problems.
Methods: The Mental Illness Clinicians Attitude (MICA) questionnaire is used to assess stigmatizing attitudes in three groups: general practitioners (GPs, n = 55), mental healthcare professionals (MHCs, n = 67) and forensic psychiatric professionals (FPs, n = 53).
Results: A modest positive attitude towards psychiatry was found in the three groups (n = 176). Significant differences were found on the total MICA-score (p < 0.001). GPs scored significantly higher than the FPs, and the latter scored significantly higher than the MHCs, on all factors of the MICA. The most stigmatizing attitudes were found in professionals' views of the health/social care field and of mental illness and disclosure. Personal and work experience did not influence stigmatizing attitudes.
Conclusions: Although all three groups have a relatively positive attitude using the MICA, there is room for improvement. Bias toward socially acceptable answers cannot be ruled out. Patients' view on stigmatizing attitudes of professionals may be a next step in stigma research in professionals.
abstract_id: PUBMED:33401095
Mental health professionals' feelings and attitudes towards coercion. Background: Despite absence of clear evidence to assert that the use of coercion in psychiatry is practically and clinically helpful or effective, coercive measures are widely used. Current practices seem to be based on institutional cultures and decision-makers' attitudes towards coercion rather than led by recommendations issued from the scientific literature. Therefore, the main goal of our study was to describe mental health professionals' feelings and attitudes towards coercion and the professionals' characteristics associated with them.
Method: Mental health professionals working in the Department of Psychiatry of Lausanne University Hospital, Switzerland, were invited to participate in an online survey. A questionnaire explored participants' sociodemographic characteristics, professional background and current working context, and their feelings and attitudes towards coercion. Exploratory Structural Equation Modelling (ESEM) was used to determine the structure of mental health professionals' feelings and attitudes towards coercion and to estimate to what extent sociodemographic and professional characteristics could predict their underlying dimensions.
Results: 130 mental health professionals completed the survey. Although a large number considered coercion a violation of fundamental rights, a substantial percentage of them agreed that coercion was nevertheless indispensable in psychiatry and beneficial to the patients. ESEM revealed that professionals' feelings and attitudes towards coercion could be described by four main dimensions labelled "Internal pressure", "Emotional impact", "External pressure" and "Relational involvement". Both personal and professional proximity to people suffering from mental disorders influence professionals' feelings and attitudes towards coercion.
Conclusions: As voices recommend the end of coercion in psychiatry and despite the lack of scientific evidence, many mental health professionals remain convinced that it is a requisite tool beneficial to the patients. Clinical approaches that enhance shared decision making and give the opportunity to patients and professionals to share their experience and feelings towards coercion and thus alleviate stress among them should be fostered and developed.
abstract_id: PUBMED:29807502
Stigmatizing attitudes of primary care professionals towards people with mental disorders: A systematic review. Objective To examine stigmatizing attitudes towards people with mental disorders among primary care professionals and to identify potential factors related to stigmatizing attitudes through a systematic review. Methods A systematic literature search was conducted in Medline, Lilacs, IBECS, Index Psicologia, CUMED, MedCarib, Sec. Est. Saúde SP, WHOLIS, Hanseníase, LIS-Localizador de Informação em Saúde, PAHO, CVSO-Regional, and Latindex, through the Virtual Health Library portal ( http://www.bireme.br website) through to June 2017. The articles included in the review were summarized through a narrative synthesis. Results After applying eligibility criteria, 11 articles, out of 19.109 references identified, were included in the review. Primary care physicians do present stigmatizing attitudes towards patients with mental disorders and show more negative attitudes towards patients with schizophrenia than towards those with depression. Older and more experience doctors have more stigmatizing attitudes towards people with mental illness compared with younger and less-experienced doctors. Health-care providers who endorse more stigmatizing attitudes towards mental illness were likely to be more pessimistic about the patient's adherence to treatment. Conclusions Stigmatizing attitudes towards people with mental disorders are common among physicians in primary care settings, particularly among older and more experienced doctors. Stigmatizing attitudes can act as an important barrier for patients to receive the treatment they need. The primary care physicians feel they need better preparation, training, and information to deal with and to treat mental illness, such as a user friendly and pragmatic classification system that addresses the high prevalence of mental disorders in primary care and community settings.
abstract_id: PUBMED:34251575
Mental Health Professionals' Attitudes Towards People with Severe Mental Illness: Are they Related to Professional Quality of Life? The present study examines whether attitudes of mental health professionals (MHPs) towards severe mental illness are associated with professional quality of life. The Attitudes towards Severe Mental Illness (ASMI), the Maslach Burnout Inventory (MBI), and the Professional Quality of Life Scale-5 (ProQOL-5) were completed by 287 MHPs in Greece (25.4% males, 74.6% females). The results indicate that MHPs hold predominantly positive attitudes towards people with severe mental illness. Nonetheless, MHPs' attitudes are deemed to be stereotypical according to ASMI concerning treatment duration, prospects of recovery, and whether patients are similar to other people. Higher scores in emotional exhaustion, depersonalization, compassion fatigue and ProQOL-5 burn out dimension were significantly associated with MHPs' unfavorable attitudes, whereas higher scores in compassion satisfaction and personal accomplishment were associated with MHPs' positive attitudes. Assessing compassion fatigue, compassion satisfaction and burnout levels could help identify the processes involved in the development or maintenance of MHPs' stigmatizing attitudes.
abstract_id: PUBMED:35688548
The effectiveness of mental health disorder stigma-reducing interventions in the healthcare setting: An integrative review. Individuals with mental health disorders frequently seek medical treatment in health care settings other than a mental health facility. However, mental health disorder stigmatization is prevalent in the healthcare setting across the globe. Stigmatizing attitudes remain widespread among healthcare professionals who are responsible for delivering patient-centered, quality care. Stigma in the healthcare setting can undermine effective diagnosis, therapy, and optimum health outcomes. Addressing stigma is critical to delivering quality health care in both developed and developing countries. Therefore, it is important to deliver successful anti-stigma education, along with practical strategies, to reduce the stigma of mental health disorders among healthcare professionals. An integrative review was conducted to identify the effectiveness of various interventions used in 10 different countries globally to reduce the stigma of mental health disorders in the healthcare setting.
abstract_id: PUBMED:37489546
Attitudes of Spanish mental health professionals towards trans people: A cross-sectional study. WHAT IS KNOWN ABOUT THE SUBJECT?: The trans community perceives barriers to the mental health services in the form of professionals' transphobia, lack of knowledge, and cultural sensitivity in healthcare. The attitudes of health professionals are mediated by their social context, which can determine their behaviour or attitude towards users. WHAT DOES THE ARTICLE ADD TO EXISTING KNOWLEDGE?: The attitudes of mental health professionals towards trans people are related to variables such as the professional's age, gender, political ideology and religious beliefs. Mental health nursing, psychology and social work are the professions that present more favourable attitudes towards trans people. WHAT ARE THE IMPLICATIONS FOR PRACTICE?: The inclusion of a professional perspective that understands sexual and gender diversity among mental health professionals is required. It is necessary to train professionals to promote socio-healthcare based on respect and free from prejudice, discrimination and stigma. ABSTRACT: Introduction The trans community perceives barriers to the mental health services in the form of professionals' transphobia, lack of knowledge and cultural sensitivity in healthcare. Aim Evaluation of the attitudes towards trans people of the professionals who work in the different Spanish mental health services. Method A cross-sectional design was used with a sample of professionals from different professional groups working in mental health units, hospitals and outpatient settings throughout Spain. Results Gender differences were found, with higher values in genderism and sexism among males. Negative attitudes and sexism have also been associated with age and religious beliefs. Mental health nursing, psychology and social work presented more favourable attitudes towards trans people than other mental health professionals. Discussion/Implications for Practice The inclusion of a professional perspective that understands sexual and gender diversity and the acquisition of professional attitudes based on evidence and patient-centred model are basic aspects to promote socio-healthcare based on respect and free from prejudices, discrimination and stigma.
abstract_id: PUBMED:38375931
Examining the association between stigmatizing attitudes in nursing students and their desire for a career in mental health nursing: A comparative analysis of generic and accelerated programs in Israel. WHAT IS KNOWN ON THE SUBJECT?: Mental health nursing is generally viewed as the least attractive career choice among nursing students. WHAT THE PAPER ADDS TO EXISTING KNOWLEDGE?: Studying in the generic nursing program is associated with a higher desire for a career in mental health nursing. Nursing students who have prior experience working in mental health and have provided care to psychiatric patients are more inclined to express a desire to pursue a career in this field. WHAT ARE THE IMPLICATIONS FOR PRACTICE?: Nursing students enrolled in the generic program, who have previous work experience in mental health or experience caring for a person with a mental illness, and who have a lower level of stigmatizing attitudes, may constitute the future workforce in mental health nursing.
Abstract: INTRODUCTION: Mental health nursing is often perceived as an unattractive career choice among nursing students, and it remains unclear whether the type of nursing program influences this view.
Aim: This cross-sectional study aimed to explore the association between stigmatizing attitudes in nursing students and their desire for a career in mental health nursing, comparing students in generic and accelerated programs.
Method: A total of 220 nursing students from generic and accelerated programs in North-Center Israel participated in this cross-sectional study, completing a questionnaire on stigmatizing attitudes and their interest in a mental health nursing career.
Results: Nursing students displayed a generally low desire for mental health nursing, influenced by factors such as enrollment in the generic program, previous mental health work experience and stigmatizing attitudes.
Discussion: Students in the generic program, with lower stigmatizing attitudes and prior mental health experience, exhibited a higher inclination towards mental health nursing.
Implications For Practice: Prospective mental health nursing professionals may be identified in the generic program, particularly those with prior mental health experience and lower stigmatizing attitudes. Additional studies are required to confirm and broaden their applicability to other contexts.
abstract_id: PUBMED:34366636
Evaluation of worry level in healthcare professionals and mental symptoms encountered in their children during the COVID-19 pandemic process. This study was conducted to evaluate the worry level in healthcare professionals and the mental symptoms encountered in their children during the Coronavirus Disease 2019 (COVID-19) pandemic. The study was designed in a cross-sectional, descriptive and relational screening model. Target population of the study comprised healthcare professionals living in Turkey who had children aged 6 to 16 years. The study data was obtained from 457 healthcare professionals who were accessible online between June 15 and August 15, 2020. The Introductory Information Form, the Penn State Worry Questionnaire (PSWQ) and the Pediatric Symptom Checklist-17 (PSC-17) were used as data collection method. The mean age of the healthcare professionals was 39.82 ± 4.83 years and 88.6% of them were female, 58.6% were nurses, 9.0% were doctors and 54.3% were working in the pandemic service. The mean total PSWQ score of the healthcare professionals was 53.53 ± 11.82 and the mean total PSC-17 score of their children was 10.74 ± 5.68. The mean PSWQ score of the healthcare professionals who had a psychological disease and provided care to COVID-19 patients was significantly higher. The PSC-17 scores were significantly higher in children with a mental disorder. There was a statistically significant positively correlation between the mean total PSWQ score of the healthcare professionals and the mean total PSC-17 score of their children. The study showed that children of healthcare professionals who experience all aspects of the pandemic, comprise an important risk group because they are unable to have physical contact with their parents and they experience the pandemic-related measures more.
abstract_id: PUBMED:32048132
Novel Insights into Autism Knowledge and Stigmatizing Attitudes Toward Mental Illness in Dutch Youth and Family Center Physicians. Professionals' limited knowledge on mental health and their stigmatizing attitudes toward mental illness can delay the diagnosis of autism. We evaluated the knowledge on Autism Spectrum Disorder (ASD) and stigmatizing attitudes in 93 physicians at Dutch Youth and Family Centers (YFC). These physicians screen for psychiatric symptoms in children. We show that their general ASD knowledge scored 7.1 (SD 1.2), but their specific ASD knowledge was only 5.7 (SD 1.7) (weighted means on 1-10 scale, 1 = least knowledge, 10 = most knowledge). Our physicians had positive attitudes toward mental illness (CAMI scores 2.18 (SD 0.33) to 2.22 (SD 0.40) on a 5-point Likert scale) but they had higher levels of stigmatizing attitudes than other Western healthcare professionals. Their levels were considerably lower than in non-Western professionals. We found no relations between ASD knowledge, stigmatizing attitudes and demographic variables. In conclusion, ASD knowledge and stigmatizing attitudes toward mental illness in Dutch YFC physicians require attention.
abstract_id: PUBMED:29353369
Mental health professionals' attitudes towards mental illness: professional and cultural factors in the INTER NOS study. Background: Research shows that personnel working in mental health facilities may share some of the societal prejudices towards mental illness. This might result in stigmatizing behaviours towards people suffering from mental disorders, undermining the quality of their care.
Aims: To describe and compare attitudes towards mental illness across a sample of professionals working in a wide range of mental health facilities in Spain, Portugal and Italy.
Method: We administered a survey to personnel including two questionnaires related to stigmatizing attitudes: The Community Attitudes toward the Mentally Ill (CAMI) and the Attribution Questionnaire (AQ-27). Data were compared according to professional category, work setting and country.
Results: 34.06% (n = 1525) of the surveyed population responded adequately. Psychologists and social therapists had the most positive attitudes, and nursing assistants the most negative, on most factors of CAMI and AQ-27. Community staff had more positive attitudes than hospital-based professionals in most factors on CAMI and in discriminatory responses on AQ-27.
Conclusions: Globally, mental health professionals showed a positive attitude towards mental illness, but also a relative support to coercive treatments. There are differences in attitudes modulated by professional category and setting. Results can guide preventive strategies, particularly for the hospital-based and nursing staff.
Answer: The differential stigmatizing attitudes of healthcare professionals towards psychiatry and patients with mental health problems are indeed a cause for concern. A pilot study found that general practitioners (GPs), mental healthcare professionals (MHCs), and forensic psychiatric professionals (FPs) all displayed modest positive attitudes towards psychiatry, but significant differences were observed in their stigmatizing attitudes, with GPs scoring the highest and MHCs the lowest on the Mental Illness Clinicians Attitude (MICA) questionnaire. These attitudes were most stigmatizing regarding professionals' views of the health/social care field and mental illness and disclosure, indicating that there is room for improvement (PUBMED:25123701).
Additionally, a systematic review revealed that primary care physicians do present stigmatizing attitudes towards patients with mental disorders, with more negative attitudes towards patients with schizophrenia than those with depression. Factors such as age and experience were associated with more stigmatizing attitudes, suggesting that older and more experienced doctors may have more negative views towards people with mental illness (PUBMED:29807502).
Furthermore, mental health professionals' attitudes towards coercion in psychiatry, despite the lack of clear evidence supporting its effectiveness, suggest that many remain convinced of its necessity and benefit to patients. This indicates a reliance on institutional cultures and decision-makers' attitudes rather than scientific recommendations (PUBMED:33401095).
The attitudes of mental health professionals towards people with severe mental illness were found to be predominantly positive, yet stereotypical in some aspects. Higher levels of emotional exhaustion, depersonalization, and burnout were significantly associated with unfavorable attitudes, while higher scores in compassion satisfaction and personal accomplishment were associated with positive attitudes (PUBMED:34251575).
In summary, stigmatizing attitudes among healthcare professionals towards psychiatry and patients with mental health problems are present and vary among different professional groups and settings. These attitudes can act as barriers to effective treatment and care, highlighting the need for targeted interventions and training to address and reduce stigma in healthcare settings (PUBMED:35688548). |
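Group comparisons such as the one behind "significant differences were found on the total MICA-score (p < 0.001)" (PUBMED:25123701) are typically simple between-group tests. The sketch below illustrates one such comparison on synthetic scores; the group means and standard deviation are assumptions, and SciPy's one-way ANOVA stands in for whatever test the authors actually used.

# Illustrative sketch (synthetic data): comparing total MICA scores across
# GPs, forensic psychiatric professionals, and mental healthcare professionals
# with a one-way ANOVA. Group sizes follow PUBMED:25123701; means and SD are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gp = rng.normal(45, 8, size=55)    # general practitioners (n = 55)
fp = rng.normal(41, 8, size=53)    # forensic psychiatric professionals (n = 53)
mhc = rng.normal(37, 8, size=67)   # mental healthcare professionals (n = 67)

f_stat, p_value = stats.f_oneway(gp, fp, mhc)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")
# A small p-value indicates that at least one group's mean MICA score differs;
# pairwise follow-up tests would identify which groups drive the difference.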
Instruction: Instrument Life for Robot-assisted Laparoscopic Radical Prostatectomy and Partial Nephrectomy: Are Ten Lives for Most Instruments Justified?
Abstracts:
abstract_id: PUBMED:26276575
Instrument Life for Robot-assisted Laparoscopic Radical Prostatectomy and Partial Nephrectomy: Are Ten Lives for Most Instruments Justified? Objective: To investigate the rate of premature instrument exchange during robot-assisted laparoscopic radical prostatectomy (RALRP) and robot-assisted partial nephrectomy (RAPN). The majority of robotic instruments have a predetermined lifespan of 10 uses; however, it is unknown if instruments are routinely exchanged before 10 uses in clinical practice.
Methods: We retrospectively reviewed instrument use in consecutive RALRP and RAPN cases performed by high-volume robotic surgeons at 1 tertiary care center between January 2011 and October 2014. The number of instruments used per case was evaluated and instances of additional instrument utilization were noted. Exchange number was compared between the first and second half of cases performed. Operative times were compared between cases with and without exchange. Student's t-test and Pearson's χ2 test were used to determine statistical significance.
Results: Three surgeons performed 1579 RALRP procedures and 2 surgeons performed 313 RAPN procedures. During RALRP, monopolar curved scissors required exchange in 12.4% of cases. Other instruments were exchanged in less than 2% of cases. Exchange rates were similar to those for RAPN. Only exchange of Prograsp forceps decreased with increasing surgeon experience (P = .02), and instrument exchange did not lengthen operative times (P > .05 for all instruments).
Conclusion: During RALRP and RAPN, monopolar curved scissors required exchange in approximately 10% of cases whereas other instruments were rarely exchanged. Robotic instrument lifetime may not uniformly be 10 uses. The preset lifetime of robotic instruments and/or pricing should be reevaluated.
abstract_id: PUBMED:24912809
Pitfalls of robot-assisted radical prostatectomy: a comparison of positive surgical margins between robotic and laparoscopic surgery. Objectives: To compare the surgical outcomes of laparoscopic radical prostatectomy and robot-assisted radical prostatectomy, including the frequency and location of positive surgical margins.
Methods: The study cohort comprised 708 consecutive male patients with clinically localized prostate cancer who underwent laparoscopic radical prostatectomy (n = 551) or robot-assisted radical prostatectomy (n = 157) between January 1999 and September 2012. Operative time, estimated blood loss, complications, and positive surgical margins frequency were compared between laparoscopic radical prostatectomy and robot-assisted radical prostatectomy.
Results: There were no significant differences in age or body mass index between the laparoscopic radical prostatectomy and robot-assisted radical prostatectomy patients. Prostate-specific antigen levels, Gleason sum and clinical stage of the robot-assisted radical prostatectomy patients were significantly higher than those of the laparoscopic radical prostatectomy patients. Robot-assisted radical prostatectomy patients suffered significantly less bleeding (P < 0.05). The overall frequency of positive surgical margins was 30.6% (n = 167; 225 sites) in the laparoscopic radical prostatectomy group and 27.5% (n = 42; 58 sites) in the robot-assisted radical prostatectomy group. In the laparoscopic radical prostatectomy group, positive surgical margins were detected in the apex (52.0%), anterior (5.3%), posterior (5.3%) and lateral regions (22.7%) of the prostate, as well as in the bladder neck (14.7%). In the robot-assisted radical prostatectomy patients, they were observed in the apex, anterior, posterior, and lateral regions of the prostate in 43.0%, 6.9%, 25.9% and 15.5% of patients, respectively, as well as in the bladder neck in 8.6% of patients.
Conclusions: Positive surgical margin distributions after robot-assisted radical prostatectomy and laparoscopic radical prostatectomy are significantly different. The only disadvantage of robot-assisted radical prostatectomy is the lack of tactile feedback. Thus, the robotic surgeon needs to take this into account to minimize the risk of positive surgical margins.
abstract_id: PUBMED:26212891
Transperitoneal versus extraperitoneal robot-assisted laparoscopic radical prostatectomy: A prospective single surgeon randomized comparative study. Objectives: To compare operative, pathological, and functional results of transperitoneal and extraperitoneal robot-assisted laparoscopic radical prostatectomy carried out by a single surgeon.
Methods: After having experience with 32 transperitoneal laparoscopic radical prostatectomies, 317 extraperitoneal laparoscopic radical prostatectomies, 30 transperitoneal robot-assisted laparoscopic radical prostatectomies and 10 extraperitoneal robot-assisted laparoscopic radical prostatectomies, 120 patients with prostate cancer were enrolled in this prospective randomized study and underwent either transperitoneal or extraperitoneal robot-assisted laparoscopic radical prostatectomy. The main outcome parameters between the two study groups were compared.
Results: No significant difference was found for age, body mass index, preoperative prostate-specific antigen, clinical and pathological stage, Gleason score on biopsy and prostatectomy specimen, tumor volume, positive surgical margin, and lymph node status. Transperitoneal robot-assisted laparoscopic radical prostatectomy had shorter trocar insertion time (16.0 vs 25.9 min for transperitoneal robot-assisted laparoscopic radical prostatectomy and extraperitoneal robot-assisted laparoscopic radical prostatectomy, P < 0.001), whereas extraperitoneal robot-assisted laparoscopic radical prostatectomy had shorter console time (101.5 vs 118.3 min, respectively, P < 0.001). Total operation time and total anesthesia time were found to be shorter in extraperitoneal robot-assisted laparoscopic radical prostatectomy, without statistical significance (200.9 vs 193.2 min; 221.8 vs 213.3 min, respectively). Estimated blood loss was found to be lower for extraperitoneal robot-assisted laparoscopic radical prostatectomy (P = 0.001). Catheterization and hospitalization times were observed to be shorter in extraperitoneal robot-assisted laparoscopic radical prostatectomy (7.3 vs 5.8 days and 3.1 vs 2.3 days for transperitoneal robot-assisted laparoscopic radical prostatectomy and extraperitoneal robot-assisted laparoscopic radical prostatectomy, respectively, P < 0.05). The time to oral diet was significantly shorter in extraperitoneal robot-assisted laparoscopic radical prostatectomy (32.3 vs 20.1 h, P = 0.031). Functional outcomes (continence and erection) and complication rates were similar in both groups.
Conclusions: Extraperitoneal robot-assisted laparoscopic radical prostatectomy seems to be a good alternative to transperitoneal robot-assisted laparoscopic radical prostatectomy with similar operative, pathological and functional results. As the surgical field remains away from the bowel, postoperative return to normal diet and early discharge can be favored.
abstract_id: PUBMED:30789615
Methods for training of robot-assisted radical prostatectomy Robotic surgery is a key direction in the future of minimally invasive surgery. Robot-assisted radical prostatectomy (RARP) is a common method of surgical treatment of prostate cancer. Because the surgical technique of RARP differs significantly from open or laparoscopic radical prostatectomy (LRP), new methods of training are needed. At present there are many opinions on how best to train physicians, and which model is the most effective remains controversial.
Objective: To analyze currently available data on training methods for RARP, to determine the most effective training model and evaluate its advantages and disadvantages, and to establish a standardized plan and criteria for proper training and certification of the entire surgical team.
Material And Methods: Literature review based on PubMed database, Web of Science and Scopus by keywords: robot-assisted radical prostatectomy, training of robot-assisted prostatectomy, training in robot-assisted operations, a learning curve of robot-assisted prostatectomy, virtual reality simulators (VR-simulators) in surgery.
Results: According to the literature, on average 18 to 45 procedures are required for a surgeon to reach the plateau of the RARP learning curve. Parallel training, pre-operative warm-up, and the use of virtual reality simulators (VR-simulators) can significantly accelerate progression along the learning curve. Many models of RARP training have been described.
Conclusions: The absence of accepted criteria for evaluating the learning curve means that this parameter cannot be used as a guide to a surgeon's experience. Proper training of robotic surgeons is necessary and requires new training methods. There are different types of training programs. In our opinion, the most effective program is one in which the surgeon first observes tasks or steps of the operation performed on the VR-simulator, then performs them and analyzes mistakes from the video recording. The surgeon then observes real operations and, under supervision of a mentor, performs the steps of the operation already learned on the simulator, again analyzing mistakes from the video recording. By first mastering the simple stages under a mentor's supervision, the surgeon effectively adopts the mentor's surgical experience. It is necessary to train not only the surgeons but also the entire surgical team.
abstract_id: PUBMED:27637245
Robot-assisted laparoscopic radical prostatectomy after previous open transvesical adenomectomy. Introduction: Robot-assisted laparoscopic radical prostatectomy (RALRP) is one of the best treatments for patients with localized prostate cancer. RALRP is currently performed in patients without previous surgical treatment for benign prostatic hyperplasia. This paper presents a successfully performed RALRP after previous open transvesical adenomectomy (TVA).
Case Report: A 68-year-old patient underwent nerve-sparing RALRP for prostate cancer revealed by transrectal ultrasound guided prostate biopsy, 7 years after TVA.
Results: Postoperatively, a regular diet was allowed on day 1. The Foley catheter was removed on day 7. At 3 months' follow-up, the patient complained of moderate stress incontinence but erectile function was responsive to Tadalafil®. Serum prostate-specific antigen was undetectable. Quality of life was satisfactory.
Conclusions: A history of previous prostatic surgery does not appear to compromise the outcome of RALRP. Nerve sparing is still indicated. Long-term follow-up is necessary to define RALRP as a gold standard also in patients with previous TVA.
abstract_id: PUBMED:28799065
Laparoscopic inguinal hernioplasty after robot-assisted laparoscopic radical prostatectomy. Purpose: To evaluate the efficacy and safety of laparoscopic transabdominal preperitoneal (TAPP) inguinal hernia repair in patients who have undergone robot-assisted laparoscopic radical prostatectomy (RALP).
Methods: From July 2014 to December 2016, TAPP inguinal hernia repair was conducted in 40 consecutive patients who had previously undergone RALP. Their data were retrospectively analyzed as an uncontrolled case series.
Results: The mean operation time in patients who had previously undergone RALP was 99.5 ± 38.0 min. The intraoperative blood loss volume was small, and the duration of hospitalization was 2.0 ± 0.5 days. No intraoperative complications or major postoperative complications occurred. During the average 11.2-month follow-up period, no patients who had previously undergone prostatectomy developed recurrence.
Conclusions: Laparoscopic TAPP inguinal hernia repair after RALP was safe and effective. TAPP inguinal hernia repair may be a valuable alternative to open hernioplasty.
abstract_id: PUBMED:31120453
Robot-assisted radical prostatectomy - functional result. Part II Robot-assisted surgery is one of the most important achievements of modern medicine. Robot-assisted operations, widely used in urology, gynecology, and general and cardiovascular surgery, are considered by many experts a new 'gold standard' of surgical treatment for various diseases in developed countries. Well-known advantages of robot-assisted surgery are low invasiveness, 3D visualization of the surgical field, and high accuracy of instrument movements, resulting in minimal intraoperative blood loss, short hospital stay, rapid recovery, and only a brief period of social maladjustment for operated patients. Robot-assisted radical prostatectomy in patients with prostate cancer is the most common robotic procedure worldwide. Better functional outcomes are due to a new understanding of pelvic surgical anatomy and a changed approach to dissection and to preservation of the external urethral sphincter and neurovascular bundles. Prostate neuroanatomy and the various options for preservation of the neurovascular bundles are reviewed in the article. In addition, our own experience with robot-assisted radical prostatectomy and its favorable functional results is presented.
abstract_id: PUBMED:33457670
Surgical Drain-Related Intestinal Obstruction After Robot-Assisted Laparoscopic Radical Prostatectomy in Two Cases. Background: Drainage tubes are almost always routinely used after a laparoscopic or robot-assisted radical prostatectomy and pelvic lymphadenectomy to prevent urinoma formation and lymphoceles. They are seldom of any consequence. We present our unique experience of bowel obstruction resulting from the use of pelvic drains. Case Presentation: We are reporting on two prostate cancer cases with rare postoperative complications. Each of them received robot-assisted laparoscopic radical prostatectomy and bilateral pelvic lymph node dissection and subsequently developed ileus and bowel obstruction. Serial follow-up images suggested the bowel obstruction was related to their drainage tube. No evidence of urine leakage or intestinal perforation was found based on drainage fluid analysis. We performed exploratory laparotomy in the first patient and found the drainage tube kinking with the terminal ileum and an adhesion band. The drainage tube was removed and the patient recovered over the following days. In the second case, the patient experienced bowel obstruction for 4 days after surgery. Based on our experience in the first case, and a drainage fluid survey showing no evidence of urine leakage, we removed the drainage tube on the morning of the 4th day, giving the patient a dramatic recovery with flatus and stool passage occurring in the afternoon. Both of the patients recovered well in hospital and during regular follow-up. Conclusion: To the best of our knowledge, despite certain case reports of drainage tube ileus in colorectal and bowel surgery, we report here the first two cases of small bowel obstruction as a complication arising from the abdominal drainage tube used in robot-assisted urologic surgery.
abstract_id: PUBMED:29366855
Quality of Life After Open Radical Prostatectomy Compared with Robot-assisted Radical Prostatectomy. Background: Surgery for prostate cancer has a large impact on quality of life (QoL).
Objective: To evaluate predictors for the level of self-assessed QoL at 3 mo, 12 mo, and 24 mo after robot-assisted laparoscopic (RALP) and open radical prostatectomy (ORP).
Design, Setting, And Participants: The LAParoscopic Prostatectomy Robot Open study, a prospective, controlled, nonrandomised trial of more than 4000 men who underwent radical prostatectomy at 14 centres. Here we report on QoL issues after RALP and ORP.
Outcome Measurements And Statistical Analysis: The primary outcome was self-assessed QoL preoperatively and at 3 mo, 12 mo, and 24 mo postoperatively. A direct validated question of self-assessed QoL on a seven-digit visual scale was used. Differences in QoL were analysed using logistic regression, with adjustment for confounders.
Results And Limitations: QoL did not differ between RALP and ORP postoperatively. In a multivariable analysis, men undergoing ORP had a significantly lower preoperative level of self-assessed QoL compared with men undergoing RALP (odds ratio: 1.21, 95% confidence interval: 1.02-1.43), a difference that disappeared when adjusted for preoperative preparedness for incontinence, erectile dysfunction, and certainty of being cured (odds ratio: 1.18, 95% confidence interval: 0.99-1.40). Incontinence and erectile dysfunction increased the risk for poor QoL at 3 mo, 12 mo, and 24 mo postoperatively. Biochemical recurrence did not affect QoL. A limitation of the study is the nonrandomised design.
Conclusions: QoL at 3 mo, 12 mo, and 24 mo after RALP or ORP did not differ significantly between the two techniques. Poor QoL was associated with postoperative incontinence and erectile dysfunction but not with early cancer relapse, which was related to thoughts of death and waking up at night with worry.
Patient Summary: We did not find any difference in quality of life at 3 mo, 12 mo, and 24 mo when open and robot-assisted surgery for prostate cancer were compared. Postoperative incontinence and erectile dysfunction were associated with poor quality of life.
abstract_id: PUBMED:34145964
The clinical impact of robot-assisted laparoscopic rectal cancer surgery associated with robot-assisted radical prostatectomy. Introduction: Robot-assisted laparoscopic surgery has been performed in various fields, especially in the pelvic cavity. However, little is known about the utility of robot-assisted laparoscopic rectal cancer surgery associated with robot-assisted radical prostatectomy (RARP). We herein report the clinical impact of robot-assisted laparoscopic rectal cancer surgery associated with RARP.
Methods: We experienced five cases of robot-assisted laparoscopic rectal cancer surgery associated with RARP. One involved robot-assisted laparoscopic abdominoperineal resection with en bloc prostatectomy for T4b rectal cancer, and one involved robot-assisted laparoscopic intersphincteric resection combined with RARP for synchronous rectal and prostate cancer. The remaining three involved robot-assisted laparoscopic low anterior resection (RaLAR) after RARP. For robot-assisted laparoscopic rectal cancer surgery, the da Vinci Xi surgical system was used.
Results: We could perform planned robotic rectal cancer surgery in all cases. The median operation time was 529 min (373-793 min), and the median blood loss was 307 ml (32-1191 ml). No patients required any transfusion in the intra-operative or immediate peri-operative period. The circumferential resection margin was negative in all cases. There were no complications of grade ≥III according to the Clavien-Dindo classification and no conversions to conventional laparoscopic or open surgery.
Conclusion: Robot-assisted laparoscopic surgery associated with RARP is feasible in patients with rectal cancer. The long-term surgical outcomes remain to be further evaluated.
Answer: The study investigating the rate of premature instrument exchange during robot-assisted laparoscopic radical prostatectomy (RALRP) and robot-assisted partial nephrectomy (RAPN) examined whether the predetermined 10-use lifespan of robotic instruments is justified, and its findings suggested that it may not be for all instruments. During RALRP, monopolar curved scissors required exchange in approximately 12.4% of cases, indicating that they did not reach the 10-use lifespan in these instances, whereas other instruments were exchanged in less than 2% of cases. The exchange rates were similar for RAPN. The study concluded that robotic instrument lifetime may not uniformly be 10 uses and suggested that the preset lifetime of robotic instruments and/or pricing should be reevaluated (PUBMED:26276575).
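As an illustration of the comparison described in the Methods of PUBMED:26276575, the sketch below shows how instrument-exchange counts in the first versus second half of a case series could be compared with Pearson's chi-squared test. The counts and variable names are hypothetical and are not data from the study.

```python
# Minimal sketch (assumed, not the authors' code): compare exchange rates between
# the first and second half of a case series with Pearson's chi-squared test.
from scipy.stats import chi2_contingency

# rows: first half, second half of cases; columns: exchange, no exchange (hypothetical counts)
table = [[55, 735],
         [40, 749]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```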
Instruction: Should the organs at risk be contoured in vaginal cuff brachytherapy?
Abstracts:
abstract_id: PUBMED:21193355
Should the organs at risk be contoured in vaginal cuff brachytherapy? Purpose: To assess the dose to the organs at risk (OARs) and utility of repeated OAR dose-volume histogram calculations in multifraction high-dose-rate vaginal cylinder brachytherapy using 3-dimensional imaging.
Methods And Materials: Thirty-eight patients (125 fractions) received high-dose-rate brachytherapy to the vaginal vault between January 2005 and October 2005. All patients emptied their bladders before insertion. After each insertion, a CT scan with 2.5-mm slices was obtained, and the bladder, rectum, and sigmoid were contoured. Dose-volume histograms were generated for the D(0.1cc) and D(2cc) of each OAR using a software program created at our institution. Variance component models estimated the within-patient variance of the dose to the OAR between fractions. Predictors of dose to the OAR were identified using linear mixed models.
Results: The within-patient coefficients of variation of total D(0.1cc) dose were bladder 14.0%, rectum 7.9%, and sigmoid 27.6%; for D(2cc), these were 8.1%, 5.9%, and 20.3%, respectively. Intraclass correlations ranged from 0.27 to 0.79. Larger OAR predicted greater total D(0.1cc) and D(2cc). Other predictors of total D(0.1cc) and D(2cc) dose included the size of the cylinder and the length of the treatment field for rectum.
Conclusions: CT simulation provides a noninvasive assessment of the dose to the bladder, rectum, and sigmoid. The small within-patient variation in doses to the bladder and rectum do not support reporting doses to the OARs beyond the initial fraction.
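To make the within-patient variability reported in PUBMED:21193355 concrete, the following minimal sketch computes a per-patient coefficient of variation of bladder D2cc across fractions. All dose values and the array layout are hypothetical; the study itself used variance component models rather than this simple per-patient calculation.

```python
# Minimal sketch (assumed): within-patient coefficient of variation of D2cc across fractions.
import numpy as np

# rows = patients, columns = fractions; values = bladder D2cc per fraction in Gy (hypothetical)
d2cc = np.array([
    [4.8, 5.1, 4.6],
    [3.9, 4.2, 4.0],
    [5.5, 5.0, 5.3],
])

within_cv = d2cc.std(axis=1, ddof=1) / d2cc.mean(axis=1)  # one CV per patient
print("per-patient CV (%):", np.round(100 * within_cv, 1))
print("mean within-patient CV (%):", round(100 * within_cv.mean(), 1))
```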
abstract_id: PUBMED:35079256
Is adaptive treatment planning for single-channel vaginal brachytherapy necessary? Purpose: In vaginal cuff brachytherapy, only limited information is available about the need for individualized treatment planning or imaging. Treatment planning is still mostly performed without contouring the target volume or organs at risk, using a standard-plan approach. Dose prescription, fractionation, and treatment planning practices vary from site to site. Without imaging, dose must be prescribed in terms of fixed distances from a known reference, such as the applicator surface. Because patients' anatomies differ, this might lead to under-dosing of the target and unnecessarily high doses delivered to adjacent organs. Also, reliable recording of the delivered dose is difficult. These various uncertainties related to standard planning and the lack of imaging indicate a clear need to find an optimal method of dose planning for vaginal cuff brachytherapy.
Material And Methods: A study was conducted, in which 100 vaginal cuff brachytherapy patients' computed tomography (CT) images with applicator in situ were retrospectively analyzed to investigate target-area coverage and critical-organ doses. In addition, 28 patients' plans were re-planned with different planning approaches, to evaluate an optimal dose-planning strategy. From treatment plans, target coverage and organs-at-risk doses were assessed.
Results And Conclusions: The analysis showed that, in order to cover the distal part of the vaginal cuff, the dose should be prescribed at 10 mm from the tip of the applicator. Individualized image-based planning is recommended at least for the first fraction. This would yield lower doses to the bladder. Rectum and sigmoid doses are not significantly affected by the planning approach.
abstract_id: PUBMED:24143152
Vaginal cuff dehiscence after vaginal cuff brachytherapy for uterine cancer. A case report. Vaginal cuff dehiscence is a rare but potentially serious complication after total hysterectomy. We report a case of vaginal cuff dehiscence after vaginal cuff brachytherapy. A 62-year-old woman underwent a robotic-assisted laparoscopic hysterectomy with bilateral salpingo-oophorectomy, and was found to have International Federation of Gynecology and Obstetrics (FIGO) 2009 stage IB endometrioid adenocarcinoma of the uterus. The patient was referred for adjuvant vaginal cuff brachytherapy. During the radiation treatment simulation, a computerized tomography (CT) scan of the pelvis showed an abnormal position of the vaginal cylinder. She was found to have vaginal cuff dehiscence that required immediate surgical repair. Vaginal cuff dehiscence triggered by vaginal cuff brachytherapy is very rare, with only one case report in the literature.
abstract_id: PUBMED:36414525
Evaluating the relationship between vaginal apex "dog ears" and patterns of recurrence in endometrial cancer following adjuvant image guided vaginal cuff brachytherapy. Purpose: The aim of this investigation is to characterize vaginal apex "dog ears" and their association with patterns of treatment failure in patients with endometrial cancer treated with adjuvant high-dose-rate (HDR) single-channel vaginal cuff brachytherapy (VCB).
Methods: A retrospective review of patients treated with HDR VCB from 2012 to 2021 for medically operable endometrial cancer at a single institution was conducted. Dog ears, defined as tissue at the apex extending at least 10 mm from the brachytherapy applicator were identified on CT simulation images. Fisher exact test and a multivariate logistic regression model evaluated the association between factors of interest with treatment failure. Vaginal cuff failure free survival (VCFFS) was calculated from first brachytherapy to vaginal cuff recurrence (VCR).
Results: A total of 219 patients were reviewed. In this sample, 57.5% of patients met criteria for having dog ears. In total, 13 patients (5.9%) developed a VCR. There was no statistically significant difference in the rate of VCR between patients with and without dog ears (7.1% vs. 4.3%, p = 0.56). There was a trend toward increased risk of recurrence with higher grade histology identified in the multivariate logistic regression model (p = 0.085). The estimated 3-year probability of VCFFS was 86%.
Conclusions: Vaginal apex dog ears are prevalent but are not found to statistically increase the risk of VCR after VCB in our single institution experience. However, while local failure remains low in this population, we report an absolute value of over twice as many VCRs in patients with dog ears, indicating that with improved dog ear characterization this may remain a relevant parameter for consideration in treatment planning.
abstract_id: PUBMED:33384254
Do air gaps with image-guided vaginal cuff brachytherapy impact failure rates in patients with high-intermediate risk FIGO Stage I endometrial cancer? Purpose: The aim of this study was to assess the impact of air gaps at the cylinder surface on the rate of vaginal cuff failure (VCF) after image-guided adjuvant vaginal cuff brachytherapy (VCBT) in the treatment of high-intermediate risk (HIR) FIGO (Fédération Internationale de Gynécologie et d'Obstétrique; International Federation of Gynecology and Obstetrics) Stage I endometrial cancer.
Methods And Materials: A retrospective review of patients treated with image-guided VCBT from 2009 to 2016 for HIR FIGO Stage I endometrial cancer was performed. Air gaps present at the applicator surface on the first postinsertion CT were contoured. Vaginal cuff failure-free survival (VCFFS) was measured from the first fraction of VCBT to VCF.
Results: A total of 234 patients were identified. Air gaps were present on the first postinsertion CT scan in 82% of patients. The median number of air gaps was 2 (interquartile range [IQR] 1-3), median depth of the largest air gap was 2.7 mm (IQR 2.1-3.4 mm), and the median cumulative volume of air gaps was less than 0.1 cm3 (range < 0.1-0.7 cm3). At a median followup of 56 months (IQR 41-69), 12 patients (5%) experienced VCF, of which 4 had isolated VCF and 8 had synchronous pelvic or distant failure. Five-year VCFFS and isolated VCFFS were 96% (95% confidence interval 93-98%) and 98% (95% confidence interval 96-100%), respectively. On univariate analysis, no factors, including the presence, number, maximum depth, or cumulative volume of air gaps, were predictive for VCFFS.
Conclusions: In this population, VCFFS remained high despite most patients having air gaps present on postinsertion CT scan.
abstract_id: PUBMED:35079252
Dosimetric impact of bladder filling on organs at risk with barium contrast in the small bowel for adjuvant vaginal cuff brachytherapy. Purpose: The aim of this prospective study was to analyze dosimetric impact of modifying bladder filling on dose distribution in organs at risk (OARs) when using contrast in the small bowel of patients under adjuvant therapy with high-dose-rate vaginal cuff brachytherapy (HDR-VCB) for endometrial cancer.
Material And Methods: This research included 19 patients who underwent laparoscopic surgery. They were treated with HDR-VCB and 2.5-3.5 cm diameter cylinders. Two successive computerized tomography (CT) scans were performed, with empty bladder and with bladder filled with 180 cc of saline solution. Bladder, rectum, sigmoid, and small bowel were delineated as OARs. Oral barium contrast was used to clearly visualize small bowel loops. Prescription dose was 7 Gy. Dose-volume histograms were generated for each OAR, with full and empty bladder to compare doses received.
Results: Bladder distension had no dosimetric impact on the bladder, rectum, or sigmoid, unlike the small bowel. The mean bladder minimum dose to the most exposed 2 cc (D2cc) was not significantly higher with a full vs. an empty bladder (5.56 vs. 5.06 Gy, p = 0.07), whereas there was a significant reduction in the small bowel (1.68 vs. 2.70 Gy, p < 0.001). With a full bladder, the dose to 50% of the volume (D50%) of the bladder increased (2.11 vs. 1.28 Gy, p < 0.001), and the small-bowel D50% decreased (0.70 vs. 1.09 Gy, p < 0.001).
Conclusions: The present study describes the dose received by organs at risk during HDR-VCB, making it possible to define the dose received by small bowel loops, when visualized with oral barium contrast. In patients undergoing laparoscopic surgery, a full bladder during HDR-VCB reduces the dose to the small bowel without a clinically relevant dose increase in the bladder, and no dose increase in other OARs.
abstract_id: PUBMED:28848362
Vaginal cuff brachytherapy in endometrial cancer - a technically easy treatment? Endometrial cancer (EC) is one of the most common gynecological cancers among women in developed countries. The vaginal cuff is the main location of relapse after curative surgery, and postoperative radiation therapy has proven to diminish such relapses. Nevertheless, these results have not translated into better survival. The preeminent place of vaginal cuff brachytherapy (VCB) in the postoperative treatment of high- to intermediate-risk EC was established by the PORTEC-2 trial, which demonstrated a similar reduction in relapses with VCB as with external beam radiotherapy (EBRT), but with less late toxicity from VCB. As a result of this trial, the use of VCB has increased in clinical practice at the expense of EBRT. Most clinical reviews of VCB address the risk categories and patient selection but pay little attention to technical aspects of the VCB procedure. Our review aimed to address both aspects. First, we described the risk groups, which guide patient selection for VCB in clinical practice. Then, we described several technical aspects that might influence dose deposition and toxicity. Bladder distension and rectal distension, as well as applicator position and patient position, are some of the variables that we reviewed.
abstract_id: PUBMED:26796601
Do changes in interfraction organ at risk volume and cylinder insertion geometry impact delivered dose in high-dose-rate vaginal cuff brachytherapy? Purpose: Within a multifraction high-dose-rate vaginal cuff brachytherapy course, we determined if individual variations in organ at risk (OAR) volume and cylinder insertion geometry (CIG) impacted dose and whether planned minus fractional (P - F) differences led to a discrepancy between planned dose and delivered dose.
Methods And Materials: We analyzed vaginal cuff brachytherapy applications from consecutive patients treated with three fractions of 5 Gy after each undergoing a planning CT and three repeat fractional CTs (fCTs). Rectal and bladder D2ccs and volumes were recorded in addition to the x (in relationship to midplane) and y (in relationship to the table) angles of CIG. Paired t-tests and multiple regression analyses were performed.
Results: Twenty-seven patients were identified. In comparing the planning CT vs. mean fCT rectal volumes, bladder volumes, x angles, and y angles, only bladder volume was significantly different (planned volume higher, t = 2.433, p = 0.017). The cumulative mean planned OAR D2cc vs. delivered D2cc was only significantly different for the bladder (planned dose lower, t = -2.025, p = 0.053). Regression analysis revealed planned rectal D2cc (p < 0.0003) and a positive (posterior) y insertion angle (p = 0.015) to significantly impact delivered rectal D2cc. Additionally, P - F rectal volume (p = 0.037) was significant in determining rectal delivered dose.
Conclusions: A more posterior y angle of insertion was found to increase rectal D2cc leading us to believe that angling the vaginal cylinder anteriorly may reduce rectal dose without significantly increasing bladder dose. Although attention should be paid to OAR volume and CIG to minimize OAR dose, the clinical significance of P - F changes remains yet to be shown.
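A minimal sketch of the kind of multiple regression described in PUBMED:26796601, modeling delivered rectal D2cc on planned rectal D2cc and the y (posterior) insertion angle. The data are synthetic and the coefficients are illustrative assumptions, not results from the study.

```python
# Minimal sketch (assumed): ordinary least-squares regression of delivered rectal D2cc
# on planned rectal D2cc and the y insertion angle, with synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
planned_d2cc = rng.normal(3.5, 0.5, 27)                     # Gy, hypothetical
y_angle = rng.normal(0.0, 5.0, 27)                          # degrees (posterior positive), hypothetical
delivered_d2cc = 0.9 * planned_d2cc + 0.05 * y_angle + rng.normal(0, 0.2, 27)

X = sm.add_constant(np.column_stack([planned_d2cc, y_angle]))
fit = sm.OLS(delivered_d2cc, X).fit()
print(fit.params)    # intercept and coefficients
print(fit.pvalues)   # corresponding p-values
```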
abstract_id: PUBMED:34122569
Vaginal cuff brachytherapy: do we need to treat to more than a two-centimeter active length? Purpose: American Brachytherapy Society (ABS) guidelines recommend using a 3-5 cm active length (AL) when treating vaginal cuff (VC) in adjuvant setting of endometrial cancer (EC). The purpose of this study was to evaluate local control and toxicity, using an AL of 1 or 2 cm and immobilization with a traditional table-mounted (stand) or patient-mounted (suspenders) device.
Material And Methods: Between 2005 and 2019, 247 patients with EC were treated with adjuvant high-dose-rate vaginal cuff (HDR-VC) brachytherapy with or without external beam radiation (EBRT). Treatment was prescribed to a 0.5 cm depth, with an AL of 1 or 2 cm, using stand or suspenders. VC boost after EBRT was typically administered with 2 fractions of 5.5 Gy, while VC brachytherapy alone was typically applied with 3 fractions of 7 Gy or 5 fractions of 5.5 Gy.
Results: The combination of suspender immobilization and an AL of 2 cm (n = 126, 51%) resulted in 5-year local control of 100%. An AL of 2 cm compared to 1 cm correlated with better local control (99.1% vs. 88.5%, p = 0.0479). Regarding immobilization, suspenders correlated with improved local control compared to stand (100% vs. 86.7%, p = 0.0038). Immobilization technique was significantly correlated with AL (p < 0.0001). Only 5 (2.0%) patients experienced grade ≥ 3 toxicity, all of whom received EBRT.
Conclusions: In the present series, an AL of 2 cm provided excellent local control, while 1 cm was inadequate. Suspender immobilization was a practical alternative to stand immobilization in HDR brachytherapy of the vaginal cuff.
abstract_id: PUBMED:33897788
Vaginal cuff brachytherapy practice in endometrial cancer patients: a report from the Turkish Oncology Group. Purpose: The American Brachytherapy Society is attempting to develop standards for delivering brachytherapy, although differences in practice have been reported in the literature. This study evaluated vaginal cuff brachytherapy (VBT) practice and quality-of-life-related recommendations among Turkish radiation oncologists.
Material And Methods: A nationwide web-based 17-item survey was distributed to the members of the Turkish Society for Radiation Oncology. These members received e-mail notifications, and a link was posted on the Turkish Society for Radiation Oncology internet site to solicit voluntary responses. The survey addressed the simulation processes, target volume, prescribed dose, delivery schedules, and recommendations related to vaginal side effects.
Results: Fifty-seven radiation oncologists responded to the survey. The most used dose fraction schemes for adjuvant VBT were 7 Gy × 3 fractions (30%), 5.5 Gy × 5 fractions (26%), and 6 Gy × 5 fractions (28%). The preferred VBT scheme was 5 Gy × 3 fractions (50%) when the external beam radiotherapy (EBRT) dose was 45 Gy external radiotherapy, while the preferred schemes were 6 Gy × 3 fractions (30%) or 5 Gy × 3 fractions (32%) when the external radiotherapy dose was increased to 50.4 Gy. One-half of the respondents delivered VBT twice a week, and the dose was prescribed to 0.5 cm from vaginal mucosa by 86% of the respondents. There was no common definition for the dose prescription length, which was defined as 3 cm from the vaginal cuff in 33% of responses and as 4 cm in 35% of responses. For serous and clear cell histological types, 38% of the respondents targeted "full cylinder length". To prevent vaginal side effects, 78% of the respondents recommended using a vaginal dilator and/or sexual intercourse after VBT.
Conclusions: This survey revealed variations in the clinical practice of VBT among Turkish radiation oncologists, which suggests that standardization is necessary.
Answer: The question of whether organs at risk (OARs) should be contoured in vaginal cuff brachytherapy is addressed in several studies. According to the study by PUBMED:21193355, CT simulation allows for a noninvasive assessment of the dose to the bladder, rectum, and sigmoid. However, the small within-patient variation in doses to the bladder and rectum does not support reporting doses to the OARs beyond the initial fraction. This suggests that while it is possible to contour OARs, the variation in dose between fractions may not be significant enough to warrant repeated contouring and dose-volume histogram calculations for each fraction.
On the other hand, the study by PUBMED:35079256 suggests that individualized image-based planning is recommended at least for the first fraction to ensure adequate target coverage and potentially yield lower doses to the bladder. This indicates that contouring OARs can be beneficial, at least initially, to optimize the treatment plan.
The study by PUBMED:35079252 also supports the idea of contouring OARs, as it found that bladder distension had a dosimetric impact on the small bowel when using contrast in the small bowel during high-dose-rate vaginal cuff brachytherapy. This suggests that understanding the dose distribution to OARs can help in making clinical decisions, such as modifying bladder filling to reduce the dose to the small bowel.
In summary, while there may be some variability in practice, the evidence suggests that contouring OARs can be beneficial in vaginal cuff brachytherapy, particularly for individualized treatment planning and optimizing the dose distribution to minimize the risk to adjacent organs. However, the necessity of repeating this process for each fraction may not be supported due to the small variation in dose to the bladder and rectum between fractions. |
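For readers unfamiliar with the D2cc and D0.1cc metrics discussed above, the sketch below shows one way such values could be extracted from a voxelised organ-at-risk dose distribution: the minimum dose received by the hottest 2 cc (or 0.1 cc) of the organ. The voxel volume, dose values, and function name are hypothetical; clinical treatment-planning systems compute these from contoured structures and dose-volume histograms.

```python
# Minimal sketch (assumed): D2cc / D0.1cc from a per-voxel organ dose distribution.
import numpy as np

voxel_volume_cc = 0.01                                            # hypothetical voxel volume
doses = np.random.default_rng(1).gamma(2.0, 1.5, size=50_000)     # hypothetical per-voxel doses (Gy)

def dose_to_hottest_volume(doses_gy, voxel_cc, volume_cc):
    """Minimum dose covering the hottest `volume_cc` of tissue (e.g. D2cc)."""
    n_voxels = int(round(volume_cc / voxel_cc))
    hottest = np.sort(doses_gy)[-n_voxels:]
    return hottest.min()

print("D2cc   =", round(dose_to_hottest_volume(doses, voxel_volume_cc, 2.0), 2), "Gy")
print("D0.1cc =", round(dose_to_hottest_volume(doses, voxel_volume_cc, 0.1), 2), "Gy")
```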
Instruction: The round window: is it the "cochleostomy" of choice?
Abstracts:
abstract_id: PUBMED:31750226
Functional Outcomes in Cochleostomy and Round Window Insertion Technique: Difference or No Difference? With the introduction and rapid development of cochlear implants since the 1970s, there has been marked improvement in the speech recognition and spoken language skills of implanted profoundly deaf children. Cochlear implantation can be performed by different techniques, traditionally the cochleostomy method and the round window membrane (RWM) insertion technique. Postoperatively, functional outcomes are measured by many scores, most commonly the Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scores. The aim was to study the speech and hearing perception skills in pediatric cases of congenital non-syndromic bilateral profound sensorineural hearing loss after the cochleostomy and round window insertion techniques of cochlear implantation. Thirty-one patients clinically diagnosed with congenital non-syndromic bilateral profound sensorineural hearing loss, who had undergone cochlear implantation either by cochleostomy or by the RWM insertion technique and fulfilled the eligibility criteria, were enrolled in the study. Postoperatively, functional outcomes were assessed subjectively by measuring CAP and SIR scores. All patients showed an increase in their CAP and SIR scores postoperatively, measured at 3 months, 6 months, and 1 year after cochlear implantation. The mean CAP and SIR scores in the two groups were comparable at 3 months, 6 months, and 1 year after surgery. There was no significant difference in the speech and hearing perception skills between the two groups of implantees (p value < 0.05). There is no difference in functional outcomes of cochlear implantation between the cochleostomy and round window membrane insertion techniques.
abstract_id: PUBMED:38294508
Comparison of depth of electrode insertion between cochleostomy and round window approach: a cadaveric study. Introduction: The round window approach and the cochleostomy approach can result in different depths of electrode insertion during cochlear implantation, which in turn can alter the audiological outcomes of the cochlear implant.
Objective: The current study was conducted to determine the difference in the depth of electrode insertion via the cochleostomy and round window approaches when performed serially in the same temporal bone.
Methodology: This cross-sectional study was conducted in the Department of Otorhinolaryngology, in conjunction with the Department of Anatomy and the Department of Diagnostic and Interventional Radiology, over a period of 1 year. A 12-electrode array insertion was performed via each approach (cochleostomy or round window) in the cadaveric temporal bones. An HRCT temporal bone scan of each implanted temporal bone was done, and the depth of insertion and various cochlear parameters were calculated.
Result: A total of 12 temporal bones were included for imaging analysis. The mean cochlear duct length was 32.892 mm; the alpha and beta angles were 58.175° and 8.350°, respectively. The mean angular depth of electrode insertion was 325.2° (SD = 150.5842) via the round window and 327.350° (SD = 112.79) via cochleostomy, and the mean linear depth of electrode insertion was 18.80 mm (SD = 4.4962) via the round window and 19.650 mm (SD = 3.8087) via cochleostomy, calculated using OTOPLAN 1.5.0 software. There was a statistically significant difference in linear depth of insertion between the round window and cochleostomy approaches. Although the angular depth of insertion was higher in the cochleostomy group, the difference from round window insertion was not statistically significant.
Conclusion: The depth of electrode insertion is one of the parameters that influences the hearing outcome. The linear depth of electrode insertion was greater with cochleostomy than with the round window approach (p = 0.075), and a difference in angular depth of electrode insertion was present but not significant (p = 0.529).
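Because both approaches were performed serially in the same temporal bones in this study, a paired comparison of insertion depths is the natural analysis. The sketch below illustrates such a comparison with a paired t-test; the depth values are hypothetical and are not the study's data.

```python
# Minimal sketch (assumed): paired comparison of linear insertion depth, same bones, two approaches.
from scipy.stats import ttest_rel

round_window_mm = [18.1, 19.0, 17.5, 20.2, 18.8, 19.5]   # hypothetical depths (mm)
cochleostomy_mm = [19.0, 19.8, 18.0, 21.0, 19.6, 20.1]   # hypothetical depths, same bones (mm)

t, p = ttest_rel(cochleostomy_mm, round_window_mm)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```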
abstract_id: PUBMED:36514425
OTOPLAN-Based Study of Intracochlear Electrode Position Through Cochleostomy and Round Window in Transcanal Veria Technique. To study the postoperative visualisation of the electrode array insertion angle through the transcanal Veria approach in both round window and cochleostomy techniques. Retrospective study. Tertiary care centre. 26 subjects aged 2-15 years implanted with a MED-EL STANDARD electrode array (31.5 mm) through the Veria technique were selected. Sixteen had the electrode insertion through the round window and 10 through an anteroinferior cochleostomy. DICOM files of postoperative computed tomography (CT) scans were collected and analysed using the OTOPLAN 3.0 software. Examined parameters were cochlear duct length and average angle of insertion depth. Pearson's Correlation Test was utilized for statistical analysis. Average cochlear duct length was 38.12 mm, ranging from 34.2 to 43 mm. Average angle of insertion depth was 666 degrees through round window insertion and 670 degrees through cochleostomy insertion. Pearson's correlation showed no significant difference in average angle of insertion depth between subjects with cochleostomy and round window insertion. Detailed study with the OTOPLAN software has established that there is no difference between round window and cochleostomy insertion when it comes to electrode array position and placement in the scala tympani. It is feasible to perform round window insertion and cochleostomy insertion through the transcanal Veria approach as this technique provides good visualisation.
abstract_id: PUBMED:33958008
Spectral resolution and speech perception after cochlear implantation using the round window versus cochleostomy technique. Objective: To evaluate the spectral resolution achieved with a cochlear implant in users who were implanted using round window route electrode insertion versus a traditional cochleostomy technique.
Methods: Twenty-six patients were classified into two groups according to the surgical approach: one group (n = 13) underwent cochlear implantation via the round window technique and the other group (n = 13) underwent surgery via cochleostomy.
Results: A statistically significant difference was found in spectral ripple discrimination scores between the round window and cochleostomy groups. The round window group performed almost two times better than the cochleostomy group. Differences between Turkish matrix sentence test scores were not statistically significant.
Conclusion: The spectral ripple discrimination scores of patients who had undergone round window cochlear implant electrode insertion were superior to those of patients whose cochlear implants were inserted using a classical cochleostomy technique.
abstract_id: PUBMED:29204559
The impact of round window vs cochleostomy surgical approaches on interscalar excursions in the cochlea: Preliminary results from a flat-panel computed tomography study. Objective: To evaluate incidence of interscalar excursions between round window (RW) and cochleostomy approaches for cochlear implant (CI) insertion.
Methods: This was a retrospective case-comparison. Flat-panel CT (FPCT) scans for 8 CI users with Med-El standard length electrode arrays were collected. Surgical technique was identified by a combination of operative notes and FPCT imaging. Four cochleae underwent round window insertion and 4 cochleae underwent cochleostomy approaches anterior and inferior to the round window.
Results: In our pilot study, cochleostomy approaches were associated with a higher likelihood of interscalar excursion. Within the cochleostomy group, we found 29% of electrode contacts (14 of 48 electrodes) to be outside the scala tympani. On the other hand, 8.5% of the electrode contacts (4 of 47 electrodes) in the round window insertion group were extra-scalar to the scala tympani. These displacements occurred at a mean angle of occurrence of 364° ± 133°, near the apex of the cochlea. Round window electrode displacements tend to localize at angle of occurrences of 400° or greater. Cochleostomy electrodes occurred at an angle of occurrence of 19°-490°.
Conclusions: Currently, the optimal surgical approach for standard CI electrode insertion is highly debated, to a certain extent due to a lack of post-operative assessment of intracochlear electrode contact. Based on our preliminary findings, cochleostomy approach is associated with an increased likelihood of interscalar excursions, and these findings should be further evaluated with future prospective studies.
abstract_id: PUBMED:26739790
Delayed low frequency hearing loss caused by cochlear implantation interventions via the round window but not cochleostomy. Cochlear implant recipients show improved speech perception and music appreciation when residual acoustic hearing is combined with the cochlear implant. However, up to one third of patients lose their pre-operative residual hearing weeks to months after implantation, for reasons that are not well understood. This study tested whether this "delayed" hearing loss was influenced by the route of electrode array insertion and/or position of the electrode array within scala tympani in a guinea pig model of cochlear implantation. Five treatment groups were monitored over 12 weeks: (1) round window implant; (2) round window incised with no implant; (3) cochleostomy with medially-oriented implant; (4) cochleostomy with laterally-oriented implant; and (5) cochleostomy with no implant. Hearing was measured at selected time points by the auditory brainstem response. Cochlear condition was assessed histologically, with cochleae three-dimensionally reconstructed to plot electrode paths and estimate tissue response. Electrode array trajectories matched their intended paths. Arrays inserted via the round window were situated nearer to the basilar membrane and organ of Corti over the majority of their intrascalar path compared with arrays inserted via cochleostomy. Round window interventions exhibited delayed, low frequency hearing loss that was not seen after cochleostomy. This hearing loss appeared unrelated to the extent of tissue reaction or injury within scala tympani, although round window insertion was histologically the most traumatic mode of implantation. We speculate that delayed hearing loss was related not to the electrode position as postulated, but rather to the muscle graft used to seal the round window post-intervention, by altering cochlear mechanics via round window fibrosis.
abstract_id: PUBMED:25583631
Residual hearing preservation after cochlear implantation via round window or cochleostomy approach. Objectives/hypothesis: The purpose of the study was to investigate whether cochlear implantation using the round window approach provided better preservation of residual hearing than the cochleostomy approach.
Study Design: Case-control study.
Methods: We designed a case-control study including 40 patients from a tertiary referral center who underwent cochlear implantation surgeries using devices from MED-EL Co., Innsbruck, Austria. Between November 2013 and July 2014, we prospectively enrolled 20 subjects for cochlear implantation surgery using the round window insertion approach. In addition, 20 age- and sex-matched control subjects from the database of cochlear implantees treated using the cochleostomy approach between January 2008 and October 2013 were retrospectively enrolled. The residual hearing of the operated ear was measured before and after surgery. The variables analyzed were the pure-tone average threshold at 250, 500, and 1,000 Hz and the residual hearing at frequencies of 250 to 8,000 Hz. The residual hearing was considered as preserved when the audiometric changes were <10 dB hearing loss for each variable. The audiological results of the two groups were compared.
Results: No statistically significant difference in the preservation of residual hearing was found in the two groups (P > .05 for all of the variables).
Conclusions: The round window and cochleostomy approaches for cochlear implant surgery may preserve residual hearing at similar rates across a range of frequencies.
abstract_id: PUBMED:32083025
Comparison of the Pediatric Cochlear Implantation Using Round Window and Cochleostomy. Introduction: Cochlear implantation (CI) is now regarded as a standard treatment for children with severe to profound sensorineural hearing loss. This study aimed to compare the efficacy of the round window approach (RWA) and standard cochleostomy approach (SCA) in the preservation of residual hearing after CI in pediatric patients.
Materials And Methods: This double-blind randomized controlled trial was conducted on 97 pediatric patients receiving CI with 12-month follow-up. The study population was divided into two groups according to the surgical approaches they received, including RWA and SCA. Consequently, the patients were evaluated based on the Categories of Auditory Performance scale (CAP) and Speech Intelligibility Rating (SIR) test 45-60 days and 3, 6, 9, and 12 months post-surgery.
Results: The CAP and SIR mean scores increased in both groups during the 12-month follow-up. This upward trend was significant in both groups (P<0.001). There was no significant difference between the two treatment groups in any of the follow-up stages regarding the CAP mean score. The mean SIR score (1.14±0.40) was significantly higher in the RWA group at 3 (P=0.001), 6 (P=0.008), and 9 (P=0.006) months after the surgery. However, there was no significant difference between the RWA and SCA groups regarding 1-year SIR (P=0.258).
Conclusion: The CI with either RWA or SCA could improve hearing and speech performance in pediatric patients. Although mid-term speech intelligibility was better for RWA, there was no significant difference in the 1-year outcome between these two methods.
abstract_id: PUBMED:27346175
Cochlear implantation via round window or cochleostomy: Effect on hearing in an animal model. Objectives/hypothesis: Cochlear implantation in patients with residual hearing has increased interest in hearing preservation. Two major surgical approaches to implantation have been devised: via the round window membrane and through cochleostomy. However, the advantages of either approach on hearing preservation have not been established. Due to the great inter- and intravariability among implantees, the current study used a normal-hearing animal model to compare the effect of the two methods on hearing.
Study Design: Animal study.
Methods: Thirteen fat sand rats were studied, in which 13 ears were implanted through cochleostomy and 13 via the round window. Hearing thresholds were determined by auditory brainstem responses to air and bone conduction at low and high auditory stimuli.
Results: The results indicated that each stage of the surgery, primarily the opening of the membranous labyrinth, was accompanied by significant deterioration in hearing. Hearing loss was mainly conductive, with no significant differences between the surgical approaches.
Conclusions: Both surgical approaches carry similar risk of hearing loss.
Level Of Evidence: NA Laryngoscope, 126:E375-E378, 2016.
abstract_id: PUBMED:32798831
Cochlear implantation outcomes with round window electrode insertion versus cochleostomy insertion. Objectives: to assess two techniques for electrode insertion during cochlear implantation, the round window and the traditional cochleostomy insertions, by comparing cochlear implantation outcomes. Study Design: a prospective cohort study.
Patients: children (n = 200) between 2 and 8 years old who had bilateral severe to profound SNHL and received a unilateral cochlear implant; 100 children had a round window insertion and were labeled the RW group, while the other 100 children had a cochleostomy insertion and were labeled the C group, which was taken as the control group.
Outcome Measure(s): all participants in this study were followed up and tested twice for their cochlear implant outcomes, the first time when the duration of implant use was between 24 and 30 months, and the second time between 31 and 36 months post implantation. For speech perception, the Consonant-Nucleus-Consonant test (CNC) and the Bamford-Kowal-Bench sentence lists (BKB) were used; for language acquisition, the Preschool Language Scale, fourth edition (PLS-4) and the Peabody Picture Vocabulary Test, fourth edition (PPVT-4); and for speech production, the Diagnostic Evaluation of Articulation and Phonology (DEAP).
Results: For speech perception, the mean CNC scores at 24-30 months and at 31-36 months post-implantation were better for the round window group than for the C group. The mean BKB scores for the RW group were also higher than those of the C group at 24-30 months post-implantation (p < 0.0013) and at 31-36 months post-implantation (p < 0.0011). For language acquisition, the PLS-4 mean scores at 24-30 months and at 31-36 months post-implantation were better for the round window group (p < 0.001), and the PPVT-4 mean scores for the RW group were also higher than those of the C group at both time points. For speech production, the mean word intelligibility scores of the DEAP for the RW group were better at 24-30 and 31-36 months post-implantation.
Conclusion: Round window insertion is associated with superior cochlear implantation outcomes regarding speech perception, language acquisition and speech production rather than the standard cochleostomy insertion.
Answer: The round window insertion technique for cochlear implantation has been compared with the traditional cochleostomy approach in various studies, with mixed results. Some studies have found no significant difference in functional outcomes, such as speech and hearing perception skills, between the two techniques. For instance, one study reported that there was no significant difference in the speech and hearing perception skills of patients post-implantation between the cochleostomy and round window membrane insertion techniques, as measured by Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scores (PUBMED:31750226). Similarly, another study found no significant difference in the average angle of insertion depth between subjects with cochleostomy and round window insertion (PUBMED:36514425).
However, other studies have suggested that the round window approach may have certain advantages. For example, one study found that the spectral ripple discrimination scores of patients who underwent round window cochlear implant electrode insertion were superior to those of patients whose cochlear implants were inserted using a classical cochleostomy technique (PUBMED:33958008). Another study indicated that round window insertion is associated with superior cochlear implantation outcomes regarding speech perception, language acquisition, and speech production compared to the standard cochleostomy insertion (PUBMED:32798831).
In terms of preserving residual hearing, one study found no statistically significant difference in the preservation of residual hearing between the round window and cochleostomy approaches (PUBMED:25583631). However, another study suggested that interventions via the round window might be associated with delayed low-frequency hearing loss, which was not seen after cochleostomy (PUBMED:26739790).
Regarding the depth of electrode insertion, one cadaveric study found a statistically significant difference in linear depth of insertion between round window and cochleostomy, with cochleostomy having a deeper insertion, although the difference in angular depth of electrode insertion was not significant (PUBMED:38294508).
In conclusion, while some studies suggest that the round window approach may offer better outcomes in certain aspects, such as spectral resolution and overall cochlear implantation outcomes, other studies have found no significant difference between the two techniques. Therefore, the round window approach can be considered a viable "cochleostomy" of choice for cochlear implantation, but the decision may ultimately depend on individual patient factors and surgeon preference. |
Instruction: Alterations in transmural myocardial strain: an early marker of left ventricular dysfunction in mitral regurgitation?
Abstracts:
abstract_id: PUBMED:18824764
Alterations in transmural myocardial strain: an early marker of left ventricular dysfunction in mitral regurgitation? Background: In asymptomatic patients with severe isolated mitral regurgitation (MR), identifying the onset of early left ventricular (LV) dysfunction can guide the timing of surgical intervention. We hypothesized that changes in LV transmural myocardial strain represent an early marker of LV dysfunction in an ovine chronic MR model.
Methods And Results: Sheep were randomized to control (CTRL, n=8) or experimental (EXP, n=12) groups. In EXP, a 3.5- or 4.8-mm hole was created in the posterior mitral leaflet to generate "pure" MR. Transmural beadsets were inserted into the lateral and anterior LV wall to radiographically measure 3-dimensional transmural strains during systole and diastolic filling, at 1 and 12 weeks postoperatively. MR grade was higher in EXP than CTRL at 1 and 12 weeks (3.0 [2-4] versus 0.5 [0-2]; 3.0 [1-4] versus 0.5 [0-1], respectively, both P<0.001). At 12 weeks, LV mass index was greater in EXP than CTRL (201+/-18 versus 173+/-17 g/m(2); P<0.01). LVEDVI increased in EXP from 1 to 12 weeks (P=0.015). Between the 1 and 12 week values, the change in BNP (-4.5+/-4.4 versus -3.0+/-3.6 pmol/L), PRSW (9+/-13 versus 23+/-18 mm Hg), tau (-3+/-11 versus -4+/-7 ms), and systolic strains was similar between EXP and CTRL. The changes in longitudinal diastolic filling strains between 1 and 12 weeks, however, were greater in EXP versus CTRL in the subendocardium (lateral: -0.08+/-0.05 versus 0.02+/-0.14; anterior: -0.10+/-0.05 versus -0.02+/-0.07, both P<0.01).
Conclusions: Twelve weeks of ovine "pure" MR caused LV remodeling with early changes in LV function detected by alterations in transmural myocardial strain, but not by changes in BNP, PRSW, or tau.
abstract_id: PUBMED:16607903
Transmural left ventricular shear strain alterations adjacent to and remote from infarcted myocardium. Background And Aim Of The Study: In some patients, dysfunction in a localized infarct region spreads throughout the left ventricle to aggravate mitral regurgitation and produce deleterious global left ventricular (LV) remodeling. Alterations in transmural strains could be a trigger for this process, as these changes can produce apoptosis and extracellular matrix disruption. The hypothesis was tested that localized infarction perturbs transmural strain patterns not only in adjacent regions but also at remote sites.
Methods: Transmural radiopaque beadsets were inserted surgically into the anterior basal and lateral equatorial LV walls of 25 sheep; additional markers were used to silhouette the left ventricle. One week thereafter, 10 sheep had posterior wall infarction (obtuse marginal occlusion, INFARCT) and 15 had no infarction (SHAM). Four-dimensional marker dynamics were studied with biplane videofluoroscopy eight weeks later. Fractional area shrinkage, LV volumes and transmural circumferential, longitudinal and radial systolic strains were analyzed.
Results: Compared to SHAM, INFARCT greatly increased longitudinal-radial shear (mid-wall: 0.07 +/- 0.07 versus 0.14 +/- 0.06; subendocardium: 0.03 +/- 0.07 versus 0.20 +/- 0.08) in the inner half of the lateral LV wall and increased circumferential-radial shear (mid-wall: 0.03 +/- 0.05 versus 0.10 +/- 0.04; subepicardium: 0.02 +/- 0.05 versus 0.12 +/- 0.10) in the outer half of the LATERAL wall. In the ANTERIOR wall, INFARCT also increased longitudinal-radial shear (midwall: 0.01 +/- 0.05 versus 0.12 +/- 0.04; subendocardium: 0.04 +/- 0.09 versus 0.25 +/- 0.20) in the inner layers.
Conclusion: Increased transmural shear strains were found not only in an adjacent region, but also at a site remote from a localized infarction. This perturbation could trigger remodeling processes that promote the progression of ischemic cardiomyopathy. A better understanding of this process is important for the future development of surgical therapies to reverse destructive LV remodeling.
abstract_id: PUBMED:15364848
Alterations in left ventricular torsion and diastolic recoil after myocardial infarction with and without chronic ischemic mitral regurgitation. Background: Chronic ischemic mitral regurgitation (CIMR) is associated with heart failure that continues unabated whether the valve is repaired, replaced, or ignored. Altered left ventricular (LV) torsion dynamics, with deleterious effects on transmural gradients of oxygen consumption and diastolic filling, may play a role in the cycle of the failing myocardium. We hypothesized that LV dilatation and perturbations in torsion would be greater in animals in which CIMR developed after inferior myocardial infarction (MI) than in those that it did not.
Methods: 8+/-2 days after marker placement in sheep, 3-dimensional fluoroscopic marker data (baseline) were obtained before creating inferior MI by snare occlusion. After 7+/-1 weeks, the animals were restudied (chronic). Inferior MI resulted in CIMR in 11 animals but not in 9 (non-CIMR). End-diastolic septal-lateral and anterior-posterior LV diameters, maximal torsional deformation (phi(max), rotation of the LV apex with respect to the base), and torsional recoil in early diastole (phi(5%), first 5% of filling) for each LV free wall region (anterior, lateral, posterior) were measured.
Results: Both CIMR and non-CIMR animals demonstrated derangement of LV torsion after inferior MI. In contrast to non-CIMR, CIMR animals exhibited greater LV dilation and significant reductions in posterior maximal torsion (6.1+/-4.3 degrees to 3.9+/-1.9 degrees * versus 4.4+/-2.5 degrees to 2.8+/-2.0 degrees; mean+/-SD, baseline to chronic, *P<0.05) and anterior torsional recoil (-1.4+/-1.1 degrees to -0.2+/-1.0 degrees versus -1.2+/-1.0 degrees to -1.3+/-1.6 degrees ).
Conclusions: MI associated with CIMR resulted in greater perturbations in torsion and recoil than inferior MI without CIMR. These perturbations may be linked to more LV dilation in CIMR, which possibly reduced the effectiveness of fiber shortening on torsion generation. Altered torsion and recoil may contribute to the "ventricular disease" component of CIMR, with increased gradients of myocardial oxygen consumption and impaired diastolic filling, and these regional abnormalities may persist despite restoration of mitral competence.
abstract_id: PUBMED:17223567
Septal-lateral annular cinching perturbs basal left ventricular transmural strains. Objective: Septal-lateral annular cinching ('SLAC') corrects both acute and chronic ischemic mitral regurgitation in animal experiments, which has led to the development of therapeutic surgical and interventional strategies incorporating this concept (e.g., Edwards GeoForm ring, Myocor Coapsys, Ample Medical PS3). Changes in left ventricular (LV) transmural cardiac and fiber-sheet strains after SLAC, however, remain unknown.
Methods: Eight normal sheep hearts had two triads of transmural radiopaque bead columns inserted adjacent to (anterobasal) and remote from (midlateral equatorial) the mitral annulus. Under acute, open chest conditions, 4D bead coordinates were obtained using videofluoroscopy before and after SLAC. Transmural systolic strains were calculated from bead displacements relative to local circumferential, longitudinal, and radial cardiac axes. Transmural cardiac strains were transformed into fiber-sheet coordinates (X(f), X(s), X(n)) oriented along the fiber (f), sheet (s), and sheet-normal (n) axes using fiber (alpha) and sheet (beta) angle measurements.
Results: SLAC markedly reduced (approximately 60%) septal-lateral annular diameter at both end-diastole (ED) (2.5+/-0.3 to 1.0+/-0.3 cm, p=0.001) and end-systole (ES) (2.4+/-0.4 to 1.0+/-0.3 cm, p=0.001). In the LV wall remote from the mitral annulus, transmural systolic strains did not change. In the anterobasal region adjacent to the mitral annulus, ED wall thickness increased (p=0.01) and systolic wall thickening was less in the epicardial (0.28+/-0.12 vs 0.20+/-0.06, p=0.05) and midwall (0.36+/-0.24 vs 0.19+/-0.11, p=0.04) LV layers. This impaired wall thickening was due to decreased systolic sheet thickening (0.20+/-0.8 to 0.12+/-0.07, p=0.01) and sheet shear (-0.15+/-0.07 to -0.11+/-0.04, p=0.02) in the epicardium and sheet extension (0.21+/-0.11 to 0.10+/-0.04, p=0.03) in the midwall. Transmural systolic and remodeling strains in the lateral midwall (remote from the annulus) were unaffected.
Conclusions: Although SLAC is an alluring concept to correct ischemic mitral regurgitation, these data suggest that extreme SLAC adversely affects systolic wall thickening adjacent to the mitral annulus by inhibiting systolic sheet thickening, sheet shear, and sheet extension. Such alterations in LV strains could result in unanticipated deleterious remodeling and warrant further investigation.
abstract_id: PUBMED:19276097
Prognostic value of echocardiography after acute myocardial infarction. Echocardiography is useful for risk stratification and assessment of prognosis after myocardial infarction, which is the focus of this review. Various traditional echocardiographic parameters have been shown to provide prognostic information, such as left ventricular volumes and ejection fraction, wall motion score index, mitral regurgitation and left atrial volume. The introduction of tissue Doppler imaging and speckle-tracking strain imaging has resulted in additional prognostic parameters, such as left ventricular strain (rate) and dyssynchrony. Also, (myocardial) contrast echocardiography provides valuable information, particularly about myocardial perfusion (as a marker of myocardial viability), which is strongly related to prognosis after myocardial infarction. Stress echocardiography provides information on ischaemia and viability, coronary flow reserve can be obtained by Doppler imaging of the coronary arteries, and finally, three-dimensional echocardiography provides optimal information on left ventricular volumes, function and sphericity, which are also important for long-term outcome.
abstract_id: PUBMED:7082508
Atrial fibrillation--a marker for abnormal left ventricular function in coronary heart disease. Retrospective study of 1176 patients with known coronary heart disease by cardiac catheterisation disclosed 10 patients (0.8%) with atrial fibrillation. Comparison with 25 randomly selected patients with coronary heart disease with sinus rhythm showed that atrial fibrillation correlated significantly with impaired haemodynamic function, mitral regurgitation, and abnormalities of left ventricular contraction. Atrial fibrillation is, therefore, a useful marker of extensive myocardial dysfunction.
abstract_id: PUBMED:33825727
Assessment of no-reflow phenomenon in patients with acute ST-elevation myocardial infarction. Background: Assessment of myocardial perfusion in patients with acute ST-elevation myocardial infarction after successful revascularization remains a problem of current clinical importance. Contrast-enhanced echocardiography remains the least studied and most promising ultrasound technology for the diagnosis of the no-reflow phenomenon.
Aim: The study was aimed at evaluating echocardiographic and angiographic characteristics of the no-reflow phenomenon detected by means of contrast-enhanced echocardiography in patients with ST-segment elevation myocardial infarction.
Patients And Methods: The study included a total of forty-three 40-to-82-year-old patients in the acute period of myocardial infarction. The patients were divided into two groups: 32 patients with satisfactory myocardial reperfusion after revascularization according to the findings of contrast-enhanced echocardiography and 11 patients with impaired perfusion.
Results: The patients in the group with impaired perfusion demonstrated a greater extent of left ventricular (LV) asynergy (40.1±2.2% vs 27.4±8.5%, p<0.001), more frequent LV dilatation (LV end-systolic volume 67.3±20.3 ml vs 51.8±17.2 ml, p=0.015), decreased LV contractility (LV ejection fraction 39.5±3.4% vs 47.2±4.9%, p<0.001), and more frequent significant mitral regurgitation (45.5% vs 3.1%, p=0.011) with a decrease in DP/DT (979.9±363.4 mmHg/s vs 1565.7±502.8 mmHg/s, p<0.001). Coronary angiography showed no perfusion disorders after revascularization in more than a quarter of these patients. In the group with impaired perfusion, single-vessel lesions (46.9% vs 9.1%, p=0.033), lesions of the anterior interventricular artery (90.9% vs 40.6%, p=0.004), and acute occlusion (100% vs 68.8%, p=0.043) were revealed more frequently; coronary lesion complexity as assessed by the SYNTAX score was higher in this group (18.9±3.7 vs 9.9±5.7, p<0.001).
Conclusion: In patients with acute myocardial infarction after successfully performed revascularization, perfusion disorders revealed by contrast-enhanced echocardiography were accompanied by more pronounced echocardiographic signs of left ventricular dysfunction, higher SYNTAX scores, and significantly more frequent lesions of the anterior interventricular artery as compared with the patients with recovered perfusion.
abstract_id: PUBMED:3639819
Hemodynamic abnormalities in acute myocardial infarction. Great strides have been made in the management of patients with acute myocardial infarction since the advent of coronary care units. However, congestive heart failure continues to be the major cause of in-hospital mortality. The accurate diagnosis and classification of hemodynamic abnormalities allow the application of specific therapies for each patient. Because clinicians can now routinely measure left and right ventricular preload, systemic and pulmonary vascular resistance, cardiac output, and arteriovenous oxygen difference, pharmacologic and surgical interventions can be applied in a scientific manner. In addition, mechanical complications can be promptly recognized and aggressively treated. Although the mortality rate for patients with severe left ventricular dysfunction after myocardial infarction remains high, expert management offers an improved prognosis for many patients.
abstract_id: PUBMED:12407836
Refractory heart failure. Myocardial revascularization as an alternative to heart transplantation. Coronary artery disease is the most common cause of heart failure in the western world. Due to the shortage of donors, heart transplantation is not a realistic treatment for the great majority of patients with heart failure, while surgical revascularization is a valuable alternative in selected patients. Several variables have to be taken into account in order to appropriately identify patients with severe left ventricular dysfunction who are likely to benefit from myocardial revascularization. The recovery of cardiac function can be expected only in patients with sufficiently large areas of hibernating myocardium, particularly when the contractile reserve is documented. The anatomy of the coronary arteries should be suitable for coronary bypass grafting and provide a good run-off. Patients with an excessively dilated heart, with signs and symptoms of right heart failure and significant pulmonary hypertension are not candidates for myocardial revascularization. An appropriate surgical strategy, also including the reduction of the left ventricular volume and/or the correction of mitral insufficiency if needed, is the key factor for a successful revascularization procedure in patients with coronary artery disease and heart failure as the predominant symptom.
abstract_id: PUBMED:1804821
The role of echocardiography in acute myocardial infarction. Our institutional experience with two-dimensional echocardiography and color flow Doppler tests, as well as a review of the literature, reveals that echocardiography is useful in diagnosing myocardial infarction. Furthermore, complications of myocardial infarction such as mitral regurgitation with or without papillary muscle rupture, acquired ventricular septal defect, true and false left ventricular aneurysms, left ventricular thrombi, right ventricular infarction, and pericardial effusion in the setting of acute infarction can be detected by cardiac ultrasound. Echocardiography is also useful in assessing prognosis of postinfarction patients based on degree of left ventricular dysfunction. This test, however, is not 100% sensitive and specific in diagnosing these entities and should be reviewed in context of a patient's history, physical findings, and other laboratory tests.
Answer: Alterations in transmural myocardial strain may indeed serve as an early marker of left ventricular (LV) dysfunction in mitral regurgitation (MR). In a study with an ovine model of chronic MR, it was found that twelve weeks of "pure" MR caused LV remodeling with early changes in LV function detected by alterations in transmural myocardial strain. These changes were observed even though there were no significant changes in B-type natriuretic peptide (BNP), preload recruitable stroke work (PRSW), or tau (a measure of diastolic relaxation) (PUBMED:18824764).
Additionally, transmural strain patterns have been shown to be perturbed not only in regions adjacent to infarcted myocardium but also at remote sites, which could trigger remodeling processes that promote the progression of ischemic cardiomyopathy (PUBMED:16607903). This suggests that transmural strain alterations could be indicative of more widespread LV dysfunction.
In the context of chronic ischemic mitral regurgitation (CIMR), it has been observed that LV dilatation and perturbations in torsion were greater in animals with CIMR after inferior myocardial infarction (MI) than in those without CIMR. This indicates that altered torsion and recoil may contribute to the "ventricular disease" component of CIMR, potentially affecting myocardial oxygen consumption and diastolic filling (PUBMED:15364848).
Furthermore, interventions such as septal-lateral annular cinching (SLAC), which are used to correct ischemic mitral regurgitation, have been shown to adversely affect systolic wall thickening adjacent to the mitral annulus by inhibiting systolic sheet thickening, sheet shear, and sheet extension. This suggests that extreme alterations in LV strains could result in unanticipated deleterious remodeling (PUBMED:17223567).
In summary, alterations in transmural myocardial strain are associated with early LV dysfunction in MR and may be a useful marker for guiding the timing of surgical intervention in asymptomatic patients with severe isolated MR. These alterations can precede changes in other commonly used markers of LV function, such as BNP, PRSW, and tau, and can be indicative of the progression of LV dysfunction and remodeling in the context of MR and myocardial infarction. |
Instruction: Do patients with structural abnormalities of the shoulder experience pain after MR arthrography of the shoulder?
Abstracts:
abstract_id: PUBMED:20720072
Do patients with structural abnormalities of the shoulder experience pain after MR arthrography of the shoulder? Purpose: To assess the pain course after intraarticular injection of a gadolinium-containing contrast material admixed with anesthetic for magnetic resonance (MR) arthrography of the shoulder in relation to internal derangements of the shoulder.
Materials And Methods: Institutional review board approval and informed consent were obtained for this study. The study sample consisted of 655 consecutive patients (249 female, 406 male; median age, 54 years) referred for MR arthrography of the shoulder. Pain level was measured at baseline, directly after intraarticular injection of the gadolinium-containing contrast material admixed with anesthetic, 4 hours after injection, 1 day (18-30 hours) after injection, and 1 week (6-8 days) after injection with a visual analog scale (range, 0-10). MR arthrography was used to assess the following internal derangements: lesions of the rotator cuff tendons and long biceps tendon, adhesive capsulitis (frozen shoulder), fluid in the subacromial bursa, labral tears, and osteoarthritis of the glenohumeral joint. History of shoulder surgery was recorded. Linear regression models were calculated for the dependent variable (difference between follow-up pain and baseline pain), with the independent variable grouping adjusted for age and sex.
Results: There was no significant association between pain level over time and internal derangements of the shoulder, nor did the pain course over time differ significantly between patients with and patients without a history of shoulder surgery.
Conclusion: Neither internal derangements nor prior surgery have an apparent effect on the pain course after MR arthrography of the shoulder.
abstract_id: PUBMED:34881048
MR arthrography of the shoulder; correlation with arthroscopy. Background: Shoulder dislocation is a common injury, particularly in the younger population. Common long-term sequelae include pain, recurrence, and shoulder arthritis. Immediate and correct diagnosis following shoulder dislocation is key to achieving optimum outcomes. Although magnetic resonance arthrography (MRA) is frequently used for diagnosing shoulder instabilities, arthroscopy is still considered the gold standard.
Purpose: This study aims to compare the diagnostic value of arthroscopy and MRA of the shoulder joint.
Materials And Methods: This retrospective study estimates the sensitivity and specificity of MRA of the shoulder. Data from patients who had undergone shoulder MRA and subsequent arthroscopy during a 5-year period were retrospectively collected. Sensitivity and specificity were calculated using the arthroscopic findings as the gold standard. Moreover, diagnostic accuracy was estimated using McNemar's test.
Results: In total, 205 cases were included from which 372 pathological findings were uncovered during the arthroscopic procedures as opposed to 360 findings diagnosed from the MRA images. The glenoid labral tear was the most common finding reported by MRA and arthroscopy. For the detection of glenoid labral tears on MRA, the sensitivity was 0.955 but with eight missed lesions; the specificity was 0.679. Capsular tears, rotator cuff tears, and cartilage lesions proved the most difficult to correctly diagnose using MRA with sensitivities of 0.2, 0.346, and 0.366, respectively.
Conclusions: With a sensitivity of 95%, MRA is a valuable diagnostic tool for assessing shoulder instabilities, particularly when diagnosing labral lesions, including bony and soft-tissue Bankart lesions. Sensitivities and specificities for other glenohumeral lesions are less convincing, however.
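As an aside for readers who want to see how the sensitivity and specificity figures quoted above are computed when arthroscopy is taken as the gold standard, the following minimal Python sketch works through the arithmetic on hypothetical 2x2 counts. The counts are chosen only to be roughly consistent with the reported sensitivity of 0.955 (eight missed labral lesions) and specificity of 0.679; they are illustrative assumptions, not data extracted from the study.

    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity and specificity from 2x2 counts against a gold standard."""
        sensitivity = tp / (tp + fn)  # detected lesions / all lesions present at arthroscopy
        specificity = tn / (tn + fp)  # correctly excluded / all lesion-free shoulders
        return sensitivity, specificity

    # Hypothetical counts for glenoid labral tears on MRA (illustrative only)
    tp, fn = 170, 8   # tears present at arthroscopy: seen on MRA vs missed
    tn, fp = 19, 9    # tears absent at arthroscopy: correctly excluded vs over-called
    sens, spec = sensitivity_specificity(tp, fn, tn, fp)
    print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}")  # ~0.955 and ~0.679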
abstract_id: PUBMED:18814056
Shoulder injuries in overhead athletes: utility of MR arthrography. Introduction: The goal of this work was to assess the accuracy of MR arthrography in the evaluation of injuries in overhead athletes in comparison with arthroscopy.
Material And Methods: In 29 patients (mean age: 30 years; 21 male, 8 female; age range 16-53 years) with persistent pain after conservative therapy, MR arthrography with intraarticular application of gadolinium was performed prior to arthroscopic surgery. The MR images were retrospectively analysed by three examiners independently of one another. The results were compared with the arthroscopic findings. Interrater reliability was calculated using Cohen's kappa.
Results: MR arthrography demonstrated 8 of 9 (88.9%) partial tears of the rotator cuff. All SLAP (superior labrum anterior to posterior) lesions as well as all Bankart-type lesions were recognized on MR arthrography; however, detection rates depended on the experience of the examiner, ranging from 33.3% (fellow radiologist) to 93.3% (consultant radiologist). We found high agreement between the consultant radiologist and the shoulder surgeon, with kappa values of 0.79 for rotator cuff tears, 0.86 for Bankart lesions, and 0.82 for SLAP lesions.
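Cohen's kappa, the interrater statistic used above, corrects the raw agreement between two raters for the agreement expected by chance alone. The short Python sketch below illustrates that calculation on a hypothetical set of tear / no-tear calls by a radiologist and a surgeon; the ratings and the resulting value are invented for illustration and do not reproduce the study's data.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters assigning categorical labels to the same cases."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        labels = set(freq_a) | set(freq_b)
        expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
        return (observed - expected) / (1 - expected)

    # Hypothetical ratings of 10 shoulders (illustrative only)
    radiologist = ["tear", "tear", "no", "tear", "no", "no", "tear", "no", "tear", "no"]
    surgeon     = ["tear", "tear", "no", "tear", "no", "tear", "tear", "no", "tear", "no"]
    print(f"kappa = {cohens_kappa(radiologist, surgeon):.2f}")  # 0.80 for this toy example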
abstract_id: PUBMED:33002752
A comparison of ultrasound-guided rotator interval and posterior glenohumeral injection techniques for MR shoulder arthrography. Purpose: The aim of this prospective, randomized study was to compare the performance of a rotator interval approach with the posterior glenohumeral approach for ultrasound-guided contrast injection prior to MR shoulder arthrography.
Method: This study was approved by the institutional review board. One hundred and twenty consecutive patients referred for MR shoulder arthrography were randomized into four groups: rotator interval approach in-plane (n = 30); rotator interval approach out-of-plane (n = 30); posterior approach in-plane (n = 30); and posterior approach out-of-plane (n = 30). Outcome measures included procedure time, number of injection attempts, patient-reported pain score (0-10), and radiologist-reported technical difficulty (0-10). MR arthrograms were assessed for adequacy of joint distension, diagnostic utility, and extra-capsular contrast leakage.
Results: All 120 patients had a successful ultrasound-guided injection with adequate joint distension and diagnostic utility for MR arthrography. In-plane needle guidance was less technically demanding, quicker, required fewer injection attempts, and had a lower frequency of contrast leakage than out-of-plane needle guidance. The posterior glenohumeral approach was less technically demanding though had a higher frequency of contrast leakage and caused more patient discomfort than the rotator interval approach.
Conclusion: For ultrasound-guided shoulder joint injection, an in-plane approach is preferable. The posterior glenohumeral approach is less technically demanding but causes more patient discomfort than the rotator interval approach, possibly due to the longer needle path.
abstract_id: PUBMED:32241659
Posterior Shoulder Instability: What to Look for. Posterior shoulder instability is often hard to diagnose with clinical examination. Patients generally present with vague pain, weakness, and/or joint clicking but less frequently complaining of frank sensation of instability. Imaging examinations, especially MR imaging and magnetic resonance arthrography, have a pivotal role in the identification and management of this condition. This review describes the pathologic micro/macrotraumatic magnetic resonance features of posterior shoulder instability as well as the underlying joint abnormalities predisposing to this condition, including developmental anomalies of the glenoid fossa, humeral head, posterior labrum, and capsular and ligamentous structures.
abstract_id: PUBMED:7972758
Posterosuperior glenoid impingement of the shoulder: findings at MR imaging and MR arthrography with arthroscopic correlation. Purpose: To determine the utility of magnetic resonance (MR) imaging and MR arthrography in the evaluation of arthroscopic findings of posterosuperior glenoid impingement.
Materials And Methods: The findings at MR imaging, MR arthrography, and physical examination with the patient under anesthesia were retrospectively reviewed in eight patients with arthroscopic evidence of posterosuperior glenoid impingement.
Results: All patients had shoulder pain; anterior instability was found in six patients. Other than bone marrow abnormalities, findings at MR imaging were not reliable for the detection of posterosuperior glenoid impingement. MR arthrography was superior to routine MR imaging in all four cases in which it was done; positioning the shoulder in abduction and external rotation was beneficial in three of four patients.
Conclusion: Impingement of the rotator cuff on the posterior superior glenoid labrum is a cause of posterior shoulder pain in athletes who throw. MR arthrography may allow detection of abnormalities associated with this clinical entity.
abstract_id: PUBMED:11719677
Patient's assessment of discomfort during MR arthrography of the shoulder. Purpose: To assess patient discomfort during (a) intraarticular contrast material injection (arthrography) and (b) magnetic resonance (MR) imaging in patients referred for MR arthrography of the shoulder and to compare the relative discomfort associated with each part of the examination.
Materials And Methods: With use of a visual analogue scale (VAS) and relative ratings, 202 consecutive patients referred for MR arthrography of the shoulder rated the expected discomfort and that actually experienced during both arthrography and MR imaging. The Student t test was used for statistical analysis.
Results: The average VAS score (0 = "did not feel anything," 100 = "unbearable") was 16.1 +/- 16.4 (SD) for arthrography and 20.2 +/- 25.0 for MR imaging. This difference was statistically significant (P =.036, paired t test). The discomfort experienced during arthrography was as expected in 90 (44.6%) patients, less than expected in 110 (54.4%), and worse than expected in two (1.0%). MR imaging-related discomfort was as expected in 114 (56.4%) patients, less than expected in 66 (32.7%), and worse in 22 (10.9%). Arthrography was rated worse than MR imaging by 53 (26.2%) patients, equal to MR imaging by 69 (34.2%), and less uncomfortable than MR imaging by 80 (39.6%).
Conclusion: Arthrography-related discomfort was well tolerated, often less severe than anticipated, and rated less severe than MR imaging-related discomfort.
abstract_id: PUBMED:24698298
MR arthrography of the shoulder: do we need local anesthesia? Purpose: To assess pain intensity with and without subcutaneous local anesthesia prior to intraarticular administration of contrast medium for magnetic resonance arthrography (MRa) of the shoulder.
Materials And Methods: This single-center study was conducted after an IRB waiver of authorization, between January 2010 and December 2012. All patients provided written, informed consent for the procedure. Our prospectively populated institutional database was searched, based on our inclusion criteria. There were 249 outpatients (178 men and 71 women; mean age, 44.4 years ± 14.6; range, 15-79) who underwent MRa and were enrolled in this study. Patients were excluded if they had received surgery of the shoulder before MRa, had undergone repeated MRa of the same shoulder, and/or had undergone MRa of both shoulders on the same day. Patients were randomly assigned into one of three groups. Patients in group A (n=61) received skin infiltration with local anesthesia. Patients in control group B (n=92) and group C (n=96) did not receive local anesthesia. Pain levels were immediately assessed after the injection for MRa using a horizontal visual analog scale (VAS) that ranged from 0 to 10. To compare the pain scores of the three groups for male and female patients, a two-way analysis of variance was used. A p-value equal to or less than 0.05 was considered to indicate a significant result.
Results: Patients who received local anesthesia (group A) showed a mean pain level on the VAS of 2.6 ± 2.3. In patients who did not receive local anesthetics (groups B and C), a mean pain level on the VAS of 2.6 ± 2.2 and 2.7 ± 2.4 were detected, respectively. Between the three groups, no statistically significant difference in pain intensity was detected (p=.960). There were significant differences in subjective pain perception between men and women (p=.009). Moreover, the sex difference in all three groups was equal (p=.934).
Conclusion: Local anesthesia is not required to lower a patient's pain intensity when applying intra-articular contrast media for MR arthrography of the shoulder. This could result in reduced costs and a reduced risk of adverse reactions, without an impact on patient comfort.
abstract_id: PUBMED:7784579
Glenohumeral ligaments and shoulder capsular mechanism: evaluation with MR arthrography. Purpose: To evaluate the efficacy of magnetic resonance (MR) arthrography in identification of the glenohumeral ligaments (GHLs) and to determine the location of abnormalities of the GHL, joint capsule, and labrum.
Materials And Methods: MR arthrograms were evaluated retrospectively in 46 patients with a history of shoulder instability, impingement syndrome, or pain of unknown cause. Imaging findings were correlated with surgical observations.
Results: The superior, middle, and inferior GHLs were identified on MR arthrograms in 39 (85%), 39 (85%), and 42 (91%) of the 46 patients, respectively. In diagnosis of tears of the superior, middle, and inferior GHLs, MR arthrography had a sensitivity of 100%, 89%, and 88% and a specificity of 94%, 88%, and 100%, respectively.
Conclusion: Findings at MR arthrography can aid accurate identification and demonstration of the integrity of the GHLs and labrum and can help in the staging of abnormalities. The large number of abnormalities depicted in the middle and inferior GHLs suggests that both might be important in the maintenance of glenohumeral joint congruity.
abstract_id: PUBMED:26611904
Correlations of magnetic resonance imaging findings with clinical symptom severity and prognosis of frozen shoulder. Purpose: To evaluate the correlation between indirect magnetic resonance (MR) arthrographic imaging findings and the clinical symptoms and prognosis of patients with frozen shoulder.
Methods: Indirect MR arthrography was performed for 52 patients with primary frozen shoulder (mean age 55.1 ± 9.0 years) and 52 individuals without frozen shoulder (mean age 53.1 ± 10.7 years); capsular thickening and enhancement of the axillary recess as well as soft tissue thickening of the rotator interval were evaluated. Clinical symptom severity was assessed using the Visual Analogue Scale for Pain (VAS Pain), simple shoulder test (SST), Constant score, American Shoulder and Elbow Surgeons (ASES) score, and range of motion (ROM). At 6-month follow-up, we evaluated whether MR arthrography findings correlated with the clinical symptoms and prognosis.
Results: Capsular thickening and enhancement of the axillary recess as well as soft tissue thickening of the rotator interval were significantly greater in the patient group than in the controls (p < 0.001). Capsular thickening of the axillary recess did not correlate with clinical symptoms or ROM (n.s.); however, capsular enhancement correlated with clinical symptom severity according to VAS Pain (p = 0.005), SST (p = 0.046), and ASES scores (p = 0.009). Soft tissue thickening of the rotator interval did not correlate with clinical symptom severity, but was associated with external rotation limitation (p = 0.002). However, none of the parameters correlated with clinical symptoms at 6-month follow-up.
Conclusions: Indirect MR arthrography provided ancillary findings, especially with capsular enhancement, for evaluating clinical symptom severity of frozen shoulder, but did not reflect the prognosis. MR findings in frozen shoulder should not replace clinical judgments regarding further prognosis and treatment decisions.
Level Of Evidence: IV.
Answer: Patients with structural abnormalities of the shoulder do not appear to experience pain specifically related to those abnormalities after MR arthrography of the shoulder. A study assessing the pain course after intraarticular injection of a gadolinium-containing contrast material admixed with anesthetic for magnetic resonance (MR) arthrography of the shoulder found no significant association between pain level over time and internal derangements of the shoulder. Additionally, there was no significant association between pain level over time in patients with a history of shoulder surgery and patients without a history of shoulder surgery (PUBMED:20720072). This suggests that neither internal derangements nor prior surgery have an apparent effect on the pain course after MR arthrography of the shoulder. |
Instruction: Do parents reinforce somatic complaints in their children?
Abstracts:
abstract_id: PUBMED:18377148
Do parents reinforce somatic complaints in their children? Objective: To examine the influence of parental solicitousness on self-reported somatic complaints in school-age children.
Design And Main Outcome Measures: Participants were 564 children (mean age 10 years) and their parents. Children completed self-report measures of somatic complaints, parental solicitousness, depressiveness, fear, and sense of coherence. Somatic complaints were assessed again 6 months later. Parents also completed a questionnaire about solicitousness.
Results: Parental solicitousness as reported by children or parents was unrelated to the frequency of self-reported somatic complaints. Symptoms of depression, fear, and lower sense of coherence were associated with more somatic complaints, but did not interact with parental solicitousness.
Conclusion: Parental solicitousness seems unrelated to more frequent somatic complaints in schoolchildren.
abstract_id: PUBMED:30218392
Somatic complaints in children and adolescents with social anxiety disorder. Background: Associations of social anxiety disorder (SAD) with various somatic symptoms have already been reported in the literature several times. The present study investigated somatic complaints in children and adolescents with SAD compared to controls and evaluated the relationship between social anxiety and somatic symptom severity.
Methods: Thirty children and adolescents with SAD were compared with 36 healthy age-matched controls. Self-reported fears were assessed using the Phobiefragebogen für Kinder und Jugendliche (PHOKI); emotional and behavioral problems were assessed using the Child Behavior Checklist (CBCL/4-18); and the Gießener Beschwerdebogen für Kinder und Jugendliche (GBB-KJ) was used to assess 59 somatic symptoms.
Results: Parents and youth with SAD reported higher somatic symptom severity compared to controls. Youth with SAD more frequently reported stomach pain, circulatory complaints, and fatigue than controls. Specific group differences between SAD and control youth were found for the following single somatic symptoms: faintness, quickly exhausted, sensation of heat, stomachache, nausea, dizziness, and sudden heart complaints. Parents of girls with SAD reported higher somatic symptom severity than parents of boys with SAD.
Conclusions: The results demonstrated a significant positive association between somatic symptoms and social anxiety in youth. The results of the present study can help to develop improved screening measurements, which increase the proportion of children and adolescents with SAD receiving proper treatment.
abstract_id: PUBMED:36360512
Somatic, Emotional and Behavioral Symptomatology in Children during COVID-19 Pandemic: The Role of Children's and Parents' Alexithymia. The COVID-19 pandemic has deeply affected the psychophysical wellbeing of children worldwide. Alexithymia, a personality trait involving difficulties in identifying and expressing feelings represents a vulnerability factor for stress-related disorders. Under pandemic stress exposure, we aimed to investigate the role of parents' and children's alexithymia in the psychophysical symptomatology shown by children and to evaluate possible differences according to age, gender and history of COVID-19 infections. The perception of parents and children about the impact of the pandemic on children's emotional, social and physiological wellbeing was also explored. Sixty-five familial triads were surveyed in the period from March to May 2022: children (n = 33 males; mean age = 9.53, sd = 1.55), mothers (mean age = 44.12; sd = 6.10) and fathers (mean age = 47.10; sd = 7.8). Both parental and children's alexithymia scores were significantly associated with somatic and externalizing symptomatology in children. Self-reported anger and externally oriented thinking scores were higher in younger children (age 8-9.9 years) than in older ones (10-12 years). Girls scored higher than boys in somatic complaints, as reported by parents. No difference emerged between children affected/not affected by COVID-19. Notably, children reported a greater negative impact of the pandemic on their emotional and psychosocial well-being than their parents. The findings emphasize the role of alexithymia in the occurrence of psychophysical symptoms in children during the COVID-19 pandemic. The reduced parental awareness of the emotional burden imposed by the pandemic on children indicates the need to better consider how epidemics affect children's mental health and to develop adequate preventive strategies to support them in these exceptional times.
abstract_id: PUBMED:36694162
Parents' drinking, childhood hangover? Parental alcohol use, subjective health complaints and perceived stress among Swedish adolescents aged 10-18 years. Background: Alcohol abuse is not only harmful to the consumer but may also negatively impact individuals in the drinker's social environment. Alcohol's harm to others is vital to consider when calculating the true societal cost of alcohol use. Children of parents who have alcohol use disorder tend to have an elevated risk of negative outcomes regarding, e.g., health, education, and social relationships. Research on the general youth population has established a link between parental drinking and offspring alcohol use. However, there is a lack of knowledge regarding other outcomes, such as health. The current study aimed to investigate the associations between parental drinking and children's psychological and somatic complaints, and perceived stress.
Methods: Data were derived from a nationally representative sample, obtained from the 2010 Swedish Level-of-Living survey (LNU). Parents and adolescents (ages 10-18) living in the same households were interviewed independently. The final study sample included 909 adolescents from 629 households. The three outcomes, psychological and somatic complaints and perceived stress, were derived from adolescents' self-reports. Parents' self-reports of alcohol use, both frequency and quantity, were used to categorise adolescents as having abstaining, low-consuming, moderate-drinking, or heavy-drinking parents. Control variables included adolescents' gender, age, family structure, and household socioeconomic status. Linear and binary logistic regression analyses were performed.
Results: Parental heavy drinking was more common among adolescents living in more socioeconomically advantaged households and among adolescents living with two custodial parents or in reconstituted families. Adolescents with heavy-drinking parents reported higher levels of psychological and somatic complaints and had an increased likelihood of reporting stress, compared with those having moderate-drinking parents. These associations remained statistically significant when adjusting for all control variables.
Conclusion: The current study's results show that parental alcohol consumption is associated with poorer offspring adolescent health. Public health policies that aim to reduce parental drinking or provide support to these adolescents may be beneficial. Further studies investigating the health-related outcomes among young people living with heavy-drinking parents in the general population are needed to gain more knowledge about these individuals and to implement adequate public health measures.
abstract_id: PUBMED:32894350
Adolescents' somatic complaints in eight countries: what influence do parental rearing styles have? Medically unexplained physical symptoms are frequently named by adolescents in both clinical and normative samples. This study analyzed the associations between parental rearing styles and adolescents' body complaints in diverse cultural contexts. In a cross-cultural study of 2415 adolescents from eight countries (Argentina, France, Germany, Greece, Pakistan, Peru, Poland, and Turkey), the associations of maternal and paternal support, psychological control, and an anxious parental monitoring style with youth body complaints were tested. Girls reported more somatic complaints than boys, the level of complaints differed between countries, and gender differences varied significantly between countries. Hierarchic multilevel models revealed that the expression of distress via body complaints, after controlling for country, gender, and sociodemographic status, was significantly associated with parental rearing styles. The negative impact of mothers' psychological control on body complaints generalized across countries. In addition, mothers' anxious monitoring had a negative impact on the offspring's health, whereas higher levels of paternal support and lower levels of paternal psychological control contributed to lower levels of somatic complaints. Sociodemographic variables such as family structure, standard of living, and employment status of the parents did not turn out to be significant in the final model. The findings point to the different roles fathers and mothers play in adolescents' health and their complex interplay.
abstract_id: PUBMED:32841758
Discrepancies in adolescent-mother dyads' reports of core depression symptoms: Association with adolescents' help-seeking in school and their somatic complaints. Objective: Parents of adolescents with mental problems do not always recognize the symptoms in their children, particularly regarding depression, and therefore do not seek professional help. Adolescents themselves tend to seek help from school personnel for their emotional or social difficulties. In contrast, adolescents do report somatic complaints and parents are likely to seek help for these problems. The current study explored whether the divergence between maternal and child reports of depression symptoms is associated with child's help-seeking in school and patterns of somatic complaints.
Method: A sample of 9th grade students (N = 693; 56% girls; mean age = 15.1) and their mothers representing the Muslim and Druze populations in northern Israel were interviewed simultaneously and independently. Maternal reports were classified either as underestimating, matching, or overestimating their own child self-report of three core symptoms of depression (depressed mood, anhedonia, and irritability). Adolescents reported whether they had consulted school staff and were classified into clusters based on self-reported somatic complaints.
Results: Maternal misidentification of their child's depression symptoms was associated with increased help-seeking in school, particularly by boys if depressed mood or irritability were misidentified and particularly by girls if anhedonia was misidentified. Hierarchical cluster analysis indicated that the number and severity of somatic complaints was higher among adolescents whose depression symptoms were not identified, regardless of gender.
Conclusion: Mental health professionals, educators and parents should be aware that adolescents may attempt to communicate their emotional difficulties through somatic complaints and by seeking help in school.
abstract_id: PUBMED:25232513
Somatic complaints in frontotemporal dementia. Frontotemporal dementia (FTD) is associated with a broad spectrum of clinical characteristics. The objective of this study was to analyze the prevalence of unexplained somatic complaints in neuropathologically verified FTD. We also examined whether the somatic presentations correlated with protein pathology or regional brain pathology and if the patients with these somatic features showed more depressive traits. Ninety-seven consecutive, neuropathologically verified FTLD patients were selected. All 97 patients were part of a longitudinal study of FTD and all medical records were systematically reviewed. The somatic complaints focused on were headache, musculoskeletal and gastro/urogenital complaints, and abnormal pain response. Symptoms of somatic character (either somatic complaints and/or abnormal pain response) were found in 40.2%. These patients did not differ from the total group with regard to gender, age at onset or duration. Six patients showed exaggerated reactions to sensory stimuli, whereas three patients showed reduced response to pain. Depressive traits were present in 38% and did not correlate with somatic complaints. Suicidal behavior was present in 17 patients; in 10 of these, suicidal behavior was concurrent with somatic complaints. No clear correlation between somatic complaints and brain protein pathology, regional pathology or asymmetric hemispherical atrophy was found. Our results show that many FTD patients suffer from unexplained somatic complaints before and/or during dementia where no clear correlation can be found with protein pathology or regional degeneration. Somatic complaints are not covered by current diagnostic criteria for FTD, but need to be considered in diagnostics and care. The need for prospective studies with neuropathological follow up must be stressed as these phenomena remain unexplained, misinterpreted, bizarre and, in many cases, excruciating.
abstract_id: PUBMED:34949973
The association between somatic complaints and past-30 day opioid misuse among justice-involved children. Background: Individuals in the criminal justice system are especially vulnerable to the adverse effects of opioid misuse. Research on justice-involved children (JIC) is needed to uncover the variables that predict opioid misuse initiation to prevent misuse or reduce harm in this population. Somatic symptoms are symptoms experienced in the body, such as physical sensations, movements or experiences, which can cause severe distress and dysfunction. These include pain, nausea, dizziness, and fainting. In this study, we hypothesize that somatic complaints will be associated with a higher likelihood of opioid misuse among Florida JIC.
Methods: The study examined statewide data on 79,960 JIC in the Florida Department of Juvenile Justice database. Logistic regression was employed to investigate an ordinal measure of somatic complaints at first screen and a binary outcome measure of past-30 day illicit or nonmedical opioid use at final screen while controlling for sociodemographic and mental health factors.
Results: Nearly 28% of JIC had a history of one or more somatic complaints. Compared to those with no history of somatic complaints, JIC with a history of one or two somatic complaints were 1.23 times more likely to misuse opioids in the past 30 days and those with three or four somatic complaints were 1.5 times more likely to meet criteria for past-30 day opioid misuse.
Conclusions: Individuals may consume illicit or non-medical prescription opioids to manage somatic symptoms - indicating that increased access to healthcare may reduce misuse. Risk of opioid overdose sharply increases as justice-involved individuals are released from correctional settings largely due to a reduced tolerance to opioids as a result of incarceration and diminished access to legal medicines that are provided in the justice system. Justice systems must ensure seamless access to quality healthcare services as individuals transition from correctional settings to their communities.
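Because the "1.23 times" and "1.5 times" figures above come from logistic regression, they are adjusted odds ratios rather than direct probabilities. The small Python sketch below shows how an odds ratio translates into an absolute probability under an assumed baseline prevalence of past-30-day misuse among JIC with no somatic complaints; the 4% baseline is a hypothetical value chosen for illustration and is not reported in the abstract.

    def apply_odds_ratio(baseline_probability, odds_ratio):
        """Probability implied by multiplying the baseline odds by an odds ratio."""
        baseline_odds = baseline_probability / (1 - baseline_probability)
        new_odds = baseline_odds * odds_ratio
        return new_odds / (1 + new_odds)

    baseline = 0.04  # hypothetical prevalence among JIC with no somatic complaints
    print(f"1-2 complaints: {apply_odds_ratio(baseline, 1.23):.3f}")  # ~0.049
    print(f"3-4 complaints: {apply_odds_ratio(baseline, 1.50):.3f}")  # ~0.059

At low baseline prevalence the odds ratio is close to the relative risk, but the two diverge as the outcome becomes more common.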
abstract_id: PUBMED:37425189
Climate change and extreme weather disasters: evacuation stress is associated with youths' somatic complaints. Objective: Climate-change has brought about more frequent extreme-weather events (e.g., hurricanes, floods, and wildfires) that may require families to evacuate, without knowing precisely where and when the potential disaster will strike. Recent research indicates that evacuation is stressful for families and is associated with psychological distress. Yet, little is known about the potential impact of evacuation stressors on child health. After Hurricane Irma, which led to a mass evacuation in Florida, we examined whether evacuation stressors and hurricane exposure were uniquely associated with youth somatic complaints, and whether youth psychological distress (i.e., symptoms of posttraumatic stress, anxiety, and depression) served as a potential mediating pathway between evacuation stressors, hurricane experiences, and somatic complaints.
Method: Three months after Irma, 226 mothers of youth aged 7-17 years (N=226; M age = 9.76 years; 52% boys; 31% Hispanic) living in the five southernmost Florida counties reported on evacuation stressors, hurricane-related life threat and loss/disruption, and their child's psychological distress and somatic complaints using standardized measures.
Results: Structural equation modeling revealed a good model fit (χ2 = 32.24, p = 0.003, CFI = 0.96, RMSEA = 0.08, SRMR = 0.04). Even controlling for life-threatening hurricane experiences (β = 0.26) and hurricane loss and disruption (β = 0.26), greater evacuation stressors were associated with greater symptoms of youth psychological distress (β = 0.34; p's < 0.001), and greater psychological distress was associated with more somatic complaints (β = 0.67; p < 0.001). Indirect effects revealed that evacuation stressors (p < 0.001), actual life-threatening events (p < 0.01), and loss and disruption (p < 0.01) were all uniquely and indirectly associated with youths' somatic complaints via youth psychological distress.
Discussion: Findings suggest that even coping with the threat of a disaster may be sufficient to prompt psychological and physical health symptoms in youth. Due in part to climate change, threats of disaster occur much more often than actual disaster exposure, especially for areas that are prone to hurricanes or wildfires. Preparing youth and families residing in vulnerable areas for potential disaster evacuation or sheltering-in-place appears critical. Encouraging families to develop Disaster Plans and teaching stress management skills may reduce both youth distress and somatic health problems.
abstract_id: PUBMED:37238453
The Contribution of Bullying Involvement and Alexithymia to Somatic Complaints in Preadolescents. Somatic complaints during preadolescence are connected to individual and contextual factors, and extant research highlights the relevance of alexithymia and bullying involvement. In this cross-sectional study, we explored the joint and unique influence of bullying involvement-as perpetrators, victims, or outsiders-and alexithymia on somatic complaints in a sample of 179 Italian middle-school students (aged 11-15). Findings revealed an indirect association of bullying perpetration and victimization with somatic complaints through alexithymia. We also found a significant direct association between victimization and somatic complaints. No significant association between outsider behavior and somatization was found. Our results revealed that bullying perpetration and victimization could increase youths' risk for somatic complaints and clarify one of the processes underlying this association. The current findings further emphasize the relevance of emotional awareness for youths' well-being and propose that implementing social-emotional skills might prevent some of the adverse consequences of being involved in bullying episodes.
Answer: The relationship between parental behavior and children's somatic complaints is complex and not entirely clear-cut. According to the abstracts provided, there is no direct evidence that parental solicitousness reinforces somatic complaints in school-age children. One study found that parental solicitousness, as reported by children or parents, was unrelated to the frequency of self-reported somatic complaints in children (PUBMED:18377148). However, other factors such as symptoms of depression, fear, and a lower sense of coherence were associated with more somatic complaints.
In contrast, other studies suggest that parental behavior and family dynamics can influence children's somatic complaints in various ways. For instance, parental alcohol use has been associated with higher levels of psychological and somatic complaints and perceived stress among adolescents (PUBMED:36694162). Parental rearing styles, particularly psychological control and anxious monitoring, have been linked to adolescents' body complaints across different cultural contexts (PUBMED:32894350). Additionally, discrepancies between adolescent and maternal reports of core depression symptoms were associated with increased somatic complaints and help-seeking in school (PUBMED:32841758).
Furthermore, the role of alexithymia, a personality trait involving difficulties in identifying and expressing feelings, has been highlighted in the context of the COVID-19 pandemic. Both parental and children's alexithymia scores were significantly associated with somatic and externalizing symptomatology in children (PUBMED:36360512). Bullying involvement and alexithymia have also been shown to contribute to somatic complaints in preadolescents (PUBMED:37238453).
Overall, while one study suggests that parental solicitousness does not directly reinforce somatic complaints in children (PUBMED:18377148), other research indicates that various aspects of parental behavior, family dynamics, and children's emotional processing skills can influence the presence and severity of somatic complaints in children and adolescents. |
Instruction: Is accurate prediction of gait in nonambulatory stroke patients possible within 72 hours poststroke?
Abstracts:
abstract_id: PUBMED:21186329
Is accurate prediction of gait in nonambulatory stroke patients possible within 72 hours poststroke? The EPOS study. Background: Early prognosis, adequate goal setting, and referral are important for stroke management.
Objective: To investigate if independent gait 6 months poststroke can be accurately predicted within the first 72 hours poststroke, based on simple clinical bedside tests. Reassessment on days 5 and 9 was used to check whether accuracy changed over time.
Methods: In 154 first-ever ischemic stroke patients unable to walk independently, 19 demographic and clinical variables were assessed within 72 hours and again on days 5 and 9 poststroke. Multivariable logistic modeling was applied to identify early prognostic factors for regaining independent gait, defined as ≥4 points on the Functional Ambulation Categories.
Results: Multivariable modeling showed that patients with an independent sitting balance (Trunk Control Test-sitting; 30 seconds) and strength of the hemiparetic leg (Motricity Index leg; eg, visible contraction for all 3 items, or movement against resistance but weaker for 1 item) on day 2 poststroke had a 98% probability of achieving independent gait at 6 months. Absence of these features in the first 72 hours was associated with a probability of 27%, declining to 10% by day 9.
Conclusions: Accurate prediction of independent gait performance can be made soon after stroke, using 2 simple bedside tests: "sitting balance" and "strength of the hemiparetic leg." This knowledge is useful for making early clinical decisions regarding treatment goals and discharge planning at hospital stroke units.
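To illustrate how a multivariable logistic model of this kind turns the two bedside findings into the probabilities quoted above, the sketch below applies the standard logistic formula p = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2))). The function name and the coefficients are hypothetical, chosen only so that the output roughly matches the reported 98% and 27% figures; they are not the published EPOS model parameters.

```python
import math

def gait_probability(intercept, coefficients, predictors):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    linear = intercept + sum(b * x for b, x in zip(coefficients, predictors))
    return 1.0 / (1.0 + math.exp(-linear))

# Hypothetical coefficients chosen only to reproduce the reported probabilities;
# they are NOT the published EPOS model parameters.
b0, coefs = -1.0, (2.5, 2.4)
print(round(gait_probability(b0, coefs, (1, 1)), 2))  # sitting balance + leg strength present -> ~0.98
print(round(gait_probability(b0, coefs, (0, 0)), 2))  # both features absent -> ~0.27
```

In practice such coefficients would be estimated from the cohort data with standard logistic-regression software rather than set by hand.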
abstract_id: PUBMED:22773263
Robot-assisted practice of gait and stair climbing in nonambulatory stroke patients. A novel gait robot enabled nonambulatory patients the repetitive practice of gait and stair climbing. Thirty nonambulatory patients with subacute stroke were allocated to two groups. During 60 min sessions every workday for 4 weeks, the experimental group received 30 min of robot training and 30 min of physiotherapy and the control group received 60 min of physiotherapy. The primary variable was gait and stair climbing ability (Functional Ambulation Categories [FAC] score 0-5); secondary variables were gait velocity, Rivermead Mobility Index (RMI), and leg strength and tone blindly assessed at onset, intervention end, and follow-up. Both groups were comparable at onset and functionally improved over time. The improvements were significantly larger in the experimental group with respect to the FAC, RMI, velocity, and leg strength during the intervention. The FAC gains (mean +/- standard deviation) were 2.4 +/- 1.2 (experimental group) and 1.2 +/- 1.5 (control group), p = 0.01. At the end of the intervention, seven experimental group patients and one control group patient had reached an FAC score of 5, indicating an ability to climb up and down one flight of stairs. At follow-up, this superior gait ability persisted. In conclusion, the therapy on the novel gait robot resulted in a superior gait and stair climbing ability in nonambulatory patients with subacute stroke; a higher training intensity was the most likely explanation. A large randomized controlled trial should follow.
abstract_id: PUBMED:7944913
Restoration of gait in nonambulatory hemiparetic patients by treadmill training with partial body-weight support. The effect of treadmill training with partial body-weight support was investigated in nine nonambulatory hemiparetic patients with a mean poststroke interval of 129 days. They had received regular physiotherapy within a comprehensive stroke rehabilitation program at least 3 weeks before the treadmill training without marked improvement of their gait ability. After 25 additional treadmill training sessions, scoring of functional performance and conventional gait analysis showed a definite improvement: gait ability, assessed by the Functional Ambulation Category (0 to 5), improved by a mean of 2.2 points; other motor functions, assessed by the Rivermead Motor Assessment Score, improved by a mean of +3.9 points for gross function (range 0 to 13) and +3.2 points for the leg and trunk section (range 0 to 10); and gait cycle parameters also improved (p < .01). Muscle tone and strength of the paretic lower limb remained stable. We suggest that treadmill training with partial body-weight support could augment restoration of ambulation and other motor functions in hemiparetic patients by active and repetitive training.
abstract_id: PUBMED:21683618
Prediction of outcome in neurogenic oropharyngeal dysphagia within 72 hours of acute stroke. Background: Stroke is the most frequent cause of neurogenic oropharyngeal dysphagia (NOD). In the acute phase of stroke, the frequency of NOD is greater than 50% and, half of this patient population return to good swallowing within 14 days while the other half develop chronic dysphagia. Because dysphagia leads to aspiration pneumonia, malnutrition, and in-hospital mortality, it is important to pay attention to swallowing problems. The question arises if a prediction of severe chronic dysphagia is possible within the first 72 hours of acute stroke.
Methods: On admission to the stroke unit, all stroke patients were screened for swallowing problems by the nursing staff within 2 hours. Patients showing signs of aspiration were included in the study (n = 114) and were given a clinical swallowing examination (CSE) by the swallowing/speech therapist within 24 hours and a swallowing endoscopy within 72 hours by the physician. The primary outcome of the study was the functional communication measure (FCM) of swallowing (score 1-3, tube feeding dependency) on day 90.
Results: The grading system combining the FCM swallowing score and the penetration-aspiration scale (PAS) in the first 72 hours was tested in a multivariate analysis for its predictive value for tube feeding-dependency on day 90. For FCM level 1 to 3 (P < .0022) and PAS level 5 to 8 (P < .00001), the area under the curve (AUC) was 72.8% with an odds ratio of 11.8 (P < .00001; 95% confidence interval 0.036-0.096), meaning that such patients had a roughly 12-fold lower chance of being orally fed on day 90 and therefore remained tube feeding-dependent.
Conclusions: We conclude that signs of aspiration in the first 72 hours of acute stroke can predict severe swallowing problems on day 90. Consequently, patients should be tested on admission to a stroke unit and evaluated with established dysphagia scales to prevent aspiration pneumonia and malnutrition. A dysphagia program can lead to better communication within the stroke unit team to initiate the appropriate diagnostics and swallowing therapy as soon as possible.
abstract_id: PUBMED:36506941
Prediction of gait independence using the Trunk Impairment Scale in patients with acute stroke. Background: Gait recovery is one of the primary goals of stroke rehabilitation. Gait independence is a key functional component of independent activities in daily living and social participation. Therefore, early prediction of gait independence is essential for stroke rehabilitation. Trunk function is important for recovery of gait, balance, and lower extremity function. The Trunk Impairment Scale (TIS) was developed to assess trunk impairment in patients with stroke.
Objective: To evaluate the predictive validity of the TIS for gait independence in patients with acute stroke.
Methods: A total of 102 patients with acute stroke participated in this study. Every participant was assessed using the TIS, Stroke Impairment Assessment Set (SIAS), and Functional Independence Measure (FIM) within 48 h of stroke onset and at discharge. Gait independence was defined as FIM gait scores of 6 and 7. Multiple regression analysis was used to predict the FIM gait score, and multiple logistic regression analysis was used to predict gait independence. Cut-off values were determined using receiver operating characteristic (ROC) curves for variables considered significant in the multiple logistic regression analysis. In addition, the area under the curve (AUC), sensitivity, and specificity were calculated.
Results: For the prediction of the FIM gait score at discharge, the TIS at admission showed a good-fitting adjusted coefficient of determination (R2 = 0.672, p < 0.001). The TIS and age were selected as predictors of gait independence. The ROC curve had a TIS cut-off value of 12 points (sensitivity: 81.4%, specificity: 79.7%) and an AUC of 0.911. The cut-off value for age was 75 years (sensitivity: 74.6%, specificity: 65.1%), and the AUC was 0.709.
Conclusion: The TIS is a useful early predictor of gait ability in patients with acute stroke.
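As an illustration of how a receiver operating characteristic analysis yields a cut-off such as the 12-point TIS threshold reported above, the following Python sketch picks the threshold that maximises the Youden index (sensitivity + specificity - 1). The data are synthetic and the helper name is illustrative; the abstract does not state which criterion the authors used to select their cut-off, so this is only one common approach.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def best_cutoff(scores, outcome):
    """Choose the cut-off that maximises the Youden index (sensitivity + specificity - 1)."""
    fpr, tpr, thresholds = roc_curve(outcome, scores)
    youden = tpr - fpr
    i = int(np.argmax(youden))
    return thresholds[i], tpr[i], 1 - fpr[i], roc_auc_score(outcome, scores)

# Synthetic data: higher admission TIS (0-23 points) -> more likely independent gait.
rng = np.random.default_rng(0)
independent = rng.integers(0, 2, 102)                      # 1 = independent gait at discharge
tis = np.clip(rng.normal(8 + 6 * independent, 3), 0, 23).round()
cutoff, sensitivity, specificity, auc = best_cutoff(tis, independent)
print(cutoff, sensitivity, specificity, auc)
```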
abstract_id: PUBMED:7762049
Treadmill training with partial body weight support compared with physiotherapy in nonambulatory hemiparetic patients. Background And Purpose: Treadmill training with partial body weight support is a new and promising therapy in gait rehabilitation of stroke patients. The study intended to investigate its efficiency compared with gait training within regular physiotherapy in nonambulatory patients with chronic hemiparesis.
Methods: An A-B-A single-case study design compared treadmill training plus partial body weight support (A) with physiotherapy based on the Bobath concept (B) in seven nonambulatory hemiparetic patients. The minimum poststroke interval was 3 months, and each treatment phase lasted 3 weeks. Variables were gait ability assessed by the Functional Ambulation Category, other motor functions tested by the Rivermead Motor Assessment, muscle strength assessed by the Motricity Index, muscle tone rated by the Modified Ashworth Spasticity Scale, and gait cycle parameters.
Results: Treadmill training was more effective with regard to restoration of gait ability (P < .05) and walking velocity (P < .05). Other motor functions improved steadily during the study. Muscle strength did not change, and muscle tone varied in an unsystematic way. The ratio of cadence to stride length did not alter significantly.
Conclusions: Treadmill training offers the advantages of task-oriented training with numerous repetitions of a supervised gait pattern. It proved powerful in gait restoration of nonambulatory patients with chronic hemiparesis. Treadmill training could therefore become an adjunctive tool to regain walking ability in a shorter period of time.
abstract_id: PUBMED:35367849
Pathological gait clustering in post-stroke patients using motion capture data. Background: Analyzing the complex gait patterns of post-stroke patients with lower limb paralysis is essential for rehabilitation.
Research Question: Is it feasible to use the full joint-level kinematic features extracted from the motion capture data of patients directly to identify the optimal gait types that ensure high classification performance?
Methods: In this study, kinematic features were extracted from 111 gait cycles of joint angle and angular velocity data from 36 post-stroke patients, collected eight times over six months using a motion capture system. Simultaneous clustering and classification were applied to determine the optimal gait types for reliable classification performance.
Results: In the given dataset, six optimal gait groups were identified, and the clustering and classification performances were denoted by a silhouette coefficient of 0.1447 and F1 score of 1.0000, respectively.
Significance: There is no distinct clinical classification of post-stroke hemiplegic gaits. However, in contrast to previous studies, more optimal gait types with a high classification performance fully utilizing the kinematic features were identified in this study.
abstract_id: PUBMED:25931759
The effects of gait velocity on the gait characteristics of hemiplegic patients. [Purpose] The present study investigated the effects of gait speed on temporal and spatial gait characteristics of hemiplegic stroke patients. [Subjects and Methods] Twenty post-stroke hemiplegic patients participated in the present study. To enhance the reliability of the analysis of the gait characteristics, the assessments were conducted three days per week at the same time every day. Each subject walked maintaining a comfortable speed for the first minute, and measurement was conducted for 30 seconds at a treadmill speed of 1 km/hour thereafter. Then, the subjects walked at a treadmill speed of 2 km/hour for 30 seconds after a 30-minute rest. The differences in the measurements were tested for significance using the paired t-test. [Results] The measures of foot rotation, step width, load response, mid stance, pre-swing, swing phase, and double stance phase showed significant difference between the gait velocities. [Conclusion] The present study provides basic data for gait velocity changes for hemiplegic patients.
abstract_id: PUBMED:30660950
Gait Profile Score in able-bodied and post-stroke individuals adjusted for the effect of gait speed. Background: The Gait Profile Score (GPS) measures the quality of an individual's walking by calculating the difference between the kinematic pattern and the average walking pattern of healthy individuals.
Research Questions: The purposes of this study were to quantify the effect of speed on the GPS and to determine whether the prediction of gait patterns at a specific speed would make the GPS outcome insensitive to gait speed in the evaluation of post-stroke individuals.
Methods: The GPS was calculated for able-bodied individuals walking at different speeds and for the comparison of post-stroke individuals with able-bodied individuals using the original experimental data (standard GPS) and the predicted gait patterns at a given speed (GPS velocity, GPSv). We employed standard gait analysis for data collection of the subjects. Sixteen participants with a stroke history were recruited for the post-stroke group, and 15 age-matched, able-bodied participants formed the control group.
Results: Gait speed significantly affects the GPS and the method to predict the gait patterns at any speed is able to mitigate the effects of gait speed on the GPS. Overall, the gap between the GPS and GPSv values across the post-stroke individuals was small (0.5° on average, range from 0.0° to 1.4°) and not statistically significant. However, there was a significant negative linear relationship in the absolute difference between the GPS and GPSv values for the participants of the post-stroke group with gait speed, indicating that a larger difference between the speeds of the post-stroke participant and the reference dataset resulted in a larger difference between the GPS and GPSv.
Significance: The modified version of the GPS, the GPSv, is effective in reducing the impact of gait speed on GPS; however, the observed difference between the two methods was only around 1° for the slowest individuals in comparison to the reference dataset.
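For readers unfamiliar with how the GPS is computed, the sketch below follows the commonly used definition: each Gait Variable Score is the root-mean-square difference between a subject's kinematic curve and the reference mean over the gait cycle, and the GPS is the RMS of those scores. The abstract itself does not spell out the formula, so treat the function and variable names as illustrative assumptions.

```python
import numpy as np

def gait_profile_score(patient_curves, reference_curves):
    """GPS: RMS of the per-variable RMS differences (Gait Variable Scores)."""
    gvs = [np.sqrt(np.mean((np.asarray(patient_curves[k]) -
                            np.asarray(reference_curves[k])) ** 2))
           for k in reference_curves]
    return float(np.sqrt(np.mean(np.square(gvs))))

# Toy example: one kinematic variable, patient offset from the reference by 5 degrees.
cycle = np.linspace(0, 100, 51)
reference = {"knee_flexion": 30 * np.sin(np.pi * cycle / 100)}
patient = {"knee_flexion": reference["knee_flexion"] + 5}
print(round(gait_profile_score(patient, reference), 1))  # -> 5.0 degrees
```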
abstract_id: PUBMED:10206603
Prediction of gait velocity in ambulatory stroke patients during rehabilitation. Objective: To quantify prediction of gait velocity in ambulatory stroke patients during rehabilitation.
Design: Single group (n = 42) at the beginning of rehabilitation (Test 1) and 8 weeks later (Test 2).
Setting: Inpatient rehabilitation.
Patients: Unilateral first stroke; informed consent; able to walk 10 meters.
Measures:
Independent Variables: Gait velocity at Test 1, age, time from stroke to Test 1, side of lesion, neglect.
Dependent Variables: Gait velocity at Test 2, gait velocity change.
Results: The correlation between initial gait velocity and gait velocity outcome at Test 2 was of moderate strength (r2 = .62, p<.05). However, even at its lowest, the standard error of prediction for an individual patient was 9.4 m/min, with 95% confidence intervals extending over a range of 36.8 m/min. Age was a weak predictor of gait velocity at Test 2 (r2 = -.10, p<.05). Gait velocity change was poorly predicted. The only significant correlations were initial gait velocity (r2 = .10, p<.05) and age (r2 = .10, p<.05).
Conclusion: While the prediction of gait velocity at Test 2 was of moderate strength on a group basis, the error surrounding predicted values of gait velocity for a single patient was relatively high, indicating that this simple approach was imprecise on an individual basis. The prediction of gait velocity change was poor. A wide range of change scores was possible for patients, irrespective of their gait velocity score on admission to rehabilitation.
Answer: Yes, accurate prediction of gait in nonambulatory stroke patients is possible within 72 hours poststroke. The EPOS study demonstrated that using two simple bedside tests—sitting balance and strength of the hemiparetic leg—within the first 72 hours poststroke, clinicians could predict with high accuracy the likelihood of a patient regaining independent gait six months later. Patients with an independent sitting balance and visible contraction or movement against resistance in the hemiparetic leg had a 98% probability of achieving independent gait at 6 months. In contrast, the absence of these features within the first 72 hours was associated with a much lower probability of 27%, which declined to 10% by day 9 (PUBMED:21186329). |
Instruction: Does pneumonectomy have a role in the treatment of stage IIIA non-small cell lung cancer?
Abstracts:
abstract_id: PUBMED:23545195
Does pneumonectomy have a role in the treatment of stage IIIA non-small cell lung cancer? Background: The role of surgical resection for stage IIIA non-small cell lung cancer (NSCLC) is unclear. We sought to examine outcomes after pneumonectomy for patients with stage IIIA disease.
Methods: All patients with stage IIIA NSCLC who had pneumonectomy at a single institution between 1999 and 2010 were reviewed. The Kaplan-Meier method was used to estimate long-term survival and multivariable Cox proportional hazards regression was used to identify clinical characteristics associated with survival.
Results: During the study period, 324 patients had surgical resection of stage IIIA NSCLC. Pneumonectomy was performed in 55 patients, 23 (42%) of whom had N2 disease. Induction treatment was used in 17 patients (31%) overall and in 11 of the patients (48%) with N2 disease. Perioperative mortality was 9% (n = 5) overall and 18% (n = 3) in patients that had received induction therapy (p = 0.17). Complications occurred in 32 patients (58%). Three-year survival was 36% and 5-year survival was 29% for all patients. Three-year survival was 40% for N0-1 patients and 29% for N2 patients (p = 0.59). In multivariable analysis, age over 60 years (hazard ratio [HR] 3.65, p = 0.001), renal insufficiency (HR 5.80, p = 0.007), and induction therapy (HR 2.17, p = 0.05) predicted worse survival, and adjuvant therapy (HR 0.35, p = 0.007) predicted improved survival.
Conclusions: Long-term survival after pneumonectomy for stage IIIA NSCLC is within an acceptable range, but pneumonectomy may not be appropriate after induction therapy or in patients with renal insufficiency. Patient selection and operative technique that limit perioperative morbidity and facilitate the use of adjuvant chemotherapy are critical to optimizing outcomes.
abstract_id: PUBMED:26410162
Pneumonectomy for Clinical Stage IIIA Non-Small Cell Lung Cancer: The Effect of Neoadjuvant Therapy. Background: The role of pneumonectomy after neoadjuvant therapy for stage IIIA non-small cell lung cancer (NSCLC) remains uncertain.
Methods: Patients who underwent pneumonectomy for clinical stage IIIA NSCLC were abstracted from the National Cancer Database. Individuals treated with neoadjuvant therapy, followed by resection, were compared with those who underwent resection, followed by adjuvant therapy. Logistic regression was performed to identify factors associated with 30-day mortality. A Cox proportional hazards model was fitted to identify factors associated with survival.
Results: Pneumonectomy for stage IIIA NSCLC with R0 resection was performed in 1,033 patients; of these, 739 (71%) received neoadjuvant therapy, and 294 (29%) underwent resection, followed by adjuvant therapy. The two groups were well matched for age, gender, race, income, Charlson comorbidity score, and tumor size. The 30-day mortality rate in the neoadjuvant group was 7.8% (57 of 739). Median survival was similar between the two groups: 25.9 months neoadjuvant vs 31.3 months adjuvant (p = 0.74). A multivariable logistic regression model for 30-day mortality demonstrated that increasing age, annual income of less than $35,000, nonacademic facility, and right-sided resection were associated with an elevated risk of 30-day mortality. A multivariable Cox model for survival demonstrated that increasing age was predictive of shorter survival and that administration of neoadjuvant therapy did not confer a survival advantage over adjuvant therapy (p = 0.59).
Conclusions: Most patients who require pneumonectomy for clinical stage IIIA NSCLC receive neoadjuvant chemoradiotherapy, without an improvement in survival. In these patients, primary resection, followed by adjuvant chemoradiotherapy, may provide equivalent long-term outcomes.
abstract_id: PUBMED:35795054
Role of Pneumonectomy in T1-4N2M0 Non-Small Cell Lung Cancer: A Propensity Score Matching Analysis. Background: N2 stage disease constitutes approximately 20%-30% of all non-small cell lung cancer (NSCLC). Concurrently, surgery remains the first-choice treatment for patients with N2 NSCLC if feasible. However, the role of pneumonectomy in N2 NSCLC has rarely been investigated and remains controversial.
Methods: We enrolled 26,798 patients with T1-4N2M0 NSCLC (stage IIIA/IIIB) from the Surveillance, Epidemiology, and End Results (SEER) database between 2004 and 2015. We compared the overall survival (OS) and cancer-specific survival (CSS) between patients who received pneumonectomy and those who did not receive surgery. The Kaplan-Meier method, Cox regression analyses, and propensity score matching (PSM) were applied to demonstrate the effect of pneumonectomy.
Results: Patients receiving pneumonectomy had a significantly better OS and CSS than those without pneumonectomy both before [adjusted-HR (95% CI): 0.461 (0.425-0.501) for OS, 0.444 (0.406-0.485) for CSS] and after PSM [adjusted-HR (95% CI): 0.499 (0.445-0.560) for OS, 0.457 (0.405-0.517) for CSS] with all p-values <0.001. Subgroup analysis demonstrated concordant results stratified by demographic or clinicopathological variables. In sensitivity analysis, no significant difference was observed between patients receiving single pneumonectomy and chemoradiotherapy without surgery in OS and CSS both before [unadjusted-HR (95% CI): 1.016 (0.878-1.176) for OS, 0.934 (0.794-1.099) for CSS, p = 0.832] and after PSM [unadjusted-HR (95% CI): 0.988 (0.799-1.222) for OS, 0.938 (0.744-1.182) for CSS] with all p-values >0.4.
Conclusion: For patients with T1-4N2M0 NSCLC (stage IIIA/IIIB), pneumonectomy is an independent protective factor of OS and should be considered when applicable.
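The propensity score matching used in this SEER analysis can be sketched as follows: a logistic model estimates each patient's probability of receiving pneumonectomy from baseline covariates, and treated patients are then paired with the nearest-propensity controls before survival is compared. The code below is a simplified, greedy 1:1 matcher on synthetic data (no caliper, hypothetical covariates), not the authors' actual implementation; the matched pairs would subsequently be analysed with Kaplan-Meier and Cox models as described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(covariates, treated):
    """Greedy 1:1 nearest-neighbour matching on the estimated propensity score."""
    model = LogisticRegression(max_iter=1000).fit(covariates, treated)
    ps = model.predict_proba(covariates)[:, 1]                 # P(pneumonectomy | covariates)
    treated_idx = np.flatnonzero(treated == 1)
    available = set(np.flatnonzero(treated == 0).tolist())
    pairs = []
    for t in treated_idx:
        if not available:
            break
        c = min(available, key=lambda j: abs(ps[j] - ps[t]))   # closest unmatched control
        pairs.append((t, c))
        available.remove(c)
    return pairs, ps

# Synthetic example: 3 hypothetical covariates (e.g. age, comorbidity, T stage).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
treated = (rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
pairs, _ = propensity_match(X, treated)
print(len(pairs), "matched treated/control pairs")
```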
abstract_id: PUBMED:33841969
Evaluation of different treatment strategies between right-sided and left-sided pneumonectomy for stage I-IIIA non-small cell lung cancer patients. Background: This study aimed to assess the different survival outcomes of stage I-IIIA non-small cell lung cancer (NSCLC) patients who received right-sided and left-sided pneumonectomy, and to further develop the most appropriate treatment strategies.
Methods: We accessed data from the Surveillance, Epidemiology, and End Results database from the United States for the present study. An innovative propensity score matching analysis was used to minimize the variance between groups.
Results: For 2,683 patients who received pneumonectomy, cancer-specific survival [hazard ratio (HR) =0.863, 95% confidence interval (CI): 0.771 to 0.965, P=0.010] and overall survival (OS; HR =0.875, 95% CI: 0.793 to 0.967, P=0.008) were significantly superior in left-sided pneumonectomy patients compared with right-sided pneumonectomy patients. Cancer-specific survival (HR =0.847, 95% CI: 0.745 to 0.963, P=0.011) and OS (HR =0.858, 95% CI: 0.768 to 0.959, P=0.007) were also significantly longer with left-sided compared to right-sided pneumonectomy after matching analysis of 2,050 patients. Adjuvant therapy could significantly prolong cancer-specific survival (67 versus 51 months, HR =1.314, 95% CI: 1.093 to 1.579, P=0.004) and OS (46 versus 30 months, HR =1.458, 95% CI: 1.239 to 1.715, P<0.001) among left-sided pneumonectomy patients after the matching procedure, while adjuvant therapy did not increase cancer-specific survival for right-sided pneumonectomy patients (46 versus 42 months, HR =1.112, 95% CI: 0.933 to 1.325, P=0.236). Subgroup analysis showed that adjuvant chemotherapy could significantly improve cancer-specific survival and OS for all pneumonectomy patients. However, radiotherapy was associated with worse survival for patients with right-sided pneumonectomy.
Conclusions: Pneumonectomy side can be deemed as an important factor when physicians determine the most optimal treatment strategies.
abstract_id: PUBMED:37045488
Current Management of Stage IIIA (N2) Non-Small-Cell Lung Cancer: Role of Perioperative Immunotherapy and Tyrosine Kinase Inhibitors. There have been numerous recent advances in the treatment of stage IIIA non-small cell lung cancer. The most significant involve the addition of targeted therapies and immune checkpoint inhibitors into perioperative care. These exciting advances are improving survival in this challenging patient population, but decades-old controversies around the definition of resectability, the prognostic importance of tumor response to induction therapy, and the role of pneumonectomy persist.
abstract_id: PUBMED:37493008
The prognosis of clinical stage IIIa non-small cell lung cancer in Taiwan. Lung cancer is the leading cause of cancer death. The treatment of stage IIIa remains the most controversial of all stages of non-small cell lung cancer (NSCLC). We report on the heterogeneity and current treatment strategies of stage IIIa NSCLC in Taiwan. This study is a retrospective analysis using data from the Taiwan Society of Cancer Registry between January 2010 and December 2018. In total, 4232 patients with stage IIIa NSCLC were included. Based on cell type, the best 5-year OS (40.40%) occurred among patients with adenocarcinoma. Among the heterogeneous subgroups, T1N2 had the best 5-year OS (47.62%), followed by T4N0 (39.82%) and the others. Patients who underwent operations had better 5-year OS (over 50%) than those who did not (less than 30%). Segmentectomy (75.28%) and lobectomy (54.06%) showed better 5-year OS than other surgical methods (less than 50%). In multivariable analysis, young age, female sex, lower Charlson Comorbidity Index score, adenocarcinoma cell type, well-differentiated tumors, T1N2/T4N0 subgroups, surgical treatment, and segmentectomy/lobectomy/bilobectomy were significant factors. In conclusion, among the heterogeneous subgroups, T1N2 had the best outcomes, followed by T4N0. Patients who received surgical treatment had much better outcomes than those who did not. As always, multimodal therapies with individualized treatment tend to provide better survival outcomes.
abstract_id: PUBMED:10380519
The surgery of non-small-cell bronchogenic carcinoma in stage IIIA. An analysis of 150 treated cases. Background And Aim: The authors report the findings of a retrospective study of 150 cases of bronchogenic non-small-cell carcinoma at stage IIIA.
Methods: Of the 150 patients treated, 130 were male and 20 female. The mean age of the population examined was 55 years, with a minimum of 28 and a maximum of 76. The techniques of exeresis used were pneumonectomy in 70 cases (46.6%; simple in 50 cases--33.3%--and with intrapericardial ligation of pulmonary vessels in 20--13.3%), lobectomy in 61 cases (40.6%), lobectomy with associated atypical resection in 9 cases (6%), atypical resection in 6 patients (4%), and bilobectomy in 4 (2.6%).
Results: The 5-year survival rate was 16.9%. It was also found that the 5-year survival rate was 20.7% higher for epidermoid carcinoma compared to other histotypes. The technique used also influenced survival: subjects undergoing pneumonectomy presented a 5-year survival of 29.7% compared to 26.8% for lobectomies associated with atypical resection.
Conclusion: Surgery of bronchogenic carcinoma at stage IIIA has not obtained promising results in terms of survival. However, no other alternative treatment permits an average 5-year survival rate of 15% to be achieved.
abstract_id: PUBMED:34734237
Importance of tumour volume and histology in trimodality treatment of patients with Stage IIIA non-small cell lung cancer-results from a retrospective analysis. Objectives: Chemoradiotherapy (CRT) has been the backbone of guideline-recommended treatment for Stage IIIA non-small cell lung cancer (NSCLC). However, in selected operable patients with a resectable tumour, good results have been achieved with trimodality treatment (TT). The objective of this bi-institutional analysis of outcomes in patients treated for Stage IIIA NSCLC was to identify particular factors supporting the role of surgery after CRT.
Methods: In a 2-centre retrospective cohort study, patients with Stage III NSCLC (seventh edition TNM) were identified and those patients with Stage IIIA who were treated with CRT or TT between January 2007 and December 2013 were selected. Patient characteristics as well as tumour parameters were evaluated in relation to outcome and whether or not these variables were predictive for the influence of treatment (TT or CRT) on outcome [overall survival (OS) or progression-free survival (PFS)]. Estimation of treatment effect on PFS and OS was performed using propensity-weighted cox regression analysis based on inverse probability weighting.
Results: From a database of 725 Stage III NSCLC patients, 257 Stage IIIA NSCLC patients, treated with curative intent, were analysed; 186 (72%) with cIIIA-N2 and 71 (28%) with cT3N1/cT4N0 disease. One hundred and ninety-six (76.3%) patients were treated by CRT alone (high-dose radiation with daily low-dose cisplatin) and 61 (23.7%) by TT. The unweighted data showed that TT resulted in better PFS and OS. After weighting for factors predictive of treatment assignment, patients with a large gross tumour volume (>120 cc) had better PFS when treated with TT, and patients with an adenocarcinoma treated with TT had better OS, regardless of tumour volume.
Conclusions: Patients with Stage IIIA NSCLC and large tumour volume, as well as patients with adenocarcinoma, who were selected for TT, had favourable outcome compared to patients receiving CRT. This information can be used to assist multidisciplinary team decision-making and for stratifying patients in studies comparing TT and definitive CRT.
abstract_id: PUBMED:9292746
Results of pneumonectomy for non-small cell lung cancer. To assess the role of pneumonectomy for lung cancer and the factors affecting the prognosis, 107 patients who had undergone pneumonectomy for non-small cell lung cancer (NSCLC) between January, 1985 and March, 1996, were analyzed. They included 81 squamous cell carcinoma, 22 adenocarcinoma, 3 large cell carcinoma, and one adenosquamous cell carcinoma, with 8 patients in post-operative stage I, 15 in stage II, 51 in stage IIIA, and 33 in stage IIIB of the disease. The 5-year survival rate was 54.7% in stages I + II, 38.0% in stage IIIA, and <4% in stage IIIB. In stages I-IIIA, the patients with squamous cell carcinoma showed a significantly better prognosis than those with adenocarcinoma (50.6 vs. 0%, p < 0.01). The prognosis was also better, but not statistically significant, for patients with central type compared with those with peripheral type in both all histologic types (58.0 vs. 8.4%) and only squamous cell type (59.3 vs. 18.8%). A better prognosis observed in squamous histologic type or central type seemed to be related to a better N factor. Pneumonectomy remains the treatment of choice for lung cancer, but seems not to be justified for patients with stage IIIB due to their poor prognosis.
abstract_id: PUBMED:36841745
Safety of adjuvant atezolizumab after pneumonectomy/bilobectomy in stage II-IIIA non-small cell lung cancer in the randomized phase III IMpower010 trial. Objective: Adjuvant atezolizumab is a standard of care after chemotherapy in completely resected stage II-IIIA programmed death ligand-1 tumor cell 1% or greater non-small cell lung cancer based on results from the phase III IMpower010 study. We explored the safety and tolerability of adjuvant atezolizumab by surgery type in IMpower010.
Methods: Patients had completely resected stage IB-IIIA non-small cell lung cancer (Union Internationale Contre le Cancer/American Joint Committee on Cancer, 7th Ed), received up to four 21-day cycles of cisplatin-based chemotherapy, and were randomized 1:1 to receive atezolizumab 1200 mg every 3 weeks (≤16 cycles or 1 year) or best supportive care. Adverse events and clinical characteristics were investigated by surgery type (pneumonectomy/bilobectomy or lobectomy/sleeve lobectomy) in the randomized stage II-IIIA population who received 1 or more atezolizumab dose or with 1 or more postbaseline assessment (safety evaluable) for best supportive care.
Results: Overall, 871 patients comprised the safety-evaluable randomized stage II-IIIA population. In the atezolizumab arm, 23% (100/433) received pneumonectomy/bilobectomy and 77% (332/433) received lobectomy/sleeve lobectomy. Atezolizumab discontinuation occurred in 32% (n = 32) and 35% (n = 115) of the pneumonectomy/bilobectomy and lobectomy/sleeve lobectomy groups, respectively. Grade 3/4 adverse events were reported in 21% (n = 21) and 23% (n = 76) of patients in the atezolizumab arms in the pneumonectomy/bilobectomy and lobectomy/sleeve lobectomy groups, respectively. In the atezolizumab arms of the surgery groups, 13% (n = 13) and 17% (n = 55) had an adverse event leading to hospitalization. Atezolizumab-related adverse events leading to hospitalization occurred in 5% (n = 5) and 7% (n = 23) of the surgery groups.
Conclusions: These exploratory findings support use of adjuvant atezolizumab after platinum-based chemotherapy in patients with completely resected stage II-IIIA programmed death ligand-1 tumor cell 1% or more non-small cell lung cancer, regardless of surgery type.
Answer: Yes, pneumonectomy does have a role in the treatment of stage IIIA non-small cell lung cancer (NSCLC). Several studies have investigated the outcomes and implications of pneumonectomy in this context.
One study found that long-term survival after pneumonectomy for stage IIIA NSCLC is within an acceptable range, suggesting that pneumonectomy can be a viable option for these patients. However, the study also noted that pneumonectomy may not be appropriate after induction therapy or in patients with renal insufficiency. It emphasized the importance of patient selection and operative technique to limit perioperative morbidity and facilitate the use of adjuvant chemotherapy to optimize outcomes (PUBMED:23545195).
Another study compared patients who underwent pneumonectomy after neoadjuvant therapy with those who had resection followed by adjuvant therapy. It found that most patients requiring pneumonectomy for clinical stage IIIA NSCLC received neoadjuvant chemoradiotherapy without an improvement in survival. The study suggested that primary resection followed by adjuvant chemoradiotherapy may provide equivalent long-term outcomes (PUBMED:26410162).
A propensity score matching analysis from the SEER database indicated that patients with T1-4N2M0 NSCLC who received pneumonectomy had significantly better overall survival (OS) and cancer-specific survival (CSS) than those who did not undergo surgery. This suggests that pneumonectomy is an independent protective factor of OS and should be considered when applicable (PUBMED:35795054).
The role of pneumonectomy in the treatment of stage IIIA NSCLC is also supported by the fact that recent advances in the treatment of this stage of lung cancer include the addition of targeted therapies and immune checkpoint inhibitors into perioperative care, which are improving survival in this challenging patient population (PUBMED:37045488).
In summary, pneumonectomy can be an important part of the treatment strategy for stage IIIA NSCLC, particularly in carefully selected patients and when combined with adjuvant therapies. However, the decision to perform pneumonectomy should be made on a case-by-case basis, considering factors such as the patient's overall health, response to induction therapy, and the potential benefits of adjuvant treatments. |
Instruction: Perceived hazards of transfusion: can a clinician tool help patients' understanding?
Abstracts:
abstract_id: PUBMED:22724532
Perceived hazards of transfusion: can a clinician tool help patients' understanding? Objective: To evaluate the use of a tool prompting counselling behaviour for blood transfusion by assessing clinicians' self-reported counselling behaviours, and changes in patients' beliefs about transfusion.
Methods And Materials: Mixed quantitative and qualitative methodology undertaken in two phases. In phase 1, clinicians' responses (n = 12) to a semi-structured questionnaire were analysed to identify the content of discussions with patients about different aspects of receiving a blood transfusion. The content of discussions was coded using illness representation concepts from the Common Sense Self-Regulation Model. Phase 2 included patients (n = 14) scheduled for elective surgery who completed a questionnaire on their beliefs about transfusion before and after counselling.
Results: The most frequently coded illness representations targeted by clinicians using the tool were 'consequence of treatment' (32%) and 'cure/control' (30.5%). Two patient beliefs showed significant change following counselling using the checklist. After counselling, patients were more likely to disagree/strongly disagree with the statement that doctors relied too much on transfusion (P = 0.034) and more likely to agree/strongly agree that blood transfusion can result in new health problems (P = 0.041).
Conclusion: This pilot study provides insight into how clinicians use a tool for blood transfusion counselling and shows the potential to influence patients' beliefs about transfusion. Whilst the checklist has a role in standardising practice, this pilot study highlights the need for optimising its use before undertaking a fully randomised evaluation of the tool.
abstract_id: PUBMED:33806240
The GERtality Score: The Development of a Simple Tool to Help Predict in-Hospital Mortality in Geriatric Trauma Patients. Feasible and predictive scoring systems for severely injured geriatric patients are lacking. Therefore, the aim of this study was to develop a scoring system for the prediction of in-hospital mortality in severely injured geriatric trauma patients. The TraumaRegister DGU® (TR-DGU) was utilized. European geriatric patients (≥65 years) admitted between 2008 and 2017 were included. Relevant patient variables were implemented in the GERtality score. By conducting a receiver operating characteristic (ROC) analysis, a comparison with the Geriatric Trauma Outcome Score (GTOS) and the Revised Injury Severity Classification II (RISC-II) Score was performed. A total of 58,055 geriatric trauma patients (mean age: 77 years) were included. Univariable analysis led to the following variables: age ≥ 80 years, need for packed red blood cells (PRBC) transfusion prior to intensive care unit (ICU), American Society of Anesthesiologists (ASA) score ≥ 3, Glasgow Coma Scale (GCS) ≤ 13, Abbreviated Injury Scale (AIS) in any body region ≥ 4. The maximum GERtality score was 5 points. A mortality rate of 72.4% was calculated in patients with the maximum GERtality score. Mortality rates of 65.1 and 47.5% were encountered in patients with GERtality scores of 4 and 3 points, respectively. The area under the curve (AUC) of the novel GERtality score was 0.803 (GTOS: 0.784; RISC-II: 0.879). The novel GERtality score is a simple and feasible score that enables an adequate prediction of the probability of mortality in polytraumatized geriatric patients by using only five specific parameters.
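Because the GERtality score is a simple additive checklist (five criteria, maximum of 5 points), it can be computed as sketched below. The assumption that each criterion contributes exactly one point follows from the five variables and the stated maximum of 5; the function and argument names are illustrative.

```python
def gertality_score(age_80_or_older, prbc_before_icu, asa_3_or_more,
                    gcs_13_or_less, ais_4_or_more):
    """One point per fulfilled criterion; the maximum score is 5."""
    return sum([age_80_or_older, prbc_before_icu, asa_3_or_more,
                gcs_13_or_less, ais_4_or_more])

# Reported in-hospital mortality by score: 5 points -> 72.4%, 4 -> 65.1%, 3 -> 47.5%.
print(gertality_score(True, False, True, True, False))  # -> 3
```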
abstract_id: PUBMED:34709190
A Decision Support Tool for Allogeneic Hematopoietic Stem Cell Transplantation for Children With Sickle Cell Disease: Acceptability and Usability Study. Background: Individuals living with sickle cell disease (SCD) may benefit from a variety of disease-modifying therapies, including hydroxyurea, voxelotor, crizanlizumab, L-glutamine, and chronic blood transfusions. However, allogeneic hematopoietic stem cell transplantation (HCT) remains the only nonexperimental treatment with curative intent. As HCT outcomes can be influenced by the complex interaction of several risk factors, HCT can be a difficult decision for health care providers to make for their patients with SCD.
Objective: The aim of this study is to determine the acceptability and usability of a prototype decision support tool for health care providers in decision-making about HCT for SCD, together with patients and their families.
Methods: On the basis of published transplant registry data, we developed the Sickle Options Decision Support Tool for Children, which provides health care providers with personalized transplant survival and risk estimates for their patients to help them make informed decisions regarding their patients' management of SCD. To evaluate the tool for its acceptability and usability, we conducted beta tests of the tool and surveys with physicians using the Ottawa Decision Support Framework and mobile health app usability questionnaire, respectively.
Results: According to the mobile health app usability questionnaire survey findings, the overall usability of the tool was high (mean 6.15, SD 0.79; range 4.2-7). According to the Ottawa Decision Support Framework survey findings, acceptability of the presentation of information on the decision support tool was also high (mean 2.94, SD 0.63; range 2-4), but the acceptability regarding the amount of information was mixed (mean 2.59, SD 0.5; range 2-3). Most participants expressed that they would use the tool in their own patient consults (13/15, 87%) and suggested that the tool would ease the decision-making process regarding HCT (8/9, 89%). The 4 major emergent themes from the qualitative analysis of participant beta tests include user interface, data content, usefulness during a patient consult, and potential for a patient-focused decision aid. Most participants supported the idea of a patient-focused decision aid but recommended that it should include more background on HCT and a simplification of medical terminology.
Conclusions: We report the development, acceptability, and usability of a prototype decision support tool app to provide individualized risk and survival estimates to patients interested in HCT in a patient consultation setting. We propose to finalize the tool by validating predictive analytics using a large data set of patients with SCD who have undergone HCT. Such a tool may be useful in promoting physician-patient collaboration in making shared decisions regarding HCT for SCD. Further incorporation of patient-specific measures, including the HCT comorbidity index and the quality of life after transplant, may improve the applicability of the decision support tool in a health care setting.
abstract_id: PUBMED:32435095
Global Trigger Tool: Proficient Adverse Drug Reaction Autodetection Method in Critical Care Patient Units. Background: Emergency department (ED) being the most crucial part of hospital, where adverse drug reactions (ADRs) often go undetected. Trigger tools are proficient ADR detection methods, which have only been applied for retrospective surveillance. We did a prospective analysis to further refine the trigger tool application in healthcare settings.
Objective: To estimate the prevalence of ADRs and prospectively evaluate the importance of using trigger tools for their detection.
Materials And Methods: A prospective study was conducted in the ED for the presence of triggers in patient records to monitor and report ADRs by applying the Institute for Healthcare Improvement (IHI) trigger tool methodology.
Results: Four hundred sixty-three randomly selected medical records were analyzed using 51 trigger tools; triggers were found in 181 (39.09%) and ADRs in 62 (13.39%) patients. The prevalence of ADRs was 13.39%. According to the World Health Organization (WHO)-Uppsala Monitoring Centre (UMC) causality scale, 47 (75.8%) were classified as probable and 15 (24.2%) as possible, wherein 39 (62.9%) were predictable and 8 (12.9%) were definitely preventable. The most common triggers were abrupt medication stoppage (34.98%), antiemetic use (25.91%), and time in the ED >6 hours (17.49%). The positive predictive values (PPVs) of triggers such as international normalized ratio (INR) > 4 (p = 0.0384), vitamin K administration (p = 0.002), steroid use (p = 0.0001), abrupt medication stoppage (p = 0.0077), transfusion of blood or blood products (p = 0.004), and rash (p = 0.0042) were statistically significant, which makes the event detection process more structured when these triggers are positive. The presence of five or more triggers was associated with a statistically significantly higher chance of an ADR (p < 0.05).
Conclusion: Trigger tool could be a viable method to identify ADRs when compared to the traditional ADR identification methods, but there is insufficient data on IHI tool and its use to identify ADRs in the general outpatient setting. Healthcare providers may benefit from better trigger tools to help them detect ADRs.
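For context, the positive predictive value reported for each trigger is simply the share of trigger-positive records that contained a confirmed ADR. A minimal sketch with hypothetical counts follows; the abstract reports only p-values, not the underlying counts.

```python
def positive_predictive_value(true_positives, false_positives):
    """PPV = TP / (TP + FP): probability that a positive trigger marks a real ADR."""
    return true_positives / (true_positives + false_positives)

# Hypothetical example: a trigger flagged 20 charts, 9 of which held a confirmed ADR.
print(positive_predictive_value(9, 11))  # -> 0.45
```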
abstract_id: PUBMED:35211982
Development and validation of the safe transfusion assessment tool. Background: Given the prevalence and risks of blood transfusion, it is essential that trainees and practicing clinicians have a thorough understanding of relevant transfusion medicine competencies. The aim of this research was to develop and gather validity evidence for an instrument to assess knowledge of core transfusion-related competencies.
Methods: We developed the safe transfusion assessment tool (STAT) using a multistep process. Initially, 20 core competencies in transfusion medicine were identified through a consensus-driven Delphi process. Learning objectives and assessment items pertinent to each competency were created. Next, a 13-item assessment tool was piloted with multidisciplinary experts and trainees. Multiple iterative revisions were made based on feedback. Finally, the 12-item STAT was administered to 100 participants of varying training level and specialty to establish validity, difficulty and item discrimination indices, and perceived utility.
Results: Analysis of instrument item difficulty and item discrimination indices demonstrated the ability of the STAT to assess essential knowledge in transfusion medicine relevant to trainees and clinicians in multiple programs and practice settings. Eight of twelve items discriminated between learners with varying degrees of expertise. Hundred percent of students and trainees rated the STAT as Extremely Helpful or Somewhat Helpful and the majority planned to utilize the answer guide as a study aid.
Conclusion: The STAT is a concise, valid, and reliable knowledge assessment tool that may be used by researchers and educators to augment transfusion medicine curricula (www.safetransfusion.ucsf.edu). Scores can help inform departments on areas in which trainees require additional support and areas of potential educational interventions.
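The item difficulty and discrimination indices mentioned above come from classical test theory: difficulty is the proportion of examinees answering an item correctly, and discrimination contrasts high- and low-scoring groups. The abstract does not describe the exact computation the STAT authors used, so the sketch below shows one conventional approach on toy data; the function name and the 27% group fraction are assumptions.

```python
import numpy as np

def item_statistics(responses, group_fraction=0.27):
    """Classical item difficulty and discrimination indices.

    `responses` is an (examinees x items) 0/1 matrix of correct answers.
    Difficulty = proportion answering each item correctly; discrimination =
    difference in that proportion between the top and bottom scoring groups.
    """
    responses = np.asarray(responses)
    totals = responses.sum(axis=1)
    n_group = max(1, int(round(group_fraction * len(responses))))
    order = np.argsort(totals)
    low, high = responses[order[:n_group]], responses[order[-n_group:]]
    difficulty = responses.mean(axis=0)
    discrimination = high.mean(axis=0) - low.mean(axis=0)
    return difficulty, discrimination

# Toy data: 10 examinees x 3 items.
rng = np.random.default_rng(1)
answers = (rng.random((10, 3)) > 0.4).astype(int)
print(item_statistics(answers))
```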
abstract_id: PUBMED:34687927
Development of a Preoperative Clinical Risk Assessment Tool for Postoperative Complications After Hysterectomy. Study Objective: To develop a preoperative risk assessment tool that quantifies the risk of postoperative complications within 30 days of hysterectomy.
Design: Retrospective analysis.
Setting: Michigan Surgical Quality Collaborative hospitals.
Patients: Women who underwent hysterectomy for gynecologic indications.
Interventions: Development of a nomogram to create a clinical risk assessment tool.
Measurements And Main Results: Postoperative complications within 30 days were the primary outcome. Bivariate analysis was performed comparing women who had a complication and those who did not. The patient registry was randomly divided. A logistic regression model developed and validated from the Collaborative database was externally validated with hysterectomy cases from the National Surgical Quality Improvement Program, and a nomogram was developed to create a clinical risk assessment tool. Of the 41,147 included women, the overall postoperative complication rate was 3.98% (n = 1638). Preoperative factors associated with postoperative complications were sepsis (odds ratio [OR] 7.98; confidence interval [CI], 1.98-32.20), abdominal approach (OR 2.27; 95% CI, 1.70-3.05), dependent functional status (OR 2.20; 95% CI, 1.34-3.62), bleeding disorder (OR 2.10; 95% CI, 1.37-3.21), diabetes with HbA1c ≥9% (OR 1.93; 95% CI, 1.16-3.24), gynecologic cancer (OR 1.86; 95% CI, 1.49-2.31), blood transfusion (OR 1.84; 95% CI, 1.15-2.96), American Society of Anesthesiologists Physical Status Classification System class ≥3 (OR 1.46; 95% CI, 1.24-1.73), government insurance (OR 1.3; 95% CI, 1.40-1.90), and body mass index ≥40 (OR 1.25; 95% CI, 1.04-1.50). Model discrimination was consistent in the derivation, internal validation, and external validation cohorts (C-statistics 0.68, 0.69, 0.68, respectively).
Conclusion: We validated a preoperative clinical risk assessment tool to predict postoperative complications within 30 days of hysterectomy. Modifiable risk factors identified were preoperative blood transfusion, poor glycemic control, and open abdominal surgery.
abstract_id: PUBMED:35990312
Development of a Cardiac Anesthesia Tool (CAT) to Predict Intensive Care Unit (ICU) Admission for Pediatric Cardiac Patients Undergoing Non-cardiac Surgery: A Retrospective Cohort Study. Background And Aim: of the work: Pediatric cardiac patients often undergo non-cardiac surgical procedures and many of these patients would require intensive care unit admission, but can we predict the need for ICU admission in pediatric cardiac patients undergoing non-cardiac procedures. Numerous preoperative and intraoperative variables were strongly associated with ICU admission. Given the variations in the underlying cardiac physiology and the diversity of noncardiac surgical procedures along with the scarce predictive clinical tools, we aimed to develop a simple and practical tool to predict the need for ICU admission in pediatric cardiac patients undergoing non-cardiac procedures.
Material And Methods: This is a retrospective study, where all files of pediatric cardiac patients who underwent noncardiac surgical procedures from January 1, 2015, to December 31, 2019, were reviewed. We retrieved details of the preoperative and intraoperative variables including age, weight, comorbid conditions, and underlying cardiac physiology. The primary outcome was the need for ICU admission. We performed multiple logistic regression analyses and analyses of the area under receiver operating characteristics (ROC) curves to develop a predictive tool.
Results: In total, 519 patients were included. The mean age and weight were 4.6 ± 3.4 years and 16 ± 13 kg, respectively. A small proportion (n = 90, 17%) required ICU admission. There was a statistically strong association between ICU admission and each of the following: American Society of Anesthesiologists physical status (ASA-PS) class III and IV, difficult intubation, operative time of more than 2 hours, requirement for transfusion, and failure of a deliberately planned extubation. Additional analysis was done to develop a Cardiac Anesthesia Tool (CAT) based on the weight of each variable derived from its regression coefficient. The CAT is composed of the ASA-PS, operative time, requirement for transfusion, difficult intubation, and failure of a deliberately planned extubation. The minimum score is zero and the maximum is eight. The probability of ICU admission is proportional to the score.
Conclusion: The CAT is a practical and simple clinical tool to predict the need for ICU admission based on a simple additive score. We propose using this tool for pediatric cardiac patients undergoing non-cardiac procedures.
abstract_id: PUBMED:34404449
How well do critical care audit and feedback interventions adhere to best practice? Development and application of the REFLECT-52 evaluation tool. Background: Healthcare Audit and Feedback (A&F) interventions have been shown to be an effective means of changing healthcare professional behavior, but work is required to optimize them, as evidence suggests that A&F interventions are not improving over time. Recent published guidance has suggested an initial set of best practices that may help to increase intervention effectiveness, which focus on the "Nature of the desired action," "Nature of the data available for feedback," "Feedback display," and "Delivering the feedback intervention." We aimed to develop a generalizable evaluation tool that can be used to assess whether A&F interventions conform to these suggestions for best practice and conducted initial testing of the tool through application to a sample of critical care A&F interventions.
Methods: We used a consensus-based approach to develop an evaluation tool from published guidance and subsequently applied the tool to conduct a secondary analysis of A&F interventions. To start, the 15 suggestions for improved feedback interventions published by Brehaut et al. were deconstructed into rateable items. Items were developed through iterative consensus meetings among researchers. These items were then piloted on 12 A&F studies (two reviewers met for consensus each time after independently applying the tool to four A&F intervention studies). After each consensus meeting, items were modified to improve clarity and specificity, and to help increase the reliability between coders. We then assessed the conformity to best practices of 17 critical care A&F interventions, sourced from a systematic review of A&F interventions on provider ordering of laboratory tests and transfusions in the critical care setting. Data for each criteria item was extracted by one coder and confirmed by a second; results were then aggregated and presented graphically or in a table and described narratively.
Results: In total, 52 criteria items were developed (38 ratable items and 14 descriptive items). Eight studies targeted lab test ordering behaviors, and 10 studies targeted blood transfusion ordering. Items focused on specifying the "Nature of the Desired Action" were adhered to most commonly-feedback was often presented in the context of an external priority (13/17), showed or described a discrepancy in performance (14/17), and in all cases it was reasonable for the recipients to be responsible for the change in behavior (17/17). Items focused on the "Nature of the Data Available for Feedback" were adhered to less often-only some interventions provided individual (5/17) or patient-level data (5/17), and few included aspirational comparators (2/17), or justifications for specificity of feedback (4/17), choice of comparator (0/9) or the interval between reports (3/13). Items focused on the "Nature of the Feedback Display" were reported poorly-just under half of interventions reported providing feedback in more than one way (8/17) and interventions rarely included pilot-testing of the feedback (1/17 unclear) or presentation of a visual display and summary message in close proximity of each other (1/13). Items focused on "Delivering the Feedback Intervention" were also poorly reported-feedback rarely reported use of barrier/enabler assessments (0/17), involved target members in the development of the feedback (0/17), or involved explicit design to be received and discussed in a social context (3/17); however, most interventions clearly indicated who was providing the feedback (11/17), involved a facilitator (8/12) or involved engaging in self-assessment around the target behavior prior to receipt of feedback (12/17).
Conclusions: Many of the theory-informed best practice items were not consistently applied in critical care and can suggest clear ways to improve interventions. Standardized reporting of detailed intervention descriptions and feedback templates may also help to further advance research in this field. The 52-item tool can serve as a basis for reliably assessing concordance with best practice guidance in existing A&F interventions trialed in other healthcare settings, and could be used to inform future A&F intervention development.
Trial Registration: Not applicable.
abstract_id: PUBMED:24842177
Perceptions about blood transfusion: a survey of surgical patients and their anesthesiologists and surgeons. Background: Although blood transfusion is a common therapeutic intervention and a mainstay of treating surgical blood loss, it may be perceived by patients and their physicians as having associated risk of adverse events. Practicing patient-centered care necessitates that clinicians have an understanding of an individual patient's perceptions of transfusion practice and incorporate this into shared medical decision-making.
Methods: A paper survey was completed by patients during routine outpatient preoperative evaluation. An online survey was completed by attending anesthesiologists and surgeons at the same institution. Both surveys evaluated perceptions of the overall risk of transfusions, level of concern regarding 5 specific adverse events with transfusion, and perceptions of the frequency of those adverse events. Group differences were evaluated with conventional inferential biostatistics.
Results: A total of 294 patients and 73 physicians completed the surveys. Among the surveyed patients, 20% (95% confidence interval, 15%-25%) perceived blood transfusions as "very often risky" or "always risky." Greater perceived overall blood transfusion risk was associated with African American race (P = 0.028) and having a high school or less level of education (P = 0.022). Greater perceived risk of allergic reaction (P = 0.001), fever (P < 0.001), and dyspnea (P = 0.001) were associated with African American race. Greater perceived risk of allergic reaction (P = 0.009), fever (P = 0.039), dyspnea (P = 0.004), human immunodeficiency virus/acquired immune deficiency syndrome and hepatitis (P = 0.003), and medical error (P = 0.039) were associated with having a high school or less level of education. Patients and physicians also differed significantly in their survey responses, with physicians reporting greater overall perceived risk with a blood transfusion (P = 0.001).
Conclusions: Despite improvements in blood transfusion safety in the United States and other developed countries, the results of this study indicate that a sizeable percentage of patients still perceive transfusion as having significant associated risk. Furthermore, patients and their anesthesiologists/surgeons differ in their perceptions about transfusion-related risks and complications. Understanding patients' perceptions of blood transfusion and identifying groups with the greater specific concerns will better enable health care professionals to address risk during the informed consent process and recommend blood management in accordance with the individual patient's values, beliefs, and fears or concerns.
abstract_id: PUBMED:26529138
Perceived blood transfusion safety: a cross-European comparison. Background And Objectives: During the past decades, blood transfusions have become an ever safer clinical procedure in developed countries. Extensive donor screening together with improved infectious disease testing has led to a minimization of risks for transfusion recipients. Still, the general public perceives the process of blood transfusion as risky.
Materials And Methods: This study tested variation in perceived transfusion safety across countries and explained it with individual and country factors. We examined whether individual demographic and macro-level factors (i.e. Human Development Index and Power Distance Index) explain variation within and across European countries. We applied multilevel models to 2009 Eurobarometer data collected in 26 countries (N = 20 874).
Results: Results were largely in line with expectations derived from risk perception and power and status difference theories. Generally, women, older adults, the lower educated and those earning lower incomes perceived heightened risk. Most of the variation across Europe was explained by the Human Development Index. Risk perception regarding blood transfusions was lower in countries with higher Human Development Indices, that is countries with higher average education, life expectancy and Gross Domestic Product.
Conclusion: This study provides new insights into how risk perception regarding blood transfusions is shaped within and across Europe. Both individual demographic factors and country characteristics play a role.
Answer: The perceived hazards of transfusion are a concern for both patients and clinicians. A study aimed to evaluate the use of a tool prompting counselling behaviour for blood transfusion by assessing clinicians' self-reported counselling behaviours and changes in patients' beliefs about transfusion (PUBMED:22724532). The study used a mixed quantitative and qualitative methodology in two phases. In the first phase, clinicians' responses to a semi-structured questionnaire were analyzed to identify the content of discussions with patients about different aspects of receiving a blood transfusion. The content was coded using illness representation concepts from the Common Sense Self-Regulation Model. In the second phase, patients scheduled for elective surgery completed a questionnaire on their beliefs about transfusion before and after counselling.
The results showed that the most frequently coded illness representations targeted by clinicians using the tool were 'consequence of treatment' and 'cure/control'. After counselling with the checklist, patients were more likely to disagree with the statement that doctors relied too much on transfusion and more likely to agree that blood transfusion can result in new health problems. This indicates that the tool has the potential to influence patients' beliefs about transfusion. However, the study also highlighted the need for optimizing the use of the tool before undertaking a fully randomized evaluation of it.
In conclusion, a clinician tool designed to prompt counselling behaviour for blood transfusion has shown potential in helping patients understand the consequences and control aspects of blood transfusion, as well as influencing their beliefs about the risks associated with transfusion. Further optimization and evaluation of the tool are needed to fully assess its effectiveness in clinical practice (PUBMED:22724532). |
Instruction: Do PT and APTT sensitivities to factors' deficiencies calculated by the H47-A2 2008 CLSI guideline reflect the deficiencies found in plasmas from patients?
Abstracts:
abstract_id: PUBMED:26338156
Do PT and APTT sensitivities to factors' deficiencies calculated by the H47-A2 2008 CLSI guideline reflect the deficiencies found in plasmas from patients? Introduction: Prothrombin time (PT) and activated partial thromboplastin time (APTT) sensitivity for detecting isolated factor deficiencies varies with different reagents and coagulometers. The Clinical and Laboratory Standards Institute (CLSI) H47A2 guideline proposed a method to calculate these sensitivities, but some inconsistency has been reported. This study aimed to calculate factor sensitivities using CLSI guideline and to compare them with those obtained from single factor-deficient patients' data.
Methods: Different mixtures of normal pooled and deficient plasmas were prepared (<1 IU/dL to 100 IU/dL) according to the CLSI H47A2 guideline. PT with rabbit brain (RB) and human recombinant (HR) thromboplastins, APTT, and factor activities were measured on an ACL TOP coagulometer. Sensitivities (the maximum factor concentration that produces PT or APTT values outside the reference range) were calculated from the mixtures and from patients with single-factor deficiencies: 17 with FV deficiency, 36 with FVII, 19 with FX, 39 with FVIII, 15 with FIX, 15 with FXI and 24 with FXII.
Results: Sensitivities are given as the mixture-derived and the patient-derived value, respectively. PT sensitivity with RB was as follows: FV 38 and 59, FVII 35 and 58, FX 56 and 64 IU/dL; PT sensitivity with HR was as follows: FV 39 and 45, FVII 51 and 50, FX 33 and 61 IU/dL; and APTT sensitivity was as follows: FV 39 and 45, FX 32 and 38, FVIII 47 and 60, FIX 35 and 44, FXI 33 and 43, FXII 37 and 46 IU/dL.
Conclusions: Reagent-coagulometer combination has adequate sensitivities to factor deficiencies according to guideline recommendations (>30 IU/dL). These should not be considered as actual sensitivities because those obtained by analysing patients' plasmas with single-factor deficiencies were higher for most factors and could induce misinterpretation of the basic coagulation test results.
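A minimal illustrative sketch (Python) may help make the sensitivity definition used in the abstract above concrete. It is not taken from the study: the function name, the plasma-mixture values, and the reference-range cut-off below are all hypothetical, and a real determination would use measured clotting times and a locally established reference range.

# Hedged sketch of the CLSI-style sensitivity calculation described above:
# the sensitivity is the highest factor concentration (IU/dL) at which the
# PT or APTT still falls outside (above) the upper reference limit.
# All values are hypothetical.
def factor_sensitivity(mixtures, upper_reference_limit):
    """Return the highest factor concentration whose clotting time is still
    above the upper limit of the reference range, or None if all are normal."""
    abnormal = [conc for conc, clotting_time in mixtures
                if clotting_time > upper_reference_limit]
    return max(abnormal) if abnormal else None

# Hypothetical FVIII mixture series: (factor concentration in IU/dL, APTT in seconds)
fviii_mixtures = [(1, 62.0), (10, 48.5), (25, 39.2), (50, 31.0), (75, 28.4), (100, 27.1)]
print(factor_sensitivity(fviii_mixtures, upper_reference_limit=29.3))  # prints 50

As the abstract above notes, sensitivities derived this way from plasma mixtures can differ from those derived from patients' plasmas, so the sketch illustrates only the mixture-based calculation.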
abstract_id: PUBMED:27384253
Point-of-care PT and aPTT in patients with suspected deficiencies of coagulation factors. Introduction: In several clinical settings and patient conditions, especially in intensive care units, emergency departments, and operating theaters, the coagulation status of a patient must be known immediately; point-of-care (POC) systems are beneficial in these situations because of their short time to result.
Methods: This noninterventional, single-blinded, multicenter study with prospectively collected whole blood samples was performed to evaluate the diagnostic accuracy of the CoaguChek PT Test (POC PT) and CoaguChek aPTT Test (POC aPTT) compared to standard laboratory testing in patients with suspected deficiencies of coagulation factors.
Results: In total, 390 subjects were included. Both POC PT and POC aPTT showed concordance with the laboratory PT and aPTT. Lot-to-lot variation was below 2% both for POC PT and for POC aPTT. The mean relative difference of capillary blood compared to venous blood was 0.2% with POC PT and 8.4% with POC aPTT. The coefficients of variation for repeatability of POC PT using whole blood were found to be between 2% and 3.6%.
Conclusion: Our findings suggest reliable quantitative results with this POC system to support on-site decision-making for patients with suspected deficiencies of coagulation factors in acute and intensive care.
abstract_id: PUBMED:23718922
Determination of APTT factor sensitivity--the misguiding guideline. Introduction: The Clinical and Laboratory Standards Institute (CLSI) has produced a guideline detailing how to determine the activated partial thromboplastin time's (APTT) sensitivity to clotting factor deficiencies, by mixing normal and deficient plasmas. Using the guideline, we determined the factor sensitivity of two APTT reagents.
Methods: APTTs were performed using Actin FS and Actin FSL on a Sysmex CS-5100 analyser. The quality of factor-deficient and reference plasmas from three commercial sources was assessed by assaying each of the clotting factors within the plasmas and by performing thrombin generation tests (TGT).
Results: Testing samples from 50 normal healthy subjects gave a two-standard deviation range of 21.8-29.2 s for Actin FS and 23.5-29.3 s for Actin FSL. The upper limits of these ranges were subsequently used to determine APTT factor sensitivity. Assay of factor levels within the deficient plasmas demonstrated that they were specifically deficient in a single factor, with most other factors in the range 50-150 iu/dL (Technoclone factor VII-deficient plasma has 26 iu/dL factor IX). APTTs performed on mixtures of normal and deficient plasmas gave diverse sensitivity to factor deficiencies dependent on the sources of deficient plasma. TGT studies on the deficient plasmas revealed that the potential to generate thrombin was not solely associated with the levels of their component clotting factors.
Conclusion: Determination of APTT factor sensitivity in accordance with the CLSI guideline can give inconsistent and misleading results.
abstract_id: PUBMED:22917038
Investigation of coagulation time: PT and APTT. The first case report describes an extremely prolonged activated partial thromboplastin time (APTT) in a patient with no history of increased bleeding tendency. Heparin use was excluded. The APTT mixing study combined with the medical history suggested a deficiency in one of the non-essential coagulation factors; this was confirmed by a factor XII activity of <1%. The second case report describes a prolonged APTT in a patient with no history of increased bleeding tendency. The absence of a bleeding tendency, combined with failure of the mixing study to correct the coagulation assay results, suggests a factor inhibitor, most probably a lupus anticoagulant. Indeed, the lupus anticoagulant was positive and the anti-cardiolipin antibody titre was also positive. Aberrations in the process of haemostasis can be efficiently screened for using a platelet count, an APTT, a PT, and a thorough physical examination combined with thorough medical history taking. Common causes of prolonged PT and/or APTT are the use of oral anticoagulants or heparin, vitamin K deficiency and liver disease. Other causes include coagulation factor deficiencies, coagulation factor inhibitors and disseminated intravascular coagulation.
abstract_id: PUBMED:32753955
Prediction of Poor Outcomes in Patients with Colorectal Cancer: Elevated Preoperative Prothrombin Time (PT) and Activated Partial Thromboplastin Time (APTT). Background And Objective: Tools for the non-invasive assessment of colorectal cancer (CRC) prognosis have profound significance. Although plasma coagulation tests have been investigated in a variety of tumours, the prognostic value of the prothrombin time (PT) and activated partial thromboplastin time (APTT) in CRC has not been discussed. Our study objective was to explore the prognostic significance of preoperative PT and APTT in CRC patients.
Patients And Methods: A retrospective analysis of preoperative coagulation indexes including PT, PTA, INR, APTT, FIB, TT, PLT, NLR and PLR in 250 patients with CRC was performed. Kaplan-Meier and multivariate Cox regression analysis were used to demonstrate the prognostic value of these preoperative coagulation indexes.
Results: The overall survival (OS, p<0.05) and disease-free survival (DFS, p<0.05) of CRC patients with lower PT and APTT levels were significantly prolonged. Based on univariate analysis, PT levels (p<0.001, p<0.001), PTA levels (p=0.001, p=0.001), APTT levels (p=0.001, p<0.001), INR levels (p<0.001, p<0.001), fibrinogen levels (p=0.032, p=0.036), tumour status (p=0.005, p=0.003), nodal status (p<0.001, p<0.001), metastasis status (p<0.001, p<0.001) and TNM stages (p<0.001, p<0.001) were remarkably associated with DFS and OS. Multivariate Cox regression analysis suggested that the levels of PT (HR: 2.699, p=0.006) and APTT (HR: 1.942, p=0.015), metastasis status (HR: 2.091, p= 0.015) and TNM stage (HR: 7.086, p=0.006) were independent predictors of survival in CRC. In the whole cohort, the enrolled patients were then divided into three groups according to their PT and APTT levels. The OS and DFS differed notably among the low-risk (PT<11.85 sec and APTT<25.85 sec), medium-risk (PT≥11.85 sec or APTT≥25.85 sec), and high-risk (PT≥11.85 sec and APTT≥25.85 sec) groups.
Conclusion: Elevated levels of preoperative PT and APTT were predictors of poor outcomes in CRC patients. Moreover, the combination of preoperative PT and APTT can be a new prognostic stratification approach for more precise clinical staging of CRC.
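For readers who want the three-group stratification reported in the Results above in explicit form, the following minimal Python sketch reproduces it. The cut-offs (PT 11.85 sec and APTT 25.85 sec) come from the abstract; the function name and the example values are hypothetical, and the sketch is illustrative only, not a validated clinical tool.

# Hedged sketch of the PT/APTT risk grouping reported in the abstract above.
def pt_aptt_risk_group(pt_seconds, aptt_seconds, pt_cutoff=11.85, aptt_cutoff=25.85):
    high_pt = pt_seconds >= pt_cutoff
    high_aptt = aptt_seconds >= aptt_cutoff
    if high_pt and high_aptt:
        return "high risk"      # both PT and APTT at or above the cut-offs
    if high_pt or high_aptt:
        return "medium risk"    # exactly one of the two elevated
    return "low risk"           # both below the cut-offs

print(pt_aptt_risk_group(11.2, 24.9))  # low risk
print(pt_aptt_risk_group(12.3, 24.9))  # medium risk
print(pt_aptt_risk_group(12.3, 26.4))  # high risk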
abstract_id: PUBMED:8356955
Paradoxic effect of multiple mild coagulation factor deficiencies on the prothrombin time and activated partial thromboplastin time. Single coagulation factor deficiencies predictably prolong the prothrombin time (PT) and activated partial thromboplastin time (APTT) at levels below 35% of normal activity. Acquired coagulopathies generally are characterized by multiple coagulation factor deficiencies. The effect of such combined deficiencies on the PT/APTT was studied using plasma from patients congenitally deficient in specific factors and pooled normal plasma. The PT begins to lengthen when individual factor levels fall below 25%. The APTT becomes prolonged when the levels of Factor V fall below 45%; the levels of Factors II and XI fall below 40%; and the levels of Factors I, V, VII, VIII, IX, and XII fall below 25% of normal. When plasma samples containing 50% activity of a single factor and 100% of all other factors were prepared by mixing the congenitally deficient plasma samples with the normal pool, the resulting mixtures had normal PT and APTT values. However, when two of these 50% factor-deficient plasmas were combined so that the mixture contained 75% activity of two coagulation factors and 100% of all other factors, the resulting PT and APTT were prolonged relative to the clotting times of either 50% factor-deficient plasma. Similar findings were obtained in patients with mild factor reductions caused by warfarin treatment. These data indicate that prolongations of the PT and APTT in disorders of coagulation affecting multiple factors represent less of a reduction in factor levels than is generally appreciated. This may explain the poor clinical correlation between abnormalities in these test results and clinical bleeding in acquired disorders of hemostasis.
abstract_id: PUBMED:30828165
Comparison of Rapid Centrifugation Technique with Conventional Centrifugation for Prothrombin Time (PT) and Activated Partial Thromboplastin Time (APTT) Testing. Prothrombin time (PT) and activated partial thromboplastin time (APTT) are frequently performed coagulation tests in patients with coagulation disorders, especially in critical care areas, and in monitoring patients on anticoagulation therapy. In coagulation testing, sample processing, especially centrifugation, is one of the most critical steps affecting turnaround time (TAT). This study was carried out over a period of 1 year. Three hundred paired samples from patients sent for PT and APTT estimation were included. One sample was centrifuged in a regular bench-top centrifuge at 1500g for 20 min. The other sample was divided into two polypropylene aliquots and centrifuged in a microcentrifuge at 13000g for 3 min. The plasma obtained from both methods was tested for PT and APTT by the automated method on the STA Compact coagulometer (Stago), using the commercial thromboplastin STAR-NeoplastineR C1 Plus and the phospholipid (cephalin) STAR-C K PRESTR 5, respectively. Data were analyzed using descriptive statistics, Student's t test, correlation coefficients and Bland-Altman plots. Mean PT, INR and APTT for both centrifugation methods were comparable, with no statistically significant difference (p > 0.05). PT, INR and APTT also showed good correlation (r > 0.98) when compared between the two methods of centrifugation. Bland-Altman comparison between rapid and conventional methods of centrifugation for PT, INR and APTT also showed acceptable agreement. The rapid centrifugation technique for routine coagulation testing can be used safely, with a significant reduction in TAT. This can benefit patients in critical care settings and those on outpatient oral anticoagulant therapy.
abstract_id: PUBMED:15883452
Micronutrient deficiencies and gender: social and economic costs. Vitamin and mineral deficiencies adversely affect a third of the world's people. Consequently, a series of global goals and a serious amount of donor and national resources have been directed at such micronutrient deficiencies. Drawing on the extensive experience of the authors in a variety of institutional settings, the article used a computer search of the published scientific literature of the topic, supplemented by reports and published and unpublished work from the various agencies. In examining the effect of sex on the economic and social costs of micronutrient deficiencies, the paper found that: (1) micronutrient deficiencies affect global health outcomes; (2) micronutrient deficiencies incur substantial economic costs; (3) health and nutrition outcomes are affected by sex; (4) micronutrient deficiencies are affected by sex, but this is often culturally specific; and finally, (5) the social and economic costs of micronutrient deficiencies, with particular reference to women and female adolescents and children, are likely to be considerable but are not well quantified. Given the potential impact on reducing infant and child mortality, reducing maternal mortality, and enhancing neuro-intellectual development and growth, the right of women and children to adequate food and nutrition should more explicitly reflect their special requirements in terms of micronutrients. The positive impact of alleviating micronutrient malnutrition on physical activity, education and productivity, and hence on national economies suggests that there is also an urgent need for increased effort to demonstrate the cost of these deficiencies, as well as the benefits of addressing them, especially compared with other health and nutrition interventions.
abstract_id: PUBMED:35150232
An Analysis of the Sensitivity of the Activated Partial Thromboplastin Time (APTT) Assay, as Used in a Large Laboratory Network, to Coagulation Factor Deficiencies. Objectives: To advance knowledge in using the ex vivo method to identify factor sensitivity of the activated partial thromboplastin time (APTT), using data from a hemophilia and reference hemostasis laboratory; to evaluate application of inclusion and exclusion criteria to eliminate data outliers; and to discuss outcomes with reference to comparable studies.
Methods: An ex vivo, retrospective analysis was performed on patient samples with conjointly ordered APTT and intrinsic pathway factors (VIII, IX, XI, XII) for application to a large network of laboratories. The relationship between factor levels and APTT, before and after application of exclusion criteria, is demonstrated.
Results: Curvilinear relationships were found between all factor levels and APTTs, which demonstrated both similarities and differences with available studies. Factor sensitivity data are presented. Study strengths include large sample size and use of real-world data. Limitations include inability to exclude all residual outliers and paucity of patient samples singularly deficient in factors other than FVIII.
Conclusions: This ex vivo, retrospective analysis of the sensitivity of the APTT assay to intrinsic pathway factor deficiencies using real-world data from a hemophilia and reference hemostasis laboratory contains the largest sample size using this approach to date. The outcomes assist in informing practice in this area and can be used as a reference for further studies.
abstract_id: PUBMED:38280831
Prevalence and determinants of micronutrient deficiencies in malnourished older hospitalized patients. Background And Objective: Malnutrition and micronutrient deficiencies represent significant concerns in geriatric care, leading to adverse health outcomes in older adults. The study aimed to investigate the prevalence and determinants of micronutrient deficiencies in malnourished older hospitalized patients.
Design And Setting: This prospective, observational study was conducted in a geriatric acute care unit.
Participants: The study included 156 malnourished older adults.
Measurements: Malnutrition was identified using the Mini Nutritional Assessment-Short Form. Micronutrient status was assessed through serum analysis of vitamins (A, B1, B6, B12, C, D, E, H, K, folic acid) and minerals (iron, zinc, copper, selenium) within 24 h post-admission.
Results: The average patient age was 82.3 ± 7.5 years, with 69% female. The results revealed a high prevalence of micronutrient deficiencies, with 90% of patients exhibiting deficiencies in three or more micronutrients. Notably, every patient presented at least one micronutrient deficiency. Common deficiencies were found in vitamins C (75%), D (65%), H (61%), and K (45%), as well as folic acid (37%), iron (31%), zinc (36%) and selenium (35%). In binary regression analysis, the amount of previous weight loss was significantly associated with a higher prevalence of multiple (>2) micronutrient deficiencies (P = 0.045). Other variables such as age (P = 0.449), gender (P = 0.252), BMI (P = 0.265) and MNA-SF score (P = 0.200) did not show any significant association with the prevalence of multiple micronutrient deficiencies.
Conclusion: The high prevalence of micronutrient deficiencies in malnourished older hospitalized patients underscore the urgent need for targeted interventions to address micronutrient deficiencies in this population, promoting their health status.
Answer: The PT and APTT sensitivities to factor deficiencies calculated by the H47-A2 2008 CLSI guideline do not necessarily reflect the deficiencies found in plasmas from patients. A study that calculated factor sensitivities using the CLSI guideline and compared them with those obtained from single factor-deficient patients' data found that the reagent-coagulometer combination had adequate sensitivities to factor deficiencies according to guideline recommendations (>30 IU/dL). However, these should not be considered actual sensitivities, because those obtained by analyzing patients' plasmas with single-factor deficiencies were higher for most factors and could induce misinterpretation of the basic coagulation test results (PUBMED:26338156).
Another study that determined APTT factor sensitivity in accordance with the CLSI guideline reported inconsistent and misleading results. The study found that APTTs performed on mixtures of normal and deficient plasmas gave diverse sensitivities to factor deficiencies depending on the source of the deficient plasma. The potential to generate thrombin was not solely associated with the levels of the component clotting factors, indicating that the guideline may not provide accurate sensitivity measures (PUBMED:23718922).
These findings suggest that while the CLSI guideline provides a method to calculate PT and APTT sensitivities, the actual sensitivities observed in patient plasmas can differ, potentially leading to misinterpretation of test results. Therefore, the sensitivities calculated by the H47-A2 2008 CLSI guideline do not always accurately reflect the factor deficiencies found in patients' plasmas. |
Instruction: Does forgiveness mediate the impact of school bullying on adolescent mental health?
Abstracts:
abstract_id: PUBMED:25531584
Does forgiveness mediate the impact of school bullying on adolescent mental health? Objective: The link between both bullying and victimisation and psychopathology has been well established. Forgiveness has been associated with better mental health. However, few studies have examined the relationship between adolescent forgiveness, psychopathology and bullying/victimisation. This study investigated forgiveness as a mediator of the adverse mental health problems experienced by bullies and victims of bullying.
Method: Participants were 355 Year 10 or Year 11 pupils (mean age = 14.9 years) from two British secondary schools in 2007, who completed self-administered measures on bullying and victimisation, mental health, forgiveness of self and others, and forgivingness. The mediating influence of forgiveness on the impact of bullying/victimisation on mental health was tested with a structural equation model.
Results: Data from 55.6% of the 639 eligible pupils were analysed. Results confirmed an association between bullying/victimisation, forgiveness and psychopathology. Forgiveness scores were found to play a mediating role between bullying/victimisation and psychopathology.
Conclusions: Victimised adolescents who were better able to forgive themselves were more likely to report lower levels of psychopathology, while bullying adolescents who were unable to forgive others were more likely to report higher levels of psychopathology. This suggests a greater role for forgiveness within future research, intervention and policy on bullying. Forgiveness can form a valuable part of preventative and educational anti-bullying programmes.
abstract_id: PUBMED:37753538
Adolescent mental health in Japan and Russia: The role of body image, bullying victimisation and school environment. This study examined associations between self-reported mental health problems, body image, bullying victimisation and school safety in large adolescent samples in Japan and Russia, considering the effects of gender, culture and their interactions. In both Japan and Russia, girls reported a greater number of mental health problems, less bullying victimisation and much higher body dissatisfaction than boys did. Japanese adolescents rated themselves higher on total difficulties, reported less body dissatisfaction and bullying victimisation, and rated their school safety lower than Russian youths did. Cross-cultural differences in total difficulties and body image were qualified by gender. Body dissatisfaction, bullying victimisation and school safety all independently contributed to adolescent mental health problems. The protective effect of school safety on total difficulties was larger for girls than for boys; the strength of the association between bullying victimisation and adolescent mental health problems differed across genders and cultures. The findings indicate a need for a cross-cultural approach and provide a strong basis for targeted interventions that seek to improve adolescent mental health.
abstract_id: PUBMED:30373296
Traditional Bullying, Cyberbullying and Mental Health in Early Adolescents: Forgiveness as a Protective Factor of Peer Victimisation. Traditional and online bullying are prevalent throughout adolescence. Given their negative consequences, it is necessary to seek protective factors that reduce or even prevent their detrimental effects on the mental health of adolescents before they become chronic. Previous studies have demonstrated the protective role of forgiveness in mental health after several transgressions. This study assessed whether forgiveness moderated the effects of bullying victimisation and cybervictimisation on mental health in a sample of 1044 early adolescents (527 females; M = 13.09 years; SD = 0.77). Participants completed a questionnaire battery measuring both forms of bullying victimisation, suicidal thoughts and behaviours, satisfaction with life, and forgiveness. Consistent with a growing body of research, results reveal that forgiveness is a protective factor against the detrimental effects of both forms of bullying. Among more victimised and cybervictimised adolescents, those with high levels of forgiveness reported significantly higher levels of satisfaction with life than those with low levels of forgiveness. Likewise, those reporting traditional victimisation and higher levels of forgiveness showed lower levels of suicidal risk. Our findings contribute to an emerging relationship between forgiveness after bullying and indicators of mental health, providing new areas for research and intervention.
abstract_id: PUBMED:35219979
Relationship between school bullying and mental health status of adolescent students in China: A nationwide cross-sectional study. Introduction: School bullying, as a public health problem, has been linked to many emotional disorders. However, the overall status of school bullying among adolescent students in China is unknown. This nationwide study aimed to investigate school bullying in China and evaluate the relationships between school bullying and mental health status.
Methods: A total of 15,415 middle and high school students were enrolled in this study through multistage stratified cluster random sampling. Multinomial logistic regression models were used to examine the association between school bullying and mental health status, and the analysis was stratified by gender.
Results: Students were divided into four groups: 2.72%, bully/victims; 1.38%, bullies; 10.89%, victims; 85.01%, uninvolved. Compared with uninvolved students, students with anxiety symptoms, non-suicidal self-injury and suicide ideation had a higher risk of being involved in school bullying and were more likely to be bully/victims, bullies, and victims. Stratified analysis indicated that boys with anxiety symptoms and non-suicidal self-injury risks tended to be bullies, victims and bully/victims. However, for girls, bullying others or being bullied was related to anxiety symptoms and suicide ideation.
Conclusion: Our study indicated that school bullying is still a health problem among adolescent students in China and is related to many mental health problems. Intervention programs are urgently needed to help students involved in school bullying, in terms of both their mental and physical health.
abstract_id: PUBMED:26101439
Forgiveness Reduces Anger in a School Bullying Context. Forgiveness has been shown to be a helpful strategy for victims of many different forms of abuse and trauma. It has also been theoretically linked to positive outcomes for victims of bullying. However, it has never been experimentally manipulated in a school bullying context. This research investigates an experimental manipulation providing children with response advice following a bullying incident. Children read hypothetical physical and verbal bullying scenarios, followed by advice from a friend to either respond with forgiveness, avoidance, or revenge, in a within-subjects repeated measures design. One hundred eighty-four children aged 11 to 15 from private schools in Sydney participated in this study. Results indicated that advice to forgive the perpetrator led to significantly less anger than advice to either avoid or exact revenge. Avoidance was the most likely advice to be followed by students and the most likely to result in ignoring the bullying and developing empathy for their abuser. However, it also resulted in interpretations of the bullying as being more serious. Forgiveness is suggested as an effective coping response for ameliorating the affective aggressive states of victimized youth, with further exploration needed regarding the interplay between the avoidance and forgiveness processes.
abstract_id: PUBMED:30353845
Forgiveness and friendship protect adolescent victims of bullying from emotional maladjustment. Background: Adolescent victims of bullying often present high levels of maladjustment, such as depression, anxiety, and the inability to manage anger. Both forgiveness and friendship have been found to be moderating agents for the debilitating psychological effects seen in the victims of bullying. Our aim was to explore the roles of forgiveness and friendship in the psychological adjustment of victimised youths.
Method: The sample was composed of 2,105 adolescents (age range 13-20) recruited from central and southern Italy. We collected information on bullying, forgiveness, friendship, depression, anxiety and anger.
Results: We found that more victimisation and not having a best friend had an additive effect on maladjustment. Moreover, adolescents who scored lower in forgiveness were more likely to be depressed and angry.
Discussion: Our data provide confirmation that forgiveness is a protective factor for Italian adolescents, as is friendship, although they do not operate as interactive protective factors. Given that forgiveness is so significantly associated with wellbeing and the fact that it can be taught and enhanced in both clinical and school settings, it would be worthwhile to include work on forgiveness in prevention and treatment programmes.
abstract_id: PUBMED:30885261
Impact of parent-adolescent bonding on school bullying and mental health in Vietnamese cultural setting: evidence from the global school-based health survey. Background: The mental well-being of adolescents is a crucial issue affecting lives of both adults and young people. Bullying and mental health problems are important factors that can have a negative impact on the mental well-being of adolescents. Public awareness of mental health problems among adolescents is rapidly growing in Vietnam. However, current approaches to identifying risk factors influencing mental health problems do not pay attention to potentially protective factors. This study was performed to examine the associations between parent-adolescent bonding and mental health outcomes as protective elements during the adolescent period.
Methods: Data collected from 3331 respondents in grade 8-12 as part of the Vietnam Global School-based Student Health Survey (GSHS) 2013 was used for the analysis. A three-stage cluster sample design was used to produce data representative of students. Multivariate logistic regression analysis was performed to examine the association of demographic characteristics and data regarding parent-adolescent bonding associations with status of mental health problems in adolescents.
Results: Parental understanding and parental monitoring were significantly associated with a reduced likelihood of being bullied and of mental health problems (P < 0.05). However, parental control was significantly associated with greater likelihoods of being physically attacked (adjusted odds ratio (aOR) = 1.36, 95% CI, 1.06, 1.75) and of mental health problems, such as suicidal ideation and loneliness (aOR = 1.96, 95% CI, 1.49, 2.57 and aOR = 2.35, 95% CI, 1.75, 3.15, respectively), after adjusting for potential confounders.
Conclusions: The study indicated significant associations between parental understanding, monitoring and control, as a proxy for parent-adolescent bonding, and mental well-being during the period of adolescent rebellion. Thus, parent-adolescent bonding in the Southeast Asian cultural context may provide an effective means to promote the mental well-being of adolescents.
abstract_id: PUBMED:31434555
The Relationship Between Forgiveness, Bullying, and Cyberbullying in Adolescence: A Systematic Review. The study of bullying in adolescence has received increased attention over the past several decades. A growing body of research highlights the role of forgiveness and its association with aggression. In this article, we systematically review published studies on the association among online and traditional bullying and forgiveness in adolescents. Systematic searches were conducted in the PsycINFO, MEDLINE, PsycArticles, and Scopus databases. From a total of 1,093 studies, 637 were nonduplicated studies and 18 were eventually included. Together, these studies provided evidence that forgiveness and bullying behaviors are negatively related: adolescents with higher forgiveness levels bully less. Similarly, forgiveness is negatively related to victimization: adolescents with higher forgiveness show less victimization. Unforgiveness was positively related to traditional and online bullying. This relationship appears to be consistent across types of bullying, background characteristics, and forgiveness measures. These findings are discussed, and clinical implications and guidelines for future research are presented.
abstract_id: PUBMED:28438245
Bullying and the detrimental role of un-forgiveness in adolescents' wellbeing. Background: Many studies have shown that victimisation by bullies is linked with psychopathology. Research has also demonstrated that forgiveness is associated with the mental health of victims of bullying.
Method: Our objective was to explore the multiple components of forgiveness (i.e., benevolence, decreased avoidance of the perpetrator and diminished desire for revenge) as mediators of the negative mental health effects of bullying in Italian adolescents. Our hypothesis was that those who forgive their bullies would show lower levels of depression, state anger, and behaviour problems than those who did not forgive. Participants were 319 students ages 14 to 22 from two schools in Southern Italy who completed five self-report questionnaires measuring levels of victimisation, forgiveness, depression, anger, and total behaviour problems.
Results: The results varied according to the components of forgiveness: although benevolence toward the perpetrator was not a significant correlate of outcome, harbouring a desire for revenge and avoiding the perpetrator emerged as very maladaptive.
Conclusions: Our study indicates that is it important for those working with adolescents to help victims of bullying overcome the negative mental health effects of the victimisation by letting go of un-forgiveness.
abstract_id: PUBMED:35206559
Effects of Bullying Forms on Adolescent Mental Health and Protective Factors: A Global Cross-Regional Research Based on 65 Countries. Adolescent bullying is a public health issue of great global concern. Given the serious negative effect of bullying on adolescent mental health, it is critical to identify factors that protect the mental health of adolescents who experience bullying. From a global cross-regional perspective, the study aims to explore the relationship between forms of bullying and adolescent mental health and the role of parental support as a protective factor. Data were drawn from adolescents aged 12-17 years in 65 countries from the Global School-based Student Health Survey between 2003 and 2015. After controlling for state fixed effects, individual adolescent behavior, and family factors, an ordinary least squares model was used to analyze the influence of bullying frequency and forms of bullying on adolescent mental health. The results showed that the prevalence of bullying in the sample of 167,286 adolescents was 32.03%, with the highest prevalence of bullying in the sampled countries in Africa. Verbal bullying had the highest prevalence and the most significant negative effect on adolescent mental health. The study also discussed differences in bullying among adolescents by gender, age, and region. "Parental supervision", "parental connectedness" and "parental bonding" played a positive and protective role in the mental health of adolescents who experienced bullying.
Answer: Yes, forgiveness has been found to mediate the impact of school bullying on adolescent mental health. Studies have shown that forgiveness is associated with better mental health outcomes for adolescents who have experienced bullying. Specifically, victimized adolescents who were better able to forgive themselves reported lower levels of psychopathology, while bullying adolescents who were unable to forgive others reported higher levels of psychopathology (PUBMED:25531584). Forgiveness has been identified as a protective factor against the detrimental effects of both traditional and cyberbullying, with higher levels of forgiveness associated with higher levels of life satisfaction and lower levels of suicidal risk among more victimized adolescents (PUBMED:30373296). Additionally, forgiveness has been shown to reduce anger in a school bullying context, suggesting it as an effective coping response for victimized youth (PUBMED:26101439). Furthermore, forgiveness and friendship have been found to protect adolescent victims of bullying from emotional maladjustment, with lower forgiveness scores correlating with higher levels of depression and anger (PUBMED:30353845). Overall, the research indicates that forgiveness can play a valuable role in interventions and policies aimed at addressing the mental health impacts of bullying on adolescents. |
Instruction: Ventricular tachycardia in non-compaction of left ventricle: is this a frequent complication?
Abstracts:
abstract_id: PUBMED:32724738
Left Ventricle Non-Compaction Cardiomyopathy Admitted With Multiorgan Failure: A Case Report. Left ventricular non-compaction (LVNC) is a rare congenital cardiomyopathy characterized by thickened myocardium due to an arrest of the normal compaction of the embryonic sponge-like meshwork of myocardial fibers. We present a 40-year-old man with no known systemic illnesses admitted with cardiogenic shock and multiorgan failure. Echocardiography revealed severe enlargement of all four chambers with a left ventricular ejection fraction (LVEF) of <10%. Cardiac magnetic resonance imaging (CMR) showed hypertrabecular left ventricular myocardium with a ratio of non-compacted to compacted myocardium of 2.3, diffuse myocardial thinning, and a 16-mm left ventricular thrombus. These findings were compatible with LVNC. The patient was treated with intravenous inotropes and vasopressors for cardiogenic shock and with enoxaparin as bridging to warfarin with a goal INR of 2.0-3.0. Because of refractory heart failure (HF) and dependency on inotropic support, the patient was placed on the waiting list for heart transplantation. Unfortunately, 27 days after admission, he suffered a ventricular tachycardia arrest and did not respond to aggressive advanced cardiac life support measures. A high index of suspicion is required for early diagnosis, which in turn allows the physician to prevent complications of this condition. There is no specific therapy, so management is directed toward the clinical manifestations, including HF, arrhythmias, and systemic embolic events. Heart transplantation is the only definitive treatment.
abstract_id: PUBMED:24130428
Successful Right Ventricular Tachycardia Ablation in a Patient with Left Ventricular Non-compaction Cardiomyopathy. We report the case of a 67-year-old male with a recent diagnosis of left ventricular non-compaction (LVNC), initially presenting with symptomatic ventricular ectopy and runs of non-sustained ventricular tachycardia (VT). This ventricular arrhythmia originated in a structurally normal right ventricle (RV) and was successfully localized and ablated with the aid of three-dimensional mapping and remote magnetic navigation.
abstract_id: PUBMED:38017672
Clinical and genetic characteristics of catecholaminergic polymorphic ventricular tachycardia combined with left ventricular non-compaction. Background: Catecholaminergic polymorphic ventricular tachycardia is an ion channelopathy, caused by mutations in genes coding for calcium-handling proteins. It can coexist with left ventricular non-compaction. We aim to investigate the clinical and genetic characteristics of this co-phenotype.
Methods: Medical records of 24 patients diagnosed with catecholaminergic polymorphic ventricular tachycardia in two Chinese hospitals between September 2005 and January 2020 were retrospectively reviewed. We evaluated their clinical and genetic characteristics, including basic demographic data, electrocardiogram parameters, medications and survival during follow-up, and their gene mutations. We also performed structural analysis of a novel ryanodine receptor 2 variant, E4005V.
Results: The patients included 19 with the catecholaminergic polymorphic ventricular tachycardia mono-phenotype and 5 catecholaminergic polymorphic ventricular tachycardia-left ventricular non-compaction overlap patients. The median age at symptom onset was 9.0 (8.0, 13.5) years. Most patients (91.7%) had cardiac symptoms, and 50% had a family history of syncope. Overlap patients had lower peak heart rates and lower threshold heart rates for ventricular tachycardia and ventricular premature beats during the exercise stress test (p < 0.05). Sudden cardiac death risk may be higher in overlap patients during follow-up. Gene sequencing revealed one novel ryanodine receptor 2 missense mutation, E4005V, and one mutation previously unreported in catecholaminergic polymorphic ventricular tachycardia, but no left ventricular non-compaction-causing mutations were observed. In-silico analysis showed that the novel mutation E4005V disrupted the interaction between two charged residues.
Conclusions: Catecholaminergic polymorphic ventricular tachycardia overlapping with left ventricular non-compaction may lead to ventricular premature beats/ventricular tachycardia during exercise stress testing at a lower threshold heart rate than catecholaminergic polymorphic ventricular tachycardia alone; it may also indicate a worse prognosis and requires strict follow-up. Ryanodine receptor 2 mutations disrupted interactions between residues and may interfere with the function of ryanodine receptor 2.
abstract_id: PUBMED:22121465
Isolated Non-compaction Cardiomyopathy Presented with Ventricular Tachycardia. Non-compaction cardiomyopathy is a recently recognized disorder, based on an arrest in endomyocardial morphogenesis. The disease is characterized by heart failure (both diastolic and systolic), systemic emboli and ventricular arrhythmias. The diagnosis is established by two-dimensional echocardiography. Isolated left ventricular non-compaction cardiomyopathy (IVNC) is an exceedingly rare congenital cardiomyopathy. Only a few cases of this condition have been reported. It is characterized by prominent and excessive trabeculation in a ventricular wall segment, with deep intertrabecular spaces perfused from the ventricular cavity. Echocardiographic findings are important clues for the diagnosis. We report a case of isolated non-compaction of the left ventricular myocardium presented with ventricular tachycardia.
abstract_id: PUBMED:32775315
Different Manifestations in Familial Isolated Left Ventricular Non-compaction: Two Case Reports and Literature Review. Left ventricular non-compaction (LVNC) is a form of cardiomyopathy characterized by prominent trabeculae and deep intertrabecular recesses, which form a distinct "non-compacted" layer in the myocardium. It results from intrauterine arrest of the compaction process of the left ventricular myocardium. Clinical manifestations vary from asymptomatic to heart failure (HF), arrhythmias, or thromboembolic events. We present the case of a mother and son diagnosed with isolated LVNC (ILVNC). A 4-year-old male patient, diagnosed at 3 months with ILVNC and NYHA functional class IV HF, was admitted to the Emergency Institute for Cardiovascular Diseases and Transplantation of Targu Mures, Romania, for cardiologic reevaluation and diagnosis confirmation. ILVNC was confirmed using echocardiography, revealing a non-compaction to compaction (NC/C) ratio of > 2.7. His evolution was stationary until the age of 8 years, when severe pneumonia caused hemodynamic decompensation, and he was listed for heart transplantation (HT). The patient underwent HT at the age of 11 years with a favorable postoperative outcome. Meanwhile, a 22-year-old female patient, the mother of the aforementioned patient, was also admitted to our institute because of severe fatigue, dyspnea, and recurrent palpitations with multiple implantable cardioverter defibrillator (ICD) shock deliveries. Her extensive medical history revealed that a presumptive ILVNC diagnosis had been established when she was 11 years old. She was asymptomatic until the age of 18, when, 3 months post-partum, she developed NYHA functional class III HF and subsequently underwent ICD implantation. Her diagnosis was confirmed using multi-detector computed tomography angiography, which revealed an NC/C ratio of > 3.3. ICD adjustments were carried out, with a favorable evolution under chronic drug therapy. The last evaluation, at 27 years old, revealed that she was in NYHA functional class II HF. In conclusion, ILVNC, even when familial, can present different clinical pictures and therefore requires different medical approaches.
abstract_id: PUBMED:21487972
Non-compaction cardiomyopathy. History And Admission Findings: A 49-year-old man with loss of performance, ventricular ectopy and left bundle branch block was referred for diagnostic workup. He had dysmorphic skeletal features, with shortening and muscular atrophy of the arm and leg, and syndactyly.
Investigations: Echocardiography revealed peculiar hypertrabeculation of the left ventricular myocardium, which was confirmed by cardiac magnetic resonance imaging. A diagnosis of non-compaction cardiomyopathy was therefore made. Holter monitoring showed non-sustained ventricular tachycardia, and coronary heart disease was excluded by coronary angiography. Treatment And Course: The patient received optimal heart failure drug therapy. Because of malignant ventricular ectopy, a biventricular ICD system (CRT-D system) was implanted. Signs and symptoms of heart failure have been stable for four years (NYHA class II). Repeated episodes of sustained ventricular tachycardia were terminated by ICD overstimulation. A history of inappropriate ICD shocks was attributed to the patient's non-compliance with beta-blocker use.
Conclusion: Non-compaction cardiomyopathy is a rare but characteristic cause of heart failure that is associated with potentially malignant arrhythmias. Diagnosis is based on an echocardiographic workup with typical features. However, because of its low prevalence and awareness deficits in the medical community, non-compaction cardiomyopathy is probably diagnosed too late and not frequently enough.
abstract_id: PUBMED:36147147
Isolated left ventricular non-compaction cardiomyopathy complicated by acute ischemic stroke: A rare case report. Introduction and importance: Isolated left ventricular non-compaction cardiomyopathy (LVNC) is an uncommon type of primary hereditary cardiomyopathy. It is characterized by a spongy morphological appearance of the myocardium that occurs largely in the LV.
Case Presentation: We discuss the case of a 19-year-old female with no known past medical history who presented with shortness of breath (SOB) and left-sided weakness following delivery. Bedside echocardiography demonstrated left ventricular trabeculation with a reduced ejection fraction, while brain computed tomography showed an acute ischemic stroke, primarily attributed to the non-compaction cardiomyopathy as the embolic source. The patient was discharged after being successfully managed.
Clinical Discussion: Left ventricular non-compaction cardiomyopathy (LVNC), a rare form of congenital cardiomyopathy, is characterized by progressive ventricular trabeculation and deep intertrabecular recesses caused by a functional arrest of myocardial maturation. Our patient had isolated non-compaction cardiomyopathy complicated by an acute ischemic stroke and was treated accordingly.
Conclusion: It is usually associated with congenital heart disease, but isolated left ventricular non-compaction cardiomyopathy is very uncommon.
abstract_id: PUBMED:30980206
Clinical and genetic insights into non-compaction: a meta-analysis and systematic review on 7598 individuals. Background: Left ventricular non-compaction has been increasingly diagnosed in recent years. However, it is still debated whether non-compaction is a pathological condition or a physiological trait. In this meta-analysis and systematic review, we compare studies, which investigated these two different perspectives. Furthermore, we provide a comprehensive overview on the clinical outcome as well as genetic background of left ventricular non-compaction cardiomyopathy in adult patients.
Methods And Results: We retrieved English-language PubMed/Medline literature from 2000 to 19/09/2018 on the clinical outcome and genotype of patients with non-compaction. We summarized and extensively reviewed all studies that passed the selection criteria and performed a meta-analysis on key phenotypic parameters. Altogether, 35 studies with 2271 non-compaction patients were included in our meta-analysis. The mean age at diagnosis was in the middle of the fifth decade of life. Two-thirds of patients were male. Congenital heart diseases, including atrial or ventricular septal defect or Ebstein anomaly, were reported in 7% of patients. Twenty-four percent presented with a family history of cardiomyopathy. The mean frequency of neuromuscular diseases was 5%. Heart rhythm abnormalities were reported frequently: conduction disease in 26%, supraventricular tachycardia in 17%, and sustained or non-sustained ventricular tachycardia in 18% of patients. Three important outcome measures were reported: systemic thromboembolic events with a mean frequency of 9%, heart transplantation with 4%, and appropriate ICD therapy with 15%. Nine studies investigated the genetics of non-compaction cardiomyopathy. The most frequently mutated gene was TTN, with a pooled frequency of 11%. The average frequency of MYH7 mutations was 9%, of MYBPC3 mutations 5%, and of CASQ2 and LDB3 mutations 3% each. TPM1, MIB1, ACTC1, and LMNA mutations had an average frequency of 2% each. Mutations in PLN, HCN4, TAZ, DTNA, TNNT2, and RBM20 were reported with a frequency of 1% each. We also summarized the results of eight studies investigating non-compaction in a total of 5327 athletes, pregnant women, patients with sickle cell disease, and individuals from population-based cohorts, in which the presence of left ventricular hypertrabeculation ranged from 1.3% to 37%.
Conclusion: The summarized data indicate that non-compaction may lead to unfavorable outcomes in different cardiomyopathy entities. The presence of key features in a multimodal diagnostic approach could distinguish between a benign morphological trait and manifest cardiomyopathy.
abstract_id: PUBMED:35795877
Co-occurrence of Myocardial Sarcoidosis and Left Ventricular Non-compaction in a Patient with Advanced Heart Failure. A 46-year-old man with systolic heart failure, end-stage renal disease on dialysis, ventricular tachycardia and pulmonary sarcoidosis presented with decompensated heart failure and cardiogenic shock of unknown aetiology. The hospital course was complicated by worsening shock requiring inotropic and mechanical circulatory support, as well as eventual dual heart and kidney transplantation. Cardiac imaging was used to assess the aetiology of the patient's non-ischaemic cardiomyopathy, including a PET scan and cardiac MRI. Imaging demonstrated findings consistent with left ventricular non-compaction, but was inconclusive for cardiac sarcoidosis. After eventual heart transplantation, histopathology of the patient's explanted heart showed evidence of both non-compaction and cardiac sarcoidosis. In this case report, the authors review the pathophysiology of both cardiac sarcoidosis and left ventricular non-compaction, and highlight a multimodality approach to the diagnosis of non-ischaemic cardiomyopathy.
abstract_id: PUBMED:32128492
Isolated left ventricular apical hypoplasia with myocardial non-compaction: a case report. Background: Isolated left ventricular apical hypoplasia (ILVAH) is a rare congenital cardiac abnormality, which might result in severe symptomatic heart failure (HF) with pulmonary hypertension, atrial fibrillation (AF), or malignant ventricular tachycardia in adults.
Case Summary: A 32-year-old man presented with exertional dyspnoea (New York Heart Association Class II) and persistent AF. Echocardiography and cardiac magnetic resonance showed the presence of (i) spherical remodelling of the left ventricle (LV) with impaired contractile function (three-dimensional ejection fraction, EF 32%); (ii) substitution of apical myocardium by fatty tissue; (iii) abnormal origin of a papillary muscle network; and (iv) an elongated right ventricle, compatible with ILVAH. In addition, a non-compacted endomyocardial layer of the LV was observed. Because of a high risk of sudden cardiac death in symptomatic HF patients with reduced EF, an implantable cardioverter-defibrillator was placed, followed by pulmonary vein isolation. After the procedures and restoration of sinus rhythm, the patient demonstrated improvement in HF symptoms and exercise tolerance, accompanied by an enhancement of left and right ventricular systolic function on echocardiography. At the 6-month, 1-year, and 2-year follow-ups, the clinical condition of the patient and the echocardiographic findings remained stable.
Discussion: A rare combination of ILVAH and left ventricular non-compaction was observed in this young adult, who presented with symptomatic HF and persistent AF. The use of consecutive invasive cardiac procedures led to restoration of sinus rhythm and improvement in myocardial contractility and in the clinical manifestations of HF.
Answer: Ventricular tachycardia is indeed a recognized complication in patients with left ventricular non-compaction (LVNC). LVNC is characterized by prominent trabeculae and deep intertrabecular recesses in the myocardium, and it can present with a range of clinical manifestations including heart failure, systemic emboli, and ventricular arrhythmias such as ventricular tachycardia (PUBMED:32775315; PUBMED:22121465; PUBMED:32724738).
The occurrence of ventricular tachycardia in LVNC patients is frequently reported. For instance, one case report describes a patient with LVNC presenting with symptomatic ventricular ectopy and runs of non-sustained ventricular tachycardia (VT), which was successfully ablated (PUBMED:24130428). Another study found that heart rhythm abnormalities, including sustained or non-sustained ventricular tachycardia, were reported in 18% of patients with LVNC (PUBMED:30980206). Additionally, a case of isolated non-compaction cardiomyopathy presented with ventricular tachycardia, further emphasizing the association between LVNC and ventricular arrhythmias (PUBMED:22121465).
Moreover, a study on catecholaminergic polymorphic ventricular tachycardia (CPVT) combined with LVNC found that patients with this overlap condition may have a higher risk of sudden cardiac death and may present with ventricular premature beats/ventricular tachycardia during exercise stress tests at lower threshold heart rates than those with CPVT alone, suggesting a worse prognosis (PUBMED:38017672).
In conclusion, ventricular tachycardia is a frequent and potentially serious complication in patients with LVNC, and it is important for clinicians to be aware of this association to provide appropriate management and follow-up for affected individuals. |
Instruction: Should clinicians deliver decision aids?
Abstracts:
abstract_id: PUBMED:34295744
A systematic review of decision aids for gender affirming therapy. Background: Transgender and gender diverse (TGD) persons considering gender affirming therapy have to make many complex medical decisions, potentially without understanding the associated harms or benefits of hormonal and surgical interventions. Further, clinicians are often unaware of how best to communicate information to persons seeking gender affirming therapy. Patient decision aids have been developed to provide evidence-based information as a way to help people make decisions in collaboration with their clinicians. It is unclear whether such tools exist for persons seeking gender affirming therapy. The objective of our systematic review is to search for and determine the quality of any existing patient decision aids developed for TGD persons considering gender affirming therapy, and the outcomes associated with their use.
Methods: We adapted a search strategy for databases using two key concepts "decision support intervention/patient decision aid" and "transgender". We also conducted a brief online search of Google and abstracts from relevant conferences to identify any tools not published in the academic literature. Following study selection and data extraction, we used the International Patient Decision Aid Standards instrument (IPDASi) to assess the quality of patient decision aids, and the Standards for UNiversal reporting of patient Decision Aid Evaluations (SUNDAE) checklist to assess the quality of evaluations.
Results: We identified 762 studies; none were identified from Google or conference content. One tool met our inclusion criteria: an online, pre-encounter patient decision aid for transmasculine genital gender-affirming surgery developed in Amsterdam and translated into English and Dutch. The tool met all the IPDASi qualifying criteria, scored 17/28 on the certification criteria, and scored 57/112 on the quality criteria. The efficacy of the patient decision aid has not been evaluated.
Conclusions: Despite multiple decisions required for gender affirming therapies, only one patient decision aid has been developed for transmasculine genital reconstruction. Further research is required to develop patient decision aids for the multiple decision points along the gender affirming journey.
abstract_id: PUBMED:35501228
Development of an adjective-selection measure evaluating clinicians' attitudes towards using patient decision aids: The ADOPT measure. Background: The implementation of shared decision-making and patient decision aids (PDAs) is impeded by clinicians' attitudes.
Objective: To develop a measure of clinician attitude towards PDAs.
Methods: To develop the ADOPT measure, we used four stages, culminating in measure responses by medically qualified clinicians, 25 from each of the following specialties: emergency medicine, family medicine, oncology, obstetrics and gynaecology, orthopaedics, and psychiatry. To assess validity, we also posed three questions to assess the participants' attitudinal and behavioural endorsement of PDAs. Allocating a point per adjective, we calculated the sum as well as positive and negative scores. We used univariate logistic regression to determine associations between the scores and attitudinal or behavioural endorsements.
Results: 152 clinicians completed the measure. 'Time-saving' (39%) and 'easy' (34%) were the most frequently selected adjectives. 'Time-consuming' and 'unfamiliar' were the most frequently selected negative adjectives (both 19%). The sum scores were significantly associated with behavioural endorsement of PDAs.
Discussion: Clinicians were able to respond to adjective-selection methods and the ADOPT measure could help assess clinician attitudes to PDAs. Validation will require further research.
Practice Implications: The ADOPT measure could help identify the extent and source of attitudinal resistance.
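As a rough illustration of the analysis described in this abstract — summing selected adjectives into a score and relating it to endorsement of PDAs with univariate logistic regression — here is a minimal sketch. All variable names, the scoring range, and the data are invented for illustration; the ADOPT authors' actual item coding and model are not specified beyond the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: one point per selected positive adjective (sum score 0-10)
# and a binary indicator of behavioural endorsement of patient decision aids.
n = 152
sum_score = rng.integers(0, 11, size=n).astype(float)
endorse_prob = 1.0 / (1.0 + np.exp(-(0.4 * sum_score - 2.0)))
endorsed = rng.binomial(1, endorse_prob)

# Univariate logistic regression: endorsement ~ sum score
X = sm.add_constant(sum_score)
fit = sm.Logit(endorsed, X).fit(disp=False)
print(fit.params)                                   # intercept and slope (log-odds)
print("odds ratio per extra adjective:", np.exp(fit.params[1]))
```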
abstract_id: PUBMED:30649604
Adaptation and qualitative evaluation of encounter decision aids in breast cancer care. Purpose: Shared decision-making is currently not widely implemented in breast cancer care. Encounter decision aids support shared decision-making by helping patients and physicians compare treatment options. So far, little was known about adaptation needs for translated encounter decision aids, and encounter decision aids for breast cancer treatments were not available in Germany. This study aimed to adapt and evaluate the implementation of two encounter decision aids on breast cancer treatments in routine care.
Methods: We conducted a multi-phase qualitative study: (1) translation of two breast cancer Option Grid™ decision aids; comparison to national clinical standards; cognitive interviews to test patients' understanding; (2) focus groups to assess acceptability; (3) testing in routine care using participant observation. Data were analysed using qualitative content analysis.
Results: Physicians and patients reacted positively to the idea of encounter decision aids, and reported being interested in using them; patients were most receptive. Several adaptation cycles were necessary. Uncertainty about feasibility of using encounter decision aids in clinical settings was the main physician-reported barrier. During real-world testing (N = 77 encounters), physicians used encounter decision aids in one-third of potentially relevant encounters. However, they did not use the encounter decision aids to stimulate dialogue, which is contrary to their original scope and purpose.
Conclusions: The idea of using encounter decision aids was welcomed, but more by patients than by physicians. Adaptation was a complex process and required resources. Clinicians did not follow suggested strategies for using encounter decision aids. Our study indicates that production of encounter decision aids alone will not lead to successful implementation, and has to be accompanied by training of health care providers.
abstract_id: PUBMED:29858145
Exploring the use of Option Grid™ patient decision aids in a sample of clinics in Poland. Background: Research on the implementation of patient decision aids to facilitate shared decision making in clinical settings has steadily increased across Western countries. A study which implements decision aids and measures their impact on shared decision making has yet to be conducted in the Eastern part of Europe.
Objective: To study the use of Option GridTM patient decision aids in a sample of Grupa LUX MED clinics in Warsaw, Poland, and measure their impact on shared decision making.
Method: We conducted a pre-post interventional study. Following a three-month period of usual care, clinicians from three Grupa LUX MED clinics received a one-hour training session on how to use three Option GridTM decision aids and were provided with copies for use for four months. Throughout the study, all eligible patients were asked to complete the three-item CollaboRATE patient-reported measure of shared decision making after their clinical encounter. CollaboRATE enables patients to assess the efforts clinicians make to: (i) inform them about their health issues; (ii) listen to 'what matters most'; (iii) integrate their treatment preference in future plans. A hierarchical logistic regression model was performed to understand which variables had an effect on CollaboRATE.
Results: 2,048 patients participated in the baseline phase; 1,889 patients participated in the intervention phase. Five of the thirteen study clinicians had a statistically significant increase in their CollaboRATE scores (p<.05) when comparing baseline phase to intervention phase. All five clinicians were located at the same clinic, the only clinic where an overall increase (non-significant) in the mean CollaboRATE top score percentage occurred from baseline phase (M=60 %, SD=0.49; 95 % CI [57-63 %]) to intervention phase (M=62 %, SD=0.49; 95% CI [59-65%]). Only three of those five clinicians who had a statistically significant increase had a clinically significant difference.
Conclusion: The implementation of Option GridTM helped some clinicians practice shared decision making as reflected in CollaboRATE scores, but most clinicians did not have a significant increase in their scores. Our study indicates that the effect of these interventions may be dependent on clinic contexts and clinician engagement.
abstract_id: PUBMED:19605885
Should clinicians deliver decision aids? Further exploration of the statin choice randomized trial results. Background: Statin Choice is a decision aid about taking statins. The optimal mode of delivering Statin Choice (or any other decision aid) in clinical practice is unknown.
Methods: To investigate the effect of mode of delivery on decision aid efficacy, the authors further explored the results of a concealed 2 x 2 factorial clustered randomized trial enrolling 21 endocrinologists and 98 diabetes patients and randomizing them to 1) receive either the decision aid or pamphlet about cholesterol, and 2) have these delivered either during the office visit (by the clinician) or before the visit (by a researcher). We estimated between-group differences and their 95% confidence intervals (CI) for acceptability of information delivery (1-7), knowledge about statins and coronary risk (0-9), and decisional conflict about statin use (0-100) assessed immediately after the visit. Follow-up was 99%.
Results: The relative efficacy of the decision aid v. pamphlet interacted with the mode of delivery. Compared with the pamphlet, patients whose clinicians delivered the decision aid during the office visit showed significant improvements in knowledge (difference of 1.6 of 9 questions, CI 0.3, 2.8) and nonsignificant trends toward finding the decision aid more acceptable (odds ratio 3.1, CI 0.9, 11.2) and having less decisional conflict (difference of 7 of 100 points, CI -4, 18) than when a researcher delivered the decision aid just before the office visit.
Conclusions: Delivery of decision aids by clinicians during the visit improves knowledge and shows a trend toward better acceptability and less decisional conflict.
abstract_id: PUBMED:27044963
Ten Years, Forty Decision Aids, And Thousands Of Patient Uses: Shared Decision Making At Massachusetts General Hospital. Shared decision making is a core component of population health strategies aimed at improving patient engagement. Massachusetts General Hospital's integration of shared decision making into practice has focused on the following three elements: developing a culture receptive to, and health care providers skilled in, shared decision making conversations; using patient decision aids to help inform and engage patients; and providing infrastructure and resources to support the implementation of shared decision making in practice. In the period 2005-15, more than 900 clinicians and other staff members were trained in shared decision making, and more than 28,000 orders for one of about forty patient decision aids were placed to support informed patient-centered decisions. We profile two different implementation initiatives that increased the use of patient decision aids at the hospital's eighteen adult primary care practices, and we summarize key elements of the shared decision making program.
abstract_id: PUBMED:34187484
Decision aids linked to evidence summaries and clinical practice guidelines: results from user-testing in clinical encounters. Background: Tools for shared decision-making (e.g. decision aids) are intended to support health care professionals and patients engaged in clinical encounters involving shared decision-making. However, decision aids are hard to produce, and onerous to update. Consequently, they often do not reflect best current evidence, and show limited uptake in practice. In response, we initiated the Sharing Evidence to Inform Treatment decisions (SHARE-IT) project. Our goal was to develop and refine a new generation of decision aids that are generically produced along digitally structured guidelines and evidence summaries.
Methods: Applying principles of human-centred design and following the International Patient Decision Aid Standards (IPDAS) and GRADE methods for trustworthy evidence summaries we developed a decision aid prototype in collaboration with the Developing and Evaluating Communication strategies to support Informed Decisions and practice based on Evidence project (DECIDE). We iteratively user-tested the prototype in clinical consultations between clinicians and patients. Semi-structured interviews of participating clinicians and patients were conducted. Qualitative content analysis of both user-testing sessions and interviews was performed and results categorized according to a revised Morville's framework of user-experience. We made it possible to produce, publish and use these decision aids in an electronic guideline authoring and publication platform (MAGICapp).
Results: Direct observations and analysis of user-testing of 28 clinical consultations between physicians and patients informed four major iterations that addressed readability, understandability, usability and ways to cope with information overload. Participants reported that the tool supported natural flow of the conversation and induced a positive shift in consultation habits towards shared decision-making. We integrated the functionality of SHARE-IT decision aids in MAGICapp, which has since generated numerous decision aids.
Conclusion: Our study provides a proof of concept that encounter decision aids can be generically produced from GRADE evidence summaries and clinical guidelines. Online authoring and publication platforms can help scale up production including continuous updating of electronic encounter decision aids, fully integrated with evidence summaries and clinical practice guidelines.
abstract_id: PUBMED:34052064
A systematic review and meta-analysis of effectiveness of decision aids for vaccination decision-making. Background: This systematic review and meta-analysis aimed to assess the effectiveness of vaccination decision aids compared with usual care on vaccine uptake, vaccine attitudes, decisional conflict, intent to vaccinate and timeliness.
Methods: Searches were conducted in OVID Medline, OVID Embase, CINAHL, PsycINFO, the Cochrane Library and SCOPUS. Randomised controlled trials were included if they evaluated the impact of decision aids as defined by the International Patient Decision Aids Standards Collaboration. Where possible, meta-analysis was undertaken. Where meta-analysis was not possible, we conducted a narrative synthesis. Risk of bias in included trials was assessed using the Cochrane Collaboration's risk of bias tool. Data were analysed using STATA.
Results: Five RCTs were identified that evaluated the effectiveness of decision aids in the context of vaccination decision making. Meta-analysis of four studies showed that decision aids may have slightly increased vaccination uptake, but this was reduced to no effect once studies with higher risk of bias were excluded. Meta-analysis of three studies showed that decision aids moderately increased intention to vaccinate. Narrative synthesis of two studies suggested that decision aids reduced decisional conflict. One study reported that decision aids decreased perceived risk of vaccination. Content, format and delivery method of the decision aids varied across the studies. It was not clear from the information reported whether these variations affected the effectiveness of the decision aids.
Conclusion: Decision aids can assist in vaccine decision making. Future studies of decision aids could provide greater detail of the decision aids themselves, which would enable comparison of the effectiveness of different elements and formats. Standardising decision aids would also allow for easier comparison between decision aids.
abstract_id: PUBMED:37603407
Clinicians' Perspectives and Proposed Solutions to Improve Contraceptive Counseling in the United States: Qualitative Semistructured Interview Study With Clinicians From the Society of Family Planning. Background: Contraceptive care is a key element of reproductive health, yet only 12%-30% of women report being able to access and receive the information they need to make these complex, personal health care decisions. Current guidelines recommend implementing shared decision-making approaches; and tools such as patient decision aid (PtDA) applications have been proposed to improve patients' access to information, contraceptive knowledge, decisional conflict, and engagement in decision-making and contraception use. To inform the design of meaningful, effective, elegant, and feasible PtDA applications, studies are needed of all users' current experiences, needs, and barriers. While multiple studies have explored patients' experiences, needs, and barriers, little is known about clinicians' experiences, perspectives, and barriers to delivering contraceptive counseling.
Objective: This study focused on assessing clinicians' experiences, including their perspectives of patients' needs and barriers. It also explored clinicians' suggestions for improving contraceptive counseling and the feasibility of a contraceptive PtDA.
Methods: Following the decisional needs assessment approach, we conducted semistructured interviews with clinicians recruited from the Society of Family Planning. The Ottawa Decision Support Framework informed the interview guide and initial codebook, with a specific focus on decision support and decisional needs as key elements that should be assessed from the clinicians' perspective. An inductive content approach was used to analyze data and identify primary themes and suggestions for improvement.
Results: Fifteen clinicians (12 medical doctors and 3 nurse practitioners) participated, with an average of 19 years of experience in multiple regions of the United States. Analyses identified 3 primary barriers to the provision of quality contraceptive counseling: gaps in patients' underlying sexual health knowledge, biases that impede decision-making, and time constraints. All clinicians supported the development of contraceptive PtDAs as a feasible solution to these main barriers. Multiple suggestions for improvement were provided, including clinician- and system-level training, tools, and changes that could support successful implementation.
Conclusions: Clinicians and developers interested in improving contraceptive counseling and decision-making may wish to incorporate approaches that assess and address upstream factors, such as sexual health knowledge and existing heuristics and biases. Clinical leaders and administrators may also wish to prioritize solutions that improve equity and accessibility, including PtDAs designed to provide education and support in advance of the time-constrained consultations, and strategic training opportunities that support cultural awareness and shared decision-making skills. Future studies can then explore whether well-designed, user-centered shared decision-making programs lead to successful and sustainable uptake and improve patients' reproductive health contraceptive decision-making.
abstract_id: PUBMED:22778792
Decision aids for patients. Background: Patients want to be more involved in medical decision-making. To this end, some decision aids are now available.
Methods: We present an overview of this subject, in which we explain the terms "shared decision-making", "decision aid", and "evidence-based patient information" and survey information on the available decision aids in German and other languages on the basis of a literature search in MEDLINE, EMBASE and PsycInfo and a current Cochrane Review. We also searched the Internet for providers of decision aids in Germany.
Results: Decision aids exist in the form of brochures, decision tables, videos, and computer programs; they address various topics in the prevention, diagnosis, and treatment of disease. They typically contain information on the advantages and disadvantages of the available options, as well as guidance for personal decision-making. They can be used alone or as a part of structured counseling or patient education. Minimal quality standards include an adequate evidence base, completeness, absence of bias, and intelligibility. Our search revealed 12 randomized controlled trials (RCTs) of decision aids in German and 106 RCTs of decision aids in other languages. These trials studied the outcome of the use of decision aids not just with respect to clinical developments, but also with respect to patient knowledge, adherence to treatment regimens, satisfaction, involvement in decision-making, autonomy preference, and decisional conflicts.
Conclusion: Only a small fraction of the available decision aids were systematically developed and have been subjected to systematic evaluation. Patients are still not receiving the help in decision-making to which medical ethics entitles them. Structures need to be put in place for the sustainable development, evaluation and implementation of high-quality decision aids.
Answer: The question of whether clinicians should deliver decision aids is addressed in the literature with various perspectives and findings. The Statin Choice randomized trial results suggest that when clinicians deliver decision aids during the office visit, it can improve patient knowledge and show a trend toward better acceptability and less decisional conflict compared to when a researcher delivers the decision aid just before the office visit (PUBMED:19605885). This indicates that the delivery of decision aids by clinicians during the visit may be beneficial.
However, the implementation and effectiveness of decision aids can be influenced by several factors, including clinician attitudes, the clinical setting, and the engagement of both clinicians and patients. For instance, a study on the use of Option Grid™ patient decision aids in Poland found that while the implementation of these aids helped some clinicians practice shared decision making as reflected in CollaboRATE scores, most clinicians did not have a significant increase in their scores, suggesting that the effect of decision aids may depend on clinic contexts and clinician engagement (PUBMED:29858145).
Furthermore, a qualitative study on the adaptation and evaluation of encounter decision aids in breast cancer care in Germany revealed that while the idea of using encounter decision aids was welcomed, particularly by patients, clinicians did not use the aids to stimulate dialogue as intended, indicating that simply producing decision aids is not enough for successful implementation and that training of healthcare providers is necessary (PUBMED:30649604).
Massachusetts General Hospital's experience with shared decision making highlights the importance of developing a culture receptive to shared decision making, training healthcare providers, and providing infrastructure and resources to support the implementation of decision aids in practice (PUBMED:27044963).
In conclusion, while there is evidence to support the delivery of decision aids by clinicians during clinical encounters, successful implementation requires consideration of various factors, including clinician attitudes, training, and the clinical environment. Clinicians' delivery of decision aids should be part of a broader strategy to promote shared decision making and patient engagement in healthcare decisions. |
Instruction: Does respiration perturb body balance more in chronic low back pain subjects than in healthy subjects?
Abstracts:
abstract_id: PUBMED:12206948
Does respiration perturb body balance more in chronic low back pain subjects than in healthy subjects? Objective: To determine whether body balance is perturbed more in low back pain patients than in healthy subjects, under the concept of posturo-kinetic capacity.
Design: Comparison of posturographic and respiratory parameters between low back pain and healthy subjects.
Background: It has been demonstrated that respiratory movements constitute a perturbation to posture, compensated by movements of the spine and of the hips, and that low back pain is frequently associated with a loss of back mobility.
Method: Ten low back pain patients and ten healthy subjects performed five posturographic tests under three different respiratory rate conditions: quiet breathing (spontaneous), slow breathing (0.1 Hz) and fast breathing (0.5 Hz).
Results: Intergroup comparison showed that the mean displacements of the center of pressure were greater for the low back pain group, especially along the antero-posterior axis, where respiratory perturbation is primarily exerted. Inter-condition comparison showed that, in slow and fast breathing relative to quiet breathing, the mean displacement of the center of pressure along the antero-posterior axis was significantly increased only in the low back pain group.
Conclusion: According to the results, respiration presented a greater disturbing effect on body balance in low back pain subjects.
Relevance: This study provides information on the causes of the impaired body balance associated with chronic low back pain, which could be used to improve treatment strategy.
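Illustrative note: the outcome in this study is the mean displacement of the centre of pressure (CoP), compared across breathing conditions. The abstract does not define the exact sway metric, so the sketch below shows common antero-posterior CoP summary measures computed from a synthetic force-platform trace; it is not the authors' processing pipeline.

```python
import numpy as np

def ap_cop_metrics(cop_ap, fs):
    """Common antero-posterior CoP summary measures for one posturographic trial.

    cop_ap: 1-D CoP positions along the antero-posterior axis (mm).
    fs: sampling frequency (Hz).
    Returns mean absolute displacement about the mean position, range,
    total path length, and mean velocity.
    """
    cop_ap = np.asarray(cop_ap, dtype=float)
    centred = cop_ap - cop_ap.mean()
    mean_abs_disp = np.mean(np.abs(centred))
    ap_range = cop_ap.max() - cop_ap.min()
    path_length = np.sum(np.abs(np.diff(cop_ap)))
    mean_velocity = path_length / (len(cop_ap) / fs)
    return mean_abs_disp, ap_range, path_length, mean_velocity

# Synthetic 30 s trial at 100 Hz: slow sway plus a 0.5 Hz component standing in
# for the fast-breathing condition - purely for illustration.
fs = 100
t = np.arange(0, 30, 1 / fs)
cop_ap = 2.0 * np.sin(2 * np.pi * 0.05 * t) + 0.8 * np.sin(2 * np.pi * 0.5 * t)
print(ap_cop_metrics(cop_ap, fs))
```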
abstract_id: PUBMED:1827539
Variations in balance and body sway in middle-aged adults. Subjects with healthy backs compared with subjects with low-back dysfunction. In a preliminary investigation of 45 middle-aged adult subjects, 20 with low-back pain (LBP) and 25 with healthy backs (HB), balance responses (body sway) were measured under different sensory conditions with computerized force plate stabilometry. Compared with HB subjects, in the most stable and then the least stable balance positions, the LBP subjects demonstrated significantly greater postural sway, kept their center of force (COF) significantly more posterior, and were significantly less likely to be able to balance on one foot with eyes closed. Based on subjective observations, the LBP subjects were more likely to pivot about the hip and back to remain upright in challenging balance tasks, whereas healthy controls maintained their fulcrum for the COF around the ankle. Research is needed to determine the incidence of balance problems in LBP patients compared with controls. Effective physical therapy assessment and treatment of LBP patients may require attention to postural alignment, strength, flexibility, joint stability, balance reactions, and postural strategies.
abstract_id: PUBMED:24453604
Comparison of static postural balance between healthy subjects and those with low back pain. Objective: To compare static postural balance between women suffering from chronic low back pain and healthy subjects, based on displacement of the center of pressure.
Methods: The study included 15 women with low back pain (LBP group) and 15 healthy women (healthy group). They were instructed to remain standing on the force platform for 30 seconds. We analyzed the area and the speed of displacement of the center of pressure in both groups. Data analysis was performed using Student's t-test, with a significance level of 5%.
Results: Individuals with chronic low back pain showed a larger area of displacement of the center of pressure than healthy individuals, but there was no significant difference in the speed of displacement of the center of pressure.
Conclusion: Individuals with chronic low back pain had alterations in static balance compared with healthy individuals. Level of Evidence III, Prognostic Studies.
abstract_id: PUBMED:24149491
Neuromuscular fatigue during a modified Biering-Sørensen test in subjects with and without low back pain. Studies employing modified Biering-Sørensen tests have reported that low back endurance is related to the potential for developing low back pain. Understanding the manner in which the spinal musculature fatigues in people with and without LBP is necessary to gain insight into the sensitivity of the modified Biering-Sørensen test to differentiate back health. Twenty male volunteers were divided into an LBP group of subjects with current subacute LBP or a history of LBP that limited their activity (n = 10) and a control group (n = 10). The median frequency of the fast Fourier transform was calculated from bilateral surface electromyography (EMG) of the upper lumbar erector spinae (ULES), lower lumbar erector spinae (LLES) and biceps femoris while participants maintained a prescribed modified Biering-Sørensen test position and exerted isometric forces equivalent to 100, 120, 140 and 160% of the estimated mass of the head-arms-trunk (HAT) segment. Time to failure was also investigated across the percentages of HAT. Fatigue time decreased with increasing load and differences between groups increased as load increased; however, these differences were not significant. Significant differences in the EMG median frequency between groups occurred in the right biceps femoris (p ≤ 0.05), with significant pairwise differences occurring at 140% for the left biceps femoris and at 160% for the right biceps femoris. There were significant pairwise differences at 120% for average EMG of the right biceps femoris and at 140% for the right ULES and the right and left biceps femoris (p ≤ 0.05). The modified Biering-Sørensen test as usually performed at 100% HAT is not sufficient to demonstrate significant differences between controls and subjects with varying degrees of mild back disability based on the Oswestry classification. Key points: (1) The results do not wholly support the modified Biering-Sørensen test utilizing resistance of 100% HAT to discern differences in fatigue in subjects with mild low back pain. (2) Greater activation of the biceps femoris by individuals with low back pain probably contributed to the lack of significant differences in back fatigue times. (3) Subjects with more sophisticated strategies could yield higher fatigue times despite inferior neuromuscular fatigue and the existence of low back pain.
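Illustrative note: the fatigue marker in this study is the median frequency of the surface-EMG power spectrum, which shifts downwards as a muscle fatigues. A minimal sketch of that computation on a synthetic signal is shown below; the study's actual windowing, filtering, and epoching are not described in the abstract, so the choices here (Welch estimate, 1 s epoch) are assumptions.

```python
import numpy as np
from scipy.signal import welch

def emg_median_frequency(emg, fs):
    """Median frequency of a surface-EMG epoch.

    The median frequency splits the estimated power spectrum into two halves
    of equal power; it shifts downwards as the muscle fatigues.
    """
    f, pxx = welch(emg, fs=fs, nperseg=min(1024, len(emg)))
    cum_power = np.cumsum(pxx)
    idx = np.searchsorted(cum_power, cum_power[-1] / 2.0)
    return f[idx]

# Synthetic 1 s EMG-like epoch at 1000 Hz centred near 80 Hz, for illustration only.
rng = np.random.default_rng(1)
fs = 1000
t = np.arange(0, 1, 1 / fs)
emg = np.sin(2 * np.pi * 80 * t) * rng.normal(1.0, 0.3, t.size) + 0.1 * rng.normal(size=t.size)
print("median frequency (Hz):", emg_median_frequency(emg, fs))
```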
abstract_id: PUBMED:33836396
Energy spectral density as valid parameter to compare postural control between subjects with nonspecific chronic low back pain vs healthy subjects: A case-control study. Background: Nonspecific chronic low back pain (NSCLBP) is one of the most common health problems. Objective: To compare postural control (i.e., center of pressure (CoP) displacement and energy spectral density (ESD)), measured with technological devices (accelerometers and a pressure platform), between subjects with NSCLBP and healthy subjects.
Methods: A cross-sectional, observational case-control study (reported in line with STROBE) was conducted. The final sample consisted of 60 subjects (30 with NSCLBP and 30 healthy controls). A triaxial accelerometer and a pressure platform were used to obtain ESD and CoP displacement measurements during four balance tasks (i.e., with and without vision, on a stable versus an unstable surface). Independent t-tests were used to compare participants with NSCLBP and healthy controls on the two clinical measurements (i.e., CoP displacement and ESD) for the four balance tests. A multivariate analysis of variance (MANOVA) together with Fisher's linear discriminant analysis was applied to categorize NSCLBP.
Results: Patients with NSCLBP showed greater CoP migration with eyes open on a stable surface along the anteroposterior axis (p = 0.012); with eyes closed on a stable surface along the mediolateral axis (p = 0.025) and the anteroposterior axis (p = 0.001); with eyes open on an unstable surface along the anteroposterior axis (p = 0.040); and with eyes closed on an unstable surface along the anteroposterior axis (p = 0.015). The ESD was also significantly greater (p ≤ 0.01) in subjects with NSCLBP in all four balance tasks.
Conclusions: The accelerometer appears to be a technological device that could add value to the battery of physical performance tests used to compare subjects with NSCLBP and healthy subjects.
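Illustrative note: the abstract does not define how the energy spectral density (ESD) of the accelerometer signal was estimated. The sketch below shows one plausible, simplified approach — a Welch power spectral density summed over a low-frequency sway band — on a synthetic trace; the band limits, window length, and scaling are arbitrary assumptions, not the authors' method.

```python
import numpy as np
from scipy.signal import welch

def band_spectral_energy(acc, fs, band=(0.1, 5.0)):
    """Crude spectral-energy summary for one accelerometer axis.

    Estimates a Welch power spectral density and sums it over a
    low-frequency postural-sway band (band limits are arbitrary here).
    acc: 1-D acceleration signal; fs: sampling rate (Hz).
    """
    f, pxx = welch(acc, fs=fs, nperseg=min(512, len(acc)))
    mask = (f >= band[0]) & (f <= band[1])
    df = f[1] - f[0]
    return np.sum(pxx[mask]) * df

# Synthetic 30 s trunk-acceleration trace at 100 Hz, for illustration only.
rng = np.random.default_rng(2)
fs = 100
t = np.arange(0, 30, 1 / fs)
acc = 0.05 * np.sin(2 * np.pi * 0.3 * t) + 0.01 * rng.normal(size=t.size)
print("band spectral energy:", band_spectral_energy(acc, fs))
```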
abstract_id: PUBMED:30056411
Lower back pain and healthy subjects exhibit distinct lower limb perturbation response strategies: A preliminary study. Background: It is hypothesized that inherent differences in movement strategies exist between control subjects and those with a history of lower back pain (LBP). Previous motion analysis studies focus primarily on tracking spinal movements, neglecting the connection between the lower limbs and spinal function. Lack of knowledge surrounding the functional implications of LBP may explain the diversity in success from general treatments currently offered to LBP patients.
Objective: This pilot study evaluated the response of healthy controls and individuals with a history of LBP (hLBP) to a postural disturbance.
Methods: Volunteers (n= 26) were asked to maintain standing balance in response to repeated balance disturbances delivered via a perturbation platform while both kinematic and electromyographic data were recorded from the trunk, pelvis, and lower limb.
Results: The healthy cohort utilized an upper body-focused strategy for balance control, with substantial activation of the external oblique muscles. The hLBP cohort implemented a lower limb-focused strategy, relying on activation of the semitendinosus and soleus muscles. No significant differences in joint range of motion were identified.
Conclusions: These findings suggest that particular reactive movement patterns may indicate muscular deficits in subjects with hLBP. Identification of these deficits may aid in developing specific rehabilitation programs to prevent future LBP recurrence.
abstract_id: PUBMED:19444072
Differences in balance strategies between nonspecific chronic low back pain patients and healthy control subjects during unstable sitting. Study Design: A 2-group experimental design.
Objective: To investigate differences in postural control strategies of pelvis and trunk movement between nonspecific chronic low back pain (CLBP) patients and healthy control subjects using 3-dimensional motion analysis.
Summary Of Background Data: Increased postural sway assessed by center of pressure displacements have been documented in patients with low back pain (LBP). The 3-dimensional movement strategies used by patients with LBP to keep their balance are not well documented.
Methods: Nineteen CLBP patients and 20 control subjects were included based on detailed clinical criteria. Every subject was submitted to a postural control test in an unstable sitting position. A 3-dimensional motion analysis system, equipped with 7 infrared M1 cameras, was used to track 9 markers attached to the pelvis and trunk to estimate their angular displacement in the 3 cardinal planes.
Results: The total angular deviation of the pelvis and trunk in all 3 directions was higher in the CLBP group than in the control group. In 4 of the 6 calculated differences, a significantly higher deviation was found in the CLBP group (P-values between 0.013 and 0.047). Subjects in both groups mostly used rotation, rather than lateral flexion or flexion/extension displacements of the pelvis and trunk, to adjust to balance disturbances. The CLBP group showed a high correlation (Pearson: 0.912-0.981) between pelvis and trunk movement compared with the control group.
Conclusion: A higher postural sway and high correlation between pelvis and trunk displacements was found in the LBP group compared with healthy controls.
abstract_id: PUBMED:9836168
Differences in static balance and weight distribution between normal subjects and subjects with chronic unilateral low back pain. Balance reactions are not routinely evaluated in patients with low back pain. The purpose of this study was to determine if there were differences in static balance and weight distribution between subjects with unilateral low back pain (N = 15) and pain-free controls (N = 15). Measurements included limits of stability (%LOS), target sway, weight distribution on each lower extremity in quiet standing, and center of gravity with measurements of maximal excursion in anterior/posterior and medial/lateral directions. Independent t tests were used to compare data between groups. Compared with control subjects, subjects with low back pain demonstrated greater anterior-posterior center of gravity excursion and total center of gravity excursion with eyes open and greater anterior-posterior, medial-lateral, and total center of gravity excursion, target sway, and %LOS with eyes closed. There was no difference in the weight-bearing distribution between groups. This study suggests that static balance in patients with chronic low back pain may be impaired and should be thoroughly evaluated and integrated into physical therapy treatment programs.
abstract_id: PUBMED:26835862
Long term test-retest reliability of Oswestry Disability Index in male office workers. Background: The Oswestry Disability Index (ODI) is one of the most common condition specific outcome measures used in the management of spinal disorders. But there is insufficient study on healthy populations and long term test-retest reliability. This is important because healthy populations are often used for control groups in low back pain interventions, and knowing the reliability of the controls affects the interpretation of the findings of these studies.
Objective: The purpose of this study is to determine the long term test-retest reliability of ODI in office workers.
Methods: Participants with no history of chronic low back pain were included in the study. Subjects were assessed with the Turkish ODI 2.0 (e-forms) on days 1, 2, 4, 8, 15, and 30 to determine the stability of ODI scores over time. The study began with 58 participants (12 female, 46 male); 36 (3 female, 33 male) participated for the full 30 days.
Statistics: Kolmogorov-Smirnov and Friedman tests were used. Test-retest reliability was evaluated using nonparametric statistics. All tests were performed using SPSS 11.
Results: There was no statistically significant difference among the median scores across the assessment days (χ² = 6.482, p > 0.05).
Conclusion: The differences between the median scores on each day and the day-1 score were neither statistically nor clinically significant. The ODI has long-term test-retest reliability in healthy subjects over a 1-month interval.
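Illustrative note: the day-to-day stability analysis here rests on Friedman's test across the six assessment days. A minimal sketch with invented ODI scores is shown below; the study's Kolmogorov-Smirnov checks and SPSS workflow are not reproduced, and the simulated numbers are not the study data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(3)

# Hypothetical ODI scores (0-100) for 36 subjects assessed on days 1, 2, 4, 8, 15, 30.
n_subjects = 36
baseline = rng.uniform(0, 20, size=n_subjects)
day_scores = [baseline + rng.normal(0, 2, size=n_subjects) for _ in range(6)]

# Friedman test for a systematic shift in scores across the repeated assessments.
stat, p = friedmanchisquare(*day_scores)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")
```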
abstract_id: PUBMED:25931735
The effects of whole body vibration on static balance, spinal curvature, pain, and disability of patients with low back pain. [Purpose] The purpose of this study was to investigate the impact of whole body vibration (WBV) on static balance, spinal curvature, pain, and disability in patients with chronic lower back pain. [Subjects and Methods] The subjects were 40 patients, who were randomly assigned to WBV and control groups. Twenty-five minutes of lumbar stability training plus 5 minutes of WBV were conducted for the WBV group, and 30 minutes of lumbar stability training was conducted for the control group. The training was conducted three times per week for a total of 6 weeks. Static balance, spinal curvature, pain, and disability were measured before and after the intervention. [Results] After the intervention, the WBV group showed significant differences in static balance, spinal curvature, pain, and disability. The control group showed significant differences in pain and disability. In the comparison of the two groups, the WBV group showed greater improvements in the fall index and pain. [Conclusion] WBV can be recommended for the improvement of balance ability and pain in patients with chronic lower back pain.
Answer: Yes, respiration perturbs body balance more in chronic low back pain subjects than in healthy subjects. This is supported by a study that compared posturographic and respiratory parameters between low back pain patients and healthy subjects. The study found that mean displacements of the center of pressure were greater for the low back pain group, particularly along the antero-posterior axis where respiratory perturbation is primarily exerted. This effect was significantly increased in slow and fast breathing conditions relative to quiet breathing, but only for the low back pain group (PUBMED:12206948). |
Instruction: Total thyroidectomy with harmonic scalpel combined to gelatin-thrombin matrix hemostatic agent: is it safe and effective?
Abstracts:
abstract_id: PUBMED:34591193
Hemostatic efficacy of two topical adjunctive hemostats in a porcine spleen biopsy punch model of moderate bleeding. Topical hemostatic agents have become essential tools to aid in preventing excessive bleeding in surgical or emergency settings and to mitigate the associated risks of serious complications. In the present study, we compared the hemostatic efficacy of SURGIFLO® Hemostatic Matrix Kit with Thrombin (Surgiflo-flowable gelatin matrix plus human thrombin) to HEMOBLAST™ Bellows Hemostatic Agent (Hemoblast-a combination product consisting of collagen, chondroitin sulfate, and human thrombin). Surgiflo and Hemoblast were randomly tested in experimentally induced bleeding lesions on the spleens of four pigs. Primary endpoints included hemostatic efficacy measured by absolute time to hemostasis (TTH) within 5 min. Secondary endpoints included the number of product applications and the percent of product needed from each device to achieve hemostasis. Surgiflo demonstrated significantly higher hemostatic efficacy and lower TTH (p < 0.01) than Hemoblast. Surgiflo-treated lesion sites achieved hemostasis in 77.4% of cases following a single product application vs. 3.3% of Hemoblast-treated sites. On average, Surgiflo-treated sites required 63% fewer product applications than Hemoblast-treated sites (1.26 ± 0.51 vs. 3.37 ± 1.16). Surgiflo provided more effective and faster hemostasis than Hemoblast. Since both products contain thrombin to activate endogenous fibrinogen and accelerate clot formation, the superior hemostatic efficacy of Surgiflo in the porcine spleen punch biopsy model seems to be due to Surgiflo's property as a malleable barrier able to adjust to defect topography and to provide an environment for platelets to adhere and aggregate. Surgiflo combines a flowable gelatin matrix and a delivery system well-suited for precise application to bleeding sites where other methods of hemostasis may be impractical or ineffective.
abstract_id: PUBMED:23159248
Fatal thromboembolism to the left pulmonary artery by locally applied hemostatic matrix after surgical removal of spinal schwannoma: a case report. Locally applied hemostatic agents, mostly consisting of gelatinous granules with admixed human or bovine thrombin, are used in various surgical procedures. In our case, a 78-year-old woman underwent neurosurgical removal of an extraforaminal schwannoma of the L5 dorsal root ganglion. During the procedure, the hemostatic matrix consisting of a meshwork of bovine gelatinous granules admixed with human thrombin was locally applied to control diffuse paravertebral bleeding. Eight hours after surgery, the patient developed dyspnea with right heart failure and finally died. At autopsy, we found complete occlusion of the left pulmonary artery with a large thromboembolus. Histologically, that thromboembolus consisted of gelatinous granules with only a thin rim of surrounding, classic parietal thrombus. To our knowledge, this is the first description of fatal pulmonary embolization of a major lung artery with this material. The report depicts a possible life-threatening complication associated with the local application of hemostatic agents.
abstract_id: PUBMED:38130367
Human gelatin thrombin matrix with rifampin for the treatment of prosthetic vascular graft infections. We aim to describe and report on a novel graft preservation technique using a human gelatin thrombin matrix with rifampin for the treatment of vascular graft infections. Eight patients with vascular graft infections were included, one with bilateral infections, for a total of nine cases from January 2016 through June 2021. All the patients underwent wound exploration and placement of human gelatin thrombin matrix with rifampin. No deaths or allergic reactions had been reported at the 30-day follow-up, with only one major amputation. The graft and limb salvage rates were 77.8% at the 1-year follow-up. The mean time to a major amputation was 122 days, and the mean time to graft excision was 30 days.
abstract_id: PUBMED:25046623
A combination of hemostatic agents may safely replace deep medullary suture during laparoscopic partial nephrectomy in a pig model. Purpose: We assessed whether a combination of the fibrin tissue adhesive Tisseel® (human fibrinogen and thrombin) plus the hemostatic matrix FloSeal® (bovine derived gelatin matrix/human thrombin) could safely replace the conventional deep medullary suture without compromising outcomes.
Materials And Methods: Laparoscopic mid pole and one-third partial nephrectomy was performed on the right kidney of 12 female pigs. The only difference between the 2 groups of 6 pigs each was the use of a fibrin tissue adhesive plus hemostatic matrix combination in group 2 instead of the deep medullary running suture in control group 1. Renal scans and angiograms were performed at baseline and before sacrifice at 5-week followup. Retrograde in vivo pyelogram was also done.
Results: No significant difference was seen in operative parameters or postoperative course between the groups. Renal scans revealed a statistically insignificant trend toward greater uptake loss in group 1 and angiograms showed 3 major vessel occlusions in that group. No active bleeding was detected. Those 3 kidneys had significantly poorer postoperative uptake on renal scan than that of other kidneys (18.6% vs 39.4%, p = 0.013). Only 1 small asymptomatic pseudoaneurysm was noted in group 1. No urine leakage was found in either group. No major vessel occlusion, pseudoaneurysm or urinary complications developed in group 2.
Conclusions: Even after deep one-third partial nephrectomy, FloSeal with concurrent Tisseel appeared sufficient to control major medullary vascular injuries and replace the deep medullary conventional suture without compromising operative outcomes. The potential advantage observed on functional and vascular examination, namely a decreased risk of unnecessary segmental vessel occlusion, needs further clinical evaluation.
abstract_id: PUBMED:19027513
A gelatin-thrombin matrix for hemostasis after endoscopic sinus surgery. Purpose: Adequate hemostasis is necessary after endoscopic sinus surgery. This study evaluated the clinical performance of Surgiflo hemostatic matrix (Johnson&Johnson Wound Management, a division of Ethicon Inc, Somerville, NJ) with Thrombin-JMI (distributed by Jones Pharma Inc, Bristol, VA, a wholly owned subsidiary of King Pharmaceuticals, Bristol, TN) in achieving hemostasis in patients undergoing endoscopic sinus surgery. Surgiflo hemostatic matrix is a sterile, absorbable porcine gelatin intended to aid with hemostasis when applied to a bleeding surface.
Materials And Methods: This multicenter, prospective, single-arm study evaluated the success in achieving hemostasis within 10 minutes of product application in patients undergoing elective primary or revision endoscopic sinus surgery for chronic sinusitis with a bleeding surface requiring hemostasis. Patient satisfaction and postoperative healing were also evaluated.
Results: Thirty patients were enrolled, including 17 males and 13 females (average age, 48.2 +/- 15.1 years), with 54 operated sides. Twenty-nine patients achieved hemostasis within 10 minutes of product application (96.7% success rate; 1-sided 95% confidence interval, 85.1%-100%). The median total time to hemostasis including manual compression was 61 seconds. No complications, such as synechiae, adhesion, or infection, were reported.
Conclusions: Surgiflo hemostatic matrix with Thrombin-JMI was clinically effective in controlling bleeding in 96.7% of patients. Further randomized controlled trials are indicated.
abstract_id: PUBMED:36599740
Effectiveness of gelatin matrix with human thrombin for reducing blood loss in palliative decompressive surgery with posterior spinal fusion for metastatic spinal tumors. Background: This study aimed to investigate the effect of gelatin matrix with human thrombin (GMHT) on blood loss and survival time in patients with metastatic spinal tumors treated with palliative decompression surgery with posterior spinal fusion.
Methods: We retrospectively reviewed 67 consecutive patients with metastatic spinal tumors who underwent palliative decompression surgery with posterior spinal fusion. We compared patients in whom GMHT was not used during surgery with those in whom GMHT was used. The following baseline characteristics were evaluated: age, height, weight, sex, metastatic tumor diagnosis, medical history, use of antiplatelet drug, use of anticoagulant drug, use of NSAIDs, smoking, preoperative PLT value, preoperative APTT, preoperative PT-INR, Karnofsky Performance Status score, Charlson comorbidities index score, the percentage of patients who received perioperative chemotherapy, main tumor level, Frankel category, revised Tokuhashi score, spinal instability neoplastic score (SINS), number of fusion segments, operation time, intraoperative blood loss, drainage blood loss, red blood cell transfusion, hemoglobin level, total protein (TP), albumin values, total blood loss (TBL), hidden blood loss, postoperative bed rest and postoperative survival time. Perioperative complications were assessed.
Results: Age, height, weight, sex, metastatic tumor diagnosis, medical history, use of antiplatelet drug, use of anticoagulant drug, use of NSAIDs, smoking, preoperative PLT value, preoperative APTT, preoperative PT-INR, CCI score, main level of tumors, SINS score, preoperative Tokuhashi score and number of fusion segments did not differ significantly between the two groups. Operation time, intraoperative blood loss, postoperative drainage blood loss, and TBL were significantly lower in the group with GMHT than in the group without GMHT. The total number of perioperative complications was significantly lower in the group with GMHT than in the group without GMHT. The median postoperative survival time was significantly longer in the GMHT group than in the group without GMHT.
Conclusion: GMHT should be considered a valid option for the treatment of patients with metastatic spinal tumors with a short life expectancy.
abstract_id: PUBMED:26109904
Hemostatic effect and distribution of new rhThrombin formulations in rats. Recombinant human thrombin (rhThrombin) is a potential hemostatic alternative to bovine and human plasma-derived thrombin. The hemostatic effect, liver-regeneration effect, and plasma concentrations of rhThrombin (SCILL), tested as a solution and as hydrogels (thermo-sensitive poloxamer gel and carbomer gel; hameln rds), were evaluated. In the bleeding model, rhThrombin was applied locally to the bleeding site and the time to hemostasis was measured. Both the liquid form of rhThrombin and the thermo-sensitive gel-forming formulation significantly reduced the bleeding time in comparison with saline. In the regeneration model, a V-shaped cut was made in the liver and rhThrombin in both formulations was applied at defined concentrations to the wound for 5 min. The rats survived 1, 3 and 5 days after the injury and treatment. Histological examination showed better results in the group treated with rhThrombin gel than with the liquid (solution) form, but the differences were not significant. Low [(125)I]-rhThrombin radioactivity was measured in plasma after topical application (solution and both hydrogels) at hemostatically effective doses to the partial hepatectomy site in rats. Locally applied rhThrombin on damaged rat liver tissue never reached pharmacologically active systemic levels. The plasma levels and the content of this active protein in injured liver tissue were lower after application of the hydrogels than of the solution.
abstract_id: PUBMED:27713677
Intraoperative Anaphylactic Reaction: Is it the Floseal? When hemodynamic or respiratory instability occurs intraoperatively, the inciting event must be determined so that a therapeutic plan can be provided to ensure patient safety. Although generally uncommon, one cause of cardiorespiratory instability is an anaphylactic reaction. During anesthetic care, such reactions most commonly involve neuromuscular blocking agents, antibiotics, or latex. Floseal is a topical hemostatic agent that is frequently used during orthopedic surgical procedures to augment local coagulation function and limit intraoperative blood loss. As these products are derived from human thrombin, animal collagen, and animal gelatin, allergic phenomena may occur following their administration. We present 2 pediatric patients undergoing posterior spinal fusion who developed intraoperative hemodynamic and respiratory instability following use of the topical hemostatic agent, Floseal. Previous reports of such reactions are reviewed, and the perioperative care of patients with intraoperative anaphylaxis is discussed.
abstract_id: PUBMED:27334382
Improved bleeding scores using Gelfoam(®) Powder with incremental concentrations of bovine thrombin in a swine liver lesion model. Topical hemostatic agents are used intra-operatively to prevent uncontrolled bleeding. Gelfoam(®) Powder contains a hemostatic agent prepared from purified pork skin gelatin, the efficacy of which is increased when combined with thrombin. However, the effect of increasing concentrations of thrombin on resultant hemostasis is not known. This study sought to evaluate the ability of various concentrations of thrombin in combination with Gelfoam Powder to control bleeding using a swine liver lesion model. Ten pigs underwent a midline laparotomy. Circular lesions were created in the left medial, right medial, and left lateral lobes; six lesions per lobe. Gelfoam Powder was hydrated with Thrombin-JMI(®) diluted to 250, 375, and 770 IU/mL. Each concentration was applied to two lesion sites per lobe. Bleeding scores were measured at 3, 6, 9, and 12 min using a 6-point system; comparison of bleeding scores was performed using ANOVA with the post hoc Tukey test. The bleeding scores with thrombin concentrations at 770 IU/mL were significantly lower than at 250 and 375 IU/mL at all four time points. The percentage of biopsies with a clinically acceptable bleeding score rose from 37.9, 46.6, and 71.2 % at 3 min to 55.2, 69.0, and 88.1 % at 12 min in the 250, 375, and 770 IU/mL thrombin groups, respectively. The study showed that the hemostatic response to thrombin was dose-related: using higher concentrations of thrombin with Gelfoam Powder yielded improved hemostasis, as determined by lower bleeding scores.
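Illustrative note: the dose-response comparison in this study uses one-way ANOVA with post hoc Tukey tests on bleeding scores at each time point. The sketch below reproduces that style of analysis on invented scores for the three thrombin concentrations at a single time point; group sizes and values are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)

# Hypothetical bleeding scores (6-point scale) at one time point per thrombin dose.
scores_250 = rng.normal(3.2, 0.8, size=20)
scores_375 = rng.normal(2.9, 0.8, size=20)
scores_770 = rng.normal(2.1, 0.8, size=20)

# One-way ANOVA across the three concentrations.
f_stat, p_val = f_oneway(scores_250, scores_375, scores_770)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Post hoc pairwise comparisons with Tukey's HSD.
scores = np.concatenate([scores_250, scores_375, scores_770])
groups = ["250 IU/mL"] * 20 + ["375 IU/mL"] * 20 + ["770 IU/mL"] * 20
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```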
abstract_id: PUBMED:26920966
Gelatine matrix with human thrombin decreases blood loss in adolescents undergoing posterior spinal fusion for idiopathic scoliosis: a multicentre, randomised clinical trial. Aims: In a multicentre, randomised study of adolescents undergoing posterior spinal fusion for idiopathic scoliosis, we investigated the effect of adding gelatine matrix with human thrombin to the standard surgical methods of controlling blood loss.
Patients And Methods: Patients in the intervention group (n = 30) were randomised to receive a minimum of two and a maximum of four units of gelatine matrix with thrombin in addition to conventional surgical methods of achieving haemostasis. Only conventional surgical methods were used in the control group (n = 30). We measured the intra-operative and total blood loss (intra-operative blood loss plus post-operative drain output).
Results: Each additional hour of operating time increased the intra-operative blood loss by 356.9 ml (p < 0.001) and the total blood loss by 430.5 ml (p < 0.001). Multiple linear regression analysis showed that the intervention significantly decreased the intra-operative (-171 ml, p = 0.025) and total blood loss (-177 ml, p = 0.027). The decrease in haemoglobin concentration from the day before the operation to the second post-operative day was significantly smaller in the intervention group (-6 g/l, p = 0.013) than in the control group.
Conclusion: The addition of gelatine matrix with human thrombin to conventional methods of achieving haemostasis reduces both the intra-operative blood loss and the decrease in haemoglobin concentration post-operatively in adolescents undergoing posterior spinal fusion for idiopathic scoliosis.
Take Home Message: A randomised clinical trial showed that gelatine matrix with human thrombin decreases intra-operative blood loss by 30% when added to traditional surgical haemostatic methods in adolescents undergoing posterior spinal fusion for idiopathic scoliosis.
Answer: The provided abstracts do not directly address the specific question of the safety and effectiveness of using a gelatin thrombin matrix hemostatic agent in combination with a harmonic scalpel during a total thyroidectomy. However, they do provide insights into the general use, efficacy, and potential complications associated with gelatin thrombin matrix hemostatic agents in various surgical contexts.
For instance, the abstract from PUBMED:34591193 demonstrates that SURGIFLO® Hemostatic Matrix Kit with Thrombin (Surgiflo), which is a gelatin thrombin matrix, was effective in achieving hemostasis in a porcine spleen biopsy punch model. This suggests that gelatin thrombin matrix agents can be effective in controlling bleeding in surgical settings.
Conversely, the abstract from PUBMED:23159248 reports a case of fatal thromboembolism following the use of a hemostatic matrix consisting of bovine gelatinous granules admixed with human thrombin. This indicates that there can be serious complications associated with the use of gelatin thrombin matrix agents, although such events may be rare.
The abstract from PUBMED:36599740 discusses the effectiveness of a gelatin matrix with human thrombin (GMHT) in reducing blood loss during palliative decompressive surgery with posterior spinal fusion for metastatic spinal tumors. The study found that GMHT significantly decreased intraoperative blood loss and the total number of perioperative complications, suggesting that GMHT can be a safe and effective hemostatic agent in this context.
While these abstracts provide some information on the use of gelatin thrombin matrix agents in surgery, they do not specifically address the combination of such agents with a harmonic scalpel during a total thyroidectomy. Therefore, while it can be inferred that gelatin thrombin matrix agents have been used effectively in various surgical procedures, the safety and effectiveness of their specific use in total thyroidectomy with a harmonic scalpel would require further investigation or direct clinical evidence. |
Instruction: A review of risk factors and timing for postoperative hematoma after thyroidectomy: is outpatient thyroidectomy really safe?
Abstracts:
abstract_id: PUBMED:22714575
A review of risk factors and timing for postoperative hematoma after thyroidectomy: is outpatient thyroidectomy really safe? Background: Although postoperative hematoma after thyroidectomy is uncommon, patients traditionally have been advised to stay overnight in the hospital for monitoring. With the growing demand for outpatient thyroidectomy, we assessed its safety and feasibility by evaluating the potential risk factors and timing of postoperative hematoma after thyroidectomy.
Methods: From 1995-2011, 3,086 consecutive patients underwent thyroidectomy at our institution; of these, 22 (0.7 %) developed a postoperative hematoma that required surgical reexploration (group I). Potential risk factors were compared between group I and those without hematoma (n = 3,045) or with hematoma but not requiring reexploration (n = 19; group II). Variables that were significant in the univariate analysis were entered into multivariate analysis by binary logistic regression analysis.
Results: Group I was significantly more likely to have undergone previous thyroid operation than group II (27.3 vs. 8.2 %, p = 0.007). The median weight of excised thyroid gland (71.8 vs. 40 g, p = 0.018) and the median size of the dominant nodule (4.1 vs. 3 cm, p = 0.004) were significantly greater in group I than group II. Previous thyroid operation (odds ratio (OR) = 4.084; 95 % confidence interval (CI), 1.105-15.098; p = 0.035) and size of dominant nodule (OR = 1.315; 95 % CI, 1.024-1.687; p = 0.032) were independent factors for hematoma. Sixteen (72.7 %) had hematoma within 6 h, whereas the other 6 (27.3 %) had hematoma at 6-24 h.
Conclusions: Previous thyroid operation and large dominant nodule were independent risk factors for hematoma requiring surgical reexploration. Given that a quarter of hematoma occurred between 6 to 24 h after surgery, routine outpatient thyroidectomy could not be recommended.
abstract_id: PUBMED:35798991
Case Control Study of Risk Factors for Occurrence of Postoperative Hematoma After Thyroid Surgery: Ten Year Analysis of 6938 Operations in a Tertiary Center in Serbia. Background: Post-thyroidectomy bleeding is a rare but potentially life-threatening complication. Early recognition with immediate intervention is crucial for the management of this complication. Therefore, it is very important to identify possible risk factors for postoperative hemorrhage as well as the timing of postoperative hematoma occurrence.
Methods: Retrospective review of 6938 patients undergoing thyroidectomy in a tertiary center in a ten year period (2009-2019) revealed 72 patients with postoperative hemorrhage requiring reoperation. Each patient who developed postoperative hematoma was matched with four control patients that did not develop postoperative hematoma after thyroidectomy. The patients and controls were matched by the date of operation and surgeon performing thyroidectomy.
Results: The incidence of postoperative bleeding was 1.04%. On univariate analysis older age, male sex, higher BMI, higher ASA score, preoperative use of anticoagulant therapy, thyroidectomy for retrosternal goiter, larger thyroid specimens, larger dominant nodules, longer operative time, higher postoperative blood pressure and the use of postoperative subcutaneous heparin were identified as risk factors for postoperative bleeding. Sixty-nine patients (95.8%) bled within first 24 h after surgery.
Conclusion: The rate of postoperative bleeding in our study is consistent with recent literature. Male sex, the use of preoperative anticoagulant therapy, thyroidectomy for retrosternal goiter and the use of postoperative subcutaneous heparin remained statistically significant on multivariate analysis (p < 0.001). When identified, these risk factors may be an obstacle to the outpatient thyroidectomy in our settings.
abstract_id: PUBMED:28765530
Risk factors for postoperative haemorrhage after total thyroidectomy: clinical results based on 2,678 patients. The aim of this study was to analyse postoperative haemorrhage (POH) after a total thyroidectomy and explore the possible risk factors. Records of patients receiving a total thyroidectomy were reviewed and analysed for risk factors of POH. From the 2,678 patients in this study, a total of 39 patients had POH, representing an incidence of 1.5%. The majority (59.0%) of POH events occurred within four hours after surgery. Arterial haemorrhage was the primary cause of POH and was identifiable prior to venous bleeding, making it the first sign of POH. A univariate analysis revealed an association between POH, certain disease factors and BMI, but only a BMI greater than 30 was found to significantly increase the risk of POH (almost 6-fold). At the first sign of POH, all patients showed an obvious red drainage, and 92.3% of the patients had neck swelling. In summary, arterial bleeding is the main cause and first sign of postoperative haemorrhage, as it starts earlier than venous bleeding. A BMI greater than 30 significantly increases the risk of neck haematoma.
abstract_id: PUBMED:26687739
Risk factors for post-thyroidectomy haematoma. Background: There has been increasing emphasis on performing 'same-day' or 'out-patient' thyroidectomy to reduce associated costs. However, acceptance has been limited by the risk of potentially life-threatening post-operative bleeding. This study aimed to review current rates of post-operative bleeding in a metropolitan teaching hospital and identify risk factors.
Method: Medical records of patients undergoing thyroidectomy between January 2007 and March 2012 were reviewed retrospectively. Pre-operative, operative and pathological data, and post-operative complication data, were examined.
Results: The study comprised 205 thyroidectomy cases. Mean age was 51.6 years (standard deviation = 14.74), with 80 per cent females. Unilateral thyroidectomy was performed in 81 cases (39.5 per cent) and total thyroidectomy was performed in 74 cases (36.1 per cent; 5.3 per cent with concomitant lymph node dissection). Nine patients (4.4 per cent) suffered post-operative bleeding, of which six required re-operation. Analysis showed that post-operative systolic blood pressure of 180 mmHg or greater was associated with post-operative bleeding (p = 0.003, chi-square test).
Conclusion: Rates of significant post-operative bleeding are consistent with recent literature. Post-operative hypertension, diabetes and high post-operative drain output were identified as independent risk factors on multivariate analysis; when identified, these may be caveats to same-day discharge of thyroidectomy patients.
abstract_id: PUBMED:33360304
Postoperative Complications After Thyroidectomy: Time Course and Incidence Before Discharge. Background: Although complication rates after thyroidectomy are well described in the literature, the timing of these events is less understood. This study delineates the timeline and risk factors for early adverse events after thyroidectomy.
Materials And Methods: This study included a retrospective review of 161,534 patients who underwent thyroidectomy between 2005 and 2018 using the American College of Surgeons National Surgical Quality Improvement Program database. Time to specific complications was analyzed for all patients undergoing thyroidectomy, with further stratification of hemithyroidectomy and total thyroidectomy cohorts. Univariate analyses were conducted to analyze demographics, preoperative comorbidities, and complications. A multivariate logistic regression model was generated to identify significant risk factors for 7-day postoperative complications.
Results: The overall complication rate was 3.28%. A majority of complications arose before discharge including the following: blood transfusion (96%), hematoma formation (68%), pneumonia (53%), and cardiac arrest (67%). Approximately 37% of unplanned reoperations occurred before discharge in the hemithyroidectomy versus 63% in the total thyroidectomy cohort. Greater than 65% of mortalities occurred after discharge in both groups. Complications generally occurring within 7 d for the entire cohort included the following: pneumonia (3; 2-8 [median postoperative day; interquartile range]), pulmonary embolism (6; 2-12), cardiac arrest (1; 0-5), myocardial infarction (2; 1-6), blood transfusions (0; 0-1), and hematoma formation (0; 0-2). Superficial surgical site infection (9; 6-16) occurred later. Patients who underwent outpatient surgery had a decreased risk of complications (odds ratio 0.41) in the 7-day postoperative period.
Conclusions: Although early complications after thyroidectomy are rare, they have a distinct time course, many of which occur after discharge. However, in selected patients undergoing outpatient thyroidectomy, overall risk of complications is decreased. Understanding timing helps establish better preoperative communication and education to improve postoperative expectations for the provider and patient.
abstract_id: PUBMED:37850042
Risk factors for postoperative cervical haematoma in patients undergoing thyroidectomy: a retrospective, multicenter, international analysis (REDHOT study). Background: Postoperative cervical haematoma represents an infrequent but potentially life-threatening complication of thyroidectomy. Since this complication is uncommon, the assessment of risk factors associated with its development is challenging. The main aim of this study was to identify the risk factors for its occurrence.
Methods: Patients undergoing thyroidectomy in seven high-volume thyroid surgery centers in Europe, between January 2020 and December 2022, were retrospectively analysed. Based on the onset of cervical haematoma, two groups were identified: Cervical Haematoma (CH) Group and No Cervical Haematoma (NoCH) Group. Univariate analysis was performed to compare these two groups. Moreover, employing multivariate analysis, all potential independent risk factors for the development of this complication were assessed.
Results: Eight thousand eight hundred and thirty-nine patients were enrolled: 8,561 were included in NoCH Group and 278 in CH Group. Surgical revision of haemostasis was performed in 70 (25.18%) patients. The overall incidence of postoperative cervical haematoma was 3.15% (0.79% for cervical haematomas requiring surgical revision of haemostasis, and 2.35% for those managed conservatively). The timing of onset of cervical haematomas requiring surgical revision of haemostasis was within six hours after the end of the operation in 52 (74.28%) patients. Readmission was necessary in 3 (1.08%) cases. At multivariate analysis, male sex (P < 0.001), older age (P < 0.001), higher BMI (P = 0.021), unilateral lateral neck dissection (P < 0.001), drain placement (P = 0.007), and shorter operative times (P < 0.001) were found to be independent risk factors for cervical haematoma.
Conclusions: Based on our findings, we believe that patients with the identified risk factors should be closely monitored in the postoperative period, particularly during the first six hours after the operation, and excluded from outpatient surgery.
abstract_id: PUBMED:24947642
Risk factors for hematoma after thyroidectomy: results from the nationwide inpatient sample. Background: Hematoma after thyroidectomy is a potentially lethal complication. We sought to evaluate risk factors for hematoma formation using the Nationwide Inpatient Sample. We hypothesized that certain risk factors could be identified and that this information would be useful to surgeons.
Methods: The Nationwide Inpatient Sample database was queried for patients who underwent thyroidectomy from 1998 to 2010. Bivariate analysis was used to compare patients with and without hematoma. Logistic regression was performed to identify important predictors of hematoma.
Results: There were 150,012 patients. The rate of hematoma was 1.25%. Female sex and high-volume hospitals were important for decreased hematoma risk (odds ratio 0.61[0.54-0.69] and 0.71 [0.56-0.83], respectively). Black race, age >45 years, inflammatory thyroid disease, partial thyroidectomy, chronic kidney disease, and bleeding disorders increased the risk of hematoma (odds ratio 1.37, 1.44, 1.59, 1.69, 1.8, 3.38; respectively). Overall mortality was 0.32% for the entire group and 1.34% in patients with postoperative hematoma (P < .001). Patients with hematoma after thyroidectomy were 2.94 [1.76-4.9] times more likely to die than those without hematoma.
Conclusion: We identified risk factors associated with postoperative hematoma after thyroidectomy. Such information should be useful for surgeons for predicting patients at risk for this potentially lethal complication.
abstract_id: PUBMED:24206619
A multi-institutional international study of risk factors for hematoma after thyroidectomy. Background: Cervical hematoma can be a potentially fatal complication after thyroidectomy, but its risk factors and timing remain poorly understood.
Methods: We conducted a retrospective, case-control study identifying 207 patients from 15 institutions in 3 countries who developed a hematoma requiring return to the operating room (OR) after thyroidectomy.
Results: Forty-seven percent of hematoma patients returned to the OR within 6 hours and 79% within 24 hours of their thyroidectomy. On univariate analysis, hematoma patients were older, more likely to be male, smokers, on active antiplatelet/anticoagulation medications, have Graves' disease, a bilateral thyroidectomy, a drain placed, a concurrent parathyroidectomy, and benign pathology. Hematoma patients also had more blood loss, larger thyroids, lower temperatures, and higher blood pressures postoperatively. On multivariate analysis, independent associations with hematoma were use of a drain (odds ratio, 2.79), Graves' disease (odds ratio, 2.43), benign pathology (odds ratio, 2.22), antiplatelet/anticoagulation medications (odds ratio, 2.12), use of a hemostatic agent (odds ratio, 1.97), and increased thyroid mass (odds ratio, 1.01).
Conclusion: A significant number of patients with a postoperative hematoma present >6 hours after thyroidectomy. Hematoma is associated with patients who have a drain or hemostatic agent, have Graves' disease, are actively using antiplatelet/anticoagulation medications or have large thyroids. Surgeons should consider these factors when individualizing patient disposition after thyroidectomy.
abstract_id: PUBMED:31340806
Risk factors for neck hematoma requiring surgical re-intervention after thyroidectomy: a systematic review and meta-analysis. Background: In this systematic review and meta-analysis, we aimed to determine the risk factors associated with neck hematoma requiring surgical re-intervention after thyroidectomy.
Methods: We systematically searched all articles available in the literature published in PubMed and CNKI databases through May 30, 2017. The quality of these articles was assessed using the Newcastle-Ottawa Quality Assessment Scale, and data were extracted for classification and analysis by focusing on articles related with neck hematoma requiring surgical re-intervention after thyroidectomy. Our meta-analysis was performed according to the Preferred Reporting Items for Systematic Review and Meta-Analyses guidelines.
Results: Of the 1028 screened articles, 26 met the inclusion criteria and were finally analyzed. The factors associated with a high risk of neck hematoma requiring surgical re-intervention after thyroidectomy included male gender (odds ratio [OR]: 1.86, 95% confidence interval [CI]: 1.60-2.17, P < 0.00001), age (MD: 4.92, 95% CI: 4.28-5.56, P < 0.00001), Graves disease (OR: 1.81, 95% CI: 1.60-2.05, P < 0.00001), hypertension (OR: 2.27, 95% CI: 1.43-3.60, P = 0.0005), antithrombotic drug use (OR: 1.92, 95% CI: 1.51-2.44, P < 0.00001), thyroid procedure in low-volume hospitals (OR: 1.32, 95% CI: 1.12-1.57, P = 0.001), prior thyroid surgery (OR: 1.93, 95% CI: 1.11-3.37, P = 0.02), bilateral thyroidectomy (OR: 1.19, 95% CI: 1.09-1.30, P < 0.0001), and neck dissection (OR: 1.55, 95% CI: 1.23-1.94, P = 0.0002). Smoking status (OR: 1.19, 95% CI: 0.99-1.42, P = 0.06), malignant tumors (OR: 1.00, 95% CI: 0.83-1.20, P = 0.97), and drainage used (OR: 2.02, 95% CI: 0.69-5.89, P = 0.20) were not significantly associated with postoperative neck hematoma.
Conclusion: We identified certain risk factors for neck hematoma requiring surgical re-intervention after thyroidectomy, including male gender, age, Graves disease, hypertension, antithrombotic agent use, history of thyroid procedures in low-volume hospitals, previous thyroid surgery, bilateral thyroidectomy, and neck dissection. Appropriate intervention measures based on these risk factors may reduce the incidence of postoperative hematoma and yield greater benefits for the patients.
abstract_id: PUBMED:36540508
Post-Thyroidectomy Hematoma: Risk Factors To Be Considered for Ambulatory Thyroidectomy. Introduction: Post-thyroidectomy hematoma is a serious, potentially life-threatening complication and it is the most frequent indication for reoperation. In this study, we aimed to identify the risk factors and prevalence of post-thyroidectomy hematoma and perform a literature review of the incidence of post-thyroidectomy hematoma at other centers in Saudi Arabia.
Methods: We retrospectively enrolled 372 consecutive patients who had undergone thyroidectomy between January 2015 and December 2020. Patients with bleeding disorders were excluded from the study. Data were analyzed using SPSS software, version 20.0 (IBM Corp., Armonk, NY).
Results: Three female (0.8%) patients developed a hematoma after thyroidectomy. Two patients developed a hematoma 24 hours after surgery and were treated conservatively. The third patient developed a hematoma within two hours of surgery and required surgical intervention. None of the patients required a tracheostomy, and there was no mortality. No significant association was found between age, sex, final pathology, the extent of thyroidectomy, and risk of hematoma.
Conclusion: A post-thyroidectomy hematoma is a rare but dangerous complication. Identifying the risk factors for hematoma formation is of great importance, particularly when considering outpatient thyroidectomy. A large prospective multicenter study is needed for further investigation.
Answer: Outpatient thyroidectomy, where patients are discharged on the same day as their surgery, has been a topic of debate due to concerns about the safety and management of potential complications, such as postoperative hematoma. Postoperative hematoma after thyroidectomy is a rare but serious complication that can be life-threatening if not promptly recognized and treated. The safety of outpatient thyroidectomy is contingent upon understanding the risk factors and timing of postoperative hematoma occurrence.
Several studies have identified risk factors for postoperative hematoma after thyroidectomy. These include previous thyroid operation and large dominant nodule size (PUBMED:22714575), older age, male sex, higher body mass index (BMI), higher American Society of Anesthesiologists (ASA) score, preoperative use of anticoagulant therapy, thyroidectomy for retrosternal goiter, larger thyroid specimens, longer operative time, higher postoperative blood pressure, and the use of postoperative subcutaneous heparin (PUBMED:35798991). Other factors include a BMI greater than 30 (PUBMED:28765530), postoperative hypertension, diabetes, and high post-operative drain output (PUBMED:26687739), as well as male sex, older age, higher BMI, unilateral lateral neck dissection, drain placement, and shorter operative times (PUBMED:37850042).
The timing of hematoma formation is also critical. A significant proportion of hematomas occur within the first 24 hours post-surgery, with many occurring within the first 6 hours (PUBMED:22714575, PUBMED:35798991, PUBMED:28765530, PUBMED:37850042). This suggests that close monitoring during the immediate postoperative period is essential.
Given these risk factors and the timing of hematoma occurrence, the safety of outpatient thyroidectomy may be questionable for certain patient populations. Patients with identified risk factors should be closely monitored, particularly during the first six hours after the operation, and may need to be excluded from outpatient surgery (PUBMED:37850042). While some studies suggest that selected patients undergoing outpatient thyroidectomy have a decreased risk of complications (PUBMED:33360304), it is crucial to individualize patient disposition after thyroidectomy, considering the risk factors for each patient (PUBMED:24206619).
In conclusion, while outpatient thyroidectomy can be safe for some patients, it is not universally recommended due to the risk of hematoma and the associated risk factors. Surgeons must carefully assess each patient's risk profile before considering outpatient thyroidectomy and ensure that there are protocols in place for the early detection and management of postoperative hematoma. The decision to discharge a patient on the same day as their thyroidectomy should be made on a case-by-case basis, with a thorough understanding of the patient's individual risk factors and with appropriate postoperative monitoring and follow-up plans in place (PUBMED:22714575, PUBMED:35798991, PUBMED:28765530, PUBMED:26687739, PUBMED:37850042, PUBMED:33360304, PUBMED:24206619).
Furthermore, the literature indicates that while some risk factors such as smoking status, malignant tumors, and drainage used may not be significantly associated with postoperative neck hematoma (PUBMED:31340806), other factors like male gender, age, Graves disease, hypertension, antithrombotic drug use, thyroid procedure in low-volume hospitals, prior thyroid surgery, bilateral thyroidectomy, and neck dissection are associated with a higher risk of neck hematoma requiring surgical re-intervention (PUBMED:31340806).
In summary, while outpatient thyroidectomy may be considered for a subset of patients with low-risk profiles, it is not advisable for all patients due to the potential for serious complications such as postoperative hematoma. Careful patient selection, risk assessment, and postoperative care are essential to ensure the safety of outpatient thyroidectomy. |
Instruction: A questionnaire: will plateletpheresis donors accept multicomponent donation?
Abstracts:
abstract_id: PUBMED:12350055
A questionnaire: will plateletpheresis donors accept multicomponent donation? Background And Objectives: New technological developments make it possible to collect red blood cells (RBC) by apheresis which provides standardised products and has the potential for improved RBC quality. The purpose of this study was to evaluate the donors' opinion about the multicomponent donation procedure.
Material And Methods: For evaluating the donors' opinion about this new apheresis technique we compiled a questionnaire. The questionnaire was given to all single needle actual plateletpheresis donors (n = 133) that donated platelets in our Institute during February-March 2001. The questionnaire contained 12 questions related to: (1) general information about previous donations of our donors and (2) donors' opinion about multicomponent donation. After implementation of multicomponent donation in December 2001 the data of the questionnaire were compared with the actual opinions of the donors about the procedure.
Results: The mean age of the donors was 38.1 +/- 9.1 years. The median number of previous platelet donations among the interviewed donors was 30. The majority of donors (92.4%) were willing to undergo multicomponent donation. At the same time, the majority of donors (74.8%) were willing to donate multicomponents four times per year. The difference in donation time was not a decisive argument for the donors regarding multicomponent donation, whereas the reduction in the incidence of transfusion-transmitted diseases was a motivation for them. The decrease in hemoglobin and the side effects caused by possible iron-supplementation therapy were found acceptable by most of our donors. Approximately 74% of the donors thought that the donation of a second component should result in better remuneration, whereas 20% of them believed that the remuneration should be unchanged. Seventy-five RBC units had been collected concurrently with platelets since December 15th, 2001. Six donors (7.4%) were unwilling to donate an additional RBC unit.
Conclusion: Acceptance and refusal rates after the implementation of multicomponent donation were almost the same as at the time the interview was performed. The majority of donors were highly motivated to donate multicomponents; by these means we were able to increase our RBC supply and to improve the standardization of our products.
abstract_id: PUBMED:34349455
A comparative study of knowledge, attitude, and practices about organ donation among blood donors and nonblood donors. Introduction: The shortage of organs for donation is a national problem that requires a multipronged approach to strengthen donation. Educating people and increasing awareness of the need for donation is the foremost priority, and identifying the target population most likely to respond is very important for reaping the maximum results. There is speculation that blood donors would be more amenable to and likely to accept the idea of organ donation. This study was designed to examine that question.
Methodology: This was a cross-sectional comparative questionnaire-based study of two groups: blood donors and nonblood donors. Donors were defined as individuals aged above 18 years who had made at least one whole blood/apheresis donation. Nondonors were those aged above 18 years who had not donated whole blood/apheresis blood products in the past. All responses were entered into Microsoft Excel sheets, and statistical analysis was carried out using the Statistical Package for the Social Sciences.
Results: A total of 829 individuals participated in the study; 416 were donors and 413 were nondonors. There was no difference in knowledge regarding organ donation between the groups except for the perceived risks of organ donation among nondonors. Attitudes were more favorable among blood donors, and this difference was statistically significant (P < 0.05).
Conclusion: There was no difference with respect to knowledge between donors and nondonors. However, donors had a more favorable attitude toward organ donation. Factors like concerns about misuse of donated organs, lack of clarity on their religion's policy toward organ donation, and potential for harm for the organ donor seem to account for the unfavorable attitude of nondonors toward organ donation.
abstract_id: PUBMED:25254019
Blood donation by elderly repeat blood donors. Background: Upper age limits for blood donors are intended to protect elderly blood donors from donor reactions. However, due to a lack of data about adverse reactions in elderly blood donors, upper age limits are arbitrary and vary considerably between different countries.
Methods: Here we present data from 171,231 voluntary repeat whole blood donors beyond the age of 68 years.
Results: Blood donations from repeat blood donors beyond the age of 68 years increased from 2,114 in 2005 to 38,432 in 2012 (from 0.2% to 4.2% of all whole blood donations). Adverse donor reactions in repeat donors decreased with age and were lower than in the whole group (0.26%), even in donors older than 71 years (0.16%). However, from the age of 68 years, the time to complete recovery after donor reactions increased. Donor deferrals were highest in young blood donors (21.4%), but increased again in elderly blood donors beyond 71 years (12.6%).
Conclusion: Blood donation by regular repeat blood donors older than 71 years may be safely continued. However, due to a lack of data for donors older than 75 years, blood donation in these donors should be handled with great caution.
abstract_id: PUBMED:36324440
Media use and organ donation willingness: A latent profile analysis from Chinese residents. Background: Previous studies have treated the media as an important channel for conveying knowledge about organ donation but have not divided groups according to the degree of media use to study differences in organ donation. Therefore, the purpose of this study was to explore the influence of media use on organ donation willingness and the factors influencing the willingness of people with different levels of media use.
Methods: A cross-sectional questionnaire survey of residents from 120 cities in China was conducted. Using Mplus 8.3 software, a latent profile analysis of seven media-use items was performed, and multiple linear regression was used to analyze the influence of varying levels of media use on organ donation willingness in different populations.
Results: The interviewees were divided into three groups, namely "Occluded media use" (9.7%), "Ordinary media use" (67.1%) and "High-frequency media use" (23.2%). Compared with ordinary media use, high-frequency media use (β = 0.06, P < 0.001) was positively associated with willingness to accept organ donation, whereas occluded media use (β = -0.02, P < 0.001) was negatively associated with it. The factors influencing residents' willingness to accept organ donation differed among the occluded, ordinary and high-frequency media use groups.
Conclusion: It is necessary to formulate personalized and targeted dissemination strategies of organ donation health information for different media users.
abstract_id: PUBMED:30299264
Oocyte donors’ awareness of the donation procedure and risks: A call for developing guidelines for health tourism in oocyte donation programmes. Objective: In recent years, oocyte donation programmes have spread widely worldwide, becoming a driver of health tourism. In some countries, donation programmes are tightly regulated, whereas in others the guidelines or regulations are not well defined. The objective was to evaluate donors’ awareness of the donation programmes and the ethical consequences of enrolling in these programmes.
Material And Methods: A detailed questionnaire-based survey was conducted to evaluate the donors’ main drive to get involved in the donation programme and the donor’s knowledge and awareness of risk factors.
Results: The majority of the donors (70%) were undergoing donation programmes for financial gain through compensation. The donors were particularly unaware of the long-term medical risks and the possibility of identity exposure through genetic screening.
Conclusion: The main duty of health professionals is to counsel donors about the basic procedures and any possible problems they may face during the donation programmes. Reimbursement of oocyte donors is a slippery slope in oocyte donation programmes. High compensation may make women think that donation is a profession without considering possible risks. Furthermore, with the wider use of direct-to-consumer genetic testing, genetic anonymity may be at risk; thus, donors have to be counselled properly. Therefore, in this era of health tourism, it is crucial to set up well-defined counselling bodies in all oocyte donation centres and enable donors to make an informed choice in becoming oocyte donors.
abstract_id: PUBMED:32295744
Views of French oocyte donors at least 3 years after donation. Research Question: The study aimed to evaluate the percentage of oocyte donors who regretted their donation at least 3 years later.
Design: Between December 2018 and January 2019, this single-centre study sought to contact by telephone all women who had donated oocytes during the 6-year period from 2010 to 2015 at the Lille Centre for the study and storage of eggs and spermatozoa (CECOS).
Results: Among 118 women, 72 responded to the questionnaire by telephone and were included in the study. The response rate was 61%. No woman regretted having donated an oocyte, and 89% said that they would do it again in the same situation. The survey distinguished two types of donors: 'relational' (58%) and 'altruistic' (42%); some of their responses differed. Ninety per cent of the women had talked about the donation to family and friends. Among them, 74% felt supported by their family and friends, and 72% by their partner. The donation was something that 76% of the women sometimes thought about; 83% felt that this donation was something useful that they had accomplished. Finally, most donors felt that oocyte donation should remain unremunerated and anonymous.
Conclusions: None of the donors we interviewed regretted their donation. In France, the current principles governing this donation appear satisfactory to oocyte donors.
abstract_id: PUBMED:23905099
The knowledge, attitude and practice towards blood donation among voluntary blood donors in Chennai, India. Introduction: An integrated strategy for blood safety is required for the provision of safe and adequate blood. Recruiting a sufficient number of safe blood donors is an emerging challenge. The shortage of blood in India is due to an increase in demand with fewer voluntary blood donors. A study of the knowledge, attitude and practice of donors may prove to be useful in the successful implementation of the blood donation programme. Our aim was to determine the level of knowledge, attitude and practice of blood donation among voluntary blood donors.
Material And Methods: A structured questionnaire was given to 530 voluntary blood donors to assess their knowledge, attitude and practice with respect to blood donations. The statistical analyses were done by using the SPSS software. The associations between the demographic factors were analysed by using the Chi square test.
Results: Among the 530 donors, 436 (93%) were males and 36 (7%) were female donors. 273 (51.2%) donors knew about the interval of the donation and 421 (79.4%) donors knew about the age limit for the donation. 305 (57%) donors felt that creating an opportunity for the donation was an important factor for motivating the blood donation and 292 (55%) donors felt that the fear of pain was the main reason for the hesitation of the donors in coming forward to donate blood.
Conclusion: A majority of the donors were willing to be regular donors. The donors showed positive effects like a sense of satisfaction after the donation. Creating an opportunity for blood donation by conducting many blood donation camps may increase the voluntary blood donations.
abstract_id: PUBMED:30598827
Motivational factors for blood donation, potential barriers, and knowledge about blood donation in first-time and repeat blood donors. Background: Blood transfusion is an essential component of the health care system of every country and patients who require blood transfusion service as part of the clinical management of their condition have the right to expect that sufficient and safe blood will be available to meet their needs. However, this is not always the case, especially in developing countries. To recruit and retain adequate regular voluntary non-remunerated blood donors the motivators and barriers of donors must be understood. Equally important to this goal is the knowledge of blood donors.
Methodology: A cross-sectional study was conducted at the donor clinic of Tamale Teaching Hospital in the Northern Region of Ghana from 06 January to 02 February 2018. Purposive sampling technique was used to sample 355 eligible first-time and repeat whole blood donors. Data were collected face-to-face with a 27-item self-administered questionnaire. Chi-square test was used to determine the association between donor status and the motivators of blood donation, barriers to blood donation and the socio-demographic characteristics of donors.
Results: Out of the 350 donors, 192 (54.9%) were first-time blood donors while 158 (45.1%) were repeat donors. Nearly all the donors, 316 (90.3%), indicated they were motivated to donate when someone they know is in need of blood. Over four-fifths of the donors endorsed good attitude of staff (n = 291, 83.4%) and the desire to help other people in need of blood (n = 298, 85.1%) as motivators. Approximately two-thirds, 223 (63.7%), of the donors endorsed poor attitude of staff as a deterrent to blood donation. More than half of the donors considered the level of privacy provided during pre-donation screening (n = 191, 54.6%) and the concern that donated blood may be sold (n = 178, 50.9%) as deterrents. Only a little over one-third of the donors knew the minimum age for blood donation (n = 126, 36.0%) and the maximum number of donations per year (n = 132, 37.7%).
Conclusion: Our findings suggest that public education on blood donation, regular prompts of donors to donate when there is a shortage, and friendly attitude of staff have the potential to motivate donors and eliminate barriers to blood donation.
abstract_id: PUBMED:35633143
Knowledge and attitudes of apheresis donors regarding apheresis blood donation. Background: Demand for apheresis blood donation has increased with the widening of the use of blood transfusion and the decrease in the donor pool. The knowledge level of apheresis donors, their attitudes such as donating again and recommending others to donate via apheresis are important in meeting this demand.
Objective: This analytical cross-sectional study was conducted with 182 plateletpheresis donors to determine their knowledge and attitudes regarding apheresis blood donation.
Material And Methods: Participants were asked 34 questions (which were prepared based on the literature and refined by expert opinion and pre-administration) to determine their level of knowledge regarding apheresis. A value of 1 point was assigned for each 'correct' answer and 0 points for 'wrong' and 'do not know' answers. Participants' total knowledge scores were scaled to a value between 0 and 100 (i.e., the score of each group was divided by the number of questions and multiplied by 100). Participant attitudes were evaluated based on responses to 14 questions using a 5-point Likert questionnaire.
Results: Total knowledge scores regarding apheresis were moderate (55 ± 15.2). Those who were educated above the university level (compared to primary school and less, middle and high-school education levels) had higher knowledge scores regarding apheresis. In general, participants had a positive attitude regarding the importance and effects of apheresis blood donation. Those with the following characteristics had a positive attitude (p < 0.05) regarding the importance of apheresis blood donation: female (compared to male), single (compared to married), 18-33 years of age (compared to the 34-49 and 50-65 years of age groups), above-university level of education (compared to primary school and less, middle and high-school education levels), informed regarding apheresis blood donation, first-time donors and donors to unknown recipients.
Conclusion: Study participants demonstrated a moderate level of knowledge and positive attitude regarding the importance of apheresis blood donation. Thus, to enhance attitudes on the procedure and reduce the risk of recipient infections, blood donors should be better informed regarding apheresis blood donation.
abstract_id: PUBMED:36625440
We want to feel valued: egg donors' experiences of donation. Egg donation in New Zealand is identity-release, with donor-conceived individuals having the right to access donors' identifying information at the age of 18. It also allows donors and previously unknown recipients to meet prior to donation. Further, donation is altruistic, although reimbursement of costs is possible. In our previous paper we explored the motivations of 21 egg donors in this context and reported that they are motivated to donate as an act of personal gift-giving to recipients who may become known to them through donation, and that they do not want to be compensated for this financially. In this paper, drawing on in-depth interviews, we report on donors' experiences of the donation process and subsequent to donation. Donors understood their donations to be a significant act, both for the recipients and their families and for themselves, particularly given the multiple sacrifices which they willingly made. Donors wished for their gift and their role to be valued and acknowledged through being appreciated, informed, involved and supported by recipients and clinics before, during and after their donations. These findings have implications for clinical practice and care, offering insight into how best to support donors prior and subsequent to donation.
Answer: The study described in abstract PUBMED:12350055 aimed to evaluate plateletpheresis donors' opinions about multicomponent donation, which involves collecting red blood cells (RBC) by apheresis along with platelets. The questionnaire was given to 133 single needle actual plateletpheresis donors at the Institute during February-March 2001. The results showed that the majority of donors (92.4%) were willing to participate in multicomponent donation, and 74.8% were willing to donate multicomponents four times per year. The reduction of transfusion-transmitted diseases was a motivating factor for donors, and most found the potential decrease in hemoglobin and side effects from possible iron-supplementation therapy acceptable. Approximately 74% of donors believed that donating a second component should result in better remuneration, while 20% thought remuneration should remain unchanged. After implementing multicomponent donation in December 2001, 75 RBC units were collected concurrently with platelets, and only 6 donors (7.4%) were unwilling to donate an additional RBC unit. The study concluded that acceptance rates were almost equal after the implementation of multicomponent donation compared to when the interview was conducted, indicating that the majority of donors were highly motivated to donate multicomponents, which allowed the Institute to increase its RBC supply and improve product standardization. |
Instruction: Should chronic treatment-refractory akathisia be an indication for the use of clozapine in schizophrenic patients?
Abstracts:
abstract_id: PUBMED:1353492
Should chronic treatment-refractory akathisia be an indication for the use of clozapine in schizophrenic patients? Background: Clozapine, an atypical neuroleptic, is an effective medication in a subgroup of schizophrenic patients who have either failed to respond to the typical neuroleptics or experienced intolerable side effects such as neuroleptic malignant syndrome and disabling tardive dyskinesia. Its efficacy for persistent and disabling akathisia is less clear. Akathisia, especially the chronic and disabling form, can be a treatment dilemma for the clinician and the patient.
Method: We describe three representative case illustrations of schizophrenic patients who had severe, persistent treatment-resistant akathisia. Two of them had refractory psychoses and the third had multiple disabling side effects during treatment with typical neuroleptics. Two had tardive dyskinesia. These patients were treated with clozapine while other neuroleptics were discontinued.
Results: During a 2-year follow-up, these patients made impressive social and vocational strides coinciding with a fairly rapid remission of akathisia (under 3 months) and a lesser though notable improvement in the psychoses. Tardive dyskinesia also remitted, though over a period of 6 to 12 months.
Conclusion: Our experience leads us to suggest a trial of clozapine in a subgroup of schizophrenic patients, who in addition to refractory psychoses have persistent disabling akathisia. However, given the risk of agranulocytosis with clozapine, we suggest that the usual treatment strategies for akathisia be tried before clozapine is initiated in the approved manner. Future controlled trials of clozapine that specifically investigate persistent akathisia may answer this question more conclusively.
abstract_id: PUBMED:9269253
Clozapine treatment for neuroleptic-induced tardive dyskinesia, parkinsonism, and chronic akathisia in schizophrenic patients. Background: Previous studies on the use of clozapine in neuroleptic-resistant chronic schizophrenic patients have demonstrated positive effects on tardive dyskinesia but were less conclusive about chronic akathisia and parkinsonism. The aim of the present study was to investigate the short-term (18 weeks) efficacy of clozapine in neuroleptic-resistant chronic schizophrenic patients with coexisting tardive dyskinesia, chronic akathisia, and parkinsonism.
Method: Twenty chronic, neuroleptic-resistant schizophrenic patients with coexisting tardive dyskinesia, parkinsonism, and chronic akathisia were treated with clozapine. Assessment of tardive dyskinesia, parkinsonism, and chronic akathisia was made once weekly for 18 weeks with the Abnormal Involuntary Movement Scale (AIMS), Simpson-Angus Rating Scale for Extrapyramidal Side Effects, and Barnes Rating Scale for Drug-Induced Akathisia (BAS).
Results: At the end of 18 weeks of clozapine treatment, improvement rates were 74% for tardive dyskinesia, 69% for parkinsonism, and 78% for chronic akathisia. A statistically significant reduction in the scores on the AIMS and Simpson-Angus Scale was achieved at Week 5 and on the BAS at Week 6 (p < .0001).
Conclusion: Relatively low doses of clozapine are effective for the treatment of neuroleptic-induced extrapyramidal syndromes in neuroleptic-resistant chronic schizophrenic patients. The relief of tardive dyskinesia, parkinsonism, and chronic akathisia in this group of patients occurs more rapidly than the reduction in psychotic symptoms. Disturbing, long-term extrapyramidal syndromes in chronic schizophrenic patients should be considered an indication for clozapine treatment.
abstract_id: PUBMED:9413413
Olanzapine in treatment-refractory schizophrenia: results of an open-label study. The Spanish Group for the Study of Olanzapine in Treatment-Refractory Schizophrenia. Background: Clozapine is currently the treatment of choice for neuroleptic-resistant schizophrenia. Olanzapine is a new antipsychotic drug that has shown efficacy against positive and negative symptoms of schizophrenia, with minimal extrapyramidal side effects. However, the effectiveness of olanzapine has not yet been reported among treatment-refractory schizophrenic patients.
Method: A total of 25 schizophrenic patients (DSM-IV criteria) with documented lack of response to two conventional antipsychotic drugs entered this 6-week prospective, open-label treatment trial with olanzapine 15 to 25 mg/day. An optional extension up to 6 months was provided.
Results: As a group, the olanzapine-treated patients showed statistically significant improvement (p < .05) in both positive and negative symptoms by the end of 6 weeks of therapy. Overall, 9 of the patients (36%) met the a priori criteria for treatment-response (> or = 35% decrease in Brief Psychiatric Rating Scale [BPRS] total score, plus posttreatment Clinical Global Impression-Severity < or = 3 or BPRS total < 18). Only one patient discontinued treatment because of an adverse event during the study. Despite the relatively high dosages of olanzapine used, there were no reports of parkinsonism, akathisia, or dystonia, and no patients required anticholinergic medication.
Conclusion: This open study suggests that olanzapine may be effective and well tolerated for a substantial number of neuroleptic-resistant schizophrenic patients. Further blinded, controlled trials are needed to confirm our results.
abstract_id: PUBMED:1357741
Treatment of the neuroleptic-nonresponsive schizophrenic patient. The treatment and management of neuroleptic-resistant schizophrenic patients, who comprise 5 to 25 percent of all patients with that diagnosis, are major problems for psychiatry. In addition, another large group of schizophrenic patients, perhaps 5 to 20 percent, are intolerant of therapeutic dosages of neuroleptic drugs because of extrapyramidal symptoms, including akathisia, parkinsonism, and tardive dyskinesia. Because about 60 percent of neuroleptic-resistant schizophrenic patients respond to clozapine and a large percentage of neuroleptic-intolerant patients are able to tolerate clozapine, it should be considered the treatment of choice for such patients until other therapies are proven to be superior. A trial of clozapine alone should usually be continued for up to 6 months before it is terminated or supplemental agents are tried. Plasma levels of clozapine may be useful to guide dosage. The major side effects of clozapine are granulocytopenia or agranulocytosis (1%-2%) and a dose-related increase in the incidence of generalized seizures. Psychosocial treatments such as education of the patient and the family about the nature of the illness, rehabilitation programs, social skills training, and assistance in housing are generally needed to obtain optimal benefit from clozapine, as with other somatic therapies. If clozapine is unavailable, unacceptable, or not tolerated, a variety of approaches may be employed to supplement typical antipsychotic drugs for patients who do not respond adequately to these agents alone. These include lithium; electroconvulsive therapy; carbamazepine or valproic acid; benzodiazepines; antidepressant drugs; reserpine; L-dopa and amphetamine; opioid drugs; calcium channel blockers; and miscellaneous other pharmacologic approaches. The evidence for the efficacy of these ancillary somatic therapies in treatment-resistant patients is relatively weak. Polypharmacy should be tried only for discrete periods and with clear goals. If these are not achieved, supplemental medications should be discontinued. Psychosurgery is not a recommended alternative at this time.
abstract_id: PUBMED:31488788
Risk Factors for Noncompliance with Antipsychotic Medication in Long-Term Treated Chronic Schizophrenia Patients. Background: The attitudes of schizophrenic patients toward medications directly impact the treatment compliance. Although noncompliance represents a serious concern in long-term schizophrenia treatment, a detailed information on the factors that impair compliance is still limited. The present study aims to assess the factors related to noncompliance with antipsychotics agents, in long-term treated chronic paranoid schizophrenia patients.
Subjects And Methods: Two groups of such patients (total n=162) were analyzed and compared: 1) patients with symptomatic remission on haloperidol (n=32), clozapine (n=40) or olanzapine (n=45), and 2) drug-resistant patients (n=45). The mean duration of the disease was 19.3 years.
Results: Altogether, in our patient sample, a better drug attitude was found in the olanzapine and clozapine groups. Our findings also revealed that a worse attitude toward antipsychotics correlated with an earlier onset of schizophrenia, younger patient age, shorter duration of the disease, higher burden of symptoms, treatment with typical antipsychotics, and higher severity of akathisia.
Conclusion: Our results suggest that detecting factors that influence the patient's attitude toward medications might be helpful for designing targeted educational strategies in chronic schizophrenia patients (particularly those with the high risk of noncompliance), and further trials are warranted to explore this topic.
abstract_id: PUBMED:16650902
Motoric neurological soft signs and psychopathological symptoms in schizophrenic psychoses. Motoric neurological soft signs (NSS) were investigated by means of the Brief Motor Scale (BMS) in 82 inpatients with DSM-III-R schizophrenic psychoses. To address potential fluctuations of psychopathological symptoms and extrapyramidal side effects, patients were examined in the subacute state, twice, at an interval of 14 days on average. NSS were significantly correlated with severity of illness, lower social functioning, and negative symptoms. Modest but significant correlations were found between NSS and extrapyramidal side effects as assessed on the Simpson-Angus Scale. Neither the prescribed neuroleptic dose nor scores for tardive dyskinesia and akathisia were significantly correlated with NSS. Moreover, NSS scores did not differ significantly between patients receiving clozapine and those receiving conventional neuroleptics. Patients in whom psychopathological symptoms remained stable or improved over the clinical course showed a significant reduction of NSS scores. This finding did not apply to those patients in whom psychopathological symptoms deteriorated. Our findings demonstrate that NSS in schizophrenic psychoses are relatively independent of neuroleptic side effects, but they are associated with the severity and persistence of psychopathological symptoms and with poor social functioning.
abstract_id: PUBMED:17017896
Evolution of antipsychotic intervention in the schizophrenic psychosis. The accidental discovery (in the 1950s) and subsequent development of antipsychotic drugs have revolutionized the care of many patients with the schizophrenic psychoses. The first-generation antipsychotics, though effective for hallucinations and delusions, and as a treatment of the disorder in two-thirds of patients with schizophrenia, burdened many patients with extrapyramidal effects (EPS), including dystonias, akathisia, and pseudo-Parkinsonian morbidity. Moreover, they had little or no effect on the most disabling, core symptoms associated with withdrawal of interests and interpersonal relationships. The second-generation antipsychotics, which began to appear in the late 1980s with the introduction of clozapine, had strikingly less morbidity, contributing little or no EPS and providing at least modest promise of reduction of negative symptoms and enhancement of some aspects of cognition. However, some second-generation antipsychotics have induced considerable weight gain, and appear to lower the threshold for the development of the metabolic syndrome, which increases cardiovascular morbidity. The actual mechanism(s) of action of the antipsychotic drugs is still in dispute. Direct and indirect effects on dopamine transmission have been supported by much of the evidence. Direct blockade of dopamine hyperactivity and partial restoration of deficient dopamine has been the standard explanation of their effects. However, dysfunctional intracellular signal transduction and dysfunction of myelin are emerging as competing pathologies upon which antipsychotics act. It is likely that the next generation of antipsychotics will act more directly and more specifically on such underlying neuropathology.
abstract_id: PUBMED:16900439
Risperidone augmentation of clozapine: a critical review. Objective: Atypical antipsychotics are frequently used as augmentation agents in clozapine-resistant schizophrenic patients. Risperidone (RIS) is the one most studied as a clozapine (CLZ) adjunct. The aim of this study is to critically review all published studies regarding the efficacy and safety of RIS as an adjunctive agent in CLZ-resistant schizophrenic or schizoaffective patients.
Methods: A MEDLINE search from January 1988 to June 2005 was conducted. Identified papers were examined against several clinical, pharmacological and methodological parameters.
Results: A total of 15 studies were found (2 randomized controlled trials, 3 open-label trials (OTs) and 8 case-studies (CSs)) comprising 86 schizophrenic or schizoaffective patients (mean age 38.4 years). Mean CLZ dosage during the combined treatment was 474.2 mg/day. Plasma CLZ levels were assessed in 62 patients (72.1%). RIS was added at a mean dosage of 4.6 mg/day for a mean of 7.9 weeks. Significant improvement in psychopathology was reported for 37 patients (43%). A lower RIS dosage and a longer duration of the trial seemed to be associated with a better outcome. Main side effects reported were: extrapyramidal symptoms or akathisia (9.3%), sedation (7%) and hypersalivation (5.8%).
Conclusions: Existing evidence encourages the use of RIS as an adjunctive agent in CLZ-resistant schizophrenic or schizoaffective patients.
abstract_id: PUBMED:12633119
Switching from depot antipsychotic drugs to olanzapine in patients with chronic schizophrenia. Background: Patients with chronic schizophrenia (DSM-IV criteria) often receive depot antipsychotic medications to assure longer administration and better compliance with their treatment regimen. This study evaluated whether patients stabilized on depot antipsychotic medication could be successfully transitioned to oral olanzapine.
Method: In a 3-month open-label study, 26 clinically stable patients with schizophrenia taking depot antipsychotics for over 3 years were randomly assigned to continue on their current depot dose or to switch to oral olanzapine. Clinical ratings (Positive and Negative Syndrome Scale [PANSS], Global Assessment of Functioning [GAF] scale, and Clinical Global Impressions [CGI] scale) and side effect parameters (Abnormal Involuntary Movement Scale [AIMS], Barnes Akathisia Scale, AMDP-5 scale, vital signs, and weight) were obtained monthly.
Results: Oral olanzapine patients (N = 13) demonstrated significant clinical improvement over the depot control group (N = 13) from baseline to 3-month endpoint (PANSS total, p =.012; PANSS general, p =.068; PANSS negative, p =.098; CGI-Improvement, p =.007; CGI-Severity, p =.026; GAF, p =.015). Side effect rating scales showed no statistical differences between the 2 groups (AIMS, Barnes Akathisia Scale, AMDP-5, vital signs). The depot control group showed no statistical superiority in any measure except weight change (p =.0005). After 3 months, all olanzapine patients preferred olanzapine to their previous depot medications and chose to continue on olanzapine treatment.
Conclusion: Clinicians may expect clinical improvement when switching chronically psychotic patients from traditional depot antipsychotic drugs to oral olanzapine. Switching may be completed within a 4-week period with relative compliance being maintained and patients preferring oral olanzapine to their previous depot medications.
abstract_id: PUBMED:10349044
Does akathisia influence psychopathology in psychotic patients treated with clozapine? Background: Akathisia has been reported to predict more severe symptoms and poorer treatment response to typical neuroleptics among patients with schizophrenia. Akathisia has also been associated with symptom exacerbation. This study addressed four questions: 1) Does akathisia predict greater severity in global psychopathology? 2) Is this effect global or specific? 3) Does clozapine treatment alter this relationship? 4) Does severity of psychopathology covary with the level of akathisia?
Methods: Akathisia and clinical symptoms were examined in 33 "treatment refractory" schizophrenic patients treated with clozapine across 16 weeks. Weekly ratings were Barnes Akathisia Rating Scale, Abbreviated Dyskinesia Rating Scale, and Brief Psychiatric Rating Scale (BPRS). Patients were classified as "with" (n = 15) or "without" (n = 18) akathisia. Data analyses involved independent t-test comparisons of selected variables, between-group multivariate analyses of variance across time for BPRS Total scores and Guy's five factors, and partial correlations to assess covariation between BPRS scores and level of akathisia.
Results: Akathisia predicted more severe global psychopathology, specific to the Activation (AC) and Thought Disturbance (TH) factors. These relationships did not change with clozapine treatment even when akathisia declined. Interestingly, level of akathisia did not covary with severity of psychopathology.
Conclusions: In this sample, akathisia predicted more severe psychopathology, specific to AC and TH BPRS factor scores. Clozapine treatment did not alter this relationship. Although the presence of akathisia predicted more severe symptoms, the level of akathisia did not covary across time with severity of psychopathology, suggesting an "uncoupling" of these symptom domains.
Answer: Based on the provided abstracts, chronic treatment-refractory akathisia could be considered an indication for the use of clozapine in schizophrenic patients. Clozapine, an atypical antipsychotic, has been shown to be effective in patients who have not responded to typical neuroleptics or have experienced intolerable side effects such as neuroleptic malignant syndrome, disabling tardive dyskinesia, and chronic akathisia (PUBMED:1353492; PUBMED:9269253). Specifically, studies have reported that clozapine treatment resulted in improvement rates of 78% for chronic akathisia, and this relief occurred more rapidly than the reduction in psychotic symptoms (PUBMED:9269253). Additionally, the use of clozapine has been associated with a remission of akathisia within 3 months and an improvement in social and vocational functioning in patients (PUBMED:1353492).
However, it is important to note that clozapine carries a risk of agranulocytosis, and therefore, the usual treatment strategies for akathisia should be tried before initiating clozapine (PUBMED:1353492). Furthermore, the evidence for the efficacy of clozapine specifically for persistent akathisia may need to be confirmed through future controlled trials (PUBMED:1353492).
In summary, while the evidence suggests that clozapine can be effective for chronic treatment-refractory akathisia in schizophrenic patients, it should be considered after other treatment options have been exhausted due to the potential risks associated with its use. Clinicians should weigh the benefits against the risks and monitor patients closely if clozapine is initiated for this indication. |
Instruction: Determinants of sustained virological suppression in indigent, HIV-infected patients: is single protease inhibitor-based antiretroviral therapy truly highly active?
Abstracts:
abstract_id: PUBMED:15248135
Determinants of sustained virological suppression in indigent, HIV-infected patients: is single protease inhibitor-based antiretroviral therapy truly highly active? Background: Effective virological suppression with HAART is dependent on strict adherence to therapy. Compliance with therapy is influenced by clinical and psychosocial factors.
Method: We performed a retrospective study investigating determinants of effective virological suppression, defined as <400 HIV RNA copies/mL at 11-13 months of HAART, in an urban indigent population. The study included 366 new patients presenting for care to the Thomas Street Clinic, Houston, Texas, between April and December 1998. Median age, CD4 count, and viral load (VL) of the study population were 37.5 years, 189 cells/mm³, and 53,000 copies/mL, respectively. Thirty-nine percent had AIDS, 20% had cocaine-positive drug screens, and 64% were antiretroviral naïve. Two hundred and sixty-seven patients were started on HAART. Thirty-four percent showed virological suppression.
Results: In multivariate analysis, adherence to HAART, care by an experienced primary provider, baseline VL <100,000 copies/mL, age >35 years, and no active substance use were associated with virological suppression. Rates of virological suppression with HAART are unacceptably low in this urban indigent population.
Conclusion: Low rates of virological suppression are primarily due to lack of adherence rather than late utilization of care among ethnic minorities. Single protease-inhibitor-based antiretroviral therapy does not appear to be highly active in this patient population.
abstract_id: PUBMED:16945076
Regimen-dependent variations in adherence to therapy and virological suppression in patients initiating protease inhibitor-based highly active antiretroviral therapy. Objective: To examine differences among four protease inhibitor (PI)-based drug regimens in adherence to therapy and rate of achievement of virological suppression in a cohort of antiretroviral-naive patients initiating highly active antiretroviral therapy (HAART).
Methods: Participants were antiretroviral-naive and were first dispensed combination therapy containing two nucleosides and a ritonavir (RTV)-boosted PI, or unboosted nelfinavir, between 1 January 2000 and 30 September 2003. Logistic regression analysis was used to examine associations between the prescribed PI and other baseline factors associated with being >90% adherent to therapy, and then to determine the associations of prescribed drug regimen, adherence to therapy, and baseline variables with the odds of achieving two consecutive viral loads of <500 HIV-1 RNA copies/mL.
Results: A total of 385 subjects were available for analysis. Lopinavir (LPV)/RTV was prescribed for 168 patients (42% of total); 86 (22%) received indinavir (IDV)/RTV; 91 (24%) received nelfinavir (NFV); and 40 (10%) received saquinavir (SQV)/RTV. SQV/RTV-based HAART was associated with reduced adherence to therapy [odds ratio (OR)=0.40; 95% confidence interval (CI) 0.19-0.83]. In multivariate models, IDV/RTV (OR=0.45; 95% CI 0.22-0.92), SQV/RTV (OR=0.18; 95% CI 0.07-0.43) and NFV were associated with reduced odds of achieving virological suppression within 1 year in comparison to LPV/RTV-based therapy. For patients receiving NFV, adjusting for adherence (OR=0.73; 95% CI 0.36-1.47) rendered this association nonsignificant.
Conclusion: Patients prescribed IDV/RTV, NFV or SQV/RTV were less likely to achieve virological suppression on their first regimen compared with patients prescribed LPV/RTV. Reduced adherence to these therapies only partly explained these observed differences.
abstract_id: PUBMED:17352763
Superior virological response to boosted protease inhibitor-based highly active antiretroviral therapy in an observational treatment programme. Background: The use of boosted protease inhibitor (PI)-based antiretroviral therapy has become increasingly recommended in international HIV treatment consensus guidelines based on the results of randomized clinical trials. However, the impact of this new treatment strategy has not yet been evaluated in community-treated cohorts.
Methods: We evaluated baseline characteristics and plasma HIV RNA responses to unboosted and boosted PI-based highly active antiretroviral therapy (HAART) among antiretroviral-naïve HIV-infected patients in British Columbia, Canada who initiated HAART between August 1997 and September 2003 and who were followed until September 2004. We evaluated time to HIV-1 RNA suppression (<500 HIV-1 RNA copies/mL) and HIV-1 RNA rebound (≥500 copies/mL), while stratifying patients into those who received boosted and unboosted PI-based HAART as the initial regimen, using Kaplan-Meier methods and Cox proportional hazards regression.
Results: During the study period, 682 patients initiated therapy with an unboosted PI and 320 individuals initiated HAART with a boosted PI. Those who initiated therapy with a boosted PI were more likely to have a CD4 cell count <200 cells/µL, a plasma HIV RNA >100,000 copies/mL, and AIDS at baseline (all P<0.001). However, when we examined virological response rates, those who initiated HAART with a boosted PI achieved more rapid virological suppression [relative hazard 1.26, 95% confidence interval (CI) 1.06-1.51, P=0.010].
Conclusions: Patients prescribed boosted PIs achieved superior virological response rates despite baseline factors that have been associated with inferior virological responses to HAART. Despite the inherent limitations of observational studies, which require that this study be interpreted with caution, these findings support the use of boosted PIs for initial HAART therapy.
abstract_id: PUBMED:15134189
Long-term virological response to multiple sequential regimens of highly active antiretroviral therapy for HIV infection. Objective: Information about the virological response to sequential highly active antiretroviral therapy (HAART) for HIV infection is limited. The virological response to four consecutive therapies was evaluated in the Swiss HIV Cohort.
Design: Retrospective analysis in an observational cohort.
Methods: 1140 individuals receiving uninterrupted HAART for 4.8 ± 0.6 years were included. The virological response was classified as success (<400 copies/ml), low-level failure (LF: 400-5000 copies/ml), or high-level failure (HF: >5000 copies/ml). Potential determinants of the virological response, including patient demographics, treatment history, and virological response to previous HAART regimens, were analysed using survival and logistic regression analyses.
Results: 40.1% failed virologically on the first (22.0% LF; 18.1% HF), 35.1% on the second (14.2% LF; 20.9% HF), 34.2% on the third (9.9% LF; 24.3% HF) and 32.7% on the fourth HAART regimen (9% LF; 23.7% HF). Nucleoside pre-treatment (OR: 2.34; 95% CI: 1.67-3.29) and low baseline CD4 T-cell count (OR: 0.79/100 cells rise; 95% CI: 0.72-0.88) increased the risk of HF on the first HAART. Virological failure on HAART with HIV-1 RNA levels exceeding 1000 copies/ml predicted a poor virological response to subsequent HAART regimens. A switch from a protease inhibitor- to a non-nucleoside reverse transcriptase inhibitor-containing regimen significantly reduced the risk of HF. Multiple switches of HAART did not affect the recovery of CD4 T lymphocytes.
Conclusion: Multiple sequential HAART regimens do not per se reduce the likelihood of long-term virological suppression and immunological recovery. However, early virological failure significantly increases the risk of subsequent unfavourable virological responses. The choice of a potent initial antiretroviral drug regimen is therefore critical.
abstract_id: PUBMED:12891060
Virological rebound after suppression on highly active antiretroviral therapy. Objective: To determine the rate of virological rebound and factors associated with rebound among patients on highly active antiretroviral therapy (HAART) with previously undetectable levels of viraemia.
Design: An observational cohort study of 2444 patients from the EuroSIDA study.
Methods: Patients were followed from their first viral load under 400 copies/ml to the first of two consecutive viral loads above 400 copies/ml. Incidence rates were calculated using person-years of follow-up (PYFU), and Cox proportional hazards models were used to determine factors related to rebound.
Results: Of 2444 patients, 1031 experienced virological rebound (42.2%). The incidence of rebound decreased over time, from 33.5 per 100 PYFU in the first 6 months after initial suppression to 8.6 per 100 PYFU at 2 years after initial suppression (P < 0.0001). The rate of rebound was lower for treatment-naive compared with treatment-experienced patients. In multivariate models, patients who changed treatment were more likely to rebound, as were patients with higher viral loads on starting HAART. Treatment-naive patients were less likely to rebound. Among pretreated patients, those who were started on new nucleosides were less likely to rebound.
Conclusion: The rate of virological rebound decreased over time, suggesting that the greatest risk of treatment failure is in the months after initial suppression. Treatment-naive patients were at a lower risk of rebound; among drug-experienced patients, those who added new nucleosides had a lower risk of rebound, as did patients with a good immunological response.
abstract_id: PUBMED:19043927
Determinants of virological failure after successful viral load suppression in first-line highly active antiretroviral therapy. Background: We aimed to investigate the long-term virological outcomes of a cohort initially showing good responses to first-line highly active antiretroviral therapy (HAART) with no evidence of virological failure during the first year after achieving viral load (VL) undetectability (<50 copies/ml).
Methods: Virological failure was defined as a confirmed VL >400 copies/ml or a single VL >400 copies/ml followed by a treatment change or end of follow-up. Risk factors for low-level VL rebound (50-400 copies/ml) in the first year after achieving undetectability and for virological failure during subsequent follow-up were investigated by logistic and Poisson regression.
Results: In the first year after achieving VL undetectability, 354/1386 (25.5%) patients experienced low-level VL rebound; the remaining patients maintained consistent undetectability. Low-level rebound occurred less commonly with non-nucleoside reverse transcriptase inhibitor (NNRTI)-based HAART than with other regimens (P = 0.01). Over a median of 2.2 (range 0.0-7.4) years of subsequent follow-up, 86 (6.2%) patients experienced virological failure, corresponding to 2.30 failures per 100 person-years (95% confidence interval [CI] 1.82-2.79). Independent predictors of virological failure included low-level rebound during the first year after achieving undetectability relative to consistent undetectability (rate ratio [RR] 2.18, 95% CI 1.15-4.10), female gender (RR 1.79, 95% CI 1.12-2.85) and receiving a ritonavir-boosted protease inhibitor (PI/r) relative to NNRTI-based HAART (RR 1.88, 95% CI 1.02-3.46).
Conclusions: Patients on first-line HAART who maintain consistent VL undetectability for 1 year have a low risk of subsequent virological failure. A subset might benefit from targeted interventions, including women and patients on PI/r-based HAART.
abstract_id: PUBMED:16021865
Sustained CD4 responses after virological failure of protease inhibitor-containing therapy. This article explores current controversies related to 'discordant' responses to antiviral therapy, that is to say, patients with a sustained CD4 cell response despite virological failure. Observational clinical data of patients receiving protease inhibitor-based therapy who had a sustained CD4 cell count even with incomplete viral suppression will be presented, as well as possible mechanisms and clinical implications.
abstract_id: PUBMED:19025495
Virological suppression achieved with suboptimal adherence levels among South African children receiving boosted protease inhibitor-based antiretroviral therapy. Sixty-six children who were receiving antiretroviral treatment were assessed for treatment adherence and virological outcome to compare boosted protease inhibitor-based regimens with nonnucleoside reverse-transcriptase inhibitor-based regimens. Children who were receiving protease inhibitor-based regimens demonstrated higher rates of virological suppression, even with poor treatment adherence (<80%). In children, boosted protease inhibitors seem to be more forgiving of poor adherence than do nonnucleoside reverse-transcriptase inhibitors.
abstract_id: PUBMED:17071284
Initial highly-active antiretroviral therapy with a protease inhibitor versus a non-nucleoside reverse transcriptase inhibitor: discrepancies between direct and indirect meta-analyses. Background: The optimum treatment choice between initial highly-active antiretroviral therapy (HAART) with a protease inhibitor (PI) versus a non-nucleoside reverse transcriptase inhibitor (NNRTI) is uncertain. An indirect analysis reported that PI-based HAART was better than NNRTI-based HAART. However, direct evidence for competing interventions is deemed more reliable than indirect evidence for making treatment decisions. We did a meta-analysis of head-to-head trials and compared the results with those of indirect analyses.
Methods: 12 trials of at least 24 weeks' duration directly compared NNRTI-based versus PI-based HAART in HIV-infected patients with limited or no previous exposure to antiretrovirals. We also identified six trials of NNRTI-based HAART and eight trials of PI-based HAART, each versus two NRTI regimens. We analysed the outcomes of virological suppression, death or disease progression, and withdrawals due to adverse events.
Findings: In the direct meta-analysis, NNRTI-based regimens were better than PI-based regimens for virological suppression (OR 1.60, 95% CI 1.31-1.96). The difference was reduced in higher-quality trials, but still favoured NNRTI-based HAART. There were no differences in death or disease progression (0.87, 0.56-1.35) or withdrawal because of adverse events (0.68, 0.43-1.08). By contrast, in indirect analyses NNRTI-based HAART was worse than PI-based HAART for virological suppression (0.26, 0.07-0.91). There were no significant differences for death or disease progression (1.28, 0.56-2.94) and withdrawals because of adverse events (1.46, 0.66-3.24). When trials of delavirdine were excluded, similar results were produced.
Interpretation: Results from direct analyses suggested that NNRTI-based HAART was more effective than PI-based HAART for virological suppression and was similar to PI-based HAART for clinical outcomes. Indirect comparisons could be unreliable for complex and rapidly evolving interventions such as HAART.
abstract_id: PUBMED:19352201
The nucleoside backbone affects durability of efavirenz- or nevirapine-based highly active antiretroviral therapy in antiretroviral-naive individuals. Objectives: We wished to determine the efficacy of nonnucleoside reverse transcriptase inhibitor (NNRTI)-based regimens in antiretroviral-naive patients commencing highly active antiretroviral therapy (HAART) and to evaluate the effect of calendar year, nucleoside analogue reverse transcriptase inhibitor (NRTI) backbone, sex, and ethnicity on treatment outcome.
Methods: Antiretroviral-naive individuals commencing efavirenz or nevirapine with dual-nucleoside analogue backbones were identified from a prospective database. Virological success was defined as HIV viral load <500 copies per milliliter. Treatment failure was defined as a switch or discontinuation of NNRTI or documented virological failure (2 measurements with viral load >500 copies/mL).
Results: From a cohort of 994 individuals, 73% commenced efavirenz- and 27% nevirapine-containing regimens. We found no differences between the 2 treatment groups for the time to virological success (proportion with virological success: efavirenz 71%, nevirapine 72%, P = 0.77) or treatment failure (proportion failing treatment: efavirenz 23%, nevirapine 26%, P = 0.58). There was a significant effect of the calendar year of commencing HAART on the time to virological success and treatment failure (P < 0.001). In the multivariable model, the likelihood of virological success for stavudine/lamivudine was 52% higher [relative hazard (RH) 1.52, 95% confidence interval (CI) 1.17 to 1.97, P = 0.002]. The nonthymidine analogue backbones as a group seemed to be the least likely to be associated with virological success (RH 0.62, 95% CI 0.48 to 0.80, P < 0.001). This was, however, largely driven by tenofovir/didanosine being significantly associated with treatment failure (RH 6.48, 95% CI 3.81 to 11.0, P < 0.001). Sex and ethnicity were not associated with treatment outcome.
Conclusions: We found no significant differences between nevirapine and efavirenz for the time to virological success or treatment failure. Calendar year of commencing HAART and NRTI backbones were significant predictors of virological success and treatment failure, which may explain differences between these data and the 2NN study. The weaker the NNRTI (or the weaker the protease inhibitor), the more important the NRTI backbone becomes.
Answer: The determinants of sustained virological suppression in indigent, HIV-infected patients include adherence to HAART, care by an experienced primary provider, baseline viral load less than 100,000 copies/mL, age over 35 years, and no active substance use (PUBMED:15248135). However, single protease inhibitor-based antiretroviral therapy does not appear to be highly active in this patient population, as evidenced by unacceptably low rates of virological suppression primarily due to lack of adherence rather than late utilization of care among ethnic minorities (PUBMED:15248135).
Moreover, regimen-dependent variations in adherence to therapy and virological suppression have been observed in patients initiating protease inhibitor-based HAART. Patients prescribed indinavir/ritonavir, nelfinavir, or saquinavir/ritonavir were less likely to achieve virological suppression on their first regimen compared with patients prescribed lopinavir/ritonavir, and reduced adherence to these therapies only partly explained these observed differences (PUBMED:16945076).
Additionally, boosted protease inhibitor-based HAART has been associated with superior virological response rates despite baseline factors that have been associated with inferior virological responses to HAART, supporting the use of boosted PIs for initial HAART therapy (PUBMED:17352763).
In summary, while adherence and certain patient characteristics are key determinants of virological suppression, the choice of antiretroviral regimen, particularly the use of boosted versus unboosted protease inhibitors, also plays a significant role in achieving sustained virological suppression in indigent, HIV-infected patients. Single protease inhibitor-based therapy may not be as effective as boosted PI-based regimens, suggesting that the latter should be considered for initial HAART therapy to improve outcomes in this population. |
Instruction: Clustering of Parkinson disease: shared cause or coincidence?
Abstracts:
abstract_id: PUBMED:31720737
Influence of methodological changes on unicausal cause-of-death statistics and the potential of a multicausal data basis. Background: As a complete survey, cause-of-death statistics are often used for research purposes, but they are sensitive to methodological changes.
Objectives: To show how sensitively the coding of the underlying cause of death reacts to methodological changes and what potential lies in the use of multicausal analyses.
Materials And Methods: The methodological examples are based on a sample of the 2016 annual material from Bavaria (age: 65 years or older). It includes n = 24,752 cases of death with information on underlying cause and multiple causes of death. The standardized ratio of multiple to underlying cause (SRMU) value was used to investigate the extent to which dementia and Parkinson's disease are underestimated in the mortality process and which other diseases are noted on the death certificate besides these causes of death.
Results: Changes in the set of rules, in the decision tables, and in the confidentiality concept can have an important influence on the coding of the underlying cause. This can lead to changes in the ranking of causes of death or to underestimation or overestimation of certain diseases. Dementia and Parkinson's disease are underestimated as causes of death in the dying process if only the underlying cause is used for analysis.
Conclusions: Temporal and regional comparisons must always be interpreted against the background of changing methodological guidelines and procedures. In this respect, multicausal analyses offer a way to mitigate the difficulties of a unicausal approach in the future.
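To make the SRMU concrete: as described in the Methods above, it relates all death-certificate mentions of a condition to the number of deaths in which that condition was selected as the underlying cause. The short Python sketch below illustrates the calculation with entirely hypothetical counts; the study's exact standardization procedure is not given in the abstract, so only the crude ratio is shown, and the function and variable names are illustrative rather than the authors'.

```python
# Illustrative sketch of the SRMU idea (hypothetical counts, not study data).
# SRMU = multiple-cause mentions of a condition / deaths with that condition
# selected as the underlying cause. Values well above 1 suggest the condition
# is underestimated when only the underlying cause is analyzed.

def srmu(multiple_cause_mentions: int, underlying_cause_deaths: int) -> float:
    """Ratio of multiple-cause mentions to underlying-cause selections."""
    if underlying_cause_deaths == 0:
        raise ValueError("Condition never selected as underlying cause.")
    return multiple_cause_mentions / underlying_cause_deaths

# Hypothetical death-certificate counts for two conditions
counts = {
    "dementia":            {"multiple": 4200, "underlying": 1500},
    "Parkinson's disease": {"multiple":  900, "underlying":  300},
}

for condition, c in counts.items():
    print(f"{condition}: SRMU = {srmu(c['multiple'], c['underlying']):.2f}")
```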
abstract_id: PUBMED:35354268
Impact of the COVID-19 epidemic on total and cause-specific mortality in Rome (Italy) in 2020 Objectives: to estimate the impact of the COVID-19 epidemic on total and cause-specific mortality in people residing and dead in the Municipality of Rome (Italy) in 2020, and to describe the causes of death of subjects with SARS-CoV-2 infection confirmed by molecular test.
Design: descriptive analysis of total and cause-specific mortality in 2020 in Rome and comparison with a reference period (2015-2018 for total mortality and 2018 for cause-specific mortality); descriptive analysis of cause-specific mortality in the cohort of SARS-CoV-2 infected subjects.
Setting And Participants: 27,471 deaths registered in the Lazio mortality-cause Registry, relating to people residing and died in the municipality of Rome in 2020, 2,374 of which died from COVID-19.MAIN OUCOME MEASURES: all-cause mortality by month, gender, age group and place of death, cause-specific mortality (ICD-10 codes).
Results: in the municipality of Rome in 2020, an excess of mortality from all causes equal to +10% was observed, with a greater increase in the months of October-December (+27%, +56%, and +26%, respectively) in people aged 50+, with the greatest contribution from the oldest age groups (80+) who died in the nursing homes or at home. Lower mortality was observed in the age groups 0-29 years (-30%) and 40-49 years (-13%). In 2020, COVID-19 represents the fourth cause of death in Rome after malignant tumours, diseases of the circulatory system, and respiratory diseases. Excess mortality was observed from stroke and pneumonia (both in men and women), from respiratory diseases (in men), from diabetes, mental disorders, dementia and Parkinson's disease (in women). On the contrary, mortality is lower for all cancers, for diseases of the blood and haematopoietic organs and for the causes of the circulatory system. The follow-up analysis of SARS-CoV-2 positive subjects residing in Rome shows that a share of deaths (about 20%) reports other causes of death such as cardiovascular diseases, malignant tumours, and diseases of the respiratory system on the certificate collected by the Italian National Statistics Institute.
Conclusions: the 2020 mortality study highlighted excesses for acute and chronic pathologies, indicative of possible delays in the diagnosis or treatment of conditions indirectly caused by the pandemic, but also a share of misclassification of the cause of death that is recognized as COVID-19 death.
abstract_id: PUBMED:8091157
Multiple cause-of-death data as a tool for detecting artificial trends in the underlying cause statistics: a methodological study. The aims of the study were: (i) to identify trends in the underlying cause-of-death statistics that are due to changes in the coders' selection and coding of causes, and (ii) to identify changes in the coders' documented registration principles that can explain the observed trends in the statistics. 31 Basic Tabulation List categories from the Swedish national cause-of-death register for 1970-1988 were studied. The coders' tendency to register a condition as the underlying cause of death (the underlying cause ratio) was estimated by dividing the occurrence of the condition as underlying cause (the underlying cause rate) by the total registration of the condition (the multiple cause rate). When the underlying cause rate series followed the underlying cause ratio series more closely than the multiple cause rate series, and a corresponding change in the registration rules could be found, the underlying cause rate trend was concluded to be due to changes in the coders' tendency to register the condition. For thirteen categories (fourteen trends), the trends could be explained by changes in the coders' interpretation practice: five upward, four insignificant, and five downward trends. In addition, for three categories the trends could be explained by new explicit ICD-9 rules.
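The underlying cause ratio described in this abstract is the complementary quantity (underlying-cause rate divided by multiple-cause rate), tracked over time to separate coder-driven trends from true mortality trends. A minimal Python sketch with hypothetical rates (the Swedish register data are not reproduced here) is shown below.

```python
# Sketch of the underlying cause ratio (hypothetical rates, not register data).
# For each year, the ratio divides the underlying cause rate by the multiple
# cause (total registration) rate; a drifting ratio alongside a stable multiple
# cause rate points to a change in coding practice rather than a real change
# in mortality.

def underlying_cause_ratio(underlying_rate: float, multiple_rate: float) -> float:
    """Underlying cause rate divided by the total (multiple cause) rate."""
    return underlying_rate / multiple_rate

# Hypothetical rates per 100,000 population for one tabulation category
underlying_by_year = {1970: 12.0, 1975: 13.5, 1980: 18.0, 1985: 22.0}
multiple_by_year = {1970: 30.0, 1975: 31.0, 1980: 30.5, 1985: 31.5}

for year in sorted(underlying_by_year):
    ratio = underlying_cause_ratio(underlying_by_year[year], multiple_by_year[year])
    print(f"{year}: underlying cause ratio = {ratio:.2f}")
```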
abstract_id: PUBMED:20570207
The cause of death in idiopathic Parkinson's disease. Objectives: To identify the cause of death in patients with idiopathic Parkinson's disease (IPD).
Background: Current literature provides little data relating to cause of death in IPD and much is based on the recording of IPD on death certificates.
Methods: All patients under the care of a Parkinson's disease (PD) service who had died between 1999 and 2006 inclusive were identified and further classified into those with IPD according to the UK PD Society Brain Bank Criteria. Details were extracted from the service database and medical notes and further information obtained from the Office for National Statistics (ONS). Corrections were made for data classified using the International Classification of Diseases (ICD) 9 classification (prior to 2001) in order to compare accurately with data classified using ICD 10. Trends in cause of death were identified. Comparative data was obtained from the ONS for a control population.
Results: Of 219 patients on the database who had died, 143 were identified as having IPD. They were more likely to be classified as dying from pneumonia, and less likely from malignancy or ischaemic heart disease, than the control population. Pneumonia was a terminal event in 45%. IPD was recorded on the death certificate in only 63% of patients.
Conclusion: As expected, pneumonia is very often the terminal event. As previously demonstrated, malignancy is uncommon. Death certificate documentation is inadequate in one third of certificates; this has implications for research.
abstract_id: PUBMED:25689131
Chemogenomic profiling of endogenous PARK2 expression using a genome-edited coincidence reporter. Parkin, an E3 ubiquitin ligase, is a central mediator of mitochondrial quality control and is linked to familial forms of Parkinson's disease (PD). Removal of dysfunctional mitochondria from the cell by Parkin is thought to be neuroprotective, and pharmacologically increasing Parkin levels may be a novel therapeutic approach. We used genome-editing to integrate a coincidence reporter into the PARK2 gene locus of a neuroblastoma-derived cell line and developed a quantitative high-throughput screening (qHTS) assay capable of accurately detecting subtle compound-mediated increases in endogenous PARK2 expression. Interrogation of a chemogenomic library revealed diverse chemical classes that up-regulate the PARK2 transcript, including epigenetic agents, drugs controlling cholesterol biosynthesis, and JNK inhibitors. Use of the coincidence reporter eliminated wasted time pursuing reporter-biased false positives accounting for ∼2/3 of the actives and, coupled with titration-based screening, greatly improves the efficiency of compound selection. This approach represents a strategy to revitalize reporter-gene assays for drug discovery.
abstract_id: PUBMED:2355241
Cause of death among patients with Parkinson's disease: a rare mortality due to cerebral haemorrhage. Causes of death, with special reference to cerebral haemorrhage, among 240 patients with pathologically verified Parkinson's disease were investigated using the Annuals of the Pathological Autopsy Cases in Japan from 1981 to 1985. The leading causes of death were pneumonia and bronchitis (44.1%), malignant neoplasms (11.6%), heart diseases (4.1%), cerebral infarction (3.7%) and septicaemia (3.3%). Cerebral haemorrhage was the 11th most frequent cause of death, accounting for only 0.8% of deaths among the patients, whereas it was the 5th most common cause of death among the Japanese general population in 1985. The low incidence of cerebral haemorrhage as a cause of death in patients with Parkinson's disease may reflect the hypotensive effect of levodopa and a hypotensive mechanism due to reduced noradrenaline levels in the parkinsonian brain.
abstract_id: PUBMED:34316605
Parkinsonism and neurosarcoidosis: Cause and effect or coincidence? Movement disorders in demyelinating diseases can be coincidental or secondary to a demyelinating lesion. We here report the first case of coincidental association of neurosarcoidosis and idiopathic Parkinson's disease.
abstract_id: PUBMED:25697585
Associations between depression and all-cause and cause-specific risk of death: a retrospective cohort study in the Veterans Health Administration. Objective: Depression may be associated with increased mortality risk, but there are substantial limitations to existing studies assessing this relationship. We sought to overcome limitations of existing studies by conducting a large, national, longitudinal study to assess the impact of depression on all-cause and cause-specific risk of death.
Methods: We used Cox regression models to estimate hazard ratios associated with baseline depression diagnosis (N=849,474) and three-year mortality among 5,078,082 patients treated in Veterans Health Administration (VHA) settings in fiscal year (FY) 2006. Cause of death was obtained from the National Death Index (NDI).
Results: Baseline depression was associated with a 17% greater hazard of all-cause three-year mortality (hazard ratio [HR] 1.17, 95% CI 1.15-1.18) after adjusting for baseline patient demographic and clinical characteristics and VHA facility characteristics. Depression was associated with a higher hazard of three-year mortality from heart disease, respiratory illness, cerebrovascular disease, accidents, diabetes, nephritis, influenza, Alzheimer's disease, septicemia, suicide, Parkinson's disease, and hypertension. Depression was associated with a lower hazard of death from malignant neoplasm and liver disease. Depression was not associated with mortality due to assault.
Conclusions: In addition to being associated with suicide and injury-related causes of death, depression is associated with increased risk of death from nearly all major medical causes, independent of multiple major risk factors. Findings highlight the need to better understand and prevent mortality seen with multiple medical disorders associated with depression.
abstract_id: PUBMED:35294114
Cause-Specific Mortality in Patients With Gout in the US Veterans Health Administration: A Matched Cohort Study. Objective: To compare all-cause and cause-specific mortality risk between patients with gout and patients without gout in the Veteran's Health Administration (VHA).
Methods: We performed a matched cohort study, identifying patients with gout in the VHA from January 1999 to September 2015 based on the presence of ≥2 International Classification of Diseases, Ninth Revision codes for gout (274.X). Gout patients were matched up to 1:10 on birth year, sex, and year of VHA enrollment with patients without gout and followed until death or end of study (December 2017). Cause of death was obtained from the National Death Index. Associations of gout with all-cause and cause-specific mortality were examined using multivariable Cox regression.
Results: Gout (n = 559,243) and matched non-gout controls (n = 5,428,760) had a mean age of 67 years and were 99% male. There were 246,291 deaths over 4,250,371 patient-years in gout patients and 2,000,000 deaths over 40,441,353 patient-years of follow-up in controls. After matching, gout patients had an increased risk of death (hazard ratio [HR] 1.09 [95% confidence interval (95% CI) 1.08-1.09]), which was no longer present after adjusting for comorbidities (HR 0.98 [95% CI 0.97-0.98]). The strongest association of gout with cause-specific mortality was observed with genitourinary conditions (HR 1.50 [95% CI 1.47-1.54]). Gout patients were at lower risk of death related to neurologic (e.g., Alzheimer's disease and Parkinson's disease) (HR 0.63 [95% CI 0.62-0.65]) and mental health (HR 0.66 [95% CI 0.65-0.68]) conditions.
Conclusion: A higher risk of death among gout patients in the VHA was related to comorbidity burden. While deaths attributable to neurologic and mental health conditions were less frequent among gout patients, genitourinary conditions were the most overrepresented causes of death.
abstract_id: PUBMED:8041379
Pulmonary embolism: an under-recognized yet frequent cause of death in parkinsonism. There are very few detailed reports on the cause of death in patients with Parkinson syndrome, and most reports lack postmortem verification. Pulmonary embolism has been reported as an infrequent terminal event. We reviewed the results of 60 complete autopsies performed on patients with Parkinson syndrome personally cared for at our institution. Pulmonary embolism was second only to pneumonia as the most common cause of death overall.
Answer: The clustering of Parkinson's disease (PD) can be influenced by both shared causes and coincidental occurrences. Parkinsonism and neurosarcoidosis have been reported as a coincidental association, suggesting that not all clustering of PD is due to a shared cause (PUBMED:34316605). Additionally, the cause of death among patients with PD can vary widely, with some studies indicating that cerebral hemorrhage is a rare cause of death among PD patients, which may reflect specific disease mechanisms or treatment effects unique to PD (PUBMED:2355241).
On the other hand, the presence of shared causes cannot be ruled out. For instance, chemogenomic profiling has identified various chemical classes that can up-regulate PARK2 expression, which is linked to familial forms of PD, suggesting that environmental or pharmacological factors could contribute to PD clustering through shared mechanisms (PUBMED:25689131). Moreover, depression, which is commonly associated with PD, has been linked to increased mortality risk from various medical causes, including Parkinson's disease, indicating that there may be shared pathophysiological processes between depression and PD that could contribute to clustering (PUBMED:25697585).
Furthermore, cause-of-death statistics are sensitive to methodological changes, and multicausal analyses have shown that diseases like dementia and Parkinson's disease are often underestimated in mortality data when only the underlying disease is used for analysis (PUBMED:31720737). This suggests that the clustering of PD-related deaths may be influenced by how causes of death are recorded and interpreted.
In summary, the clustering of Parkinson's disease may result from a combination of shared causes, such as genetic predispositions or environmental exposures, and coincidental associations with other conditions. The complexity of PD and its multifactorial nature make it challenging to attribute clustering to a single cause or to dismiss it as mere coincidence without considering the broader context of each case. |
Instruction: Does histologic subtype affect oncologic outcomes after nephron-sparing surgery?
Abstracts:
abstract_id: PUBMED:27692834
Localized chromophobe carcinomas treated by nephron-sparing surgery have excellent oncologic outcomes. Objective: To evaluate the oncologic outcomes of nephron-sparing surgery (NSS) for localized chromophobe renal cell carcinoma (cRCC).
Material And Methods: We performed a multicenter international study involving the French Network for Research on Kidney Cancer (UroCCR) and 5 international teams. Data from 808 patients treated with NSS between 2004 and 2014 for non-clear cell RCCs were analyzed.
Results: We included 234 patients with cRCC. There were 123 (52.6%) females. Median age was 61 (23-88) years. Median tumor size was 3 (1-11) cm. A positive surgical margin was identified in 14 specimens (6%). Pathologic stages were T1, T2, and T3a in 202 (86.3%), 9 (3.8%), and 23 (9.8%) cases, respectively. After a mean follow-up of 46.6 ± 36 months, 2 (0.8%) patients experienced a local recurrence. No patient had metastatic progression, and no patient died from cancer. Three-year estimated cancer-free survival and cancer-specific survival were 99.1% and 100%, respectively.
Conclusion: Oncological results of NSS for localized cRCC are excellent. In this series, only 2 patients had a local recurrence, and no patient had metastatic progression or died from cancer.
abstract_id: PUBMED:26149352
The subclassification of papillary renal cell carcinoma does not affect oncological outcomes after nephron sparing surgery. Objectives: To evaluate the oncological outcomes of papillary renal cell carcinoma (pRCC) following nephron sparing surgery (NSS) and to determine whether the subclassification type of pRCC could be a prognostic factor for recurrence, progression, and specific death.
Materials And Methods: An international multicentre retrospective study involving 19 institutions and the French network for research on kidney cancer was conducted after IRB approval. We analyzed data of all patients with pRCC who were treated by NSS between 2004 and 2014.
Results: We included 486 patients. Tumors were type 1 pRCC in 369 (76%) cases and type 2 pRCC in 117 (24%) cases. After a mean follow-up of 35 (1-120) months, 8 (1.6%) patients experienced a local recurrence, 12 (1.5%) had a metastatic progression, 24 (4.9%) died, and 7 (1.4%) died from cancer. Patients with type 1 pRCC had more grade II (66.3 vs. 46.1%; p < 0.001) and fewer grade III (20 vs. 41%; p < 0.001) tumors. The three-year estimated cancer-free survival (CFS) rate was 96.5% for type 1 pRCC and 95.1% for type 2 pRCC (p = 0.894). The three-year estimated cancer-specific survival rate was 98.4% for type 1 pRCC and 97.3% for type 2 pRCC (p = 0.947).
Conclusion: Histological subtyping of pRCC has no impact on oncologic outcomes after nephron sparing surgery. In this selected population of pRCC tumors, we found that tumor stage is the only prognostic factor for cancer-free survival.
abstract_id: PUBMED:19628262
Does histologic subtype affect oncologic outcomes after nephron-sparing surgery? Objectives: To test whether renal cell carcinoma (RCC) histologic subtypes (HSs) affect cancer-specific mortality after nephron-sparing surgery (NSS). HSs are considered of prognostic value in RCC. For example, the papillary HS might confer a worse prognosis, and, at some centers, only radical nephrectomy is performed for the papillary HS.
Methods: We used univariate and multivariate Cox regression models to study patients with Stage T1N0M0 RCC treated with NSS (n = 1205) from 1988 to 2004. The data were taken from 9 Surveillance, Epidemiology, and End Results registries.
Results: At 36 months after NSS, the cancer-specific mortality-free survival rate was 97.8%, 100%, and 97.4% for the clear cell, chromophobe, and papillary RCC HSs, respectively. On univariate and multivariate analyses, no statistically significant differences were recorded with regard to the HS.
Conclusions: Despite the suggested more aggressive phenotype of the papillary HS, we found no difference among the papillary, chromophobe, and clear cell variants. Thus, the diagnosis of one HS vs another HS should not deter from the use of NSS when cancer-specific mortality is considered as an endpoint.
abstract_id: PUBMED:35117269
Nephron-sparing approaches in the management of upper tract urothelial carcinoma: indications and clinical outcomes. Given the more aggressive phenotype and prognosis of the upper tract urothelial carcinoma (UTUC) than those of bladder tumor, radical removal of the entire ureter along with ipsilateral kidney has long been the standard of care. However, recent advances in diagnostic imaging and endoscopic armamentarium have markedly enhanced the role of nephron-sparing approaches for well-selected cases, especially with low-risk features. Historically, this strategy was exclusively considered for managing a patient unfit to undergo radical nephroureterectomy (RNU). After observing effective oncologic control for these imperative cases, an elective risk-based indication was introduced with care. Two representative modalities in this strategy include endoscopic management via retrograde ureterorenoscopy (URS) or antegrade percutaneous approach and segmental ureterectomy (SU). SU, which includes different types of unstandardized procedures, is the most widely reported alternative to RNU. Despite the lack of qualified evidence regardless of the strategy applied, systemic reviews have consistently revealed similar survival after endoscopic management and RNU for low-grade and noninvasive UTUC. However, selected patients with high-grade and invasive UTUC could benefit from SU while maintaining the oncologic outcomes observed after RNU. Based on these findings, the currently available guidelines have extended the use of the nephron-sparing approach gradually, recommending segmental resection especially for the distal ureteral tumors even in patients with high-risk features. Nonetheless, the fact that evidence supporting these conclusions remains poor and potentially biased cannot be overemphasized.
abstract_id: PUBMED:28211006
Surgical Margins in Nephron-Sparing Surgery for Renal Cell Carcinoma. The oncologic impact of positive surgical margins after nephron-sparing surgery is controversial. Herein, we discuss current data surrounding surgical margins in the operative management of renal cell carcinoma. The prevalence, risk factors, outcomes, and subsequent management of positive surgical margins will be reviewed. Literature suggests that the prevalence of positive surgical margins following kidney surgery varies by practice setting, tumor characteristics, and operation type. For patients undergoing nephron-sparing surgery, it is not necessary to remove a margin of healthy tissue. Tumor enucleation may be appropriate and is associated with comparable outcomes. Reflexive intraoperative frozen section use does not provide beneficial information and many patients with positive margins can be monitored closely with serial imaging. The impact of positive surgical margins on recurrence and survival remains conflicting. Though every effort must be performed to obtain negative margins, a positive surgical margin appears to have a marginal impact on recurrence and survival.
abstract_id: PUBMED:30443869
Oncologic outcomes of nephron-sparing surgery in patients with T1 multifocal renal cell carcinoma. Objective: This study is performed to explore the pathological characteristics and oncologic outcomes of T1 multifocal renal cell carcinoma (RCC).
Methods: The clinical data of 600 patients (442 males and 158 females) between the age of 29 and 73 years, diagnosed with T1 RCC were collected from three hospitals in China, out of which 421 cases had undergone nephron-sparing surgery (NSS) and 179 cases had undergone radical nephrectomy (RN) between December 2010 and January 2015.
Results: Tumor was identified with multifocality in 32 patients (5.33%), out of which 21 were set to receive NSS, and 11 to receive RN, respectively; 21 cases of clear cell tumor, 8 cases of papillary tumor, 1 case of chromophobe tumor and 2 cases of Xp.11.2 translocation RCC. Among 568 cases of monofocal tumors, 400 patients underwent NSS, and the remaining 168 patients underwent RN, respectively. After a median follow-up of 5 years, 13 patients were found with recurrent tumors out of those who had undergone NSS, 11 with monofocal tumors and 2 with multifocal tumors containing satellite tumor nodules (p = 0.13). Out of the 32 individuals with multifocal RCC, 4 cases were reported to have died of cancer, 2 of NSS and 2 of RN. From these findings, the cancer-specific survival for NSS and RN was estimated to be 90.48% and 81.82%, respectively (p = 0.48).
Conclusion: The findings from the study suggested that there were pathological differences in multifocal renal tumors, and that papillary carcinoma may be relatively more common in multifocal than in monofocal tumors. The recurrence rate and survival rate of multifocal RCC were similar to those of monofocal tumors. Tumor recurrence may be related to satellite tumor nodules, which can only be detected once surgery is performed.
abstract_id: PUBMED:26754033
Renal Function Following Nephron Sparing Procedures: Simply a Matter of Volume? Partial nephrectomy (PN) is the current standard of care for the management of small renal masses (SRM), providing comparable oncologic control with improved renal functional outcomes. Additionally, new technologies such as thermal ablation provide attractive alternatives to traditional extirpative surgery. The obvious benefit of these nephron sparing procedures (NSPs) is to the preservation of renal parenchymal volume (RPV), but the factors that influence postoperative renal function are complex and inter-related, and include non-modifiable factors such as baseline renal function and tumor size, complexity, and location as well as potentially modifiable factors such as ischemia time, ischemia type, and RPV preservation. Our review presents the most recent evidence analyzing the relationship between the modifiable factors in PN and renal outcomes, with a focus on RPV preservation. Furthermore, novel surgical techniques, imaging modalities, and NSPs are discussed, evaluating their efficacy in maximizing functional nephron mass and improving long-term renal outcomes.
abstract_id: PUBMED:26692674
Nephron-sparing surgery in case of emphysematous pyelonephritis. Emphysematous pyelonephritis is fatal necrotizing infection where life saving emergency nephrectomy is recommended for severe cases, but we performed nephron sparing surgery. Elderly diabetic female presented with left flank pain and fever for 15 days. On examination tender lump was palpable in left lumbar region. Investigations showed hyperglycemia, leucocytosis and creatinine 3.0 mg/dl. NCCT-KUB suggested class 3B-EPN. Following emergency pigtail, a repeat CT-scan suggested upper and lower pole destruction. In open drainage both poles debrided with sparing of middle pole. Follow-up CECT-KUB showed spared kidney with normal function. No literature for nephron sparing surgery in similar cases of EPN was found.
abstract_id: PUBMED:36743884
Nephron-sparing surgery for Wilms tumor. The algorithm that has been used successfully in the surgical management of unilateral Wilms tumor, radical nephroureterectomy, cannot be used in children who present with synchronous bilateral renal masses. Instead, a surgical approach that removes all tumor masses while preserving as much normal renal parenchyma as possible is encouraged to avoid acute and long-term renal insufficiency. We will review technical aspects of the conduct of nephron-sparing surgery for synchronous bilateral Wilms tumor, including the more recent advances in the use of imaging adjuncts such as pre-operative 3D imaging and fluorescence-guided surgery. The potential role of nephron-sparing surgery for unilateral Wilms tumor will also be discussed.
abstract_id: PUBMED:20816617
Impact of nephron sparing on kidney function and non-oncologic mortality. The surgical management of kidney tumors has significantly evolved over the past decade. Partial nephrectomy and nephron sparing surgery have emerged as the treatments of choice for most newly diagnosed kidney tumors at tertiary care centers. The trend towards an organ sparing approach is largely due to an improved understanding of the global importance of kidney function as well as the impact that kidney surgery may have on non-oncologic morbidity and mortality. In addition to reviewing the methods of evaluating kidney function, this article discusses the effectiveness of various nephron sparing techniques in preserving kidney function and improving non-oncologic outcomes.
Answer: The histologic subtype of renal cell carcinoma (RCC) does not appear to significantly affect oncologic outcomes after nephron-sparing surgery (NSS). Studies have shown that localized chromophobe RCC treated with NSS have excellent oncologic outcomes, with a very low rate of local recurrence and no reported cases of metastatic progression or cancer-specific death after a mean follow-up of 46.6 months (PUBMED:27692834). Similarly, for papillary RCC, the subclassification into type 1 or type 2 does not impact oncologic outcomes after NSS, with both types showing high cancer-free survival and cancer-specific survival rates after a mean follow-up of 35 months (PUBMED:26149352). Furthermore, a study that included patients with Stage T1N0M0 RCC treated with NSS found no statistically significant differences in cancer-specific mortality rates among clear cell, chromophobe, and papillary RCC histologic subtypes (PUBMED:19628262). These findings suggest that the diagnosis of one histologic subtype versus another should not deter from the use of NSS when considering cancer-specific mortality as an endpoint. Overall, the evidence indicates that histologic subtype does not have a significant impact on the oncologic outcomes following NSS for renal cell carcinoma. |
Instruction: Is a small muscle mass index really detrimental for insulin sensitivity in postmenopausal women of various body composition status?
Abstracts:
abstract_id: PUBMED:22947543
Is a small muscle mass index really detrimental for insulin sensitivity in postmenopausal women of various body composition status? Objectives: We sought to determine if a small muscle mass index (MMI) is actually detrimental for insulin sensitivity when studying a large group of postmenopausal women displaying various body composition statuses and when age and visceral fat mass (VFM) are taken into account.
Methods: A cross-sectional study was conducted in 99 healthy postmenopausal women with a BMI of 28 ± 4 kg/m². Fat mass and total fat-free mass (FFM) were obtained from DXA; VFM was estimated by the equation of Bertin, and MMI was calculated as total FFM (kg)/height (m²). Fasting plasma insulin and glucose were obtained to calculate QUICKI and HOMA as insulin sensitivity indices.
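For readers unfamiliar with the indices named above, the following are the commonly published definitions; the abstract does not restate them, so the authors' exact implementations may differ slightly:
MMI = total fat-free mass (kg) / height (m)²
HOMA-IR = [fasting insulin (µU/mL) × fasting glucose (mmol/L)] / 22.5
QUICKI = 1 / [log10(fasting insulin, µU/mL) + log10(fasting glucose, mg/dL)]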
Results: Total MMI and VFM were both significantly inversely correlated with QUICKI and positively correlated with HOMA, even when adjusted for VFM. A stepwise linear regression confirmed total MMI and VFM as independent predictors of HOMA and plasma insulin level.
Conclusions: A small muscle mass might not be detrimental for the maintenance of insulin sensitivity and could even be beneficial in sedentary postmenopausal women. The impact of muscle mass loss on insulin sensitivity in older adults needs to be further investigated.
abstract_id: PUBMED:24120357
Is there a skeletal muscle mass threshold associated with the deterioration of insulin sensitivity in sedentary lean to obese postmenopausal women? Aim: The purpose of this study was to determine an optimal cut-off point of skeletal muscle mass, using appendicular lean body mass (LBM) index, that identifies at risk individuals with deteriorated insulin sensitivity, using an established quantitative insulin sensitivity index (QUICKI) cut-off.
Methods: We performed a cross-sectional analysis in 231 lean and obese (BMI: 18.7-51.0 kg/m²) menopausal women. Fasting plasma glucose and insulin were obtained to calculate QUICKI as an index of insulin sensitivity. Skeletal muscle mass was measured as appendicular LBM by DXA and expressed as appendicular LBM index [appendicular LBM (kg)/height (m²)]. Cut-offs were determined using receiver operating characteristic (ROC) curve analyses.
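To illustrate how a threshold such as the one reported below can be derived from a ROC analysis, here is a minimal sketch in Python with entirely hypothetical data (not the study's data); it picks the cut-off maximizing Youden's J, one common criterion, although the authors' exact criterion is not stated in the abstract:

```python
# Minimal, illustrative sketch only: hypothetical appendicular LBM index values (kg/m^2)
# and hypothetical labels (1 = reduced insulin sensitivity by the QUICKI cut-off, 0 = not).
import numpy as np
from sklearn.metrics import roc_curve

labels = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
lbm_index = np.array([7.8, 6.5, 7.3, 8.1, 6.9, 6.4, 7.6, 6.7, 7.2, 6.6])

fpr, tpr, thresholds = roc_curve(labels, lbm_index)
best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"cut-off = {thresholds[best]:.3f}, sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```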
Results: The best cut-off value for skeletal muscle mass index to identify menopausal women with reduced insulin sensitivity was 7.025 kg/m(2) which had a sensitivity of 69.5% and specificity of 58.2%.
Conclusion: Our results suggest that sedentary postmenopausal women with an appendicular skeletal muscle mass index above 7.025 kg/m(2) may be at greater risk of insulin resistance. Prospective studies are needed to validate our result.
abstract_id: PUBMED:17019382
Effects of ovarian failure and X-chromosome deletion on body composition and insulin sensitivity in young women. Objective: Menopause is associated with increased visceral adiposity and reduced insulin sensitivity. It remains unclear whether these changes are due primarily to ovarian failure or aging. The aim of this study was to clarify the impact of ovarian failure on body composition and insulin sensitivity in young women.
Design: In a cross-sectional study, we compared main outcome measures (body mass index, body composition by dual-energy x-ray absorptiometry, and insulin sensitivity by Quantitative Insulin Sensitivity Check Index) in three groups: women with 46,XX premature ovarian failure (POF), women with premature ovarian failure associated with 45,X or Turner syndrome (TS), and normal control women (NC). Participants were enrolled in National Institutes of Health Clinical Center protocols between years 2000 and 2005.
Results: Mean body mass index (+/- SD) was lower in women with POF (n = 398): 24.3 +/- 5 kg/m² versus 27.8 +/- 7 for women with TS (n = 131) and 26.6 +/- 4 for controls (n = 73) (both P < 0.001). Only 33% of women with POF were overweight or obese, compared with 56% of those with TS and 67% of NC women (P < 0.0001 for both). Despite less obesity, women with POF had lower insulin sensitivity (0.367 +/- 0.03) compared with those with TS (0.378 +/- 0.03, P = 0.003) and NC women (0.376 +/- 0.03, P = 0.04). In groups selected for similar age and body mass index, women with POF (n = 89), women with TS (n = 48), and NC women (n = 40) had similar total body and trunk adiposity. After adjustment for age and truncal adiposity, women with POF had significantly lower insulin sensitivity than women with TS (P = 0.03) and NC women (P = 0.049).
Conclusions: In contrast to observations in middle-aged postmenopausal women, ovarian failure in young women is not associated with increased total or central adiposity. In fact, women with TS were similar to NC women, whereas women with POF were leaner. The lower insulin sensitivity observed in women with POF deserves further investigation.
abstract_id: PUBMED:17486172
Association of insulin sensitivity and muscle strength in overweight and obese sedentary postmenopausal women. The objective of this study was to examine the relationship between insulin sensitivity and lower body muscle strength in overweight and obese sedentary postmenopausal women. The design of the study was cross-sectional. The study population consisted of 82 non-diabetic overweight and obese sedentary postmenopausal women (age: 58.2 +/- 5.1 y; body mass index (BMI): 32.4 +/- 4.6 kg.m-2). Subjects were classified by dividing the entire cohort into quartiles based on relative insulin sensitivity expressed per kilograms of lean body mass (LBM) (Q1, < 10.3, vs. Q2, 10.3-12.4, vs. Q3, 12.5-14.0, vs. Q4, >14.0 mg.min-1.kg LBM-1). We measured insulin sensitivity (using the hyperinsulinemic-euglycemic clamp technique), body composition (using dual-energy X-ray absorptiometry), visceral fat and muscle attenuation (using computed tomography), and a lower-body muscle strength index expressed as weight lifted in kilograms per kilogram of LBM (kg.kg LBM-1) (using weight-training equipment). A positive and significant relationship was observed between insulin sensitivity and the muscle strength index (r = 0.37; p < 0.001). Moreover, a moderate but significant correlation was observed between the muscle strength index and muscle attenuation (r = 0.22; p < 0.05). Finally, the muscle strength index was significantly higher in the Q4 group compared with the Q2 and Q1 groups, respectively (3.78 +/- 1.13 vs. 2.99 +/- 0.77 and 2.93 +/- 0.91 kg.kg LBM-1; p < 0.05). Insulin sensitivity is positively associated with lower-body muscle strength in overweight and obese sedentary postmenopausal women.
abstract_id: PUBMED:26524194
Muscle mass and insulin sensitivity in postmenopausal women after 6-month exercise training. Objective: The common belief that high muscle mass improves insulin sensitivity is controversial and even recent studies have established that larger muscle mass is associated with insulin resistance in sedentary postmenopausal women. Physical activity induces a beneficial effect in muscle size and its metabolic properties. Hence, larger muscle mass induced by exercise training should ameliorate insulin sensitivity and the negative relationship between larger muscle mass and insulin sensitivity should disappear. This study examined the induced changes in muscle mass and insulin sensitivity in postmenopausal women after 6-month exercise training along with their possible correlations.
Methods: Forty-eight sedentary, overweight-to-obese postmenopausal women followed a 6-month mixed exercise training (three sessions/week; endurance and resistance). Lean body mass (LBM) and fat mass (FM) were measured by DXA, then the muscle mass index (MMI) was calculated (MMI = LBM (kg)/height (m²)). Fasting glucose and insulin measurements were obtained and insulin resistance (IR) was estimated by the HOMA-IR formula.
Results: Baseline MMI was correlated with IR (r = 0.219, p = 0.015). After intervention, significant differences were observed in body weight, FM%, MMI, and glycemia, and changes in MMI were significantly correlated with changes in IR (r = 0.345, p = 0.016). Also linear regression showed that the increase in MMI explained 28% of the deterioration in insulin sensitivity (p = 0.001).
Conclusions: After 6 months of mixed training, changes in muscle mass remained correlated with changes in insulin resistance, overweight-to-obese women with large muscle gains being more insulin-resistant. This supports that muscle quality and functionality, and the loss of fat mass, should be targeted rather than muscle mass gains in postmenopausal women, especially in a context of no energy restriction.
abstract_id: PUBMED:32431102
Association between the Thigh Muscle and Insulin Resistance According to Body Mass Index in Middle-Aged Korean Adults. Background: We examined the associations between thigh muscle area (TMA) and insulin resistance (IR) according to body mass index (BMI) in middle-aged Korean general population.
Methods: TMA was measured using quantitative computed tomography and corrected by body weight (TMA/Wt) in 1,263 men, 788 premenopausal women, and 1,476 postmenopausal women all aged 30 to 64 years. The tertiles of TMA/Wt were calculated separately for men and for premenopausal and postmenopausal women. Homeostatic model assessment for insulin resistance (HOMA-IR) was performed using fasting blood glucose and insulin levels, and increased IR was defined according to sex-specific, top quartiles of HOMA-IR. Associations between the TMA/Wt tertiles and increased IR according to the BMI categories (<25 and ≥25 kg/m²) were assessed using multivariable logistic regression analysis.
Results: In men with higher BMIs, but not in those with lower BMIs, the presence of an increased IR had significantly higher odds ratios in the lower TMA/Wt tertiles, even after adjustment for visceral fat area. However, in premenopausal and postmenopausal women, there was no significant inverse association between TMA/Wt tertiles and increased IR, regardless of BMI category.
Conclusion: Our findings suggest that the thigh muscle is inversely associated with IR in men, particularly in those with higher BMIs.
abstract_id: PUBMED:18569559
Psychosocial correlates of cardiorespiratory fitness and muscle strength in overweight and obese post-menopausal women: a MONET study. The purpose of this study was to examine the psychosocial correlates of cardiorespiratory fitness (VO2peak) and muscle strength in overweight and obese sedentary post-menopausal women. The study population consisted of 137 non-diabetic, sedentary overweight and obese post-menopausal women (mean age 57.7 years, s = 4.8; body mass index 32.4 kg.m(-2), s = 4.6). At baseline we measured: (1) body composition using dual-energy X-ray absorptiometry; (2) visceral fat using computed tomography; (3) insulin sensitivity using the hyperinsulinaemic-euglycaemic clamp; (4) cardiorespiratory fitness; (5) muscle strength using the leg press exercise; and (6) psychosocial profile (quality of life, perceived stress, self-esteem, body-esteem, and perceived risk for developing chronic diseases) using validated questionnaires. Both VO2peak and muscle strength were significantly correlated with quality of life (r = 0.29, P < 0.01 and r = 0.30, P < 0.01, respectively), and quality of life subscales for: physical functioning (r = 0.28, P < 0.01 and r = 0.22, P < 0.05, respectively), pain (r = 0.18, P < 0.05 and r = 0.23, P < 0.05, respectively), role functioning (r = 0.20, P < 0.05 and r = 0.24, P < 0.05, respectively), and perceived risks (r = -0.24, P < 0.01 and r = -0.30, P < 0.01, respectively). In addition, VO2peak was significantly associated with positive health perceptions, greater body esteem, and less time watching television/video. Stepwise regression analysis showed that quality of life for health perceptions and for role functioning were independent predictors of VO2peak and muscle strength, respectively. In conclusion, higher VO2peak and muscle strength are associated with a favourable psychosocial profile, and the psychosocial correlates of VO2peak were different from those of muscle strength. Furthermore, psychosocial factors could be predictors of VO2peak and muscle strength in our cohort of overweight and obese sedentary post-menopausal women.
abstract_id: PUBMED:31554294
Relationship between Muscle Mass/Strength and Hepatic Fat Content in Post-Menopausal Women. Background and Objectives: Recent studies have shown that low skeletal muscle mass can contribute to non-alcoholic fatty liver disease through insulin resistance. However, the association between muscle mass/strength and hepatic fat content remains unclear in postmenopausal women. Methods: In this study, we assessed the associations between muscle mass/strength and various severities of non-alcoholic fatty liver disease. Using single-voxel proton magnetic resonance spectroscopy, 96 postmenopausal women between the ages of 50 and 65 were divided into four groups (G0-G3) by hepatic fat content: G0 (hepatic fat content <5%, n = 20), G1 (5% ≤ hepatic fat content < 10%, n = 27), G2 (10% ≤ hepatic fat content < 25%, n = 31), and G3 (hepatic fat content ≥25%, n = 18). Muscle mass indexes were estimated as skeletal muscle index (SMI)% (total lean mass/weight × 100) and appendicular skeletal muscular mass index (ASM)% (appendicular lean mass/weight × 100) by dual energy X-ray absorptiometry. Maximal isometric voluntary contraction of the handgrip, elbow flexors, and knee extensors was measured using an adjustable dynamometer chair. Fasting plasma glucose, insulin, and follicle-stimulating hormones were assessed in venous blood samples. Results: The results showed negative correlations between hepatic fat content and SMI% (r = -0.42, p < 0.001), ASM% (r = -0.29, p = 0.005), maximal voluntary force of grip (r = -0.22, p = 0.037), and knee extensors (r = -0.22, p = 0.032). Conclusions: These significant correlations almost remained unchanged even after controlling for insulin resistance. In conclusion, negative correlations exist between muscle mass/strength and the progressed severity of non-alcoholic fatty liver disease among post-menopausal women, and the correlations are independent of insulin resistance.
abstract_id: PUBMED:17510677
No difference in insulin sensitivity between healthy postmenopausal women with or without sarcopenia: a pilot study. Insulin plays a pivotal role in skeletal muscle protein metabolism and its action decreases with age. A loss of muscle mass, termed sarcopenia, also occurs with age. The age-associated decline in insulin sensitivity (IS) may negatively alter muscle protein metabolism and, therefore, be implicated in the aetiology of sarcopenia. However, no studies have yet compared the level of IS between older individuals with or without sarcopenia. Thus, in this study, we compared the IS of 20 class I sarcopenics (CIS), 8 class II sarcopenics (CIIS), and 16 non-sarcopenics (NS), among a group of otherwise healthy, non-obese, postmenopausal women. IS was estimated with the quantitative IS check index (QUICKI). Muscle mass index (MMI), which was used to determine sarcopenia, was calculated as follows: (appendicular muscle mass × 1.19) - 1.01/h², where h = height. Fat-free mass (FFM), fat mass (FM), and trunk FM (TFM) were measured by dual-energy X-ray absorptiometry. Accelerometry and indirect calorimetry were used to estimate resting (REE), daily (DEE), and physical activity (PAEE) energy expenditure. A 3 d food record was used to determine total energy, protein (animal and vegetal), and carbohydrate intakes. As expected, MMI and FFM differed significantly among groups. However, no significant differences were found among groups for IS, FM, TFM, REE, DEE, PAEE, or total energy, protein (both animal and vegetable), and carbohydrate intakes. Using QUICKI, a surrogate measure of IS, the present results suggest that the action of insulin does not play an important role in the development and maintenance of sarcopenia in healthy, non-obese, postmenopausal women.
abstract_id: PUBMED:11201511
The effects of sequential combined oral 17beta-estradiol norethisterone acetate on insulin sensitivity and body composition in healthy postmenopausal women: a randomized single blind placebo-controlled study. Objective: The androgenic effect of progestogen, necessary in early postmenopausal hormone replacement therapy (HRT), may adversely affect insulin sensitivity as well as body fat distribution and thereby increase the cardiovascular risk profile. The impact of HRT with sequential combined oral 17beta-estradiol and norethisterone acetate on insulin sensitivity and body composition in early menopause has not been studied.
Design: A randomized single blind placebo-controlled 6-month study of sequential combined 17beta-estradiol norethisterone acetate on insulin sensitivity and body composition was carried out. Thirty fit healthy postmenopausal women were enrolled and completed this 6-month study. Body composition was measured by dual-energy x-ray absorptiometry scanning, and insulin sensitivity was measured using the euglycemic hyperinsulinemic clamp. Studies were undertaken at baseline and after 6 months of therapy. The studies were performed during the estrogen-only phase of therapy.
Results: All women demonstrated a degree of decreased insulin sensitivity that was not modified by 6 months of hormone replacement therapy. Body composition remained unchanged over 6 months. There was no alteration in total body fat or the distribution of body fat. The percentage of central abdominal fat (android) was not altered.
Conclusion: Six months of HRT with sequential combined oral 17beta-estradiol norethisterone acetate does not have an adverse effect on insulin sensitivity and does not promote an increase in weight or the more android distribution of body fat, which could contribute to the increased cardiovascular risk profile that is evident in postmenopausal women.
Answer: The relationship between muscle mass index (MMI) and insulin sensitivity in postmenopausal women appears to be complex and may vary depending on body composition status and other factors. According to the studies provided:
- A small MMI might not be detrimental for the maintenance of insulin sensitivity and could even be beneficial in sedentary postmenopausal women. Total MMI and visceral fat mass (VFM) were both significantly inversely correlated with QUICKI and positively with HOMA even when adjusted for VFM, suggesting that a small muscle mass may not necessarily impair insulin sensitivity in this population (PUBMED:22947543).
- Another study found that sedentary postmenopausal women with an appendicular skeletal muscle mass index above 7.025 kg/m^2 may be at greater risk of insulin resistance, indicating that there may be a threshold of muscle mass above which insulin sensitivity begins to deteriorate (PUBMED:24120357).
- In young women with premature ovarian failure (POF), ovarian failure was not associated with increased total or central adiposity, but women with POF had lower insulin sensitivity compared to controls, suggesting that factors other than muscle mass may influence insulin sensitivity in this group (PUBMED:17019382).
- Overweight and obese sedentary postmenopausal women showed a positive association between insulin sensitivity and lower-body muscle strength, indicating that muscle strength, rather than muscle mass alone, may be an important factor for insulin sensitivity (PUBMED:17486172).
- After 6 months of mixed exercise training, changes in muscle mass remained correlated with changes in insulin resistance, with larger muscle gains being associated with more insulin resistance. This suggests that muscle quality and functionality, as well as the loss of fat mass, should be targeted rather than muscle mass gains in postmenopausal women (PUBMED:26524194).
- A study on the association between thigh muscle area (TMA) and insulin resistance according to BMI in middle-aged Korean adults found that in men with higher BMIs, lower TMA/Wt tertiles were associated with increased insulin resistance, but no significant inverse association was found in premenopausal and postmenopausal women, regardless of BMI category (PUBMED:32431102). |
Instruction: Surgical learning curve for open radical prostatectomy: Is there an end to the learning curve?
Abstracts:
abstract_id: PUBMED:35850976
The Surgical Learning Curve for Biochemical Recurrence After Robot-assisted Radical Prostatectomy. Background: Improved cancer control with increasing surgical experience (the learning curve) was demonstrated for open and laparoscopic prostatectomy. In a prior single-center study, we found that this might not be the case for robot-assisted radical prostatectomy (RARP).
Objective: To investigate the relationship between prior experience of a surgeon and biochemical recurrence (BCR) after RARP.
Design, Setting, And Participants: We retrospectively analyzed the data of 8101 patients with prostate cancer treated with RARP by 46 surgeons at nine institutions between 2003 and 2021. Surgical experience was coded as the total number of robotic prostatectomies performed by the surgeon before the patient operation.
Outcome Measurements And Statistical Analysis: We evaluated the relationship of prior surgeon experience with the probability of BCR adjusting for preoperative prostate-specific antigen, pathologic stage, grade, lymph-node involvement, and year of surgery.
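The abstract describes a model of BCR probability adjusted for case mix. As a rough illustration only (hypothetical file and column names, and not necessarily the authors' actual model, which may have been a survival model), such an adjusted analysis could be set up along these lines:

```python
# Illustrative sketch with hypothetical data and columns; not the study's code or model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rarp_cohort.csv")  # hypothetical file: one row per patient
model = smf.logit(
    "bcr ~ surgeon_experience + preop_psa + C(path_stage) + C(grade_group)"
    " + node_involvement + year_of_surgery",
    data=df,
).fit()
print(model.summary())
```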
Results And Limitations: Overall, 1047 patients had BCR. The median follow-up for patients without BCR was 33 mo (interquartile range: 14, 61). After adjusting for case mix, the relationship between surgical experience and the risk of BCR after surgery was not statistically significant (p = 0.2). The 5-yr BCR-free survival rates for a patient treated by a surgeon with prior 10, 250, and 1000 procedures performed were, respectively, 82.0%, 82.7%, and 84.8% (absolute difference between 10 and 1000 prior procedures: 1.6% [95% confidence interval: 0.4%, 3.3%). Results were robust to a number of sensitivity analyses.
Conclusions: These findings suggest that, as opposed to open and laparoscopic radical prostatectomy, surgeons performing RARP achieve adequate cancer control in the early phase of their career. Further research should explore why the learning curve for robotic surgery differs from prior findings for open and laparoscopic radical prostatectomy. We hypothesize that surgical education, including simulation training and the adoption of objective performance metrics, is an important mechanism for flattening the learning curve.
Patient Summary: We investigated the relationship between biochemical recurrence after robot-assisted radical prostatectomy and surgeon's experience. Surgeons at an early stage of their career had similar outcomes to those of more experienced surgeons, and we hypothesized that surgical education in robotics might be an important determinant of such a finding.
abstract_id: PUBMED:25791787
Surgical learning curve for open radical prostatectomy: Is there an end to the learning curve? Objectives: To analyze the impact of surgeon's experience on surgical margin status, postoperative continence and operative time after radical prostatectomy (RP) in a surgeon who performed more than 2000 open RP.
Patients And Methods: We retrospectively analyzed 2269 patients who underwent RP by one surgeon from April 2004 to June 2012. Multivariable logistic models were used to quantify the impact of surgeon's experience (measured by the number of prior performed RP) on surgical margin status, postoperative continence and operative time.
Results: The negative surgical margin rate was 86% for patients with pT2 stage, and the continence rate at 3 years after RP was 94%. Patients with negative surgical margins had a lower preoperative PSA level (p = 0.02), lower pT stage (p < 0.001) and lower Gleason score (p < 0.001). The influence of the surgeon's experience was nonlinear, positive and highly significant up to 750 performed surgeries (75-90% negative surgical margins) (p < 0.01). The probability of continence rose significantly with the surgeon's experience (from 88% to 96%) (p < 0.05). A reduction in operative time (from 90 to 65 min) per RP was observed up to 1000 RP.
Conclusions: In the present study, we showed evidence that surgeon's experience has a strong positive impact on pathologic and functional outcomes as well as on operative time. While significant learning effects concerning positive surgical margin rate and preserved long-term continence were detectable during the first 750 and 300 procedures, respectively, improvement in operative time was detectable up to a threshold of almost 1000 RP and hence is relevant even for very high-volume surgeons.
abstract_id: PUBMED:38036328
Positive Surgical Margins After Anterior Robot-assisted Radical Prostatectomy: Assessing the Learning Curve in a Multi-institutional Collaboration. Background: The learning curve for robot-assisted radical prostatectomy (RARP) remains controversial, with prior studies showing that, in contrast with evidence on open and laparoscopic radical prostatectomy, biochemical recurrence rates of experienced versus inexperienced surgeons did not differ.
Objective: To characterize the learning curve for positive surgical margins (PSMs) after RARP.
Design, Setting, And Participants: We analyzed the data of 13 090 patients with prostate cancer undergoing RARP by one of 74 surgeons from ten institutions in Europe and North America between 2003 and 2022.
Outcome Measurements And Statistical Analysis: Multivariable models were used to assess the association between surgeon experience at the time of each patient's operation and PSMs after surgery, with adjustment for preoperative prostate-specific antigen level, grade, stage, and year of surgery. Surgeon experience was coded as the number of robotic radical prostatectomies done by the surgeon before the index patient's operation.
Results And Limitations: Overall, 2838 (22%) men had PSMs on final pathology. After adjusting for case mix, we found a significant, nonlinear association between surgical experience and probability of PSMs after surgery, with a lower risk of PSMs for greater surgeon experience (p < 0.0001). The probabilities of PSMs for a patient treated by a surgeon with ten, 250, 500, and 2000 prior robotic procedures were 26%, 21%, 18%, and 14%, respectively (absolute risk difference between ten and 2000 procedures: 11%; 95% confidence interval: 9%, 14%). Similar results were found after stratifying patients according to extracapsular extension at final pathology. Results were also unaltered after excluding surgeons who had moved between institutions.
Conclusions: While we characterized the learning curve for PSMs after RARP, the relative contribution of surgical learning to the achievement of optimal outcomes remains controversial. Future investigations should focus on what experienced surgeons do to avoid positive margins and should explore the relationship between learning, margin rate, and biochemical recurrence. Understanding what margins affect recurrence and whether these margins are trainable or a result of other factors may shed light on where to focus future efforts in surgical education.
Patient Summary: In patients receiving robotic radical prostatectomy for prostate cancer, we characterized the learning curve for positive margins. The risk of surgical margins decreased progressively with increasing experience, and plateaued around the 500th procedure. Understanding what margins affect recurrence and whether these margins are trainable or a result of other factors has implications for surgeons and patients, and it may shed light on where to focus future efforts in surgical education.
abstract_id: PUBMED:20952022
The learning curve for laparoscopic radical prostatectomy: an international multicenter study. Purpose: It is not yet possible to estimate the number of cases required for a beginner to become expert in laparoscopic radical prostatectomy. We estimated the learning curve of laparoscopic radical prostatectomy for positive surgical margins compared to a published learning curve for open radical prostatectomy.
Materials And Methods: We reviewed records from 8,544 consecutive patients with prostate cancer treated laparoscopically by 51 surgeons at 14 academic institutions in Europe and the United States. The probability of a positive surgical margin was calculated as a function of surgeon experience with adjustment for pathological stage, Gleason score and prostate specific antigen. A second model incorporated prior experience with open radical prostatectomy and surgeon generation.
Results: Positive surgical margins occurred in 1,862 patients (22%). There was an apparent improvement in surgical margin rates up to a plateau at 200 to 250 surgeries. Once this plateau was reached, changes in margin rates were minimal relative to the confidence intervals. The absolute risk difference for 10 vs 250 prior surgeries was 4.8% (95% CI 1.5, 8.5). Neither surgeon generation nor prior open radical prostatectomy experience was statistically significant when added to the model. The rate of decrease in positive surgical margins was more rapid in the open vs laparoscopic learning curve.
Conclusions: The learning curve for surgical margins after laparoscopic radical prostatectomy plateaus at approximately 200 to 250 cases. Prior open experience and surgeon generation do not improve the margin rate, suggesting that the rate is primarily a function of specifically laparoscopic training and experience.
abstract_id: PUBMED:23607379
"Learning curve" robotic radical hysterectomy compared to standardized laparoscopy assisted radical vaginal and open radical hysterectomy Objective: To compare intraoperative, pathologic and postoperative outcomes of "learning curve" robotic radical hysterectomy (RRH) with laparoscopy assisted radical vaginal hysterectomy (LARVH) and abdominal radical hysterectomy (ARH) in patients with early stage cervical carcinoma.
Design: Comparative study.
Setting: Department of Obstetrics and Gynecology, University Hospital, Olomouc.
Methods: The first twenty patients with cervical cancer stages IA2-IIA underwent RRH and were compared with the previous twenty LARVH and ARH cases. The procedures were performed at University Hospital Olomouc, Czech Republic, between 2004 and 2011.
Results: There were no differences between groups for age, body mass index, tumor histology, number of nodes removed or preoperative hemoglobin levels. The median theatre time in the learning period for the robot procedure was reduced from 400 min to less than 223 min and compared well to the 215 min for an open procedure. We found differences between the pre- and postoperative hemoglobin levels (RRH, 14.9 ± 7.6; LARVH, 23.0 ± 8.5; ARH, 28.0 ± 12.4). This difference was statistically significant in favor of the RRH group (p = 0.0012). Mean length of stay was significantly shorter for the RRH group (7.2 versus 8.8 days, p = 0.0005). Mean pelvic lymph node count was similar in the three groups. None of the robotic or laparoscopic procedures required conversion to laparotomy. The differences in major operative complications between the two groups were not significant.
Conclusion: Based on our experience, robotic radical hysterectomy showed better results than traditional laparoscopically assisted radical vaginal hysterectomy in early stage cervical carcinoma cases. Introduction of this new technique requires a learning curve of fewer than 20 cases, which reduces the operating time to a level comparable to open surgery.
abstract_id: PUBMED:29935319
Learning Curve and Minimally Invasive Spine Surgery. Background: Minimally invasive surgery has become popular in recent times and has proved more advantageous than conventional open surgery methods, in terms of maximal preservation of natural anatomy and minimal postoperative complications. However, these advancements require a longer learning curve for inexperienced surgeons.
Overview: The learning curve in minimally invasive spine surgery is complex and difficult to measure, so surrogate measures such as operating times, conversion to open procedures, visual analog scale scores, and length of hospital stay are used. In assessing complications as a measure of the learning curve, it was noted that nearly all the complications had been documented previously and became minimal after the 30th consecutive case. As surgical experience increases, perioperative parameters (e.g., operative time and length of hospitalization) improve. The downsides of minimally invasive spine surgery are starting unfamiliar procedures without tactile sensation, working in a narrow, restricted surgical field, and relying on endoscopes with two-dimensional imaging.
Conclusions: Appropriate instruments, a trained team, and an adept radiographer are important assets for a smooth transition during the learning period. Structured training with cadavers and lots of practice, preferably while working under the guidance of experienced surgeons, is helpful. The learning curve can be shortened when a proficient surgeon gains relevant knowledge, understands three-dimensional anatomy, and has surgical aptitude along with manual dexterity.
abstract_id: PUBMED:36813677
Cumulative sum analysis of the learning curve of laparoendoscopic single-site robot-assisted radical prostatectomy. Background: Radical prostatectomy has become the gold standard for treating localized prostate cancer. Improvement in the single-site technique and surgeon's skill reduces not only the hospital duration but also the number of wounds. Realizing the learning curve for a new procedure can prevent unnecessary mistakes.
Objective: To analyze the learning curve of extraperitoneal laparoendoscopic single-site robot-assisted radical prostatectomy (LESS-RaRP).
Methods: We retrospectively evaluated 160 patients diagnosed with prostate cancer between June 2016 and December 2020 who underwent extraperitoneal LESS-RaRP. Cumulative sum (CUSUM) analysis was used to evaluate the learning curves for the extraperitoneal setting time, robotic console time, total operation time, and blood loss. Operative and functional outcomes were also assessed.
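As an illustration of the CUSUM approach mentioned here (hypothetical numbers, not the study's data): the cumulative sum of each case's deviation from the overall mean operative time is plotted against case number, and the peak or turning point of the curve is commonly read as the point where proficiency is reached.

```python
# Minimal CUSUM learning-curve sketch with hypothetical operative times (minutes).
import numpy as np

operative_times = np.array([250, 240, 245, 230, 220, 215, 200, 195, 190, 185])
cusum = np.cumsum(operative_times - operative_times.mean())
turning_point = int(np.argmax(cusum)) + 1  # case number at the CUSUM peak
print(cusum.round(1), "turning point at case", turning_point)
```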
Results: The learning curve of the total operation time was observed in 79 cases. For the extraperitoneal setting and robotic console times, the learning curve was observed in 87 and 76 cases, respectively. The learning curve for blood loss was observed in 36 cases. No in-hospital mortality or respiratory failure was observed.
Conclusion: Extraperitoneal LESS-RaRP using the da Vinci Si system is safe and feasible. Approximately 80 patients are required to achieve a stable and consistent operative time. A learning curve for blood loss was observed after 36 cases.
abstract_id: PUBMED:26430662
Learning curve analysis of laparoscopic radical hysterectomy for gynecologic oncologists without open counterpart experience. Objective: To evaluate the learning curve of laparoscopic radical hysterectomy (LRH) for gynecologic oncologists who underwent residency- and fellowship-training on laparoscopic surgery without previous experience in performing abdominal radical hysterectomy (ARH).
Methods: We retrospectively reviewed 84 patients with FIGO (International Federation of Gynecology and Obstetrics) stage IB cervical cancer who underwent LRH (Piver type III) between April 2006 and March 2014. The patients were divided into two groups (surgeon A group, 42 patients; surgeon B group, 42 patients) according to whether the operating surgeon had prior ARH experience. Clinico-pathologic data were compared between the 2 groups. Operating times were analyzed using the cumulative sum technique.
Results: Operating times for surgeon A started at 5 to 10 standard deviations from the mean operating time and then decreased steeply with operative experience (Pearson correlation coefficient = -0.508, P = 0.001). Surgeon B, however, showed a gentle learning-curve slope within 2 standard deviations of the mean operating time (Pearson correlation coefficient = -0.225, P = 0.152). Approximately 18 cases were required for both surgeons to achieve surgical proficiency in LRH. Multivariate analysis showed that tumor size (>4 cm) was significantly associated with increased operating time (P = 0.027; odds ratio, 4.667; 95% confidence interval, 1.187 to 18.352).
Conclusion: After completing residency- and fellowship-training in gynecologic laparoscopy, gynecologic oncologists, even without ARH experience, might reach an acceptable level of surgical proficiency in LRH after approximately 20 cases, showing a gentle learning-curve slope and taking less effort to initially perform LRH.
abstract_id: PUBMED:32073801
The surgical learning curve. Surgical perfection takes years of training and a learning curve to optimize outcomes. This creates an ethical dilemma: although healthcare systems benefit from training new surgeons, the learning curve may cause suboptimal outcomes for the first patients a resident operates on. Can collective interest for the betterment of healthcare be combined with the wellbeing of the individual patient? We argue that this is possible under controlled circumstances. Residency programmes can optimize the learning curve of the trainee by active and extensive supervision: the 'See one, do one' mentality is outdated, even in relatively simple procedures. Residents can improve their skills by using hands-on training on animals or cadavers, or by re-watching their own procedures. It is possible that despite all preventative measures, the effect of the learning curve on surgical outcomes is inevitable and necessary where new surgeons are trained within regular healthcare systems. Ultimately, practice makes perfect.
abstract_id: PUBMED:35546102
Learning curve for robot-assisted laparoscopic radical prostatectomy in a large prospective multicentre study. Objective: Differences in outcome after radical prostatectomy for prostate cancer can partly be explained by intersurgeon differences, where degree of experience is one important aspect. This study aims to define the learning curve of robot-assisted laparoscopic prostatectomy (RALP) regarding oncological and functional outcomes.
Materials And Methods: Out of 4003 enrolled patients in the LAPPRO trial, 3583 met the inclusion criteria, of whom 885 were operated on by an open technique. In total, 2672 patients with clinically localized prostate cancer from seven Swedish centres were operated on by RALP and followed for 8 years (LAPPRO trial). Oncological outcomes were pathology-reported surgical margins and biochemical recurrence at 8 years. Functional outcomes included patient-reported urinary incontinence and erectile dysfunction at 3, 12 and 24 months. Experience was surgeon-reported experience before and during the study. The relationship between surgeon experience and functional outcomes and surgical margin status was analysed by mixed-effects logistic regression. Biochemical recurrence was analysed by Cox regression, with robust standard errors.
Results: The learning curve for positive surgical margins was relatively flat, with rates of 21% for surgeons who had performed 0-74 cases and 24% for surgeons with > 300 cases. Biochemical recurrence at 4 years was 11% (0-74 cases) and 13% (> 300 cases). Incontinence was stable over the learning curve, but erectile function improved at 2 years, from 38% (0-74 cases) to 53% (> 300 cases).
Conclusions: Analysis of the learning curve for surgeons performing RALP showed that erectile function improved with increasing number of procedures, which was not the case for oncological outcomes.
Answer: The surgical learning curve for open radical prostatectomy does show a point where significant improvements in outcomes begin to plateau. According to a study analyzing the impact of a surgeon's experience on surgical margin status, postoperative continence, and operative time after radical prostatectomy, there is evidence that the surgeon's experience has a strong positive impact on these outcomes. The study found that significant learning effects concerning positive surgical margin rate and preserved long-term continence were detectable during the first 750 and 300 procedures, respectively. Improvement in operative time was detectable up to a threshold of almost 1000 radical prostatectomies, indicating that even for very high-volume surgeons, there is a point where the learning curve levels off (PUBMED:25791787).
This suggests that while there may not be a definitive end to the learning curve, as surgeons continue to refine their techniques over time, the most substantial improvements in outcomes occur within the initial several hundred cases. After reaching a certain level of experience, the incremental benefits of additional experience become less pronounced. However, it is important to note that continuous practice and experience may still contribute to maintaining high-quality surgical skills and potentially lead to further incremental improvements over time. |
Instruction: Return of bowel sounds indicating an end of postoperative ileus: is it time to cease this long-standing nursing tradition?
Abstracts:
abstract_id: PUBMED:22866434
Return of bowel sounds indicating an end of postoperative ileus: is it time to cease this long-standing nursing tradition? Unlabelled: Evidence and rationale supporting return of bowel sounds as an unreliable indicator of the end of postoperative ileus after abdominal surgery are provided.
Introduction: A loss of gastrointestinal motility, commonly known as postoperative ileus (POI), occurs after abdominal surgery. Since the 1900s, nurses and other clinicians have been taught to listen for return of bowel sounds to indicate the end of POI. Evidence-based nursing literature has challenged this long-standing traditional nursing practice.
Purpose: The purpose of this study was to provide evidence from a randomized clinical trial and rationale supporting evidence-based inquiry concerning return of bowel sounds as an unreliable indicator of the end of POI after abdominal surgery.
Method: Time (days) to return of bowel sounds after abdominal surgery was compared with time (days) to first postoperative flatus, an indicator of the end of POI, in 66 patients recovering from abdominal surgery who were randomized to receive either standard care or standard care plus a rocking chair intervention.
Findings: Pearson's correlation between time to first flatus and time to return of bowel sounds for the combined groups was not significant (r = 0.231, p = 0.062; significance threshold p < 0.05), indicating that time to return of bowel sounds and time to first flatus were not associated.
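For illustration, a correlation of this kind can be computed and tested as below; the values are hypothetical and not the trial's data:

```python
# Hypothetical per-patient recovery times (days); illustrative only.
from scipy.stats import pearsonr

days_to_bowel_sounds = [2, 2, 1, 3, 2, 1, 3, 2, 1, 2]
days_to_first_flatus = [3, 4, 2, 5, 3, 4, 6, 2, 3, 5]

r, p = pearsonr(days_to_bowel_sounds, days_to_first_flatus)
print(f"r = {r:.3f}, p = {p:.3f}")  # compare p with the 0.05 significance threshold
```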
Conclusions: The results of this study provide support to evidence-based inquiry that questions the relevance of traditional nursing practice activities such as listening to bowel sounds as an indicator of the end of POI.
abstract_id: PUBMED:10696888
Effect of Morphine and incision length on bowel function after colectomy. Purpose: Return of bowel function remains the rate-limiting factor in shortening postoperative hospitalization of patients with colectomies. Narcotics are most commonly used in the management of postoperative pain, even though they are known to affect gut motility. Narcotic use has been felt to be proportional to the length of the abdominal incision. The aim of this study was to determine whether return of bowel function after colectomy is directly related to narcotic use and to evaluate the effect of incision length on postoperative ileus.
Methods: A prospective evaluation of 40 patients who underwent uncomplicated, predominantly left colon and rectal resections was performed. Morphine administered by patient controlled analgesia was the sole postoperative analgesic. The amount of morphine used before the first audible bowel sounds, first passage of flatus and bowel movement, and incision length were recorded. Spearman correlation coefficients were calculated between all variables.
Results: The strongest correlation was between time to return of bowel sounds and amount of morphine administered (r = 0.74; P = 0.001). There were also significant correlations between morphine use and time to report of first flatus (r = 0.47; P = 0.003) and time to bowel movement (r = 0.48; P = 0.002). There was no relationship between incision length and morphine use or incision length and return of bowel function in the total group.
Conclusions: Return of bowel sounds, reflecting small-intestine motility after colectomy, correlated strongly with the amount of morphine used. Similarly, total morphine use adversely affects colonic motility. Because no relationship with incision length was found, efforts to optimize the care of patients with colectomies should be directed less toward minimizing abdominal incisions and more toward diminishing use of postoperative narcotics.
abstract_id: PUBMED:8862379
Thoracic versus lumbar epidural anesthesia's effect on pain control and ileus resolution after restorative proctocolectomy. Background: Epidural anesthesia as a perioperative adjunct has been shown to provide superior pain control and has been implicated in more rapid ileus resolution after major abdominal surgery, possibly through a sympatholytic mechanism. Studies suggest that the vertebral level of epidural administration influences these parameters.
Methods: One hundred seventy-nine patients (120 male, 59 female; average age, 36 years) underwent restorative proctocolectomy for ulcerative colitis or familial polyposis between 1989 and 1995. Patients were grouped according to type of anesthesia. Group THO (n = 53) received thoracic (T6 to T10) epidurals. Group LUM (n = 51) received lumbar (L2 to L4) epidurals, and group PCA (n = 75) received patient-controlled intravenous narcotic analgesia. Patients were compared for complications, perioperative risk factors, postoperative pain, and ileus resolution.
Results: Epidural narcotics, alone or combined with local anesthetics, were administered for an average of 2 (LUM) to 4 (THO) days without significant complications. Infrequent problems related to the epidural catheters included self-limited headaches or back pain (four) and site infections (two). Epidural failure, as measured by conversion to PCA for inadequate pain control, was not significantly greater for LUM (25%) than THO (23%). Average pain scores, rated daily on a visual analog scale, were significantly higher (indicating more pain) for PCA patients (4.2) during postoperative days 1 through 5 than for LUM (3.5) (p < 0.05) and for THO (2.4) (p < 0.05). Ileus resolution, as determined by stool output and return of bowel sounds, was significantly faster in THO than in LUM or PCA (p < 0.05). Resolution of ileus was not significantly different between PCA and LUM (p > 0.05).
Conclusions: Thoracic epidural analgesia has distinct advantages over both lumbar epidural or traditional patient-controlled analgesia in shortening parameters measuring postoperative ileus and in reducing surgical pain. The procedure is safe and associated with low morbidity. Thoracic epidural anesthesia is also economically justifiable and may prove to impact significantly on future postoperative management by reducing length of hospitalization. Our data and those of others are most striking in these regards for patients with thoracic catheters, indicating the importance of vertebral level in epidural drug administration.
abstract_id: PUBMED:6190233
The utilization of transcutaneous electric nerve stimulation in postoperative ileus. Pain stimuli with resulting sympathetic hyperactivity are responsible for the inhibition of intestinal motility. In this clinical study on 30 adult patients after laparotomy, the effect of intermittent transcutaneous stimulation with a diadynamic current on intestinal motility was determined. It was established that bowel sounds return soon after transcutaneous nerve stimulation, and more than 50% of the patients passed flatus within 24 hours after the commencement of stimulation.
abstract_id: PUBMED:24140940
Post-operative ileus in hemicolectomy for cancer: open versus laparoscopic approach. Aim: This study aims to verify if the duration of postoperative ileus (POI), in patients undergoing abdominal surgery, is related to the surgical approach used (open or laparoscopic) or rather to the manipulation of bowel loops.
Materials And Methods: Ninety patients, undergoing elective colon resection for cancer, were randomized in three groups with different surgical approaches: open technique with extensive manipulation of intestinal loops (GROUP A), open technique with minimal manipulation (GROUP B) and laparoscopic technique (GROUP C). Return of bowel functions was investigated by: detection of bowel sounds, passage of flatus and passage of stool.
Results: Detection of bowel sounds occurred after 2.18 days in GROUP A, after 1.35 days in GROUP B and after 1.19 days in GROUP C. Return of flatus occurred after 3.51 days in Group A, after 2.53 days in GROUP B and after 2.30 days in GROUP C. Passage of stool occurred after 4.48 days in GROUP A, after 3.75 days in GROUP B and after 3.61 days in GROUP C. In all end-points analyzed, differences between GROUP A and GROUP B and between GROUP A and GROUP C are significant (P< 0.01) whereas the differences between GROUP B and GROUP C are not significant (P > 0.01).
Conclusions: In colon surgery open technique with minimal manipulation of loops obtains similar results in those of the laparoscopic technique, in terms of resolution of postoperative ileus.
abstract_id: PUBMED:17445634
Treatment of postoperative ileus after bowel surgery with low-dose intravenous erythromycin. Objectives: Treatment of postoperative ileus remains unsatisfactory. Erythromycin (EM), a macrolide antibiotic, has prokinetic effects on the gut. We investigated whether intravenous erythromycin decreases the time to the return of normal bowel function after bowel surgery in patients with bladder cancer and interstitial cystitis who have undergone cystectomy and urinary diversion.
Methods: We conducted a double-blind, randomized, placebo-controlled study of 22 volunteers. On the first postoperative day, patients began receiving intravenous erythromycin (125 mg) or placebo every 8 hours (maximum of 21 doses). The patients' ability to tolerate a general diet and return of bowel function was monitored.
Results: A general diet was tolerated at a median of 9 days postoperatively for the EM arm and 8 for the placebo arm (P = 0.60). The first bowel sounds were detected at an average of 2 postoperative days for the EM arm and 3 for the placebo arm (P = 0.88). First flatus was present an average of 5 days postoperatively for both study arms (P = 0.35). The first bowel movement was present an average of 6 days postoperatively for the EM arm and 5 for the placebo arm (P = 0.98).
Conclusions: No significant difference was found between EM and placebo with regard to the onset of bowel sounds, passage of flatus, passage of the first bowel movement, and the time to tolerate a general diet. These data indicate that erythromycin is not useful in improving postoperative bowel function.
abstract_id: PUBMED:30211237
Effect of Topical Chamomile Oil on Postoperative Bowel Activity after Cesarean Section: A Randomized Controlled Trial. Objective: Postoperative ileus (POI) is a common complication after surgery that requires a multifactorial therapeutic approach. This study aims to assess the effect of topical chamomile oil on postoperative bowel activity after cesarian section.
Methods: This randomized controlled trial was carried out in 2015 at Chamran Hospital in Iran. A block randomization list was generated for 142 parturients divided into three groups. In the intervention group (arm A) (n = 47), chamomile oil was applied topically to the abdominal region once the patient was stable. The placebo group (arm B) (n = 47) received placebo oil, and the control group (arm C) (n = 48) had no intervention. A recovery program was used after surgery for all participants. The primary outcome was time to first flatus. Secondary outcomes were time to bowel sounds, defecation, return of appetite, hospital stay, and rates of nausea and vomiting and abdominal pain.
Findings: Time to first flatus was significantly shorter in arm A (arm A vs. B, P < 0.001; arm A vs. C, P < 0.001). In addition, time to first bowel sounds (arm A vs. B, P < 0.001; arm A vs. C, P < 0.001) and time to return of appetite (arm A vs. B, P < 0.001; arm A vs. C, P < 0.001) were significantly shorter in arm A. The times from surgery to first defecation were shorter in arm A than in arms B and C; however, these differences were not statistically significant among the three groups.
Conclusion: These results suggest that topical chamomile oil has a potential therapeutic effect on gastrointestinal motility and can reduce the duration of POI.
abstract_id: PUBMED:20013603
Gum chewing slightly enhances early recovery from postoperative ileus after cesarean section: results of a prospective, randomized, controlled trial. Postoperative ileus is one of the common problems after abdominal surgery. It contributes to delayed recovery and prolongs hospital stay. Sham feeding, such as gum chewing, may accelerate the return of bowel function and reduce morbidity and length of hospital stay. This study aimed to determine whether gum chewing in the immediate postoperative period facilitates the return of bowel function in cesarean-delivery patients. Three hundred eighty-eight patients who underwent cesarean delivery were randomly assigned to a gum-chewing group (group G, N = 193) or a control group (group C, N = 195). Demographic data, duration of surgery, type of anesthesia, and time of discharge from hospital were recorded. Patients in the gum-chewing group chewed gum three times per day, beginning as soon as they returned from the operating theater to the ward and continuing until they defecated or were discharged. Patients were asked to chew gum for at least half an hour each time. The t test and Pearson chi-square test were used for statistical analysis. Groups were comparable in age, weight, height, weeks of gestation, duration of surgery, and type of anesthesia. Bowel sounds returned 5 hours earlier in the gum-chewing group (mean 18.2 hours) than in the control group (mean 23.2 hours). Passage of flatus occurred 5.3 hours earlier in group G (mean 34.6 hours) than in group C (mean 39.9 hours). Fewer patients had mild ileus symptoms in group G (12%) than in group C (21%), a difference of 9 percentage points. These differences between the two groups were all highly significant (p < 0.001). Gum chewing was easily tolerated without any complications. Gum chewing is an inexpensive, convenient, and physiological method of enhancing the recovery of bowel function, but it may not facilitate early hospital discharge, lactation, or defecation.
abstract_id: PUBMED:35750598
The Effect of Xylitol Gum Chewing After Cesarean on Bowel Functions: A Randomized Controlled Study. Purpose: Postoperative ileus after cesarean is a common complication. Delayed return of bowel function after cesarean affects the early parenthood experience, increases the need for analgesics, extends the duration of hospital stay, and increases costs. This study aimed to explore the effect of xylitol gum chewing after cesarean on bowel functions.
Design: A prospective randomized controlled trial was conducted with subjects beginning two hours after cesarean delivery in a maternity ward.
Methods: A total of 69 women were randomized to the xylitol gum chewing group (n = 23), the nonxylitol gum chewing group (n = 23), or the control group (n = 23). Data were collected from the women who agreed to participate and met the inclusion criteria. Starting with the second hour after the cesarean, women in the xylitol and nonxylitol gum chewing groups were asked to chew gum for 15 minutes every 2 hours. Gum chewing continued for a minimum of 8 hours and a maximum of 12 hours (until midnight). The times to first bowel sounds, flatus, defecation, and feeling of hunger, and the length of hospital stay, were compared.
Findings: There was no difference in the time to first bowel sounds (P = .070) or the first feeling of hunger (P = .098) among the groups. The first flatus occurred earlier in the xylitol gum chewing group than in the control group (17.35 ± 6.27 vs 11.18 ± 5.39 hours, P = .003); the first defecation occurred earlier in the xylitol gum chewing group than in the nonxylitol gum chewing group (44.05 ± 9.4 vs 37.58 ± 9.96 hours, P = .022). The xylitol gum chewing group (51.77 ± 10.3 vs 46.33 ± 8.11 hours, P = .028) and the nonxylitol gum chewing group (51.77 ± 10.3 vs 50.23 ± 5.72 hours, P = .024) were discharged from the hospital earlier than the control group.
Conclusions: In nursing care after cesarean, early xylitol gum chewing is an efficient and user-friendly method for promoting earlier return of bowel function. Its inclusion in routine post-cesarean nursing care is suggested.
abstract_id: PUBMED:28483182
Effect of Acupressure on Symptoms of Postoperative Ileus After Cesarean Section. Postoperative ileus (POI) is a common complication after most abdominal surgeries including cesarean section. It is associated with longer hospitalization and increased medical costs. This study is a randomized controlled trial investigating the effect of acupressure, a low-cost noninvasive traditional treatment, on POI symptoms after cesarean section. A total of 120 patients were randomly divided into two groups; the treatment group received two sessions of acupressure (one hour after arrival in the women's division and 3 hours after the first session), each lasting 20 minutes. The times to flatus and defecation, time to presence of bowel sounds, and duration of postoperative bed rest were monitored. Patients in the treatment group had a shorter time to presence of bowel sounds compared with those in the control group (p<0.001), as well as a shorter time to first passage of flatus (p<0.001) and shorter postoperative bed rest (p=0.005). However, the difference in time to first defecation was not statistically significant (p=0.311). Acupressure has potential positive impacts on attenuating POI symptoms after cesarean section and can be used as a low-cost, noninvasive nursing intervention to reduce POI incidence and intensity after cesarean section.
Answer: The return of bowel sounds has traditionally been used as an indicator of the resolution of postoperative ileus (POI) after abdominal surgery. However, evidence suggests that this practice may not be reliable. A study comparing the time of return of bowel sounds to the time of first postoperative flatus (another indicator of the end of POI) found no significant correlation between the two, indicating that bowel sounds are not a reliable indicator of the end of POI (PUBMED:22866434). Additionally, another study found that the return of bowel sounds correlated strongly with the amount of morphine used, which affects gut motility, rather than being a direct indicator of bowel function recovery (PUBMED:10696888). These findings support the notion that listening for bowel sounds may not be the most accurate method to assess the resolution of POI.
Moreover, other interventions such as thoracic epidural analgesia have been shown to have a more significant impact on the resolution of ileus and reduction of surgical pain compared to traditional patient-controlled analgesia (PUBMED:8862379). Alternative methods like transcutaneous electric nerve stimulation have also been observed to hasten the return of bowel sounds and passage of flatus (PUBMED:6190233). Furthermore, the surgical approach, such as open technique with minimal manipulation of intestinal loops, has been found to influence the resolution of POI, with minimal manipulation showing similar results to laparoscopic techniques (PUBMED:24140940).
Other studies have explored the use of erythromycin, chamomile oil, gum chewing, and acupressure as interventions to enhance bowel function recovery after surgery. However, erythromycin did not show a significant difference in the onset of bowel sounds or passage of flatus compared to placebo (PUBMED:17445634). In contrast, chamomile oil application, gum chewing, and acupressure were associated with earlier return of bowel sounds and other bowel functions (PUBMED:30211237, PUBMED:20013603, PUBMED:35750598, PUBMED:28483182).
In light of this evidence, it may be time to reconsider the reliance on the return of bowel sounds as an indicator of the end of POI and to adopt more evidence-based practices that have been shown to have a more direct and significant impact on bowel function recovery. |
Instruction: Do e-mail alerts of new research increase knowledge translation?
Abstracts:
abstract_id: PUBMED:21099399
Do e-mail alerts of new research increase knowledge translation? A "Nephrology Now" randomized control trial. Purpose: As the volume of medical literature increases exponentially, maintaining current clinical practice is becoming more difficult. Multiple, Internet-based journal clubs and alert services have recently emerged. The purpose of this study is to determine whether the use of the e-mail alert service, Nephrology Now, increases knowledge translation regarding current nephrology literature.
Method: Nephrology Now is a nonprofit, monthly e-mail alert service that highlights clinically relevant articles in nephrology. In 2007-2008, the authors randomized 1,683 subscribers into two different groups receiving select intervention articles, and then they used an online survey to assess both groups on their familiarity with the articles and their acquisition of knowledge.
Results: Of the randomized subscribers, 803 (47.7%) completed surveys, and the two groups had a similar number of responses (401 and 402, respectively). The authors noted no differences in baseline characteristics between the two groups. Familiarity increased as a result of the Nephrology Now alerts (0.23 ± 0.087 units on a familiarity scale; 95% confidence interval [CI]: 0.06-0.41; P = .007) especially in physicians (multivariate odds ratio 1.83; P = .0002). No detectable improvement in knowledge occurred (0.03 ± 0.083 units on a knowledge scale; 95% CI: -0.13 to 0.20; P = .687).
Conclusions: An e-mail alert service of new literature improved a component of knowledge translation--familiarity--but not knowledge acquisition in a large, randomized, international population.
abstract_id: PUBMED:25414613
Differences in response rates between mail, e-mail, and telephone follow-up in hand surgery research. Background: There is a need to determine the difference in response to mail, e-mail, and phone in clinical research surveys.
Methods: We enrolled 150 new and follow-up patients presenting to our hand and upper extremity department. Patients were assigned to complete a survey by mail, e-mail, or phone 3 months after enrollment, altering the follow-up method every 5 patients, until we had 3 groups of 50 patients. At initial enrollment and at 3 month follow-up (range 2-5 months), patients completed the short version of the Disabilities of the Arm, Shoulder, and Hand questionnaire (QuickDASH), the short version of the Patient Health Questionnaire (PHQ-2), the Pain Self-Efficacy Questionnaire (PSEQ), and rated their pain intensity.
Results: The percentage of patients who completed the survey was 34% for mail, 24% for e-mail, and 80% for phone. Factors associated with responding to the survey were older age, nonsmoking, and lower pain intensity. Working full-time was associated with not responding.
Conclusions: The response rate to survey by phone is significantly higher than by mail or e-mail. Younger age, smoking, higher pain intensity, and working full-time are associated with not responding.
Type Of Study/level Of Evidence: Prognostic I.
abstract_id: PUBMED:25803184
Advantages and disadvantages of educational email alerts for family physicians: viewpoint. Background: Electronic knowledge resources constitute an important channel for accredited Continuing Medical Education (CME) activities. However, email usage for educational purposes is controversial. On the one hand, family physicians become aware of new information, confirm what they already know, and obtain reassurance by reading educational email alerts. Email alerts can also encourage physicians to search Web-based resources. On the other hand, technical difficulties and privacy issues are common obstacles.
Objective: The purpose of this discussion paper, informed by a literature review and a small qualitative study, was to understand family physicians' knowledge, attitudes, and behavior in regard to email in general and educational emails in particular, and to explore the advantages and disadvantages of educational email alerts. In addition, we documented participants' suggestions to improve email alert services for CME.
Methods: We conducted a qualitative descriptive study using the "Knowledge, Attitude, Behavior" model. We conducted semi-structured face-to-face interviews with 15 family physicians. We analyzed the collected data using inductive-deductive thematic qualitative data analysis.
Results: All 15 participants scanned and prioritized their email, and 13 of them checked their email daily. Participants mentioned (1) advantages of educational email alerts such as saving time, convenience and valid information, and (2) disadvantages such as an overwhelming number of emails and irrelevance. They offered suggestions to improve educational email.
Conclusions: The advantages of email alerts seem to compensate for their disadvantages. Suggestions proposed by family physicians can help to improve educational email alerts.
abstract_id: PUBMED:31922044
Enhancing the use of e-mail in scientific research and in the academy. From professors overwhelmed by anxiety-driven e-mails from students, through faculty and administrative staff wasting valued time on e-mail minutiae, misuse of electronic mail in the academy has become ubiquitous. After a brief overview of the unique features of e-mail communication, this study provides guidelines to plan new educational activities on purposeful utilization of electronic mail in universities and research centres of the digital era. The overall aim is to prioritize scholarly deep work by focusing on teaching and research, freeing working time currently wasted on unproductive use of e-mail.
abstract_id: PUBMED:16305766
E-mail communication in general practice. Introduction: Our aim was to examine how e-mail communication in general practice affects the doctor/patient relationship and doctors' workload.
Materials And Methods: E-mail activity was registered in three Danish GP surgeries over a period of 12 months, and focus group interviews were conducted with patients and doctors.
Results: The practices received 191 e-mails per 1,000 patients per year. The male/female ratio was 0.9 to 1. The average patient age was 47.2 years; the oldest user was 88 years old. Qualitative data were categorized under various headings, such as need, expectations, usability, workload, patient service, the communication part of professionalism, worries and barriers against use.
Discussion: E-mail is a requested and useful communication form between doctors and patients in general practice. But it requires guidance and structured communication. It works best when the doctor and patient know each other. It is being used less than might be expected. The barriers from patients' point of view are lack of knowledge of and access to computers, lack of awareness of the possibility of contacting the physician by e-mail and the absence of a personal invitation for use by the personal physician. It is a matter of concern that patients apparently don't read the recommendations concerning proper use of e-mail communication, even though they are clearly described on the Web site and patients have to click "has been read" to be able to continue.
abstract_id: PUBMED:25969410
The Role of Integrated Knowledge Translation in Intervention Research. There is widespread recognition across the full range of applied research disciplines, including health and social services, about the challenges of integrating scientifically derived research evidence into policy and/or practice decisions. These "disconnects" or "knowledge-practice gaps" between research production and use have spawned a new research field, most commonly known as either "implementation science" or "knowledge translation." The present paper will review key concepts in this area, with a particular focus on "integrated knowledge translation" (IKT)-which focuses on researcher-knowledge user partnership-in the area of mental health and prevention of violence against women and children using case examples from completed and ongoing work. A key distinction is made between the practice of KT (disseminating, communicating, etc.), and the science of KT, i.e., research regarding effective KT approaches. We conclude with a discussion of the relevance of IKT for mental health intervention research with children and adolescents.
abstract_id: PUBMED:16802419
E-mail in psychotherapy--an aftercare model via electronic mail for psychotherapy inpatients. We introduce an aftercare program for psychotherapy inpatients based on regular communication via E-mail. The organizational and operational structure of the program is described within the context of computer-mediated communication. First results on utilization and acceptance are reported. Compared with patients who did not participate in either aftercare program of the clinic, the E-mail participants were younger and more highly educated. Inpatient treatment of the participants was three days shorter in duration than that of non-participants. Both groups were similar with regard to symptom distress at discharge from hospital. A low dropout rate of 8% and high activity and satisfaction emphasize the positive acceptance of the program. Therapists' E-mail activity turned out to be important for the participants. Neither age, internet experience, symptom-related variables, nor the participants' own E-mail activity was associated with their evaluation of the new service. Based on these first positive experiences, the perspectives of using E-mail in psychotherapy are discussed.
abstract_id: PUBMED:35303224
Supporting researchers in knowledge translation and dissemination of their research to increase usability and impact. Purpose: One of the key areas of delivery of the 'Action Plan for Health Research 2019-2029', for the Health Service Executive (HSE) in Ireland, is adding value and using data and knowledge, including health-related quality of life (HRQoL), for improved health care, service delivery and better population health and wellbeing. The development of governance, management and support framework and mechanisms will provide a structure for ensuring research is relevant to the organisation's service plan, well designed, has a clear plan for dissemination and translation of knowledge, and minimises research waste. Developing a process for the translation, dissemination and impact of research is part of the approach to improving translation of research into practice and aligning it with knowledge gaps. A project was undertaken to develop a clear, unified, universally applicable approach for the translation, dissemination, and impact of research undertaken by HSE staff and commissioned, sponsored, or hosted by the organisation. This included the development of guidance, training, and information for researchers.
Methods: Through an iterative process, an interdisciplinary working group of experts in knowledge translation (KT), implementation science, quality improvement and research management, identified KT frameworks and tools to form a KT, dissemination, and impact process for the HSE. This involved a literature review, screening of 247 KT theories, models, and frameworks (TMFs), review of 18 TMFs selected as usable and applicable to the HSE, selection of 11 for further review, and final review of 6 TMFs in a consensus workshop. An anonymous online survey of HSE researchers, consisting of a mixture of multiple choice and free text questions, was undertaken to inform the development of the guidance and training.
Results: A pilot of the KT process and guidance, involving HSE researchers testing its use at various stages of their research, demonstrated the need to guide researchers through planning, stakeholder engagement, and disseminating research knowledge, and provide information that could easily be understood by novice as well as more experienced researchers. A survey of all active researchers across the organisation identified their support and knowledge requirements and led to the development of accompanying guidance to support researchers in the use of the process. Researchers of all levels reported that they struggled to engage with stakeholders, including evidence users and policy makers, to optimise the impact of their research. They wanted tools that would support better engagement and maximise the value of KT. As a result of the project a range of information, guidance, and training resources have been developed.
Conclusion: KT is a complex area and researchers need support to ensure they maximise the value of their research. The KT process outlined enables the distilling of a clear message, provides a process to engage with stakeholders, create a plan to incorporate local and political context, and can show a means to evaluate how much the findings are applied in practice. This is a beneficial application of KT in the field of patient reported outcomes. In implementing this work, we have reinforced the message that stakeholder engagement is crucial from the start of the research study and increases engagement in, and ownership of, the research knowledge.
abstract_id: PUBMED:26970652
Unpacking knowledge translation in participatory research: a micro-level study. Objectives: Funding bodies, policy makers, researchers and clinicians are seeking strategies to increase the translation of knowledge between research and practice. Participatory research encompasses a range of approaches for clinicians' involvement in research in the hope of increasing the relevance and usability of research. Our aim was to explore how knowledge is translated and integrated in participants' presentations and negotiations about knowledge.
Methods: Twelve collaboration meetings were observed, and discussions between researchers and clinicians were recorded. The material was examined using the following analytical terms: knowledge object, knowledge form, knowledge position and knowledge tasks.
Results: We identified a recurring rhetorical pattern in translational processes that we call 'relevance testing': a strategy by which the participants attempt to create coherence and identify relevance across different contexts. The limitation of this translational strategy was a tendency to reinforce a 'two-communities' logic: re-establishing the separated worlds and rationales between clinicians and researchers. The 'translational work' that unfolds during discussions remains implicit. It may be that participants are unable to explicitly address and identify the knowledge translation processes because they lack necessary conceptual tools.
Conclusions: Our results contribute to increased awareness about translational processes and provide a language through which barriers to translation can be addressed.
abstract_id: PUBMED:33871866
Changing Health-Related Behaviors 4: Realizing Impact of Health Research Through Knowledge Translation. Knowledge translation (KT) is critical to realizing real-world impacts from health research by reducing the amount of time it takes for evidence to inform practice. One way this is achieved is by engaging with knowledge users throughout the process to ensure that research responds to their needs. It is also important to share the study results in a way that is useful, accessible, and relevant to knowledge user groups. KT planning involves multiple categories described in templates that are available online and referenced in the text. Common knowledge translation challenges and suggested solutions, as well as real-world KT examples, are also provided.
Answer: Based on the provided abstracts, e-mail alerts of new research do increase a component of knowledge translation, specifically familiarity with current literature, but they do not necessarily improve knowledge acquisition. The "Nephrology Now" randomized control trial found that while e-mail alerts increased familiarity with nephrology literature, there was no detectable improvement in knowledge acquisition among the subscribers (PUBMED:21099399). This suggests that while e-mail alerts can help clinicians and researchers stay informed about new developments, they may not be sufficient on their own to enhance understanding or application of the new information. |
Instruction: Langerhans Cell Histiocytosis (LCH) in Egyptian Children: Does Reactivation Affect the Outcome?
Abstracts:
abstract_id: PUBMED:26133729
Langerhans Cell Histiocytosis (LCH) in Egyptian Children: Does Reactivation Affect the Outcome? Objective: To report a single centre outcome of management of Langerhans cell histiocytosis (LCH), a clonal disease with involvement of various body systems.
Methods: Retrospective analysis of 80 pediatric LCH patients at Children Cancer Hospital-Egypt between July 2007 and December 2011 was performed. Patients were stratified and treated according to LCH III protocol. The median follow up period was 42 mo (range: 1.18 to 71 mo).
Results: At wk 6 and 12, 'better' response was obtained in 61 (76 %) and 74 (93 %) patients respectively. Afterwards, reactivation occurred in 25 patients (38 %); of these, multiple episodes occurred in 5 patients (6.25 %) and were managed by repetition of 1st line treatment once or more. The 5 y overall survival (OS) and event free survival (EFS) were 96.3 and 55 % respectively. At last follow up, 'better' status was reached in 70 patients, with 3 each in 'intermediate' and 'worse' status. Three high risk patients died and one patient was lost to follow up.
Conclusions: In a single Egyptian pediatric LCH experience, the response to treatment is satisfactory and survival remains the rule, except in high-risk organ disease, which still needs a new molecule for salvage. However, in cases of multiple reactivations, patients do well with repetition of the 1st line of treatment with or without methotrexate.
abstract_id: PUBMED:32706277
Egyptian experience in Langerhans cells histiocytosis: frequent multisystem affection and reactivation rates. Background: Histiocytoses are unique disorders; their clinical presentations vary from self-healing lesions to life-threatening disseminated disease. Objectives: We aimed to evaluate the different clinical presentations, frequency of reactivations, and treatment outcome of Langerhans cell histiocytosis among Egyptian children. Methods: We retrospectively analyzed the data of 37 Langerhans cell histiocytosis (LCH) patients registered at Ain Shams University Children's Hospital for clinicopathological features, treatment modalities and their outcomes. Results: Twenty-seven (73%) of the studied patients with LCH had multisystem disease (MS); 24 (88.9%) of them had risk organ involvement (MS RO+) and only 3 had no risk organ involvement (MS RO-). Most of the patients received LCH III protocols. Eleven patients (29.7%) had reactivations, with a median time to reactivation of 17 months (IQR 5-23). Reactivation rates were 40% and 50% in patients with no evidence of active disease (NAD) and those with active disease better (AD better) at the week 6 evaluation, respectively (p = 0.71). We report 9 deaths (all had MS RO+; two died after reactivation and 7 had progressive disease). The 5-year EFS and OS were 49.4% and 81.2% respectively. Risk stratification did not significantly affect the EFS or OS (p = 0.64 and p = 0.5 respectively). Conclusion: A high reactivation rate was encountered in children with LCH and MS RO+ irrespective of the 6-week response to induction therapy. The high mortality in patients with progressive disease necessitates possible earlier aggressive salvage in this group.
abstract_id: PUBMED:35741837
Identification of Putative SNP Markers Associated with Resistance to Egyptian Loose Smut Race(s) in Spring Barley. Loose smut (LS) disease is a serious problem that affects barley yield. Breeding of resistant cultivars and identifying new genes controlling LS have received very little attention. Therefore, it is important to understand the genetic basis of LS control in order to genetically improve LS resistance. To address this challenge, a set of 57 highly diverse barley genotypes were inoculated with Egyptian loose smut race(s) and the infected seeds/plants were evaluated in two growing seasons. Loose smut resistance (%) was scored on each genotype. High genetic variation was found among all tested genotypes, indicating considerable differences in LS resistance that can be used for breeding. The broad-sense heritability (H2) of LS was high (0.95). Moreover, genotyping-by-sequencing (GBS) was performed on all genotypes and generated 16,966 SNP markers, which were used for genetic association analysis with single-marker analysis. The analysis identified 27 significant SNPs distributed across all seven chromosomes that were associated with LS resistance. One SNP (S6_17854595) was located within the HORVU6Hr1G010050 gene model that encodes a protein kinase domain-containing protein (similar to the Un8 LS resistance gene, which contains two kinase domains). A TaqMan marker (0751D06 F6/R6) for the Un8 gene was tested in the diverse collection. The results indicated that none of the Egyptian genotypes had the Un8 gene. The results of this study provide new information on the genetic control of LS resistance. Moreover, good resistance genotypes were identified and can be used for breeding cultivars with improved resistance to Egyptian LS.
abstract_id: PUBMED:22816232
A probable case of Hand-Schueller-Christian's disease in an Egyptian mummy revealed by CT and MR investigation of a dry mummy. The challenging mission of paleopathologists is to be capable of diagnosing a disease solely on the basis of limited information gained by means of one or more paleodiagnostic techniques. In this study, a radiologic, anthropologic and paleopathologic analysis of an ancient Egyptian mummy was conducted using X-rays, CT and MR. An ancient Egyptian mummy ("Mistress of the house", Archeological Museum, Zagreb, Croatia) underwent digital radiography, computed tomography and magnetic resonance imaging employing a 3-dimensional ultra-short-echo-time (UTE) sequence, which allows imaging of ancient dry tissue. Morphological observations on the skull and pelvis, the stages of epiphyseal union and dental wear indicated that the remains are those of a young adult male. Multiple osseous lytic lesions were observed throughout the spine as well as on the frontal, parietal, and occipital bones, the orbital wall and the sella turcica of the sphenoid. Considering the sex and age of the individual and the features of the lesions, the authors propose the diagnosis of Hand-Schueller-Christian's disease. This is the first study to have effectively used MR images in the differential diagnosis of a disease. It also confirmed the value of CT in revealing central nervous system involvement by detecting skeletal lesions. Although the mummy was previously dated to the 3rd century B.C. based on the properties of the sarcophagi, the sex of the mummy suggests that it was most probably transferred into these sarcophagi in later times. The mummification techniques used and radiometric data (C14) dated it to 900-790 B.C.
abstract_id: PUBMED:30247183
Outcome of High-risk Langerhans Cell Histiocytosis (LCH) in Egyptian Children, Does Intermediate-dose Methotrexate Improve the Outcome? High-risk multisystem organ (RO+) Langerhans cell histiocytosis (LCH) has the lowest survival. We present the outcome of RO+ LCH in a single pediatric center. Fifty RO+ LCH patients, treated between 07/2007 and 07/2015, were retrospectively analyzed. Induction with vinblastine (VBL) and prednisone (PRED) plus intermediate-dose methotrexate (idMTX) was adopted until 2012 (n=20), after which idMTX was omitted (n=30). The 3-year overall survival (OS) of the MTX and non-MTX groups was 75% and 63%, respectively, P=0.537, while the event-free survival (EFS) was 36.9% and 13.2%, respectively, P=0.005. At week 12 of induction, "better status" was obtained in 80% of those receiving MTX and 55% of those who did not. The statistically significant factors associated with both poor OS and EFS were trihemopoietic cytopenias, hepatic dysfunction, tri RO+ combination, and single induction. The factors associated with disease progression (DP) on induction were trihemopoietic cytopenias, hepatic dysfunction, and lack of idMTX, while the factors associated with disease reactivation (REA) were the autumn/winter season, lung disease, male sex, and idMTX. The 1-year OS was markedly affected by the occurrence of DP versus REA versus neither, at 47%, 93%, and 95%, respectively, P=0.001. In conclusion, idMTX is associated with better EFS. DP on induction carries a dismal prognosis compared with disease REA afterwards. Risk stratification should highlight the role of trihemopoietic cytopenias, hepatic dysfunction, tri RO+, central nervous system risk site, and lung involvement.
abstract_id: PUBMED:15562575
Current status and outcome of pediatric liver transplantation. In this article the diagnostic indications for hepatic transplantation are addressed in detail. The outcome of liver transplantation is also examined, including the impact of the following factors on survival: age and weight at transplantation, type of liver disease, and size of liver allograft, including living related transplantation. Morbidity and quality of life after transplantation are other aspects reviewed in this chapter.
abstract_id: PUBMED:8929511
Primary Langerhans' cell histiocytosis of the central nervous system with fatal outcome. Case report. An unusual case of primary parenchymal Langerhans' cell histiocytosis of the central nervous system is reported. The definitive diagnosis was obtained by ultrastructural detection of Birbeck granules and by immunohistochemical evidence of CD1a expression. Despite complete surgical resection, there was an early recurrence with multiple central nervous system metastases leading to a fatal outcome.
abstract_id: PUBMED:19833758
The localized scleroderma skin severity index and physician global assessment of disease activity: a work in progress toward development of localized scleroderma outcome measures. Objective: To develop and evaluate a Localized Scleroderma (LS) Skin Severity Index (LoSSI) and global assessments' clinimetric property and effect on quality of life (QOL).
Methods: A 3-phase study was conducted. The first phase involved 15 patients with LS and 14 examiners who assessed LoSSI [surface area (SA), erythema (ER), skin thickness (ST), and new lesion/extension (N/E)] twice for inter/intrarater reliability. Patient global assessment of disease severity (PtGA-S) and Children's Dermatology Life Quality Index (CDLQI) were collected for intrarater reliability evaluation. The second phase aimed to develop clinical determinants for physician global assessment of disease activity (PhysGA-A) and to assess its content validity. The third phase involved 2 examiners assessing LoSSI and PhysGA-A on 27 patients. The effect of training on improving reliability/validity, and the sensitivity to change of the LoSSI and PhysGA-A, were determined.
Results: Interrater reliability was excellent for ER [intraclass correlation coefficient (ICC) 0.71], ST (ICC 0.70), LoSSI (ICC 0.80), and PhysGA-A (ICC 0.90) but poor for SA (ICC 0.35); thus, LoSSI was modified to mLoSSI. Examiners' experience did not affect the scores, but training/practice improved reliability. Intrarater reliability was excellent for ER, ST, and LoSSI (Spearman's rho = 0.71-0.89) and moderate for SA. PtGA-S and CDLQI showed good intrarater agreement (ICC 0.63 and 0.80). mLoSSI correlated moderately with PhysGA-A and PtGA-S. Both mLoSSI and PhysGA-A were sensitive to change following therapy.
Conclusion: mLoSSI and PhysGA-A are reliable and valid tools for assessing LS disease severity and show high sensitivity to detect change over time. These tools are feasible for use in routine clinical practice. They should be considered for inclusion in a core set of LS outcome measures for clinical trials.
abstract_id: PUBMED:21277455
Infiltrative lung diseases in pregnancy. Pregnancy may affect the diagnosis, management, and outcome of infiltrative lung disease (ILD). Conversely, ILD may affect pregnancy. ILD may occur as a result of drugs administered commonly or specifically during pregnancy. Most ILDs predominate in patients older than 40 years and are thus rare in pregnant women. During pregnancy ILD may arise de novo and preexisting ILD may be exacerbated or significantly worsened. Some ILDs generally do not alter the management of pregnancy, labor, or delivery. Preexisting ILD no longer contraindicates pregnancy systematically, but thorough evaluation of ILD before pregnancy is required to identify potential contraindications and adapt monitoring.
abstract_id: PUBMED:17456920
Langerhans cell histiocytosis of the mandible in a six-year-old child. Histiocytosis is a term applied to a group of rare disorders of the reticuloendothelial system. Eosinophilic granuloma, the most benign and localized of the three Langerhans cell histiocytosis entities, may be solitary or multiple. Eosinophilic granuloma can affect almost any bone, but commonly involves the mandible when the jaws are affected. Conventional treatment of LCH is with surgery, radiotherapy, chemotherapy and steroid injections, alone or in combination. Spontaneous regression of localized disease has also been reported. We report a six-and-a-half-year-old patient with Langerhans cell histiocytosis--a solitary eosinophilic granuloma of the mandible that initially regressed but rapidly recurred even after radical treatment and had a fatal outcome.
Answer: The studies on Langerhans Cell Histiocytosis (LCH) in Egyptian children suggest that reactivation does indeed affect the outcome of the disease. In one study, reactivation occurred in 25 patients (38%), and multiple episodes occurred in 5 patients (6.25%). These patients were managed by repeating the first line of treatment with or without methotrexate. The 5-year overall survival (OS) and event-free survival (EFS) were 96.3% and 55% respectively, indicating that while survival rates were high, reactivation was a significant event in the disease course (PUBMED:26133729).
Another study reported a high reactivation rate in children with LCH, particularly among those with multisystem disease with risk organ involvement (MS RO+). Reactivation rates were 40% and 50% in patients with no evidence of active disease (NAD) and those with active disease better (AD better) at week 6 evaluation respectively. The study also noted a high mortality rate in patients with progressive disease, suggesting that reactivation and disease progression are critical factors in patient outcomes (PUBMED:32706277).
Furthermore, a study on high-risk multisystem organ (RO+) LCH in Egyptian children found that intermediate-dose methotrexate (idMTX) was associated with better event-free survival (EFS). The occurrence of disease progression (DP) on induction was of dismal prognosis compared to disease reactivations (REA) afterwards. This indicates that while reactivations are concerning, disease progression at induction is a more severe indicator of poor outcome (PUBMED:30247183).
In summary, reactivation in LCH patients does affect the outcome, with several studies indicating that reactivation is associated with a significant impact on event-free survival and overall prognosis. The management of reactivations and disease progression remains a critical aspect of improving patient outcomes in Egyptian children with LCH. |
Instruction: Does first episode polarity predict risk for suicide attempt in bipolar disorder?
Abstracts:
abstract_id: PUBMED:20046381
Polarity of the first episode and time to diagnosis of bipolar I disorder. Objective: The current study explored the relationship between the polarity of the first episode and the timing of eventual diagnosis of bipolar I disorder, and associated clinical implications.
Methods: Twelve years of clinical data from the medical records of 258 inpatients meeting DSM-III-R or DSM-IV criteria for bipolar I disorder were analyzed. Subjects were divided into two groups according to the polarity of the first episode: those with depressive polarity (FE-D), and those with manic polarity (FE-M). Comparisons were made between the two groups on variables associated with the timing of diagnosis and related outcomes.
Results: In this population with bipolar I disorder, a significantly longer time lapse from the first major mood episode to the confirmed diagnosis was associated with the FE-D group compared to the FE-M group [5.6 (+/-6.1) vs. 2.5 (+/-5.5) years, p<0.001]. FE-D subjects tended to have prior diagnoses of schizophrenia and major depressive disorder, while FE-M subjects tended to have prior diagnoses of bipolar disorder and schizophrenia. A significantly higher rate of suicide attempts was associated with the FE-D group compared to the FE-M group (12.7 vs. 1.7%, p<0.001).
Conclusion: The results of this study indicate that first-episode depressive polarity is likely to be followed by a considerable delay until an eventual confirmed diagnosis of bipolar I disorder. Given that first-episode depressive patients are particularly vulnerable to unfavorable clinical outcomes such as suicide attempts, a more systematic approach is needed to differentiate bipolar disorder among depressed patients in their early stages.
abstract_id: PUBMED:17434597
Does first episode polarity predict risk for suicide attempt in bipolar disorder? Background: Defining bipolar disorder (BD) subtypes with increased risk of suicidal behavior may help clinical management. We tested the hypothesis that the polarity of a patient's first mood episode would be a marker for BD subtypes with differential risk for suicidality.
Methods: One hundred thirteen subjects with DSM-IV defined BD were classified based on whether their first reported episode was manic/hypomanic (FM) or depressed (FD). They were compared on demographic and clinical variables. Logistic regression adjusting for potential confounds tested the association between first episode polarity and history of suicide attempt.
Results: Multiple logistic regression analysis showed that FD group membership was associated with eightfold odds of a past suicide attempt, adjusting for years ill and total number of lifetime major depressive episodes.
Limitations: Sample size, retrospective design, recall bias, assessment during a mood episode, and imprecise recall of hypomania.
Conclusions: Polarity of patients' first reported mood episode suggested a depression-prone subtype with a greater probability of past suicide attempt. The FM group had more alcoholism and psychosis, but less likelihood of past suicide attempt. Validation of these putative subtypes requires prospective study.
abstract_id: PUBMED:36801515
Interactions between mood and paranoid symptoms affect suicidality in first-episode affective psychoses. Background: Suicide prevention is a major challenge in the treatment of first-episode affective psychoses. The literature reports that combinations of manic, depressive and paranoid symptoms, which may interact, are associated with an increased risk of suicide. The present study investigated whether interactions between manic, depressive and paranoid symptoms affected suicidality in first-episode affective psychoses.
Methods: We prospectively studied 380 first-episode psychosis patients enrolled in an early intervention programme and diagnosed with affective or non-affective psychoses. We compared intensity and presence of suicidal thoughts and occurrence of suicide attempts over a three-year follow-up period and investigated the impact of interactions between manic, depressive and paranoid symptoms on level of suicidality.
Results: At 12 months follow-up, we observed a higher level of suicidal thoughts and higher occurrence of suicide attempts among the affective psychoses patients compared to non-affective psychoses patients. Combined presence of either depressive and paranoid symptoms, or manic and paranoid symptoms, was significantly associated with increased suicidal thoughts. However, the combination of depressive and manic symptoms showed a significant negative association with suicidal thoughts.
Conclusions: This study suggests that paranoid symptoms combined with either manic or depressive symptoms are associated with an increased risk of suicide in first-episode affective psychoses. Detailed assessment of these dimensions is therefore warranted in first-episode affective patients and integrated treatment should be adapted to increased suicidal risk, even if patients do not display full-blown depressive or manic syndromes.
abstract_id: PUBMED:27936451
Onset polarity in bipolar disorder: A strong association between first depressive episode and suicide attempts. Background: The role of onset polarity (OP) in patients with bipolar disorder (BD) has been increasingly investigated over the last few years, for its clinical, prognostic, and therapeutic implications. The present study sought to assess whether OP was associated with specific correlates, in particular with a differential suicidal risk in BD patients.
Methods: A sample of 362 recovered BD patients was dichotomized by OP: depressed (DO) or elevated onset (EO: hypomanic/manic/mixed). Socio-demographic and clinical variables were compared between the subgroups. Additionally, binary logistic regression was performed to assess features associated with OP.
Results: DO compared with EO patients had older current age and were more often female, but less often single and unemployed. Clinically, DO versus EO had a more than doubled rate of suicide attempts, as well as significantly higher rates of BD II diagnosis, lifetime stressful events, current psychotropics and antidepressants use, longer duration of the most recent episode (more often depressive), but lower rates of psychosis and involuntary commitments.
Limitations: Retrospective design limiting the accurate assessment of total number of prior episodes of each polarity.
Conclusions: Our results support the influence of OP on BD course and outcome. Moreover, in light of the relationship between DO and a higher rate of suicide attempts, further investigation may help clinicians in identifying patients at higher risk of suicide attempts.
abstract_id: PUBMED:20855116
Correlates of first-episode polarity in a French cohort of 1089 bipolar I disorder patients: role of temperaments and triggering events. Objectives: As only a few studies so far systematically reported on bipolar patients subtyped according to first-episode polarity, we took the opportunity of having at disposal a large sample of bipolar I patients to specify the characteristics of patients included in these subtypes, with a special focus on temperament and triggering events.
Methods: A total of 1089 consecutive DSM-IV bipolar I manic inpatients were subtyped in manic onset (MO), depressive onset (DO) and mixed onset (MXO), and assessed for demographic, illness course, clinical, psychometric, comorbidity and temperament characteristics.
Results: The main characteristics of MO patients were a hyperthymic temperamental predisposition, a first episode triggered by substance abuse and an illness course with pure, severe and psychotic mania. In comparison, DO patients had more depressive temperaments, a first episode triggered by stress and alcohol, an illness course with more episodes, cyclicity, suicide attempts, anxious comorbidity and residual symptoms. Although sharing characteristics with either MO or DO, MXO patients had more mixed episodes and cyclothymic temperament.
Limitations: The following are the limitations of this study: retrospective design, bias toward preferential enrolment of MO patients, and lack of information on the number and polarity of lifetime episodes.
Conclusions: Findings from this study tend to confirm most of the differences previously evidenced among patients subtyped according to first-episode polarity. Differences found in temperamental predisposition and illness onset triggering events are worth noting and may help target early preventive interventions as well as orientate the search for specific genetic risk factors.
abstract_id: PUBMED:20141803
Depressive onset episode of bipolar disorder: clinical and prognostic considerations. Both retrospective studies and prospective studies of high-risk individuals show that a high percentage of patients experience one or more depressive episodes before the diagnosis of bipolar disorder. Bipolar disorders with a depressive onset begin earlier than those with a manic onset and have a longer duration, a chronic course with frequent recurrences, a predominantly depressive polarity, a higher lifetime rate of suicidal behaviour, fewer psychotic symptoms and more rapid cycling. A relation between frequent rapid cycling and previous prescription of antidepressants has been suggested but not rigorously demonstrated. Thus, a high percentage of patients presenting with a first depressive episode will later develop bipolar disorder. Several risk factors for bipolarity have been identified and might be detected during each depressive episode by using standardised evaluations and, if necessary, family interviews. Among them, an early age at first episode, frequent recurrences, a family history of bipolar disorder, atypical features and hypomanic symptoms are particularly associated with the subsequent development of bipolar disorder. The impact of a high risk of bipolarity on drug prescription is unclear; however, intensified clinical monitoring and adjunctive psychoeducation can be strongly recommended.
abstract_id: PUBMED:17067865
Clinical correlates of first-episode polarity in bipolar disorder. Objective: To determine the clinical and long-term implications of mood polarity at illness onset.
Methods: During a 10-year follow-up prospective study, systematic clinical and outcome data were collected from 300 bipolar I and II patients. The sample was split into 2 groups according to the polarity of the onset episode (depressive onset [DO] vs manic/hypomanic onset [MO]). Clinical features and social functioning were compared between the 2 groups of patients.
Results: In our sample, 67% of the patients experienced a depressive onset. Depressive onset patients were more chronic than MO patients, with a higher number of total episodes and a longer duration of illness. Depressive onset patients experienced a higher number of depressive episodes than MO patients, who in turn had more manic episodes. Depressive onset patients made more suicide attempts, had a later illness onset, were less often hospitalized, and were less likely to develop psychotic symptoms. Depressive onset was more prevalent among bipolar II patients. Bipolar I patients with DO had more axis II comorbidity and were more susceptible to have a history of psychotic symptoms than bipolar II patients with DO.
Conclusion: The polarity at onset is a good predictor of the polarity of subsequent episodes over time. A depressive onset is twice as frequent as MO and carries more chronicity and cyclicity.
abstract_id: PUBMED:34975565
The Association Between Suicide Attempts, Anxiety, and Childhood Maltreatment Among Adolescents and Young Adults With First Depressive Episodes. Objective: Adolescents and young adults are susceptible to high-risk behaviors such as self-harm and suicide. However, the impact of childhood maltreatment on suicide attempts in adolescents and young adults with first episode of depression remains unclear. This study examined the association between suicide attempts and childhood maltreatment among adolescents and young adults with first depressive episodes. Methods: A total of 181 adolescents and young adults with first depressive episodes were included. The Child Trauma Questionnaire (CTQ), Beck Anxiety Inventory (BAI), and Patient Health Questionnaire-2 (PHQ-2) were used to assess childhood maltreatment and the severity of anxiety and depressive symptoms, respectively. The suicide item in the MINI-International Neuropsychiatric Interview (M.I.N.I.) 5.0 was used to assess the suicide attempts. Logistic regression analyses were used to explore the associated factors of suicide attempts. Results: The prevalence of SA in the total sample was 31.5% (95% CI = 24.9-38.1%). Multivariate logistic regression analyses revealed that the diagnosis of bipolar disorder (OR = 2.18, 95% CI = 1.07-4.40), smoking (OR = 2.64, 95% CI = 1.10-6.37), anxiety symptoms (OR = 1.05, 95% CI = 1.02-1.08), and childhood maltreatment (OR = 1.04, 95% CI = 1.01-1.07) were potential associated factors of SA. In addition, anxiety symptoms had a mediating effect on the relationship between childhood maltreatment and SA. Conclusion: Adolescents and young adults with first depressive episodes and having experiences of childhood maltreatment are at a high risk of suicide. The severity of anxiety symptoms may mediate the relation between childhood maltreatment and suicide attempts in this group of patients.
abstract_id: PUBMED:38063942
Factors associated with suicide attempts in the antecedent illness trajectory of bipolar disorder and schizophrenia. Background: Factors associated with suicide attempts during the antecedent illness trajectory of bipolar disorder (BD) and schizophrenia (SZ) are poorly understood.
Methods: Utilizing the Rochester Epidemiology Project, individuals born after 1985 in Olmsted County, MN, who presented with first-episode mania (FEM) or psychosis (FEP) and were subsequently diagnosed with BD or SZ were identified. Patient demographics, suicidal ideation with plan, self-harm, suicide attempts, psychiatric hospitalizations, substance use, and childhood adversities were quantified using the electronic health record. Analyses pooled the BD and SZ groups with a transdiagnostic approach, given that the two diseases were not yet differentiated. Factors associated with suicide attempts were examined using bivariate methods and multivariable logistic regression modeling.
Results: A total of 205 individuals with FEM or FEP (BD = 74, SZ = 131) were included. Suicide attempts were identified in 39 (19%) patients. Those with suicide attempts during the antecedent illness trajectory were more likely to be female, to be victims of domestic violence or bullying behavior, and to have higher rates of psychiatric hospitalizations, suicidal ideation with plan and/or self-harm, as well as alcohol, drug, and nicotine use before FEM/FEP onset. Based on multivariable logistic regression, three factors remained independently associated with suicide attempts: psychiatric hospitalization (OR = 5.84, 95% CI 2.09-16.33, p < 0.001), self-harm (OR = 3.46, 95% CI 1.29-9.30, p = 0.014), and nicotine use (OR = 3.02, 95% CI 1.17-7.76, p = 0.022).
Conclusion: Suicide attempts were prevalent during the antecedents of BD and SZ and were associated with several risk factors before FEM/FEP. Their clinical recognition could contribute to improved early prediction and prevention of suicide during the antecedent illness trajectory of BD and SZ.
abstract_id: PUBMED:19578682
Bipolar disorder first episode and suicidal behavior: are there differences according to type of suicide attempt? Objective: The objective of this study was to test the hypothesis that the polarity of the first mood episode may be a marker for suicidal behavior, particularly the violent subtype.
Method: One hundred and sixty-eight patients diagnosed with bipolar disorder (DSM-IV) were grouped according to type of first episode: depression or manic/hypomanic. Groups were compared for demographic and clinical variables. We performed logistic regression in order to test the association between first episode polarity and suicidal behavior.
Results: We found that depressed patients have a lifetime history of more suicide attempts. However, univariate analysis of number of suicide attempts showed that the best model fits the bipolar II subtype (mean square = 15.022; p = 0.010) and lifetime history of psychotic episodes (mean square = 17.359; p = 0.021). Subgrouping the suicide attempts by subtype (violent or non-violent) revealed that manic/hypomanic patients had a greater tendency toward attempting violent suicide (21.2 vs. 14.7%, χ2 = 7.028, p = 0.03). Multiple logistic regression analysis confirmed this result.
Conclusion: Depressed patients had more suicide attempts over time, which could be explained by the higher prevalence of bipolar II subtype in this group, whereas manic/hypomanic patients had a lifelong history of more frequent violent suicide attempts, not explained by any of the variables studied. Our results support the evidence that non-violent suicide attempters and violent suicide attempters tend to belong to different phenotypic groups.
Answer: Yes, first episode polarity does predict risk for suicide attempt in bipolar disorder. Studies have found that individuals with bipolar disorder whose first reported mood episode was depressive (FD) have a significantly higher risk of past suicide attempts compared to those whose first episode was manic/hypomanic (FM) (PUBMED:17434597). Specifically, the FD group membership was associated with eightfold odds of a past suicide attempt, adjusting for years ill and total number of lifetime major depressive episodes (PUBMED:17434597).
Moreover, the polarity of the first episode has been shown to have clinical and prognostic implications. Patients with a first depressive episode have been found to have a more chronic course with frequent recurrences, a depressive dominant polarity, a higher lifetime rate of suicidal behavior, and less psychotic symptoms (PUBMED:20141803). Additionally, depressive onset compared with elevated onset (hypomanic/manic/mixed) was associated with a more than doubled rate of suicide attempts (PUBMED:27936451).
Another study found that depressive onset patients were more chronic than manic/hypomanic onset patients, with a higher number of total episodes and a longer duration of illness. Depressive onset patients also experienced a higher number of depressive episodes and made more suicide attempts (PUBMED:17067865).
Furthermore, the presence of depressive and paranoid symptoms, or manic and paranoid symptoms, was significantly associated with increased suicidal thoughts in patients with first-episode affective psychoses (PUBMED:36801515). This suggests that the combination of mood and paranoid symptoms can increase the risk of suicidality.
In summary, the polarity of the first mood episode in bipolar disorder is a significant predictor of the risk for suicide attempts, with depressive onset episodes being particularly associated with a higher risk of suicidality. |
Instruction: Does spironolactone have a dose-dependent effect on left ventricular remodeling in patients with preserved left ventricular function after an acute myocardial infarction?
Abstracts:
abstract_id: PUBMED:22963506
Does spironolactone have a dose-dependent effect on left ventricular remodeling in patients with preserved left ventricular function after an acute myocardial infarction? Aims: The aim of this study was to investigate the effects of spironolactone on left ventricular (LV) remodeling in patients with preserved LV function following acute myocardial infarction (AMI).
Methods And Results: Successfully revascularized patients (n = 186) with acute ST elevation MI (STEMI) were included in the study. Patients were randomly divided into three groups, each of which was administered a different dose of spironolactone (12.5 mg, 25 mg, or none). Echocardiography was performed within the first 3 days and at 6 months after MI. Follow-up echocardiography was obtained in 160 patients at 6 months. The median left ventricular ejection fraction (LVEF) increased significantly in all groups, but no significant difference was observed between groups (P = 0.13). At the end of the sixth month, the myocardial performance index (MPI) had improved in each of the three groups, but no significant difference was found between groups (F = 2.00, P = 0.15). The mean LV peak systolic velocities (Sm) increased only in the control group during the follow-up period, but there was no significant difference between groups (F = 1.79, P = 0.18). The left ventricular end-systolic volume index (LVESVI) and the left ventricular end-diastolic volume index (LVEDVI) did not change significantly compared with the basal values between groups (F = 0.05, P = 0.81 and F = 1.03, P = 0.31, respectively).
Conclusion: In conclusion, spironolactone dosages of up to 25 mg do not augment optimal medical treatment for LV remodeling in patients with preserved cardiac functions after AMI.
abstract_id: PUBMED:20926948
The effects of spironolactone on atrial remodeling in patients with preserved left ventricular function after an acute myocardial infarction: a randomized follow-up study. Objectives: Atrial remodeling is an important part of cardiac remodeling after acute myocardial infarction (AMI). The aim of this study was to evaluate the effect of spironolactone on atria in patients with preserved left ventricular (LV) functions after AMI by using two-dimensional and tissue Doppler imaging techniques (TDI).
Methods: The study consisted of 110 patients with AMI, successfully revascularized with percutaneous coronary intervention, ejection fraction greater than or equal to 40%, and Killip class I-II. Patients were randomized into two groups: conventional therapy (n=55) and additional spironolactone of 25 mg/day with standard conventional therapy (n=55). Echocardiography was performed in the first 48-72 h of AMI and during 6 months of follow-up. Left atrial volume index and emptying fraction were obtained. The peak regional atrial contraction velocity, the time between the onset of p-wave on the monitor ECG and the onset, peak, and the end (TE) of the atrial contraction wave on the tissue Doppler technique curve were measured.
Results: The left atrial volume index and left atrium (LA) dimensions did not significantly change in either group. In the spironolactone group, left atrial emptying fraction increased compared with both baseline value (from 53.0 ± 0.16 to 57.0 ± 0.13 P=0.011) and conventional therapy group (from 50.0 ± 0.17 to 47.0 ± 0.16, P=0.013). The atrial contraction velocity did not change but the LA-TE, interatrial septum-TE, and right atrium-TE were prolonged in the conventional therapy group.
Conclusion: Additional spironolactone therapy provided a little benefit on LA remodeling and atrial electromechanic properties in patients with AMI and preserved LV functions.
abstract_id: PUBMED:18603054
Spironolactone alleviates late cardiac remodeling after left ventricular restoration surgery. Objective: Although left ventricular restoration is effective for treating ischemic cardiomyopathy caused by left ventricular remodeling and redilation, the initial improvement in left ventricular function is not always sustained. We have reported that the inhibition of the renin-angiotensin-aldosterone system by angiotensin-converting enzyme inhibitors and angiotensin receptor blockers is effective in preventing late remodeling after left ventricular restoration. However, the effects of spironolactone--an aldosterone blocker--after left ventricular restoration have not been elucidated.
Methods: Myocardial infarction was induced by ligating the left anterior descending artery. The rats developed left ventricular aneurysms and underwent left ventricular restoration by the plication of the left ventricular aneurysm 4 weeks after the ligation. Thereafter, the rats were randomized into a left ventricular restoration (vehicle) group and left ventricular restoration with spironolactone (100 mg/kg/d, by mouth) group.
Results: Echocardiography revealed that in the left ventricular restoration with spironolactone group, late cardiac redilation was significantly attenuated (left ventricular end-diastolic area: 0.51 ± 0.03 cm² vs 0.63 ± 0.03 cm², P < .05) and late left ventricular function was preserved (fractional area change: 48.8% ± 3.0% vs 35.8% ± 2.4%, P < .01). Hemodynamically, rats in the left ventricular restoration with spironolactone group exhibited improved systolic function (maximal end-systolic pressure-volume relationship: 0.38 ± 0.03 mm Hg/µL vs 0.11 ± 0.04 mm Hg/µL, P < .01) and diastolic function (tau: 18.5 ± 1.5 sec vs 23.1 ± 1.4 sec, P < .05) than those in the LVR group. Histologically, interstitial fibrosis in the remote area was significantly reduced (5.6% ± 1.3% vs 12% ± 1.0%, P < .01), and fibrosis around the pledgets (near area) was also attenuated in the left ventricular restoration with spironolactone group. The myocardial messenger ribonucleic acid expressions of transforming growth factor-beta1 and brain natriuretic peptide measured using the real-time polymerase chain reaction were lower in the left ventricular restoration with spironolactone group (transforming growth factor-beta1: 0.13 ± 0.02 vs 0.28 ± 0.02, P < .01; brain natriuretic peptide: 0.99 ± 0.14 vs 1.54 ± 0.18, P < .05). The systemic blood pressure and heart rate did not differ between the 2 groups.
Conclusion: Spironolactone reduced the gene expression of transforming growth factor-beta1 and brain natriuretic peptide and alleviated not only cardiac redilation but also the deterioration of left ventricular function late after left ventricular restoration without inducing hypotension, a major side effect of angiotensin-converting enzyme inhibitors or angiotensin receptor blocker. Spironolactone is a promising therapeutic option for alleviating remodeling after left ventricular restoration.
abstract_id: PUBMED:9515282
Effects of ramipril and spironolactone on ventricular remodeling after acute myocardial infarction: randomized and double-blind study Background: Studies have shown that angiotensin converting enzyme (ACE) inhibition prevents left ventricular remodeling and cardiovascular events after an acute myocardial infarction. The role of aldosterone in ventricular remodeling after a myocardial infarction has not been addressed.
Aim: To compare the effects of an ACE inhibitor, an aldosterone receptor antagonist and placebo on left ventricular remodeling after a first episode of transmural acute myocardial infarction.
Patients And Methods: Patients hospitalized for a first episode of acute myocardial infarction were blindly and randomly assigned to receive ramipril (2.5 mg bid), spironolactone (25 mg tid) or placebo. Ejection fraction, left ventricular end diastolic and end systolic volumes were measured by multigated radionuclide angiography, at baseline and after six months of treatment.
Results: Twenty-four patients were assigned to placebo, 31 to ramipril and 23 to spironolactone. Age, gender, Killip class, treatment with thrombolytics, revascularization procedures and use of additional medications were similar in the three groups. After six months of treatment, ejection fraction increased from 34.5 ± 2.3 to 40.2 ± 2.4% in patients on ramipril, from 32.6 ± 2.9 to 36.6 ± 2.7% in patients on spironolactone, and decreased from 37 ± 3 to 31 ± 3% in patients on placebo (ANOVA between groups p < 0.05). Basal end systolic volume was similar in all three groups, increased from 43.4 ± 3.4 to 61.4 ± 6.0 ml/m² in patients on placebo and did not change in patients on spironolactone or ramipril (ANOVA p < 0.05). End diastolic volume was also similar in the three groups, increased from 70.6 ± 4.3 to 92.8 ± 6.4 ml/m² in patients on placebo and did not change with the other treatments.
Conclusions: Ramipril and spironolactone had similar effects on ventricular remodeling after acute myocardial infarction, suggesting that aldosterone contributes to this phenomenon and that inhibition of its receptor may be as effective as ACE inhibition in its prevention.
abstract_id: PUBMED:12555163
Do diuretics and aldosterone receptor antagonists improve ventricular remodeling? There is no evidence that loop diuretics improve ventricular remodeling in patients with heart failure. Aldosterone receptor antagonists, which have an effect on natriuresis and diuresis, especially in conjunction with an angiotensin converting enzyme-inhibitor, have been shown to improve ventricular remodeling in patients with left ventricular systolic dysfunction. The mechanisms for this beneficial effect and a reduction in death due to progressive heart failure seen in the randomized aldosterone evaluation study (RALES) is likely related to the effect of aldosterone receptor antagonism on myocardial collagen formation and ventricular hypertrophy. Further proof of this hypothesis should be forthcoming from the results of the Eplerenone Heart Failure Efficacy and Survival Study (EPHESUS) early in 2003 in which the aldosterone receptor antagonist eplerenone is being evaluated in patients with systolic left ventricular dysfunction post myocardial infarction.
abstract_id: PUBMED:18754271
Prevention of left ventricular remodeling after myocardial infarction: efficacy of physical training Post-myocardial infarction left ventricular remodelling should be considered an important therapeutic target in patients after an acute myocardial infarction, given its heavy prognostic implications. The therapies used in these patients should reduce the progression of left ventricular dysfunction to refractory heart failure. In order to prevent post-myocardial infarction cardiac remodelling, different therapies have been tested, and for ACE-inhibitors and beta-blockers a clear demonstration of efficacy has been obtained. Losartan and valsartan, two widely used angiotensin receptor blockers, have been demonstrated to be safe and equally useful compared with ACE inhibitors. The addition of spironolactone to the standard therapy for heart failure has a clear beneficial effect, but its clinical use has been restrained by the risk of hyperkalemia. Aerobic physical training improves the left ventricular ejection fraction in patients with systolic dysfunction, reducing the progressive enlargement after myocardial infarction. The positive effect of aerobic training on cardiac remodelling might be related to its positive effect on neurohormonal status and to the improvement of microcirculatory myocardial perfusion and of endothelial function.
abstract_id: PUBMED:15134801
Effect of aldosterone blockade in patients with systolic left ventricular dysfunction: implications of the RALES and EPHESUS studies. Aldosterone blockade has been shown to be effective in reducing total mortality as well as hospitalization for heart failure in patients with systolic left ventricular dysfunction (SLVD) due to chronic heart failure and in patients with SLVD post acute myocardial infarction. The evidence for the effectiveness of aldosterone blockade in chronic heart failure comes from the randomized aldactone evaluation study (RALES) while that for patients post infarction from the eplerenone post acute myocardial infarction efficacy and survival study (EPHESUS). These studies suggest that mineralocorticoid receptor activation remains important despite the use of an angiotensin converting enzyme-inhibitor/angiotensin receptor blocking (ARB) agent and a beta blocker. Increasing evidence suggest that aldosterone blockade has important effects not only on the kidney but on ventricular remodeling, myocardial fibrosis, autonomic balance, fibrinolysis, oxidative stress, and activation of the NF-kappaB and AP-1 signaling pathways. The results of these studies in patients with SLVD has important implications not only for patients with chronic heart failure and post infarction but also for the therapy of patients with essential hypertension and other cardiovascular diseases.
abstract_id: PUBMED:15932659
Effect of spironolactone on left ventricular remodeling in patients with acute myocardial infarction Objective: To investigate the effect of spironolactone on left ventricular remodeling (LVRM) in patients with acute myocardial infarction.
Methods: In this multicentric, randomized, controlled study, spironolactone 40 mg/d was randomly administered in addition to the routine treatment for patients with AMI. During the 6 months the serum PIIINP, BNP and echocardiography were examined in all patients to assess myocardial fibrosis, LV function and volume.
Results: A total of 88 AMI patients from 4 hospitals in Shijiazhuang entered the study. There were 43 patients with anterior MI and 45 with inferior MI. In the anterior MI group, 23 patients received spironolactone and 20 received the routine treatment; in the inferior MI group, 23 received spironolactone and 22 received the routine treatment. In the anterior MI group: (1) at the 3rd and 6th months, PIIINP and BNP serum levels were significantly lower in the spironolactone group than in the control group [PIIINP (260.2 ± 59.9) vs (328.0 ± 70.3) ng/L, P = 0.001, (197.1 ± 46.3) vs (266.7 ± 52.4) ng/L, P < 0.001], [BNP (347.4 ± 84.0) vs (430.1 ± 62.9) ng/L, P < 0.001, (243.7 ± 79.7) vs (334.6 ± 62.8) ng/L, P < 0.001]; (2) LVEDD and LVESD were smaller in the spironolactone group than in the control group after 6 months of intervention [(51.0 ± 5.5) vs (55.6 ± 4.5) mm, P = 0.005, (35.7 ± 4.6) vs (39.1 ± 5.6) mm, P = 0.046]. However, in the inferior MI group: (1) there were no significant differences in PIIINP and BNP values between the two groups after 6 months of intervention; (2) there were no significant differences in LVEDD, LVESD, or LVEF after 6 months of treatment.
Conclusion: (1) In patients with anterior MI, spironolactone combined with the routine treatment could inhibit myocardial fibrosis and left ventricular dilation and prevent LVRM. (2) In patients with inferior MI, no significant difference in prevention of LVRM was found between the spironolactone combined with the routine treatment and the routine treatment alone.
abstract_id: PUBMED:20299607
Randomized, double-blind, multicenter, placebo-controlled study evaluating the effect of aldosterone antagonism with eplerenone on ventricular remodeling in patients with mild-to-moderate heart failure and left ventricular systolic dysfunction. Background: Aldosterone antagonism has been studied in patients with advanced heart failure (HF) and also in patients with post-myocardial infarction and left ventricular (LV) dysfunction with HF symptoms. Few data are available on effects of aldosterone antagonism in patients with mild-to-moderate HF.
Methods And Results: In a multicenter, randomized, double-blind, placebo-controlled study in patients with mild-to-moderate HF and LV systolic dysfunction, patients with New York Heart Association class II/III HF and LV ejection fraction (EF) < or =35% were randomly assigned to receive eplerenone 50 mg/d versus placebo in addition to contemporary background therapy. Quantitative radionuclide ventriculograms to assess LV volumes and ejection fraction were performed at baseline and again after 9 months of double-blind treatment and were analyzed in a central core laboratory, blinded to treatment. The primary efficacy analysis was the between-group comparison of the change in LV end-diastolic volume index. Secondary analyses examined changes in LV end-systolic volume index and ejection fraction as well as markers of collagen turnover. Of the total 226 patients enrolled, 117 were randomly assigned to receive eplerenone and 109 to receive placebo. There was high use of contemporary background therapy at baseline, with > 90% use of angiotensin-converting enzyme inhibitors and/or angiotensin receptor blockers and > 90% use of beta-blockers. Over 36 weeks of treatment, there was no apparent between-group difference in the changes in end-diastolic volume index or end-systolic volume index. There was a reduction in the collagen turnover marker procollagen type I N-terminal propeptide and plasma B-type natriuretic peptide in the eplerenone group compared with placebo (P=0.01 and P=0.04, respectively). There was no change in symptom status or quality-of-life measures.
Conclusions: In a clinically stable, well-treated population of patients with mild-to-moderate HF symptoms and LV dysfunction, 36 weeks of treatment of aldosterone antagonism with eplerenone at a dose of 50 mg daily had no detectable effect on parameters of LV remodeling.
abstract_id: PUBMED:15277726
Additive improvement of left ventricular remodeling by aldosterone receptor blockade with eplerenone and angiotensin II type 1 receptor antagonist in rats with myocardial infarction. We investigated the effects of the aldosterone blocker eplerenone alone and in combination with angiotensin II type 1 receptor antagonist on ventricular remodeling in rats with left ventricular (LV) dysfunction after extensive myocardial infarction (MI). Adding an aldosterone antagonist to an ACE inhibitor reduces mortality and morbidity in heart failure. Starting 1 day after MI, rats were treated with placebo, eplerenone (100 mg/kg/day), the angiotensin type 1 receptor antagonist candesartan (1 mg/kg/day), or a combination of both for nine weeks. Both monotherapies attenuated the rise in LV end-diastolic dimension (LVDd) and LV end-diastolic volume (LVEDV) compared with placebo, whereas combined treatment further attenuated LVDd and LVEDV and significantly improved LV function. Increased collagen type I and III gene expressions in the noninfarcted LV myocardium from MI placebo rats was attenuated by candesartan, but almost completely prevented by eplerenone and eplerenone/candesartan. The addition of eplerenone to candesartan prevented the increases in LV gene expression of ANP and BNP more effectively than either monotherapy. The aldosterone blocker eplerenone improved LV remodeling in rats with LV dysfunction after extensive MI. Combination therapy with an candesartan substantially potentiates this effect by a complementary prevention of LV fibrosis, cardiac hypertrophy, and molecular alterations.
Answer: Based on the provided abstracts, spironolactone does not appear to have a dose-dependent effect on left ventricular remodeling in patients with preserved left ventricular function after an acute myocardial infarction. In a study where patients were divided into groups receiving different doses of spironolactone (12.5 mg, 25 mg, or none), no significant difference was observed between the groups in terms of left ventricular ejection fraction, myocardial performance index, left ventricular peak systolic velocities, left ventricular end-systolic volume index, or left ventricular end-diastolic volume index (PUBMED:22963506). This suggests that spironolactone dosages of up to 25 mg do not augment optimal medical treatment for left ventricular remodeling in these patients.
However, spironolactone has shown some benefits in other aspects of cardiac remodeling. For instance, in a randomized follow-up study, spironolactone therapy provided a little benefit on left atrial remodeling and atrial electromechanic properties in patients with acute myocardial infarction and preserved left ventricular functions (PUBMED:20926948). Additionally, in a study involving rats that underwent left ventricular restoration surgery, spironolactone alleviated late cardiac remodeling and preserved left ventricular function (PUBMED:18603054).
Other studies have compared the effects of spironolactone with ACE inhibitors or placebo on ventricular remodeling after acute myocardial infarction. One study found that spironolactone had similar effects on ventricular remodeling as ramipril, an ACE inhibitor, suggesting that aldosterone contributes to ventricular remodeling and that its receptor antagonism may be as effective as ACE inhibition in its prevention (PUBMED:9515282). Another study indicated that spironolactone combined with routine treatment could inhibit myocardial fibrosis and left ventricular dilation in patients with anterior myocardial infarction (PUBMED:15932659).
In summary, while spironolactone may not have a dose-dependent effect on left ventricular remodeling in patients with preserved left ventricular function after an acute myocardial infarction, it does have beneficial effects on other aspects of cardiac remodeling and may be as effective as ACE inhibitors in preventing ventricular remodeling post-myocardial infarction. |
Instruction: Rapamycin has dual opposing effects on proteinuric experimental nephropathies: is it a matter of podocyte damage?
Abstracts:
abstract_id: PUBMED:19671594
Rapamycin has dual opposing effects on proteinuric experimental nephropathies: is it a matter of podocyte damage? Background: In clinical renal transplantation, an increase in proteinuria after conversion from calcineurin inhibitors to rapamycin has been reported. In contrast, there are studies showing a nephro-protective effect of rapamycin in proteinuric diseases characterized by progressive interstitial inflammatory fibrosis.
Methods: Because of the contradictory reports concerning the effect of rapamycin on proteinuria, we examined proteinuria and podocyte damage markers in two renal disease models with clearly different pathophysiological mechanisms: a glomerular toxico-immunological model induced by puromycin aminonucleoside, and a chronic hyperfiltration and inflammatory model induced by renal mass reduction, both treated with a fixed high rapamycin dose.
Results: In puromycin groups, rapamycin provoked significant increases in proteinuria, together with a significant fall in podocin immunofluorescence, as well as clear additional damage to podocyte foot processes. Conversely, after mass reduction, rapamycin produced lower levels of proteinuria and amelioration of inflammatory and pro-fibrotic damage. In contrast to the puromycin model, higher glomerular podocin and nephrin expression and amelioration of glomerular ultrastructural damage were found.
Conclusions: We conclude that rapamycin has dual opposing effects on subjacent renal lesion, with proteinuria and podocyte damage aggravation in the glomerular model and a nephro-protective effect in the chronic inflammatory tubulointerstitial model. Rapamycin produces slight alterations in podocyte structure when acting on healthy podocytes, but it clearly worsens those podocytes damaged by other concomitant injury.
abstract_id: PUBMED:27129295
Spironolactone promotes autophagy via inhibiting PI3K/AKT/mTOR signalling pathway and reduce adhesive capacity damage in podocytes under mechanical stress. Mechanical stress, which causes deleterious effects on podocyte adhesion, is considered a major contributor to the early progression of diabetic nephropathy (DN). Our previous study has shown that spironolactone could ameliorate podocytic adhesive capacity in diabetic rats. Autophagy has been reported to have a protective role against renal injury. The present study investigated the underlying mechanisms by which spironolactone reduced adhesive capacity damage in podocytes under mechanical stress, focusing on the involvement of autophagy. Human conditionally immortalized podocytes exposed to mechanical stress were treated with spironolactone, LY294002 or rapamycin for 48 h. The accumulation of LC3 puncta was detected by immunofluorescence staining. Podocyte expression of mineralocorticoid receptor (MR), integrin β1, LC3, Atg5, p85-PI3K, p-Akt and p-mTOR was detected by Western blotting. Podocyte adhesion to collagen type IV was also assessed by spectrophotometry. Immunofluorescence staining showed that the normal level of autophagy was reduced in podocytes under mechanical stress. Decreased integrin β1, LC3, Atg5 and abnormal activation of the PI3K/Akt/mTOR pathway were also detected in podocytes under mechanical stress. Spironolactone up-regulated integrin β1, LC3 and Atg5 expression, down-regulated p85-PI3K, p-Akt and p-mTOR expression and reduced podocytic adhesive capacity damage. Our data demonstrated that spironolactone inhibited mechanical-stress-induced podocytic adhesive capacity damage through blocking the PI3K/Akt/mTOR pathway and restoring autophagy activity.
abstract_id: PUBMED:30864227
Inhibition of high mobility group box 1 (HMGB1) attenuates podocyte apoptosis and epithelial-mesenchymal transition by regulating autophagy flux. Background: Podocyte injury, characterized by podocyte hypertrophy, apoptosis, and epithelial-mesenchymal transition (EMT), is the major causative factor of diabetic nephropathy (DN). Autophagy dysfunction is regarded as the major risk factor for podocyte injury including EMT and apoptosis. High mobility group box 1 (HMGB1) is involved in the progression of DN through the induction of autophagy. However, the underlying mechanism remains unknown.
Methods: Plasma HMGB1 concentrations were determined in DN patients using ELISA. Apoptosis of DN serum-treated podocytes was evaluated by flow cytometry. Podocyte autophagy flux was measured using immunofluorescence. Western blotting analysis was used to investigate HMGB1 expression, EMT, and autophagy-related signaling pathways.
Results: Upregulation of HMGB1 was found in DN patients and DN serum-treated podocytes. Removal of HMGB1 inhibited DN serum-mediated podocyte apoptosis by inhibiting autophagy and activating AKT/mammalian target of rapamycin (mTOR) signaling. In addition, HMGB1 depletion repressed the progression of podocyte EMT by inhibiting transforming growth factor (TGF)-β/smad1 signaling in vitro and in vivo. The combination of HMGB1 short interfering RNA (siRNA) and the autophagy activator rapamycin protected against podocyte apoptosis and EMT progression by inhibiting the AKT/mTOR and TGF-β/smad signaling pathways, respectively.
Conclusions: Although HMGB1 siRNA and rapamycin treatment had opposite effects on autophagy and AKT/mTOR signaling, there was no contradiction in their respective roles in the AKT/mTOR pathway, because autophagy and AKT/mTOR signaling play dual roles in intracellular biological processes. Based on the findings of this study, we may assume that HMGB1-initiated autophagy is harmful, whereas rapamycin is beneficial to podocyte survival. Combined treatment with HMGB1 siRNA and rapamycin possibly ameliorated podocyte damage and EMT by regulating autophagy homeostasis.
abstract_id: PUBMED:29164562
Resveratrol transcriptionally regulates miRNA-18a-5p expression ameliorating diabetic nephropathy via increasing autophagy. Objective: To investigate the effects of resveratrol on autophagy in the chronically diabetic nephropathy and to study the effects of the different expression of microRNAs after resveratrol (RSV) treated in db/db mice (diabetic mice).
Materials And Methods: Db/m (non- diabetic) and db/db mice were randomly divided into intra gastric RSV treatment group or control group. Renal tissues were prepared for HE/PAS staining. In vitro, mouse podocytes cell lines were grown in different mediums with different dose of resveratrol treatment. microRNA (miRNA) gene chips assay was performed for differentially expressed miRNAs screening. Western blot was used to detect protein levels.
Results: In vivo, RSV significantly decreased urinary albumin, serum creatinine, mesangial area and glomerular size in db/db mice. After RSV treatment, LC3-II/LC3-I and synaptopodin were increased while cleaved-caspase 3 was decreased in kidney tissues. In vitro, podocytes treated with RSV exhibited significantly increased LC3-II/LC3-I and decreased cleaved caspase 3. Moreover, this effect of RSV could be enhanced by rapamycin (RAPA, an activator of autophagy) but partially reversed by 3-MA (an autophagy inhibitor). Further, we found that miR-18a-5p was significantly upregulated after RSV treatment in db/db mice. Overexpression of miR-18a-5p in podocytes resulted in significant inhibition of cleaved-caspase 3 protein, and increased the ratio of LC3-II/LC3-I. A dual luciferase reporter assay validated that ataxia telangiectasia mutated (ATM) was a target of miR-18a-5p. In podocytes, downregulation of cleaved caspase 3 and an enhanced ratio of LC3-II/LC3-I protein were detected in cells transfected with ATM siRNA.
Conclusions: Role of miRNA-18a-5p in the regulation of autophagy via targeting ATM may represent a promising therapeutic target for preventing and attenuating diabetic nephropathy.
abstract_id: PUBMED:26550253
Protective effects of low-dose rapamycin combined with valsartan on podocytes of diabetic rats. The aim of this study was to investigate the effects and mechanisms of low-dose rapamycin combined with valsartan on renal function in diabetic nephropathy (DN) rats. Fifty SD rats were randomly divided into the normal control group (group A, n=10) and the DN model group (n=40); the DN model group was intraperitoneally injected with streptozocin (STZ) to induce the model and was then equally divided into the DN group (group B), the rapamycin group (group C, oral rapamycin 1 mg/kg/d), the valsartan group (group D, oral valsartan 30 mg/kg/d) and the combined therapy group (group E, oral rapamycin 1 mg/kg/d + valsartan 30 mg/kg/d). Group A and group B were orally administered the same amount of 0.5% carboxymethylcellulose. After 8 weeks of treatment, the rats in each group were killed for assessment of renal function and pathology, as well as of nephrin and podocin expression in kidney tissues. Compared with group A, renal function was decreased, pathological changes were significant, and nephrin/podocin expression was reduced in the DN model groups (P<0.05); group B exhibited the most severe changes, whereas group E improved after the combined treatment, with increased nephrin/podocin expression. Low-dose rapamycin and valsartan enhanced the expression of nephrin and podocin and reduced kidney damage, thus protecting the kidneys, and the effects of the combined therapy were superior to those of monotherapy.
abstract_id: PUBMED:30906437
Role of mTOR signaling in the regulation of high glucose-induced podocyte injury. Podocyte injury, which promotes progressive nephropathy, is considered a key factor in the progression of diabetic nephropathy. The mammalian target of rapamycin (mTOR) signaling cascade controls cell growth, survival and metabolism. The present study investigated the role of mTOR signaling in regulating high glucose (HG)-induced podocyte injury. MTT assay and flow cytometry assay results indicated that HG significantly increased podocyte viability and apoptosis. HG effects on podocytes were suppressed by mTOR complex 1 (mTORC1) inhibitor, rapamycin, and further suppressed by dual mTORC1 and mTORC2 inhibitor, KU0063794, when compared with podocytes that received mannitol treatment. In addition, western blot analysis revealed that the expression levels of Thr-389-phosphorylated p70S6 kinase (p-p70S6K) and phosphorylated Akt (p-Akt) were significantly increased by HG when compared with mannitol treatment. Notably, rapamycin significantly inhibited HG-induced p-p70S6K expression, but did not significantly impact p-Akt expression. However, KU0063794 significantly inhibited the HG-induced p-p70S6K and p-Akt expression levels. Furthermore, the expression of ezrin was significantly reduced by HG when compared with mannitol treatment; however, α-smooth muscle actin (α-SMA) expression was significantly increased. Immunofluorescence analysis on ezrin and α-SMA supported the results of western blot analysis. KU0063794, but not rapamycin, suppressed the effect of HG on the expression levels of ezrin and α-SMA. Thus, it was suggested that the increased activation of mTOR signaling mediated HG-induced podocyte injury. In addition, the present findings suggest that the mTORC1 and mTORC2 signaling pathways may be responsible for the cell viability and apoptosis, and that the mTORC2 pathway could be primarily responsible for the regulation of cytoskeleton-associated proteins.
abstract_id: PUBMED:32765037
The Effects of Puerarin on Autophagy Through Regulating of the PERK/eIF2α/ATF4 Signaling Pathway Influences Renal Function in Diabetic Nephropathy. Background And Purpose: Autophagy is the main protective mechanism against aging in podocytes, which are terminally differentiated cells that have a very limited capacity for mitosis and self-renewal. Here, a streptozotocin-induced DN C57BL/6 mouse model was used to investigate the effects of puerarin on the modulation of autophagy under conditions associated with endoplasmic reticulum stress (ERS). In addition, this study aimed to identify the potential underlying molecular mechanisms.
Methods And Results: DN C57BL/6 mouse model was induced by streptozotocin (150 mg/kg) injection. The mice were administered rapamycin and puerarin, respectively, daily for up to 8 weeks. After the serum and kidney samples were collected, the fasting blood glucose (FBG), parameters of renal function, histomorphology, and the podocyte functional proteins were analyzed. Moreover, the autophagy markers and the expressions of PERK/ATF4 pathway were studied in kidney. Results found that the FBG level in DN mice was significantly higher than in normal mice. Compared with DN model mice, puerarin-treated mice showed an increased expression of podocyte functional proteins, including nephrin, podocin, and podocalyxin. Furthermore, the pathology and structure alterations were improved by treatment with rapamycin and puerarin compared with the DN control. The results indicated an elevated level of autophagy in rapamycin and puerarin groups compared with the DN model, as demonstrated by the upregulated expression of autophagy markers Beclin-1, LC3II, and Atg5, and downregulated p62 expression. In addition, the levels of PERK, eIF2α, and ATF4 were reduced in the DN model, which was partially, but significantly, prevented by rapamycin and puerarin.
Conclusion: This study emphasizes the renal-protective effects of puerarin in DN mice, particularly in the modulation of autophagy under ERS conditions, which may be associated with activation of the PERK/eIF2α/ATF4 signaling pathway. Therefore, PERK may be a potential target for DN treatment.
abstract_id: PUBMED:36707848
Icariin synergizes therapeutic effect of dexamethasone on adriamycin-induced nephrotic syndrome. Background: Glomerular damage is a common clinical indicator of nephrotic syndrome. High-dose hormone treatment often leads to hormone resistance in patients. How to avoid resistance and improve the efficiency of hormone therapy draws much attention to clinicians.
Methods: Adriamycin (ADR) was used to induce a nephropathy model in SD rats. The rats were treated with dexamethasone (DEX), icariin (ICA), or DEX + ICA combination therapy. The changes in urinary protein (UP), urea nitrogen (BUN), and serum creatinine (SCR) contents in rats were detected by enzyme-linked immunosorbent assay (ELISA), and the degree of pathological injury and the expression level of podocin were detected by HE staining and immunohistochemistry, to confirm the success of the model and to assess the therapeutic effects of the three treatments. The effect of the treatments on podocyte autophagy was evaluated via transfection of mRFP-GFP-LC3 tandem adenovirus in vitro.
Results: The contents of UP, SCR, and BUN were significantly increased, the glomeruli were seriously damaged, and the expression of Nephrosis2 (NPHS2) was significantly decreased in the ADR-induced nephrotic syndrome rat model compared with the control group. DEX, ICA, and the DEX + ICA combined treatment significantly alleviated the above changes induced by ADR. The combined DEX + ICA treatment exhibited a better outcome than either single treatment. The combined treatment also restored podocyte autophagy, increased the expression of microtubule-associated protein light chain 3-II (LC3II), and reduced the expression of p62 in vitro. The combined treatment protects podocytes by mediating the PI3K/AKT/mTOR (mammalian target of rapamycin) signaling pathway.
Conclusion: ICA enhances the therapeutic effect of DEX on the nephrotic syndrome.
abstract_id: PUBMED:37245531
P2X7R/AKT/mTOR signaling mediates high glucose-induced decrease in podocyte autophagy. Diabetic nephropathy is one of the leading causes of end-stage renal disease worldwide. In our study we found that Adenosine triphosphate (ATP) content was significantly increased in the urine of diabetic mice. We examined the expression of all purinergic receptors in the renal cortex and found that only purinergic P2X7 receptor (P2X7R) expression was significantly increased in the renal cortex of wild-type diabetic mice and that the P2X7R protein partially co-localized with podocytes. Compared with P2X7R(-/-) non-diabetic mice, P2X7R(-/-) diabetic mice showed stable expression of the podocyte marker protein podocin in the renal cortex. The renal expression of microtubule associated protein light chain 3 (LC-3II) in wild-type diabetic mice was significantly lower than in wild-type controls, whereas the expression of LC-3II in the kidneys of P2X7R(-/-) diabetic mice was not significantly different from that of P2X7R(-/-) non-diabetic mice. In vitro, high glucose induced an increase in p-Akt/Akt, p-mTOR/mTOR and p62 protein expression along with a decrease in LC-3II levels in podocytes, whereas after transfection with P2X7R siRNA, Phosphorylated protein kinase B (p-Akt)/Akt, Phosphorylated mammalian target of rapamycin (p-mTOR)/mTOR, and p62 expression were restored and LC-3II expression was increased. In addition, LC-3II expression was also restored after inhibition of Akt and mTOR signaling with MK2206 and rapamycin, respectively. Our results suggest that P2X7R expression is increased in podocytes in diabetes, and that P2X7R is involved in the inhibition of podocyte autophagy by high glucose, at least in part through the Akt-mTOR pathway, thereby exacerbating podocyte damage and promoting the onset of diabetic nephropathy. Targeting P2X7R may be a potential treatment for diabetic nephropathy.
abstract_id: PUBMED:30200803
The protective effects of rapamycin on cell autophagy in the renal tissues of rats with diabetic nephropathy via mTOR-S6K1-LC3II signaling pathway. Background: Previous studies have shown that podocyte autophagy is an important trigger for proteinuria and glomerulosclerosis. The mammalian rapamycin target protein (mTOR) occupies a pivotal position in the autophagy pathway. In this study, we planned to clarify the mechanism of mTOR regulation of podocyte autophagy and the effect of rapamycin (RAPA).
Methods: All rats were randomly divided into normal control group (n = 8), DN group (n = 8), and RAPA group (n = 8). Blood and urine samples were collected at the 4th, 8th, and 12th weeks of the experiment. The serum creatinine (Scr), urine volume levels, and the 24 h urine protein (UP) levels were examined. The nephrin, podocin, mTOR, ribosomal S6 kinase 1 (S6K1), and autophagy marker light chain 3 (LC3II) expression levels were evaluated by immunohistochemistry, quantitative PCR, and immunoblotting.
Results: The urine volume, 24 h UP, and Scr of the DN and RAPA groups increased significantly compared with the NC group (p < .05). Nephrin and podocin expression was decreased in the kidney tissues of the DN and RAPA groups compared with the NC group (p < .05). The expression levels of mTOR and S6K1 increased and LC3II expression decreased in the renal tissues of the DN and RAPA groups compared with the NC group (p < .05). After RAPA treatment, all the above indexes were improved compared with the DN group (p < .05), but were significantly abnormal compared with the NC group (p < .05).
Conclusion: The proteinuria and kidney function had improved after RAPA treatment. These results confirmed that RAPA specifically binds to mTOR kinase, and inhibits mTOR activity, thereby regulating the pathological autophagic process.
Answer: Rapamycin indeed has dual opposing effects on proteinuric experimental nephropathies, and this appears to be related to the condition of the podocytes. In a study examining the effects of rapamycin on two different renal disease models, it was found that rapamycin significantly increased proteinuria and caused additional damage to podocyte foot processes in a glomerular toxico-immunological model induced by puromycin aminonucleoside. This was evidenced by a significant fall in podocin immunofluorescence, indicating podocyte damage. Conversely, in a chronic hyperfiltration and inflammatory model induced by mass reduction, rapamycin resulted in lower levels of proteinuria and improved inflammatory and pro-fibrotic damage. In this model, there was higher glomerular podocin and nephrin expression and amelioration of glomerular ultrastructural damage, suggesting a nephro-protective effect (PUBMED:19671594).
The dual roles of rapamycin are further supported by studies on autophagy, a process that has a protective role against renal injury. Rapamycin, an mTOR inhibitor, is known to induce autophagy, which can be beneficial for podocyte survival. For instance, spironolactone has been shown to promote autophagy and reduce adhesive capacity damage in podocytes under mechanical stress by inhibiting the PI3K/AKT/mTOR signaling pathway (PUBMED:27129295). Similarly, inhibition of high mobility group box 1 (HMGB1) attenuates podocyte apoptosis and epithelial-mesenchymal transition by regulating autophagy flux, and the combination of HMGB1 siRNA and rapamycin protected against podocyte apoptosis and EMT progression by regulating autophagy homeostasis (PUBMED:30864227).
Moreover, the therapeutic effects of rapamycin combined with other treatments, such as valsartan, have been shown to enhance the expression of podocyte proteins nephrin and podocin, reduce kidney damage, and provide protective effects on the kidneys (PUBMED:26550253). The mTOR signaling pathway is implicated in high glucose-induced podocyte injury, and inhibition of this pathway by rapamycin can suppress podocyte injury (PUBMED:30906437).
In summary, rapamycin's effects on podocyte health and proteinuria are complex and can be either damaging or protective depending on the underlying renal pathology and the state of podocyte injury. The beneficial effects are often associated with the induction of autophagy and the inhibition of the mTOR pathway, which are critical in maintaining podocyte function and survival. |
Instruction: Can lower aldehyde dehydrogenase activity in saliva be a risk factor for oral cavity cancer?
Abstracts:
abstract_id: PUBMED:23346903
Can lower aldehyde dehydrogenase activity in saliva be a risk factor for oral cavity cancer? Objectives: Salivary ALDH3A1 protects the oral cavity from aromatic and medium-chain aliphatic aldehydes originating from food and air pollution and generated during oxidative stress. Due to their reactivity, aldehydes may exhibit an irritating effect as well as cytotoxic, genotoxic, mutagenic and even carcinogenic effects. The aim of this study was to verify whether lower ALDH3A1 activity is a risk factor for oral cavity cancer.
Subjects And Methods: Fasting saliva samples were collected one day before and about one week after surgery from patients with oral cancer (OCC) (n = 59), other tumours (cysts, neoplasms) (n = 108), gnathic defects and fractures (controls after the surgery) (n = 63), and from healthy volunteers (n = 116). Enzyme activity was measured using a fluorometric method.
Results: Total ALDH3A1 activity [U g⁻¹] in patients with OCC was statistically lower than in patients with keratocystic odontogenic tumour (KCOT) (P = 0.00697), odontogenic cysts (OC) (P < 0.00001), neoplasms (P = 0.03343) and the healthy volunteers up to and over 40 years old (P < 0.00001; P = 0.00019). The activity in the saliva of OCC patients after surgery was lower than in the healthy volunteers (P < 0.00001) and in the groups with fractures (P = 0.00303) and gnathic defects (P = 0.00538).
Conclusion: Low salivary ALDH activity may be a risk factor for oral cancer development.
abstract_id: PUBMED:31038061
Enhancement in the Catalytic Activity of Human Salivary Aldehyde Dehydrogenase by Alliin from Garlic: Implications in Aldehyde Toxicity and Oral Health. Background: Lower human salivary aldehyde dehydrogenase (hsALDH) activity increases the risk of aldehyde mediated pathogenesis including oral cancer. Alliin, the bioactive compound of garlic, exhibits many beneficial health effects.
Objective: To study the effect of alliin on hsALDH activity.
Methods: Enzyme kinetics was performed to study the effect of alliin on the activity of hsALDH. Different biophysical techniques were employed for structural and binding studies. Docking analysis was done to predict the binding region and the type of binding forces.
Results: Alliin enhanced the dehydrogenase activity of the enzyme. It slightly reduced the Km and significantly enhanced the Vmax value. At 1 µM alliin concentration, the initial reaction rate increased by about two times. Further, it enhanced the hsALDH esterase activity. Biophysical studies indicated a strong complex formation between the enzyme and alliin (binding constant, Kb: 2.35 ± 0.14 × 10³ M⁻¹). It changes the secondary structure of hsALDH. A molecular docking study indicated that alliin interacts with the enzyme near the substrate binding region, involving some active site residues that are evolutionarily conserved. There was a slight increase in the nucleophilicity of the active site cysteine in the presence of alliin. Ligand efficiency metric values indicate that alliin is an efficient ligand for the enzyme.
Conclusion: Alliin activates the catalytic activity of the enzyme. Hence, consumption of alliin-containing garlic preparations or alliin supplements and use of alliin in pure form may lower aldehyde-related pathogenesis, including oral carcinogenesis.
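As background for the Km and Vmax changes reported in this and the following abstract, enzyme activity of this kind is conventionally described by the Michaelis-Menten rate law; the equation is not stated in the abstracts themselves and is given here only for orientation:

\[
v = \frac{V_{\max}\,[S]}{K_m + [S]},
\]

so a slightly lower Km combined with a markedly higher Vmax raises the reaction velocity at any substrate concentration, which is consistent with the roughly two-fold increase in initial rate reported at 1 µM alliin.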
abstract_id: PUBMED:28472683
Thymoquinone binds and activates human salivary aldehyde dehydrogenase: Potential therapy for the mitigation of aldehyde toxicity and maintenance of oral health. Human salivary aldehyde dehydrogenase (hsALDH) is a very important anti-oxidant enzyme present in the saliva. It is involved in the detoxification of toxic aldehydes and maintenance of oral health. Reduced level of hsALDH activity is a risk factor for oral cancer development. Thymoquinone (TQ) has many pharmacological activities and health benefits. This study aimed to examine the activation of hsALDH by TQ. The effect of TQ on the activity and kinetics of hsALDH was studied. The binding of TQ with the enzyme was examined by different biophysical methods and molecular docking analysis. TQ enhanced the dehydrogenase activity of crude and purified hsALDH by 3.2 and 2.9 fold, respectively. The Km of the purified enzyme decreased and the Vmax increased. The esterase activity also increased by 1.2 fold. No significant change in the nucleophilicity of the catalytic cysteine residue was observed. TQ forms a strong complex with hsALDH without altering the secondary structures of the enzyme. It fits in the active site of ALDH3A1 close to Cys 243 and the other highly conserved amino acid residues which lead to enhancement of substrate binding affinity and catalytic efficiency of the enzyme. TQ is expected to give better protection from toxic aldehydes in the oral cavity and to reduce the risk of oral cancer development through the activation of hsALDH. Therefore, the addition of TQ in the diet and other oral formulations is expected to be beneficial for health.
abstract_id: PUBMED:31759211
Arsenic inhibits human salivary aldehyde dehydrogenase: Mechanism and a population-based study. Human salivary aldehyde dehydrogenase (hsALDH) is an important detoxifying enzyme and maintains oral health. Subjects with low hsALDH activity are at a risk of developing oral cancers. Arsenic (As) toxicity causes many health problems in humans. The objective of this population-based study was to correlate As contamination and hence low hsALDH activity with high incidence of cancer cases in Bareilly district of India. Here, it was observed that As inhibited hsALDH (IC50 value: 33.5 ± 2.5 μM), and the mechanism of inhibition was mixed type (in between competitive and non-competitive). Binding of As to hsALDH changed the conformation of the enzyme. A static quenching mechanism was observed between the enzyme and As with a binding constant (Kb) of 9.77 × 104 M-1. There is one binding site for As on hsALDH molecule. Further, the activity of hsALDH in volunteers living in regions of higher As levels in drinking water (Bahroli and Mirganj village of Bareilly district, India), and those living in region having safe levels of As (Aligarh city, India) was determined. The As level in the saliva samples of the volunteers was determined by inductively coupled plasma mass spectroscopy (ICP-MS). Low hsALDH activity was found in volunteers living in the region of higher As levels. The activity of hsALDH and As concentration in the saliva was found to be negatively correlated (r = - 0.427, p < 0.0001). Therefore, we speculate that the high incidence of cancer cases reported in Bareilly district may be due to higher As contamination.
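The binding constant and single binding site reported above are of the kind usually extracted from fluorescence-quenching data via the double-logarithmic relation; the abstract does not state which fitting equation the authors used, so the form below is given only as the conventional one:

\[
\log\frac{F_0 - F}{F} = \log K_b + n\,\log[Q],
\]

where F0 and F are the fluorescence intensities in the absence and presence of the quencher (here As), Kb is the binding constant and n is the number of binding sites.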
abstract_id: PUBMED:17357596
Fluorimetric detection of aldehyde dehydrogenase activity in human saliva in diagnostic of cancers of oral cavity. N/A
abstract_id: PUBMED:28057570
In vitro activity and stability of pure human salivary aldehyde dehydrogenase. Human salivary aldehyde dehydrogenase (HsALDH) appears to be the first line of defence against toxic aldehydes contained in exogenous sources and is important for maintaining a healthy oral cavity and protection from oral cancer. Here, the activity and stability of purified hsALDH have been determined under different conditions such as temperature, in the presence of denaturants [urea and guanidine hydrochloride (GnHCl)] and in the presence of salt (NaCl). The pure enzyme exhibited low stability when stored at room temperature (25°C) as well as at low temperature (4°C). 10% glycerol significantly improved its storage stability, particularly at 25°C. HsALDH was observed to have very low thermal stability. At higher temperatures, the enzyme becomes unfolded and loses its activity quite rapidly. Further, the enzyme is unstable in the presence of denaturants like urea and GnHCl, which unfold the enzyme. Salt (NaCl) has an activating effect on the enzyme, perhaps resulting from conformational changes in the enzyme that facilitate the catalysis process. HsALDH proved to be a labile enzyme under in vitro conditions, and certain additives like glycerol and NaCl can improve the activity/stability of the enzyme. Hence, a stabilizing agent is required to use the enzyme in in vitro studies.
abstract_id: PUBMED:18536178
Fluorimetric detection of aldehyde dehydrogenase activity in human tissues in diagnostic of cancers of oral cavity. Aldehyde dehydrogenase (ALDH) is known to be susceptible to oxidation, and its assays require stabilization of the enzyme by thiols. Application of the fluorimetric method to assay the ALDH activity in human saliva demonstrated significant differences between procedures utilizing glutathione (GSH) and dithiothreitol (DTT) as stabilizing agents. It has been recently shown that average aldehyde dehydrogenase (ALDH3A1) activity in cancerous (oral cavity cancer) patients' tissues was higher than that found in the control group, which may indicate induction of tumor-specific ALDH.
abstract_id: PUBMED:37238685
The Influence of the Oral Microbiome on Oral Cancer: A Literature Review and a New Approach. In our recent article (Smędra et al.: Oral form of auto-brewery syndrome. J Forensic Leg Med. 2022; 87: 102333), we showed that alcohol production can occur in the oral cavity (oral auto-brewery syndrome) due to a disruption in the microbiota (dysbiosis). An intermediate step on the path leading to the formation of alcohol is acetaldehyde. Typically, acetaldehyde is transformed into acetate inside the human body via acetaldehyde dehydrogenase. Unfortunately, acetaldehyde dehydrogenase activity is low in the oral cavity, and acetaldehyde remains there for a long time. Since acetaldehyde is a recognised risk factor for squamous cell carcinoma arising from the oral cavity, we decided to analyse the relationship linking the oral microbiome, alcohol, and oral cancer using the narrative review method, based on browsing articles in the PubMed database. In conclusion, enough evidence supports the speculation that oral alcohol metabolism must be assessed as an independent carcinogenic risk. We also hypothesise that dysbiosis and the production of acetaldehyde from non-alcoholic food and drinks should be treated as a new factor for the development of cancer.
abstract_id: PUBMED:26225960
Ethanol versus Phytochemicals in Wine: Oral Cancer Risk in a Light Drinking Perspective. This narrative review aims to summarize the current controversy on the balance between ethanol and phytochemicals in wine, focusing on light drinking and oral cancer. Extensive literature search included PUBMED and EMBASE databases to identify in human studies and systematic reviews (up to March 2015), which contributed to elucidate this issue. Independently from the type of beverage, meta-analyses considering light drinking (≤1 drinks/day or ≤12.5 g/day of ethanol) reported relative risks (RR) for oral, oro-pharyngeal, or upper aero-digestive tract cancers, ranging from 1.0 to 1.3. One meta-analysis measured the overall wine-specific RR, which corresponded to 2.1. Although little evidence exists on light wine intake, phytochemicals seem not to affect oral cancer risk, being probably present below the effective dosages and/or due to their low bioavailability. As expected, the risk of oral cancer, even in light drinking conditions, increases when associated with smoking habit and high-risk genotypes of alcohol and aldehyde dehydrogenases.
abstract_id: PUBMED:20518787
Comparison between self-reported facial flushing after alcohol consumption and ALDH2 Glu504Lys polymorphism for risk of upper aerodigestive tract cancer in a Japanese population. Some Japanese exhibit facial flushing after drinking alcohol. Facial flushing is considered to be caused by acetaldehydemia. The blood acetaldehyde concentration is related to the catalytic activity of acetaldehyde dehydrogenase (ALDH). The ALDH2 polymorphism (rs671, Glu504Lys) is known to be associated with upper aerodigestive tract (UAT) cancer due to modulation of ALDH2 enzyme activity. It remains controversial whether facial flushing is useful in predicting UAT cancer risk as a surrogate marker of the ALDH2 polymorphism. We conducted a case-control study to assess the association of facial flushing and ALDH2 polymorphism with the risk of UAT cancer. Cases and controls were 585 UAT cancer patients and 1170 matched noncancer outpatients of Aichi Cancer Center Hospital. Information on facial flushing and other lifestyle factors was collected via a self-administered questionnaire. The association between facial flushing, the polymorphism, and UAT cancer was assessed by odds ratios and 95% confidence intervals using conditional logistic regression models. Facial flushing had no significant association with UAT cancer, although the ALDH2 Lys allele was significantly associated with UAT cancer. No significant interaction between facial flushing and alcohol consumption was observed in this study, whereas the ALDH2 Lys allele had a significant association with UAT cancer. Misclassification between facial flushing and ALDH2 genotype was observed in 18% of controls with the ALDH2 Glu/Glu genotype and in 16% of controls with the ALDH2 Glu/Lys genotype. Facial flushing was less useful for predicting UAT cancer risk than genotyping the ALDH2 polymorphism.
Answer: Yes, lower aldehyde dehydrogenase (ALDH) activity in saliva can be a risk factor for oral cavity cancer. A study found that total ALDH3A1 activity in patients with oral cavity cancer (OCC) was statistically lower than in patients with other oral lesions, such as keratocystic odontogenic tumor (KCOT), odontogenic cysts (OC) and other neoplasms, and in healthy volunteers. This suggests that low salivary ALDH activity may be a risk factor for the development of oral cancer (PUBMED:23346903). Additionally, human salivary aldehyde dehydrogenase (hsALDH) is an important antioxidant enzyme that detoxifies toxic aldehydes, and a reduced level of hsALDH activity is considered a risk factor for oral cancer development (PUBMED:28472683).
Moreover, substances like alliin from garlic and thymoquinone have been shown to enhance the catalytic activity of hsALDH, which could potentially lower aldehyde-related pathogenesis, including oral carcinogenesis (PUBMED:31038061; PUBMED:28472683). Conversely, arsenic has been found to inhibit hsALDH, and a population-based study correlated arsenic contamination and low hsALDH activity with a high incidence of cancer cases, suggesting that environmental factors that inhibit hsALDH could contribute to oral cancer risk (PUBMED:31759211).
Therefore, maintaining adequate ALDH activity in saliva is important for oral health and may help mitigate the risk of developing oral cavity cancer. |
Instruction: Is renal impairment a predictor of the incidence of cataract or cataract surgery?
Abstracts:
abstract_id: PUBMED:15691566
Is renal impairment a predictor of the incidence of cataract or cataract surgery? Findings from a population-based study. Purpose: To explore the relationship between creatinine clearance, an estimate of glomerular filtration rate, and 5-year incidence of cataract and cataract surgery.
Design: Population-based cohort study.
Participants: Of the 3654 participants (aged 49 years or older) of the Blue Mountains Eye Study (BMES I) baseline examination (during 1992 to 1994), 2334 (75%) were reexamined after 5 years from 1997 to 1999 (BMES II).
Method: Risk factor data were collected for all participants at baseline (BMES I). Assessment of renal function was based on estimated creatinine clearance (C(Cr)) calculated with the Cockcroft-Gault formula, adjusted for body surface area, and expressed in ml/minute/1.73 m2. Cataract incidence was determined from graded photographs. The association between renal function and incidence of cataract and cataract surgery was analyzed by logistic regression.
Main Outcome Measures: Incidence of nuclear, cortical, and posterior subcapsular cataract, and cataract surgery.
Results: Mean C(Cr) ± standard deviation was 60 ± 13 ml/minute/1.73 m2. The overall 5-year incidence of nuclear, cortical, and posterior subcapsular cataract was 35.7% (417 of 1167 participants at risk), 16.7% (274 of 1641), and 4.3% (77 of 1790), respectively. Cataract surgery was performed in 6.8% (144 of 2123) of participants. There were no significant associations of renal function with incident nuclear (odds ratio [OR], 1.0; 95% confidence interval [CI], 0.99-1.02), cortical (OR, 1.0; CI, 0.98-1.01), and posterior subcapsular cataract (OR, 1.0; CI, 0.99-1.04) after adjusting for multiple risk factors. After adjusting for age, gender, and dark brown iris color, moderate or worse renal impairment (C(Cr) <60 ml/minute/1.73 m2) compared with normal or mildly impaired function (C(Cr) ≥60 ml/minute/1.73 m2) was significantly associated with incident cataract surgery (P<0.05), but the effect depended on age. Participants younger than 60 years of age with moderate to severe renal impairment had increased odds of incident cataract surgery (OR, 2.75; CI, 1.06-7.14), but this OR decreased with age. In participants 80 years old or older, the OR was 0.34 (CI, 0.11-1.10).
Conclusions: There were no significant effects of renal function on the incidence of cataract of any type. The effect of moderate or worse renal impairment on incident cataract surgery depended on age. However, the interpretation of this effect is uncertain because of additional factors that may be involved in patients having surgery.
abstract_id: PUBMED:12567748
A study of factors related to the incidence of cataract in patients with non-insulin dependent diabetes mellitus. Purpose: To investigate the factors related to the development of cataract in patients with non-insulin dependent diabetes mellitus(NIDDM).
Methods: 792 NIDDM patients received ophthalmologic examinations including visual acuity, external status of the eyes, slit lamp microscopy and ophthalmoscopy. Glucose, urea nitrogen (BUN), creatinine (Cr), uric acid (UA), N-acetyl-beta-D-glucosaminidase (NAG), beta 2-microglobulin (beta 2-MG) and albumin were quantitatively tested in blood. Glucose, pH value, protein, cells, casts and ketone bodies in urine were assayed. Diagnosis of cataract was based on the Lens Opacities Classification System II; any patient meeting the "NII", "CII" or "PII" level was diagnosed as having cataract.
Results: The incidence of cataract in this group of NIDDM patients was 62.37% (494/792), which was significantly related to the duration of the disease course, but not to the sex of the patient. The occurrence rate of cataract in patients suffering from NIDDM for less than five years, from five to ten years, and for more than ten years was 49.67% (228/459), 71.84% (125/174), and 88.68% (141/159), respectively. The occurrence of cataract in patients with a disease duration of five to ten years or of more than ten years was much higher than in those with a disease duration of less than five years (P < 0.05 and P < 0.001, respectively). Rising concentrations of blood urea nitrogen, creatinine, glycosylated hemoglobin (G-HbA1c), N-acetyl-beta-D-glucosaminidase (NAG) and beta 2-microglobulin (beta 2-MG) indicated renal dysfunction, and the rate of cataract occurrence in these patients was higher.
Conclusion: This study indicates that prolongation of the duration of non-insulin dependent diabetes mellitus, renal dysfunction, as well as poor blood glucose control, may accelerate the development of cataract.
abstract_id: PUBMED:8789523
Renal function in the immediate postoperative period of abdominal and ophthalmic surgery. Objectives: To study changes in kidney function immediately after abdominal or eye surgery and to assess the roles of vasoactive substances (antidiuretic hormone [ADH], renin and aldosterone) and natriuretic factors (atrial natriuretic peptide [ANP] and digoxin-like immunoreactive factor [DLIF]) in renal function.
Patients And Methods: We distributed 23 patients into 2 groups. Group A contained 16 subjects undergoing high abdominal surgery (cholecystectomy) under general anesthesia, and group B included 7 patients undergoing cataract extraction and intraocular lens implantation with peribulbar anesthesia. The first blood sample was taken before anesthetic induction; the first urine sample had been taken 24 hours prior to surgery. The second blood and urine samples were taken 2 hours after the patient's arrival in the intensive care recovery ward.
Results: Patients undergoing abdominal surgery experienced significant decreases in diuresis (p < 0.01) and sodium excretion (p < 0.05) and increases in urinary potassium (p < 0.01) and urinary osmolarity, accompanied by high ADH (p < 0.01) and aldosterone (p < 0.01) levels in both blood (p < 0.05) and urine. Renin, ANP and DLIF did not change significantly in patients receiving peribulbar anesthesia.
Conclusions: The increases in ADH and aldosterone levels that occur as a response to stress in abdominal surgery are implicated in the antidiuretic and antinatriuretic effects observed in the postoperative period. Renin, ANP and DLIF do not seem to be responsible for kidney dysfunction.
abstract_id: PUBMED:28210182
Successful cataract surgery in a patient with refractory Wegener's granulomatosis effectively treated with rituximab: A case report. Wegener's granulomatosis is a granulomatous disorder associated with systemic necrotizing vasculitis. Eye involvement occurs in approximately 50% of Wegener's granulomatosis patients and is an important cause of morbidity. Conventional treatment-related morbidity and failure have led to studies of alternative treatment modalities. In this case of a 35-year-old man with severe Wegener's granulomatosis, conventional therapy failed to induce remission. Despite the standard immunosuppressive therapy, progression of the disease was observed, mainly with ocular manifestations and renal impairment. Rituximab was given intravenously and led to remission of both systemic and ocular manifestations of the disease. After 1 year of disease quiescence, he was admitted one week after his third regularly-scheduled rituximab treatment and was started on intravenous methylprednisolone, 500 mg/day for 3 days, before cataract surgery. Subsequently, the patient underwent phacoemulsification cataract surgery in his left eye. Six months later, in the same manner he underwent uneventful phacoemulsification cataract surgery in the right eye with a favorable outcome in both eyes.
Conclusion: In this patient, rituximab was a well-tolerated and effective remission induction agent for severe refractory Wegener's granulomatosis and led to successful cataract surgery.
abstract_id: PUBMED:22426176
Axial length and proliferative diabetic retinopathy. Purpose: To determine the correlation between axial length and diabetic retinopathy (DR) in patients with diabetes mellitus for 10 years or more.
Methods: This study was a prospective, observational, cross-sectional study. Patients with diabetes for 10 years or more were included. We excluded eyes with any other significant ocular disease or any prior intraocular surgery, except uncomplicated cataract surgery. Only one eye of each patient was included as the study eye. The severity of DR was graded as no DR, non-proliferative DR (NPDR), or proliferative DR (PDR). Axial length was measured by A-scan ultrasound (10 MHz Transducer, AL-2000 Biometer/Pachymeter; Tomey, Phoenix, AZ). Univariate logistic regression models were used to evaluate the relationship between the dependent variables (any DR, PDR) and all potential risk factors. Axial length and other factors with p value <0.1 were included in multivariate logistic regression models. Backward selection based on the likelihood ratio statistic was used to select the final models.
Results: We included 166 eyes from 166 patients (93 female and 73 male; mean age, 68.8 years). The mean diabetes duration was 15.4 years. Fifty-four (32.5%) eyes had no DR, 72 (43.4%) eyes had NPDR, and 40 (24.1%) eyes had PDR. In univariate analysis, hypertension (p = 0.009), renal impairment (p = 0.079), and insulin use (p = 0.009) were associated with developing any DR. Hypertension (p = 0.042), renal impairment (p = 0.014), insulin use (p = 0.040), pseudophakia (p = 0.019), and axial length (p = 0.076) were associated with developing PDR. In multivariate analysis, hypertension (p = 0.005) and insulin use (p = 0.010) were associated with developing any DR. Hypertension (p = 0.020), renal impairment (p = 0.025), pseudophakia (p = 0.006), and axial length (p = 0.024) were associated with developing PDR.
Conclusions: This observational study suggests an inverse relationship between axial length and the development of PDR in patients with diabetes for 10 years or more. No relationship was found between axial length and the development of any DR.
abstract_id: PUBMED:22982908
Long-term outcome of the difficult nephrotic syndrome in children. To determine the long-term outcome of nephrotic syndrome (NS) in children, we studied 48 patients with NS aged seven months to 12 years at onset who were followed for a long period (3-9 years). Consanguinity was positive in 31.2%. A patient history of atopy was present in 25%, while a family history of allergy was present in 18 (37.5%) patients. Renal impairment at initial presentation was observed in 12.5% of the patients. Among 32 biopsied patients, 11 (34.3%) had focal segmental glomerulosclerosis and eight (25%) revealed mesangial IgM nephropathy. Outcome at two years after presentation showed 41.6% of patients as frequent relapsers, 39.5% as steroid dependent and 18.7% as steroid resistant. Forty-three patients were available for follow-up at ten years after presentation: 22 (51%) patients had complete remission, 15 (34.8%) were steroid dependent, two (4.6%) developed chronic renal failure and two (4.6%) died. Two patients (4.6%) developed insulin-dependent diabetes mellitus, two (4.6%) had cataract and one (2.3%) had documented peritonitis. In conclusion, the high incidence of steroid dependence, frequent relapses and steroid resistance in children can be explained by different factors, including consanguinity, atopy and severe presentation at onset of disease. We suggest longer initial treatment at onset for this group of patients. The low incidence of infection in this group needs to be addressed in future studies.
abstract_id: PUBMED:37629352
Positive Association between Macular Pigment Optical Density and Glomerular Filtration Rate: A Cross-Sectional Study. Although decreased macular pigment density is associated with the development of age-related macular degeneration (AMD), exactly how this decrease may contribute to the development of AMD is still not fully understood. In this study, we investigated the relationship between macular pigment optical density (MPOD) and estimated glomerular filtration rate (eGFR). MPOD was measured using MPS II (Electron Technology, Cambridge, UK) in 137 participants who showed no clinical signs of AMD at 3 months after cataract surgery, and simple and multiple linear regression analyses were performed to determine the associations with age, sex, abdominal circumference, diabetes, hypertension, smoking, intraocular lens color, visual acuity before and after surgery, and eGFR. The participants were divided into two groups based on the median MPOD (0.58): the high-pigment and low-pigment groups. The mean value of eGFR in the high-pigment group was significantly higher than that in the low-pigment group (64.2 vs. 58.1, p = 0.02). The simple linear regression analysis revealed a significant positive association between MPOD and eGFR (β = 0.0034, 95% confidence interval [CI]: 0.0011-0.0056, p = 0.0038), and this association was independent of age, sex, abdominal circumference, diabetes, smoking, hypertension, best-corrected visual acuity (BCVA) before surgery, BCVA after surgery, and intraocular lens color (β = 0.0033, 95% CI: 0.00090-0.0058, p = 0.0076). These results show a strong association of renal dysfunction with the decrease in MPOD.
abstract_id: PUBMED:32340490
Oculocerebrorenal syndrome of Lowe: Survey of ophthalmic presentations and management. Background: Lowe syndrome is a rare X-linked disease that is characterized by renal dysfunction, developmental delays, congenital cataracts and glaucoma. Mutations in the oculocerebrorenal syndrome of Lowe (OCRL) gene are found in Lowe syndrome patients. Although loss of vision is a major concern for families and physicians who take care of children with Lowe syndrome, the definitive cause of visual loss is still unclear. Children usually present with bilateral dense cataracts at birth and with glaucoma, which occurs in more than half of cases, either concurrently or following cataract surgery.
Materials And Methods: A retrospective review was conducted on the prevalence and characteristics of ocular findings among families of patients with Lowe syndrome with 137 uniquely affected individuals.
Results: Of 137 patients, all had bilateral congenital cataracts. Nystagmus was reported in 69.3% of cases, glaucoma in 54.7%, strabismus in 35.0%, and corneal scar in 18.2% of patients. Glaucoma was reported as the most common cause of blindness (46%) followed by corneal scars (41%). Glaucoma occurred in 54.7% of patients and affected both eyes in the majority of cases. Of these patients, 55% underwent surgery for glaucoma, while the remaining patients used medications to control their eye pressure. Timolol and latanoprost were the most commonly used medications. Although trabeculectomy and goniotomy are commonly used for pressure management, aqueous tube shunts had the best outcomes.
Conclusion: Ocular manifestations in individuals with Lowe syndrome and carriers with OCRL mutation are reported which may help familiarize clinicians with the ocular manifestations and management of a rare and complex syndrome.
abstract_id: PUBMED:31901096
Nephrocalcinosis, Renal Dysfunction, and Calculi in Patients With Primary Hypoparathyroidism on Long-Term Conventional Therapy. Context: There are concerns about the long-term safety of conventional therapy on renal health in patients with hypoparathyroidism. Careful audit of these would help comparisons with upcoming parathyroid hormone therapy.
Objective: We investigated nephrocalcinosis, renal dysfunction, and calculi, their predictors and progression over long-term follow-up in patients with primary hypoparathyroidism (PH).
Design And Setting: An observational study at a tertiary care center was conducted.
Participants And Methods: A total of 165 PH patients receiving conventional therapy were evaluated by radiographs, ultrasonography, and computed tomography. Their glomerular filtration rate (GFR) was measured by Tc-99m-diethylenetriamine penta-acetic acid clearance. Clinical characteristics, serum total calcium, phosphorus, creatinine, hypercalciuria, and fractional excretion of phosphorus (FEPh) at presentation and during follow-up were analyzed as possible predictors of renal complications. Controls were 165 apparently healthy individuals.
Results: Nephrocalcinosis was present in 6.7% of PH patients but not in controls. Patients younger than 15 years at presentation and with higher serum calcium-phosphorus product were at higher risk. Nephrocalcinosis showed no significant association with cataract and intracranial calcification. Prevalence of renal calculi was comparable between hypoparathyroid patients and controls (5% vs 3.6%, P = .58). Fourteen percent of patients had a GFR less than 60 mL/min/1.73 m2. Increased FEPh during follow-up was the significant predictor of low GFR. Nephrocalcinosis developed in 9% of patients over 10 years of conventional therapy.
Conclusion: A total of 6.7% of PH patients had nephrocalcinosis, and 14% showed renal dysfunction. Prevalence of renal calculi was similar in patients and controls. Nine percent of patients developed nephrocalcinosis over 10 years of conventional therapy.
abstract_id: PUBMED:28138323
Review of Hypoparathyroidism. Hypoparathyroidism is a rare endocrine disorder in which parathyroid hormone (PTH) production is abnormally low or absent, resulting in low serum calcium and increased serum phosphorus. The most common cause of hypoparathyroidism is parathyroid gland injury or inadvertent removal during thyroid surgery. Current treatments include supplementation with calcium and active vitamin D, with goal albumin-corrected serum calcium level in the low-normal range of 8-9 mg/dl. Complications of the disease include renal dysfunction, nephrocalcinosis, kidney stones, extracellular calcifications of the basal ganglia, and posterior subcapsular cataracts, as well as low bone turnover and increased bone density. Until January 2015, hypoparathyroidism was the only classic endocrine disease without an available hormone replacement. Recombinant human PTH 1-84, full-length PTH, is now available for a selected group of patients with the disease who are not well controlled on the current standard therapy of calcium and active vitamin D. In addition, the role of PTH replacement on quality of life, intracerebral calcifications, cataracts, improving bone turnover, and reduction of renal complications of the disease remains to be further investigated.
Answer: Based on the findings from the population-based cohort study in the Blue Mountains Eye Study (BMES I and II), renal impairment, as measured by creatinine clearance, was not significantly associated with the incidence of nuclear, cortical, and posterior subcapsular cataract after adjusting for multiple risk factors. However, moderate or worse renal impairment (creatinine clearance <60 ml/minute/1.73 m2) compared with normal or mildly impaired function was significantly associated with incident cataract surgery in participants younger than 60 years of age, but this association decreased with age (PUBMED:15691566).
In contrast, another study investigating factors related to the development of cataract in patients with non-insulin dependent diabetes mellitus (NIDDM) found that renal dysfunction, along with the prolongation of the duration of NIDDM and poor blood glucose control, may accelerate the development of cataract (PUBMED:12567748).
Therefore, while one study suggests that renal impairment may be associated with an increased likelihood of undergoing cataract surgery, particularly in younger individuals, another study indicates that renal dysfunction can be a factor in the development of cataract in diabetic patients. It is important to note that the interpretation of these effects is uncertain due to additional factors that may influence the decision to have surgery and the development of cataracts in different populations. |
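As an editorial aside (not drawn from the abstracts above), the Cockcroft-Gault estimate of creatinine clearance used in the BMES analysis (PUBMED:15691566) is conventionally written as

\[ C_{Cr}\ (\mathrm{mL/min}) = \frac{(140 - \text{age}) \times \text{weight (kg)}}{72 \times S_{Cr}\ (\mathrm{mg/dL})} \times 0.85\ \text{(if female)}, \]

with the result then multiplied by 1.73/BSA to normalize for body surface area, which yields the ml/minute/1.73 m2 units reported in the abstract. The exact constants and adjustment applied in the study itself are not stated in the abstract, so this formula is given only as the conventional form.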
Instruction: Does polycystic ovarian morphology influence the response to treatment with pulsatile GnRH in functional hypothalamic amenorrhea?
Abstracts:
abstract_id: PUBMED:27129705
Does polycystic ovarian morphology influence the response to treatment with pulsatile GnRH in functional hypothalamic amenorrhea? Background: Pulsatile GnRH therapy is the gold standard treatment for ovulation induction in women with functional hypothalamic amenorrhea (FHA). The use of pulsatile GnRH therapy in FHA patients with polycystic ovarian morphology (PCOM), called "FHA-PCOM", has been little studied in the literature and results remain contradictory. The aim of this study was to compare the outcomes of pulsatile GnRH therapy for ovulation induction between FHA and "FHA-PCOM" patients in order to look for a possible impact of PCOM.
Methods: Retrospective study from August 2002 to June 2015, including 27 patients with FHA and 40 "FHA-PCOM" patients (85 and 104 initiated cycles, respectively) treated with pulsatile GnRH therapy for ovulation induction.
Results: The two groups were similar except for markers of PCOM (follicle number per ovary, serum Anti-Müllerian Hormone level and ovarian area), which were significantly higher in patients with "FHA-PCOM". There was no significant difference between the groups concerning the ovarian response: with equivalent doses of GnRH, both groups had similar ovulation (80.8 vs 77.7 %, NS) and excessive response rates (12.5 vs 10.6 %, NS). There was no significant difference in on-going pregnancy rates (26.9 vs 20 % per initiated cycle, NS), as well as in miscarriage, multiple pregnancy or biochemical pregnancy rates.
Conclusion: Pulsatile GnRH seems to be a successful and safe method for ovulation induction in "FHA-PCOM" patients. If results were confirmed by prospective studies, GnRH therapy could therefore become a first-line treatment for this specific population, just as it is for women with FHA without PCOM.
abstract_id: PUBMED:27258574
Comparison between pulsatile GnRH therapy and gonadotropins for ovulation induction in women with both functional hypothalamic amenorrhea and polycystic ovarian morphology. Context: Ovulation induction in patients having both functional hypothalamic amenorrhea (FHA) and polycystic ovarian morphology (PCOM) has been less studied in the literature. As results remain contradictory, no recommendations have yet been established.
Objective: To compare pulsatile GnRH therapy versus gonadotropins for ovulation induction in "FHA-PCOM" patients and to determine whether one treatment is superior to the other.
Methods: A 12-year retrospective study, comparing 55 "FHA-PCOM" patients, treated either with GnRH therapy (38 patients, 93 cycles) or with gonadotropins (17 patients, 53 cycles).
Results: Both groups were similar, defined by low serum LH and E2 levels, low BMI, excessive follicle number per ovary and/or high serum AMH level. Ovulation rates were significantly lower with gonadotropins (56.6% versus 78.6%, p = 0.005), with more cancellation and ovarian hyper-responses (14% versus 34% per initiated cycle, p < 0.005). Pregnancy rates were significantly higher with GnRH therapy, whether per initiated cycle (26.9% versus 7.6%, p = 0.005) or per patient (65.8% versus 23.5%, p = 0.007).
Conclusion: In our study, GnRH therapy was more successful and safer than gonadotropins, for ovulation induction in "FHA-PCOM" patients. If results were confirmed by prospective studies, it could become a first-line treatment for this population, just as it is for FHA women without PCOM.
abstract_id: PUBMED:2274018
Pulsatile treatment with gonadotropin-releasing hormone (GnRH). The route of administration conditions the achievable frequency of pulsatile GnRH because of differences in the rate of release from the depot. High pulse frequencies (a pulse every 60-90 min) can be reached only with the intravenous route, which is suitable for ovulation induction: ovulation is achieved in 73-92% of women affected by hypothalamic amenorrhoea and in 41-51% of women affected by polycystic ovarian syndrome (previously suppressed with buserelin), and the risk of overstimulation is lower than during therapy with gonadotropins. The subcutaneous route is suitable for puberty induction, which requires long-term treatment (18-24 months); the results are generally good, except in hypopituitary patients. The intranasal route requires pulse intervals greater than 150-180 min; low-frequency administration (3 times/day) is sufficient to treat cryptorchidism with good results (30-70%), and the intranasal route can also be useful to maintain the results reached with long-term therapy with LHRH or gonadotropins.
abstract_id: PUBMED:35787707
Basal and dynamic relationships between serum anti-Müllerian hormone and gonadotropins in patients with functional hypothalamic amenorrhea, with or without polycystic ovarian morphology. Background: To evaluate, in women with functional hypothalamic amenorrhea (FHA), whether there is a difference between patients with and without polycystic ovarian morphology (PCOM) concerning the response to a gonadotropin-releasing hormone (GnRH) stimulation test and to pulsatile GnRH treatment.
Methods: In a retrospective observational study, 64 women with FHA who underwent a GnRH stimulation test and 32 age-matched controls without PCOM were included. Pulsatile GnRH treatment was provided to 31 FHA patients and three-month follow-up data were available for 19 of these.
Results: Serum levels of gonadotropins and estradiol were lower in FHA women than in controls (p < 0.05). FHA patients revealed PCOM in 27/64 cases (42.2%). FHA patients without PCOM revealed lower anti-Müllerian hormone (AMH) levels than controls (median 2.03 ng/mL, IQR 1.40-2.50, versus 3.08 ng/mL, IQR 2.24-4.10, respectively, p < 0.001). Comparing FHA patients with and without PCOM, the latter revealed lower AMH levels, a lower median LH increase after the GnRH stimulation test (240.0%, IQR 186.4-370.0, versus 604.9%, IQR 360.0-1122.0; p < 0.001) as well as, contrary to patients with PCOM, a significant increase in AMH after three months of successful pulsatile GnRH treatment (median 1.69 ng/mL at baseline versus 2.02 ng/mL after three months of treatment; p = 0.002).
Conclusions: In women with FHA without PCOM, the phenomenon of low AMH levels seems to be based on relative gonadotropin deficiency rather than diminished ovarian reserve. AMH tended to rise after three months of pulsatile GnRH treatment. The differences found between patients with and without PCOM suggest the existence, in the former, of some PCOS-specific systemic and/or intra-ovarian abnormalities.
abstract_id: PUBMED:11397835
Specific factors predict the response to pulsatile gonadotropin-releasing hormone therapy in polycystic ovarian syndrome. Ovulation induction is particularly challenging in patients with polycystic ovarian syndrome (PCOS) and may be complicated by multifollicular development. Pulsatile GnRH stimulates monofollicular development in women with anovulatory infertility; however, ovulation rates are considerably lower in the subgroup of patients with PCOS. The aim of this retrospective study was to determine specific hormonal, metabolic, and ovarian morphological characteristics that predict an ovulatory response to pulsatile GnRH therapy in patients with PCOS. Subjects with PCOS were defined by chronic amenorrhea or oligomenorrhea and clinical and/or biochemical hyperandrogenism in the absence of an adrenal or pituitary disorder. At baseline, gonadotropin dynamics were assessed by 10-min blood sampling, insulin resistance by fasting insulin levels, ovarian morphology by transvaginal ultrasound, and androgen production by total testosterone levels. Intravenous pulsatile GnRH was then administered. During GnRH stimulation, daily blood samples were analyzed for gonadotropins, estradiol (E(2)), progesterone, inhibin B, and androgen levels, and serial ultrasounds were performed. Forty-one women with PCOS underwent a total of 144 ovulation induction cycles with pulsatile GnRH. Fifty-six percent of patients ovulated with 40% of ovulatory patients achieving pregnancy. Among the baseline characteristics, ovulatory cycles were associated with lower body mass index (P < 0.05), lower fasting insulin (P = 0.02), lower 17-hydroxyprogesterone and testosterone responses to hCG (P < 0.03) and higher FSH (P < 0.05). In the first week of pulsatile GnRH treatment, E(2) and the size of the largest follicle were higher (P < 0.03), whereas androstenedione was lower (P < 0.01) in ovulatory compared with anovulatory patients. Estradiol levels of 230 pg/mL (844 pmol/L) or more and androstenedione levels of 2.5 ng/mL (8.7 nmol/L) or less on day 4 and follicle diameter of 11 mm or more by day 7 of pulsatile GnRH treatment had positive predictive values for ovulation of 86.4%, 88.4%, and 99.6%, respectively. Ovulatory patients who conceived had lower free testosterone levels at baseline (P < 0.04). In conclusion, pulsatile GnRH is an effective and safe method of ovulation induction in a subset of patients with PCOS. Patient characteristics associated with successful ovulation in response to pulsatile GnRH include lower body mass index and fasting insulin levels, lower androgen response to hCG, and higher baseline FSH. In ovulatory patients, high free testosterone is negatively associated with pregnancy. A trial of pulsatile GnRH therapy may be useful in all PCOS patients, as E(2) and androstenedione levels on day 4 or follicle diameter on day 7 of therapy are highly predictive of the ovulatory response in this group of patients.
abstract_id: PUBMED:1929195
Induction of ovulation by pulsatile gonadoliberin administration. Indications and limits. With its simplicity, safety and efficacy, pulsatile GnRH administration constitutes a considerable advance in ovulation induction techniques. Its purpose is not to replace classic methods such as clomiphene citrate, gonadotropins or dopaminergic agonists, but to complement them. While the choice of administration route (IV vs SC) is still controversial, efficacy depends mainly on the selection of patients likely to benefit from this therapy. Hypothalamic amenorrhea with low gonadotropic activity remains the best indication for pulsatile GnRH, as substantiated by the results published over the last 10 years. Other causes of anovulation, including PCOS, are more debatable indications, and prospective studies involving homogeneous populations are necessary to assess the true standing of GnRH in such indications.
abstract_id: PUBMED:14680786
Revelation of a polymicrocystic ovary syndrome after one month's treatment by pulsatile GnRH in a patient presenting with functional hypothalamic amenorrhea. Polycystic ovary syndrome (PCOS) and hypothalamic amenorrhea (HA) are the most frequent causes of endocrine infertility, but their association is an uncommon occurrence. We report the case of a 28-year-old woman suffering from infertility and amenorrhea. Her weight was normal (BMI = 19) and she had no hirsutism. She self-reported food restriction and a 10 kg weight loss 5 years earlier, concomitant with the onset of amenorrhea. At the initial evaluation, the patient was considered to have HA due to food restriction. At ultrasonography, the ovaries were small and multifollicular (right and left area: 2.2 and 2.5 cm(2), respectively; number of cysts 2-9 mm in diameter: 15 and 12, respectively), and no stromal hypertrophy was noted. She was treated for 1 month with intravenous pulsatile GnRH administration. Although the doses were increased from 5 to 15 microg/pulse every 90 min, no E2 response and no follicular development were observed. Hormonal re-evaluation revealed normal levels of serum LH, FSH and androgens, and a normal LH/FSH ratio. However, a typical aspect of PCO was found at ultrasound (right and left area: 6.5 and 5.5 cm(2), respectively, and more than 15 small cysts arranged peripherally around an increased central stroma in each ovary). Treatment was then switched to hMG, using the low-dose step-up regimen and starting with 75 U/day. In the absence of a response after 2 weeks, the dose was increased to 112.5 U/day and a multifollicular reaction occurred, leading to cancellation. In conclusion, we hypothesize that this patient had a "hidden" PCOS when she was hypogonadotrophic and that it developed very rapidly after restoration of a normal gonadotropin level under exogenous GnRH. This occurred despite a low insulin level, showing that hyperinsulinism is not a prerequisite for the development of PCOS in every case.
abstract_id: PUBMED:2658471
Endocrine dynamics during pulsatile GnRH administration in patients with hypothalamic amenorrhea and polycystic ovarian disease. The LH secretory patterns and ovarian endocrine responses have been determined during pulsatile gonadotropin-releasing hormone (GnRH) administration for induction of ovulation in patients with hypothalamic amenorrhea (HA). However, until now these endocrine dynamics during GnRH therapy have not been thoroughly investigated in patients with polycystic ovarian disease (PCOD). Seven patients with HA and 4 patients with PCOD have therefore been studied to determine changes in LH pulsatile activity and in serum sex steroid levels in response to chronic intermittent GnRH stimulation. GnRH was administered intravenously (5-10 micrograms/90 minutes) by means of a portable infusion pump. Blood samples were obtained at 15-minute intervals for 4 hours on the day before the start of GnRH stimulation (control day) and on treatment days 5, 10 and 15. LH was determined in all samples and FSH, serum androgens and estrogens were measured in baseline samples by RIA. While 8 (62%) ovulations and 5 conceptions were observed in 13 treatment cycles in patients with HA, no ovulations were achieved during 9 treatment cycles in patients with PCOD. On the control day significantly (p less than 0.05) higher basal LH and testosterone (T) levels and significantly (p less than 0.05) lower FSH levels were found in the PCOD patients. The LH pulsatile profiles of the PCOD patients showed significantly (p less than 0.05) higher pulse amplitudes and areas under the curve (integrated responses). Pulsatile GnRH administration induced a significant (p less than 0.05) increase in LH pulse amplitudes in both HA and PCOD patients, and also increased (p less than 0.05) the integrated responses in patients with HA. During the GnRH stimulation, the LH interpulse intervals of both HA and PCOD patients were found to be similar to the frequency in which exogenous GnRH was administered. FSH levels rose continuously (p less than 0.001) during stimulation in patients with HA, but remained unchanged in patients with PCOD. In HA patients, T, androstenedione (AD) and estrone (E1) did not change during the GnRH treatment, but estradiol (E2) rose so that the ratios of aromatized estrogens to non-aromatized androgens (E1/AD, E2/T) increased. In contrast, T and AD increased significantly (p less than 0.05 or less) and E2 remained unchanged during stimulations in PCOD patients, which resulted in decreasing ratios of estrogens to androgens. These observations confirm that pulsatile GnRH administration can successfully induce ovulation in patients with HA by restoring the ovarian physiology. The data also demonstrate that pulsatile GnRH administration can influence the LH secretory patterns in PCOD patients.(ABSTRACT TRUNCATED AT 400 WORDS)
abstract_id: PUBMED:3134391
Induction of ovulation with pulsatile GnRH in hypothalamic amenorrhoea. Pulsatile administration of gonadotrophin releasing hormone (GnRH) is a very effective treatment for induction of ovulation in hypothalamic amenorrhoea (HA). Thirty-seven women were treated for a total of 117 cycles, which resulted in 42 pregnancies; four treatment failures occurred. If these cycles are excluded, the 42 pregnancies were obtained within 2.3 cycles. One twin pregnancy occurred and no hyperstimulation was observed. The treatment was administered intravenously with a dosage schedule based on the grading of HA. We concluded that pulsatile GnRH was safe and very successful in inducing pregnancy in HA. Other indications (polycystic ovary syndrome and luteal phase defect) remain much less suitable for this treatment.
abstract_id: PUBMED:7962297
Treatment of anovulation with pulsatile gonadotropin-releasing hormone: prognostic factors and clinical results in 600 cycles. Pulsatile GnRH (pGnRH) was administered to 292 anovulatory patients in 600 consecutive cycles between February 1984 and February 1993. This represents the largest single pGnRH series ever reported. Patients were divided into the following groups: primary hypogonadotropic amenorrhea (PHA), 73 patients, 161 cycles; other hypogonadotropic hypogonadisms (OHH), 57 patients 107 cycles; multifollicular ovary (MFO), 39 patients 75 cycles; polycystic ovary (PCO), 85 patients 172 cycles; and other hyperandrogenic anovulations (OHA), 38 patients 85 cycles. GnRH was administered iv at a dose of 1.25-20.0 micrograms every 30-120 min; most cycles (505) were performed with a regimen of 2.5-5.0 micrograms GnRH every 60-90 min. In 228 cycles of MFO, PCO, and OHA patients, pGnRH was preceded by GnRH agonist (GnRH-A) suppression. Ovulatory rates were 75%, and pregnancy occurred in 105 cycles (pregnancy rate of 18%/treatment cycle and 23%/ovulatory cycle). Ovulatory and pregnancy rates were higher in PHA, OHH, and MFO and lower in PCO and OHA. Only 4 multiple pregnancies occurred (3.8%), none after GnRH-A suppression. The abortion rate was 30% and was highest in PCO (45%). GnRH-A pretreatment improved ovulatory rates only in PCO (from 49% to 71%; P < 0.001), whereas it had no significant effect on pregnancy and abortion rates in any group. Higher weight and insulin were associated with lower ovulatory and pregnancy rates; higher LH and testosterone were associated with lower ovulatory rates only. We conclude that 1) pGnRH is a highly effective ovulation induction method; 2) pGnRH does not cause ovarian hyperstimulation; 3) low dose pGnRH is associated with a remarkably low incidence of multiple pregnancy; 4) GnRH-A pretreatment improves pGnRH outcome in PCO and further lowers the incidence of multiple pregnancy; 5) pGnRH is associated with relatively elevated abortion rates, particularly in PCO; and 6) pGnRH is less successful in overweight patients and when high baseline LH, testosterone, and insulin levels are present.
Answer: Polycystic ovarian morphology (PCOM) does not appear to significantly influence the response to treatment with pulsatile GnRH in functional hypothalamic amenorrhea (FHA). A retrospective study comparing the outcomes of pulsatile GnRH therapy for ovulation induction between FHA and "FHA-PCOM" patients found no significant difference in ovarian response, ovulation rates, excessive response rates, ongoing pregnancy rates, miscarriage rates, multiple pregnancy rates, or biochemical pregnancy rates between the two groups. Both groups had similar ovulation rates (80.8% vs 77.7%) and excessive response rates (12.5% vs 10.6%) with equivalent doses of GnRH, suggesting that pulsatile GnRH is a successful and safe method for ovulation induction in "FHA-PCOM" patients (PUBMED:27129705).
Additionally, another study comparing pulsatile GnRH therapy versus gonadotropins for ovulation induction in "FHA-PCOM" patients found that GnRH therapy was more successful and safer than gonadotropins. The ovulation rates were significantly higher with GnRH therapy (78.6% vs 56.6%), and pregnancy rates were also significantly higher with GnRH therapy per initiated cycle (26.9% vs 7.6%) and per patient (65.8% vs 23.5%). This study suggests that GnRH therapy could become a first-line treatment for "FHA-PCOM" patients, just as it is for FHA women without PCOM (PUBMED:27258574).
In summary, the available evidence indicates that PCOM does not negatively impact the response to pulsatile GnRH therapy in women with FHA, and pulsatile GnRH therapy could be considered an effective first-line treatment for ovulation induction in this specific population. |
Instruction: Does religious coping and spirituality have a moderating role on depression and anxiety in patients with spinal cord injury?
Abstracts:
abstract_id: PUBMED:26123206
Does religious coping and spirituality have a moderating role on depression and anxiety in patients with spinal cord injury? A study from Iran. Objectives: We evaluated the levels of anxiety and depression among patients with spinal cord injury (SCI) in relation to their religious coping and spiritual health.
Setting: Brain and Spinal Cord Injury Repair Research Center, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran.
Methods: A sample of patients with SCI participated in this cross-sectional study. They completed a sociodemographic questionnaire, the Hospital Anxiety and Depression Scale, the Brief Religious Coping Questionnaire and the Spiritual Well-being Scale. Then, the association between anxiety, depression and independent variables was examined.
Results: In all, 213 patients with SCI were studied. Of these, 64 (30%) had anxiety and 32 (15%) had depression. Multiple logistic regression analyses revealed that gender (odds ratio (OR) for female=3.34, 95% confidence interval (CI)=1.31-8.51, P=0.011), employment (OR for unemployed=5.71, 95% CI=1.17-27.78, P=0.031), negative religious coping (OR=1.15, 95% CI=1.04-1.28, P=0.006) and existential spiritual well-being (OR=0.93, 95% CI=0.89-0.97, P=0.003) were significant contributing factors to anxiety, whereas negative religious coping (OR=1.21, 95% CI=1.06-1.37, P=0.004) and existential spiritual well-being (OR=0.90, 95% CI=0.84-0.96, P=0.001) were significant contributing factors to depression.
Conclusion: The findings indicated that depression and anxiety are two important psychological consequences of SCI. The findings also indicated that religion and spiritual well-being have a moderating role in the occurrence of depression and anxiety.
abstract_id: PUBMED:26882493
The relationship between anxiety, depression and religious coping strategies and erectile dysfunction in Iranian patients with spinal cord injury. Objectives: To assess the role of anxiety, depressive mood and religious coping in erectile function among Iranian patients with spinal cord injury (SCI).
Setting: Brain and Spinal Cord Injury Repair Research Center, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran.
Methods: A sample of N=93 men with SCI participated in this cross-sectional study. Levels of anxiety and depressive mood were assessed using the Hospital Anxiety and Depression Scale. Religious coping strategies were measured using the 14-item Brief Coping Questionnaire. Erectile function was assessed using the International Index of Erectile Function. The joint effect of anxiety, depressive mood and religious coping strategies on erectile function was assessed by performing stepwise multiple linear regression analyses.
Results: The mean age of the SCI patients was 37.8 years with a mean post-injury time of 4.6 years. Multivariate regression analyses indicated that age (B=-0.27, 95% CI=-0.47 to -0.07), education (B for higher education=0.63, 95% CI=0.24 to 1.02), the American Spinal Injury Association impairment scale (B for complete impairment=-3.36, 95% CI=-3.82 to -2.89), anxiety (B=-3.56, 95% CI=-5.76 to -1.42), positive religious coping (B=0.30, 95% CI=0.03 to 0.57), negative religious coping (B=-0.56, 95% CI=-0.82 to -0.29) and the duration of injury (B=-0.25, 95% CI=-0.22 to -0.29) were all independent factors influencing erectile function in SCI patients.
Conclusion: Overall, the results indicated that SCI patients who use positive religious coping strategies had better erectile function compared with individuals who applied negative religious coping strategies. Furthermore, higher levels of anxiety, greater impairment and longer duration of injury turned out to be risk factors for erectile dysfunction.
abstract_id: PUBMED:31187306
Investigating the Relationship between Religious Beliefs with Care Burden, Stress, Anxiety, and Depression in Caregivers of Patients with Spinal Cord Injuries. Spinal cord injury (SCI) is one of the most severe conditions affecting the central nervous system and can lead to disability in the patient. The aim of the present study was to determine the relationship between religious beliefs and care burden (CG), depression, anxiety and stress (DAS) in caregivers of patients with SCI in the city of Ilam, Iran. This is a descriptive-analytic study, and the study population consisted of caregivers of patients with SCI. A sample size of 150 was selected according to previous studies. The questionnaires used for data collection included the Religious Coping Questionnaire (RC), the Caregiver Questionnaire (CG), and the Depression, Anxiety and Stress Scale-21 (DASS-21). Caregivers of patients with SCI in Ilam city were enrolled using a convenience sampling method. The researchers identified caregivers who met the inclusion criteria, described the research objectives to them, and administered the questionnaires to those willing to participate. Literate caregivers completed the questionnaires through interviews, and trained interviewers completed them for illiterate caregivers in the same way. Data were analyzed using SPSS 16 statistical software, with descriptive and analytical methods used for the statistical analysis. According to the findings, the mean (SD) of RC was 18.41 (2.73), negative RC 7.05 (2.06), positive RC 11.36 (1.89), stress 10.78 (6.27), anxiety 10.12 (5.58), depression 10.50 (3.08), and CG 78.16 (27.09). There was a significant relationship between RC levels and stress (P = 0.000, F = 40.565), anxiety (P = 0.000, F = 45.300), and CG (P = 0.000, F = 37.332), but there was no relationship between RC level and depression status (P = 0.42, F = 0.634). Considering that religion can affect caregivers' levels of CG, stress, and anxiety, it is suggested that the necessary conditions be provided to improve the health status of caregivers of patients with SCI, through attention to religious status and appropriate interventions in this regard.
abstract_id: PUBMED:37909296
Does the disposition of passive coping mediate the association between illness perception and symptoms of anxiety and depression in patients with spinal cord injury during first inpatient rehabilitation? Purpose: To examine associations between illness perception, also called illness cognitions or appraisals, disposition of passive coping, and symptoms of anxiety and depression, and to test whether passive coping mediates the associations between illness perception and symptoms of anxiety and depression.
Materials And Methods: Longitudinal, multicentre study. Participants were inpatients of spinal cord injury (SCI) rehabilitation. Measures included the Brief Illness Perception Questionnaire (B-IPQ), the Utrecht Coping List passive coping subscale (UCL-P), and the Hospital Anxiety and Depression Scale (HADS). Mediation was tested with the PROCESS tool.
Results: The questionnaires were completed by 121 participants at admission and at discharge. Of them, 70% were male, 58% had a paraplegia, and 82% an incomplete lesion. Weak to strong (0.294-0.650) significant associations were found between each pair of study variables. The use of passive coping strategies mediated the associations between illness perception and symptoms of anxiety and depression.
Conclusion: Symptoms of anxiety and depression were more frequent in people who had a threatening illness perception combined with a higher use of passive coping strategies. Therefore, it is advised that patients are screened and treated for threatening illness perception and high use of passive coping strategies during rehabilitation after SCI.
abstract_id: PUBMED:32039872
Do spirituality, resilience and hope mediate outcomes among family caregivers after traumatic brain injury or spinal cord injury? A structural equation modelling approach. Background: Research taking a deficits approach to understanding psychological adjustment in family caregivers of individuals with a neurological disability is extensive, but further work in the field of positive psychology (spirituality, resilience, hope) may provide a potential avenue for broadening knowledge of the family caregiver experience after traumatic brain injury (TBI) or spinal cord injury (SCI).
Objective: To test a proposed model of spirituality among family caregivers of individuals with TBI or SCI, using structural equation modelling (SEM).
Methods: A cross-sectional design was employed to survey ninety-nine family participants (TBI = 76, SCI = 23) from six rehabilitation units in NSW and Queensland. Assessments comprised the Functional Assessment of Chronic Illness Therapy-Spiritual Well-being Scale-Expanded, the Connor-Davidson Resilience Scale, the Herth Hope Index, and three measures of psychological adjustment: the Caregiver Burden Scale, the Positive and Negative Affect Scale, and the Depression Anxiety Stress Scale.
Results: SEM showed the proposed model was a good fit. The main findings indicated spirituality had a direct negative link with burden. Spirituality had a direct positive association with hope, which in turn had a positive link with resilience. Spirituality influenced positive affect indirectly, mediated by resilience. Positive affect, in turn, had a negative association with depression in caregivers.
Conclusions: This study contributes to better targeting strength-based family interventions.
abstract_id: PUBMED:21508916
Comparison of the coping strategies, anxiety, and depression in a group of Turkish spinal cord injured patients and their family caregivers in a rehabilitation center. Background: Spinal cord injury (SCI) affects the emotional states and coping strategies of the patients and their families. Of particular interest is the interaction between emotional status and coping strategies.
Aim: The aim of this study was to investigate the coping strategies and emotional states of the individuals with SCI and their caregivers and to compare the results of the groups.
Design: Cross-sectional
Setting: Inpatient rehabilitation.
Population: Thirty one patients with traumatic SCI and 31 family caregivers admitted to the inpatient rehabilitation were evaluated.
Methods: The injury duration was ≤12 months. Coping strategies and emotional status of the participants were evaluated with the Brief Ways of Coping Questionnaire and the Hospital Anxiety and Depression Scale. The ASIA Impairment Scale and the Functional Independence Measure (FIM) were used to assess lesion severity and functional status.
Results: The most common coping strategies were self-confidence and optimistic strategies in both the patient and caregiver groups. There was no statistically significant difference between the coping strategies and emotional status of the groups (P>0.05). A positive correlation was found between the helplessness strategy and age in patients with SCI. Coping strategies did not correlate with FIM. Anxiety in caregivers correlated negatively with SCI duration (P<0.05).
Clinical Rehabilitation Impact: SCI patients and their family caregivers must both be evaluated in terms of coping strategies, anxiety and depression. Patient-caregiver pairs with maladaptive coping styles and mood disorders might be supported with specific interventions to help adaptation to SCI and to improve rehabilitation efficacy.
abstract_id: PUBMED:12675978
Coping effectiveness training reduces depression and anxiety following traumatic spinal cord injuries. Objective: To extend the findings of a pilot study that evaluated a brief group-based psychological intervention aimed at improving psychological adjustment, self-perception and enhancing adaptive coping following spinal cord injury. The theoretical underpinnings of the Coping Effectiveness Training (CET) Programme are Lazarus and Folkman's (1984) cognitive theory of stress and coping, and cognitive behavioural therapy techniques.
Design: A controlled trial comparing patients that received the CET intervention with matched controls on measures of psychological adjustment and coping.
Method: A total of 45 intervention group participants and 40 matched controls were selected from inpatients at a hospital-based spinal cord injury centre. Outcome measures of anxiety and depression, self-perception and coping were collected before, immediately after and 6 weeks following the intervention.
Results: Intervention group participants showed a significant reduction in depression and anxiety, compared to the matched controls following the intervention. There was no evidence of a significant change in the pattern of coping strategies used by the intervention group compared to controls. The intervention group alone completed measures of self-perception. There was a significant decrease in the discrepancy between participants' 'ideal' self and 'as I am', and between 'as I would be without the injury' and 'as I am' following the intervention and at follow-up. Significant correlations were also found between self-perception, and anxiety and depression over time.
Conclusions: These results confirm the findings of the pilot study, that the CET intervention facilitated a significant improvement in psychological adjustment to spinal cord injury. It is proposed that this improvement may be understood in terms of changing participants' negative appraisals of the implications of spinal cord injury with the result of increasing the perceived manageability of its consequences. Such decatastrophizing alters appraisals which are associated with current mood. Participants found shared discussion and problem-solving to be particularly helpful. Avenues for further research are discussed.
abstract_id: PUBMED:21512753
The impact of perceptions of health control and coping modes on negative affect among individuals with spinal cord injuries. A wide range of demographic, medical, and personality and coping variables has been implicated as predictors of psychosocial outcomes following the onset of spinal cord injuries (SCI). The primary purpose of this study was to examine the role that perceptions of health control (internality, chance-determined, and other persons-determined) and coping strategies play in predicting respondents' negative affect, namely, reactions of depression and anxiety [i.e., posttraumatic stress disorder (PTSD)], as outcomes of psychosocial adaptation to disability. A second purpose was to investigate the potential role that time since injury (TSI) plays in moderating the influence of coping on psychosocial outcomes related to SCI. Ninety-five survivors of SCI participated in the study by completing a battery of self-report measures. Two sets of multiple regression analyses were employed to address the study's goals. Findings indicated that after controlling for the influence of gender, age, time since injury, and number of prior life traumas: (a) the use of disengagement coping successfully predicted both respondents' levels of depression and PTSD; (b) none of the perceptions of control of one's health significantly influenced psychosocial reactions to SCI, as indicated by depression and PTSD, although perceptions of chance control showed a moderate positive trend; and (c) time since injury did not moderate the relationships between coping and negative affect related to the onset of SCI. The implications of these findings for rehabilitation professionals are discussed.
abstract_id: PUBMED:33867367
Depression or anxiety symptoms associated with occupational role transitions in Brazilian adults with a traumatic spinal cord injury: A multivariate analysis. Background: Psychological morbidity is commonly experienced by people with a spinal cord injury (SCI), but whether it is associated with occupational role transitions is unknown.
Objective: To analyze whether anxiety or depression symptoms are independently associated with increased likelihoods of role transitions in adults with SCI.
Methods: Cross-sectional study; multivariate analysis using a heteroscedastic Dirichlet regression.
Participants: Thirty persons with traumatic SCI.
Measures: Role Checklist (e.g. role transitions: dependent variables) and Beck's Depression Inventory and State-Trait Anxiety Inventory (independent variables), adjusted for socio-demographic, functional, and injury-level confounders.
Results: Greater depression symptoms independently increased the likelihood of occupational role transitions, either for roles loss [adjusted Odds Ratio (AOR): 1.04; 95% confidence interval (CI):1.009-1.080] or roles gain [AOR: 1.07; 95% CI:1.02-1.13], as opposed to continued occupational roles. Higher anxiety as a trait, in turn, independently reduced the likelihood of occupational roles gain [AOR: 0.93; 95% CI: 0.869-0.992]. The "worker" role was the one lost more frequently (83%).
Conclusion: Psychological morbidity is associated with occupational role transitions, as opposed to continued roles. Further research (e.g. with larger samples, longitudinal design, using structural equation modelling) should elucidate the intricate relationships between mental health status and occupational role transitions in people experiencing SCI.
abstract_id: PUBMED:3639540
Personal control and coping effectiveness in spinal cord injured persons. The relationship between personal control and coping effectiveness was studied in a convenience sample of 104 hospitalized spinal cord injured males. Control was conceptualized as participants' preference to control, their generalized expectation for control, and their perception of options to control. Depression was used as an indicator of level of coping effectiveness. Demographic data and participants' trait anxiety were included as possible correlates of the variables of interest. For those who preferred to control, perceived options to do so were negatively related to levels of depression regardless of locus of control orientation. Depression scores were unrelated to expected or perceived control for those without such a preference. Findings indicate that the significance of having control is an individualized matter. Detailed assessments may need to be devised if patients are to obtain maximum benefit from participating in their own care.
Answer: Yes, religious coping and spirituality have been found to have a moderating role on depression and anxiety in patients with spinal cord injury (SCI). A study conducted in Iran with 213 patients with SCI revealed that negative religious coping was a significant contributing factor to both anxiety and depression, while existential spiritual well-being was inversely associated with these conditions, suggesting that higher levels of spiritual well-being were protective against anxiety and depression (PUBMED:26123206). Another study with Iranian patients with SCI found that positive religious coping strategies were associated with better erectile function, which can be an indicator of overall well-being, while negative religious coping strategies were detrimental. Additionally, higher levels of anxiety were identified as risk factors for erectile dysfunction (PUBMED:26882493).
Furthermore, research on caregivers of patients with SCI indicated that religious coping (RC) levels had a significant relationship with stress and anxiety, but not with depression. This suggests that religion can affect the level of caregiver burden, stress, and anxiety, highlighting the importance of considering religious coping in interventions aimed at improving the health status of caregivers (PUBMED:31187306).
In summary, the evidence suggests that religious coping and spirituality play a significant role in moderating the psychological impact of SCI on patients and their caregivers, influencing levels of depression, anxiety, and overall well-being. |
Instruction: Can fasting plasma glucose and glycated hemoglobin levels predict oral complications following invasive dental procedures in patients with type 2 diabetes mellitus?
Abstracts:
abstract_id: PUBMED:23644869
Can fasting plasma glucose and glycated hemoglobin levels predict oral complications following invasive dental procedures in patients with type 2 diabetes mellitus? A preliminary case-control study. Objective: To evaluate the effects of the levels of glycemic control on the frequency of clinical complications following invasive dental treatments in type 2 diabetic patients and suggest appropriate levels of fasting blood glucose and glycated hemoglobin considered to be safe to avoid these complications.
Method: Type 2 diabetic patients and non-diabetic patients were selected and divided into three groups. Group I consisted of 13 type 2 diabetic patients with adequate glycemic control (fasting blood glucose levels <140 mg/dl and glycated hemoglobin (HbA1c) levels <7%). Group II consisted of 15 type 2 diabetic patients with inadequate glycemic control (fasting blood glucose levels >140 mg/dl and HbA1c levels >7%). Group III consisted of 18 non-diabetic patients (no symptoms and fasting blood glucose levels <100 mg/dl). The levels of fasting blood glucose, glycated HbA1c, and fingerstick capillary glycemia were evaluated in diabetic patients prior to performing dental procedures. Seven days after the dental procedure, the frequency of clinical complications (surgery site infections and systemic infections) was examined and compared between the three study groups. In addition, correlations between the occurrence of these outcomes and the glycemic control of diabetes mellitus were evaluated.
Results: The frequency of clinical outcomes was low (4/43; 8.6%), and no significant differences between the outcome frequencies of the various study groups were observed (p>0.05). However, a significant association was observed between clinical complications and dental extractions (p = 0.02).
Conclusions: Because of the low frequency of clinical outcomes, it was not possible to determine whether fasting blood glucose or glycated HbA1c levels are important for these clinical outcomes.
abstract_id: PUBMED:25597411
Effectiveness of lifestyle change plus dental care program in improving glycemic and periodontal status in aging patients with diabetes: a cluster, randomized, controlled trial. Background: Currently, there is an increased prevalence of diabetes mellitus among the aging adult population. To minimize adverse effects on glycemic control, prevention and management of general and oral complications in patients with diabetes are essential. The objective of this study is to assess the effectiveness of the lifestyle change plus dental care (LCDC) program to improve glycemic and periodontal status in aging patients with diabetes.
Methods: A cluster, randomized, controlled trial was conducted in Health Centers 54 (intervention) and 59 (control) from October 2013 to April 2014. Sixty-six patients with diabetes per health center were included. At baseline, the intervention group attended 20-minute lifestyle and oral health education, individual lifestyle counseling, application of a self-regulation manual, and individual oral hygiene instruction. At month 3, the intervention group received individual lifestyle counseling and oral hygiene instruction. The intervention group received booster education every visit by viewing a 15-minute educational video. The control group received a routine program. Participants were assessed at baseline and 3- and 6-month follow-up for glycemic and periodontal status. Data were analyzed by using descriptive statistic, χ(2) test, Fisher exact test, t test, and repeated-measures analysis of variance.
Results: After the 6-month follow-up, participants in the intervention group had significantly lower glycated hemoglobin, fasting plasma glucose, plaque index, gingival index, probing depth, and attachment loss when compared with the control group.
Conclusion: The combination of lifestyle change and dental care in one program improved both glycemic and periodontal status in older patients with diabetes.
abstract_id: PUBMED:24934646
Effectiveness of lifestyle change plus dental care (LCDC) program on improving glycemic and periodontal status in the elderly with type 2 diabetes. Background: Currently, there is an increased prevalence of diabetes mellitus among the elderly. To minimize adverse effects on glycemic control, prevention and management of general and oral complications in diabetic patients is essential. The purpose of the present study is to assess the effectiveness of a Lifestyle Change plus Dental Care (LCDC) program to improve glycemic and periodontal status in the elderly with type 2 diabetes.
Methods: A quasi-experimental study was conducted in Health Centers 54 (intervention) and 59 (control) from October 2013 to January 2014. Sixty-six diabetic patients per health center were included. At baseline, the intervention group attended a 20-minute lifestyle and oral health education program, individual lifestyle counseling using motivational interviewing (MI), application of a self-regulation manual, and individual oral hygiene instruction. The intervention group received booster education every visit by viewing a 15-minute educational video. The control group received a routine program. Participants were assessed at baseline and at the 3-month follow-up for glycosylated hemoglobin (HbA1c), fasting plasma glucose (FPG), body mass index (BMI), periodontal status, and knowledge, attitude, and practice of oral health and diabetes mellitus. Data were analyzed using descriptive statistics, the Chi-square test, Fisher's exact test, the t-test, and multiple linear regression.
Results: After the 3 month follow up, a multiple linear regression analysis showed that the intervention group was significantly negatively correlated in both glycemic and periodontal status. Participants in the intervention group had significantly lower glycosylated hemoglobin (HbA1c), fasting plasma glucose (FPG), plaque index score, gingival index score, pocket depth, clinical attachment level (CAL), and percentage of bleeding on probing (BOP) when compared to the control group.
Conclusions: The combination of lifestyle change and dental care in one program improved both glycemic and periodontal status in the elderly with type 2 diabetes.
Trial Registration: ClinicalTrials.in.th: TCTR20140602001.
abstract_id: PUBMED:36981651
Preoperative HbA1c and Blood Glucose Measurements in Diabetes Mellitus before Oral Surgery and Implantology Treatments. Diabetes mellitus has become a worldwide epidemic and is frequently accompanied by a number of complications proportional to the duration of hyperglycemia. The aim of this narrative review is to assess the most up-to-date guidelines on DM provided by both diabetes and dental associations, to gather evidence on the unidirectional and bidirectional relationships between elevated HbA1c levels and dental surgery, implantology, bone augmentation, and periodontology, and to demonstrate the importance of measuring HbA1c levels before invasive dental treatments. HbA1c and blood glucose measurements are a minimally invasive method for preventing complications in diabetes mellitus. The authors conducted a literature review to determine which oral conditions are affected by diabetes mellitus. MEDLINE served as a source with the use of a specific search key. Regarding oral complications of diabetes, prevention is the most vital factor. With this publication, we hope to assist physicians and dentists in making prompt diagnoses, recognizing the various oral manifestations of diabetes, and following the existing guidelines.
abstract_id: PUBMED:35722958
Nanomechanical and Nonlinear Optical Properties of Glycated Dental Collagen. Nonenzymatic glycation is a multistep, slow reaction between reducing sugars and free amino groups of long-lived proteins, which affects the structural and mechanical properties of collagen-rich tissues via accumulation of advanced glycation end products (AGEs). Dental collagen is exposed to glycation as part of the natural aging process. However, in the case of chronically high blood glucose, the process can be accelerated, resulting in premature stiffening of dentin and leading to tooth fragility. The molecular mechanisms whereby collagen glycation evokes the loss of mechanical stability in teeth are currently unknown. In this study, we used 2-photon and atomic force microscopies to correlate structural and mechanical changes in dental collagen induced by in vitro glycation. Young tooth samples were demineralized and cut longitudinally into 30-µm sections, then artificially glycated in 0.5 M ribose solution for 10 wk. Two-photon microscopy analysis showed that both the autofluorescence and second harmonic generation (SHG) signal intensities of glycated samples were significantly greater than those of the controls. Regarding the structural alteration of individual collagen fibers, a remarkable increase in fiber length could be measured in ribose-treated sections. Furthermore, nanoindentation of intertubular dentin regions revealed significantly higher stiffness in the ribose-treated samples, which points to a significant accumulation of AGEs. Thus, collagen glycation occurring during sustained exposure to reducing sugars leads to profound structural and mechanical changes in dentin. Besides the numerous oral complications associated with type 2 diabetes, the premature structural and mechanical deterioration of dentin may also play an important role in dental pathology.
abstract_id: PUBMED:26275194
Evaluation of salivary glucose, amylase, and total protein in Type 2 diabetes mellitus patients. Background: Diabetes mellitus is a complex multisystem metabolic disorder characterized by a deficit in the production of insulin. The oral complications of uncontrolled diabetes mellitus are devastating. Saliva is an organic fluid that can be collected noninvasively and by individuals with limited training. These reasons create an interest in evaluating the possibility of using saliva as a diagnostic tool.
Aims And Objectives: The aim of this study was to determine whether saliva can be used as a noninvasive tool to monitor glycemic control in Type 2 diabetes, through a comparative assessment of salivary glucose, amylase, and total protein levels in patients with Type 2 diabetes and controls.
Materials And Methods: A total of 40 individuals, 20 with Type 2 diabetes and 20 controls of age group 40-60 years were selected for the study. Diabetic status was assessed by estimating random blood glucose levels. Unstimulated saliva was collected from each participant and investigated for glucose, amylase, and total protein levels. Salivary glucose estimation was performed using glucose-oxidase method, amylase by the direct substrate kinetic enzymatic method, and total protein by pyrogallol red dye end point method. All the parameters were subjected to statistical analysis using SPSS version 20.0.
Results: Significantly higher salivary glucose and lower amylase and total protein levels were observed in patients with Type 2 diabetes than in controls. There was no significant correlation between salivary and blood glucose levels.
Conclusion: These results suggest that diabetes influences the composition of saliva. Since a significant correlation was not observed between salivary and blood glucose levels, further research is needed to determine salivary glucose estimation as a diagnostic tool for diabetes mellitus.
abstract_id: PUBMED:26409987
Metabolic and bariatric surgery: Nutrition and dental considerations. Background And Overview: Oral health care professionals may encounter patients who have had bariatric surgery and should be aware of the oral and nutritional implications of these surgeries. Bariatric surgery is an effective therapy for the treatment of obesity. Consistent with the 1991 National Institutes of Health Consensus Development Conference on Gastrointestinal Surgery for Severe Obesity recommendations, patients must meet body mass index (BMI) criteria for severe obesity, defined as a BMI greater than or equal to 40 kilograms per square meter, or a BMI greater than or equal to 35 kg/m(2) with significant comorbidities.
Conclusions: Benefits of bariatric surgery in the treatment of severe obesity include significant and durable weight loss and improvement or remission of obesity-related comorbidities, including type 2 diabetes, hyperlipidemia, hypertension, heart disease, obstructive sleep apnea, and depression. Among the limited data published concerning the influence of bariatric surgical procedures on oral health, an increased incidence of dental caries, periodontal diseases, and tooth wear has been reported in patients after bariatric surgery.
Practical Implications: The oral health care practitioner familiar with the most common bariatric procedures performed in the United States and their mechanisms of action, risks, and benefits is in a position to provide guidance to patients on the nutritional and oral complications that can occur.
abstract_id: PUBMED:34234496
Oral Health Messiers: Diabetes Mellitus Relevance. This article aims to narrate the various oral complications in individuals suffering from diabetes mellitus. A Google search for "diabetes mellitus and oral complications" was done. The search was also carried out for "diabetes mellitus" and its oral complications individually. Diabetes mellitus is a chronic metabolic disorder that is a global epidemic and a common cause of morbidity and mortality in the world today. Currently, there are about 422 million cases of diabetes mellitus worldwide. Diabetic patients can develop different complications in the body such as retinopathy, neuropathy, nephropathy, and cardiovascular disease. Complications in the oral cavity have been observed in individuals suffering from diabetes mellitus. A study noted that more than 90% of diabetic patients suffered from oral complications. Other research has shown a greater prevalence of oral mucosal disorders in patients with diabetes mellitus than in the non-diabetic population: 45-88% in patients with type 2 diabetes compared to 38.3-45% in non-diabetic subjects, and 44.7% in type 1 diabetic individuals compared to 25% in the non-diabetic population. Oral complications in people with diabetes are periodontal disease, dental caries, oral infections, salivary dysfunction, taste dysfunction, delayed wound healing, tongue abnormalities, halitosis, and lichen planus. The high glucose level in saliva, poor neutrophil function, neuropathy, and small vessel damage contribute to oral complications in individuals with uncontrolled diabetes. Good oral health is imperative for healthy living. Oral complications cause deterioration of the quality of life in diabetic patients. Complications like periodontal disease, which has a bidirectional relationship with diabetes mellitus, even contribute to increased blood glucose levels in people with diabetes. This article intends to promote awareness regarding the oral health of diabetics and to stress the importance of maintaining proper oral hygiene, taking preventive measures, and ensuring early detection and appropriate management of oral complications in these patients through a multidisciplinary approach.
abstract_id: PUBMED:31057116
Knowledge and Awareness of Oral Manifestations of Diabetes Mellitus and Oral Health Assessment among Diabetes Mellitus Patients- A Cross Sectional Study. Background And Objectives: Diabetes mellitus has increased rapidly throughout the world. The objectives of our study were to assess the knowledge and awareness of oral manifestations of diabetes among type 2 diabetes mellitus patients and their risk of developing oral diseases due to complications associated with diabetes mellitus, and, at the same time, to perform an oral examination to detect these oral symptoms, if present, along with the recording of the Decayed Missing Filled Teeth (DMFT) index and the Community Periodontal Index (CPI).
Methodology: Structured questionnaires consisting of 12 different statements on the knowledge base of oral manifestations of diabetes mellitus were distributed to 447 Type 2 diabetes mellitus patients. Following this, an oral examination was performed, brushing and dental visit history were noted, and the CPI and DMFT indices were recorded in all the patients.
Results: Knowledge about the oral manifestations of diabetes mellitus was poor, with a mean score of 4.92 out of a possible 12. Among the study subjects, the average score of men was 4.42 while that of females was 5.41; when subjected to statistical analysis, this difference was highly significant (P value = 0.005). Subjects also showed significantly high DMFT (P value < 0.001) and CPI scores (P value = 0.270).
Conclusion: Our study concluded that there is a significant lack of knowledge about oral manifestations of diabetes mellitus among patients and hence steps have to be taken to increase their awareness through various outreach programs. All health professionals need to work together for promoting better oral health so that oral complications of diabetes can be brought under control.
abstract_id: PUBMED:23087863
Oral manifestations in type-2 diabetes and related complications. Background: Knowledge of the wide spectrum of the oral markers of diabetes is imperative as one frequently encounters individuals with undetected, untreated or poorly controlled diabetes.
Objectives: The objective was to study the oral manifestations in type 2 diabetes mellitus (DM) and to establish an association between oral manifestations and associated microvascular and macrovascular complications.
Materials And Methods: Fifty cases of DM with oral complications were selected. The control group comprised 50 age- and sex-matched diabetic patients without any oral complications.
Results: Oral manifestations in DM included periodontal disease in 34%, oral candidiasis in 24%, tooth loss in 24%, oral mucosal ulcers in 22%, taste impairment in 20%, xerostomia and salivary gland hypofunction in 14%, dental caries in 24%, and burning mouth sensation in 10% cases. Fasting [(FBG) (P = 0.003)] and postprandial blood glucose [(PPBG) (P = 0.0003)] levels were significantly higher among cases. The P values for neuropathy, retinopathy, nephropathy, cardiovascular disease, dyslipidemia, and sepsis were 0.0156, 0.0241, 0.68, 0.4047, 0.0278, and 0.3149, respectively, which were significant for neuropathy, retinopathy, and dyslipidemia.
Conclusions: Several oral complications are seen among diabetics. Association of oral markers in DM and microvascular complications suggests that there is a significant association between the two.
Answer: The available abstracts suggest that there is a relationship between glycemic control and oral health in patients with type 2 diabetes mellitus, but the evidence is not conclusive in predicting oral complications following invasive dental procedures based solely on fasting plasma glucose and glycated hemoglobin levels.
One study found that the frequency of clinical outcomes following invasive dental treatments was low and did not significantly differ between type 2 diabetic patients with adequate glycemic control and those with inadequate control, as well as non-diabetic patients. This suggests that fasting blood glucose or glycated HbA1c levels may not be directly predictive of clinical outcomes after such procedures (PUBMED:23644869).
However, other studies have shown that a lifestyle change plus dental care program can improve both glycemic and periodontal status in older patients with diabetes (PUBMED:25597411, PUBMED:24934646). This indicates that while fasting plasma glucose and glycated hemoglobin levels are important indicators of overall glycemic control, interventions that include lifestyle modifications and dental care can have a positive impact on oral health outcomes in diabetic patients.
A narrative review emphasized the importance of measuring HbA1c levels before invasive dental treatments to prevent complications in diabetes mellitus (PUBMED:36981651), suggesting that preoperative assessment of glycemic control is considered important in the dental management of diabetic patients.
In summary, while there is evidence that glycemic control is related to oral health, the prediction of oral complications following invasive dental procedures based on fasting plasma glucose and glycated hemoglobin levels is not clearly established. It is likely that a combination of factors, including glycemic control, lifestyle changes, and comprehensive dental care, contribute to the oral health outcomes in patients with type 2 diabetes mellitus. |
Instruction: Air embolism during liver procurement: an underestimated phenomenon?
Abstracts:
abstract_id: PUBMED:21168709
Air embolism during liver procurement: an underestimated phenomenon? A pilot experimental study. Background: Intrahepatic air embolism can occur during liver transplantation, jeopardizing the posttransplant outcome. Until now, the role of the procurement in the origin of intrahepatic air remains unclear; it might be underestimated. In this pilot study using magnetic resonance imaging (MRI), we observed a substantial amount of air trapped in porcine livers during multiorgan procurement. We quantified the amount of air, examining whether it could be reduced by avoiding direct contact of air with the lumen of the hepatic vasculature during procurement and back-table preparation.
Methods: Five livers (control group) were procured according to standard techniques for comparison with 6 livers (modified group) where air could not enter the livers due to clamping of the vasculature. MRI was performed during continuous machine perfusion (MP) preservation thereafter. We counted the number of black signal voids on T(2)*-weighted images, which were indicative of air bubbles within the hepatic contour. Additionally, an MRI contrast agent (gadolinium-diethylene triamine pentaacetic acid [Gd-DTPA]) was injected into the hepatic artery and circulated by MP. Insufficiently perfused areas with less contrast enhancement were analyzed quantitatively in T(1)-weighted images and expressed as the percentage of total liver volume.
Results: The images of the control livers showed more air bubbles compared with the modified group (45 ± 27 vs 6 ± 3; P = .004). The percentage of insufficiently perfused areas was higher among the control compared with the modified group (28.0 ± 15.8% vs 2.6 ± 4.6%; P = .047) on first-pass postcontrast T(1)-weighted images. After recirculating the contrast agent, insufficiently perfused areas showed similar localizations and contours within every liver.
Conclusion: These data suggested that a substantial amount of air enters into the hepatic microcirculation through direct contact of air with the hepatic vasculature during standard procurement and back-table preparation. Avoiding opening the hepatic vessels to air substantially reduced this phenomenon.
abstract_id: PUBMED:30008587
Hyperbaric Oxygen Therapy in Liver Diseases. Hyperbaric oxygen therapy (HBOT) is an efficient therapeutic option for improving the course of many diseases, especially hypoxia-related injuries, and has been clinically established as a widely used therapy for patients with carbon monoxide poisoning, decompression sickness, arterial gas embolism, problematic wounds, and so on. In the liver, most studies have positively evaluated HBOT as a potential therapeutic option for liver transplantation, acute liver injury, nonalcoholic steatohepatitis, fibrosis and cancer, and especially for hepatic artery thrombosis. This might be mainly attributed to the antioxidative and anti-inflammatory effects of HBOT. However, some controversies exist, possibly due to hyperbaric oxygen toxicity. This review summarizes the current understanding of the role of HBOT in liver diseases and hepatic regeneration. Future understanding of HBOT in clinical trials and of its in-depth mechanisms may contribute to the development of this novel adjuvant strategy for clinical therapy of liver diseases.
abstract_id: PUBMED:28217293
Application of hyperbaric oxygen in liver transplantation. In recent years, hyperbaric oxygen (HBO) has been used in the treatment of many conditions such as decompression sickness, arterial gas embolism, carbon monoxide poisoning, soft tissue infection, refractory osteomyelitis, and problematic wounds, but little is known about its application in liver transplantation. Although several studies have been conducted to investigate the protective effects of HBO on liver transplantation and liver preservation, there are still some controversies on this issue, especially regarding its immunomodulatory effect. In this short review, we briefly summarize the findings supporting the application of HBO during liver transplantation (including donors and recipients).
abstract_id: PUBMED:32652295
Laparoscopic major liver resections: Current standards. Laparoscopic liver resection was initially slow to be adopted in the surgical arena, as there were major barriers including the fear of gas embolism, the risk of excessive blood loss from the inability to control bleeding vessels effectively, suboptimal surgical instruments for performing major liver resection, and concerns about the oncological safety of the procedure. However, it has come a long way since the early 1990s, when the first successful laparoscopic liver resection was performed, spurring liver surgeons worldwide to start exploring the roles of laparoscopy in major liver resections. To date, more than 9000 cases have been reported in the literature, and the numbers continue to soar as the hepatobiliary surgical communities quickly learn and apply this technique in performing major liver resection. A large body of evidence in the literature shows that laparoscopic major liver resection can confer improved short-term outcomes in terms of lower operative morbidity, less operative blood loss, less postoperative pain, and faster recovery with a shorter length of hospitalization. On the other hand, the long-term and oncological outcomes of this approach are not compromised, with comparable R0 resection and survival rates. Many innovations in laparoscopic major hepatectomies for complex operations have also been reported. In this article, we highlight the journey of laparoscopic major hepatectomies, summarize the technical advancements and lessons learnt, and review the current standards of outcomes for this procedure.
abstract_id: PUBMED:376395
Fifteen years of clinical liver transplantation. Liver transplantation in humans was first attempted more than 15 yr ago. The 1-yr survival has slowly improved until it has now reached about 50%. In our experience, 46 patients have lived for at least 1 yr, with the longest survival being 9 yr. The high acute mortality in early trials was due in many cases to technical and management errors and to the use of damaged organs. With elimination of such factors, survival increased. Further improvements will depend upon better immunosuppression. Orthotopic liver transplantation (liver replacement) is the preferred operation in most cases, but placement of an extra liver (auxiliary transplantation) may have a role under special circumstances.
abstract_id: PUBMED:34131863
Standardized Technique of Selective Left Liver Vascular Exclusion During Laparoscopic Liver Resection for Benign and Malignant Tumors. Background: Tumors located close to major hepatic veins pose a technical challenge to standard laparoscopic liver resection. Hepatic outflow occlusion may reduce the risks of bleeding from hepatic vein and gas embolism. The aim of this study was to detail our standardized laparoscopic approach for a safe extrahepatic control of the common trunk of middle and left hepatic veins during laparoscopic liver resection and to assess its feasibility in patients with tumors located in both right and left lobes of the liver.
Methods: Data of 25 consecutive patients who underwent laparoscopic liver resection with extrahepatic control of the common trunk of middle and left hepatic veins were reviewed.
Results: All patients underwent primary hepatectomy. The vast majority (84%) of patients had malignant tumors. The control of the common trunk of middle and left hepatic veins was achieved in 96% of patients. There were 14 (56%) major hepatectomies and 11 (44%) minor hepatectomies. Some form of vascular clamping was performed in 23 (62%) patients: Pringle maneuver in 17 (median time = 45 min; range, 10-109) and selective vascular exclusion of the liver in 6 patients (median time = 30 min; range, 15-94). The median duration of operation was 254 min (range, 70-441). There was one case (4%) of gas embolism but without any complications during the postoperative course. Conversion to open surgery was performed in 2 (7.7%) patients: 1 for oncologic reason and 1 for non-progression during the transection plane. Perioperative blood transfusion rate was nil. The overall morbidity rate was 24%.
Conclusions: The laparoscopic approach for an extrahepatic control of the common trunk of middle and left hepatic veins is reproducible, safe, and effective, and can be applied during laparoscopic liver resection for tumors close to major hepatic veins.
abstract_id: PUBMED:194493
Vascular exclusion in surgery of the liver: experimental basis, technic, and clinical results. Because liver exeresis and surgery for liver trauma still carry a high mortality from hemorrhage or air embolism due to suprahepatic or vena cava injuries, temporary vascular exclusion of the liver is considered. Based on hemodynamic, biologic, and anatomopathologic experimental and clinical data, a catheter was designed that allows isolation of the retro- and suprahepatic portions of the inferior vena cava and consequently vascular exclusion of the liver for 30 minutes. We report on seven patients in whom this technic was utilized (4 requiring hepatectomy and 3 traumatized), confirming the utility, ease, and efficacy of this method.
abstract_id: PUBMED:34408914
Uncommon Occurrence of an Air Embolism during the Preanhepatic Phase of an Orthotopic Liver Transplant. Vascular air embolism (VAE) during liver transplantation usually occurs during the dissection phase of the procedure or during liver reperfusion. If this phenomenon occurs, it can cause significant cardiovascular, pulmonary, and neurological complications. Prompt identification of VAE is essential, and the surgeon should be immediately notified. The mainstay treatment is identification and rectification of the source of the air embolus, hemodynamic support, and prevention of further air entrainment. This case report describes the occurrence of a pulmonary air embolism during the preanhepatic phase of an orthotopic liver transplant.
abstract_id: PUBMED:2916865
Vascular occlusions for liver resections. Operative management and tolerance to hepatic ischemia: 142 cases. The intra- and early postoperative courses of 142 consecutive patients who underwent liver resections using vascular occlusions to reduce bleeding were reviewed. In 127 patients, the remnant liver parenchyma was normal, and 15 patients had liver cirrhosis. Eighty-five patients underwent major liver resections: right, extended right, or left lobectomies. Portal triad clamping (PTC) was used alone in 107 cases. Complete hepatic vascular exclusion (HVE) combining PTC and occlusion of the inferior vena cava below and above the liver was used for 35 major liver resections. These 35 patients had large or posterior liver tumors, and HVE was used to reduce the risks of massive bleeding or air embolism caused by an accidental tear of the vena cava or a hepatic vein. Duration of normothermic liver ischemia was 32.3 +/- 1.2 minutes (mean +/- SEM) and ranged from 8 to 90 minutes. Amount of blood transfusion was 5.5 +/- 0.5 (mean +/- SEM) units of packed red blood cells. There were eight operative deaths (5.6%). Overall, postoperative complications occurred in 46 patients (32%). The patients who experienced complications after surgery had received more blood transfusion than those with an uneventful postoperative course (p less than 0.001). The length of postoperative hospital stay was also correlated with the amount of blood transfused during surgery (p less than 0.001). On the other hand, there was no correlation between the durations of liver ischemia of up to 90 minutes and the lengths of postoperative hospital stay. The longest periods of ischemia were not associated with increased rates of postoperative complications, liver failures, or deaths. There was no difference in mortality or morbidity after major liver resections performed with the use of HVE as compared with major liver resections carried out with PTC alone, although the lesions were larger in the former group. It is concluded that the main priority during liver resections is to reduce operative bleeding. Vascular occlusions aim at achieving this goal and can be extended safely for up to 60 minutes.
abstract_id: PUBMED:20463181
Gas embolization of the liver in a rat model of rapid decompression. Occurrence of liver gas embolism after rapid decompression was assessed in 31 female rats that were decompressed in 12 min after 42 min of compression at 7 ATA (protocol A). Sixteen rats died after decompression (group I). Of the surviving rats, seven were killed at 3 h (group II), and eight at 24 h (group III). In group I, bubbles were visible in the right heart, aortic arch, liver, and mesenteric veins and on the intestinal surface. Histology showed perilobular microcavities in sinusoids, interstitial spaces, and hepatocytes. In group II, liver gas was visible in two rats. Perilobular vacuolization and significant plasma aminotransferase increase were present. In group III, liver edema was evident at gross examination in all cases. Histology showed perilobular cell swelling, vacuolization, or hydropic degeneration. Compared with basal, enzymatic markers of liver damage increased significantly. An additional 14 rats were decompressed twice (protocol B). Overall mortality was 93%. In addition to diffuse hydropic degeneration, centrilobular necrosis was frequently observed after the second decompression. Additionally, 10 rats were exposed to three decompression sessions (protocol C) with doubled decompression time. Their mortality rate decreased to 20%, but enzymatic markers still increased in surviving rats compared with predecompression, and perilobular cell swelling and vacuolization were present in five rats. Study challenges were 1) liver is not part of the pathophysiology of decompression in the existing paradigm, and 2) although significant cellular necrosis was observed in few animals, zonal or diffuse hepatocellular damage associated with liver dysfunction was frequently demonstrated. Liver participation in human decompression sickness should be looked for and clinically evaluated.
Answer: Air embolism during liver procurement is indeed an underestimated phenomenon. A pilot experimental study using magnetic resonance imaging (MRI) observed a substantial amount of air trapped in porcine livers during multiorgan procurement. The study quantified the amount of air and examined whether it could be reduced by avoiding direct contact of air with the lumen of the hepatic vasculature during procurement and back-table preparation. The results showed that the control livers, which were procured according to standard techniques, had more air bubbles compared to the modified group where air could not enter into the livers due to clamping of the vasculature. The percentage of insufficiently perfused areas was also higher among the control group compared to the modified group. These findings suggest that a significant amount of air enters the hepatic microcirculation through direct contact with the hepatic vasculature during standard procurement and back-table preparation, and that avoiding opening the hepatic vessels to air substantially reduced this phenomenon (PUBMED:21168709).
This issue is clinically relevant as air embolism can cause significant cardiovascular, pulmonary, and neurological complications during liver transplantation procedures (PUBMED:34408914). The occurrence of vascular air embolism (VAE) during liver transplantation is typically associated with the dissection phase of the procedure or during liver reperfusion, and prompt identification and treatment are essential to prevent further complications (PUBMED:34408914).
The importance of managing air embolism is also highlighted in the context of laparoscopic liver resections, where the fear of gas embolism was initially one of the major barriers to the adoption of this surgical technique (PUBMED:32652295). However, with advancements in surgical techniques and understanding, the risks of air embolism can be mitigated, as demonstrated by a study detailing a standardized laparoscopic approach for extrahepatic control of the common trunk of middle and left hepatic veins during laparoscopic liver resection, which reported only one case of gas embolism without postoperative complications (PUBMED:34131863).
Overall, the phenomenon of air embolism during liver procurement and transplantation is not to be underestimated, and careful surgical techniques are required to minimize the risk of air entering the hepatic vasculature. |
Instruction: Could our pretest probabilities become evidence based?
Abstracts:
abstract_id: PUBMED:12648252
Could our pretest probabilities become evidence based? A prospective survey of hospital practice. Objective: We sought to measure the proportion of patients on our clinical service who presented with clinical problems for which research evidence was available to inform estimates of pretest probability. We also aimed to discern whether any of this evidence was of sufficient quality that we would want to use it for clinical decision making.
Design: Prospective, consecutive case series and literature survey.
Setting: Inpatient medical service of a university-affiliated Veterans' Affairs hospital in south Texas.
Patients: Patients admitted during the 3 study months for diagnostic evaluation.
Measurements: Patients' active clinical problems were identified prospectively and recorded at the time of discharge, transfer, or death. We electronically searched medline and hand-searched bibliographies to find citations that reported research evidence about the frequency of underlying diseases that cause these clinical problems. We critically appraised selected citations and ranked them on a hierarchy of evidence.
Results: We admitted 122 patients for diagnostic evaluation, in whom we identified 45 different principal clinical problems. For 35 of the 45 problems (78%; 95% confidence interval [95% CI], 66% to 90%), we found citations that qualified as disease probability evidence. Thus, 111 of our 122 patients (91%; 95% CI, 86% to 96%) had clinical problems for which evidence was available in the medical literature.
Conclusions: During 3 months on our hospital medicine service, almost all of the patients admitted for diagnostic evaluation had clinical problems for which evidence is available to guide our estimates of pretest probability. If confirmed by others, these data suggest that clinicians' pretest probabilities could become evidence based.
abstract_id: PUBMED:15175211
Pretest probability estimates: a pitfall to the clinical utility of evidence-based medicine? Objectives: The Bayesian application of likelihood ratios has become incorporated into evidence-based medicine (EBM). This approach uses clinicians' pretest estimates of disease along with the results of diagnostic tests to generate individualized posttest disease probabilities for a given patient. To date, there is minimum scientific validation for the clinical application of this approach. This study is designed to evaluate variability in the initial step of this process, clinicians' estimates of pretest probability of disease, to assess whether this approach can be expected to yield consistent posttest disease estimates.
Methods: This cross-sectional cohort study was conducted at an urban county teaching hospital by using a sample of emergency and internal medicine residents and faculty, as well as emergency department (ED) midlevel practitioners. Participants read clinical vignettes designed to raise consideration for common ED disorders and were asked to estimate the likelihood of the suggested diagnosis based on the history and physical examination findings alone. No information about laboratory results or imaging studies was provided.
Results: Mean pretest probability estimates of disease ranged from 42% (95% confidence interval [95% CI] = 36.6% to 47.4%) to 77% (95% CI = 72.9% to 81.1%). The smallest difference in pretest probability magnitude for a single vignette was 70% (range 30-100%; interquartile range [IQR] 64-80%), whereas the largest was 95% (range 3-98%; IQR 30-60%).
Conclusions: Wide variability in clinicians' pretest probability estimates of disease may present a possible concern about decision-making models based on Bayes' theorem, because it may ultimately yield inconsistent posttest disease estimates.
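To make the concern concrete, the Bayesian step referred to above combines the clinician's pretest probability with a likelihood ratio in odds form: posttest odds = pretest odds × likelihood ratio. The minimal Python sketch below is purely illustrative; the likelihood ratio of 10 and the sample pretest values are assumptions for the example and are not taken from the study.

# Odds form of Bayes' theorem: convert probability to odds, apply the likelihood ratio, convert back.
def posttest_probability(pretest_p, likelihood_ratio):
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

LR_POSITIVE = 10  # assumed positive likelihood ratio, for illustration only
for pretest in (0.03, 0.30, 0.60, 0.98):  # spread comparable to the single-vignette range reported above
    print(f"pretest {pretest:.0%} -> posttest {posttest_probability(pretest, LR_POSITIVE):.0%}")

Under these assumptions, a 3% pretest estimate yields a posttest probability of roughly 24%, whereas a 98% estimate yields over 99%, which illustrates how the wide variability in pretest estimates reported above can propagate into inconsistent posttest conclusions.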
abstract_id: PUBMED:22588733
Evidence-based medicine in otolaryngology, Part 3: everyday probabilities: diagnostic tests with binary results. Recent studies have demonstrated that the majority of physicians cannot accurately determine the predictive values of diagnostic tests. Physicians must understand the predictive probabilities associated with diagnostic testing in order to convey accurate information to patients, a key aspect of evidence-based practice. While sensitivity and specificity are widely understood, predictive values require a further understanding of conditional probabilities, pretest probabilities, and the prevalence of disease. Therefore, this third installment of the series "Evidence-Based Medicine in Otolaryngology" focuses on understanding the probabilities needed to accurately convey the results of dichotomous diagnostic tests in everyday practice.
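Since this abstract turns on how predictive values depend on pretest probability, a brief worked sketch may help; the sensitivity, specificity, and prevalence figures below are assumed for illustration and do not come from the abstract.

# Positive and negative predictive values from sensitivity, specificity, and prevalence (pretest probability).
def predictive_values(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

for prevalence in (0.01, 0.20, 0.50):  # assumed test: 90% sensitivity, 90% specificity
    ppv, npv = predictive_values(0.90, 0.90, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")

With these assumed values, the same test has a positive predictive value of about 8% at 1% prevalence but about 69% at 20% prevalence, which is exactly why conveying test results accurately requires attention to pretest probability and not only to sensitivity and specificity.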
abstract_id: PUBMED:32299367
Data sources and methods used to determine pretest probabilities in a cohort of Cochrane diagnostic test accuracy reviews. Background: A pretest probability must be selected to calculate data to help clinicians, guideline boards and policy makers interpret diagnostic accuracy parameters. When multiple analyses for the same target condition are compared, identical pretest probabilities might be selected to facilitate the comparison. Some pretest probabilities may lead to exaggerations of the patient harms or benefits, and guidance on how and why to select a specific pretest probability is minimally described. Therefore, the aim of this study was to assess the data sources and methods used in Cochrane diagnostic test accuracy (DTA) reviews for determining pretest probabilities to facilitate the interpretation of DTA parameters. A secondary aim was to assess the use of identical pretest probabilities to compare multiple meta-analyses within the same target condition.
Methods: Cochrane DTA reviews presenting at least one meta-analytic estimate of the sensitivity and/or specificity as a primary analysis published between 2008 and January 2018 were included. Study selection and data extraction were performed by one author and checked by other authors. Observed data sources (e.g. studies in the review, or external sources) and methods to select pretest probabilities (e.g. median) were categorized.
Results: Fifty-nine DTA reviews were included, comprising of 308 meta-analyses. A pretest probability was used in 148 analyses. Authors used included studies in the DTA review, external sources, and author consensus as data sources for the pretest probability. Measures of central tendency with or without a measure of dispersion were used to determine the pretest probabilities, with the median most commonly used. Thirty-two target conditions had at least one identical pretest probability for all of the meta-analyses within their target condition. About half of the used identical pretest probabilities were inside the prevalence ranges from all analyses within a target condition.
Conclusions: Multiple sources and methods were used to determine (identical) pretest probabilities in Cochrane DTA reviews. Indirectness and severity of downstream consequences may influence the acceptability of the certainty in calculated data with pretest probabilities. Consider: whether to present normalized frequencies, the influence of pretest probabilities on normalized frequencies, and whether to use identical pretest probabilities for meta-analyses in a target condition.
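As a rough illustration of the methods described above (the prevalence and accuracy figures are invented for the example and are not taken from any Cochrane review), a pretest probability chosen as the median prevalence of the included studies can be used to express accuracy as normalized frequencies per 1000 patients tested:

from statistics import median

study_prevalences = [0.08, 0.12, 0.15, 0.22, 0.35]  # hypothetical prevalences from included studies
pretest = median(study_prevalences)                  # median, the measure most often used per the review above

sensitivity, specificity = 0.85, 0.92                # hypothetical summary accuracy estimates
diseased = pretest * 1000
non_diseased = 1000 - diseased
print(f"pretest probability (median prevalence): {pretest:.0%}")
print(f"missed cases (false negatives) per 1000 tested: {(1 - sensitivity) * diseased:.0f}")
print(f"false alarms (false positives) per 1000 tested: {(1 - specificity) * non_diseased:.0f}")

Choosing a different pretest probability (for example, the highest observed prevalence rather than the median) would scale these normalized frequencies up or down, which is why the review examined how such values were selected and whether identical values were used across meta-analyses of the same target condition.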
abstract_id: PUBMED:18491194
Tips for teachers of evidence-based medicine: clinical prediction rules (CPRs) and estimating pretest probability. Background: Clinical prediction rules (CPR) are tools that clinicians can use to predict the most likely diagnosis, prognosis, or response to treatment in a patient based on individual characteristics. CPRs attempt to standardize, simplify, and increase the accuracy of clinicians' diagnostic and prognostic assessments. The teaching tips series is designed to give teachers advice and materials they can use to attain specific educational objectives.
Educational Objectives: In this article, we present 3 teaching tips aimed at helping clinical learners use clinical prediction rules and to more accurately assess pretest probability in every day practice. The first tip is designed to demonstrate variability in physician estimation of pretest probability. The second tip demonstrates how the estimate of pretest probability influences the interpretation of diagnostic tests and patient management. The third tip exposes learners to various examples and different types of Clinical Prediction Rules (CPR) and how to apply them in practice. PILOT TESTING: We field tested all 3 tips with 16 learners, a mix of interns and senior residents. Teacher preparatory time was approximately 2 hours. The field test utilized a board and a data projector; 3 handouts were prepared. The tips were felt to be clear and the educational objectives reached. Potential teaching pitfalls were identified.
Conclusion: Teaching with these tips will help physicians appreciate the importance of applying evidence to their every day decisions. In 2 or 3 short teaching sessions, clinicians can also become familiar with the use of CPRs in applying evidence consistently in everyday practice.
abstract_id: PUBMED:27602009
Experience-Based Probabilities Modulate Expectations in a Gender-Coded Artificial Language. The current study combines artificial language learning with visual world eyetracking to investigate acquisition of representations associating spoken words and visual referents using morphologically complex pseudowords. Pseudowords were constructed to consistently encode referential gender by means of suffixation for a set of imaginary figures that could be either male or female. During training, the frequency of exposure to pseudowords and their imaginary figure referents were manipulated such that a given word and its referent would be more likely to occur in either the masculine form or the feminine form, or both forms would be equally likely. Results show that these experience-based probabilities affect the formation of new representations to the extent that participants were faster at recognizing a referent whose gender was consistent with the induced expectation than a referent whose gender was inconsistent with this expectation. Disambiguating gender information available from the suffix did not mask the induced expectations. Eyetracking data provide additional evidence that such expectations surface during online lexical processing. Taken together, these findings indicate that experience-based information is accessible during the earliest stages of processing, and are consistent with the view that language comprehension depends on the activation of perceptual memory traces.
abstract_id: PUBMED:30512826
Birth of evidence-based medicine. A new way of thinking emerged from North American medicine in the 1990s, well summarized by Gordon Guyatt in the JAMA paper "Evidence-based medicine: a new way of teaching medicine". In the past, European and French medicine were the most influential, through the discussion of vaccines by Bernoulli and d'Alembert, the introduction of statistics by Laplace, the objective analysis of therapeutic results by Pierre-Alexandre Louis, and the "experimental" method of Claude Bernard. Koch's postulates and Bradford Hill's causality criteria were also major contributions to making medicine more rational. The critical assessment of the scientific literature (and why not of literature and art?) was promoted at the right time, when the number of articles of unequal quality exploded, along with the promotion of practices based on tradition, authoritarianism, and pathophysiological hypotheses. This paper provides an analysis of this major change in medical reasoning and of its necessary integration within a new world of accelerated and disputable communication.
abstract_id: PUBMED:30512827
Evidence-based medicine: basic principles. The clinical judgment of health professionals improves with experience, but they need to keep their medical knowledge up to date. Yet practitioners are inundated by an exponentially increasing biomedical literature, whose quality is questioned. Evidence-based medicine (EBM) provides a useful framework by integrating the best available clinical evidence from systematic research with individual clinical expertise and the patient's values and situation for the sake of clinical decision making. The practice of EBM involves five steps. The first one starts from a real case and aims at formulating a simple clinical question structured in the « PICO » format: patient (P) profile, intervention (I), comparator (C) and outcome (O) of interest. The second step consists of locating the available evidence through a literature search focusing primarily on pre-evaluated articles or syntheses (Cochrane). The third step is a critical appraisal of the best available evidence for validity, importance and usefulness of the results for the current patient. The fourth step combines the evidence with clinical judgment and the patient's values and preferences. Once the exercise is completed for a specific patient, the fifth step consists of rating one's performance to stay up to date. Clinical questions can be formulated in 4 main areas of case management: diagnosis, treatment, prognosis or etiology. This article presents an example in the area of treatment. The EBM approach is important for the development of practitioners' abilities to find, critically appraise and incorporate the best scientific evidence for the benefit of their patients.
abstract_id: PUBMED:17276026
Evidence-based medicine: methods and criticisms. The evaluation of professional practices is now obligatory. Evidence-Based Medicine, defined as the use of current best evidence in making well-informed decisions, could play a major role for that purpose, as it allows an objective evaluation of clinical practices through the recommendations it gives rise to. The aim of this study was to describe the EBM decision procedure with examples in anaesthesia, and to present its limits, particularly due to the problems of hierarchies of evidence.
abstract_id: PUBMED:30516913
Evidence-based medicine criticism. Evidence-based medicine is now considered the rule in the practice of clinical medicine. However, it involves many issues and limitations, leading to numerous and severe pitfalls. Most are described and discussed in this review.
Answer: Yes, pretest probabilities could become evidence based. A study conducted on an inpatient medical service of a university-affiliated Veterans' Affairs hospital in south Texas found that for 78% of the principal clinical problems identified in patients admitted for diagnostic evaluation, there were citations in the medical literature that qualified as disease probability evidence. This meant that 91% of the patients had clinical problems for which evidence was available to guide estimates of pretest probability (PUBMED:12648252). This suggests that clinicians' pretest probabilities could indeed become evidence based if such data is confirmed by further studies.
However, there are challenges to this approach. Another study highlighted the wide variability in clinicians' pretest probability estimates of disease, which could yield inconsistent posttest disease estimates when applying Bayesian decision-making models (PUBMED:15175211). This variability may present a concern for the clinical utility of evidence-based medicine (EBM) as it relies on the accurate estimation of pretest probabilities.
Moreover, it is important for physicians to understand predictive probabilities associated with diagnostic testing, which requires knowledge of conditional probabilities, pretest probabilities, and disease prevalence (PUBMED:22588733). Cochrane diagnostic test accuracy reviews have used multiple sources and methods to determine pretest probabilities, indicating that there is not a single standardized approach to selecting these probabilities (PUBMED:32299367).
Teaching tips have been developed to help clinicians use clinical prediction rules and more accurately assess pretest probability in everyday practice, suggesting that with proper education and tools, the application of evidence-based pretest probabilities is feasible (PUBMED:18491194).
In conclusion, while there is potential for pretest probabilities to become evidence based, variability in clinician estimates and the complexity of determining accurate probabilities present challenges that need to be addressed through standardized methods, education, and further research. |
Instruction: Open versus closed enteral nutrition systems for critically ill adults: is there a difference?
Abstracts:
abstract_id: PUBMED:22569619
Open versus closed enteral nutrition systems for critically ill adults: is there a difference? Objective: To compare the volume, total calories, and protein received by critically ill patients between open and closed enteral nutrition (EN) systems and identify the main reasons for EN discontinuation.
Methods: A cohort study in which adult patients admitted to the intensive care unit (ICU) were followed-up in two periods: throughout November 2009 with all patients (n = 85) receiving EN using the open system (OS group); and from October 2010 to April 2011 with patients (n = 170) receiving EN using the closed system (CS group). Parametric and nonparametric tests were used to compare the variables, taking into account their distribution.
Results: Demographic and clinical characteristics were similar in both groups. There were minor differences between groups that, although statistically significant, were of limited clinical importance: more calories/kg were prescribed to the OS group (p < 0.001), and a higher volume (mL/kg, p = 0.002) and more protein (g/kg, p = 0.001) were prescribed to the CS group. Fasting, enteral feeding or gastrointestinal problems, and the performance of procedures and ICU routines, occurring at different frequencies between groups (p = 0.001), led to the discontinuation of EN.
Conclusion: There was no clinically relevant difference between the volume, energy, and protein intake of EN prescribed and administered in OS and CS groups. Clinical instability, procedures, and ICU routines led to EN discontinuation in both groups.
abstract_id: PUBMED:33024380
Safety of Enteral Nutrition Practices: Overcoming the Contamination Challenges. Enteral nutrition (EN) has a host of benefits to offer critically ill patients and is the preferred route of feeding over parenteral nutrition. Along with the many outcome benefits of enteral feeding, however, comes the potential for adverse effects, including gastrointestinal (GI) disturbances mainly attributed to contaminated feeds. Currently, EN is practiced using blenderized/kitchen-prepared feeds or scientifically developed commercial feeds. Commercial feeds may be divided, according to their formulation, into ready-to-mix powder formulas and ready-to-hang sterile liquid formulas. This review takes a holistic view of the potential sterility of EN from preparation to patient delivery. Sterility issues may result in clinical complications, and process-related errors therefore need to be eliminated in hospital practice, since immunocompromised intensive care unit patients are at high risk of infection. The review discusses the various EN practices, the risk of contamination, and ways to overcome it for better nutrition delivery to patients. Among the various types of enteral formulas and delivery methods, this article summarizes the benefits and risks associated with each delivery system using the currently available literature.
How To Cite This Article: Sinha S, Lath G, Rao S. Safety of Enteral Nutrition Practices: Overcoming the Contamination Challenges. Indian J Crit Care Med 2020;24(8):709-712.
abstract_id: PUBMED:31684690
Early enteral nutrition (within 48 hours) versus delayed enteral nutrition (after 48 hours) with or without supplemental parenteral nutrition in critically ill adults. Background: Early enteral nutrition support (within 48 hours of admission or injury) is frequently recommended for the management of patients in intensive care units (ICU). Early enteral nutrition is recommended in many clinical practice guidelines, although there appears to be a lack of evidence for its use and benefit.
Objectives: To evaluate the efficacy and safety of early enteral nutrition (initiated within 48 hours of initial injury or ICU admission) versus delayed enteral nutrition (initiated later than 48 hours after initial injury or ICU admission), with or without supplemental parenteral nutrition, in critically ill adults.
Search Methods: We searched CENTRAL (2019, Issue 4), MEDLINE Ovid (1946 to April 2019), Embase Ovid SP (1974 to April 2019), CINAHL EBSCO (1982 to April 2019), and ISI Web of Science (1945 to April 2019). We also searched Turning Research Into Practice (TRIP), trial registers (ClinicalTrials.gov, ISRCTN registry), and scientific conference reports, including the American Society for Parenteral and Enteral Nutrition and the European Society for Clinical Nutrition and Metabolism. We applied no restrictions by language or publication status.
Selection Criteria: We included all randomized controlled trials (RCTs) that compared early versus delayed enteral nutrition, with or without supplemental parenteral nutrition, in adults who were in the ICU for longer than 72 hours. This included individuals admitted for medical, surgical, and trauma diagnoses, and who required any type of enteral nutrition.
Data Collection And Analysis: Two review authors extracted study data and assessed the risk of bias in the included studies. We expressed results as risk ratios (RR) for dichotomous data, and as mean differences (MD) for continuous data, both with 95% confidence intervals (CI). We assessed the certainty of the evidence using GRADE.
Main Results: We included seven RCTs with a total of 345 participants. Outcome data were limited, and we judged many trials to have an unclear risk of bias in several domains.
Early versus delayed enteral nutrition: Six trials (318 participants) assessed early versus delayed enteral nutrition in general, medical, and trauma ICUs in the USA, Australia, Greece, India, and Russia.
Primary outcomes: Five studies (259 participants) measured mortality. It is uncertain whether early enteral nutrition affects the risk of mortality within 30 days (RR 1.00, 95% CI 0.16 to 6.38; 1 study, 38 participants; very low-quality evidence). Four studies (221 participants) reported mortality without describing the timeframe; we did not pool these results. None of the studies reported a clear difference in mortality between groups. Three studies (156 participants) reported infectious complications. We were unable to pool the results due to unreported data and substantial clinical heterogeneity. The results were inconsistent across studies. One trial measured feed intolerance or gastrointestinal complications; it is uncertain whether early enteral nutrition affects this outcome (RR 0.84, 95% CI 0.35 to 2.01; 59 participants; very low-quality evidence).
Secondary outcomes: One trial assessed hospital length of stay and reported a longer stay in the early enteral group (median 15 days (interquartile range (IQR) 9.5 to 20) versus 12 days (IQR 7.5 to 15); P = 0.05; 59 participants; very low-quality evidence). Three studies (125 participants) reported the duration of mechanical ventilation. We did not pool the results due to clinical and statistical heterogeneity. The results were inconsistent across studies. It is uncertain whether early enteral nutrition affects the risk of pneumonia (RR 0.77, 95% CI 0.55 to 1.06; 4 studies, 192 participants; very low-quality evidence).
Early enteral nutrition with supplemental parenteral nutrition versus delayed enteral nutrition with supplemental parenteral nutrition: We identified one trial in a burn ICU in the USA (27 participants).
Primary outcomes: It is uncertain whether early enteral nutrition with supplemental parenteral nutrition affects the risk of mortality (RR 0.74, 95% CI 0.25 to 2.18; very low-quality evidence), or infectious complications (MD 0.00, 95% CI -1.94 to 1.94; very low-quality evidence). There were no data available for feed intolerance or gastrointestinal complications.
Secondary outcomes: It is uncertain whether early enteral nutrition with supplemental parenteral nutrition reduces the duration of mechanical ventilation (MD 9.00, 95% CI -10.99 to 28.99; very low-quality evidence). There were no data available for hospital length of stay or pneumonia.
Authors' Conclusions: Due to very low-quality evidence, we are uncertain whether early enteral nutrition, compared with delayed enteral nutrition, affects the risk of mortality within 30 days, feed intolerance or gastrointestinal complications, or pneumonia. Due to very low-quality evidence, we are uncertain if early enteral nutrition with supplemental parenteral nutrition compared with delayed enteral nutrition with supplemental parenteral nutrition reduces mortality, infectious complications, or duration of mechanical ventilation. There is currently insufficient evidence; there is a need for large, multicentred studies with rigorous methodology, which measure important clinical outcomes.
abstract_id: PUBMED:30547663
Critically ill patient enteral nutrition in the 21st century. In order to anticipate the evolution and near future of enteral nutrition in critically ill adult patients, we reviewed current clinical practice in the light of the latest guidelines for the provision and assessment of enteral nutrition support therapy. Having reviewed the suggested guideline recommendations, we discuss the major recently published studies concerning these guidelines. Finally, we comment on several areas of uncertainty, highlighting priorities for clinical research in the near future. These areas of uncertainty were as follows: administration methods of enteral nutrition, gastric residual volume monitoring, other aspects of gastrointestinal tolerance, protein requirements, glycemic monitoring and diabetes-specific diets, immune-modulating formulas, permissive underfeeding or trophic enteral nutrition, supplementary nutrition, and muscle wasting.
abstract_id: PUBMED:15813392
Early enteral nutrition in the critically ill patient. Enteral nutrition has been shown to be a useful and safe method of nourishing critically ill patients admitted to the intensive care unit. Although the length of time a severely ill patient can tolerate without nutrition is unknown, accelerated catabolism and fasting may be deleterious in these patients, and the most common recommendation is to start artificial nutrition when a fasting period of longer than seven days is foreseen. At the experimental level, the advantages of enteral nutrition over parenteral nutrition are evident, since the use of nutritional substrates via the gastrointestinal tract improves the local and systemic immune response and maintains the barrier functions of the gut. Clinical studies have demonstrated that early enteral nutrition administered within the first 48 hours of admission decreases the incidence of nosocomial infections in these patients, but not mortality, with the exception of special groups of patients, particularly surgical ones. The major drawbacks of enteral nutrition are digestive intolerance and the need for a transpyloric approach when there is gastroparesis. Its efficacy is also questioned when the patient has tissue ischemia. For early enteral nutrition to be effective, a treatment strategy must be implemented that ranges from simple measures, such as raising the head of the bed, to more sophisticated ones, such as the transpyloric approach or the use of nutrients with immunomodulatory capabilities. To date, early enteral nutrition is the best method of nutritional support in this kind of patient, provided that it is individualized according to each patient's clinical status and delivered within an adequate therapeutic strategy.
abstract_id: PUBMED:35268095
The Effects of Enteral Nutrition in Critically Ill Patients with COVID-19: A Systematic Review and Meta-Analysis. Background: Patients who are critically ill with COVID-19 could have impaired nutrient absorption due to disruption of the normal intestinal mucosa. They are often in a state of high inflammation, increased stress and catabolism as well as a significant increase in energy and protein requirements. Therefore, timely enteral nutrition support and the provision of optimal nutrients are essential in preventing malnutrition in these patients. Aim: This review aims to evaluate the effects of enteral nutrition in critically ill patients with COVID-19. Method: This systematic review and meta-analysis was conducted based on the preferred reporting items for systematic review and meta-Analysis framework and PICO. Searches were conducted in databases, including EMBASE, Health Research databases and Google Scholar. Searches were conducted from database inception until 3 February 2022. The reference lists of articles were also searched for relevant articles. Results: Seven articles were included in the systematic review, and four articles were included in the meta-analysis. Two distinct areas were identified from the results of the systematic review and meta-analysis: the impact of enteral nutrition and gastrointestinal intolerance associated with enteral nutrition. The impact of enteral nutrition was further sub-divided into early enteral nutrition versus delayed enteral nutrition and enteral nutrition versus parenteral nutrition. The results of the meta-analysis of the effects of enteral nutrition in critically ill patients with COVID-19 showed that, overall, enteral nutrition was effective in significantly reducing the risk of mortality in these patients compared with the control with a risk ratio of 0.89 (95% CI, 0.79, 0.99, p = 0.04). Following sub-group analysis, the early enteral nutrition group also showed a significant reduction in the risk of mortality with a risk ratio of 0.89 (95% CI, 0.79, 1.00, p = 0.05). The Relative Risk Reduction (RRR) of mortality in patients with COVID-19 by early enteral nutrition was 11%. There was a significant reduction in the Sequential Organ Failure Assessment (SOFA) score in the early enteral nutrition group compared with the delayed enteral nutrition group. There was no significant difference between enteral nutrition and parenteral nutrition in relation to mortality (RR = 0.87; 95% CI, 0.59, 1.28, p = 0.48). Concerning the length of hospital stay, length of ICU stay and days on mechanical ventilation, while there were reductions in the number of days in the enteral nutrition group compared to the control (delayed enteral nutrition or parenteral nutrition), the differences were not significant (p > 0.05). Conclusion: The results showed that early enteral nutrition significantly (p < 0.05) reduced the risk of mortality among critically ill patients with COVID-19. However, early enteral nutrition or enteral nutrition did not significantly (p > 0.05) reduce the length of hospital stay, length of ICU stay and days on mechanical ventilation compared to delayed enteral nutrition or parenteral nutrition. More studies are needed to examine the effect of early enteral nutrition in patients with COVID-19.
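A note for readers working with the figures above: the quoted relative risk reduction follows directly from the pooled risk ratio (RRR = 1 - RR). The short Python sketch below merely re-derives the 11% figure and the range implied by the reported confidence interval; it is an illustrative calculation, not part of the original meta-analysis.

# Relative risk reduction implied by the pooled risk ratio reported in PUBMED:35268095 (illustrative only).
rr_point, rr_low, rr_high = 0.89, 0.79, 0.99
rrr_point = 1 - rr_point                  # 0.11, i.e. the 11% RRR quoted in the abstract
rrr_range = (1 - rr_high, 1 - rr_low)     # roughly 1% to 21%
print(f"RRR = {rrr_point:.0%} (range {rrr_range[0]:.0%} to {rrr_range[1]:.0%})")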
abstract_id: PUBMED:22261951
Combined enteral and parenteral nutrition. Purpose Of Review: To review and discuss the evidence and arguments to combine enteral nutrition and parenteral nutrition in the ICU, in particular with reference to the Early Parenteral Nutrition Completing Enteral Nutrition in Adult Critically Ill Patients (EPaNIC) study.
Recent Findings: The EPaNIC study shows an advantage in terms of discharges alive from the ICU when parenteral nutrition is delayed to day 8 as compared with combining enteral nutrition and parenteral nutrition from day 3 of ICU stay.
Summary: The difference between the guidelines from the European Society of Enteral and Parenteral Nutrition in Europe and American Society for Parenteral and Enteral Nutrition/Society of Critical Care Medicine in North America concerning the combination of enteral nutrition and parenteral nutrition during the initial week of ICU stay was reviewed. The EPaNIC study clearly demonstrates that early parenteral nutrition in the ICU is not in the best interests of most patients. Exactly at what time point the combination of enteral nutrition and parenteral nutrition should be considered is still an open question.
abstract_id: PUBMED:29883514
Enteral versus parenteral nutrition and enteral versus a combination of enteral and parenteral nutrition for adults in the intensive care unit. Background: Critically ill people are at increased risk of malnutrition. Acute and chronic illness, trauma and inflammation induce stress-related catabolism, and drug-induced adverse effects may reduce appetite or increase nausea and vomiting. In addition, patient management in the intensive care unit (ICU) may also interrupt feeding routines. Methods to deliver nutritional requirements include provision of enteral nutrition (EN), or parenteral nutrition (PN), or a combination of both (EN and PN). However, each method is problematic. This review aimed to determine the route of delivery that optimizes uptake of nutrition.
Objectives: To compare the effects of enteral versus parenteral methods of nutrition, and the effects of enteral versus a combination of enteral and parenteral methods of nutrition, among critically ill adults, in terms of mortality, number of ICU-free days up to day 28, and adverse events.
Search Methods: We searched CENTRAL, MEDLINE, and Embase on 3 October 2017. We searched clinical trials registries and grey literature, and handsearched reference lists of included studies and related reviews.
Selection Criteria: We included randomized controlled studies (RCTs) and quasi-randomized studies comparing EN given to adults in the ICU versus PN or versus EN and PN. We included participants that were trauma, emergency, and postsurgical patients in the ICU.
Data Collection And Analysis: Two review authors independently assessed studies for inclusion, extracted data, and assessed risk of bias. We assessed the certainty of evidence with GRADE.
Main Results: We included 25 studies with 8816 participants; 23 studies were RCTs and two were quasi-randomized studies. All included participants were critically ill in the ICU with a wide range of diagnoses; mechanical ventilation status between study participants varied. We identified 11 studies awaiting classification for which we were unable to assess eligibility, and two ongoing studies. Seventeen studies compared EN versus PN, six compared EN versus EN and PN, and two were multi-arm studies comparing EN versus PN versus EN and PN. Most studies reported randomization and allocation concealment inadequately. Most studies reported no methods to blind personnel or outcome assessors to nutrition groups; one study used adequate methods to reduce risk of performance bias.
Enteral nutrition versus parenteral nutrition: We found that one feeding route rather than the other (EN or PN) may make little or no difference to mortality in hospital (risk ratio (RR) 1.19, 95% confidence interval (CI) 0.80 to 1.77; 361 participants; 6 studies; low-certainty evidence), or mortality within 30 days (RR 1.02, 95% CI 0.92 to 1.13; 3148 participants; 11 studies; low-certainty evidence). It is uncertain whether one feeding route rather than the other reduces mortality within 90 days because the certainty of the evidence is very low (RR 1.06, 95% CI 0.95 to 1.17; 2461 participants; 3 studies). One study reported mortality at one to four months and we did not combine this in the analysis; we reported these data as mortality within 180 days, and it is uncertain whether EN or PN affects the number of deaths within 180 days because the certainty of the evidence is very low (RR 0.33, 95% CI 0.04 to 2.97; 46 participants). No studies reported number of ICU-free days up to day 28; one study reported number of ventilator-free days up to day 28, and it is uncertain whether one feeding route rather than the other reduces the number of ventilator-free days up to day 28 because the certainty of the evidence is very low (mean difference, inverse variance, 0.00, 95% CI -0.97 to 0.97; 2388 participants). We combined data for adverse events reported by more than one study. It is uncertain whether EN or PN affects aspiration because the certainty of the evidence is very low (RR 1.53, 95% CI 0.46 to 5.03; 2437 participants; 2 studies), and we found that one feeding route rather than the other may make little or no difference to pneumonia (RR 1.10, 95% CI 0.82 to 1.48; 415 participants; 7 studies; low-certainty evidence). We found that EN may reduce sepsis (RR 0.59, 95% CI 0.37 to 0.95; 361 participants; 7 studies; low-certainty evidence), and it is uncertain whether PN reduces vomiting because the certainty of the evidence is very low (RR 3.42, 95% CI 1.15 to 10.16; 2525 participants; 3 studies).
Enteral nutrition versus enteral nutrition and parenteral nutrition: We found that one feeding regimen rather than another (EN or combined EN and PN) may make little or no difference to mortality in hospital (RR 0.99, 95% CI 0.84 to 1.16; 5111 participants; 5 studies; low-certainty evidence), and at 90 days (RR 1.00, 95% CI 0.86 to 1.18; 4760 participants; 2 studies; low-certainty evidence). It is uncertain whether combined EN and PN leads to fewer deaths at 30 days because the certainty of the evidence is very low (RR 1.64, 95% CI 1.06 to 2.54; 409 participants; 3 studies). It is uncertain whether one feeding regimen rather than another reduces mortality within 180 days because the certainty of the evidence is very low (RR 1.00, 95% CI 0.65 to 1.55; 120 participants; 1 study). No studies reported number of ICU-free days or ventilator-free days up to day 28. It is uncertain whether either feeding method reduces pneumonia because the certainty of the evidence is very low (RR 1.40, 95% CI 0.91 to 2.15; 205 participants; 2 studies). No studies reported aspiration, sepsis, or vomiting.
Authors' Conclusions: We found insufficient evidence to determine whether EN is better or worse than PN, or than combined EN and PN for mortality in hospital, at 90 days and at 180 days, and on the number of ventilator-free days and adverse events. We found fewer deaths at 30 days when studies gave combined EN and PN, and reduced sepsis for EN rather than PN. We found no studies that reported number of ICU-free days up to day 28. Certainty of the evidence for all outcomes is either low or very low. The 11 studies awaiting classification may alter the conclusions of the review once assessed.
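For readers unfamiliar with how figures such as "RR 0.59, 95% CI 0.37 to 0.95" are derived, the sketch below shows the standard risk-ratio calculation with a log-scale confidence interval, as typically used in such meta-analyses. The event counts are hypothetical placeholders chosen for illustration and are not taken from the review.

import math

# Hypothetical 2x2 counts (events / total) in the EN and PN arms; illustration only.
events_en, n_en = 18, 180
events_pn, n_pn = 31, 181

rr = (events_en / n_en) / (events_pn / n_pn)

# Conventional log-scale 95% confidence interval for a risk ratio.
se_log_rr = math.sqrt(1/events_en - 1/n_en + 1/events_pn - 1/n_pn)
low = math.exp(math.log(rr) - 1.96 * se_log_rr)
high = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {low:.2f} to {high:.2f}")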
abstract_id: PUBMED:26673199
Nutrition Support in the Open Abdomen. Early provision of enteral nutrition (EN) in critically ill and injured patients has become standard practice in surgical intensive care units (ICUs) due to its proven role in reducing septic complications. Increasingly, intensivists are confronted with patients with an open abdomen due to the use of damage control surgery and the recognition of the abdominal compartment syndrome; the role and timing of EN in these challenging patients continue to be debated. Patients with an open abdomen are often among the sickest in the ICU and hence could benefit from early nutrition support. However, the exposed abdominal viscera can understandably create anxiety regarding the initiation of EN; there is theoretic concern over exacerbation of bowel distention with resultant inability to close the abdomen and an increased aspiration risk due to paralytic ileus. Recent studies have investigated the utility of EN in the patient with an open abdomen, addressing these clinical concerns. The goal of this clinical review is to provide guidance to physicians caring for these complex patients.
abstract_id: PUBMED:8031947
Effect of enteral nutrition on the bacterial colonization of the rectum. The proven role of the gastrointestinal flora, once it becomes potentially pathogenic, in the origin of pulmonary, urinary and wound infections and of sepsis in the critically ill patient led us to carry out a study whose objective was to determine the influence of enteral feeding on the gastrointestinal flora, in terms of the variety of species and their significance. The prospective survey was carried out from October 1992 to May 1993 and included thirty patients, eighteen of whom received enteral nutrition while the others received nutritional support by other routes. Only patients with a minimum four-day stay in the Intensive Care Unit of the Basurto hospital were included. Samples for microbiological study were taken by rectal swab (80%) or from faeces (20%). No qualitative or quantitative differences in the gastrointestinal flora were found between patients who received enteral nutrition and those who did not, apart from three cases attributable to antibiotic therapy. In conclusion, in our study, enteral nutrition did not influence the critically ill patient's gastrointestinal flora.
Answer: Based on the available abstracts, there appears to be no clinically relevant difference between open and closed enteral nutrition (EN) systems in terms of the volume, energy, and protein intake prescribed and administered to critically ill adults. A cohort study comparing open system (OS) and closed system (CS) groups found only minor differences that, although statistically significant, were not clinically relevant: more calories/kg were prescribed to the OS group, and a higher volume and more protein were prescribed to the CS group. The main reasons for EN discontinuation in both groups were fasting, enteral feeding or gastrointestinal problems, and the performance of procedures and ICU routines (PUBMED:22569619).
The safety of EN practices, including the risk of contamination, is a concern, especially for immunocompromised ICU patients. The review by Sinha et al. discusses the various EN practices and the need to overcome contamination challenges for better nutrition delivery (PUBMED:33024380).
The timing of EN initiation is also discussed in the literature. Early EN (within 48 hours of ICU admission) is frequently recommended, but the evidence for its efficacy and safety compared to delayed EN is uncertain due to very low-quality evidence (PUBMED:31684690). The review by PUBMED:30547663 highlights several areas of uncertainty in EN practices, including administration methods, monitoring, and specific dietary needs, suggesting priorities for future clinical research.
In the context of COVID-19, a systematic review and meta-analysis found that early EN significantly reduced the risk of mortality among critically ill patients with COVID-19, although it did not significantly reduce the length of hospital stay, ICU stay, or days on mechanical ventilation compared to delayed EN or parenteral nutrition (PUBMED:35268095).
The combination of EN and parenteral nutrition (PN) is also explored, with the EPaNIC study suggesting an advantage in terms of discharges alive from the ICU when PN is delayed to day 8 compared with combining EN and PN from day 3 of ICU stay (PUBMED:22261951). Another review found insufficient evidence to determine whether EN is better or worse than PN, or than combined EN and PN for various outcomes, including mortality and adverse events (PUBMED:29883514).
In summary, the choice between open and closed EN systems for critically ill adults does not seem to result in clinically significant differences in nutrition delivery, although other factors such as safety, timing, and combination with PN may influence outcomes and require further research. |
Instruction: Does dual-mobility cup geometry affect posterior horizontal dislocation distance?
Abstracts:
abstract_id: PUBMED:24464508
Does dual-mobility cup geometry affect posterior horizontal dislocation distance? Background: Dual-mobility acetabular cups have been marketed with the purported advantages of reduced dislocation rates and improvements in ROM; however, the relative efficacies of these designs in terms of changing joint stability via ROM and dislocation distance have not been thoroughly evaluated.
Questions/purposes: In custom computer simulation studies, we addressed the following questions: (1) Do variations in component geometry across dual-mobility designs (anatomic, modular, and subhemispheric) affect the posterior horizontal dislocation distances? (2) How do these compare with the measurements obtained with standard hemispheric fixed bearings? (3) What is the effect of head size on posterior horizontal dislocation distances for dual-mobility and standard hemispheric fixed bearings? (4) What are the comparative differences in prosthetic impingement-free ROM between three modern dual-mobility components (anatomic, modular, and subhemispheric), and standard hemispheric fixed bearings?
Methods: CT scans of an adult pelvis were imported into computer-aided design software to generate a dynamic three-dimensional model of the pelvis. Using this software, computer-aided design models of three dual-mobility designs (anatomic, modular, and subhemispheric) and standard hemispheric fixed bearings were implanted in the pelvic model and the posterior horizontal dislocation distances measured. Hip ROM simulator software was used to compare the prosthetic impingement-free ROMs of dual-mobility bearings with standard hemispheric fixed-bearing designs.
Results: Variations in component design had greater effect on posterior horizontal dislocation distance values than increases in head size in a specific design (p < 0.001). Anatomic and modular dual-mobility designs were found to have greater posterior horizontal dislocation distances than the subhemispheric dual-mobility and standard hemispheric fixed-bearing designs (p < 0.001). Increasing head sizes increased posterior horizontal dislocation distances across all designs (p < 0.001). The subhemispheric dual-mobility implant was found to have the greatest prosthetic impingement-free ROM among all prosthetic designs (p < 0.001; R(2) = 0.86).
Conclusions: The posterior horizontal dislocation distances differ with the individual component geometries of dual-mobility designs, with the anatomic and modular designs showing higher posterior horizontal dislocation distances compared with subhemispheric dual-mobility and standard hemispheric fixed-bearing designs.
Clinical Relevance: Static, three-dimensional computerized simulation studies suggest differences that may influence the risk of dislocation among components with varying geometries, favoring anatomic and modular dual-mobility designs. Clinical studies are needed to confirm these observations.
abstract_id: PUBMED:34963863
Intraprosthetic Dislocation of Dual-Mobility Total Hip Arthroplasty: The Unforeseen Complication. Total hip arthroplasty (THA) is one of the most successful and widely accepted orthopedic procedures. Instability after THA is one of the most significant postoperative complications. Dual-mobility THA components were introduced in 1974 to overcome the risk of instability by increasing the jump distance. Dual-mobility bearings couple two articulations, namely, one between a 22-28 mm prosthetic head and polyethylene liner and another larger articulation between the polyethylene liner and the metal cup. Dislocation of the polyethylene liner and the consequent direct articulation between the prosthetic head and metal cup is recognized as intraprosthetic dislocation (IPD). This mode of THA failure is specific to dual-mobility implants. Despite the reduced incidence of IPD in modern dual-mobility implants compared to the early designs, iatrogenic IPD can occur during closed reduction of dislocated polyethylene liner-metal cup articulation. IPD requires timely diagnosis and early surgical intervention to minimize the necessity of major revision surgeries. This study presents a comprehensive review for dual-mobility-bearing THA, including the history and biomechanics, and focuses on the pathomechanics, diagnosis, and management of IPD.
abstract_id: PUBMED:32385555
The Lefèvre retentive cup compared with the dual mobility cup in total hip arthroplasty revision for dislocation. Background: Limiting the risk of dislocation is one of the main aims of both dual mobility and Lefèvre retentive cups. However, these devices have never been compared. The goal of our study was to compare these devices in total hip arthroplasty revisions for instability. The judgement criterion was non-recurrence of dislocation in a follow-up period of eight years.
Methods: This retrospective case-control study compared two continuous paired series of total hip arthroplasty revisions for instability. These series included 63 patients and 159 patients with implantation of a Lefèvre retentive cup and a dual mobility cup, respectively.
Results: The success rate at eight years (i.e., no recurrence) was 91 ± 0.05% and 95 ± 0.02% in the Lefèvre retentive cup and dual mobility groups, respectively. The difference was not statistically significant (p = 0.6).
Conclusion: It seems that the Lefèvre retentive cup provides comparable outcomes with the dual mobility cup in the total hip arthroplasty revisions for instability, avoiding recurrence in long term.
abstract_id: PUBMED:24997653
Relative head size increase using an anatomic dual mobility hip prosthesis compared to traditional hip arthroplasty: impact on hip stability. Smaller head sizes and head/cup ratios make cups smaller than 50 mm and larger than 58 mm more prone to dislocation. Using computer modeling, we compared average head sizes and posterior horizontal dislocation distance (PHDD) in two 78-patient matched cohorts. Cup sizes were small (≤50 mm) or large (≥58 mm). The control cohort had conventional fixed-bearing prostheses, while the experimental cohort had anatomical dual mobility (ADM) hip prostheses. ADM cups have larger average head sizes and PHDD than traditional fixed-bearing prostheses, by 11.5 mm and 80% for cups ≤50 mm, and by 16.3 mm and 90% for cups ≥58 mm. Larger head sizes and an increased head/cup ratio may allow the ADM prosthesis to reduce the incidence of dislocation.
abstract_id: PUBMED:33709946
Iatrogenic Intraprosthetic Dislocation of Dual Mobility Cup. Case Study. Intra-prosthetic dislocation of the dual-mobile acetabular cup is a rare complication. Most often, it is the result of wear of the polyethylene liner. It can also occur during a closed reduction of a dislocated dual-mobile cup. It is extremely important to recognize this complication immediately in order to avoid the consequences. This paper presents the first case of iatrogenic intraprosthetic dislocation at the Traumatology and Orthopaedics Department of the Military Medical Institute, our management of the case and suggestions for treating patients with a dislocation of the dual-mobile acetabular cup.
abstract_id: PUBMED:28913407
Early intraprosthetic dislocation in dual-mobility implants: a systematic review. Background: Dual mobility implants are subject to a specific implant-related complication, intraprosthetic dislocation (IPD), in which the polyethylene liner dissociates from the femoral head. For older generation designs, IPD was attributable to late polyethylene wear and subsequent failure of the head capture mechanism. However, early IPDs have been reportedly affecting contemporary designs.
Methods: A systematic review of the literature according to the preferred reporting items for systematic reviews and meta-analyses guidelines was performed. A comprehensive search of PubMed, MEDLINE, Embase, and Google Scholar was conducted for English articles between January 1974 and August 2016 using various combinations of the keywords "intraprosthetic dislocation," "dual mobility," "dual-mobility," "tripolar," "double mobility," "double-mobility," "hip," "cup," "socket," and "dislocation."
Results: In all, 16 articles met our inclusion criteria. Fourteen were case reports and 2 were retrospective case series. These included a total of 19 total hip arthroplasties, which were divided into 2 groups: studies dealing with early IPD after attempted closed reduction and those dealing with early IPD with no history of previous attempted closed reduction. Early IPD was reported in 15 patients after a mean follow-up of 3.2 months (2.9 SD) in the first group and in 4 patients after a mean follow-up of 15.1 months (9.9 SD) in the second group.
Conclusions: Based on the current data, most cases have been preceded by an attempted closed reduction in the setting of outer, large articulation dislocation, perhaps indicating an iatrogenic etiology for early IPD. Recognition of this possible failure mode is essential to its prevention and treatment.
abstract_id: PUBMED:36675742
Is Cemented Dual-Mobility Cup a Reliable Option in Primary and Revision Total Hip Arthroplasty: A Systematic Review. Background: Instability is a common complication following total hip arthroplasty (THA). The dual mobility cup (DMC) allows a reduction in the dislocation rate. The goal of this systematic review was to clarify the different uses and outcomes according to the indications of the cemented DMC (C-DMC). Methods: A systematic review was performed using the keywords "Cemented Dual Mobility Cup" or "Cemented Tripolar Cup" without a publication year limit. Of the 465 studies identified, only 56 were eligible for the study. Results: The overall number of C-DMC was 3452 in 3426 patients. The mean follow-up was 45.9 months (range 12-98.4). In most of the cases (74.5%) C-DMC was used in a revision setting. In 57.5% DMC was cemented directly into the bone, in 39.6% into an acetabular reinforcement and in 3.2% into a pre-existing cup. The overall dislocation rate was 2.9%. The most frequent postoperative complications were periprosthetic infections (2%); aseptic loosening (1.1%) and mechanical failure (0.5%). The overall revision rate was 4.4%. The average survival rate of C-DMC at the last follow-up was 93.5%. Conclusions: C-DMC represents an effective treatment option to limit the risk of dislocations and complications for both primary and revision surgery. C-DMC has good clinical outcomes and a low complication rate.
abstract_id: PUBMED:25246992
The dislocating hip replacement - revision with a dual mobility cup in 56 consecutive patients. Introduction: Recurrent dislocations of hip replacements are a difficult challenge. One treatment option for recurrent dislocations is the use of a dual mobility cup. The aim of this study was to retrospective investigate the effect of dual mobility cups as a treatment for recurrent dislocations in a consecutive series. Materials and.
Methods: 56 consecutive patients were revised in the period November 2000 to December 2010. The mean age at revision was 72 years (SD 11, range 37-92)) and median number of dislocations before revision surgery were 4 (IQR, 2-11). In all cases, revision was made with a Saturne dual mobility cup (Amplitude, Neyron, France). The mean follow-up period was 44 months (SD 30, range 0.1-119).
Results: One patient (1.8%) experienced a re-dislocation. Three patients (5.3%) had to be revised. One due to disintegration between the femoral head and inner shell, one due to loosening of the acetabular component, and one due to infection. Harris Hip Score improved from a mean of 76 before index surgery to 87 within one year after index surgery.
Conclusion: This study advocates the use of a dual mobility cup for treatment of recurrent dislocations of THR. However, studies with a longer follow up are needed in order to evaluate implant survival.
abstract_id: PUBMED:38292143
Femoral Neck Design Does Not Impact Revision Risk After Primary Total Hip Arthroplasty Using a Dual Mobility Cup. Background: The use of dual mobility (DM) cups has increased quickly. It is hypothesized that femoral neck taper geometry may be involved in the risk of prosthetic impingement and DM cup revision. We aim to (1) explore the reasons for revision of DM cups or head/liners and (2) explore whether certain femoral neck characteristics are associated with a higher risk of revision of DM cups.
Methods: Primary total hip arthroplasties with a DM cup registered in the Dutch Arthroplasty Register between 2007 and 2021 were identified (n = 7603). Competing risk survival analyses were performed, with acetabular component and head/liner revision as the primary endpoint. Reasons for revision were categorized in cup-/liner-related revisions (dislocation, liner wear, acetabular loosening). Femoral neck characteristics were studied to assess whether there is an association between femoral neck design and the risk of DM cup/liner revision. Multivariable Cox proportional hazard analyses were performed.
Results: The 5- and 10-year crude cumulative incidence of DM cup or head/liner revision for dislocation, wear, and acetabular loosening was 0.5% (CI 0.4-0.8) and 1.9% (CI 1.3-2.8), respectively. After adjusting for confounders, we found no association between the examined femoral neck characteristics (alloy used, neck geometry, CCD angle, and surface roughness) and the risk for revision for dislocation, wear, and acetabular loosening.
Conclusions: The risk of DM cup or head/liner revision for dislocation, wear, and acetabular loosening was low. We found no evidence that there is an association between femoral neck design and the risk of cup or head/liner revision.
abstract_id: PUBMED:31372217
Comparison of bipolar hemiarthroplasty and total hip arthroplasty with dual mobility cup in the treatment of old active patients with displaced neck of femur fracture: A retrospective cohort study. Background: The standard treatment of displaced femoral neck fracture is arthroplasty. THA is reportedly superior to BHA in terms of hip pain, function and reoperation rate. On the other hand THA has a higher rate of dislocation. Total hip replacement with dual mobility cup increases the range of motion and reduces the chances of dislocation. The aim of this study is to compare the functional outcome, rate of dislocation, complications and mortality between BHA and THA with dual mobility cuff for the treatment of displaced neck of femur fracture.
Patients And Method: This is a non-commercialized retrospective cohort study conducted at our tertiary care level 1 trauma centre. Patients of age group 60 years and above who underwent hip arthroplasty (BHA or THA with dual mobility implant) between 2015 and 2017 for displaced neck of femur fracture with a complete follow up for one year were included. Both groups were assessed for postoperative surgical complications including dislocation, fracture, surgical site infection, and medical complications, one-year mortality and functional outcome were analysed via Harris Hip Score (HHS) at the latest follow up.
Results: Overall 104 patients were included in the study out of which 77 patients underwent BHA while 27 underwent THA with dual mobility cup. Baseline characteristics were found to be similar in both groups. Mean pre-op HHS for bipolar group was found to be 71.01 while for THA with dual mobility cup group it was 73.52 with the difference being statistically insignificant (P = 0.12). Mean post-op HHS for bipolar group was noted to be 68.82 whereas for THA with dual mobility cup group it was 76.81. The difference was found to be statistically significant with a P-value of <0.01. With regards to post-operative complications and one-year post-operative mortality, no significant difference was noted between both groups.
Conclusion: In relatively young and active elderly patients with a displaced neck of femur fracture, THA with a dual mobility cup provides a better hip functional outcome, does not increase mortality or morbidity compared with BHA, and can be considered as a primary treatment modality.
Answer: Yes, dual mobility cup geometry does affect posterior horizontal dislocation distance. Variations in component design had a greater effect on posterior horizontal dislocation distance values than increases in head size in a specific design (p < 0.001). Anatomic and modular dual-mobility designs were found to have greater posterior horizontal dislocation distances than the subhemispheric dual-mobility and standard hemispheric fixed-bearing designs (p < 0.001). Increasing head sizes increased posterior horizontal dislocation distances across all designs (p < 0.001). The subhemispheric dual-mobility implant was found to have the greatest prosthetic impingement-free range of motion among all prosthetic designs (p < 0.001; R(2) = 0.86) (PUBMED:24464508). |
Instruction: Intrauterine growth restriction at term: induction or spontaneous labour?
Abstracts:
abstract_id: PUBMED:34844886
Labour induction in twin pregnancies. Medically-indicated deliveries are common in twin pregnancies given the increased risk of various obstetric complications in twin compared to singleton pregnancies, mainly hypertensive disorders of pregnancy and foetal growth restriction. Due to the unique characteristics of twin pregnancies, the success rates and safety of labour induction may be different than in singleton pregnancies. However, while there are abundant data regarding induction of labour in singleton pregnancies, the efficacy and safety of labour induction in twin pregnancies have been far less studied. In the current manuscript we summarize available data on various aspects of labour induction in twin pregnancies including incidence, success rate, prognostic factors, safety and methods for labour induction in twins. This information may assist healthcare providers in counselling patients with twin pregnancies when labour induction is indicated.
abstract_id: PUBMED:29277266
Changes in maternal placental growth factor levels during term labour. Placental growth factor (PlGF) has important angiogenic function that is critical to placental development. Lower levels of PlGF are associated with fetal growth restriction, pre-eclampsia and intrapartum fetal compromise. The aim of this study was to investigate the effect of labour on maternal PlGF levels.
Method: This was a prospective observational cohort study. Normotensive women with a singleton, normally grown, non-anomalous, fetus between 37 + 0 and 42 + 0 weeks gestation were eligible for inclusion. PlGF was assayed at two time-points in labour. Women undergoing elective caesarean section served as controls. The primary outcome was the intrapartum change in maternal PlGF levels.
Results: Fifty-nine labouring and 43 non-labouring participants were included. Median PlGF decreased from 105.5 pg/mL to 80.9 pg/mL during labour (-23.9%, p < 0.001). PlGF levels were significantly lower in the second stage of labour irrespective of onset of labour, parity, mode of birth or gestation ≥40 weeks. Compared to multiparous women, nulliparous women had significantly lower PlGF levels at both time-points but a similar overall decline in PlGF. Women who required operative vaginal delivery or emergency caesarean section had lower median PlGF levels at both time-points and a greater drop in PlGF during labour than women who had spontaneous vaginal deliveries, but these differences were not statistically significant. No correlation was observed between duration of labour and decline in PlGF levels.
Conclusion: Overall, median PlGF levels fall by nearly one quarter during labour. This decline may reflect deteriorating placental function during labour.
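As a quick arithmetic check on the Results above, the change between the reported median values (105.5 to 80.9 pg/mL) works out to roughly -23%. The abstract's -23.9% presumably summarizes paired within-woman changes rather than the simple change in group medians (that reading is our assumption, not stated in the abstract), so the two figures need not match exactly.

# Percentage change between the reported median PlGF values (illustrative check only).
baseline_median, second_stage_median = 105.5, 80.9   # pg/mL, as reported in the abstract
change = (second_stage_median - baseline_median) / baseline_median
print(f"{change:.1%}")   # about -23.3%, close to the -23.9% quoted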
abstract_id: PUBMED:28705190
Low dose aspirin in the prevention of recurrent spontaneous preterm labour - the APRIL study: a multicenter randomized placebo controlled trial. Background: Preterm birth (birth before 37 weeks of gestation) is a major problem in obstetrics and affects an estimated 15 million pregnancies worldwide annually. A history of previous preterm birth is the strongest risk factor for preterm birth, and recurrent spontaneous preterm birth affects more than 2.5 million pregnancies each year. A recent meta-analysis showed possible benefits of the use of low dose aspirin in the prevention of recurrent spontaneous preterm birth. We will assess the (cost-)effectiveness of low dose aspirin in comparison with placebo in the prevention of recurrent spontaneous preterm birth in a randomized clinical trial.
Methods/design: Women with a singleton pregnancy and a history of spontaneous preterm birth in a singleton pregnancy (22-37 weeks of gestation) will be asked to participate in a multicenter, randomized, double blinded, placebo controlled trial. Women will be randomized to low dose aspirin (80 mg once daily) or placebo, initiated from 8 to 16 weeks up to maximal 36 weeks of gestation. The primary outcome measure will be preterm birth, defined as birth at a gestational age (GA) < 37 weeks. Secondary outcomes will be a composite of adverse neonatal outcome and maternal outcomes, including subgroups of prematurity, as well as intrauterine growth restriction (IUGR) and costs from a healthcare perspective. Preterm birth will be analyzed as a group, as well as separately for spontaneous or indicated onset. Analysis will be performed by intention to treat. In total, 406 pregnant women have to be randomized to show a reduction of 35% in preterm birth from 36 to 23%. If aspirin is effective in preventing preterm birth, we expect that there will be cost savings, because of the low costs of aspirin. To evaluate this, a cost-effectiveness analysis will be performed comparing preventive treatment with aspirin with placebo.
Discussion: This trial will provide evidence as to whether or not low dose aspirin is (cost-) effective in reducing recurrence of spontaneous preterm birth.
Trial Registration: Clinical trial registration number of the Dutch Trial Register: NTR 5675 . EudraCT-registration number: 2015-003220-31.
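For context on the sample-size statement in the Methods above (406 women to show a reduction in preterm birth from 36% to 23%), the sketch below shows how a conventional two-proportion sample-size calculation can be set up in Python with statsmodels. The abstract does not state the assumed power, sidedness, or attrition allowance, so the parameters used here (80% power, two-sided alpha of 0.05) are our assumptions, and the output will not necessarily reproduce the trial's figure of 406.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed design parameters (not stated in the abstract): 80% power, two-sided alpha 0.05.
effect = proportion_effectsize(0.36, 0.23)   # Cohen's h for 36% vs 23% preterm birth
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.80,
                                         ratio=1.0, alternative='two-sided')
print(round(n_per_arm), "women per arm, before any allowance for attrition")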
abstract_id: PUBMED:31000886
Induction of Labour in Growth Restricted and Small for Gestational Age Foetuses - A Historical Cohort Study. Purpose Induction of labour for small-for-gestational-age (SGA) foetus or intrauterine growth restriction (IUGR) is common, but data are limited. The aim of this study was therefore to compare labour induction for SGA/IUGR with cases of normal foetal growth above the 10th percentile. Material and Methods This historical multicentre cohort study included singleton pregnancies at term. Labour induction for SGA/IUGR (IUGR group) was compared with cases of foetal growth above the 10th percentile (control group). Primary outcome measure was caesarean section rate. Results The caesarean section rate was not different between the 2 groups (27.0 vs. 26.2%, p = 0.9154). In the IUGR group, abnormal CTG was more common (30.8 vs. 21.9%, p = 0.0214), and foetal blood analysis was done more often (2.5 vs. 0.5%, p = 0.0261). There were more postpartum transfers to the NICU in the IUGR group (40.0 vs. 12.8%, p < 0.0001), too. Conclusion Induction of labour for foetal growth restriction was not associated with an increased rate of caesarean section.
abstract_id: PUBMED:27013750
Maternal and foetal outcome after epidural labour analgesia in high-risk pregnancies. Background And Aims: Low concentration local anaesthetic improves uteroplacental blood flow in antenatal period and during labour in preeclampsia. We compared neonatal outcome after epidural ropivacaine plus fentanyl with intramuscular tramadol analgesia during labour in high-risk parturients with intrauterine growth restriction of mixed aetiology.
Methods: Forty-eight parturients with sonographic evidence of foetal weight <1.5 kg were enrolled in this non-randomized, double-blinded prospective study. The epidural (E) group received 0.15% ropivacaine 10 ml with 30 μg fentanyl incremental bolus followed by 7-15 ml 0.1% ropivacaine with 2 μg/ml fentanyl in continuous infusion titrated until visual analogue scale was three. Tramadol (T) group received intramuscular tramadol 1 mg/kg as bolus as well as maintenance 4-6 hourly. Neonatal outcomes were measured with cord blood base deficit, pH, ionised calcium, sugar and Apgar score after delivery. Maternal satisfaction was also assessed by four point subjective score.
Results: Baseline maternal demographics and neonatal birth weight were comparable. Neonatal cord blood pH, base deficit, sugar, and ionised calcium levels were significantly improved in the epidural group in comparison to the tramadol group. Maternal satisfaction (P = 0.0001) regarding labour analgesia in epidural group was expressed as excellent by 48%, good by 52% whereas it was fair in 75% and poor in 25% in the tramadol group. Better haemodynamic and pain scores were reported in the epidural group.
Conclusion: Epidural labour analgesia with low concentration local anaesthetic is associated with less neonatal cord blood acidaemia, better sugar and ionised calcium levels. The analgesic efficacy and maternal satisfaction are also better with epidural labour analgesia.
abstract_id: PUBMED:3669058
The role of mycoplasmas, ureaplasmas and chlamydiae in the genital tract of women presenting in spontaneous early preterm labour. The genital carriage of Ureaplasma urealyticum, Mycoplasma hominis and Chlamydia trachomatis was assessed in 72 women admitted to hospital in spontaneous preterm labour and in 26 women requiring preterm delivery for other reasons who formed a control group. Women in preterm labour significantly more often carried ureaplasmas, had large numbers of M. hominis and subsequently developed chorioamnionitis than women in the control group. M. hominis, in particular, occurred more frequently and in large numbers in women who had chorioamnionitis associated with ruptured membranes. Genital carriage of the various micro-organisms appeared not to be associated with fetal growth retardation, although subsequent isolation of ureaplasmas from infants was common. It is suggested that mid-second-trimester vaginal specimens should be cultured on a research basis to establish whether these various micro-organisms identify women at risk of labouring preterm.
abstract_id: PUBMED:29604081
The peripheral chemoreflex: indefatigable guardian of fetal physiological adaptation to labour. The fetus is consistently exposed to repeated periods of impaired oxygen (hypoxaemia) and nutrient supply in labour. This is balanced by the healthy fetus's remarkable anaerobic tolerance and impressive ability to mount protective adaptations to hypoxaemia. The most important mediator of fetal adaptations to brief repeated hypoxaemia is the peripheral chemoreflex, a rapid reflex response to acute falls in arterial oxygen tension. The overwhelming majority of fetuses are able to respond to repeated uterine contractions without developing hypotension or hypoxic-ischaemic injury. In contrast, fetuses who are either exposed to severe hypoxaemia, for example during uterine hyperstimulation, or enter labour with reduced anaerobic reserve (e.g. as shown by severe fetal growth restriction) are at increased risk of developing intermittent hypotension and cerebral hypoperfusion. It is remarkable to note that when fetuses develop hypotension during such repeated severe hypoxaemia, it is not mediated by impaired reflex adaptation, but by failure to maintain combined ventricular output, likely due to a combination of exhaustion of myocardial glycogen and evolving myocardial injury. The chemoreflex is suppressed by relatively long periods of severe hypoxaemia of 1.5-2 min, longer than the typical contraction. Even in this setting, the peripheral chemoreflex is consistently reactivated between contractions. These findings demonstrate that the peripheral chemoreflex is an indefatigable guardian of fetal adaptation to labour.
abstract_id: PUBMED:33135779
The predicted clinical workload associated with early post-term surveillance and inductions of labour in south Asian women in a non-tertiary hospital setting. Background: Stillbirth increases steeply after 42 weeks gestation; hence, induction of labour (IOL) is recommended after 41 weeks. Recent Victorian data demonstrate that term stillbirth risk rises at an earlier gestation in south Asian mothers (SAM).
Aims: To determine the impact on a non-tertiary hospital in Melbourne, Australia, if post-dates IOL were recommended one week earlier at 40 + 3 for SAM; and to calculate the proportion of infants with birthweight < 3rd centile that were undelivered by 40 weeks in SAM and non-SAM, as these cases may represent undetected fetal growth restriction.
Materials And Methods: Singleton births ≥ 37 weeks during 2017-18 were extracted from the hospital Birthing Outcomes System. Obstetric and neonatal outcomes for pregnancies that birthed after spontaneous onset of labour or IOL were analysed according to gestation and country of birth.
Results: There were 5408 births included, and 24.9% were born to SAM (n = 1345). SAM women had a higher rate of IOL ≥ 37 weeks compared with non-SAM women (42.5% vs 35.0%, P < 0.001). If all SAM accepted an offer of IOL at 40 + 3, there would be an additional 80 term inductions over two years. There was no significant difference in babies < 3rd centile undelivered by 40 weeks in SAM compared with non-SAM (29.6% vs 37.7%, P = 0.42).
Conclusions: Earlier IOL for post-term SAM would only modestly increase the demand on birthing services, due to pre-existing high rates of IOL. Our current practices appear to capture the majority at highest risk of stillbirth in our SAM population.
abstract_id: PUBMED:32146087
Women's experiences of decision-making and attitudes in relation to induction of labour: A survey study. Background: Rates of induction of labour have been increasing globally to up to one in three pregnancies in many high-income countries. Although guidelines around induction, and strength of the underlying evidence, vary considerably by indication, shared decision-making is increasingly recognised as key. The aim of this study was to identify women's mode of birth preferences and experiences of shared decision-making for induction of labour.
Method: An antenatal survey of women booked for an induction at eight Sydney hospitals was conducted. A bespoke questionnaire was created assessing women's demographics, indication for induction, pregnancy model of care, initial birth preferences, and their experience of the decision-making process.
Results: Of 189 survey respondents (58% nulliparous), major reported reasons for induction included prolonged pregnancy (38%), diabetes (25%), and suspected fetal growth restriction (8%). Most respondents (72%) had hoped to labour spontaneously. Major findings included 19% of women not feeling like they had a choice about induction of labour, 26% not feeling adequately informed (or uncertain if informed), 17% not being given alternatives, and 30% not receiving any written information on induction of labour. Qualitative responses highlight a desire of women to be more actively involved in decision-making.
Conclusion: A substantial minority of women did not feel adequately informed or prepared, and indicated they were not given alternatives to induction. Suggested improvements include for face-to-face discussions to be supplemented with written information, and for shared decision-making interventions, such as the introduction of decision aids and training, to be implemented and evaluated.
abstract_id: PUBMED:24766260
Polymorphisms of toll-like receptors 2, 3 and 4 in patients that do and do not enter labour spontaneously at term. To assess the association of polymorphisms of toll-like receptors (TLRs) 2, 3 and 4 with delayed onset of labour in term pregnancies, patients delivering at > 37 weeks and without pre-eclampsia, IUGR or a history of preterm delivery were prospectively evaluated. TLR2 Arg753Gln, TLR3 (c.1377C/T) and TLR4 Asp299Gly and Thr399Ile polymorphisms were genotyped by using PCR-RFLP. Patients labouring spontaneously before the 41st week were compared with those who did not labour spontaneously until this week in terms of baseline characteristics and TLR 2, 3 and 4 polymorphisms. The same comparisons were also performed by using a 40th week cut-off. Chi-square test, two-sample t-test or Mann-Whitney U tests were used for comparisons, as appropriate. In total, 79 patients delivering after 37 weeks were evaluated. All had CC genotype for TLR2 Arg753Gln and TLR4 Thr399Ile. There were no significant differences for TLR4 Asp299Gly GA and TLR3 (c.1377C/T) polymorphisms between patients spontaneously entering or not entering labour until the 41st week; the same was true when the 40th week cut-off was used. Delayed onset of labour in term pregnant women does not seem to be affected by the presence of TLR 2, 3 or 4 polymorphisms. Further studies are needed.
Answer: The decision to induce labor or await spontaneous labor in cases of intrauterine growth restriction (IUGR) at term can be complex and depends on various factors, including the condition of the fetus and the mother.
Induction of labor for small-for-gestational-age (SGA) fetuses or IUGR is common practice, but the data are limited. A historical cohort study aimed to compare labor induction for SGA/IUGR with cases of normal fetal growth above the 10th percentile. The study found that induction of labor for fetal growth restriction was not associated with an increased rate of cesarean section. In the IUGR group, abnormal cardiotocography (CTG) was more common, and there were more postpartum transfers to the neonatal intensive care unit (NICU), but the caesarean section rate was not different between the two groups (PUBMED:31000886).
In the context of twin pregnancies, which often have unique characteristics and increased risks of obstetric complications, the success rates and safety of labor induction may differ from singleton pregnancies. However, the efficacy and safety of labor induction in twin pregnancies have been less studied. Healthcare providers may need to counsel patients with twin pregnancies when labor induction is indicated, taking into account the limited data available (PUBMED:34844886).
For high-risk pregnancies, such as those with preeclampsia and IUGR of mixed etiology, epidural labor analgesia with low concentration local anesthetic has been associated with less neonatal cord blood acidaemia, better sugar and ionized calcium levels, and better analgesic efficacy and maternal satisfaction compared to intramuscular tramadol analgesia (PUBMED:27013750).
In conclusion, while induction of labor for IUGR at term is a common practice and does not seem to increase the rate of cesarean section, the decision should be individualized based on the specific circumstances of each case. Healthcare providers should consider the available evidence, the condition of the fetus, and maternal factors when counseling patients on the best course of action for labor and delivery in the presence of IUGR at term. |
Instruction: Does the obesity paradox apply to early postoperative complications after hip surgery?
Abstracts:
abstract_id: PUBMED:27290953
Does the obesity paradox apply to early postoperative complications after hip surgery? A retrospective chart review. Background: There is evidence that very obese patients (body mass index [BMI] >40 kg/m(2)) undergoing hip replacement have longer average hospital stays, as well as higher rates of complications and readmission compared with patients with normal BMI. However, there are sparse data describing how overweight and obese patients fare in the period immediately after hip replacement surgery compared with patients with low or normal BMI. In this study, we sought to explore the association of BMI with the rate of early postoperative complications in patients undergoing total hip arthroplasty.
Methods: A proprietary hospital software program, Clinical Looking Glass was used to query the Montefiore Medical Center database and create a list of patients with International Classification of Diseases, Ninth Revision code 81.51 (hip replacement) from the period of January 1, 2010, through December 31, 2012. The medical records of patients with length of stay 5 or more days were reviewed to evaluate the reason for the extended stay. The primary outcome studied was the association between BMI and occurrence of early complications in patients who had undergone total hip replacement surgery. Logistic regression was used to calculate adjusted odds ratio (OR) and 95% confidence interval (CI) for the association of BMI and early postoperative complications.
Results: Of the 802 patients undergoing hip replacement surgery within our time frame, 142 patient medical records were reviewed due to their length of stay of ≥5 days. The overall complication rate in the analyzed patients followed a J-curve distribution, with the highest morbidity (23.5%) in the underweight group, the second highest in the normal-weight group (17.3%), a decrease to a nadir in the overweight (8.0%) and obese class I (10.0%) groups, and higher rates again in classes II (14.3%) and III (16.7%). Adjusted ORs followed the same J-shaped pattern observed in the univariate analysis. Of the variables studied, Charlson score (OR, 1.1; 95% CI, 1.1-1.2; P = .03), diagnosis of hip fracture (OR, 5.2; 95% CI, 2.8-9.8; P = .01), normal weight (OR, 1.9; 95% CI, 1.1-3.8; P = .04), and obese class III (OR, 2.5; 95% CI, 1.1-6.3; P = .04) were the factors associated with the highest odds of early complications after hip replacement surgery.
Conclusions: In this retrospective review of hip replacement surgery patients, BMI classification was a predictor of early postoperative complications. Although the exact underlying mechanisms are still not clear, these results are consistent with the obesity paradox, in which obesity or its correlates provide some form of protection.
abstract_id: PUBMED:29480892
Can Patient Selection Explain the Obesity Paradox in Orthopaedic Hip Surgery? An Analysis of the ACS-NSQIP Registry. Background: The "obesity paradox" is a phenomenon described in prior research in which patients who are obese have been shown to have lower postoperative mortality and morbidity compared with normal-weight individuals. The paradox is that clinical experience suggests that obesity is a risk factor for difficult wound healing and adverse cardiovascular outcomes. We suspect that the obesity paradox may reflect selection bias in which only the healthiest patients who are obese are offered surgery, whereas nonobese surgical patients are comprised of both healthy and unhealthy individuals. We questioned whether the obesity paradox (decreased mortality for patients who are obese) would be present in nonurgent hip surgery in which patients can be carefully selected for surgery but absent in urgent hip surgery where patient selection is minimized.
Questions/purposes: (1) What is the association between obesity and postoperative mortality in urgent and nonurgent hip surgery? (2) How is obesity associated with individual postoperative complications in urgent and nonurgent hip surgery? (3) How is underweight status associated with postoperative mortality and complications in urgent and nonurgent hip surgery?
Methods: We used 2011 to 2014 data from the American College of Surgeons National Surgical Quality Improvement Project (ACS-NSQIP) to identify all adults who underwent nonurgent hip surgery (n = 63,148) and urgent hip surgery (n = 29,047). We used logistic regression models, controlling for covariates including age, sex, anesthesia risk, and comorbidities, to examine the relationship between body mass index (BMI) category (classified as underweight < 18.5 kg/m(2), normal 18.5-24.9 kg/m(2), overweight 25-29.9 kg/m(2), obese 30-39.9 kg/m(2), and morbidly obese > 40 kg/m(2)) and adverse outcomes including 30-day mortality and surgical complications including wound complications and cardiovascular events.
Results: For patients undergoing nonurgent hip surgery, regression models demonstrate that patients who are morbidly obese were less likely to die within 30 days after surgery (odds ratio [OR], 0.12; 95% confidence interval [CI], 0.01-0.57; p = 0.038) compared with patients with normal BMI, consistent with the obesity paradox. For patients undergoing urgent hip surgery, patients who are morbidly obese had similar odds of death within 30 days compared with patients with normal BMI (OR, 1.18; 95% CI, 0.76-1.76; p = 0.54). Patients who are morbidly obese had higher odds of wound complications in both nonurgent (OR, 4.93; 95% CI, 3.68-6.65; p < 0.001) and urgent cohorts (OR, 4.85; 95% CI, 3.27-7.01; p < 0.001) compared with normal-weight patients. Underweight patients were more likely to die within 30 days in both nonurgent (OR, 3.79; 95% CI, 1.10-9.97; p = 0.015) and urgent cohorts (OR, 1.47; 95% CI, 1.23-1.75; p < 0.001) compared with normal-weight patients.
Conclusions: Patients who are morbidly obese appear to have a reduced risk of death in 30 days after nonurgent hip surgery, but not for urgent hip surgery. Our results suggest that the obesity paradox may be an artifact of selection bias introduced by careful selection of the healthiest patients who are obese for elective hip surgery. Surgeons should continue to consider obesity a risk factor for postoperative mortality and complications such as wound infections for both urgent and nonurgent surgery.
Level Of Evidence: Level III, therapeutic study.
abstract_id: PUBMED:33165206
Complications and 30-Day Mortality Rate After Hip Fracture Surgery in Superobese Patients. Objective: Paradoxically, overweight and obesity are associated with lower odds of complications and death after hip fracture surgery. Our objective was to determine whether this "obesity paradox" extends to patients with "superobesity." In this study, we compared rates of complications and death among superobese patients with those of patients in other body mass index (BMI) categories.
Methods: Using the National Surgical Quality Improvement Program database, we identified >100,000 hip fracture surgeries performed from 2012 to 2018. Patients were categorized as underweight (BMI <18.5), normal weight (BMI 18.5-24.9), overweight (BMI 25-29.9), obese (BMI 30-39.9), morbidly obese (BMI 40-49.9), or superobese (BMI ≥50). We analyzed patient characteristics, surgical characteristics, and 30-day outcomes. Using multivariate regression with normal-weight patients as the referent, we determined odds of major complications, minor complications, and death within 30 days by BMI category.
Results: Of 440 superobese patients, 20% had major complications, 33% had minor complications, and 5.2% died within 30 days after surgery. When comparing patients in other BMI categories with normal-weight patients, superobese patients had the highest odds of major complications [odds ratio (OR): 1.6, 95% confidence interval (CI), 1.2-2.0] but did not have significantly different odds of death (OR: 0.91, 95% CI, 0.59-1.4) or minor complications (OR: 1.2, 95% CI, 0.94-1.4).
Conclusion: Superobese patients had significantly higher odds of major complications within 30 days after hip fracture surgery compared with all other patients. This "obesity paradox" did not apply to superobese patients.
Level Of Evidence: Prognostic Level III. See Instructions for Authors for a Complete Description of Levels of Evidence.
abstract_id: PUBMED:34962353
Heart valve surgery and the obesity paradox: A systematic review. Obesity has been associated with an increased incidence of comorbidities and shorter life expectancy, and it has generally been assumed that patients with obesity should have inferior outcomes after surgery. Previous literature has often demonstrated that patients with obesity have equivalent or even improved rates of mortality after cardiac surgery compared with their lower-weight counterparts, a phenomenon coined the obesity paradox. Herein, we aim to review the literature investigating the impact of obesity on surgical valve interventions. PubMed and Embase were systematically searched for articles published from 1 January 2000 to 15 October 2021. A total of 1315 articles comparing differences in outcomes between patients of varying body mass index (BMI) undergoing valve interventions were reviewed and 25 were included in this study. Patients with higher BMI demonstrated equivalent or reduced rates of postoperative myocardial infarction, stroke, reoperation, acute kidney injury, dialysis and bleeding. Two studies identified increased rates of deep sternal wound infection in patients with higher BMI, although the majority of studies found no significant difference in deep sternal wound infection rates. The obesity paradox has described counterintuitive outcomes predominantly in coronary artery bypass grafting and transcatheter aortic valve replacement. Recent literature has identified similar trends in other heart valve interventions. While the obesity paradox has been well characterized, its causes are yet to be identified. Further study is essential in order to identify the causes of the obesity paradox so patients of all body sizes can receive optimal care.
abstract_id: PUBMED:33637296
"Obesity paradox" has not an impact on minimally invasive anatomical lung resection. Introduction: The paradoxical benefit of obesity, the 'obesity paradox', has been analyzed in lung surgical populations with contradictory results. Our goal was assessing the relationship of body mass index (BMI) to acute outcomes after minimally invasive major pulmonary resections.
Methods: Retrospective review of consecutive patients who underwent pulmonary anatomical resection through a minimally invasive approach for the period 2014-2019. Patients were grouped as underweight, normal, overweight and obese type I, II and III. Adjusted odds ratios regarding postoperative complications (overall, respiratory, cardiovascular and surgical morbidity) were produced with their exact 95% confidence intervals. All tests were considered statistically significant at p<0.05.
Results: Among 722 patients included in the study, 37.7% had a normal BMI and 61.8% were overweight or obese. Compared with patients with a normal BMI, adjusted rates of pulmonary complications were significantly higher in obese type I patients (2.6% vs 10.6%, OR: 4.53 [95% CI: 1.86-12.11]) and obese type II-III patients (2.6% vs 10%, OR: 6.09 [95% CI: 1.38-26.89]). No significant differences were found regarding overall, cardiovascular or surgical complications among groups.
Conclusions: Obesity does not have favourable effects on early outcomes in patients undergoing minimally invasive anatomical lung resections, since the risk of respiratory complications in patients with BMI ≥30 kg/m(2) and BMI ≥35 kg/m(2) is 4.5 and 6 times higher, respectively, than that of patients with a normal BMI.
abstract_id: PUBMED:15860147
Arthroscopy of the hip joint Purpose Of The Study: Arthroscopic examination of joints has recently gained wide application. Due to hip joint shape and a difficult approach to it, hip arthroscopy has long remained outside the attention and abilities of arthroscopists. The authors present their first experience with operative hip arthroscopy that offers new options for the treatment of intra-articular pathology of the hip joint.
Material: In the years 2001-2003, 24 hip arthroscopies were performed. The following pathological conditions were diagnosed and treated: loose bodies, chondral lesions of the femoral head and acetabulum, ruptures of the labrum acetabuli and ligamentum teres, impingement syndrome of the labrum acetabuli, and coxitis. No post-operative neurologic symptoms or vascular complications were observed.
Methods: All procedures were carried out on patients in a supine position, with the treated joint in traction. A standard 30 degrees device and common instruments for arthroscopic surgery were used. The instruments were inserted in the articular fissure with the use of an X-ray intensifier. Movement in the hip joint during surgery is very limited due to traction, joint shape and the length of working canals. After traction is released, it is possible to examine also the intra-articular part of the femoral neck.
Results: The pre-operative complaints (clunking, painful joint) were relieved within 4 to 6 weeks after surgery in 23 patients. In one patient primarily diagnosed with coxitis, infection was not eradicated after lavage and debridement and, because inflammation deeply affected the femoral head, the hip was eventually treated by Girdlestone arthroplasty. The results were evaluated clinically and on the basis of the Merle d'Aubigne and Postel questionnaire assessing pain and walking abilities by both the patients and the surgeon. All 24 patients reported poor or average conditions before surgery and, after surgery, 23 experienced improvement to a very good or average condition. One patient's state failed to improve and was evaluated as poor both before and after surgery.
Discussion: Hip arthroscopy is a minimally invasive technique which allows us to diagnose and, at the same time, treat intra-articular pathology in a gentle manner. In arthroscopic surgery, correct diagnosis (X-ray, CT and MRI), correct patient positioning, the patient's body mass (obesity), selection of appropriate approaches to the joint, the surgeon's experience and the capabilities of the arthroscopic instruments all play an important role. We assume that, with increasing experience, the number of patients as well as the scope of diagnosed and treated pathological conditions of the hip joint will grow. The outcomes of operative arthroscopy were very good (improvement in 23 of 24 patients) and it is probable that this technique can slow down or prevent early wear-and-tear hip arthritis.
Conclusions: In our country, operative arthroscopy of the hip is only at its beginning. However, it can be assumed that, similarly to other large joints, it will soon become a widely used, indispensable diagnostic and therapeutic method.
abstract_id: PUBMED:21802250
The influence of obesity on early outcomes in primary hip arthroplasty. Obesity is considered an independent risk factor for adverse outcome after arthroplasty surgery. Data on 191 consecutive total hip arthroplasties were prospectively collected. Body mass index (BMI) was calculated for each patient and grouped into nonobese (BMI <30 kg/m(2)), obese (BMI 30-34.9 kg/m(2)), and morbidly obese (BMI ≥35 kg/m(2)). Primary outcomes included functional improvement (Oxford hip score, 6-minute walk test and Short Form-12 Health Survey general health questionnaire) and postoperative complications. Subgroup analysis of surgeons' overall perception of operative technical difficulty was also performed. This study shows that total hip arthroplasties in obese patients were perceived, by the surgeon, to be significantly more difficult. However, this did not translate into an increased risk of complications, longer operation time, greater blood loss, or suboptimal implant placement. In addition, our results suggest that obese patients gain similar benefit from hip arthroplasty as do nonobese patients, but morbidly obese patients have significantly worse 6-minute walk test scores at 6 weeks.
abstract_id: PUBMED:35598956
"Obesity paradox" has not an impact on minimally invasive anatomical lung resection. Introduction: The paradoxical benefit of obesity, the 'obesity paradox', has been analyzed in lung surgical populations with contradictory results. Our goal was assessing the relationship of body mass index (BMI) to acute outcomes after minimally invasive major pulmonary resections.
Methods: Retrospective review of consecutive patients who underwent pulmonary anatomical resection through a minimally invasive approach for the period 2014-2019. Patients were grouped as underweight, normal, overweight and obese type I, II and III. Adjusted odds ratios regarding postoperative complications (overall, respiratory, cardiovascular and surgical morbidity) were produced with their exact 95% confidence intervals. All tests were considered statistically significant at p<0.05.
Results: Among 722 patients included in the study, 37.7% had a normal BMI and 61.8% were overweight or obese. Compared with patients with a normal BMI, adjusted rates of pulmonary complications were significantly higher in obese type I patients (2.6% vs 10.6%, OR: 4.53 [95% CI: 1.86-12.11]) and obese type II-III patients (2.6% vs 10%, OR: 6.09 [95% CI: 1.38-26.89]). No significant differences were found regarding overall, cardiovascular or surgical complications among groups.
Conclusions: Obesity does not have favourable effects on early outcomes in patients undergoing minimally invasive anatomical lung resections, since the risk of respiratory complications in patients with BMI ≥30 kg/m(2) and BMI ≥35 kg/m(2) is 4.5 and 6 times higher, respectively, than that of patients with a normal BMI.
abstract_id: PUBMED:23021846
Hip arthroplasty. Total hip arthroplasty is a cost-effective surgical procedure undertaken to relieve pain and restore function to the arthritic hip joint. More than 1 million arthroplasties are done every year worldwide, and this number is projected to double within the next two decades. Symptomatic osteoarthritis is the indication for surgery in more than 90% of patients, and its incidence is increasing because of an ageing population and the obesity epidemic. Excellent functional outcomes are reported; however, careful patient selection is needed to achieve best possible results. The present economic situation in many developed countries will place increased pressure on containment of costs. Future demand for hip arthroplasty, especially in patients younger than 65 years, emphasises the need for objective outcome measures and joint registries that can track lifetime implant survivorship. New generations of bearing surfaces such as metal-on-metal, ceramic-on-ceramic, and metal-on-ceramic, and techniques such as resurfacing arthroplasty have the potential to improve outcomes and survivorship, but findings from prospective trials are needed to show efficacy. With the recall of some metal-on-metal bearings, new bearing surfaces have to be monitored carefully before they can be assumed to be better than traditional bearings.
abstract_id: PUBMED:35594620
Transient Hypocalcemia After Total Thyroidectomy: The Obesity Paradox at Work? Introduction: With increasing rates of obesity worldwide, a correlation between high body mass index (BMI) and postoperative morbidity after thyroid surgery remains unclear. Postoperative transient hypocalcemia is common after total thyroidectomy due to interruption of parathyroid function. This study examines the relationship between BMI and hypocalcemia after total thyroidectomy.
Materials And Methods: A retrospective review of prospectively collected data for 1135 patients who underwent total thyroidectomy for cancer, multinodular goiter (MNG), or Graves' disease between June 2009 and November 2020 at a single institution was performed. BMI groups followed the World Health Organization classification. Hypocalcemia was defined as serum calcium ≤8 mg/dL. Calcium levels measured on postoperative day 0 and the following morning were compared between the BMI groups.
Results: Of 1135 total thyroidectomy patients, 85% were women. The mean (± standard deviation) age of patients was 49 (± 13) y, and most were of Hispanic origin (64%). Overall, 41.5% of patients had cancer, 45% nontoxic MNG, 5.8% toxic MNG, and 12% Graves' disease. Stratified by BMI, 27% of patients were normal weight, 34% overweight, and 39% obese. Overall, overweight and obese patients experienced less transient hypocalcemia at both time points postoperatively compared to normal-weight patients (P = 0.01 and P = 0.009). Furthermore, overweight and obese patients with Graves' disease experienced less transient hypocalcemia at both time points (P = 0.04 and P = 0.05). There was no statistical difference in other groups.
Conclusions: A protective role of higher BMI, or an "obesity paradox," for postoperative hypocalcemia may exist in obese patients after total thyroidectomy.
Answer: The concept of the obesity paradox, which suggests that obesity may confer some protection against certain health outcomes, has been explored in the context of early postoperative complications after hip surgery. The evidence from the provided abstracts is mixed, with some studies supporting the existence of the obesity paradox in this setting, while others do not.
In a retrospective chart review (PUBMED:27290953), BMI classification was found to be a predictor of early postoperative complications in patients undergoing total hip arthroplasty. The complication rates demonstrated a J-curve distribution, with the highest morbidity in the underweight group, a decrease to nadir in the overweight and obese class I groups, and then higher again in classes II and III. This suggests that being overweight or having class I obesity may be associated with a lower risk of early complications, consistent with the obesity paradox.
However, another study analyzing the ACS-NSQIP Registry (PUBMED:29480892) found that the obesity paradox may be an artifact of selection bias, as morbidly obese patients had a reduced risk of death within 30 days after nonurgent hip surgery but not for urgent hip surgery. This indicates that the obesity paradox may not apply when patient selection is minimized, such as in urgent surgeries.
A study on superobese patients (PUBMED:33165206) found that these patients had significantly higher odds of major complications within 30 days after hip fracture surgery compared with all other patients, suggesting that the obesity paradox does not apply to superobese patients.
In summary, while some evidence supports the existence of the obesity paradox in early postoperative complications after hip surgery, particularly for overweight and class I obese patients, other studies suggest that this may not hold true for superobese patients or when considering the potential for selection bias in nonurgent surgeries. Therefore, the application of the obesity paradox to early postoperative complications after hip surgery is not clear-cut and may vary depending on the specific patient population and surgical context. |
Instruction: Are magnetic resonance imaging recovery and laxity improvement possible after anterior cruciate ligament rupture in nonoperative treatment?
Abstracts:
abstract_id: PUBMED:24951134
Are magnetic resonance imaging recovery and laxity improvement possible after anterior cruciate ligament rupture in nonoperative treatment? Purpose: This study aimed to determine whether anterior cruciate ligament (ACL) features on magnetic resonance imaging (MRI) and knee laxity are improved 2 years after ACL rupture treated nonoperatively and to analyze the relation between changes in scores of ACL features and changes in laxity.
Methods: One hundred fifty-four eligible patients were included in a prospective multicenter cohort study with 2-year follow-up. Inclusion criteria were (1) ACL rupture diagnosed by physical examination and MRI, (2) MRI within 6 months after trauma, and (3) age 18 to 45 years. Laxity tests and MRI were performed at baseline and at 2-year follow-up. Fifty of the 143 patients for whom all MRI data were available were treated nonoperatively and were included in this study. Nine ACL features were scored using MRI: fiber continuity, signal intensity, slope of the ACL with respect to the Blumensaat line, distance between the Blumensaat line and the ACL, tension, thickness, clear boundaries, assessment of original insertions, and assessment of the intercondylar notch. A total score was determined by summing the scores for each feature.
Results: Fiber continuity improved in 30 patients (60%), and the empty intercondylar notch resolved for 22 patients (44%). Improvement in other ACL features ranged from 4% to 28%. Sixteen patients (32%) improved on the Lachman test (change from soft to firm end points [n = 14]; decreased anterior translation [n = 2]), one patient (2%) showed improvement with the KT-1000 arthrometer (MEDmetric, San Diego, CA) and 4 patients (8%) improved on the pivot shift test. Improvement on the Lachman test was moderately negatively associated with the total score of ACL features at follow-up. Analyzing ACL features separately showed that only signal intensity improvement, clear boundaries, and intercondylar notch assessment were positively associated with improvement on the Lachman test.
Conclusions: Two years after ACL rupture and nonoperative management, patients experienced partial recovery on MRI, and some knee laxity improvement was present. Improvement of ACL features on MRI correlates moderately with improved laxity.
Level Of Evidence: Level II, Prospective comparative study.
abstract_id: PUBMED:12368642
The prevalence of soft tissue injuries in nonoperative tibial plateau fractures as determined by magnetic resonance imaging. Objective: To determine the incidence of meniscus tears and complete ligament disruption in nondisplaced and minimally displaced tibial plateau fractures, which are otherwise amenable to nonoperative management.
Design: Prospective clinical study.
Setting: Level I urban trauma center.
Intervention: Magnetic resonance imaging of 20 consecutive nonoperative tibial plateau fractures.
Results: Magnetic resonance imaging was performed on 20 consecutive nonoperative (nondisplaced or minimally displaced) tibial plateau fractures to determine the frequency of significant soft tissue injuries. Ninety percent (18 of 20) had magnetic resonance imaging-diagnosed significant injuries to the soft tissues, including 80% (16 of 20) with meniscal tears, and 40% (8 of 20) with complete ligament disruptions.
Conclusions: This study found a high prevalence of soft tissue injuries with nondisplaced fractures of the tibial plateau and cautions the physician and patient with respect to future knee function and possible arthrosis.
abstract_id: PUBMED:12642263
Acute grade III medial collateral ligament injury of the knee associated with anterior cruciate ligament tear. The usefulness of magnetic resonance imaging in determining a treatment regimen. Background: The appropriate management of acute grade III medial collateral ligament injury when it is combined with a torn anterior cruciate ligament has not been determined.
Hypothesis: Magnetic resonance imaging grading of grade III medial collateral ligament injury in patients who also have anterior cruciate ligament injury correlates with the outcome of their nonoperative treatment.
Study Design: Prospective cohort study.
Methods: Seventeen patients were first treated nonoperatively with bracing. Eleven patients with restored valgus stability received anterior cruciate ligament reconstruction only, and six with residual valgus laxity also received medial collateral ligament surgery.
Results: Magnetic resonance imaging depicted complete disruption of the superficial layer of the medial collateral ligament in all 17 patients and disruption of the deep layer in 14. Restoration of valgus stability was significantly correlated with the location of superficial fiber damage. Damage was evident over the whole length of the superficial layer in five patients, and all five patients had residual valgus laxity despite bracing. Both groups had good-to-excellent results 5 years later.
Conclusions: Location of injury in the superficial layer may be useful in predicting the outcome of nonoperative treatment for acute grade III medial collateral ligament lesions combined with anterior cruciate ligament injury.
abstract_id: PUBMED:10616814
Magnetic resonance imaging of the postoperative knee. Due to the recent development of arthroscopic techniques in meniscal surgery and anterior cruciate ligament reconstruction, an increasing number of postoperative patients are referred for a magnetic resonance examination of the knee because of recurrent injury. In contrast to the nonoperative patient, T2-weighted sequences and, in equivocal cases, magnetic resonance arthrography play the most important role in the evaluation of a possible meniscal retear. In patients with anterior cruciate ligament reconstruction, the changes in the magnetic resonance appearance of the anterior cruciate ligament graft during the first year after surgery must be considered in the diagnosis of retears. Recent developments in articular cartilage defect repair and the possible role of magnetic resonance imaging in the follow-up are discussed.
abstract_id: PUBMED:33553452
A Comparison of Nonoperative and Operative Treatment of Type 2 Tibial Spine Fractures. Background: Tibial spine fractures (TSFs) are typically treated nonoperatively when nondisplaced and operatively when completely displaced. However, it is unclear whether displaced but hinged (type 2) TSFs should be treated operatively or nonoperatively.
Purpose: To compare operative versus nonoperative treatment of type 2 TSFs in terms of overall complication rate, ligamentous laxity, knee range of motion, and rate of subsequent operation.
Study Design: Cohort study; Level of evidence, 3.
Methods: We reviewed 164 type 2 TSFs in patients aged 6 to 16 years treated between January 1, 2000, and January 31, 2019. Excluded were patients with previous TSFs, anterior cruciate ligament (ACL) injury, femoral or tibial fractures, or grade 2 or 3 injury of the collateral ligaments or posterior cruciate ligament. Patients were placed according to treatment into the operative group (n = 123) or nonoperative group (n = 41). The only patient characteristic that differed between groups was body mass index (22 [nonoperative] vs 20 [operative]; P = .02). Duration of follow-up was longer in the operative versus the nonoperative group (11 vs 6.9 months). At final follow-up, 74% of all patients had recorded laxity examinations.
Results: At final follow-up, the nonoperative group had more ACL laxity than did the operative group (P < .01). Groups did not differ significantly in overall complication rate, reoperation rate, or total range of motion (all, P > .05). The nonoperative group had a higher rate of subsequent new TSFs and ACL injuries requiring surgery (4.9%) when compared with the operative group (0%; P = .01). The operative group had a higher rate of arthrofibrosis (8.9%) than did the nonoperative group (0%; P = .047). Reoperation was most common for hardware removal (14%), lysis of adhesions (6.5%), and manipulation under anesthesia (6.5%).
Conclusion: Although complication rates were similar between nonoperatively and operatively treated type 2 TSFs, patients treated nonoperatively had higher rates of residual laxity and subsequent tibial spine and ACL surgery, whereas patients treated operatively had a higher rate of arthrofibrosis. These findings should be considered when treating patients with type 2 TSF.
abstract_id: PUBMED:35058031
Validation of a magnetic resonance imaging based method to study passive knee laxity: An in-situ study. Knee laxity can be described as an increased anterior tibial translation (ATT) or decreased stiffness of the tibiofemoral joint under an applied force. Küpper et al. (2013, 2016) and Westover et al. (2016) previously developed and reported on a magnetic resonance (MR)-based in vivo measure of knee laxity. In this study, the application of an in situ knee loading apparatus (ISKLA) is presented as a step toward validating the MR-based methodology for measuring ATT and stiffness. The ISKLA is designed to measure these outcome variables using MR imaging and is validated against a gold-standard ElectroForce mechanical test instrument (TA Instruments 3550). Accuracy was assessed through an in situ experimental setup by testing four cadaveric specimens with both the MR-based methodology and in the ElectroForce system. The outcome of the current study showed that the MR-based ATTs and stiffness measurements using the ISKLA were within 1.44-2.10 mm and 0.16-6.14 N/mm, respectively, of the corresponding values measured by the gold standard system. An excellent ICC was observed for ATT (0.97) and good ICC for stiffness (0.87) between the MR and ElectroForce-based systems across all target force levels. These findings suggest that the MR-based approach can be used with satisfactory accuracy and correlation to the gold standard measure.
abstract_id: PUBMED:35488226
The lateral femoral notch sign and coronal lateral collateral ligament sign in magnetic resonance imaging failed to predict dynamic anterior tibial laxity. Purpose: To investigate the relationship between the lateral femoral notch sign as well as the coronal lateral collateral ligament (LCL) sign and anterior tibial translation using the GNRB arthrometer in patients with anterior cruciate ligament (ACL) injuries.
Methods: Forty-six patients with ACL injuries were retrospectively included from May 2020 to February 2022; four patients were excluded due to incomplete data. Magnetic resonance imaging (MRI) scans were reviewed for the lateral femoral notch sign and the coronal LCL sign. The GNRB arthrometer was used to evaluate the dynamic anterior tibial translation of the knee, and the side-to-side differences (SSDs) in tibial translation between the injured knee and the healthy knee were calculated at different force levels. Two types of slopes for the displacement-force curves were acquired.
Results: Six patients (14.3%) had a positive lateral femoral notch sign (notch depth > 2.0 mm), and 14 patients (33.3%) had a positive coronal LCL sign. The SSDs of the anterior tibial translations under different loads, as well as the slopes of the displacement-force curves, did not differ between the positive and negative notch sign groups (all p > 0.05) or between the positive and negative coronal LCL sign groups (all p > 0.05). Meanwhile, the measured notch depth and notch length were also not significantly correlated with the anterior tibial translation SSD in the GNRB.
Conclusion: The presence of the lateral femoral notch sign and the coronal LCL sign did not indicate greater dynamic tibial laxity as measured using the GNRB.
abstract_id: PUBMED:37715506
Correlation Between the Location and Distance of Kissing Contusions and Knee Laxity in Acute Noncontact ACL Injury. Background: Bone bruise (BB) and kissing contusion are common features of acute anterior cruciate ligament (ACL) injury on magnetic resonance imaging (MRI). The correlation between the location and distance of kissing contusions and knee laxity remains unclear.
Purpose: To determine the significance of different patterns of BB in acute noncontact ACL injury and assess the correlation between the location and distance of kissing contusions and the severity of knee laxity.
Study Design: Cross-sectional study; Level of evidence, 3.
Methods: A total of 205 patients with acute noncontact ACL injury undergoing arthroscopic treatment between January 2021 and May 2022 were included in this retrospective analysis. Patients were grouped according to the different patterns of BB. The type of ACL injury and concomitant injuries were analyzed on MRI and confirmed by arthroscopy. Anterior knee laxity was assessed by the Ligs digital arthrometer and stress radiography, and rotational knee laxity was assessed by the intraoperative pivot-shift test. The MRI parameters of the location and distance of kissing contusions were measured to assess their correlations with the severity of knee laxity.
Results: Of the 205 patients with acute noncontact ACL injury, 38 were in the non-BB group and 167 were in the BB group, the latter including 32 with the isolated BB on the lateral tibial plateau and 135 with kissing contusions. There was no significant difference in the mean time from initial injury to MRI scan between the non-BB group and the BB group (14.34 ± 2.92 vs 15.17 ± 2.86 days; P = .109) or between the isolated BB subgroup and the kissing contusion subgroup (14.94 ± 2.92 vs 15.23 ± 2.85 days; P = .605). The side-to-side difference (SSD) in anterior knee laxity and the incidences of complete ACL injury, concomitant injuries, and high-grade pivot-shift test were significantly higher in the BB group than in the non-BB group, and in the kissing contusion subgroup compared with the isolated BB subgroup. The kissing contusion index of the lateral femoral condyle (LFC) and the sagittal distance of kissing contusions were significantly correlated with the SSD in anterior knee laxity and the grade of pivot-shift test (P < .001).
Conclusion: The presence of BB, in particular the appearance of kissing contusions, was related to greater knee laxity and higher incidences of complete ACL injury and concomitant injuries in acute noncontact ACL injury. For patients with kissing contusions, as the location of BB on the LFC moved forward and the distance between kissing contusions increased, anterior and rotational knee laxity became more serious.
abstract_id: PUBMED:11421523
Nonoperative treatment of an interosseous ganglion cyst. Ganglion cysts of the knee are being reported more frequently secondary to an increased rate of magnetic resonance imaging studies. Although knee pain is the impetus for imaging, ganglion cysts are often incidental findings. Nonoperative treatment is a successful therapeutic option. We report a patient with variable pain presentations over the course of her treatment. The pattern of complaints pointed to different primary etiologies about the knee, but all were common to an interosseous ganglion cyst. A stepwise assessment and expansion of the differential diagnosis allowed for appropriate utilization of modalities and limited morbidity with nonoperative therapy.
abstract_id: PUBMED:31054734
Editorial Commentary: Magnetic Resonance Imaging of the Knee Anterolateral Ligament: Does It Really Matter? Great interest remains in the structures of the knee anterolateral complex and how they work synergistically with the anterior cruciate ligament to control anterolateral rotatory laxity. Many studies have now used magnetic resonance imaging to assess the degree of damage to the anterolateral ligament. The systematic review described in this commentary rigorously highlights the many deficiencies that exist within our current understanding of the imaging analysis of these structures. Marked variability in the definition of anterolateral ligament injury, significant methodological differences, and the lack of a gold standard reference make it very challenging to translate the findings of these imaging studies into clinical practice. More information is required to fully understand the injury pattern, and then clinical studies are needed to guide treatment. Hopefully we will then have the ability to better treat our patients with these challenging complex laxity patterns that exist over and above an isolated anterior cruciate ligament injury.
Answer: Yes, magnetic resonance imaging (MRI) recovery and laxity improvement are possible after anterior cruciate ligament (ACL) rupture in nonoperative treatment. A study found that two years after ACL rupture and nonoperative management, patients experienced partial recovery on MRI, and some knee laxity improvement was present. Improvement of ACL features on MRI, such as fiber continuity, signal intensity, and clear boundaries, was moderately associated with improved laxity. Specifically, fiber continuity improved in 60% of patients, and the empty intercondylar notch resolved for 44% of patients. Improvement in other ACL features ranged from 4% to 28%. Additionally, 32% of patients improved on the Lachman test, 2% showed improvement with the KT-1000 arthrometer, and 8% improved on the pivot shift test (PUBMED:24951134). |
Instruction: Consent for use of personal information for health research: do people with potentially stigmatizing health conditions and the general public differ in their opinions?
Abstracts:
abstract_id: PUBMED:19630941
Consent for use of personal information for health research: do people with potentially stigmatizing health conditions and the general public differ in their opinions? Background: Stigma refers to a distinguishing personal trait that is perceived as or actually is physically, socially, or psychologically disadvantageous. Little is known about the opinion of those who have more or less stigmatizing health conditions regarding the need for consent for use of their personal information for health research.
Methods: We surveyed the opinions of people 18 years and older with seven health conditions. Participants were drawn from physicians' offices and clinics in southern Ontario and from a cross-Canada marketing panel of individuals with the target health conditions. For each of five research scenarios presented, respondents chose one of five consent choices: (1) no need for me to know; (2) notice with opt-out; (3) broad opt-in; (4) project-specific permission; and (5) this information should not be used. Consent choices were regressed onto demographics, health condition, and attitude measures of privacy, disclosure concern, and the benefits of health research. We conducted focus groups to discuss possible reasons for observed consent choices.
Results: We observed substantial variation in the control that people wish to have over use of their personal information for research. However, consent choice profiles were similar across health conditions, possibly due to sampling bias. Research involving profit or requiring linkage of health information with income, education, or occupation was associated with more restrictive consent choices. People were more willing to link their health information with biological samples than with information about their income, occupation, or education.
Conclusions: The heterogeneity in consent choices suggests individuals should be offered some choice in the use of their information for different types of health research, even if limited to selectively opting out. Some of the implementation challenges could be designed into the interoperable electronic health record. However, many questions remain, including how best to capture the opinions of those who are more privacy sensitive.
abstract_id: PUBMED:27527514
A survey of patient perspectives on the research use of health information and biospecimens. Background: Personal health information and biospecimens are valuable research resources essential for the advancement of medicine and protected by national standards and provincial statutes. Research ethics and privacy standards attempt to balance individual interests with societal interests. However these standards may not reflect public opinion or preferences. The purpose of this study was to assess the opinions and preferences of patients with kidney disease about the use of their health information and biospecimens for medical research.
Methods: A 45-item survey was distributed to a convenience sample of patients at an outpatient clinic in a large urban centre. The survey briefly addressed sociodemographic and illness characteristics. Opinions were sought on the research use of health information and biospecimens including consent preferences.
Results: Two hundred eleven of 400 distributed surveys were completed (response rate 52.8 %). Respondents were generally supportive of medical research and trusting of researchers. Many respondents supported the use of their information and biospecimens for health research and also preferred that consent be sought for the use of health information and biospecimens. Some supported the use of their information and biospecimens for research without consent. There were significant differences in the opinions people offered regarding the research use of biospecimens compared to health information. Some respondent perspectives about consent were at odds with current regulatory and legal standards.
Conclusions: Clinical health data and biospecimens are valuable research resources, critical to the advancement of medicine. Use of these data for research requires balancing respect for individual autonomy and privacy with the societal interest in the greater good. Incongruence between some respondent perspectives and the regulatory standards suggests a need both for public education and for a review of legislation to increase understanding and ensure the public's trust is maintained.
abstract_id: PUBMED:19533044
Public health genetics. Fundamental rights aspects The fundamental right of "informational self-determination" (protection of personal data) protects the individual against collection, storage, use and disclosure of her/his personal data - including genetic data - without her/his informed consent. However, in cases of overriding public interest, limitations of this right are deemed legitimate. Public health, expressly guaranteed in some German state constitutions, may constitute such overriding public interest and justify corresponding state measures as long as they respect the principle of proportionality.
abstract_id: PUBMED:21071570
Public attitudes to the use in research of personal health information from general practitioners' records: a survey of the Irish general public. Introduction: Understanding the views of the public is essential if generally acceptable policies are to be devised that balance research access to general practice patient records with protection of patients' privacy. However, few large studies have been conducted about public attitudes to research access to personal health information.
Methods: A mixed methods study was performed. Informed by focus groups and literature review, a questionnaire was designed which assessed attitudes to research access to personal health information and factors that influence these. A postal survey was conducted of an electoral roll-based sample of the adult population of Ireland.
Results: Completed questionnaires were returned by 1575 (40.6%). Among the respondents, 67.5% were unwilling to allow GPs to decide when researchers could access identifiable personal health information. However, 89.5% said they would agree to ongoing consent arrangements, allowing the sharing by GPs of anonymous personal health information with researchers without the need for consent on a study-by-study basis. Increasing age (per 10-year increment), being retired, and having only primary-level education were significantly associated with an increased likelihood of agreeing that any personal health information could be shared on an ongoing basis: OR 1.39 (95% CI 1.18 to 1.63), 2.00 (95% CI 1.22 to 3.29) and 3.91 (95% CI 1.95 to 7.85), respectively.
Conclusions: Although survey data can be prone to response biases, this study suggests that prior consent agreements allowing the supply by GPs of anonymous personal health information to researchers may be widely supported, and that populations willing to opt in to such arrangements may be sufficiently representative to facilitate valid and robust consent-dependent observational research.
abstract_id: PUBMED:31176040
Personal health information in research: Perceived risk, trustworthiness and opinions from patients attending a tertiary healthcare facility. Background: Personal health information is a valuable resource for the advancement of research. In order to achieve a comprehensive reform of data infrastructure in Australia, both public engagement and the building of social trust are vital. In light of this, we conducted a study to explore the opinions, perceived risks and trustworthiness regarding the use of personal health information for research in a sample of the public attending a tertiary healthcare facility.
Methods: The Consumer Opinions of Research Data Sharing (CORDS) study was a questionnaire-based design with 249 participants who were attending a public tertiary healthcare facility located on the Gold Coast, Australia. The questionnaire was designed to explore opinions and evaluate trust and perceived risk in research that uses personal health information. Concept analysis was used to identify key dimensions of perceived risk.
Results: Overall, participants were supportive of research, highly likely to participate, and mostly willing to share their personal health information. However, where the perceived risk of data misuse was high and trust in others was low, participants expressed hesitation to share particular types of information. Performance, physical and privacy risks were identified as key dimensions of perceived risk.
Conclusion: This study highlights that while participant views on the use of personal health information in research are mostly positive, support for research decreases where there is perceived risk in an environment of low trust. The three key findings of this research are that willingness to share data is contingent upon: (i) data type; (ii) risk perception; and (iii) trust in who is accessing the data. Understanding which factors play a key role in a person's decision to share their personal health information for research is vital to securing a social license.
abstract_id: PUBMED:17712084
Alternatives to project-specific consent for access to personal information for health research: what is the opinion of the Canadian public? Objectives: This study sought to determine public opinion on alternatives to project-specific consent for use of their personal information for health research.
Design: The authors conducted a fixed-response random-digit dialed telephone survey of 1,230 adults across Canada.
Measurements: We measured attitudes toward privacy and health research; trust in different institutions to keep information confidential; and consent choice for research use of one's own health information involving medical record review, automated abstraction of information from the electronic medical record, and linking education or income with health data.
Results: Support was strong for both health research and privacy protection. Studying communicable diseases and quality of health care had greatest support (85% to 89%). Trust was highest for data institutes, university researchers, hospitals, and disease foundations (78% to 80%). Four percent of respondents thought information from their paper medical record should not be used at all for research, 32% thought permission should be obtained for each use, 29% supported broad consent, 24% supported notification and opt out, and 11% felt no need for notification or consent. Opinions were more polarized for automated abstraction of data from the electronic medical record. Respondents were more willing to link education with health data than income.
Conclusions: Most of the public supported alternatives to study-specific consent, but few supported use without any notification or consent. Consent choices for research use of one's health information should be documented in the medical record. The challenge remains how best to elicit those choices and ensure that they are up-to-date.
abstract_id: PUBMED:11794835
Health information: reconciling personal privacy with the public good of human health. The success of the health care system depends on the accuracy, correctness and trustworthiness of the information, and the privacy rights of individuals to control the disclosure of personal information. A national policy on health informational privacy should be guided by ethical principles that respect individual autonomy while recognizing the important collective interests in the use of health information. At present there are no adequate laws or constitutional principles to help guide a rational privacy policy. The laws are scattered and fragmented across the states. Constitutional law is highly general, without important specific safeguards. Finally, a case study is provided showing the important trade-offs that exist between public health and privacy. For a model public health law, see www.critpath.org/msphpa/privacy.
abstract_id: PUBMED:37808971
Overcoming personal information protection challenges involving real-world data to support public health efforts in China. In the information age, real-world data-based evidence can help extrapolate and supplement data from randomized controlled trials, which can benefit clinical trials and drug development and improve public health decision-making. However, the legitimate use of real-world data in China is limited due to concerns over patient confidentiality. The use of personal information is a core element of data governance in public health. In China's public health data governance, practical problems exist, such as balancing personal information protection and public value conflict. In 2021, China adopted the Personal Information Protection Law (PIPL) to provide a consistent legal framework for protecting personal information, including sensitive medical health data. Despite the PIPL offering critical legal safeguards for processing health data, further clarification is needed regarding specific issues, including the meaning of "separate consent," cross-border data transfer requirements, and exceptions for scientific research. A shift in the law and regulatory framework is necessary to advance public health research further and realize the potential benefits of combining real-world evidence and digital health while respecting privacy in the technological and demographic change era.
abstract_id: PUBMED:25506854
The importance of purpose: moving beyond consent in the societal use of personal health information. Background: Adoption of electronic health record systems has increased the availability of patient-level electronic health information.
Objective: To examine public support for secondary uses of electronic health information under different consent arrangements.
Design: National experimental survey to examine perceptions of uses of electronic health information according to patient consent (obtained vs. not obtained), use (research vs. marketing), and framing of the findings (abstract description without results vs. specific results).
Setting: Nationally representative survey.
Participants: 3064 African American, Hispanic, and non-Hispanic white persons (response rate, 65%).
Measurements: Appropriateness of health information use described in vignettes on a scale of 1 (not at all appropriate) to 10 (very appropriate).
Results: Mean ratings ranged from a low of 3.81 for a marketing use when consent was not obtained and specific results were presented to a high of 7.06 for a research use when consent was obtained and specific results were presented. Participants rated scenarios in which consent was obtained as more appropriate than when consent was not obtained (difference, 1.01 [95% CI, 0.69 to 1.34]; P<0.001). Participants rated scenarios in which the use was marketing as less appropriate than when the use was research (difference, -2.03 [CI, -2.27 to -1.78]; P<0.001). Unconsented research uses were rated as more appropriate than consented marketing uses (5.65 vs. 4.52; difference, 1.13 [CI, 0.87 to 1.39]).
Limitations: Participants rated hypothetical scenarios. Results could be vulnerable to nonresponse bias despite the high response rate.
Conclusion: Although approaches to health information sharing emphasize consent, public opinion also emphasizes purpose, which suggests a need to focus more attention on the social value of information use.
Primary Funding Source: National Human Genome Research Institute.
abstract_id: PUBMED:37647102
Trust and Health Information Exchanges: Qualitative Analysis of the Intent to Share Personal Health Information. Background: Digital health has the potential to improve the quality of care, reduce health care costs, and increase patient satisfaction. Patient acceptance and consent are a prerequisite for effective sharing of personal health information (PHI) through health information exchanges (HIEs). Patients need to form and retain trust in the system(s) they use to leverage the full potential of digital health. Germany is at the forefront of approving digital treatment options with cost coverage through statutory health insurance. However, the German population has a high level of technology skepticism and a low level of trust, providing a good basis to illuminate various facets of eHealth trust formation.
Objective: In a German setting, we aimed to answer the question, How does an individual form a behavioral intent to share PHI with an HIE platform? We discussed trust and informed consent through (1) synthesizing the main influence factor models into a complex model of trust in HIE, (2) providing initial validation of influence factors based on a qualitative study with patient interviews, and (3) developing a model of trust formation for digital health apps.
Methods: We developed a complex model of the formation of trust and the intent to share PHI. We provided initial validation of the influence factors through 20 qualitative, semistructured interviews in the German health care setting and used a deductive coding approach to analyze the data.
Results: We found that German patients show a positive intent to share their PHI with HIEs under certain conditions. These include (perceived) information security and a noncommercial organization as the recipient of the PHI. Technology experience, age, policy and regulation, and a disposition to trust play an important role in an individual's privacy concern, which, combined with social influence, affects trust formation on a cognitive and emotional level. We found a high level of cognitive trust in health care and noncommercial research institutions but distrust in commercial entities. We further found that in-person interactions with physicians increase trust in digital health apps and PHI sharing. Patients' emotional trust depends on disposition and social influences. To form their intent to share, patients undergo a privacy calculus. Hereby, the individual's benefit (eg, convenience), benefits for the individual's own health, and the benefits for public welfare often outweigh the perceived risks of sharing PHI.
Conclusions: With the higher demand for timely PHI, HIE providers will need to clearly communicate the benefits of their solutions and their information security measures to health care providers (physicians, nursing and administrative staff) and patients and include them as key partners to increase trust. Offering easy access and educational measures as well as the option for specific consent may increase patients' trust and their intention to share PHI.
Answer: The study described in PUBMED:19630941 investigated whether people with potentially stigmatizing health conditions and the general public differ in their opinions regarding the need for consent for the use of their personal information for health research. The results showed substantial variation in the control that people wish to have over the use of their personal information for research. However, consent choice profiles were similar across health conditions, which might be due to sampling bias. This suggests that there may not be a significant difference between the opinions of people with potentially stigmatizing health conditions and the general public regarding consent for the use of personal information in health research. The study also found that people were more willing to link their health information with biological samples than with information about their income, occupation, or education, indicating that the type of information linked to health data affects consent preferences. The heterogeneity in consent choices suggests that individuals should be offered some choice in the use of their information for different types of health research, even if limited to selectively opting out. |
Instruction: Is migration to Sweden associated with increased prevalence of risk factors for cardiovascular disease?
Abstracts:
abstract_id: PUBMED:18277190
Is migration to Sweden associated with increased prevalence of risk factors for cardiovascular disease? Background: The proportion of elderly immigrants in Sweden is increasing. This is an important issue considering that the prevalence of cardiovascular disease (CVD) is a global health problem and that CVD is one of the main causes of morbidity among the elderly. The aim of this study is to analyze whether there is an association between migration status, that is, being an elderly Iranian immigrant in Sweden as compared with being an elderly Iranian in Iran, and the prevalence of risk factors for CVD.
Design: Population-based cross-sectional study with face-to-face interviews.
Participants And Setting: A total of 176 Iranians in Stockholm and 300 Iranians in Tehran, aged 60-84 years.
Methods: The prevalence of general obesity, abdominal obesity, hypertension, smoking, and diabetes was determined. Unconditional logistic regression analysis was used to calculate odds ratios (ORs) with 95% confidence intervals (CIs) for outcomes.
Results: The age-adjusted risk of hypertension and smoking was higher in Iranian women and men in Sweden. OR for hypertension was 1.9 (95% CI: 1.1-3.2) for women and 3.1 (95% CI: 1.5-6.3) for men and OR for smoking was 6.9 (95% CI: 2.2-21.6) for women and 4.7 (95% CI: 2.0-11.0) for men. The higher risk for hypertension and smoking remained significant after accounting for age, socioeconomic status, and marital status. Abdominal obesity was found in nearly 80% of the women in both groups.
Conclusion: The findings show a strong association between migration status and the prevalence of hypertension and smoking. A major recommendation for public health is to increase awareness of CVD risk factors among elderly immigrants.
abstract_id: PUBMED:27737653
Human Puumala hantavirus infection in northern Sweden; increased seroprevalence and association to risk and health factors. Background: The rodent-borne Puumala hantavirus (PUUV) causes haemorrhagic fever with renal syndrome in central and northern Europe. The number of cases has increased, and northern Sweden experienced large outbreaks in 1998 and 2006-2007, which raised questions regarding the level of immunity in the human population.
Methods: A randomly selected population aged between 25 and 74 years from northern Sweden was invited during 2009 to participate in a WHO project for monitoring of trends and determinants in cardiovascular disease. Health and risk factors were evaluated, and sera from 1,600 participants were available for analysis of specific PUUV IgG antibodies using a recombinant PUUV nucleocapsid protein ELISA.
Results: The overall seroprevalence in the investigated population was 13.4 %, which is a 50 % increase compared to a similar study only two decades previously. The prevalence of PUUV IgG increased with age, and among 65-75 years it was 22 %. More men (15.3 %) than women (11.4 %) were seropositive (p < 0.05). The identified risk factors were smoking (OR = 1.67), living in rural areas (OR = 1.92), and owning farmland or forest (OR = 2.44). No associations were found between previous PUUV exposure and chronic lung disease, diabetes, hypertension, renal dysfunction, stroke or myocardial infarction.
Conclusions: PUUV is a common infection in northern Sweden, and there is a high lifetime risk of acquiring PUUV infection in endemic areas. Certain risk factors, such as living in rural areas and smoking, were identified. Groups at increased risk should be targeted for future vaccination when available, and should also be informed about appropriate protection from rodent secreta.
abstract_id: PUBMED:26928830
Seroprevalence and Risk Factors of Inkoo Virus in Northern Sweden. The mosquito-borne Inkoo virus (INKV) is a member of the California serogroup in the family Bunyaviridae, genus Orthobunyavirus. These viruses are associated with fever and encephalitis, although INKV infections are not usually reported and the incidence is largely unknown. The aim of the study was to determine the prevalence of anti-INKV antibodies and associated risk factors in humans living in northern Sweden. Seroprevalence was investigated using the World Health Organization Monitoring of Trends and Determinants in Cardiovascular Disease study, where a randomly selected population aged between 25 and 74 years (N = 1,607) was invited to participate. The presence of anti-INKV IgG antibodies was determined by immunofluorescence assay. Seropositivity for anti-INKV was significantly higher in men (46.9%) than in women (34.8%; P < 0.001). In women, but not in men, the prevalence increased somewhat with age (P = 0.06). The peak in seropositivity was 45-54 years for men and 55-64 years for women. Living in rural areas was associated with a higher seroprevalence. In conclusion, the prevalence of anti-INKV antibodies was high in northern Sweden and was associated with male sex, older age, and rural living. The age distribution indicates exposure to INKV at a relatively early age. These findings will be important for future epidemiological and clinical investigations of this relatively unknown mosquito-borne virus.
abstract_id: PUBMED:32424571
Burden and prevalence of prognostic factors for severe COVID-19 in Sweden. The World Health Organization and European Centre for Disease Prevention and Control suggest that individuals over the age of 70 years or with underlying cardiovascular disease, cancer, chronic obstructive pulmonary disease, asthma, or diabetes are at increased risk of severe COVID-19. However, the prevalence of these prognostic factors is unknown in many countries. We aimed to describe the burden and prevalence of prognostic factors of severe COVID-19 at national and county level in Sweden. We calculated the burden and prevalence of prognostic factors for severe COVID-19 based on records from the Swedish national health care and population registers for 3 years before 1st January 2016. 9,624,428 individuals were included in the study population. 22.1% had at least one prognostic factor for severe COVID-19 (2,131,319 individuals), and 1.6% had at least three factors (154,746 individuals). The prevalence of underlying medical conditions ranged from 0.8% with chronic obstructive pulmonary disease (78,516 individuals) to 7.4% with cardiovascular disease (708,090 individuals), and the county specific prevalence of at least one prognostic factor ranged from 19.2% in Stockholm (416,988 individuals) to 25.9% in Kalmar (60,005 individuals). We show that one in five individuals in Sweden is at increased risk of severe COVID-19. When compared with the critical care capacity at a local and national level, these results can aid authorities in optimally planning healthcare resources during the current pandemic. Findings can also be applied to underlying assumptions of disease burden in modelling efforts to support COVID-19 planning.
abstract_id: PUBMED:8354976
High serum insulin, insulin resistance and their associations with cardiovascular risk factors. The northern Sweden MONICA population study. Objectives: To estimate the prevalence of insulin resistance and high serum insulin levels and to investigate their relationship to other cardiovascular risk factors.
Design: Cross-sectional cardiovascular risk factor survey.
Setting: Northern Sweden.
Subjects: A subsample of the population-based Northern Sweden MONICA Study. This subsample underwent an oral glucose tolerance test after an overnight fast, and consisted of 354 men and 404 women in the 25-64-year age range.
Main Outcome Measures: Delineation of low insulin sensitivity and high serum insulin by the diagnostic test technique, prevalence of these variables and their associations with cardiovascular risk factors.
Results: The participants were classified into four subgroups by an insulin sensitivity index and fasting serum insulin. The combination of low insulin sensitivity and high serum insulin was present in 17% of the male and in 18% of the female 25-64-year-old population. In both sexes, this combination was closely associated (P < 0.001) with body mass index, waist-hip ratio, blood pressure and serum triglycerides, and correlated inversely with serum HDL cholesterol (P < 0.001). When high serum insulin was present as an isolated entity, it was as closely associated with other cardiovascular risk factors as was isolated low insulin sensitivity, except that impaired glucose tolerance occurred exclusively in the group with isolated low insulin sensitivity.
Conclusions: The combination of insulin resistance and high insulin levels is associated with a marked clustering of cardiovascular risk factors and is present in one-sixth of the middle-aged population in the north of Sweden.
abstract_id: PUBMED:27793513
Decreasing prevalence of abdominal aortic aneurysm and changes in cardiovascular risk factors. Objective: A significant reduction in the incidence of cardiovascular disease, including abdominal aortic aneurysm (AAA), has been observed in the past decades. In this study, a small but geographically well defined and carefully characterized population, previously screened for AAA and risk factors, was re-examined 11 years later. The aim was to study the reduction of AAA prevalence and associated factors.
Methods: All men and women aged 65 to 75 years living in the Norsjö municipality in northern Sweden in January 2010 were invited to an ultrasound examination of the abdominal aorta, registration of body parameters and cardiovascular risk factors, and blood sampling. An AAA was defined as an infrarenal aortic diameter ≥30 mm. Results were compared with a corresponding investigation conducted in 1999 in the same region.
Results: A total of 602 subjects were invited, of whom 540 (90%) accepted. In 2010, the AAA prevalence was 5.7% (95% confidence interval [CI], 2.8%-8.5%) among men compared with 16.9% (95% CI, 12.3%-21.6%) in 1999 (P < .001). The corresponding figure for women was 1.1% (95% CI, 0.0%-2.4%) vs 3.5% (95% CI, 1.2%-5.8%; P = .080). A low prevalence of smoking was observed in 2010 as well as in 1999, with only 13% and 10% current smokers, respectively (P = .16). Treatment for hypertension was significantly more common in 2010 (58% vs 44%; P < .001). Statins increased in the population (34% in 2010 vs 3% in 1999; P < .001), and the lipid profile in women had improved significantly between 1999 and 2010.
Conclusions: A highly significant reduction in AAA prevalence was observed during 11 years in Norsjö. Treatment for hypertension and with statins was more frequent, whereas smoking habits remained low. This indicates that smoking is not the only driver behind AAA occurrence and that lifestyle changes and treatment of cardiovascular risk factors may play an equally important role in the observed recent decline in AAA prevalence.
abstract_id: PUBMED:25857683
Increase in the Prevalence of Atrophic Gastritis Among Adults Age 35 to 44 Years Old in Northern Sweden Between 1990 and 2009. Background & Aims: Atrophic corpus gastritis (ACG) is believed to be an early precursor of gastric adenocarcinoma. We aimed to investigate trends of ACG in Northern Sweden, from 1990 through 2009, and to identify possible risk factors.
Methods: We randomly selected serum samples collected from 5284 participants in 1990, 1994, 1999, 2004, and 2009, as part of the population-based, cross-sectional Northern Sweden Multinational Monitoring of Trends and Determinants in Cardiovascular Disease study (ages, 35-64 y). Information was collected on sociodemographic, anthropometric, lifestyle, and medical factors using questionnaires. Serum samples were analyzed for levels of pepsinogen I to identify participants with functional ACG; data from participants with ACG were compared with those from frequency-matched individuals without ACG (controls). Blood samples were analyzed for antibodies against Helicobacter pylori and Cag pathogenicity island protein A. Associations were estimated with unconditional logistic regression models.
Results: Overall, 305 subjects tested positive for functional ACG, based on their level of pepsinogen I. The prevalence of ACG in participants age 55 to 64 years old decreased from 124 per 1000 to 49 per 1000 individuals between 1990 and 2009. However, the prevalence of ACG increased from 22 per 1000 to 64 per 1000 individuals among participants age 35 to 44 years old during this time period. Cag pathogenicity island protein A seropositivity was associated with risk for ACG (odds ratio, 2.29; 95% confidence interval, 1.69-3.12). Other risk factors included diabetes, low level of education, and high body mass index. The association between body mass index and ACG was confined to individuals age 35 to 44 years old; in this group, overweight and obesity were associated with a 2.8-fold and a 4.7-fold increased risk of ACG, respectively.
Conclusions: Among residents of Northern Sweden, the prevalence of ACG increased from 1990 through 2009, specifically among adults age 35 to 44 years old. The stabilizing seroprevalence of H pylori and the increasing prevalence of overweight and obesity might contribute to this unexpected trend. Studies are needed to determine whether these changes have affected the incidence of gastric cancer.
abstract_id: PUBMED:14660250
Diabetes and obesity in Northern Sweden: occurrence and risk factors for stroke and myocardial infarction. Aims: The authors describe the occurrence of diabetes and obesity in the population of Northern Sweden and the role of diabetes in cardiovascular disease.
Methods: Four surveys of the population aged 25 to 64 years were undertaken during a 14-year time span. Stroke events in subjects 35-74 years during 1985-92 and myocardial infarction in subjects 25-64 years 1989-93 were registered.
Results: The prevalence of diagnosed diabetes was 3.1 and 2.0% in men and women, respectively, and 2.6 and 2.7% for previously undiagnosed diabetes. During the 13-year observation period, BMI increased 0.96 kg/m² in men and 0.87 in women. The proportion of subjects with obesity (BMI ≥ 30) increased from 10.3% to 14.6% in men and from 12.5% to 15.7% in women. Hip circumference increased substantially more than waist circumference, leading to a decreasing waist-to-hip ratio (WHR). The relative risk for stroke or myocardial infarction was four to six times higher in a person with diabetes than in those without diabetes. The 28-day case fatality for myocardial infarction, but not for stroke, was significantly higher in both men and women with diabetes. Population-attributable risk for diabetes and stroke was 18% in men and 22% in women, and for myocardial infarction it was 11% in men and 17% in women.
Conclusion: Obesity is becoming more common, although of a more distal than central distribution. The burden of diabetes in cardiovascular diseases in Northern Sweden is high.
abstract_id: PUBMED:23740599
Prevalence of cardiovascular disorders and risk factors in two 75-year-old birth cohorts examined in 1976-1977 and 2005-2006. Background And Aims: The number of older people is increasing worldwide, and cardiovascular diseases are the major causes of death in western societies. This study examines birth cohort differences in cardiovascular disorders and risk factors in elderly Swedes.
Methods: Representative samples of 75-year-olds living in Gothenburg, Sweden, examined in 1976-1977 and in 2005-2006. Blood pressure, s-cholesterol, s-triglycerides, height, body weight, body mass index, history of myocardial infarction, angina pectoris and stroke/TIA, and diabetes mellitus were measured.
Results: The prevalence of total cardiovascular disorders, hypertension and hypercholesterolemia decreased, and the prevalence of stroke increased in both genders. The prevalence of cardiovascular disorders was higher in women than in men in 1976-1977, and higher in men than in women in 2005-2006. The decrease in blood pressure occurred independently of antihypertensive treatment. The prevalence of current smokers decreased in men and increased in women. The prevalence of life-time smokers and diabetes mellitus increased only in women. The proportion on antihypertensive treatment and overweight and obesity increased only in men. Hypertension, overweight and obesity were more common in women in 1976-1977. These sex differences were not observed in 2005-2006.
Conclusions: The overall prevalence of cardiovascular disorders decreased, and sex differences reversed between the 1970s and 2000s among Swedish septuagenarians. Our findings emphasize the importance of environmental factors, not only for the prevalence of cardiovascular disorders, but also as explanations for sex differences. Reasons for changes could be increased survival in those with disorders and risk factors, changes in lifestyle and diet, and better preventive strategies, such as treatment of hypercholesterolemia and hypertension.
abstract_id: PUBMED:19063738
Prevalence of cardiovascular risk factors and the metabolic syndrome in middle-aged men and women in Gothenburg, Sweden. Background: Random samples of 50-year-old men living in Gothenburg have been examined every 10th year since 1963 with a focus on cardiovascular risk factors. The aims of the study were to acquire up-to-date information about risk factors in the fifth cohort of 50-year-old men and women, to re-examine those who were 50 years of age in 1993, and to analyse the prevalence of the metabolic syndrome (MetSyn) using different definitions.
Methods: A random sample of men and women born in 1953 were examined in 2003-2004 for cardiovascular risk factors. Men born in 1943 and that participated in the examination in 1993 were also invited. Descriptive statistics were calculated.
Results: The participation rate among men and women born in 1953 was 60 and 67% respectively. Among men born in 1943, the participation rate was 87%. The prevalence of obesity was from 15 to 17% (body mass index, BMI >or= 30) in the three samples. The prevalence of known diabetes was 4% among the 50-year-old men and 6% among the 60-year-old men, and 2% among the women. Increased fasting plasma glucose varied substantially from 4 to 33% depending on cut-off level and gender. Mean cholesterol was 5.4 to 5.5 mmol/l. Smoking was more common among women aged 50 (26%) than among men aged 50 (22%) and 60 years (15%). The prevalence of the MetSyn varied with the definition used: from 10 to 15.8% among the women, from 16.1 to 26% among 50-year-old men, and from 19.9 to 35% among the 60-year-old men. Only 5% of the men and women had no risk factors.
Conclusion: This study provides up-to-date information about the prevalence of cardiovascular risk factors and the MetSyn in middle-aged Swedish men and women. Different definitions of the MetSyn create confusion regarding which definition to use.
Answer: Yes, migration to Sweden is associated with an increased prevalence of risk factors for cardiovascular disease (CVD). A study analyzing the association between migration status and the prevalence of CVD risk factors found that elderly Iranian immigrants in Sweden had a higher age-adjusted risk of hypertension and smoking compared to elderly Iranians in Iran. The odds ratio (OR) for hypertension was 1.9 for women and 3.1 for men, while the OR for smoking was 6.9 for women and 4.7 for men. This increased risk remained significant even after accounting for age, socioeconomic status, and marital status. Abdominal obesity, another risk factor for CVD, was found in nearly 80% of the women in both the immigrant group in Sweden and the group in Iran (PUBMED:18277190). |
Instruction: Do personality disorders predict negative treatment outcome in obsessive-compulsive disorders?
Abstracts:
abstract_id: PUBMED:23770674
Does time-intensive ERP attenuate the negative impact of comorbid personality disorders on the outcome of treatment-resistant OCD? Background And Objectives: There is growing interest regarding patients with obsessive-compulsive disorder (OCD) who do not fully respond to cognitive-behavioural therapy (CBT). Limited data are available on the role of comorbid personality disorders (CPDs) in the outcome of treatment-resistant OCD, despite the fact that CPDs are considered a predictor of a poorer outcome. This study investigated whether a time-intensive scheduling of treatment could be an effective strategy aimed at attenuating the negative influence of CPDs on outcome in a sample of 49 inpatients with a primary diagnosis of treatment-resistant OCD.
Method: 38 inpatients completed the five-week individual treatment consisting of daily and prolonged sessions of exposure with response prevention (ERP) delivered for 2 h in the morning and 2 h in the afternoon. 44% of the sample received a full diagnosis of one or more CPDs. Following a pre-post-test design, outcome measures included the Yale-Brown Obsessive-Compulsive Scale (Y-BOCS), Beck Depression Inventory-II (BDI-II) and Beck Anxiety Inventory (BAI).
Results: Data showed that the treatment was effective and indicated that CPDs were not a significant predictor of treatment failure.
Limitations: Future larger studies should evaluate the role of specific clusters of CPDs on the outcome of resistant OCD.
Conclusions: These findings suggest that an intensive treatment could be effective for severely ill patients who have not responded to weekly outpatient sessions and could also attenuate the negative impact of CPDs on outcome, evidencing the importance of a tailored therapeutic approach for patients who need a rapid reduction in OCD-related impairment.
abstract_id: PUBMED:17620165
Dissociation predicts symptom-related treatment outcome in short-term inpatient psychotherapy. Objective: Previous research has indicated that dissociation might be a negative predictor of treatment outcome in cognitive behavioural therapy for patients with obsessive-compulsive and anxiety disorders. Using a naturalistic design it was hypothesized that higher levels of dissociation predict poorer outcome in inpatients with affective, anxiety and somatoform disorders participating in a brief psychodynamic psychotherapy.
Method: A total of 133 patients completed the Symptom Check List (SCL-90), the German short version of the Dissociative Experiences Scale and the Inventory of Interpersonal Problems at the beginning and the end of treatment. The Global Severity Index (GSI) of the SCL-90 was chosen as outcome criterion.
Results: A total of 62.4% of study participants were classified as treatment responders, that is, they showed a statistically significant change of their GSI scores. Controlling for general psychopathology, the non-responders had significantly higher baseline dissociation scores than the responders. In a logistic regression analysis with non-response as a dependent variable, a comorbid personality disorder, low baseline psychopathology and high dissociation levels emerged as relevant predictors, but interpersonal problems and other comorbid disorders did not.
Conclusions: Dissociation has a negative impact on treatment outcome. It is suggested that dissociative subjects dissociate as a response to negative emotions arising in psychotherapy leading to a less favourable outcome. Additionally, dissociative patients may have an insecure attachment pattern negatively affecting the therapeutic relationship. Thus, dissociation may directly and indirectly influence the treatment process and outcome.
abstract_id: PUBMED:1990074
Effect of personality disorders on outcome of treatment. Although many clinicians have long believed that personality pathology may interfere with the effectiveness of treatment of axis I disorders, until recently there were no empirical studies on the subject. This report reviews the recent literature with regard to the following questions: a) Does personality pathology predict negative outcome of treatment for axis I disorders? b) If so, are there specific personality traits or disorders that account for such a negative outcome? The literature review reveals a robust finding that patients with personality pathology have a poorer response to treatment of axis I disorders than those without such pathology. Specific axis I disorders reported on include DSM-III major depression, panic disorder, and obsessive-compulsive disorder. Both inpatients and outpatients have been studied. There is too little literature to determine whether certain pathological personality traits are especially important, but there is enough to provide methodological guidance for future studies. Such studies should use standardized measures of personality and outcome, should match personality and nonpersonality groups on severity of the axis I disorder, and should be certain that axis I diagnoses are not confounded by axis II symptoms.
abstract_id: PUBMED:15967644
Do personality disorders predict negative treatment outcome in obsessive-compulsive disorders? A prospective 6-month follow-up study. Background: Comorbid personality disorders (PDs) are discussed as risk factors for a negative treatment outcome in obsessive-compulsive disorder (OCD). However, studies published so far have produced conflicting results. The present study examined whether PDs affect treatment outcome in patients with OCD.
Method: The treatment sample consisted of 55 patients with OCD who were consecutively referred to a Behaviour Therapy Unit for an in-patient or day-clinic treatment. Treatment consisted of an individualised and multimodal cognitive behaviour therapy (CBT, with or without antidepressive medication). Measurements were taken prior and after treatment and 6-month after admission.
Results: A large percentage of patients benefited from treatment irrespective of the presence of a PD and were able to maintain their improvement at follow-up. Duration of treatment was not prolonged in OCD patients with concomitant Axis II disorders. However, some specific personality traits (schizotypal, passive-aggressive) were baseline determinants for later treatment failure at trend level.
Conclusions: Results are encouraging for therapists working with patients co-diagnosed with Axis II disorders since these patients are not necessarily non-responders. The results stress the importance of a specifically tailored treatment approach based on an individual case formulation in OCD patients with complex symptomatology and comorbid Axis II disorders.
abstract_id: PUBMED:1444723
Effect of axis II diagnoses on treatment outcome with clomipramine in 55 patients with obsessive-compulsive disorder. We used the Structured Interview for DSM-III Personality Disorders to diagnose DSM-III personality disorders systematically in 55 patients with obsessive-compulsive disorder in the active-treatment cell of a controlled trial of clomipramine hydrochloride. Patients with a cluster A personality disorder had significantly higher obsessive-compulsive disorder severity scores at baseline, and the number of personality disorders was strongly related to baseline severity of obsessive-compulsive disorder symptoms. At the conclusion of the 12-week study, we found no significant difference in treatment outcome with clomipramine between those patients with at least one personality disorder and those with no personality disorders. However, the presence of schizotypal, borderline, and avoidant personality disorders, along with total number of personality disorders, did predict poorer treatment outcome. These variables were strongly related to having at least one cluster A personality disorder diagnosis, which was also a strong predictor of poorer outcome. Implications of these findings are discussed.
abstract_id: PUBMED:31704634
Predictors of treatment outcome in OCD: An interpersonal perspective. Although effective treatments for obsessive compulsive disorder (OCD) are increasingly available, a considerable percentage of patients fails to respond or relapses. Predictors associated with improved outcome of OCD were identified. However, information on interpersonal determinants is lacking. This study investigated the contribution of attachment style and expressed emotion to the outcome of exposure and response prevention (ERP), while accounting for previously documented intrapersonal (i.e., symptom severity and personality pathology) predictors. Using logistic regression analyses and multi-level modeling, we examined predictors of treatment completion and outcome among 118 adult OCD patients who entered ERP. We assessed outcome at post treatment, and at four and 13 months from treatment completion. OCD baseline severity and fearful attachment style emerged as the main moderators of treatment outcome. Severe and fearfully attached patients were more likely to dropout prematurely. The improvement of fearful clients was attenuated throughout treatment and follow-up compared to non-fearful clients. However, their symptom worsening at the long-term was also mitigated. Severe OCD patients had a more rapid symptom reduction during treatment and at follow-up, compared to less severe clients. The findings suggest that both baseline OCD severity and fearful attachment style play a role in the long-term outcome of ERP.
abstract_id: PUBMED:8103074
Effect of personality disorders on the treatment outcome of axis I conditions: an update. The authors review recent studies that assess the impact of personality pathology on the treatment outcomes of axis I disorders. Studies reported include both structured treatment trials and surveys of the outcome of naturalistic treatment. Personality diagnosis in each of the studies reviewed was established by a structured diagnostic instrument. In some of the studies examined, personality traits, not categorical personality diagnoses, are reported in relation to treatment outcome. The authors describe 17 studies published within the past 3 years, and discuss them in relation to a previous review that covered 21 earlier studies. Consistent with previous investigations, recent studies continue to describe an adverse impact of personality pathology on the treatment outcome of a wide range of axis I disorders. The authors examine new studies that describe the effect of specific aspects of personality dysfunction on outcome measures of axis I disorders. New developments in this area include the predictive importance of both personality traits and disorders as well as possible specificity of traits in predicting outcome in some circumstances.
abstract_id: PUBMED:9395153
Prediction of outcome and early vs. late improvement in OCD patients treated with cognitive behaviour therapy and pharmacotherapy. In this study, follow-up results of cognitive-behaviour therapy and of a combination of cognitive-behaviour therapy with a serotonergic antidepressant were determined. The study also examined factors that can predict this treatment effect, both in the long term and in the short term. In addition, it investigated whether differential prediction is possible for cognitive-behaviour therapy vs. a combination of cognitive-behaviour therapy with a serotonergic antidepressant. A total of 99 patients were included in the study. Treatment lasted 16 weeks, and a naturalistic follow-up measurement was made 6 months later. Of the 70 patients who completed the treatment, follow-up information was available for 61 subjects. Significant time effects were found on all outcome measures at both post-treatment measurement and follow-up. No differences in efficacy were found between the treatment conditions. Effectiveness at post-treatment measurement appears to predict success at follow-up. However, 17 of the 45 non-responders at the post-treatment measurement had become responders by the follow-up. The severity of symptoms, motivation for treatment and the dimensional score on the PDQ-R for cluster A personality disorder appear to predict treatment outcome. No predictors were found that related specifically to cognitive-behaviour therapy or combined treatment. These results indicate that the effectiveness of cognitive-behaviour therapy or a combination of cognitive-behaviour therapy and fluvoxamine at the post-treatment measurement is maintained at follow-up. However, non-response at post-treatment does not always imply non-response at follow-up. Patients with more severe symptoms need a longer period of therapy to become responders. Although predictors for treatment success were found, no evidence was found to determine the choice of one of the treatment modalities.
abstract_id: PUBMED:18833426
Personality traits and treatment outcome in obsessive-compulsive disorder. Objective: Comorbidity with personality disorders in obsessive-compulsive patients has been widely reported. About 40% of obsessive-compulsive patients do not respond to first line treatments. Nevertheless, there are no direct comparisons of personality traits between treatment-responsive and non-responsive patients. This study investigates differences in personality traits based on Cloninger's Temperament and Character Inventory scores between two groups of obsessive-compulsive patients classified according to treatment outcome: responders and non-responders.
Method: Forty-four responsive and forty-five non-responsive obsessive-compulsive patients were selected. Subjects were considered treatment-responsive (responder group) if, after having received treatment with any conventional therapy, they had presented at least a 40% decrease in the initial Yale-Brown Obsessive Compulsive Scale score, had rated "better" or "much better" on the Clinical Global Impressions scale; and had maintained improvement for at least one year. Non-responders were patients who did not achieve at least a 25% reduction in Yale-Brown Obsessive Compulsive Scale scores and had less than minimal improvement on the Clinical Global Impressions scale after having received treatment with at least three selective serotonin reuptake inhibitors (including clomipramine), and at least 20 hours of cognitive behavioral therapy. Personality traits were assessed using Temperament and Character Inventory.
Results: Non-responders scored lower in self-directedness and showed a trend to score higher in persistence than responders did.
Conclusion: This study suggests that personality traits, especially self-directedness, are associated with poor treatment response in obsessive-compulsive patients.
abstract_id: PUBMED:16092908
Prospective study of adolescents with subsyndromal psychosis: characteristics and outcome. Objective: The aim of this study was to examine the characteristics and outcome of adolescents with psychotic disorder not otherwise specified (PsyNOS) and brief psychotic disorder (BrPsy), two neglected subsyndromal diagnostic entities.
Methods: As part of an ongoing, naturalistic study investigating adolescents considered to be prodromal for schizophrenia, 29 youngsters (mean age, 16.2 +/- 2.7 years) with PsyNOS or BrPsy were identified as theoretically at highest risk for schizophrenia and followed for over 6 (mean, 22.8 +/- 19.4) months.
Results: Contrary to our expectations, only 7 of the 26 individuals (27.0%) with follow-up data developed schizophrenia or schizoaffective disorder, and only 2 subjects (7.7%) retained their diagnosis of BrPsy/PsyNOS. The most frequent other diagnoses at follow-up were mood disorders (34.6%), personality disorders (11.5%), and obsessive-compulsive disorder (7.7%). Regarding severity of outcome, 38.5% of the patients progressed to a syndromal psychotic disorder, 23.1% continued to have attenuated positive symptoms, and 38.4% improved to having attenuated negative symptoms only, or no positive or negative symptoms. BrPsy was associated with lower maximum levels of negative symptoms (p = 0.02) and higher likelihood of symptom remission (p = 0.02).
Conclusions: This study indicates that psychotic symptoms not fulfilling criteria for schizophrenia or a psychotic mood disorder are unreliable predictors of a syndromal psychotic disorder outcome at 2 years. Long-term studies of PsyNOS and BrPsy are needed to clarify where these disorders fall in the developmental course of schizophrenia.
Answer: Yes, personality disorders have been found to predict negative treatment outcomes in obsessive-compulsive disorders (OCD). The literature suggests that patients with personality pathology generally have a poorer response to treatment for axis I disorders, including OCD (PUBMED:1990074). Specifically, the presence of comorbid personality disorders (PDs) has been discussed as a risk factor for a negative treatment outcome in OCD (PUBMED:15967644). Studies have shown that certain personality traits, such as schizotypal and passive-aggressive traits, may be baseline determinants for later treatment failure (PUBMED:15967644). Additionally, the presence of schizotypal, borderline, and avoidant personality disorders, as well as the total number of personality disorders, have been found to predict poorer treatment outcomes (PUBMED:1444723).
However, it is important to note that not all patients with comorbid PDs are non-responders to treatment. A study found that a large percentage of patients benefited from treatment irrespective of the presence of a PD and were able to maintain their improvement at follow-up (PUBMED:15967644). Furthermore, time-intensive exposure with response prevention (ERP) has been shown to be effective for severely ill patients with treatment-resistant OCD and may attenuate the negative impact of comorbid personality disorders on treatment outcome (PUBMED:23770674).
In summary, while personality disorders can predict a negative treatment outcome in OCD, this is not universally the case, and intensive, tailored therapeutic approaches may help mitigate this negative impact. |
Instruction: ADHD and alcohol dependence: a common genetic predisposition?
Abstracts:
abstract_id: PUBMED:16759339
No association of dopamine receptor sensitivity in vivo with genetic predisposition for alcoholism and DRD2/DRD3 gene polymorphisms in alcohol dependence. This study sought to examine dopamine receptor sensitivity among alcoholics in vivo and to explore whether this sensitivity might be associated with functional variations of dopamine D2 (DRD2) and D3 (DRD3) receptor genes along with a genetic predisposition for alcoholism as reflected by an alcohol-dependent first-degree relative. We analyzed the -141C Ins/Del polymorphism in the promoter region of the DRD2 gene and the Ser9Gly (BalI) polymorphism in exon 1 of the DRD3 gene in 74 alcohol-dependent Caucasian men with or without genetic predisposition for alcoholism. In vivo dopamine receptor sensitivity was assessed by measuring apomorphine-induced growth hormone release. A three-way analysis of variance revealed no significant effects of DRD2, DRD3 genotypes and genetic predisposition on dopamine receptor sensitivity. Given the explorative and preliminary character of this investigation, we cannot provide evidence that in alcohol-dependent Caucasian men a genetic predisposition for alcoholism along with functional variants of the DRD2 and DRD3 genes are associated with differences in dopamine receptor sensitivity.
abstract_id: PUBMED:15570522
ADHD and alcohol dependence: a common genetic predisposition? Introduction: Nearly 50 % of subjects with continuing symptoms of attention-deficit hyperactivity disorder (ADHD) in adulthood show a comorbid substance use disorder. Both, ADHD and alcohol dependence have a high genetic load and might even share overlapping sources of genetic liability.
Method: We investigated phenotype and 5-HTT/5-HT2c allelic characteristics in 314 alcoholics of German descent.
Result: 21% of the alcoholics fulfilled DSM-IV criteria for ADHD with ongoing symptoms in adulthood. There was no significant difference in 5-HTT or 5-HT2c allele distribution between alcoholics and matched controls or between alcoholics with or without ADHD.
Conclusion: In our sample the functional relevant 5-HTT-promoter and the 5-HT2c-receptor Cys23Ser polymorphism do not contribute to the supposed common genetic predisposition of ADHD and alcohol dependence.
abstract_id: PUBMED:8986199
Genetic predisposition to organ-specific endpoints of alcoholism. Medical records of the 15,924 twin-pairs in the National Academy of Sciences-National Research Council (NAS-NRC) twin registry were collected for an additional 16 years through 1994 when the surviving twins were aged 67 to 77 years. Compared with earlier analyses (Hrubec, Z, and Omenn, G. S., Alcohol. Clin. Exp. Res., 5:207-215, 1981), when subjects were aged 51 to 61, there were 23% more diagnoses of alcoholism (34.4 per 1,000 prevalence), 32% more diagnoses of alcoholic psychosis (5.4 per 1,000), and 25% more twins with liver cirrhosis (17.7 per 1,000). Overall, 5.3% of the cohort had at least one of the diagnoses related to alcoholism. Probandwise concordance rates (%) were: alcoholism-26.7 monozygotic (MZ), 12.2 dizygotic (DZ) (p < 0.0001); alcoholic psychosis-17.3 MZ, 4.8 DZ (p < 0.05); and cirrhosis-16.9 MZ, 5.3 DZ (p < 0.001). Concordance for any diagnosis related to alcoholism was 30.2 MZ, 13.9 DZ (p < 0.0001). Maximum-likelihood modeling indicated that approximately 50% of the overall variance was due to additive genetic effects; in all diagnosis categories, a totally environmental model gave a significantly poorer fit to the data. Bivariate and trivariate genetic analyses indicated most of the genetic liability for the organ-specific endpoints of psychosis and cirrhosis was due to the shared genetic liability for alcoholism. Once the shared variance with alcoholism was considered, there was no further shared genetic liability for psychosis and cirrhosis. Our results confirm Hrubec and Omenn's conclusion that there was significantly greater concordance in MZ twins-pairs for alcoholic psychosis and cirrhosis in the NAS-NRC twins, and concordance rates remained similar to those reported 16 years earlier. In contrast, we found most of the genetic liability to organ-specific complications of alcoholism was shared with the genetic liability for alcoholism per se; only a small portion of the genetic variance of the individual complications was independent of the genetic predisposition for alcoholism.
abstract_id: PUBMED:10804873
Genetic predisposition for alcoholism. A number of socio-economic, cultural and biobehavioral factors and ethnic/gender differences are among the strongest determinants of drinking patterns in a society. Both epidemiological and clinical studies have implicated the excessive use of alcohol in the risk of developing a variety of organ, neuronal and metabolic disorders. Alcohol abuse-related metabolic derangements affect almost all body organs and their functions. Race and gender differences in drinking patterns may play an important role in the development of medical conditions associated with alcohol abuse. The incidence of alcoholism in a community is influenced by per capita alcohol consumption and covaries with the relative price and availability of alcoholic drinks. The majority of the family, twin and adoption studies suggest that alcoholism is familial, a significant proportion of which can be attributed to genetic factors. The question is how much of the variance is explained by genetic factors and to what degree this genetically mediated disorder is moderated by personal characteristics. Among the most salient personal characteristics moderating the genetic vulnerability may be factors such as age, ethnicity, and the presence of psychiatric comorbidity. Cultural factors and familial environmental factors are most likely predictors as well.
abstract_id: PUBMED:28055135
A Combination of Naltrexone + Varenicline Retards the Expression of a Genetic Predisposition Toward High Alcohol Drinking. Background: This study examined whether naltrexone (NTX) or varenicline (VAR), alone or in combination, can retard the phenotypic expression of a genetic predisposition toward high alcohol drinking in rats selectively bred for high alcohol intake when drug treatment is initiated prior to, or concomitantly with, the onset of alcohol drinking.
Methods: Alcohol-naïve P rats were treated daily with NTX (15.0 mg/kg BW), VAR (1.0 mg/kg BW), a combination of NTX (15.0 mg/kg BW) + VAR (1.0 mg/kg BW), or vehicle (VEH) for 2 weeks prior to, or concomitantly with, their first opportunity to drink alcohol and throughout 21 days of daily 2-hour alcohol access. Drug treatment was then discontinued for 3 weeks followed by reinstatement of drug treatment for an additional 3 weeks.
Results: When P rats were pretreated with drug for 2 weeks prior to onset of alcohol access, only NTX + VAR in combination blocked the acquisition of alcohol drinking in alcohol-naïve P rats. When drug treatment was initiated concomitantly with the first opportunity to drink alcohol, NTX alone, VAR alone, and NTX + VAR blocked the acquisition of alcohol drinking. Following termination of drug treatment, NTX + VAR and VAR alone continued to reduce alcohol drinking but by the end of 3 weeks without drug treatment, alcohol intake in all groups was comparable to that seen in the vehicle-treated group as the expression of a genetic predisposition toward high alcohol drinking emerged in the drug-free P rats. After 3 weeks without drug treatment, reinstatement of NTX + VAR treatment again reduced alcohol intake.
Conclusions: A combination of NTX + VAR, when administered prior to, or concomitantly with, the first opportunity to drink alcohol, blocks the acquisition of alcohol drinking during both initial access to alcohol and during a later period of alcohol access in P rats with a genetic predisposition toward high alcohol intake. The results suggest that NTX + VAR may be effective in curtailing alcohol drinking in individuals at high genetic risk of developing alcoholism.
abstract_id: PUBMED:29275025
Genetic factors in alcohol dependence. Genetic factors are involved in the predisposition to alcohol dependence, with a heritability of about 0.5. Sequencing or analysis of polymorphisms of individual genes or the whole human genome makes it possible to identify genetic markers of alcohol dependence. Genes of the brain pathway of motivation and reward, including DRD2 and ANKK1, are associated with alcohol dependence. Genes encoding the GABAergic receptors show variants linked to alcohol dependence. Polymorphisms in the genes encoding the enzymes alcohol dehydrogenase (ADH) and aldehyde dehydrogenase (ALDH) are associated with susceptibility to, or protection against, alcohol dependence. Interaction between genes and environment, via epigenetics, influences the predisposition to alcohol dependence.
abstract_id: PUBMED:10443977
What is inherited in the predisposition toward alcoholism? A proposed model. Background: The etiological factors associated with the predisposition to develop alcohol dependence remain largely unknown. In recent years, neurophysiological anomalies have been identified in young and adult offspring of alcoholic probands. These neuroelectric features have been replicated in several laboratories across many different countries and are observed in male and female alcoholics and some of their relatives and offspring. Moreover, these electrophysiological abnormalities are heritable and predictive of future alcohol abuse or dependence.
Methods: A model is presented which hypothesizes that the genetic predisposition to develop alcoholism involves an initial state of central nervous system (CNS) disinhibition/hyperexcitability. We propose that the event-related brain potential (ERP) anomalies reflect CNS disinhibition. This homeostatic imbalance results in excess levels of CNS excitability which are temporarily alleviated by the ingestion of alcohol. It is hypothesized that this hyperexcitability is heritable, and is critically involved in the predisposition toward alcoholism and the development of dependence. A brief review of the relevant literature is presented.
Results: Neurophysiological, neurochemical, and genetic evidence support the proposed model. In addition, strikingly similar observations between animal research and the human condition are identified. Finally, it is asserted that the proposed model is primarily biological in nature, and therefore does not account for the entire clinical variance.
Conclusion: A putative CNS homeostatic imbalance is noted as a critical state of hyperexcitability. This hyperexcitability represents a parsimonious model of what is inherited in the predisposition to develop alcoholism. It is our hope that this model will have heuristic value, resulting in the elucidation of etiological factors involved in alcohol dependence.
abstract_id: PUBMED:11590970
Influence of the endogenous opioid system on high alcohol consumption and genetic predisposition to alcoholism. There is increasing evidence supporting a link between the endogenous opioid system and excessive alcohol consumption. Acute or light alcohol consumption stimulates the release of opioid peptides in brain regions that are associated with reward and reinforcement and that mediate, at least in part, the reinforcing effects of ethanol. However, chronic heavy alcohol consumption induces a central opioid deficiency, which may be perceived as opioid withdrawal and may promote alcohol consumption through the mechanisms of negative reinforcement. The role of genetic factors in alcohol dependency is well recognized, and there is evidence that the activity of the endogenous opioid system under basal conditions and in response to ethanol may play a role in determining an individual's predisposition to alcoholism. The effectiveness of opioid receptor antagonists in decreasing alcohol consumption in people with an alcohol dependency and in animal models lends further support to the view that the opioid system may regulate, either directly or through interactions with other neurotransmitters, alcohol consumption. A better understanding of the complex interactions between ethanol, the endogenous opioids and other neurotransmitter systems will help to delineate the neurochemical mechanisms leading to alcoholism and may lead to the development of novel treatments.
abstract_id: PUBMED:34562071
Investigating perceived heritability of mental health disorders and attitudes toward genetic testing in the United States, United Kingdom, and Australia. Our beliefs about the heritability of psychiatric traits may influence how we respond to the use of genetic information in this area. In the present study, we aim to inform future education campaigns as well as genetic counseling interventions by exploring common fears and misunderstandings associated with learning about genetic predispositions for mental health disorders. We surveyed 3,646 genetic research participants from Australia, and 960 members of the public from the United Kingdom, and the United States, and evaluated attitudes toward psychiatric genetic testing. Participants were asked hypothetical questions about their interest in psychiatric genetic testing, perceived usefulness of psychiatric genetic testing, and beliefs about malleability of behavior, among others. We also asked them to estimate the heritability of alcohol dependence, schizophrenia, and major depression. We found a high interest in psychiatric genetic testing. In most cases, more than a third of the participants showed serious concerns related to learning about personal genetic predisposition, such as not wanting to have children if they knew they had a high genetic predisposition, or not wanting to choose a partner with a high genetic predisposition for a mental health problem. Finally, we found a significant association between most participants' attitudes and their lay estimates of heritability, which highlights the complexity of educating the public about genetics.
abstract_id: PUBMED:27454109
A pedigree-based proxy measure of genetic predisposition of drinking and alcohol use among female sex workers in China: a cross-sectional study. Scientific evidence has suggested that genetic factors accounted for more than half of the vulnerability of developing alcohol use problems. However, collecting genetic data poses a significant challenge for most population-based behavioral studies. The aim of this study was to assess the utilities of a pedigree-based proxy measure of genetic predisposition of drinking (GPD) and its effect on alcohol use behaviors as well as its interactions with personal and environmental factors. In the current study, cross-sectional data were collected from 700 female sex workers (FSW) in Guangxi, China. Participants provided information on a pedigree-based proxy measure of GPD and their alcohol use behaviors. Chi-square and independent t-test was applied for examining the bivariate associations between GPD and alcohol use behaviors; multivariate and ordinal regression models were used to examine the effect of GPD on alcohol use. This study found that women with a higher composite score of GPD tended to have a higher risk of alcohol use problem compared to their counterparts (p < .05). GPD was a significant predictor of alcohol use problems (p < .05), especially among women who had mental health issues or lack of health cares. The pedigree-based measure provided a useful proxy of GPD among participants. Both FSW's mental health and health care access interact with GPD and affect their drinking patterns. By understanding the genetic basis of alcohol use, we can develop scalable and efficacious interventions that will take into consideration the individual risk profile and environmental influences.
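To make the statistical approach described above concrete, here is a minimal, hypothetical sketch of a pedigree-based composite score and a 2x2 chi-square association test of the kind mentioned in the Methods. The scoring weights, field names, and counts are illustrative assumptions, not the study's actual instrument or data.

```python
# Hypothetical sketch only: not the study's instrument or dataset.
from math import erfc, sqrt

def gpd_score(first_degree_affected: int, second_degree_affected: int) -> float:
    """Pedigree-based proxy of genetic predisposition to drinking (GPD).
    Closer relatives are weighted more heavily; weights are illustrative."""
    return 1.0 * first_degree_affected + 0.5 * second_degree_affected

def chi2_2x2(a: int, b: int, c: int, d: int):
    """Pearson chi-square (1 df) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    chi2 = n * (a * d - b * c) ** 2 / (row1 * row2 * col1 * col2)
    p = erfc(sqrt(chi2 / 2))  # exact survival function for 1 df
    return chi2, p

print(gpd_score(first_degree_affected=2, second_degree_affected=1))  # 2.5

# Rows: high vs low GPD score; columns: alcohol use problem yes/no.
chi2, p = chi2_2x2(a=60, b=40, c=35, d=65)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```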
Answer: The abstracts provided do not establish a definitive common genetic predisposition between ADHD and alcohol dependence. One study investigated the potential overlap in genetic predisposition between ADHD and alcohol dependence by examining the 5-HTT/5-HT2c allelic characteristics in alcoholics, some of whom also had ADHD. However, the study found no significant difference in the allele distribution of these genes between alcoholics with or without ADHD, suggesting that these particular genetic markers do not contribute to a common genetic predisposition for both conditions (PUBMED:15570522).
Other abstracts discuss genetic predispositions to alcohol dependence more broadly. For instance, one study found that genetic factors account for about 50% of the variance in alcohol dependence and that the genetic liability for organ-specific complications of alcoholism is mostly shared with the genetic liability for alcoholism itself (PUBMED:8986199). Another abstract mentions that genes involved in the brain's motivation and reward pathways, including DRD2 and ANKK1, are associated with alcohol dependence, as are genes encoding gabaergic receptors and enzymes involved in alcohol metabolism (PUBMED:29275025).
While these studies highlight the genetic components of alcohol dependence, they do not directly link these genetic factors to ADHD. The abstract discussing the influence of the endogenous opioid system on high alcohol consumption and genetic predisposition to alcoholism suggests that genetic factors may influence an individual's predisposition to alcoholism, but it does not mention ADHD (PUBMED:11590970).
In summary, the abstracts provided do not support a clear common genetic predisposition between ADHD and alcohol dependence. Instead, they focus on the genetic underpinnings of alcohol dependence and its related complications, without establishing a direct genetic link to ADHD. |
Instruction: Utilisation review of clopidogrel: are they used under the FDA-approved indications?
Abstracts:
abstract_id: PUBMED:17551994
Utilisation review of clopidogrel: are they used under the FDA-approved indications? Background: Clopidogrel has shown benefit in patients with increased risk of cardiovascular diseases. Due to its high acquisition cost and its increased use, this study was conducted to review the use of clopidogrel based on the Food and Drug Administration (FDA)-approved indications and the ST-segment elevation myocardial infarction (STEMI).
Method: This was a cross-sectional study conducted at a tertiary-care, university-affiliated hospital in the Northern part of Thailand. Medical records of patients receiving clopidogrel during January 2005 to February 2006 were reviewed. Baseline characteristics of patients along with specific information regarding the use of clopidogrel were collected. Data were analysed using descriptive statistics.
Result: A total of 191 patients were included in this utilisation review (95 and 96 were inpatient and outpatient, respectively). The use of clopidogrel was deemed appropriate in 82.7% of cases including 72.2% for FDA-approved indications and 10.5% for STEMI. Clopidogrel/aspirin combination was indicated in 93 patients; however, 22 patients received clopidogrel monotherapy. On the contrary, 10 patients received clopidogrel/aspirin combination when only clopidogrel monotherapy was indicated. Moreover, 60% of patients who received clopidogrel monotherapy had no history of aspirin intolerance or recurrent events while on aspirin therapy.
Conclusion: The results showed that the majority of clopidogrel use was deemed appropriate based on FDA-approved indications and for medically justified indication. However, a significant number of patients received clopidogrel instead of aspirin while no aspirin intolerance was documented. Therefore, efforts should be made to promote the appropriate use of this agent to improve patient outcomes.
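As a rough illustration of how a utilisation review like this arrives at percentages such as 72.2% and 10.5%, the sketch below tallies hypothetical chart-review records into indication categories. The category labels, indication set, and records are assumptions for illustration, not the study's actual classification scheme.

```python
# Hypothetical sketch only: categories and records are illustrative.
from collections import Counter

FDA_APPROVED = {"recent MI", "recent stroke", "established peripheral arterial disease"}

def classify(indication: str) -> str:
    if indication in FDA_APPROVED:
        return "FDA-approved indication"
    if indication == "STEMI":
        return "STEMI (medically justified)"
    return "other / not indicated"

records = [
    {"id": 1, "indication": "recent MI"},
    {"id": 2, "indication": "STEMI"},
    {"id": 3, "indication": "recent stroke"},
    {"id": 4, "indication": "no documented indication"},
]

counts = Counter(classify(r["indication"]) for r in records)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n}/{total} ({100 * n / total:.1f}%)")
```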
abstract_id: PUBMED:24456868
Discrepancies in the primary PLATO trial publication and the FDA reviews. The results of major indication-seeking Phase 3 clinical trials are reported at international meetings and simultaneously published in top medical journals. However, the data presented during such dual release do not disclose all the trial findings and suffer from overoptimistic interpretations that heavily favor the study sponsor. Ironically, after the New Drug Application is submitted for regulatory approval, and when the FDA secondary reviews become publicly available, the benefit/risk assessment of a new drug is usually considered much less impressive. However, the community may ignore pivotal unreported findings later outlined in the government documents, taking for granted the facts presented in the primary publication. The discrepancies between the initial publication and the FDA files are not only confusing to the readership but hold additional risks for patients. Indeed, if physicians are impressed with the initial interpretation of the trial and do not have broad access to the FDA-verified facts, chances are new agents will be prescribed based on exaggerated benefit and fewer safety concerns. The current pattern also hurts the reputation of journal publishers, editors and reviewers, challenging their trust and credibility. Here we outline the disparity between the primary PLATO trial publication in the New England Journal of Medicine and the FDA-verified facts, and discuss how to avoid such mismatches in the future.
abstract_id: PUBMED:36246771
Retrospective Comparison of Patients ≥ 80 Years With Atrial Fibrillation Prescribed Either an FDA-Approved Reduced or Full Dose Direct-Acting Oral Anticoagulant. Direct-acting oral anticoagulants (DOACs) represent the standard for preventing stroke and systemic embolization (SSE) in patients with atrial fibrillation (AF). There is limited information for patients ≥ 80 years. We report a retrospective analysis of AF patients ≥ 80 years prescribed either a US Food and Drug Administration (FDA)-approved reduced (n = 514) or full dose (n = 199) DOAC (Dabigatran, Rivaroxaban, or Apixaban) between January 1st, 2011 (first DOAC commercially available) and May 31st, 2017. The following multivariable differences in baseline characteristics were identified: patients prescribed a reduced dose DOAC were older (p < 0.001), had worse renal function (p = 0.001), were more often prescribed aspirin (p = 0.004) or aspirin and clopidogrel (p < 0.001), and more often had new-onset AF (p = 0.001). SSE and central nervous system (CNS) bleed rates were low and not different (1.02 vs 0 %/yr and 1.45 vs 0.44 %/yr) for the reduced and full dose groups, respectively. For non-CNS bleeds, rates were 10.89 vs 4.15 %/yr (p < 0.001, univariable) for the reduced and full doses, respectively. The mortality rate was 6.24 vs 1.75 %/yr (p = 0.001, univariable) for the reduced and full doses. Unlike the non-CNS bleed rate, mortality rate differences remained significant when adjusted for baseline characteristics. Thus, DOACs in patients ≥ 80 with AF effectively reduce SSE with a low risk of CNS bleeding, independent of DOAC dose. The higher non-CNS bleed rate and not the mortality rate is explained by the higher risk baseline characteristics in the reduced DOAC dose group. Further investigation of the etiology of non-CNS bleeds and mortality is warranted.
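Annualised figures such as "1.02 %/yr" are usually event counts divided by accumulated follow-up time, expressed per 100 patient-years. The sketch below shows that arithmetic with made-up numbers; it is not derived from the study's data.

```python
# Hypothetical sketch only: numbers are illustrative, not the study's data.
def rate_per_100_patient_years(events: int, follow_up_years: float) -> float:
    """Events per 100 patient-years of observation."""
    return 100.0 * events / follow_up_years

# e.g. 12 non-CNS bleeds observed over 1,100 patient-years of follow-up
rate = rate_per_100_patient_years(events=12, follow_up_years=1100.0)
print(f"{rate:.2f} %/yr")  # ~1.09 %/yr
```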
abstract_id: PUBMED:22832506
Ticagrelor FDA approval issues revisited. Context: On July 20, 2011, the Food and Drug Administration (FDA) approved ticagrelor (Brilinta™) for use during acute coronary syndromes. The drug labeling includes a 'black box' warning for bleeding risks, conventional for antithrombotics, and a unique warning that higher than 100 mg/daily maintenance treatment with aspirin may reduce ticagrelor effectiveness. The approval was granted following ticagrelor secondary reviews, and review of complete response by FDA officials.
Objective: To summarize the recommendations of different FDA reviewers, and their impact on drug approval.
Design, Setting, And Patients: Review of the Platelet Inhibition and Clinical Outcomes (PLATO) trial comparing the efficacy of ticagrelor versus standard care treatment with clopidogrel. Patients (n = 18,624) with moderate- to high-risk acute coronary syndromes undergoing coronary intervention or being medically managed were randomized to ticagrelor (180-mg loading dose followed by 90 mg twice daily thereafter) or clopidogrel (300-600-mg loading dose followed by 75 mg once daily) for 6-12 months.
Results: The facts outlined in official reviews suggest that ticagrelor has been approved despite objections from both clinical primary reviewers assessing drug efficacy and safety. In addition, the statistical reviewer and cross-discipline team leader also recommended against approval. The putative grounds for their concerns were retrieved from the public FDA records and are briefly outlined here.
abstract_id: PUBMED:26709134
The FDA review on data quality and conduct in vorapaxar trials: Much better than in PLATO, but still not perfect. Background/aims: Vorapaxar, a novel antiplatelet thrombin PAR-1 inhibitor, has been evaluated in the TRA2P and TRACER trials. The drug is currently approved for post-myocardial infarction and peripheral artery disease indications with concomitant use of clopidogrel and/or aspirin. The FDA ruled that the overall vorapaxar data quality was acceptable, but conducted sensitivity analyses for potential censoring. This was unusual, intriguing, and directly related to the challenged quality of the ticagrelor dataset in PLATO, the previous New Drug Application for an oral antiplatelet agent submitted to the same Agency.
Methods: Hence, we compared the FDA-confirmed evidence of conduct and data quality in vorapaxar (TRA2P, and TRACER) with those of ticagrelor (PLATO) trials.
Results: The FDA provides a detailed report on information censoring and follow-up completeness for the 3 trials. TRA2P and TRACER used an independent CRO for site monitoring, exhibited no geography-dependent heterogeneity in trial results, and showed consistent adjudication results with much less censoring than in PLATO.
Conclusion: The data quality and trial conduct in the vorapaxar trials were better than in the ticagrelor PLATO trial; however, there is still some room for improvement, especially with regard to follow-up completeness and less information censoring.
abstract_id: PUBMED:26887783
Vorapaxar and diplopia: Possible off-target PAR-receptor mismodulation. Vorapaxar, a novel antiplatelet thrombin PAR-1 inhibitor, has been evaluated in the successful TRA2P trial and the failed TRACER trial. The drug is currently approved for post myocardial infarction and peripheral artery disease indications with concomitant use of clopidogrel and/or aspirin. The FDA ruled that the vorapaxar safety profile is acceptable. However, both trials revealed excess diplopia (double vision) usually reversible after vorapaxar. The diplopia risk appears to be small (about 1 extra case per 1,000 treated subjects), but real. Overall, there were 10 placebo and 34 vorapaxar diplopia cases (p=0.018) consistent for TRACER (2 vs 13 cases; p=0.010) and for TRA2P (8 vs 21 cases; p=0.018). Hence, we review the FDA-confirmed evidence and discuss potential causes and implications of such a surprising adverse association, which may be related to off-target PAR receptor mismodulation in the eye.
abstract_id: PUBMED:33675571
The FDA and PLATO Investigators death lists: Call for a match. Purpose: The FDA-issued PLATO trial dataset revealed that some primary death causes (PDCs) were inaccurately reported, favouring ticagrelor. However, the PLATO Investigators operated a shorter death list of uncertain quality. We compared whether PDCs matched when trial fatalities were reported to the FDA and by the PLATO Investigators.
Method: The FDA list contains precise details of 938 PLATO deaths, while the shorter investigators' dataset consists of 905 deaths. We matched four vascular (sudden, post-MI, heart failure and stroke) and three non-vascular (cancer, sepsis and suicide) PDCs between the death lists.
Results: There were more sudden deaths (161 vs 138; P < .03) and post-AMI deaths (373 vs 178; P < .001) in the shorter list than in the FDA dataset, but fewer heart failure deaths (73 vs 109; P = .02). Stroke numbers matched well (39 vs 37; P = NS), with only two ticagrelor cases removed. Cancer matched well (32 vs 31; P = NS), and sepsis cases were identical (30 vs 30; P = NS). However, two extra clopidogrel suicides in the shorter list are impossible to comprehend.
Conclusions: The PLATO trial PDCs were mismatched between the FDA and investigators' sets. We kindly ask the ticagrelor sponsor and/or the concerned PLATO Investigators to clarify the PDC dataset match.
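The matching exercise described above boils down to comparing per-cause totals between two death lists. The sketch below reuses the per-cause counts quoted in the abstract (suicide totals are omitted because exact base numbers are not given) and flags the categories that disagree; the comparison code itself is only an illustration, not the authors' procedure.

```python
# Counts taken from the abstract above; comparison logic is illustrative only.
fda_list = {"sudden": 138, "post-MI": 178, "heart failure": 109,
            "stroke": 37, "cancer": 31, "sepsis": 30}
investigator_list = {"sudden": 161, "post-MI": 373, "heart failure": 73,
                     "stroke": 39, "cancer": 32, "sepsis": 30}

for cause in sorted(set(fda_list) | set(investigator_list)):
    fda_n = fda_list.get(cause, 0)
    inv_n = investigator_list.get(cause, 0)
    status = "match" if fda_n == inv_n else "MISMATCH"
    print(f"{cause:>13}: FDA={fda_n:3d}  investigators={inv_n:3d}  {status}")
```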
abstract_id: PUBMED:26522969
Temporal trends in the utilisation of preventive medicines by older people: A 9-year population-based study. Background: For older individuals with multimorbidity the appropriateness of prescribing preventive medicines remains a challenge.
Objective: Investigate the prevalence and temporal trends in utilisation of preventive medicines in older New Zealanders from 2005 to 2013 stratified according to age, sex, ethnicity and district health board domicile.
Methods: A repeated cross-sectional analysis was conducted on pharmaceutical dispensing data for all individuals aged ≥ 65 years. The variable medication possession ratio (VMPR) was used to measure adherence. Prescribing of low-dose aspirin, clopidogrel, dipyridamole, warfarin, dabigatran, statins and bisphosphonates with a VMPR ≥ 0.8 was examined.
Results: Aspirin utilisation increased by 19.55% (95% CI: 19.39-19.70), clopidogrel by 2.93% (95% CI: 2.88-2.97) and dipyridamole decreased by 0.65% (95% CI: -0.70 to -0.59). Utilisation of aspirin with clopidogrel increased by 1.78% (95% CI: 1.74-1.81) and aspirin with dipyridamole increased by 0.54% (95% CI: 0.50-0.58%). Warfarin decreased by 0.87% (95% CI: -0.96 to -0.78) and dabigatran increased by 0.65% (95% CI: 0.60-0.70). Statins increased by 7.0% (95% CI: 6.82-7.18) and bisphosphonates decreased by 2.37% (95% CI: -2.44 to -2.30). Aspirin, clopidogrel, dabigatran and statin utilisation showed a greater increase in males. Interestingly, clopidogrel, warfarin and statin use increased in older adults aged 85+ compared to the younger age groups (65-84 years).
Conclusion: To our knowledge, this is the first study investigating the prevalence and trends of preventive medicines use in older people in New Zealand. This study may facilitate further research to examine the appropriateness of prescribing these medicines in older people with multimorbidity.
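Adherence measures of the medication-possession-ratio family are computed from dispensing records as days of supply divided by days in the observation window. The sketch below shows one plausible way to compute such a ratio and apply the ≥ 0.8 adherence cut-off used above; the exact VMPR definition in the study may differ, and the records are hypothetical.

```python
# Hypothetical sketch only: the study's exact VMPR definition may differ.
from datetime import date

dispensings = [                     # (dispense date, days of supply)
    (date(2013, 1, 10), 90),
    (date(2013, 4, 15), 90),
    (date(2013, 7, 20), 90),
    (date(2013, 10, 25), 90),
]
window_start, window_end = date(2013, 1, 1), date(2013, 12, 31)

window_days = (window_end - window_start).days + 1
days_supplied = sum(days for d, days in dispensings
                    if window_start <= d <= window_end)
vmpr = min(days_supplied / window_days, 1.0)   # cap at 1.0
print(f"VMPR = {vmpr:.2f}; adherent (>= 0.8): {vmpr >= 0.8}")
```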
abstract_id: PUBMED:28267691
Vorapaxar and Amyotrophic Lateral Sclerosis: Coincidence or Adverse Association? Background: Vorapaxar, a novel antiplatelet thrombin PAR-1 inhibitor, is currently approved for post myocardial infarction and peripheral artery disease indications with concomitant use of clopidogrel and/or aspirin. The vorapaxar safety profile was acceptable. However, aside from heightened bleeding risks, excesses of solid cancers and diplopia, there were more amyotrophic lateral sclerosis (ALS) diagnoses after vorapaxar.
Study Question: To assess the Food and Drug Administration (FDA) reviews on the potential association of vorapaxar with ALS.
Study Design: A review of the public FDA records on reported adverse events after vorapaxar.
Measures And Outcomes: Incidence of ALS after vorapaxar and placebo.
Results: The ALS risk appears very small, about 1 case per 10,000 treated subjects, but quite probable. Indeed, there were overall 2 placebo and 4 vorapaxar ALS incidences in the Phase III clinical trials.
Conclusions: Potential adverse association of vorapaxar with ALS risks may be related to off-target neuronal PAR receptor(s) blockade beyond platelet inhibition.
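The "about 1 extra case per 10,000 treated subjects" figure is a simple risk difference between arms. The sketch below shows the arithmetic using the reported 4 vs 2 case counts and an assumed arm size of roughly 20,000 subjects; the denominators are rounded assumptions, not the exact trial enrolment.

```python
# Case counts from the abstract; arm sizes are rounded assumptions.
vorapaxar_cases, vorapaxar_n = 4, 20_000
placebo_cases, placebo_n = 2, 20_000

risk_treated = vorapaxar_cases / vorapaxar_n
risk_placebo = placebo_cases / placebo_n
risk_difference = risk_treated - risk_placebo

print(f"risk difference = {risk_difference:.6f} "
      f"(~{risk_difference * 10_000:.1f} extra case per 10,000 treated)")
```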
abstract_id: PUBMED:10581993
Indications for antiplatelet medications Platelet-active drugs are part of the antithrombotic class. Their biological effect is not assessed in current practice, but their clinical efficacy has been firmly established in randomised controlled trials. Aspirin has been the most widely tested drug and is effective in various forms of coronary artery disease and in secondary prevention after a first ischaemic stroke; in these settings, aspirin reduces the incidence of myocardial infarction, stroke and cardiac death. Aspirin has been tested in daily doses from 30 to 1300 mg: the best evidence has been gathered for dosages between 75 and 300 mg, and good clinical practice is to use the lowest effective dose. Ticlopidine and clopidogrel have been shown to be superior to aspirin in 2 trials in which the incidence of myocardial infarction was lowered by the new drugs; nevertheless, the superiority is apparent only in patients with lower-limb atherosclerosis and after stroke. The combination of dipyridamole and aspirin has been proven superior to aspirin for the secondary prevention of stroke in one trial, in contrast with other trials performed with different combinations of those two drugs. Glycoprotein GP IIb/IIIa antagonists have been tested in coronary angioplasty and in acute coronary syndromes, and only as short intravenous administration; these drugs reduce the incidence of myocardial infarction without any effect on 6-month mortality.
Answer: The utilisation review of clopidogrel conducted at a tertiary-care, university-affiliated hospital in Northern Thailand found that the majority of clopidogrel use was deemed appropriate based on FDA-approved indications and for medically justified indications such as ST-segment elevation myocardial infarction (STEMI). Specifically, 82.7% of cases were considered appropriate, including 72.2% for FDA-approved indications and 10.5% for STEMI. However, the study also identified that a significant number of patients received clopidogrel instead of aspirin, even when there was no documented aspirin intolerance or recurrent events while on aspirin therapy. This suggests that while clopidogrel is largely used in accordance with FDA-approved indications, there is still room for improvement in ensuring its appropriate use to enhance patient outcomes (PUBMED:17551994). |
Instruction: Are modern health worries, personality and attitudes to science associated with the use of complementary and alternative medicine?
Abstracts:
abstract_id: PUBMED:17456283
Are modern health worries, personality and attitudes to science associated with the use of complementary and alternative medicine? Objective: To investigate whether personality traits, modern health worries (MHWs) and attitudes to science predict attitudes to, and beliefs about, complementary and alternative medicine (CAM). This study set out to test whether belief in, and use of CAM was significantly associated with high levels of MHWs, a high level of neuroticism and sceptical attitudes towards science.
Methods: Two hundred and forty-three British adults completed a four part questionnaire that measured MHWs, the Big Five personality traits and beliefs about science and medicine and attitudes to CAM.
Results: There were many gender differences in MHWs (females expressed more), though results were similar to previous studies. Contrary to prediction, personality traits were not related to MHWs, CAM usage or beliefs about CAM. Regular and occasional users of CAM did have higher MHWs than non-users or infrequent users. Those with high total MHW scores also tended to believe in the importance of psychological factors in health and illness, as well as the potentially harmful effects of modern medicine. Young males who had positive attitudes to science were least likely to be CAM users. Further, positive attitudes to science were associated with increased scepticism about CAM.
Conclusion: Concern about health, belief about modern medicine and CAM are logically inter-related. Those who have high MHWs tend to be more sceptical about modern medicine and more convinced of the possible role of psychological factors in personal health and illness.
abstract_id: PUBMED:25408919
Hypochondriacal attitudes and beliefs, attitudes towards complementary and alternative medicine and modern health worries predict patient satisfaction. Objective: To investigate how hypochondriacal attitudes and beliefs, attitudes towards complementary and alternative medicine (CAM) and modern health worries (MHWs) related to patient satisfaction with their general practitioner.
Design: Participants completed a five-part questionnaire anonymously which measured satisfaction with one's doctor, hypochondriacal beliefs, attitudes to CAM, MHWs and personality.
Setting: England.
Participants: Included 215 adults from a variety of cultural backgrounds.
Main Outcome Measure: The Illness Attitudes Scales measuring the attitudes, fears and beliefs associated with hypochondriasis; Worry about Illness; Concerns about Pain, Health Habits, Hypochondriacal beliefs; Thanatophobia, Disease phobia, Bodily preoccupations; Treatment experience and Effects of symptoms.
Results: Correlations (around r = .10 to .25) and regressions (R-squared from .06 to .09) showed that demographic and personality variables were only modestly related to patient satisfaction. Hypochondriasis, CAM and MHWs were associated with greater patient dissatisfaction as predicted, with hypochondriasis the most powerful correlate.
Conclusion: The study indicates the different needs of potential patients in a typical medical consultation. It is important to ascertain patients' health beliefs and practices with regard to medical history, attitudes to CAM and MHWs to increase consultation satisfaction.
abstract_id: PUBMED:27084337
Attitudes, Knowledge, Use, and Recommendation of Complementary and Alternative Medicine by Health Professionals in Western Mexico. Background: The use of complementary and alternative medicine (CAM) has increased in many countries, and this has altered the knowledge, attitudes, and treatment recommendations of health professionals in regard to CAM.
Methods: Considering Mexican health professionals' lack of knowledge of CAM, in this report we surveyed 100 biomedical researchers and Ph.D. students and 107 specialized physicians and residents of a medical specialty in Guadalajara, México (Western Mexico) with a questionnaire to address their attitudes, knowledge, use, and recommendation of CAM.
Results: We observed that significantly more researchers had ever used CAM than physicians (83% vs. 69.2%, P = .023) and that only 36.4% of physicians had ever recommended CAM. Female researchers tended to have ever used CAM more than male researchers, but CAM use did not differ between genders in the physician group or by age in either group. Homeopathy, herbal medicine, and massage therapy were the most commonly used CAMs in both the groups. Physicians more frequently recommended homeopathy, massage therapy, and yoga to their patients than other forms of CAM, and physicians had the highest perception of safety and had taken the most courses in homeopathy. All CAMs were perceived to have high efficacy (>60%) in both the groups. The attitude questionnaire reported favorable attitudes toward CAM in both the groups.
Conclusions: We observed a high rate of Mexican health professionals that had ever used CAM, and they had mainly used homeopathy, massage therapy, and herbal medicine. However, the recommendation rate of CAM by Mexican physicians was significantly lower than that in other countries, which is probably due to the lack of CAM training in most Mexican medical schools.
abstract_id: PUBMED:25727902
A review of nurses' knowledge, attitudes, and ability to communicate the risks and benefits of complementary and alternative medicine. Aims And Objectives: This study reviewed existing literature to investigate how frequently nurses include complementary and alternative forms of medicine in their clinical practice. In so doing, we investigated nurses' knowledge of and attitudes towards complementary and alternative medicine as well as their ability to communicate the risks and benefits of these therapies with patients.
Background: Little information is available concerning nurses' knowledge and attitudes towards complementary and alternative medicine or how they incorporate these therapies into their practice. In addition, little is known about the ability of nurses to communicate the risks and benefits of complementary and alternative medicine to their patients.
Study Design: This study used a scoping review method to map and synthesise existing literature.
Data Sources: Both electronic and manual searches were used to identify relevant studies published between January 2007 and January 2014.
Review Methods: The review was conducted in five stages: (1) identification of research question(s), (2) locate studies, (3) selection of studies, (4) charting of data, and (5) collating, summarising, and reporting of results.
Results: Fifteen papers met the inclusion criteria for this review, among which 53·7% referenced how frequently nurses include complementary and alternative medicine in their practice. We found that 66·4% of nurses had positive attitudes towards complementary and alternative medicine; however, 77·4% did not possess a comprehensive understanding of the associated risks and benefits. In addition, nearly half of the respondents (47·3-67·7%) reported feeling uncomfortable discussing complementary and alternative medicine therapies with their patients.
Conclusion: The lack of knowledge about complementary and alternative medicine among nurses is a cause for concern, particularly in light of its widespread application.
Relevance To Clinical Practice: Findings from this study suggest that health care professionals need to promote evidence-informed decision-making in complementary and alternative medicine practice and be knowledgeable enough to discuss complementary and alternative medicine therapies. Without involvement in complementary and alternative medicine communication on the part of our profession, we may put our patients at risk of making uninformed choices without medical guidance.
abstract_id: PUBMED:21040887
Attitudes toward complementary and alternative medicine influence its use. Objective: The aim of this study was to explore how attitudes toward complementary and alternative medicine (CAM) and conventional medicine influence CAM use in a healthy population, and how health locus of control and exercise further affect CAM use.
Design: A cross-sectional survey design was used.
Participants: The sample consisted of 65 healthy graduate students.
Main Outcome Measures: Since previous studies have focused on the attitudes of medical providers toward CAM, there are currently no standard, widely used measures of attitudes toward CAM from the perspective of the healthcare recipient. Thus, a new measure, the Complementary, Alternative, and Conventional Medicine Attitudes Scale (CACMAS) was created to address how attitudes of healthcare recipients affect CAM use. The Multidimensional Health Locus of Control Scale (MHLC) was used to investigate effects of health locus of control on CAM use, and participants reported which of 17 listed CAM treatments they had used in the past, were currently using, or would likely use in the future. Participants also reported days of exercise in the past month to explore if those engaging in healthy behaviors might report more CAM use.
Results: Having a philosophical congruence with CAM and agreement with holistic balance was associated with increased CAM use. Dissatisfaction with conventional medicine was also related to increased CAM use, but to a lesser extent. Those attributing health to personal behaviors (an internal health locus of control) reported more CAM use, as did those engaging in more resistance training in the previous month.
abstract_id: PUBMED:26472482
Women's attitudes towards the use of complementary and alternative medicine products during pregnancy. The aim of this study was to analyse women's attitudes towards the use of complementary and alternative medicine (CAM) products during pregnancy. The study sample was obtained via the Australian Longitudinal Study on Women's Health or ALSWH. A response rate of 79.2% (n = 1,835) was attained. Women who use herbal medicines (34.5%, n = 588) view CAM as a preventative measure, are looking for something holistic and are concerned about evidence of clinical efficacy when considering the use of these products during pregnancy. Women who use aromatherapy (17.4%, n = 319) and homoeopathy (13.3%, n = 244) want more personal control over their body and are concerned more about their own personal experience of the efficacy of CAM than clinical evidence of efficacy. As CAM use in pregnancy appears to be increasingly commonplace, insights into women's attitudes towards CAM are valuable for maternity healthcare providers.
abstract_id: PUBMED:26396087
A Survey of Medical Students' Knowledge and Attitudes Toward Complementary and Alternative Medicine in Urmia, Iran. Personal beliefs of medical students may interfere with their willingness to learn complementary and alternative medicine concepts. This study aimed to investigate the knowledge and attitudes of medical students toward complementary and alternative medicine in Urmia, Iran. A structured questionnaire was used as the data collection instrument. One hundred questionnaires were returned. Thirty-one percent of students reported having used alternative medicine at least once. Iranian Traditional Medicine was the main type of alternative medicine used by medical students (93.5%). Neuromuscular disorders were the main indication for alternative medicine use among students (34.4%). Ninety percent of participants demonstrated competent knowledge about acupuncture, while the lowest scores belonged to homeopathy (12%). Study results showed that 49% of medical students had positive attitudes and demonstrated a willingness to receive training on the subject. Thus, there appears to be a need to integrate complementary and alternative medicine into the medical curriculum, taking the expectations and feedback of medical students into consideration.
abstract_id: PUBMED:37407975
Attitudes pregnant women in Türkiye towards holistic complementary and alternative medicine and influencing factors: a web-based cross-sectional study. Background: Pregnant women turn to holistic complementary and alternative medicine to cope with problems associated with the changes they experience during pregnancy. This study aimed to determine the attitudes of pregnant women in Türkiye toward holistic complementary and alternative medicine and influencing factors.
Methods: This cross-sectional exploratory study was carried out between June and November 2022 with a web-based questionnaire distributed via social media and communication platforms. Two hundred and twenty-one pregnant women participated in the study. A "Participant Identification Form" and the "Attitudes towards Holistic Complementary and Alternative Medicine Questionnaire" were used to collect the data. Logistic regression analysis was used to determine correlations between variables and scale scores.
Results: It was determined that 84.2% of the participants had knowledge about traditional and complementary therapies, and 77.8% used traditional and complementary therapies. The participants reported that they preferred faith (77.4%), energy healing (76.9%), massage (75.6%), diet (74.2%), meditation/yoga (62.0%), and herbal (59.7%) traditional and complementary therapies the most, and most of them used these methods to reduce nausea, vomiting, edema, and fatigue during pregnancy. The mean Attitudes towards Holistic Complementary and Alternative Medicine Questionnaire score of the participants was 35.0 (5.04). It was seen that having high school or higher education (p < 0.05), having an income more than expenses (p < 0.001), having received advice from nurses when having a complaint (p < 0.001), having knowledge about traditional and complementary therapies (p < 0.001), and being a practitioner who received services of traditional and complementary therapies (p < 0.001) were positively associated with the utilization of traditional and complementary therapies.
Conclusion: In this study, it was determined that the attitudes of pregnant women towards holistic complementary and alternative medicine were high. Their personal characteristics, as well as their knowledge and practice of holistic complementary and alternative medicine affected their attitudes towards holistic complementary and alternative medicine. Obstetrics nurses/midwives should actively participate in training programs on traditional and complementary therapies focused on pregnant women.
abstract_id: PUBMED:27707901
Complementary and Alternative Medicine Use in Modern Obstetrics: A Survey of the Central Association of Obstetricians & Gynecologists Members. The use of complementary and alternative medicine during pregnancy is currently on the rise. A validated survey was conducted at the Central Association of Obstetricians and Gynecologists annual meeting to evaluate the knowledge, attitude, and practice of general obstetricians and gynecologists and maternal-fetal medicine specialists in America. We obtained 128 responses: 73 electronically (57%) and 55 via the paper survey (43%). Forty-five percent reported personally using complementary and alternative medicine and 9% of women respondents used complementary and alternative medicine during pregnancy. Overall, 62% had advised their patients to utilize some form of complementary and alternative medicine in pregnancy. Biofeedback, massage therapy, meditation, and yoga were considered the most effective modalities in pregnancy (median [semi-interquartile range] = 2 [0.5]). Maternal-fetal medicine specialists were significantly more likely to disagree on the use of complementary and alternative medicine for risk reduction of preterm birth compared to obstetricians and gynecologists (P = .03). As the use of complementary and alternative medicine continues to rise in reproductive-age women, obstetricians will play an integral role in incorporating complementary and alternative medicine use with conventional medicine.
abstract_id: PUBMED:33228697
Potential factors that influence usage of complementary and alternative medicine worldwide: a systematic review. Objectives: To determine similarities and differences in the reasons for using or not using complementary and alternative medicine (CAM) amongst general and condition-specific populations, and amongst populations in each region of the globe.
Methods: A literature search was performed on Pubmed, ScienceDirect and EMBASE.
Keywords: 'herbal medicine' OR 'herbal and dietary supplement' OR 'complementary and alternative medicine' AND 'reason' OR 'attitude'. Quantitative or qualitative original articles in English, published between 2003 and 2018 were reviewed. Conference proceedings, pilot studies, protocols, letters, and reviews were excluded. Papers were appraised using valid tools and a 'risk of bias' assessment was also performed. Thematic analysis was conducted. Reasons were coded in each paper, then codes were grouped into categories. If several categories reported similar reasons, these were combined into a theme. Themes were then analysed using χ2 tests to identify the main factors related to reasons for CAM usage.
Results: 231 publications were included. Reasons for CAM use amongst general and condition-specific populations were similar. The top three reasons for CAM use were: (1) having an expectation of benefits of CAM (84% of publications), (2) dissatisfaction with conventional medicine (37%) and (3) the perceived safety of CAM (37%). Internal health locus of control as an influencing factor was more likely to be reported in Western populations, whereas social networks were a common factor amongst Asian populations (p < 0.05). Affordability, easy access to CAM and tradition were significant factors amongst African populations (p < 0.05). Negative attitudes towards CAM and satisfaction with conventional medicine (CM) were the main reasons for non-use (p < 0.05).
Conclusions: Dissatisfaction with CM and positive attitudes toward CAM, motivate people to use CAM. In contrast, satisfaction with CM and negative attitudes towards CAM are the main reasons for non-use.
Answer: Yes, in part: modern health worries (MHWs) and attitudes to science are associated with the use of complementary and alternative medicine (CAM), whereas personality traits showed no clear relationship in the key study. That study found that regular and occasional users of CAM had higher MHWs than non-users or infrequent users, and those with high total MHWs tended to believe in the importance of psychological factors in health and illness, as well as the potentially harmful effects of modern medicine. Young males who had positive attitudes to science were least likely to be CAM users, and positive attitudes to science were associated with increased skepticism about CAM (PUBMED:17456283). Additionally, hypochondriacal attitudes and beliefs, attitudes towards CAM, and MHWs were associated with greater patient dissatisfaction with their general practitioner, indicating that these factors can influence healthcare choices (PUBMED:25408919).
Health professionals' attitudes and knowledge about CAM also influence its use and recommendation. In Western Mexico, more biomedical researchers had used CAM compared to physicians, and the recommendation rate of CAM by Mexican physicians was significantly lower than in other countries, likely due to the lack of CAM training in Mexican medical schools (PUBMED:27084337). Nurses' knowledge and attitudes towards CAM are also important, as many do not possess a comprehensive understanding of the risks and benefits associated with CAM, and nearly half reported feeling uncomfortable discussing CAM therapies with their patients (PUBMED:25727902).
Furthermore, attitudes toward CAM and conventional medicine influence CAM use in healthy populations, with philosophical congruence with CAM and dissatisfaction with conventional medicine related to increased CAM use (PUBMED:21040887). Women's attitudes towards CAM during pregnancy also play a role, with different attitudes influencing the use of various CAM products (PUBMED:26472482). Medical students' personal beliefs can interfere with their tendency to learn about CAM, and integrating CAM into the medical curriculum could be beneficial based on students' expectations and feedback (PUBMED:26396087).
Pregnant women's attitudes towards holistic CAM and their personal characteristics, knowledge, and practice of CAM affect their attitudes and utilization of CAM therapies (PUBMED:37407975). Obstetricians and gynecologists' knowledge and attitudes towards CAM influence their advice to patients, with many advising the use of CAM in pregnancy (PUBMED:27707901). |
Instruction: Do physicians with self-reported non-English fluency practice in linguistically disadvantaged communities?
Abstracts:
abstract_id: PUBMED:21120633
Do physicians with self-reported non-English fluency practice in linguistically disadvantaged communities? Background: Language concordance between physicians and patients may reduce barriers to care faced by patients with limited English proficiency (LEP). It is unclear whether physicians with fluency in non-English languages practice in areas with high concentrations of people with LEP.
Objective: To investigate whether physician non-English language fluency is associated with practicing in areas with high concentrations of people with LEP.
Design: Cross-sectional cohort study.
Participants: A total of 61,138 practicing physicians no longer in training who participated in the California Medical Board Physician Licensure Survey from 2001-2007.
Measures: Self-reported language fluency in Spanish and Asian languages. Physician practice ZIP code corresponding to: (1) high concentration of people with LEP and (2) high concentration of linguistically isolated households.
Methods: Practice location ZIP code was geocoded with geographic medical service study designations. We examined the unadjusted relationships between physician self-reported fluency in Spanish and selected Asian languages and practice location, stratified by race-ethnicity. We used staged logistic multiple variable regression models to isolate the effect of self-reported language fluency on practice location controlling for age, gender, race-ethnicity, medical specialty, and international medical graduate status.
Results: Physicians with self-reported fluency in Spanish or an Asian language were more likely to practice in linguistically designated areas in these respective languages compared to those without fluency. Physician fluency in an Asian language [adjusted odds ratio (AOR) = 1.77; 95% confidence intervals (CI): 1.63-1.92] was independently associated with practicing in areas with a high number of LEP Asian speakers. A similar pattern was found for Spanish language fluency (AOR = 1.77; 95% CI: 1.43-1.82) and areas with high numbers of LEP Spanish-speakers. Latino and Asian race-ethnicity had the strongest effect on corresponding practice location, and this association was attenuated by language fluency.
Conclusions: Physicians who are fluent in Spanish or an Asian language are more likely to practice in geographic areas where their potential patients speak the corresponding language.
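For readers unfamiliar with how figures such as AOR = 1.77 arise, the sketch below computes an unadjusted odds ratio with a 95% confidence interval from a 2x2 table of fluency by practice location. The counts are invented, and the study itself used staged multivariable logistic regression to obtain adjusted odds ratios, so this shows only the underlying odds-ratio arithmetic.

```python
# Hypothetical counts; the study used staged multivariable logistic regression.
from math import exp, log, sqrt

a, b = 400, 600   # fluent physicians: high-LEP-area practice / elsewhere
c, d = 250, 750   # non-fluent physicians: high-LEP-area practice / elsewhere

odds_ratio = (a * d) / (b * c)
se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)        # Woolf's method
ci_low = exp(log(odds_ratio) - 1.96 * se_log_or)
ci_high = exp(log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```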
abstract_id: PUBMED:20526909
Self-reported fluency in non-english languages among physicians practicing in California. Background And Objectives: With increasing numbers of people with limited English proficiency in the United States, there is growing concern about the potential adverse effect of language barriers on patient care. We sought to compare the non-English language fluency of practicing physicians by physician race/ethnicity and location of medical school education.
Methods: We used cross-sectional analyses of California Medical Board Survey (2007) data of 61,138 practicing physicians. Measures examined were self-reported physician language fluency in 34 languages, race/ethnicity, and medical school of graduation.
Results: Forty-two percent of physicians reported having fluency in at least one language other than English. Fifty-six percent of international medical graduates (IMGs) reported fluency in a language other than English, compared to 37% of US medical graduates (USMG). Although the majority of physicians with fluency in Spanish are not Latino, fluency in Asian languages is primarily restricted to physicians who are of Asian race/ethnicity. Eighty-seven percent of physicians with fluency in Mandarin, Cantonese, or other Chinese languages are of Chinese ethnicity. A similar association between ethnicity and fluency was found for Southeast Asian languages, Pacific Island languages, and South Asian languages. IMGs constituted more than 80% of the physicians with fluency in Arabic, South Asian, and Pacific Islander languages.
Conclusions: IMGs contribute to the diversity of languages spoken by California physicians.
abstract_id: PUBMED:34881672
Intentional self-harm in culturally and linguistically diverse communities: A study of hospital admissions in Victoria, Australia. Purpose: To examine the rates and profiles of intentional self-harm hospital admissions among people from culturally and linguistically diverse and non-culturally and linguistically diverse backgrounds.
Methods: A retrospective analysis of 29,213 hospital admissions for self-harm among people aged 15 years or older in Victoria, Australia, was conducted using data from the Victorian Admitted Episodes Dataset between 2014/2015 and 2018/2019. The Victorian Admitted Episodes Dataset records all hospital admissions in public and private hospitals in Victoria (population 6.5 million). Population-based incidence of self-harm, logistic regression and percentages (95% confidence intervals) were calculated to compare between culturally and linguistically diverse groups by birthplaces and the non-culturally and linguistically diverse groups of self-harm admissions.
Results: When grouped together, culturally and linguistically diverse individuals had lower rates of (hospital-treated) self-harm compared with non-culturally and linguistically diverse individuals. However, some culturally and linguistically diverse groups, such as those originating from Sudan and Iran, had higher rates than non-culturally and linguistically diverse groups. Among patients hospitalised for self-harm, those in the culturally and linguistically diverse group (vs the non-culturally and linguistically diverse group) were more likely to be older, Metropolitan Victorian residents, from the lowest socioeconomic status, and ever or currently married. Persons born in Southern and Eastern Europe admitted for self-harm were the oldest of all groups; in all other groups the number of admissions tended to decrease as age increased, whereas in this group admissions increased with age.
Conclusion: There was considerable heterogeneity in rates of hospital-treated self-harm in culturally and linguistically diverse communities, with some countries of origin (e.g. Sudan, Iran) having significantly higher rates. Some of this variation may be due to factors relating to the mode of entry into Australia (refugee vs planned migration), and future research needs to examine this possibility and others, to better plan for support needs in the culturally and linguistically diverse communities most affected by self-harm. Combining all culturally and linguistically diverse people into one group may obscure important differences in self-harm. Different self-harm prevention strategies are likely to be needed for different culturally and linguistically diverse populations.
abstract_id: PUBMED:25847849
Improvements in Physicians' Knowledge, Difficulties, and Self-Reported Practice After a Regional Palliative Care Program. Context: Although several studies have explored the effects of regional palliative care programs, no studies have investigated the changes in physician-related outcomes.
Objectives: The primary aims of this study were to: (1) clarify the changes in knowledge, difficulties, and self-reported practice of physicians before and after the intervention, (2) explore the potential associations between the level of physicians' participation in the program and outcomes, and (3) identify the reasons and characteristics of physicians who did not participate in the program.
Methods: As a part of the regional palliative care intervention trial, questionnaires were sent to physicians recruited consecutively to obtain a representative sample of each region. Physician-reported knowledge, difficulty of palliative care, and self-perceived practice were measured using the Palliative Care Knowledge Test, Palliative Care Difficulty Scale, and Palliative Care Self-Reported Practice Scale (PCPS), respectively. The level of their involvement in the program and reason for non-participation were ascertained from self-reported questionnaires.
Results: The number of eligible physicians identified was 1870 in the pre-intervention and 1763 in the post-intervention surveys, and we obtained 911 and 706 responses, respectively. Total scores of the Palliative Care Knowledge Test, the Palliative Care Difficulty Scale, and the PCPS were significantly improved after the intervention, with effect sizes of 0.30, 0.52, and 0.17, respectively. Physicians who participated in workshops more frequently were significantly more likely to have better knowledge, fewer difficulties, and better self-reported practice.
Conclusion: After the regional palliative care program, there were marked improvements in physicians' knowledge and difficulties. These improvements were associated with the level of physicians' participation in the program.
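Pre/post effect sizes such as 0.30, 0.52 and 0.17 are commonly standardised mean differences (Cohen's d with a pooled standard deviation). Whether the study used exactly this estimator is an assumption, and the scale means and SDs below are invented; the sketch only shows the calculation.

```python
# Hypothetical means/SDs; whether the study used this exact estimator is assumed.
from math import sqrt

def cohens_d(mean_pre, sd_pre, n_pre, mean_post, sd_post, n_post):
    """Standardised mean difference using a pooled standard deviation."""
    pooled_var = ((n_pre - 1) * sd_pre ** 2 + (n_post - 1) * sd_post ** 2) \
                 / (n_pre + n_post - 2)
    return (mean_post - mean_pre) / sqrt(pooled_var)

d = cohens_d(mean_pre=12.0, sd_pre=3.0, n_pre=911,
             mean_post=13.0, sd_post=3.2, n_post=706)
print(f"d = {d:.2f}")  # ~0.32
```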
abstract_id: PUBMED:26598501
Reflections: Using a Second Language to Build a Practice. The US Census Bureau reports that 20.7% of Americans speak a language other than English. This is an opportunity for otolaryngologists to build their practices around a second language. This Reflections piece reviews my personal experiences of using my language fluency in Chinese to build a practice in Philadelphia. Through translating office documents, networking with Chinese-speaking physicians, and volunteering at the free clinic in Chinatown, I was able to serve this non-English-speaking community. Although there are translator services in the hospital, there are terms that get lost in translation and cultural norms that outsiders may not understand. I encourage the otolaryngology community to celebrate its diversity and increase access to our specialty for non-English-speaking patients.
abstract_id: PUBMED:25265991
Motivational interviewing to explore culturally and linguistically diverse people's comorbidity medication self-efficacy. Aims And Objectives: To examine the perceptions of a group of culturally and linguistically diverse participants with the comorbidities of diabetes, chronic kidney disease and cardiovascular disease to determine factors that influence their medication self-efficacy through the use of motivational interviewing.
Background: These comorbidities are a global public health problem and their self-management is more difficult for culturally and linguistically diverse populations living in English-speaking communities. Few interventions have been tested in culturally and linguistically diverse people to improve their medication self-efficacy.
Design: A series of motivational interviewing telephone calls were conducted in the intervention arm of a randomised controlled trial using interpreter services.
Methods: Patients aged ≥18 years with these comorbidities who preferred to speak Greek, Italian or Vietnamese were recruited from nephrology outpatient clinics of two Australian metropolitan hospitals in 2009.
Results: The average age of the 26 participants was 73·5 years. The fortnightly calls averaged 9·5 minutes. Thematic analysis revealed three core themes: attitudes towards medication, having to take medication, and impediments to chronic illness medication self-efficacy. A lack of knowledge about medications impeded the confidence necessary for optimal disease self-management. Participants had limited access to resources to help them understand their medications.
Conclusion: This work has highlighted communication gaps and barriers affecting medication self-efficacy in this group. Culturally sensitive interventions are required to ensure people of culturally and linguistically diverse backgrounds have the appropriate skills to self-manage their complex medical conditions.
Relevance To Clinical Practice: Helping people to take their medications as prescribed is a key way for nurses to serve and protect the well-being of our increasingly multicultural communities. The use of interpreters in motivational interviewing requires careful planning and adequate resources for optimal outcomes.
abstract_id: PUBMED:37821837
A critical exploration of the diets of UK disadvantaged communities to inform food systems transformation: a scoping review of qualitative literature using a social practice theory lens. The UK food system affects social, economic and natural environments and features escalating risk of food insecurity. Yet it should provide access to safe, nutritious, affordable food for all citizens. Disadvantaged UK communities [individuals and families at risk of food and housing insecurity, often culturally diverse] have often been conceptualised in terms of individual behaviour, which may lead to findings and conclusions based on the need for individual change. Such communities face public health challenges and are often treated as powerless recipients of dietary and health initiatives or as 'choiceless' consumers within food supply chains. As transforming the UK food system has become a national priority, it is important that a diverse range of evidence is used to support understanding of the diets of disadvantaged communities to inform food systems transformation research. A scoping review was conducted of UK peer-reviewed qualitative literature published in English in MEDLINE, CINAHL Plus with Full Text, EMBASE, PsycINFO and Web of Science between January 2010 and May 2021. Eligibility criteria were applied, a data extraction table summarised data from included studies, and synthesis using social practice theory was undertaken. Forty-five qualitative studies were reviewed, which included the views of 2,434 community members aged between 5 and 83. Studies used different measures to define disadvantage. Synthesis using social practice theory identified themes of food and dietary practices shaped by interactions between 'material factors' (e.g. transport, housing and money), 'meanings' (e.g. autonomy and independence), and 'competencies' (e.g. strategies to maximise food intake). These concepts are analysed and critiqued in the context of the wider literature to inform food systems transformation research. This review suggests that, to date, qualitative research into the diets of UK disadvantaged communities provides diverse findings that mainly conceptualise disadvantage at an individual level. Whilst several studies provide excellent characterisations of individual experience, links to 'macro' processes such as supply chains are largely missing. Recommendations are made for future research to embrace transdisciplinary perspectives and utilise new tools (e.g., creative methods and good practice guides) and theories (e.g., assemblage) to better facilitate food systems transformation for disadvantaged communities.
abstract_id: PUBMED:20151191
Accuracy of physician self-report of Spanish language proficiency. As health systems strive to meet the needs of linguistically diverse patient populations, determining a physician's non-English language proficiency is becoming increasingly important. However, brief, validated measures are lacking. To determine if any of four self-reported measures of physician Spanish language proficiency are useful measures of fluency in Spanish. Physician self-report of Spanish proficiency was compared to Spanish-speaking patients' report of their physicians' language proficiency. 110 Spanish-speaking patients and their 46 physicians in two public hospital clinics with professional interpreters available. Physicians rated their Spanish fluency with four items: one general fluency question, two clinically specific questions, and one question on interpreter use. Patients were asked if their doctor speaks Spanish ("yes/no"). Concordance, sensitivity, specificity, and positive and negative predictive values (PPV, NPV) were calculated for each of the items, and receiver operating (ROC) curves were used to compare performance characteristics. Concordance between physician and patient reports of physician Spanish proficiency ranged from 84 to 91%. The PPV for each of the four items ranged from 91 to 99%, the NPV from 60 to 90%, and the area under their ROC curves from 90 to 95%. The general fluency question gave the best combination of PPV and NPV, and the item on holding sensitive discussions had the highest PPV, 99%. Physicians who reported fluency as "fair" were as likely to have patients report they did not speak Spanish as that they did. Physician self-report of Spanish language proficiency is highly correlated with patient report, except when physicians report "fair" general fluency. In settings where no financial or other incentives are linked to language skills, simple questions may be a useful way to assess physician language proficiency.
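For readers less familiar with the agreement statistics quoted above, the short Python sketch below shows how concordance, sensitivity, specificity, PPV and NPV are derived from a 2x2 table comparing physician self-report with patient report; the cell counts are hypothetical and are not the study's data.
    # Illustrative only: agreement statistics from a 2x2 classification table.
    # The counts below are invented, not the figures reported in PUBMED:20151191.
    def agreement_metrics(tp, fp, fn, tn):
        """tp/fp/fn/tn are the four cell counts of a 2x2 table."""
        total = tp + fp + fn + tn
        return {
            "concordance": (tp + tn) / total,   # overall agreement
            "sensitivity": tp / (tp + fn),      # true-positive rate
            "specificity": tn / (tn + fp),      # true-negative rate
            "ppv": tp / (tp + fp),              # positive predictive value
            "npv": tn / (tn + fn),              # negative predictive value
        }
    # Hypothetical example: 70 true positives, 12 false positives,
    # 8 false negatives, 20 true negatives.
    print(agreement_metrics(tp=70, fp=12, fn=8, tn=20))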
abstract_id: PUBMED:35124545
"How deep do I have to cut?": Non-suicidal self-injury and imagined communities of practice on Tumblr. This paper concerns itself with the study of non-suicidal self-injury (NSSI) content on Tumblr. Adding to existing valuable studies on social media and NSSI, we contribute an anthropological understanding of the communal formations and relationships between people who use social media to express their thoughts and feelings about NSSI. Using online ethnography as a method, we approach our data from the perspective of someone who is new to NSSI Tumblr and is learning how to engage with other people who self-injure sharing content on the site. We argue that people who share and interact with NSSI content on Tumblr form part of imagined communities of practice, through which they create shared meaning and interpretations of their experiences. Our results suggest that Tumbleloggers who engage with NSSI content draw on a variety of communal practices and norms to create a non-judgmental space away from societal stigma. As such, future research can benefit from an approach centered around communal practices to understand how people who self-injure connect and communicate on social media.
abstract_id: PUBMED:34631368
Building Prealgebra Fluency Through a Self-Managed Practice Intervention: Order of Operations. Behavioral fluency refers to a combination of accuracy and speed that enables students to function proficiently in the learning environment. The present study investigated the effects of a self-managed frequency-building intervention on the behavioral fluency of a critical prealgebra skill in four 6th-grade students. The intervention involved students having access to the PEMDAS (parentheses, exponents, multiplication, division, addition, and subtraction) mnemonic during frequency building. Using an alternating-treatments design, the first experimental condition presented the intervention as three 1-min practice trials with 30 s of feedback delivered immediately after each frequency-building trial ended. The second condition offered one 3-min practice trial with 90 s of feedback once the trial ended. A baseline condition (no practice) had the students engage in a 1-min timed trial with no feedback. The alternating-treatments design demonstrated that three of the four students produced a superior performance within the two intervention conditions when compared to baseline. However, the results did not conclusively show that one frequency-building intervention was superior to the other.
Answer: Yes, physicians with self-reported non-English fluency are more likely to practice in linguistically disadvantaged communities. A cross-sectional cohort study involving 61,138 practicing physicians in California found that physicians who reported fluency in Spanish or an Asian language were more likely to practice in areas with high concentrations of people with limited English proficiency (LEP) who spoke these languages. The study showed that physician fluency in an Asian language was independently associated with practicing in areas with a high number of LEP Asian speakers, and a similar pattern was found for Spanish language fluency with respect to areas with high numbers of LEP Spanish-speakers (PUBMED:21120633).
Instruction: The Hispanic paradox: does it exist in the injured?
Abstracts:
abstract_id: PUBMED:26321296
The Hispanic paradox: does it exist in the injured? Background: Hispanics have all-cause mortality rates in the general population that are similar to or lower than those of non-Hispanic whites (NHWs), despite higher risks associated with lower socioeconomic status, hence termed the "Hispanic Paradox." It is unknown if this paradox exists in the injured. We hypothesized that Hispanic trauma patients have risk-adjusted mortality and observed-to-expected mortality ratios equivalent to or lower than those of other racial/ethnic groups.
Methods: Retrospective analysis of adult patients from the 2010 National Trauma Data Bank was performed. Hispanic patients were compared with NHWs and African Americans (AAs) to assess in-hospital mortality risk in each group.
Results: Compared with NHWs, Hispanic patients had lower unadjusted risk of mortality. After adjusting for potential confounders, the difference was no longer statistically significant. Mortality risk was significantly lower for Hispanic patients compared with AAs in both crude and adjusted models. Hispanic patients had significantly lower observed-to-expected mortality ratios than NHWs and AAs.
Conclusion: Despite reports of racial/ethnic disparities in trauma outcomes, Hispanic patients are not at greater risk of death than NHW patients in a nationwide representative sample of trauma patients.
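The observed-to-expected mortality ratio used in this abstract divides observed deaths by the number of deaths predicted by a risk-adjustment model. A minimal sketch of the calculation is given below; the per-patient predicted probabilities are invented for illustration and are not drawn from the study.
    # Hedged illustration, not the study's code: an observed-to-expected (O/E)
    # mortality ratio divides observed deaths by the sum of model-predicted
    # death probabilities; values below 1.0 indicate fewer deaths than the
    # risk model expects. All numbers here are hypothetical.
    def observed_to_expected(observed_deaths, predicted_death_probs):
        expected_deaths = sum(predicted_death_probs)
        return observed_deaths / expected_deaths
    # Hypothetical cohort of five patients whose predicted death probabilities
    # sum to 1.25 expected deaths; one death was actually observed.
    probs = [0.05, 0.10, 0.45, 0.25, 0.40]
    print(round(observed_to_expected(1, probs), 2))  # 0.8 -> fewer deaths than expected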
abstract_id: PUBMED:7604898
Trauma among Hispanic children: a population-based study in a regionalized system of trauma care. We studied 1164 injured Hispanic and 2560 injured non-Hispanic White children newborn through 14 years triaged to the San Diego County Regionalized Trauma System from 1985 through 1990. Incidence rates did not differ by ethnic group. Hispanic children were more likely to be struck as pedestrians (odds ratio [OR] = 1.5) and less likely to be injured in falls (OR = 0.7) than non-Hispanic White children. For motor vehicle and pedal cycle injuries, Hispanic children were more likely not to have been restrained by seatbelts (OR = 4.0) or car seats (OR = 3.7).
abstract_id: PUBMED:31899254
Rural-urban differences in cannabis detected in fatally injured drivers in the United States. While there is a vast literature on rural and urban differences in substance use, little is known about cannabis-positive drug tests among fatally injured drivers. In the present study, we examined rural-urban differences in cannabis detected in fatally injured drivers. Data were drawn from the 2015-2017 Fatality Analysis Reporting System. Multivariable logistic regression was performed to examine rural-urban differences in the percentage of cannabis detected in fatally injured drivers. Analyses were stratified by rural-urban classification and sex. A positive cannabis test in fatally injured drivers was more prevalent in urban locations. Compared to fatally injured drivers in rural locations, urban drivers had higher odds of a positive test for cannabinoids (aOR: 1.21, 95% CI 1.14-1.28). Non-Hispanic Black drivers had higher odds of testing positive for cannabinoids (aOR: 1.43, 95% CI 1.31-1.55). Those aged at least 25 years had lower odds of a positive test for cannabinoids. Drivers involved in a weekend nighttime crash (aOR: 1.14, 95% CI 1.03-1.26) or a weekday nighttime crash (aOR: 1.15, 95% CI 1.05-1.26) had higher odds of testing positive for cannabinoids compared to drivers involved in a weekend daytime crash. Results showed significant rural-urban differences in the prevalence of cannabis detected in fatally injured drivers.
abstract_id: PUBMED:31137362
Metabolic and Structural Sites of Damage in Heat- and Sanitizer-Injured Populations of Listeria monocytogenes. Two food isolates of Listeria monocytogenes (strains ATCC 51414 and F5027) were sublethally injured by exposure to heat (56°C for 20 min) or to a chlorine sanitizer (Antibac, 100 ppm for 2 min). Percent injury following treatment ranged from 84% to 99%. Injured Listeria were repaired in Listeria repair broth (LRB) at 37°C. Comparison of the repair curves generated by each method indicated that the time for repair was greater for sanitizer-injured cells (14 h) than for heat-injured cells (5 h). Sites of injury were determined by repairing heat- and sanitizer-treated Listeria in LRB supplemented with one of the following inhibitors: rifampicin (10 and 20 μg/ml), chloramphenicol (5 μg/ml), cycloserine D (10 and 20 μg/ml), and carbonyl cyanide m-chlorophenyl-hydrazone (CCCP) (2.5 μg/ml). In both heat- and sanitizer-injured populations, a total inhibition of repair was seen following incubation with rifampicin, chloramphenicol and CCCP. These results clearly indicate a requirement for mRNA, protein synthesis, and oxidative phosphorylation for repair to occur. The cell wall is not a site of damage since cycloserine D had no effect on repair of heat- or sanitizer-injured Listeria. Investigation of damage to the cell membrane showed that stress caused by sublethal heat or sanitizer did not allow proteins or nucleotides to leak into the medium. The recognition of injury and repair in Listeria will lead to improved methods of detection and ultimately to control strategies which prevent outgrowth of this organism in foods.
abstract_id: PUBMED:15736327
Severe injury among Hispanic and non-Hispanic white children in Washington state. Objectives: The authors' anecdotal experience at a regional Level I trauma center was that Hispanic children were overrepresented among burn patients, particularly among children with burns due to scalding from hot food. This study describes injury incidence and severity among Hispanic and non-Hispanic white infants, children, and adolescents with serious traumatic injuries in Washington State.
Methods: Data from the Washington State Trauma Registry for 1995-1997 were used to identify injured individuals aged ≤19 years. Ratios of overall and mechanism-specific injury incidence rates for Hispanic children relative to non-Hispanic white children were calculated using denominator estimates derived from U.S. Census Bureau population data. Hispanic children and non-Hispanic white children were also compared on several measures of severity of injury.
Results: In 1995-1997, serious traumatic injuries were reported to the Registry for 231 Hispanic children aged ≤19 years (rate: 54 per 100,000 person-years) and for 2,123 non-Hispanic white children (56 per 100,000 person-years), yielding an overall rate ratio (RR) of 1.0 (95% confidence interval [CI] 0.8 to 1.1). Motor vehicle crashes and falls accounted for one-third to one-half of the injuries for each group. Infants, children, and adolescents identified as Hispanic had higher rates of injuries related to hot objects (i.e., burns) (RR=2.3; 95% CI 1.3 to 4.1), guns (RR=2.2; 95% CI 1.5 to 3.3), and being cut or pierced (RR=3.5; 95% CI 2.2 to 5.5). The Hispanic group had a lower injury rate for motor vehicle accidents (RR=0.7; 95% CI 0.5 to 0.9). Mortality rates were similar (RR=1.1; 95% CI 0.7 to 1.7). The mean length of hospital stay was 5.5 days for the Hispanic group and 8.8 days for the non-Hispanic white group (difference=3.3 days; 95% CI -0.7 to 7.4).
Conclusions: The study found little difference between Hispanic and non-Hispanic white infants, children, and adolescents in the burden of traumatic pediatric injury. However, burns, guns, drowning, and being pierced/cut appeared to be particularly important mechanisms of injury for Hispanic children. More specific investigations targeted toward these injury types are needed to identify the underlying preventable risk factors involved.
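As a rough illustration of how rate ratios and their confidence intervals are typically computed in studies like this one, the sketch below calculates an incidence rate ratio with an approximate 95% CI on the log scale. The person-year denominators are hypothetical, chosen only so the rates come out near the reported 54 and 56 per 100,000 person-years.
    # Minimal sketch, not the authors' code: incidence rate ratio (group A vs.
    # group B) with a Wald-type 95% CI computed on the log scale.
    import math
    def rate_ratio_ci(events_a, py_a, events_b, py_b, z=1.96):
        rr = (events_a / py_a) / (events_b / py_b)
        se_log_rr = math.sqrt(1.0 / events_a + 1.0 / events_b)
        lower = math.exp(math.log(rr) - z * se_log_rr)
        upper = math.exp(math.log(rr) + z * se_log_rr)
        return rr, lower, upper
    # Hypothetical person-time denominators giving roughly 54 vs. 56 injuries
    # per 100,000 person-years (231 and 2,123 injured children, respectively).
    rr, lo, hi = rate_ratio_ci(231, 428_000, 2123, 3_790_000)
    print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")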
abstract_id: PUBMED:8029740
Development of the Hispanic Low Back Pain Symptom Check List. Study Design: The Low Back Pain Symptom Check List identifies psychological disturbance in patients with low back pain. This report traces the development of a translation for Hispanic populations.
Objective: The study explores the reliability and assesses the equivalence of the translation in providing pain and psychological information.
Summary Of Background Data: A number of psychometric measures appear suitable for routinely assessing psychological disturbance among back injured patients. Unfortunately, there are few measures with language translations that can be applied to Hispanic populations.
Methods: In study 1, the English form was translated by two bilingual physicians. In study 2, reliability was examined using Cronbach's measure of internal consistency. In study 3, the equivalence of the Hispanic and English forms in predicting treatment outcome was examined.
Results: Coefficient alphas and mean pain scores were similar for the English and Hispanic forms. Overall agreement between the two forms in tracking psychological disturbance was 91%. The Hispanic form was equally accurate in predicting treatment outcome.
Conclusions: The Hispanic form is reliable and provides pain and psychological information much like the English form.
abstract_id: PUBMED:33915985
Correlation between Hospital Volume of Severely Injured Patients and In-Hospital Mortality of Severely Injured Pediatric Patients in Japan: A Nationwide 5-Year Retrospective Study. Appropriate trauma care systems suitable for children are needed; thus, this retrospective nationwide study evaluated the correlation between the annual total hospital volume of severely injured patients and the in-hospital mortality of severely injured pediatric patients (SIPP), and compared clinical parameters and outcomes per hospital between low- and high-volume hospitals. During the five-year study period, we enrolled 53,088 severely injured patients (Injury Severity Score ≥16); 2889 (5.4%) were pediatric patients aged <18 years. A significant Spearman correlation was observed between the total number of patients and the number of SIPP per hospital (p < 0.001), and the number of SIPP per hospital who underwent interhospital transportation and/or urgent treatment was correlated with the total number of severely injured patients per hospital. Actual in-hospital mortality of SIPP per hospital was significantly correlated with the total number of patients per hospital (p < 0.001). The total number of SIPP requiring urgent treatment was higher in the high-volume than in the low-volume hospital group. No significant differences in actual in-hospital mortality (p = 0.246, 2.13 (0-8.33) vs. 0 (0-100)) or standardized mortality ratio (SMR) values (p = 0.244, 0.31 (0-0.79) vs. 0 (0-4.87)) were observed between the two groups; however, the 13 high-volume hospitals had an SMR of <1.0. Centralizing severely injured patients, regardless of age, to higher-volume hospitals might contribute to survival benefits for SIPP.
abstract_id: PUBMED:10693079
Injury and employment patterns among Hispanic construction workers. This article describes non-fatal injuries among Hispanic construction workers treated at an emergency department from 1990 to 1998. Medical and interview data were analyzed to evaluate and explain the workers' apparently inflated risk of injury. The majority of the injured Hispanic workers were employed in the less-skilled trades. Compared with other injured workers, Hispanics had a higher proportion of serious injuries and were disadvantaged in terms of training and union status. With the exception of union status, these differences largely disappeared after controlling for trade. The physical, financial, and emotional consequences were more apparent 1 year later for injured Hispanics, even after controlling for trade. These observations suggest that minority status is a predictor of trade and that trade is a predictor of injury risk. In addition to reducing injury hazards, interventions should address the limited employment and union membership options that are available to minority workers in the construction industry.
abstract_id: PUBMED:37690414
Self-repair and resuscitation of viable injured bacteria in chlorinated drinking water: Achromobacter as an example. Chlorine disinfection for the treatment of drinking water can cause injury to the membrane and DNA of bacterial cells and may induce the surviving injured bacteria into a viable but non-culturable (VBNC) state. It is difficult to monitor viable injured bacteria by heterotrophic plate counting (HPC), and their presence is also easily miscalculated in flow cytometry intact cell counting (FCM-ICC). Viable injured bacteria have a potential risk of resuscitation in drinking water distribution systems (DWDSs) and pose a threat to public health when drinking from faucets. In this study, bacteria with injured membranes were isolated from chlorinated drinking water by FCM cell sorting. The culture rate of injured bacteria varied from 0.08% to 2.6% on agar plates and 0.39% to 6.5% in 96-well plates. As the dominant genus among the five identified genera, as well as an opportunistic pathogen with multiple antibiotic resistance, Achromobacter was selected and further studied. After treatment with chlorine at a concentration of 1.2 mg/L, Achromobacter entered into the intermediate injured state on the FCM plot, and the injury on the bacterial surface was observed by electron microscopy. However, the CTC respiratory activity assay showed that 75.0% of the bacteria were still physiologically active, and they entered into a VBNC state. The injured VBNC Achromobacter in sterile drinking water were resuscitated after approximately 25 h. The cellular repair behavior of injured bacteria was studied by Fourier transform infrared attenuated total reflectance (FTIR-ATR) and comet assays. It was found that DNA injury rather than membrane injury was repaired first. The expression of Ku and ligD increased significantly during the DNA repair period, indicating that non-homologous end-joining (NHEJ) played an important role in repairing DNA double-strand breaks. This study deepened the understanding of the effect of chlorine disinfection on bacterial viability in drinking water and will provide support for the improvement of the chlorine disinfection process for the treatment of drinking water.
abstract_id: PUBMED:11097699
Motor vehicle crash fatalities: A comparison of Hispanic and non-Hispanic motorists in Colorado. Study Objectives: We compare the population-based death rates from traffic crashes in the Hispanic and non-Hispanic white populations in a single state, and compare fatally injured Hispanic and non-Hispanic drivers with respect to safety belt use, alcohol involvement, speeding, vehicle age, valid licensure, and urban-rural location.
Methods: Hispanic and non-Hispanic white motorists killed in traffic crashes in 1991-1995 were studied (n=2,272). Data from death certificates (age, sex, education, race, and ethnicity) and the Fatality Analysis Reporting System (FARS; driver, vehicle, and crash information) were merged. Average annual age-adjusted fatality rates were calculated; to compare Hispanic and non-Hispanic white motorists, rate ratios (RR) and 95% confidence intervals (CIs) were calculated. Odds ratios (ORs), adjusted for age, sex, and rural locale, were calculated to measure the association between Hispanic ethnicity and driver and crash characteristics.
Results: Eighty-five percent of FARS records were matched to death certificates. Compared with non-Hispanic white motorists, Hispanics had higher crash-related fatality rates overall (RR 1.75, 95% CI 1.60 to 1.92) and for drivers only (RR 1.62, 95% CI 1.41 to 1.85). After adjustment for age, sex, and rural locale, Hispanic drivers had higher rates of safety belt nonuse (OR 1.81, 95% CI 1.20 to 2.72), legal alcohol intoxication (OR 2.73, 95% CI 1.97 to 3.79), speeding (OR 1.36, 95% CI 0.99 to 1.88), and invalid licensure (OR 2.58, 95% CI 1.78 to 3.75). The average vehicle age for Hispanic drivers (10.1 years, 95% CI 9.3 to 11.0) was greater than for non-Hispanic white motorists (8.8 years, 95% CI 8.4 to 9.2).
Conclusion: Compared with non-Hispanic whites, Hispanic drivers have higher rates of safety belt nonuse, speeding, invalid licensure and alcohol involvement, with correspondingly higher rates of death in traffic crashes. As traffic safety emerges as a public health priority in Hispanic communities, these data may help in developing appropriate and culturally sensitive interventions.
Answer: The concept of the Hispanic paradox refers to the observation that Hispanic individuals have similar or lower all-cause mortality rates compared to non-Hispanic whites (NHWs), despite often having a lower socioeconomic status and associated health risks. When examining whether this paradox extends to injured individuals, the evidence from the provided abstracts is mixed.
One study found that Hispanic trauma patients did not have a greater risk of death than NHW patients in a nationwide representative sample of trauma patients, suggesting that the Hispanic paradox might exist in the injured population (PUBMED:26321296). This study showed that Hispanic patients had lower unadjusted risk of mortality compared to NHWs, and after adjusting for potential confounders, the difference was no longer statistically significant. Moreover, Hispanic patients had significantly lower observed-to-expected mortality ratios than NHWs and African Americans (AAs).
However, other studies highlight specific injury risks and outcomes among Hispanic individuals that may not align with the notion of a paradox. For instance, Hispanic children were found to be more likely to be struck as pedestrians and less likely to be restrained in motor vehicles, suggesting higher risks in certain injury categories (PUBMED:7604898). Another study reported that Hispanic children had higher rates of injuries related to hot objects, guns, and being cut or pierced, although the overall burden of traumatic pediatric injury was similar between Hispanic and non-Hispanic white children (PUBMED:15736327). Additionally, Hispanic construction workers were found to have a higher proportion of serious injuries and were disadvantaged in terms of training and union status (PUBMED:10693079).
In summary, while there is some evidence to suggest that the Hispanic paradox may extend to injured individuals, with lower or equivalent mortality risks compared to other racial/ethnic groups, other studies indicate specific areas of injury risk and outcomes where Hispanic individuals may be at a disadvantage. Therefore, the existence of the Hispanic paradox in the injured population is not conclusively supported across all types of injuries and may vary depending on the context and specific injury mechanisms involved.
Instruction: Is high-dose cholecalciferol justified in children with chronic kidney disease who failed low-dose maintenance therapy?
Abstracts:
abstract_id: PUBMED:23338053
Is high-dose cholecalciferol justified in children with chronic kidney disease who failed low-dose maintenance therapy? Background: We aimed to investigate the effect of single, high-dose intramuscular cholecalciferol on vitamin D3 and intact parathyroid hormone (iPTH) levels in children with chronic kidney disease (CKD).
Methods: Between January 2012 and June 2012, we conducted a prospective, uncontrolled study at the Pediatric Nephrology Unit of King Abdulaziz University Hospital, Jeddah, to investigate the effect of single, high-dose intramuscular vitamin D3 on 25(OH)D3 and iPTH levels in vitamin D insufficient/deficient children with CKD. Serum vitamin D3, iPTH, calcium, phosphate, alkaline phosphatase (ALP), and creatinine levels were measured before intramuscular vitamin D3 (300,000 IU) administration, and these were subsequently repeated at 1 and 3 months after treatment. Statistical analysis was performed with the Statistical Package for the Social Sciences (SPSS Inc., Chicago, IL, USA).
Results: Nineteen children fulfilled the criteria. At 3 months after treatment, vitamin D3 levels were significantly higher than at baseline (p < 0.001) but lower than the levels at 1 month. iPTH levels decreased significantly at 3 months (p = 0.01); however, the drop in iPTH levels was not significant at 1 month (p = 0.447). There were no changes in calcium, phosphate, ALP, or creatinine levels after treatment.
Conclusions: Single-dose intramuscular vitamin D3 (300,000 IU) resulted in significant improvement of vitamin D3 and iPTH levels in children with CKD.
abstract_id: PUBMED:28770692
Effect of high-dose oral cholecalciferol on cardiac mechanics in children with chronic kidney disease. Cardiovascular factors are an important cause of mortality in chronic kidney disease, and vitamin-D deficiency is common in this patient population. Therefore, we aimed to investigate the effect of oral cholecalciferol on cardiac mechanics in children with chronic kidney disease. A total of 41 children with chronic kidney disease - the patient group - and 24 healthy subjects - the control group - free of any underlying cardiac or renal disease with low 25-hydroxyvitamin-D3 levels were evaluated by conventional tissue Doppler imaging and two-dimensional speckle-tracking echocardiography, both at baseline and following Stoss vitamin-D supplementation. Left ventricular strain and strain rate values were compared between the study groups. Initial longitudinal and radial strain as well as strain rate values of the left ventricle were significantly lower in patients. After vitamin-D supplementation, these improved significantly in patients, whereas no significant change was observed in the control group. Our study showed that, although conventional and tissue Doppler imaging methods could not determine any effect, two-dimensional speckle-tracking echocardiography revealed the favourable effects of high-dose cholecalciferol on cardiac mechanics, implying the importance of vitamin-D supplementation in children with chronic kidney disease.
abstract_id: PUBMED:25262147
Low-dose cholecalciferol supplementation and dual vitamin D therapy in haemodialysis patients. Background: Traditionally, secondary hyperparathyroidism (SHPT) due to low calcitriol synthesis in failing kidneys has been treated with synthetic vitamin D receptor (VDR) activators. Recently, the importance of low native vitamin D status beyond the issue of SHPT has also been recognized in these patients. The aim of this work was to evaluate the effect of cholecalciferol supplementation in haemodialysis patients with low vitamin D serum levels. Another aim was to evaluate dual vitamin D therapy (cholecalciferol supplementation plus paricalcitol) in haemodialysis patients with vitamin D deficiency and concomitant SHPT.
Methods: Ninety clinically stable maintenance haemodialysis patients were included. Supervised cholecalciferol supplementation was administered due to low vitamin D status. Patients with SHPT were also treated with synthetic VDR activator. Two pre hoc subgroups for statistical analysis were formed: patients treated solely with cholecalciferol (N=34; 5,000 IU once weekly) and patients treated with a combination of cholecalciferol (identical dose, i.e. 5,000 IU/week) plus paricalcitol (N=34, median dose 10 μg/week). Follow-up visit was scheduled 15 weeks later. Serum concentrations of calcidiol (25-D), parathyroid hormone (PTH) and beta-cross laps (CTX) were assessed at baseline and at follow-up. Serum calcium, phosphate and alkaline phosphatase (ALP) were monitored monthly. Only non-calcium gastrointestinal phosphate binders were administered. Dialysate calcium was 1.5 mmol/L in all patients, and no oral calcium-containing preparations were prescribed. Depending on data distribution, parametric or nonparametric statistical methods were used for comparison within each group (i.e. baseline vs. follow-up data) as well as between groups.
Results: In the whole group of 90 patients, the mean baseline 25-D serum level was 20.3 (standard deviation 8.7) nmol/L, and it increased to 66.8 (19) nmol/L (p<0.0001) after supplementation. In both preformed subgroups, the effect of vitamin D supplementation was almost identical. In cholecalciferol monotherapy, 25-D levels increased from 18.4 (8.2) to 68.6 (21.2) and in dual vitamin D therapy from 18.4 (5.0) to 67.6 (17.7) nmol/L (both p<0.0001). In addition, both treatment modalities substantially decreased serum PTH levels: from 21.7 (interquartile range 17.3; 35.4) to 18.1 pmol/L (15.3; 24.7) in monotherapy (p=0.05) and from 38.6 (31.8; 53.3) to 33.9 pmol/L (26.1; 47.5) in dual vitamin D therapy (p=0.01). Serum calcium, phosphate, ALP and CTX did not change. We did not observe any episode of hypercalcemia in any subject during the whole period of follow-up. At baseline, slightly lower 25-D levels were observed in diabetic than in non-diabetic patients. This difference disappeared after supplementation. Vitamin D status and its changes were not related to the patient's age.
Conclusion: Low 25-D levels were very common in haemodialysis patients. They were safely and effectively corrected with supervised low-dose cholecalciferol supplementation. In patients with higher baseline PTH levels, dual vitamin D therapy (cholecalciferol plus paricalcitol) was safely and effectively used.
abstract_id: PUBMED:38397979
The Efficacy and Safety of High-Dose Cholecalciferol Therapy in Hemodialysis Patients. Vitamin D deficiency and insufficiency are highly prevalent in CKD, affecting over 80% of hemodialysis (HD) patients and requiring therapeutic intervention. Nephrological societies suggest the administration of cholecalciferol according to the guidelines for the general population. The aim of the observational study was to evaluate the efficacy and safety of the therapy with a high dose of cholecalciferol in HD patients with 25(OH)D deficiency and insufficiency to reach the target serum 25(OH)D level > 30 ng/mL. A total of 22 patients (16 M), with an average age of 72.5 ± 13.03 years and 25(OH)D concentration of 13.05 (9.00-17.90) ng/mL, were administered cholecalciferol at a therapeutic dose of 70,000 IU/week (20,000 IU + 20,000 IU + 30,000 IU, immediately after each dialysis session). All patients achieved the target value > 30 ng/mL, with a mean time of 2.86 ± 1.87 weeks. In the first week, the target level of 25(OH)D (100%) was reached by 2 patients (9.09%), in the second week by 15 patients (68.18%), in the fourth week by 18 patients (81.18%), and in the ninth week by all 22 patients (100%). A significant increase in 1,25(OH)2D levels was observed during the study. However, only 2 patients (9.09%) achieved a concentration of 1,25(OH)2D above 25 ng/mL-the lower limit of the reference range. The intact PTH concentrations remained unchanged during the observation period. No episodes of hypercalcemia were detected, and one new episode of hyperphosphatemia was observed. In conclusion, our study showed that the administration of a high-therapeutic dose of cholecalciferol allowed for a quick, effective, and safe leveling of 25(OH)D concentration in HD patients.
abstract_id: PUBMED:34286472
Effects of high- vs low-dose native vitamin D on albuminuria and the renin-angiotensin-aldosterone system: a randomized pilot study. Background: Residual albuminuria is associated with an increased risk of progression to ESKD. We tested whether supplementation with native vitamin D could reduce albuminuria in stable CKD patients under maximal renin-angiotensin system (RAS) blockade.
Methods: We conducted a randomized controlled study of high-dose (cholecalciferol 100,000 IU per 10 days over 1 month) vs low-dose (ergocalciferol 400 IU/day over 1 month) supplementation with native vitamin D on the urinary albumin/creatinine ratio, blood pressure and the RAS over 1 month in stable CKD patients with albuminuria and maximum tolerated RAS blockade.
Results: We included 31 patients, 21 in the high-dose group and 10 in the low-dose group. In contrast with the low dose, high-dose vitamin D normalized plasma 25(OH)D and decreased iPTH but slightly increased plasma phosphate. High-dose vitamin D decreased the geometric mean UACR from 99.8 mg/mmol (95% CI 60.4-165.1) to 84.7 mg/mmol (95% CI 51.7-138.8, p = 0.046). In the low-dose group, the change in geometric mean UACR was not significant. Blood pressure, urinary 24-h aldosterone, and the peaks and AUC of active renin concentrations after acute stimulation by a single dose of 100 mg captopril were unaffected by supplementation with native vitamin D, irrespective of the dose. Native vitamin D supplementation was well tolerated.
Conclusions: We found a small (-15%) but significant decrease in albuminuria after high-dose vitamin D supplementation. We found no effect of vitamin D repletion on blood pressure and the systemic RAS, concordant with recent clinical studies.
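Because albuminuria is strongly skewed, the trial above summarizes UACR as a geometric mean; the sketch below illustrates how a geometric mean and the quoted relative change (about -15%) are computed. The individual values are invented for illustration and are not trial data.
    # Illustrative only: geometric mean of a skewed variable (e.g., UACR) and
    # the relative change between two visits. All values are hypothetical.
    import math
    def geometric_mean(values):
        return math.exp(sum(math.log(v) for v in values) / len(values))
    baseline = [40.0, 80.0, 120.0, 200.0, 95.0]
    follow_up = [35.0, 70.0, 100.0, 165.0, 80.0]
    gm_before = geometric_mean(baseline)
    gm_after = geometric_mean(follow_up)
    relative_change = (gm_after - gm_before) / gm_before * 100
    print(f"{gm_before:.1f} -> {gm_after:.1f} mg/mmol ({relative_change:+.1f}%)")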
abstract_id: PUBMED:34392411
Randomized trial of two maintenance doses of vitamin D in children with chronic kidney disease. Background: Correction of nutritional vitamin D deficiency is recommended in children with chronic kidney disease (CKD). The optimal daily dose of vitamin D to achieve or maintain vitamin D sufficiency is unknown.
Methods: We conducted a phase III, double-blind, randomized trial of two doses of vitamin D3 in children ≥ 9 years of age with CKD stages 3-5 or kidney transplant recipients. Patients were randomized to 1000 IU or 4000 IU of daily vitamin D3 orally. We measured 25-hydroxyvitamin D (25(OH)D) levels at baseline, 3 months and 6 months. The primary efficacy outcome was the percentage of patients who were vitamin D replete (25(OH)D ≥ 30 ng/mL) at 6 months.
Results: Ninety-eight patients were enrolled: 49 randomized into each group. Eighty (81.6%) patients completed the study and were analyzed. Baseline plasma 25(OH)D levels were ≥ 30 ng/mL in 12 (35.3%) and 12 (27.3%) patients in the 1000 IU and 4000 IU treatment groups, respectively. At 6 months, plasma 25(OH)D levels were ≥ 30 ng/mL in 33.3% (95% CI: 18.0-51.8%) and 74.4% (95% CI: 58.8-86.5%) in the 1000 IU and 4000 IU treatment groups, respectively (p = 0.0008). None of the patients developed vitamin D toxicity or hypercalcemia.
Conclusions: In children with CKD, 1000 IU of daily vitamin D3 is unlikely to achieve or maintain a plasma 25(OH)D ≥ 30 ng/mL. In children with CKD stages 3-5, a dose of vitamin D3 4000 IU daily was effective in achieving or maintaining vitamin D sufficiency.
Clinical Trial Registration: ClinicalTrials.gov identifier: NCT01909115.
abstract_id: PUBMED:36698076
The effect of high-dose vitamin D supplementation on hepcidin-25 and erythropoiesis in patients with chronic kidney disease. Background: Hepcidin is considered to play a central role in the pathophysiology of renal anemia. Recent studies in healthy individuals have demonstrated a suppressive effect of vitamin D (VD) on the expression of hepcidin. In this post-hoc analysis based on a randomized controlled study, we evaluated the effect of supplementing chronic kidney disease (CKD) patients (stage G3-G4) with a high daily dose of native VD on serum levels of hepcidin-25, the hepcidin/ferritin ratio, as well as on markers of erythropoiesis.
Methods: Patients with CKD stage G3-G4 included in a double blind, randomized, placebo (PBO) controlled study with available hepcidin measurements were analyzed. Study subjects received either 8000 international units (IU) of cholecalciferol daily or PBO for 12 weeks. We evaluated the change in markers of hepcidin expression, erythropoiesis, and iron status from baseline to week 12 and compared the change between the groups.
Results: Eighty-five patients completed the study. Calcitriol, but not 25-hydroxyvitamin D (25(OH)D), was inversely correlated with serum levels of hepcidin-25 (rho = -0.38, p < 0.01 and rho = -0.02, p = 0.89, respectively) at baseline. Supplementation with VD significantly raised the serum concentration of 25(OH)D in the treatment group (from 54 (39-71) to 156 (120-190) nmol/L; p < 0.01) but had no effect on any of the markers of hepcidin, erythropoiesis, or iron status in the entire cohort. However, we did observe an increase in hemoglobin (HB) levels and transferrin saturation (TSAT) as compared to the PBO group in a subgroup of patients with low baseline 25(OH)D levels (< 56 nmol/L). In contrast, in patients with high baseline 25(OH)D values (≥ 56 nmol/L), VD supplementation was associated with a decrease in HB levels and TSAT (p = 0.056) within the VD group, in addition to a decrease in hepcidin levels as compared to the PBO group.
Conclusion: High-dose VD supplementation had no discernible effect on markers of hepcidin or erythropoiesis in the entire study cohort. However, in patients with low baseline 25(OH)D levels, high-dose VD supplementation was associated with beneficial effects on erythropoiesis and iron availability. In contrast, in patients with elevated baseline 25(OH)D levels, high-dose VD supplementation resulted in a decrease in hepcidin levels, most likely due to a deterioration in iron status.
abstract_id: PUBMED:24101420
Very high-dose cholecalciferol and arteriovenous fistula maturation in ESRD: a randomized, double-blind, placebo-controlled pilot study. Purpose: While vitamin D is critical for optimal skeletal health, it also appears to play a significant role in vascular homeostasis. This pilot study compared arteriovenous (AV) access outcomes following cholecalciferol supplementation compared to placebo in end-stage renal disease patients preparing to undergo AV access creation.
Methods: A total of 52 adult hemodialysis patients preparing for arteriovenous fistula (AVF) creation were randomized to receive perioperative high-dose cholecalciferol versus placebo in this double-blind, randomized, placebo-controlled pilot study. The primary outcome was the mean response to high-dose oral cholecalciferol versus placebo, and the secondary outcome was AV access maturation at 6 months. Logistic regression was used to assess the association between AV access maturation and baseline, posttreatment and overall change in vitamin D concentration.
Results: A total of 45% of cholecalciferol-treated and 54% of placebo-treated patients were successfully using their AVF or arteriovenous graft (AVG) at 6 months (p=0.8). Baseline serum concentrations of 25(OH)D and 1,25(OH)2D did not differ between those who experienced AVF or AVG maturation and those who did not (p=0.22 and 0.59, respectively). Similarly, there was no relationship between AVF or AVG maturation and posttreatment serum 25(OH)D and 1,25(OH)2D concentration (p=0.24 and 0.51, respectively).
Conclusions: Perioperative high-dose vitamin D3 therapy does correct 25(OH)D level but does not appear to have an association with AV access maturation rates. Future research may include extended preoperative vitamin D3 therapy in a larger population or in certain subpopulations at high risk for AVF failure.
abstract_id: PUBMED:25656524
Effects of a single, high oral dose of 25-hydroxycholecalciferol on the mineral metabolism markers in hemodialysis patients. Vitamin D deficiency is common in dialysis patients with chronic kidney disease. Low levels have been associated with increased cardiovascular risk and mortality. We evaluated the administration of a high, single oral dose of 25-OH cholecalciferol (3 mg of Hidroferol, 180,000 IU) in patients on chronic hemodialysis. The 94 chronic hemodialysis patients with vitamin D deficiency (25(OH)D <30 ng/mL) included in the study were randomized into two groups. Follow-up time was 16 weeks. Neither the usual treatment for controlling Ca/P levels nor the dialysis bath (calcium of 2.5 mEq/L) was modified. Of the 86 patients who finished the study, 42 were in the treated group and 44 in the control group. An increase in 25(OH)D levels was observed in the treated group that persisted after 16 weeks and was associated with a significant decrease in parathyroid hormone (PTH) levels during the 8 weeks post-treatment. Baseline 1,25(OH)2D levels of the treated group increased two weeks after treatment (5.9 vs. 21.9 pg/mL, P<0.001) but gradually decreased to 8.4 pg/mL by week 16. The administration of a single 3 mg dose of 25-OH cholecalciferol seems safe in patients on hemodialysis and maintains sufficient levels of 25(OH)D with a decrease in PTH for 3 months.
abstract_id: PUBMED:22237061
Efficacy and safety of a short course of very-high-dose cholecalciferol in hemodialysis. Background: Vitamin D deficiency is highly prevalent among hemodialysis patients, but little data exist in support of an optimal repletion regimen.
Objective: The objective was to ascertain the efficacy of weekly very-high-dose cholecalciferol (vitamin D(3)) in correcting vitamin D insufficiency and deficiency in patients with stage 5D chronic kidney disease.
Design: We conducted a prospective, double-blind, randomized controlled pilot study that compared placebo with very high doses of oral cholecalciferol for 3 wk (200,000 IU/wk) in hemodialysis patients. We examined the rate of correction of vitamin D insufficiency or deficiency and the effect of treatment on markers of mineral metabolism and routine laboratory variables.
Results: Twenty-seven subjects received placebo, and 25 received cholecalciferol. The majority (94%) of subjects had serum 25-hydroxyvitamin D [25(OH)D] concentrations <30 ng/mL. Study groups were similar with respect to baseline clinical characteristics, with the exception of hemoglobin concentrations, which were lower in the cholecalciferol-treated group (P < 0.04). At follow-up, 90.5% of subjects treated with cholecalciferol achieved serum 25(OH)D concentrations ≥30 ng/mL in contrast to 13.6% of the placebo group. There were no significant changes in serum calcium, phosphate, or intact parathyroid hormone during the study.
Conclusion: Short-term, high-dose oral cholecalciferol treatment of vitamin D deficiency in hemodialysis patients appears to be effective and with no evidence of toxic effects. This trial was registered at clinicaltrials.gov as NCT00912782.
Answer: Based on the available abstracts, high-dose cholecalciferol appears to be justified in children with chronic kidney disease (CKD) who have failed low-dose maintenance therapy. A study by PUBMED:23338053 demonstrated that a single high-dose intramuscular injection of vitamin D3 (300,000 IU) significantly improved vitamin D3 and intact parathyroid hormone (iPTH) levels in children with CKD. Another study by PUBMED:34392411 found that in children with CKD, a daily dose of 4000 IU of vitamin D3 was effective in achieving or maintaining vitamin D sufficiency, whereas a lower dose of 1000 IU daily was unlikely to achieve the same results.
Additionally, PUBMED:28770692 reported that high-dose oral cholecalciferol had favorable effects on cardiac mechanics in children with CKD, suggesting potential cardiovascular benefits. These findings imply that high-dose cholecalciferol can be beneficial in pediatric CKD patients who do not respond adequately to lower doses.
However, it is important to monitor for potential adverse effects, as vitamin D toxicity and hypercalcemia are concerns with high-dose therapy. The studies mentioned did not report significant changes in calcium levels or other adverse effects, indicating that high-dose cholecalciferol can be safe when administered carefully and with appropriate monitoring (PUBMED:23338053, PUBMED:34392411).
In conclusion, high-dose cholecalciferol therapy may be justified in children with CKD who have not responded to low-dose maintenance therapy, as it can improve vitamin D status and potentially provide additional health benefits. However, individual patient factors and careful monitoring are essential to ensure safety.
Instruction: Can percutaneous endoscopic jejunostomy prevent gastroesophageal reflux in patients with preexisting esophagitis?
Abstracts:
abstract_id: PUBMED:11151874
Can percutaneous endoscopic jejunostomy prevent gastroesophageal reflux in patients with preexisting esophagitis? Objective: Percutaneous endoscopic jejunostomy has been used for preventing pulmonary aspiration arising from gastric contents by concomitant jejunal feeding and gastric decompression in susceptible patients. Our objective was to evaluate gastroesophageal reflux in patients with percutaneous endoscopic jejunostomy tube feeding.
Methods: Eight cerebrovascular accident patients with percutaneous endoscopic jejunostomy tube placement caused by reflux esophagitis with hematemesis, food regurgitation or vomiting, and/or recurrent aspiration pneumonia were tested for gastroesophageal reflux using 24-h esophageal pH monitoring during continuous jejunal liquid meal or saline infusion with concomitant gastric decompression. Twenty-four hour pH monitoring was also performed during intragastric feeding on a different day.
Results: During the liquid meal feeding period, percutaneous endoscopic jejunostomy feeding reduced esophageal acid exposure by 46% [12.9% (4.9-28.2%) versus 24.0% (19.0-40.6%), p = 0.01], compared to intragastric feeding. However, during the period of jejunal tube infusion, esophageal acid exposure was significantly lower during saline infusion than during meal infusion [3.2% (0.0-10.8%) versus 12.9% (4.9-28.2%), p = 0.008].
Conclusion: Percutaneous endoscopic jejunostomy feeding reduced but did not eliminate gastroesophageal reflux, compared to intragastric feeding in patients with severe gastroesophageal reflux. However, gastroesophageal reflux during percutaneous jejunal feeding was associated with meal infusion. This might, in part, explain the failure of percutaneous endoscopic jejunostomy tube placement to prevent pulmonary aspiration.
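The 46% figure in this abstract is a relative reduction in the median fraction of monitoring time with esophageal acid exposure; the following one-line calculation reproduces it from the medians reported above (24.0% during intragastric feeding versus 12.9% during jejunal meal feeding).
    # Worked arithmetic for the relative reduction quoted in the abstract:
    # median esophageal acid exposure fell from 24.0% of monitored time
    # (intragastric feeding) to 12.9% (jejunal meal feeding).
    intragastric = 24.0
    jejunal_meal = 12.9
    relative_reduction = (intragastric - jejunal_meal) / intragastric * 100
    print(f"{relative_reduction:.0f}% relative reduction")  # ~46%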
abstract_id: PUBMED:1689950
Value of diagnostic upper endoscopy preceding percutaneous gastrostomy. Percutaneous gastrostomies, placed endoscopically or radiographically, have supplanted their surgical counterparts in many institutions. Although there are few comparative data, a cost advantage is claimed for the radiographic method, as no endoscopy is required. We performed upper endoscopy on 201 patients prior to attempted percutaneous endoscopic gastrostomy (PEG). The medical records of these patients were reviewed. Data collected included endoscopic findings which precluded gastrostomy, necessitated conversion to jejunostomy, or led to changes in medical management. For a total of 73 patients (36%), findings at pregastrostomy endoscopy led to major changes in medical management, including 35 patients with severe reflux esophagitis, 29 patients with peptic ulcers, and two patients with gastric outlet obstruction. Appropriate treatment of such conditions may improve morbidity, mortality, and cost by reducing length of hospital stay. The authors recommend diagnostic upper endoscopy in patients undergoing percutaneous gastrostomy, regardless of placement method.
abstract_id: PUBMED:29340935
Endoscopic Treatments of GERD. Purpose Of Review: Endoscopic therapies for gastroesophageal reflux disease (GERD) are minimally invasive techniques which fill the gap between medical therapy with proton pump inhibitors (PPIs) and surgical fundoplication. The main endoscopic therapies currently available in the USA are transoral incisionless fundoplication (TIF) using the EsophyX device or, less commonly, the Medigus Ultrasonic Surgical Endostapler, and radiofrequency energy delivery to the lower esophageal sphincter using the Stretta device. Our aim was to examine the available evidence for these therapies.
Recent Findings: Consistent evidence for subjective improvement is available for fundoplication using EsophyX and Stretta, but improvement in objective parameters for GERD is not seen or evaluated in all the studies. There is a reduction in long-term efficacy seen with TIF and also to a lesser extent with Stretta. Endoscopic therapies do not replace surgical fundoplication and therefore are useful in patients with breakthrough symptoms on PPI such as regurgitation or those reluctant to take long-term PPI. An ideal patient is one who has symptoms and objective evidence of GERD such as abnormal pH study or erosive esophagitis without any significant anatomic distortion such as a hiatal hernia. Since these are endoluminal procedures, they do not address the hiatal hernia reduction or repair of crural defect. Adequate training in the technique and careful patient selection are essential prior to embarking on these procedures.
abstract_id: PUBMED:26945412
Risk of Esophageal Cancer Following Percutaneous Endoscopic Gastrostomy in Head and Neck Cancer Patients: A Nationwide Population-Based Cohort Study in Taiwan. Esophageal cancers account for the majority of synchronous or metachronous head and neck cancers. This study examined the risk of esophageal cancer following percutaneous endoscopic gastrostomy (PEG) in head and neck cancer patients using the Taiwan National Health Insurance Research Database. From 1997 to 2010, we identified and analyzed 1851 PEG patients and 3702 sex-, age-, and index date-matched controls. After adjusting for esophagitis, esophageal stricture, esophageal reflux, and primary sites, the PEG cohort had a higher adjusted hazard ratio (2.31, 95% confidence interval [CI] = 1.09-4.09) of developing esophageal cancer than the controls. Primary tumors in the oropharynx, hypopharynx, and larynx were associated with a higher incidence of esophageal cancer. The adjusted hazard ratios were 1.49 (95% CI = 1.01-1.88), 3.99 (95% CI = 2.76-4.98), and 1.98 (95% CI = 1.11-2.76), respectively. Head and neck cancer patients treated with PEG were associated with a higher risk of developing esophageal cancer, which could be fixed by surgically placed tubes.
abstract_id: PUBMED:10839831
Esophageal biopsy does not predict clinical outcome after percutaneous endoscopic gastrostomy in children. Clinically symptomatic gastroesophageal reflux may occur after percutaneous endoscopic gastrostomy (PEG). Preoperative evaluation for gastroesophageal reflux does not reliably predict those individuals who will develop reflux unresponsive to medical management after PEG. Esophageal histology at the time of PEG might be used to identify patients at risk for developing intractable gastroesophageal reflux. The study aim was to correlate the clinical outcome after PEG with esophageal histology at the time of PEG insertion. A retrospective review of 68 consecutive children who had an esophageal biopsy obtained at the time of PEG insertion was undertaken. Preoperative evaluation, esophageal histology, and clinical outcomes were compared. Preoperative gastroesophageal reflux was present in 23% of upper gastrointestinal series performed, in 10% of pH probe studies, and in 29% of reflux scans. Histology was normal in 57% of esophageal biopsies obtained at the time of PEG insertion. Symptomatic gastroesophageal reflux requiring antireflux surgery or conversion to gastrojejunostomy developed in 10% of patients after PEG placement. Only one of these patients had esophagitis on biopsy. In conclusion, preoperative esophageal histology does not reliably predict the development of symptomatic gastroesophageal reflux after PEG placement.
abstract_id: PUBMED:36161053
Is patient satisfaction sufficient to validate endoscopic anti-reflux treatments? Endoscopic anti-reflux treatment is emerging as a new option for gastro-esophageal reflux disease (GERD) treatment in patients with the same indications as for laparoscopic fundoplication. There are many techniques; the first of these, transoral incisionless fundoplication (TIF) and nonablative radiofrequency (STRETTA), have been tested in comparative studies and randomized controlled trials, whereas the other, more recent ones still require deeper evaluation. The purpose of such evaluation is to verify whether reflux is abolished or significantly reduced after intervention, whether there is a valid high pressure zone at the gastroesophageal junction, and whether esophagitis, when present, has disappeared. Unfortunately, in a certain number of cases, and especially for the more recently introduced techniques, the evaluation has been based almost exclusively on subjective criteria, such as improvement in the quality of life, remission of heartburn and regurgitation, and reduction or suspension of antacid and antisecretory drug consumption. However, with the most studied techniques such as TIF and STRETTA, an improvement in symptoms better than that of laparoscopic fundoplication can often be observed, whereas the number of acid episodes and acid exposure time are similar or higher, as if the acid refluxes are better tolerated by these patients. The suspicion that a local hyposensitivity develops after endoscopic anti-reflux intervention seems confirmed by the Bernstein test, at least for STRETTA. This examination should be done for all the other techniques, both old and new, to identify the ones that reassure rather than cure. In conclusion, the evaluation of the effectiveness of the endoscopic anti-reflux techniques should not be based exclusively on subjective criteria, but should also be confirmed by objective examinations, because there might be a gap between the improvement in symptoms declared by the patient and the underlying pathophysiologic alterations of GERD.
abstract_id: PUBMED:34786233
A Rare Case of Gastric Outlet Obstruction With Severe Reflux Esophagitis Due to a Percutaneous Endoscopic Gastrostomy Tube Balloon Displacement. In patients with a functional gastrointestinal (GI) tract, enteral feeding is preferred over parenteral feeding as it has fewer complications and a relatively lower cost. Nasogastric and nasoenteric feeding tubes are available options, but when long-term enteral feeding is desired, a percutaneous endoscopic gastrostomy (PEG) tube is more convenient. A PEG tube can be associated with multiple complications; however, displacement causing gastric outlet obstruction (GOO) is a rare one. Here we present the case of an 81-year-old woman with dementia who presented with upper GI bleeding and was found to have GOO causing reflux esophagitis due to PEG tube displacement.
abstract_id: PUBMED:11868014
Is endoscopic follow-up needed in pediatric patients who undergo surgery for GERD? Background: This study evaluated the role of endoscopy in the postoperative management of pediatric patients who undergo fundoplication for GERD.
Methods: Medical records of 109 otherwise healthy children who underwent operation for GERD from 1979 to 1996 were reviewed. Patients with respiratory symptoms or esophageal stenosis were excluded. All patients underwent endoscopic surveillance with endoscopy being performed in the early (within 1 year) and late (between 1 and 2 years) postoperative periods. Specifically evaluated were the appearance of the wrap and evidence of esophagitis. The risk of a recurrence of esophagitis based on wrap appearance and the presence of clinical symptoms in patients with endoscopic evidence of esophagitis were also evaluated.
Results: At early endoscopy 3 patients with an intact wrap and 8 with a defective wrap had esophagitis (not significant). At late endoscopy, 5 patients with an intact wrap and 17 with a defective wrap had esophagitis (p < 0.05).
Conclusions: An intact wrap does not prevent recurrence of GERD. Such an occurrence is even more likely when endoscopy demonstrates a defective wrap. For all patients who have undergone fundoplication, endoscopic evaluation at 1 to 2 years is recommended to detect esophagitis in the absence of symptoms so treatment can be initiated before symptoms occur.
abstract_id: PUBMED:34273019
Per-oral endoscopic dual myotomy for the treatment of achalasia. Background: Repeat per-oral endoscopic myotomy is occasionally performed for persistent/recurrent symptoms in patients with achalasia, and yields favorable outcomes. We investigated a novel technique, per-oral endoscopic dual myotomy (dual-POEM), where a second myotomy was performed during a single session to augment the efficacy and avoid repeat interventions. The aim of this study was to evaluate its feasibility, safety and efficacy.
Methods: Consecutive patients diagnosed with achalasia who underwent dual-POEM (1/2018-5/2019) were prospectively collected and retrospectively analyzed. Patients with baseline Eckardt score ≥ 9, ≥ 10 years of symptoms, and/or having prior interventions other than myotomy received dual-POEM. The primary outcome was clinical success (Eckardt score ≤ 3). Secondary outcomes were procedure-related adverse events, change in lower esophageal sphincter (LES) pressure, and reflux complications.
Results: Seventeen patients received dual-POEM. Procedure-related adverse events were observed in 2 (11.8%) patients (mucosal injury and pneumonitis). Both were minor in severity. During a median follow-up of 33 months (interquartile range, IQR [31,35]; range, 19-36), clinical success was achieved in 16 (94.1%) patients. The median Eckardt score decreased from 9 (IQR [8, 11.5]; range 7-12) to 1 (IQR [1, 2]; range 0-4) (P < 0.001), and LES pressure decreased from 25.8 mmHg (IQR [21.7, 33.5]; range 17.7-46.3) to 7.4 mmHg (IQR [6.3, 10.4]; range 2.2-12.6) (P < 0.001). Seven (41.2%) patients developed postprocedural reflux either by gastroesophageal reflux disease questionnaire or esophagitis endoscopically, all successfully treated with proton pump inhibitors.
Conclusion: Dual-POEM preliminarily demonstrated high efficacy with a favorable safety profile in patients with achalasia with predictors of treatment failure.
abstract_id: PUBMED:32789722
Intraoperative impedance planimetry (EndoFLIP™) results and development of esophagitis in patients undergoing peroral endoscopic myotomy (POEM). Introduction: Peroral endoscopic myotomy (POEM) is a minimally invasive treatment for achalasia. Considerable evidence demonstrates a high incidence of gastroesophageal reflux disease (GERD) after POEM. The endoluminal functional lumen imaging probe (FLIP) uses impedance planimetry to obtain objective measurements of the gastroesophageal junction. This study aims to determine whether FLIP measurements collected at the time of POEM are associated with the development of reflux esophagitis postoperatively.
Methods: Patients who underwent POEM between 2012 and 2019 who subsequently had esophagogastroduodenoscopy (EGD) were included. Intraoperative FLIP measurements before and after myotomy, clinical data from EGD, and reflux specific quality of life questionnaires were collected. Comparisons between groups were made using the Wilcoxon rank-sum and Fisher's exact tests. Receiver operating characteristic (ROC) curves were used to determine optimal cutoffs of measurements to classify patients into those with high risk of postoperative esophagitis and those with lower risk.
Results: A total of 43 patients were included. Of those, 25 (58.1%) were found to have esophagitis on postoperative EGD: four patients (16%) with LA Grade A, five (20%) with LA Grade B, 11 (44%) with LA Grade C and two (8%) with LA grade D esophagitis. Patients with a final distensibility index ≥ 2.7 and a final cross-sectional area ≥ 83 were significantly more likely to develop esophagitis on postoperative EGD (p = 0.016 and p = 0.008, respectively). Gastroesophageal reflux disease health-related quality of life (GERD-HRQL) and reflux symptom index (RSI) scores were not significantly different in patients who developed esophagitis and those who did not.
Conclusion: Reflux affects some patients after POEM. We show that FLIP measurements collected during POEM may help predict which patients are more likely to develop reflux esophagitis postoperatively. Subjective symptoms on quality of life questionnaires are not reliable in determining which patients are at risk for esophagitis.
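The abstract above reports that receiver operating characteristic (ROC) curves were used to derive intraoperative FLIP cutoffs (for example, a final distensibility index ≥ 2.7) separating patients who developed esophagitis from those who did not. The study does not state how the optimal threshold was selected; a common convention is Youden's J statistic. The sketch below is purely illustrative: the measurements are invented, Youden's J is an assumed criterion, and none of this is the study's actual code.

```python
# Illustrative sketch (hypothetical data): choosing a cutoff for a continuous
# intraoperative measurement (e.g., final distensibility index) that best
# separates patients who later developed esophagitis from those who did not,
# using an ROC curve and Youden's J statistic.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = esophagitis on postoperative EGD, 0 = none.
esophagitis = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1])
final_di    = np.array([3.1, 2.9, 2.1, 3.4, 1.8, 2.5, 2.8, 3.0,
                        2.2, 3.6, 1.9, 2.7, 2.4, 2.0, 3.2, 2.6])

fpr, tpr, thresholds = roc_curve(esophagitis, final_di)
youden_j = tpr - fpr                 # sensitivity + specificity - 1
best = np.argmax(youden_j)           # index of the threshold maximizing Youden's J

print(f"AUC: {roc_auc_score(esophagitis, final_di):.2f}")
print(f"Optimal cutoff: {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```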
Answer: Percutaneous endoscopic jejunostomy (PEJ) has been utilized to prevent pulmonary aspiration from gastric contents by enabling jejunal feeding and gastric decompression in susceptible patients. However, the effectiveness of PEJ in preventing gastroesophageal reflux (GER) in patients with preexisting esophagitis is not absolute. A study involving eight cerebrovascular accident patients with PEJ due to reflux esophagitis showed that PEJ feeding reduced esophageal acid exposure by 46% compared to intragastric feeding. Despite this reduction, PEJ did not completely eliminate GER, and reflux was still associated with meal infusion, which could partly explain the failure of PEJ to prevent pulmonary aspiration (PUBMED:11151874).
Additionally, the presence of severe reflux esophagitis in patients undergoing percutaneous gastrostomy, which may necessitate conversion to jejunostomy, highlights the importance of addressing GER in these patients. Diagnostic upper endoscopy prior to percutaneous endoscopic gastrostomy (PEG) placement has been recommended to identify and appropriately treat conditions such as severe reflux esophagitis, which may improve morbidity, mortality, and cost by reducing hospital stay length (PUBMED:1689950).
In conclusion, while PEJ can reduce the occurrence of GER compared to intragastric feeding in patients with severe GER, it does not completely prevent it, especially during meal infusion. Therefore, PEJ may not be entirely effective in preventing GER in patients with preexisting esophagitis, and additional measures or treatments may be necessary to manage GER in these patients. |
Instruction: Does topical use of autologous serum help to reduce post-tonsillectomy morbidity?
Abstracts:
abstract_id: PUBMED:27210022
Does topical use of autologous serum help to reduce post-tonsillectomy morbidity? A prospective, controlled preliminary study. Background: To evaluate the effects of autologous serum usage on throat pain, haemorrhage and tonsillar fossa epithelisation in patients after tonsillectomy.
Methods: Thirty-two patients (aged 4-15 years) were included in the study. Tonsillectomy was performed and autologous serum was administered topically to the right tonsillar fossa during the operation, and at 8 and 24 hours post-operatively. The left side served as the control. A visual analogue scale was used to record the patient's pain every day. Each patient's oropharynx was observed on the 5th and 10th post-operative days to examine bleeding and epithelisation.
Results: The pain scores for the side administered autologous serum were significantly lower than those for the control side, on the night following the operation and on the 1st, 2nd, 5th and 6th post-operative days. Tonsillar fossa epithelisation was significantly accelerated on the study side compared with the control side on the 5th and 10th post-operative days.
Conclusion: In tonsillectomy patients, topically administered autologous serum contributed to throat pain relief and tonsillar fossa epithelisation during the post-operative period.
abstract_id: PUBMED:31492172
Topical biomaterials to prevent post-tonsillectomy hemorrhage. Despite advances in surgical technique, postoperative hemorrhage remains a common cause of mortality and morbidity for patients following tonsillectomy. Application of biomaterials at the time of tonsillectomy can potentially accelerate mucosal wound healing and eliminate the risk of post-tonsillectomy hemorrhage (PTH). To understand the current state and identify possible routes for the development of the ideal biomaterials to prevent PTH, topical biomaterials for eliminating the risk of PTH were reviewed. Alternative topical biomaterials that hold the potential to reduce the risk of PTH were also summarized.
abstract_id: PUBMED:18179827
Does topical ropivacaine reduce the post-tonsillectomy morbidity in pediatric patients? Objectives: To determine whether post-operative administration of topical ropivacaine hydrochloride decreases morbidity following adenotonsillectomy.
Study Design: Prospective, randomized, double-blind clinical trial.
Setting: University referral center; ENT Department.
Participants: Forty-one children, aged 4-16 years, undergoing tonsillectomy.
Methods: Patients received swabs soaked in 1.0% ropivacaine hydrochloride packed in their tonsillar fossae, while the control group received saline-soaked swabs. McGrath's face scale was used to compare the two groups with respect to pain control. Chi-square tests and two-tailed unpaired Student's t-tests or Mann-Whitney U tests were used to compare the two independent groups. As we made 11 comparisons between groups, after Bonferroni correction p<0.005 was accepted as statistically significant.
Results: Only in the first hour was there no significant pain-relieving effect in the ropivacaine group (p>0.05). At all other hours and days the difference between the two groups was statistically significant (p<0.001). The other post-operative parameters, such as nausea, fever, vomiting, odor, bleeding, otalgia and trismus, were not statistically different between the two groups. There were no complications associated with ropivacaine hydrochloride. No patients in this study suffered systemic side effects related to the use of this medication.
Conclusion: Local administration of 1.0% ropivacaine significantly relieves the pain of pediatric tonsillectomy and is a safe and effective method. High concentrations of ropivacaine may produce clinically significant pain relief. It is more effective in reducing the post-operative analgesic requirement after the first hour.
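The methods above state that 11 between-group comparisons were made and that, after Bonferroni correction, p < 0.005 was taken as significant. Assuming the conventional family-wise alpha of 0.05 (not stated explicitly in the abstract), the corrected per-comparison threshold works out as follows; this is a trivial worked calculation, not code from the study.

```python
# Worked arithmetic for the Bonferroni threshold referenced above: a
# family-wise alpha of 0.05 spread over 11 between-group comparisons gives a
# per-comparison threshold of 0.05 / 11 ~= 0.0045 (reported as p < 0.005).
family_wise_alpha = 0.05
n_comparisons = 11
per_test_alpha = family_wise_alpha / n_comparisons
print(f"Per-comparison significance threshold: {per_test_alpha:.4f}")
```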
abstract_id: PUBMED:19500860
The effects of topical levobupivacaine on morbidity in pediatric tonsillectomy patients. Objective: To reduce post-tonsillectomy morbidity with a swab soaked in 5 ml of levobupivacaine hydrochloride (25 mg/10 ml).
Study Design: A double-blind prospective randomized controlled clinical study.
Methods: In this randomized double-blind study, in group I (30 children, mean age 7.5+/-2.6) we tightly packed swabs soaked with 5 ml of levobupivacaine hydrochloride (25 mg/10 ml), and in group II (21 children, mean age 7.9+/-3.7) we packed 5 ml saline swabs, into each of the two tonsillar fossae for 5 min after tonsillectomy. We used McGrath's face scale to compare the two groups with respect to pain control.
Results: There was a statistically significant pain-relieving effect in the levobupivacaine group in the first 24 h (p<0.05). After 24 h, however, the pain-relieving effect of levobupivacaine was not significant (p>0.05). We did not see any serious complications in either group. Mean postoperative morbidity outcomes (nausea, vomiting, fever, bleeding, halitosis and ear pain) were not statistically different between the two groups (p>0.05).
Conclusion: Topical levobupivacaine seems to be a safe and easy medication for postoperative pain control in pediatric tonsillectomy patients.
abstract_id: PUBMED:30641307
Role of antibiotics in post-tonsillectomy morbidities; A systematic review. Objective: To evaluate the role of postoperative antibiotics on post-tonsillectomy morbidities.
Study Design: Systematic Review.
Methods: Published papers and electronic databases (Medline, Web of Science, Embase) were searched from January 1985 up to March 2016 using the following key words in different combinations: Tonsil; Tonsillectomy; Post-tonsillectomy; Adenotonsillectomy; Antibiotics; Post-tonsillectomy morbidity; Bleeding; Secondary Hemorrhage. Twelve randomized controlled clinical trials fit the inclusion criteria and were included in the meta-analysis. We evaluated 5 outcomes: hemorrhage, return to normal diet, return to normal activities, fever and pain.
Results: For secondary hemorrhage, pooled analysis of 1397 patients revealed a relative risk (risk ratio, RR) of 1.052 with a 95% confidence interval (95% CI) of 0.739-1.497 (P-value, 0.779). For return to normal diet, pooled analysis of 527 patients showed a standardized mean difference (SMD) of -0.058 day with a 95% CI of -0.233 to 0.118 (P-value, 0.518). For return to normal activities, pooled analysis of 257 patients showed an SMD of -0.014 day with a 95% CI of -0.258 to 0.230 (P-value, 0.908). For fever, pooled analysis of 656 patients revealed a relative risk of 1.265 with a 95% CI of 0.982-1.629 (P-value, 0.068). Finally, because of the variability in the parameters used to assess pain following tonsillectomy, we could not perform a meta-analysis for the postoperative pain outcome.
Conclusion: The results of this study fail to provide clear evidence supporting the routine use of post-operative antibiotics to reduce post-tonsillectomy morbidities.
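The pooled estimates above are reported as relative risks with 95% confidence intervals. As background, the sketch below shows how a relative risk and its Wald-type confidence interval are obtained from a single 2x2 table; the counts are invented, and the actual meta-analysis additionally pools log-RRs across studies (for example, with inverse-variance weighting), a step omitted here.

```python
# Illustrative sketch: relative risk (risk ratio) and 95% CI for a single
# hypothetical comparison (antibiotics vs. none), using the standard log-RR
# Wald interval. Real meta-analyses pool log-RRs across studies; that
# pooling step is not shown here.
import math

# Hypothetical 2x2 counts: events = secondary hemorrhage.
events_abx, total_abx = 12, 350       # antibiotics arm
events_ctl, total_ctl = 11, 340       # control arm

risk_abx = events_abx / total_abx
risk_ctl = events_ctl / total_ctl
rr = risk_abx / risk_ctl

# Standard error of log(RR) and the 95% confidence interval.
se_log_rr = math.sqrt(1/events_abx - 1/total_abx + 1/events_ctl - 1/total_ctl)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.3f}, 95% CI {lo:.3f}-{hi:.3f}")
```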
abstract_id: PUBMED:23010790
Antibiotics do not reduce post-tonsillectomy morbidity in children. The objective was to assess the efficacy of antibiotics in reducing post-tonsillectomy morbidities in children. This clinical trial was undertaken at the Jordan University Hospital during the period from June 2008 to July 2009. All patients undergoing tonsillectomy were randomly divided into two matched groups on an alternating basis: group A included patients who received antibiotics (amoxicillin with clavulanic acid) for 5 days in the post-tonsillectomy period, and group B included patients who received none. The two groups were compared with respect to fever, secondary bleeding, throat pain, and the time to resume a normal diet. Bleeding was more common in group A (5.5%) than in group B (2%). The average duration of throat pain was 4.2 days in group A and 3.9 days in group B. The average time to resume a normal diet was 5.7 days in group A and 5.3 days in group B. Fever was noted in 17 (31%) patients from group A and in 15 (30%) patients from group B. The use of antibiotics in the post-tonsillectomy period does not reduce post-operative morbidity in children, and therefore it is advised to use antibiotics on an individual basis rather than routinely for patients undergoing tonsillectomy.
abstract_id: PUBMED:33988093
Efficacy of topical application of autologous platelet-rich plasma in adult tonsillectomy patients: a randomised control study. Objective: Tonsillectomy is a painful surgery performed in cases of recurrent tonsillitis. Application of platelet-rich plasma to diminish the pain and morbidity post-tonsillectomy is gaining importance. This study evaluated post-operative pain and morbidity after autologous platelet-rich plasma application on the tonsil beds during tonsillectomy.
Method: Participants were randomised into group 1 (n = 28, peri-operative platelet-rich plasma intervention) and group 2 (n = 28, control). Post-tonsillectomy, patients were assessed (day 0, 1, 2, 3, 7 and 14) for pain, healing and time taken to return to normal activity. Data were analysed by independent t-test and chi-square test with p ≤ 0.05 as the significance level.
Results: A significant decrease in the mean pain score up to day 7 (p < 0.05) and tonsillar fossae healing on days 2 and 3 (p < 0.05) post-tonsillectomy was noted. The majority of the patients returned to their routine activities after a week post-tonsillectomy.
Conclusion: Platelet-rich plasma application was effective in accentuating healing and reducing post-tonsillectomy pain and morbidity.
abstract_id: PUBMED:18425926
Antibiotics to reduce post-tonsillectomy morbidity. Background: Tonsillectomy continues to be one of the most common surgical procedures performed in children and adults. Despite improvements in surgical and anaesthetic techniques, postoperative morbidity, mainly in the form of pain, remains a significant clinical problem. Postoperative bacterial infection of the tonsillar fossa has been proposed as an important factor causing pain and associated morbidity, and some studies have found a reduction in morbid outcomes following the administration of perioperative antibiotics.
Objectives: To determine whether perioperative antibiotics reduce pain and other morbid outcomes following tonsillectomy.
Search Strategy: Cochrane ENT Group Trials Register, Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library, Issue 1 2007), MEDLINE (1950 to 2007) and EMBASE (1974 to 2007) were searched. The date of the last search was March 2007.
Selection Criteria: All randomised controlled trials examining the impact of perioperative administration of systemic antibiotics on post-tonsillectomy morbidity in children or adults.
Data Collection And Analysis: Two authors independently collected data. Primary outcomes were pain, consumption of analgesia and secondary haemorrhage (defined as significant if patient re-admitted, transfused blood products or returned to theatre, and total if any documented haemorrhage). Secondary outcomes were fever, time taken to resume normal diet and activities and adverse events. Where possible, summary measures were generated using random-effects models.
Main Results: Nine trials met the eligibility criteria. Most did not find a significant reduction in pain with antibiotics. Similarly, antibiotics were not shown to be effective in reducing the need for analgesics. Antibiotics were not associated with a reduction in significant secondary haemorrhage rates (Relative Risk (RR) 0.49, 95% CI 0.08 to 3.11, P = 0.45) or total secondary haemorrhage rates (RR 0.92, 95% CI 0.45 to 1.87, P = 0.81). With regard to secondary outcomes, antibiotics reduced the proportion of subjects with fever (RR 0.63, 95% CI 0.46 to 0.85, P = 0.002).
Authors' Conclusions: The present review suggests that there is little or no evidence that antibiotics reduce the main morbid outcomes following tonsillectomy (i.e. pain, the need for analgesia or secondary haemorrhage rates). They do however appear to reduce fever. Some important methodological shortcomings exist in the included trials which are likely to have produced bias favouring antibiotics. We therefore advocate caution when prescribing antibiotics routinely to all patients undergoing tonsillectomy. Whether a subgroup of patients who might benefit from selective administration of antibiotics exists is unknown and needs to be explored in future trials.
abstract_id: PUBMED:34408955
To Give or Not to Give? Prescribing Antibiotics to Tonsillectomy Patients in a Tertiary Care Setting. Introduction: Adeno-tonsillectomy is one of the most common procedures performed worldwide in the pediatric age group. Antibiotic use after tonsillectomy is like that after any other surgical procedure, and it is thought that antibiotics may help to reduce post-operative morbidity. Giving antibiotics to tonsillectomy patients has been common practice for decades, but recently there has been a paradigm shift towards not using antibiotics, especially in the pediatric population. Methods: A prospective study was done on a cohort of 123 patients who were divided into two groups on the basis of their choice to receive or not receive antibiotics after tonsillectomy; these patients were followed in the post-operative period to detect any differences in the rate of complications. Results: No statistically significant correlation was found between age, gender or post-operative visits and post-operative complications between the two groups. Half of the patients received antibiotics; however, the use of antibiotics did not show a significant decrease in post-operative complications. Conclusion: Regular use of antibiotics in post-tonsillectomy patients should not be advised, as antibiotics do not prevent or reduce post-operative complications in tonsillectomy patients.
abstract_id: PUBMED:11210872
Autologous fibrin sealant reduces pain after tonsillectomy. Objectives/hypothesis: Pain is a major cause of morbidity after tonsillectomy. Although various efforts have been made to reduce pain, the use of oral analgesics, which can have adverse side effects, remains the standard of care. It is hypothesized that fibrin sealant, used to achieve hemostasis and enhance healing in many surgical procedures, might help decrease pain after this operation.
Study Design: A prospective, randomized, blinded study was performed on 20 children aged 5 to 17 years who were undergoing tonsillectomy, to evaluate the efficacy of fibrin sealant in reducing postoperative pain.
Methods: All patients pre-donated 40 mL of blood from which autologous concentrated fibrinogen was prepared by cryoprecipitation. In the fibrin sealant group, fibrinogen and topical bovine thrombin were sprayed onto the surgical site to form fibrin sealant at the conclusion of tonsillectomy. The 10 patients in the control group (C) received no fibrin sealant. Patients rated their level of pain immediately after surgery and at regular intervals for 3 days after surgery using the Wong-Baker Faces Pain Rating Scale (1-6). Emesis, postoperative bleeding, medications, and adverse events were also evaluated.
Results: At 7:00 P.M. on postoperative day (POD) 0, the mean +/- SD fibrin sealant group pain score (2.9+/-0.41 units) was significantly lower than for the C group (4.1+/-0.43 units; P < or = .05). There was also a trend in favor of less pain in the fibrin sealant group at 7:00 P.M. on POD 1, with a mean of 3.5+/-0.43 units versus 2.4+/-0.48 units for C (P = .15). The odds of a patient in C experiencing emesis were 8.16 times higher (P < or = .05) than for patients in the fibrin sealant group.
Conclusions: Fibrin sealant significantly reduced pain the evening after pediatric tonsillectomy and also decreased the chance of experiencing emesis. Thus fibrin sealant may be clinically useful as an adjunct to tonsillectomy.
Answer: Yes, the topical use of autologous serum helps to reduce post-tonsillectomy morbidity. A prospective, controlled preliminary study found that the application of autologous serum topically to the tonsillar fossa during tonsillectomy and post-operatively significantly lowered pain scores on the night following the operation and on the 1st, 2nd, 5th, and 6th post-operative days compared to the control side. Additionally, tonsillar fossa epithelisation was significantly accelerated on the side treated with autologous serum on the 5th and 10th post-operative days (PUBMED:27210022). This suggests that topically administered autologous serum contributes to throat pain relief and faster healing of the tonsillar fossa during the post-operative period. |
Instruction: Tongue discoloration in an elderly kidney transplant recipient: Treatment-related adverse event?
Abstracts:
abstract_id: PUBMED:17062327
Tongue discoloration in an elderly kidney transplant recipient: Treatment-related adverse event? Background: With the increased occurrence of methicillin-resistant Staphylococcus aureus infections, linezolid treatment might be administered more often. New rare adverse events are likely to follow.
Case Summary: A 65-year-old man (weight, 91 kg; height, 185 cm) presented to the emergency department at the University of Virginia-affiliated Salem Veterans Affairs Medical Center, Salem, Virginia, after a recent (8 weeks) kidney transplantation with a 24-hour history of fatigue, chills, arthralgias, increased urinary frequency, and onset of tongue discoloration. Two days before admission, he completed a 14-day course of linezolid 600 mg PO BID for ampicillin-resistant enterococcal urinary tract infection. He was afebrile on admission and the dorsal aspect of his tongue was blackened centrally, browner peripherally, with normal pink mucosa on the periphery. Based on the Naranjo probability scale, the calculated score for tongue discoloration as a drug-related adverse event was 7 out of a maximum score of 13 points, designating it as a probable cause. The patient's tongue discoloration improved moderately during the hospital stay and resolved 6 months after the discontinuation of linezolid.
Conclusions: We report a rare association of linezolid and tongue discoloration in an elderly kidney transplant recipient that improved with discontinuation. We present this case to increase clinicians' awareness of the potential adverse event.
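The case above was scored 7 of a possible 13 points on the Naranjo probability scale and therefore classed as a "probable" drug-related adverse event. The standard Naranjo interpretation bands (definite for a total of 9 or more, probable for 5-8, possible for 1-4, doubtful for 0 or less) can be encoded in a few lines; the minimal sketch below only maps a total score to its band and does not reproduce the questionnaire items themselves.

```python
# Minimal sketch: mapping a total Naranjo adverse drug reaction score to its
# standard interpretation band. Only the conventional score ranges are
# encoded; the questionnaire items are not reproduced here.
def naranjo_category(total_score: int) -> str:
    if total_score >= 9:
        return "definite"
    if total_score >= 5:
        return "probable"
    if total_score >= 1:
        return "possible"
    return "doubtful"

# The case above scored 7 of a possible 13 points.
print(naranjo_category(7))   # -> "probable"
```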
abstract_id: PUBMED:34402788
Current status of adverse event profile of tacrolimus in patients with solid organ transplantation from a pharmacovigilance study. Objective: The calcineurin inhibitor tacrolimus has been widely used to prevent allograft rejection after transplantation. The purpose of this study was to clarify the adverse events associated with tacrolimus in solid organ transplantation using a spontaneous reporting system database.
Materials And Methods: We performed a retrospective pharmacovigilance disproportionality analysis using the Japanese Adverse Drug Event Report (JADER) database. Adverse event reports submitted to the Pharmaceuticals and Medical Devices Agency were analyzed, and the reporting odds ratio (ROR) and 95% confidence interval (CI) for each adverse event were calculated.
Results: The database comprised 26,620 reports associated with tacrolimus, of which 2,014, 1,988, and 725 reports involved heart, kidney, and liver transplantation, respectively. Infectious disorder was commonly detected in these transplant patients. There was a significant association between tacrolimus use and colon cancer in patients undergoing heart transplantation (ROR: 3.33, 95% CI: 2.18 - 5.08), but not kidney or liver transplantation. Tacrolimus use in those undergoing kidney transplantation is strongly associated with bronchitis (ROR, 8.95; 95% CI, 6.34 - 12.6). A signal for seizure was detected in liver transplant patients with tacrolimus (ROR, 4.12; 95% CI, 1.77 - 9.59).
Conclusion: These findings suggest that the strength of the association between tacrolimus and adverse events varies among patients receiving heart, kidney, and liver transplantation. Our results may provide useful information for treatment with tacrolimus, although further research with more data is needed to clarify this.
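The disproportionality analysis above relies on the reporting odds ratio (ROR) with a 95% confidence interval. As background, the ROR is computed from a 2x2 table of spontaneous reports; the sketch below uses invented counts, not figures from the JADER analysis.

```python
# Minimal sketch of a reporting odds ratio (ROR) with a 95% CI, as used in
# spontaneous-report disproportionality analyses. The counts are hypothetical.
import math

# 2x2 table of report counts:
#   a: drug of interest, event of interest   b: drug of interest, other events
#   c: other drugs, event of interest        d: other drugs, other events
a, b, c, d = 45, 1955, 320, 54300

ror = (a / b) / (c / d)
se_log_ror = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(ror) - 1.96 * se_log_ror)
hi = math.exp(math.log(ror) + 1.96 * se_log_ror)

print(f"ROR = {ror:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```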
abstract_id: PUBMED:37433397
Analysis of death avoidance by concomitant use of prednisone in patients with renal transplant using the Food and Drug Administration Adverse Event Reporting System. Background: Patients with renal transplant are frequently administered immunosuppressants to prevent transplant-related adverse events. There are nine main immunosuppressants on the market, and multiple immunosuppressants are frequently administered to patients with renal transplant. Identifying which immunosuppressant was responsible when efficacy or safety was observed in patients taking multiple immunosuppressants is difficult. This study aimed to identify the immunosuppressant that was effective in reducing death in patients with renal transplant. A very large sample size would be required to conduct prospective clinical trials of immunosuppressant combinations, which is impractical. We therefore investigated cases in which death occurred despite immunosuppressant administration in patients with renal transplant, using Food and Drug Administration Adverse Event Reporting System (FAERS) data.
Material And Method: We used FAERS data reported between January 2004 and December 2022 in patients with renal transplant who received one or more immunosuppressants. Groups were defined for each combination of immunosuppressants. Comparisons between pairs of groups that were identical except for the presence or absence of prednisone were performed using the reporting odds ratio (ROR) and the adjusted ROR (aROR), controlling for differences in patient background.
Results: When the group without prednisone was set as the reference, the aROR for death was significantly <1.000 in several cases in the group to which prednisone was added.
Conclusions: The inclusion of prednisone in the immunosuppressant combinations was suggested to be effective in reducing death. We provide sample R code that can reproduce the results.
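The study above compares otherwise identical regimens with and without prednisone using an adjusted ROR (aROR) that controls for patient background, and notes that R code accompanies the paper. The Python sketch below is only a rough illustration of the general approach, namely an adjusted odds ratio obtained by exponentiating the exposure coefficient of a logistic regression; the data, covariates, and model specification are hypothetical and are not taken from the FAERS analysis.

```python
# Illustrative sketch (hypothetical data and covariates): an adjusted
# reporting odds ratio (aROR) from a logistic regression of the outcome
# (death reported: 1/0) on exposure (prednisone in the regimen: 1/0) plus
# patient-background covariates, exponentiating the exposure coefficient.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "prednisone": rng.integers(0, 2, n),
    "age": rng.normal(50, 15, n),
    "sex_male": rng.integers(0, 2, n),
})
# Hypothetical outcome generation, used only to make the sketch runnable.
logit = -2.0 - 0.4 * df["prednisone"] + 0.02 * (df["age"] - 50)
p_death = 1 / (1 + np.exp(-logit))
df["death"] = rng.binomial(1, p_death.to_numpy())

X = sm.add_constant(df[["prednisone", "age", "sex_male"]])
model = sm.Logit(df["death"], X).fit(disp=False)

aror = np.exp(model.params["prednisone"])
ci_lo, ci_hi = np.exp(model.conf_int().loc["prednisone"])
print(f"aROR for death with prednisone = {aror:.2f}, 95% CI {ci_lo:.2f}-{ci_hi:.2f}")
```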
abstract_id: PUBMED:25979348
Self-reported Medication Adherence and Adverse Patient Safety Events in CKD. Background: Promoting medication adherence is a recognized challenge for prescribers. In this study, we examine whether lower medication adherence is associated with adverse safety events in individuals with decreased estimated glomerular filtration rates (eGFRs).
Study Design: Cross-sectional baseline analysis of prospective cohort.
Setting & Participants: Baseline analysis of the Safe Kidney Care (SKC) Cohort Study, a prospective study of individuals with eGFRs <60 mL/min/1.73 m² intended to assess the incidence of disease-specific safety events. Kidney transplant recipients were excluded.
Predictor: Self-reported medication adherence based on responses to 3 questions ascertaining degree of medication regimen adherence.
Outcomes: Adverse safety events were self-reported at baseline (class I events), such as hypoglycemia or fall thought to be related to a medication, or detected incidentally during the baseline visit (class II events), for example, hypotension or hyperkalemia. Potential drug-related problems (DRPs) were determined by analyzing participants' medications with respect to dosing guidelines based on their screening eGFRs at the time of medication reporting.
Measurements: Relationship between medication adherence and disease-specific patient safety events.
Results: Of 293 SKC participants, 154 (53%) were classified as having lower medication adherence. After multivariable adjustment, lower medication adherence was significantly associated with a class I or II safety event (prevalence ratio [PR], 1.21; 95% CI, 1.04-1.41) and potential DRPs (PR, 1.29; 95% CI, 1.02-1.63). Lower medication adherence was also significantly associated with multiple (≥2) class I events (PR, 1.71; 95% CI, 1.18-2.49), multiple class I or II events (PR, 1.35; 95% CI, 1.04-1.76), and multiple potential DRPs (PR, 2.11; 95% CI, 1.08-2.69) compared with those with higher medication adherence.
Limitations: Use of self-reported medication adherence rather than pharmacy records. Clinical relevance of detected safety events is unclear.
Conclusions: Lower medication adherence is associated with adverse safety events in individuals with eGFRs <60 mL/min/1.73 m².
abstract_id: PUBMED:38138933
Adverse Drug Events after Kidney Transplantation. Introduction: Kidney transplantation stands out as the optimal treatment for patients with end-stage kidney disease, provided they meet specific criteria for a secure outcome. With the exception of identical twin donor-recipient pairs, lifelong immunosuppression becomes imperative. Unfortunately, immunosuppressant drugs, particularly calcineurin inhibitors like tacrolimus, bring about adverse effects, including nephrotoxicity, diabetes mellitus, hypertension, infections, malignancy, leukopenia, anemia, thrombocytopenia, mouth ulcers, dyslipidemia, and wound complications. Since achieving tolerance is not feasible, patients are compelled to adhere to lifelong immunosuppressive therapies, often involving calcineurin inhibitors, alongside mycophenolic acid or mTOR inhibitors, with or without steroids. Area covered: Notably, these drugs, especially calcineurin inhibitors, possess narrow therapeutic windows, resulting in numerous drug-related side effects. This review focuses on the prevalent immunosuppressive drug-related side effects encountered in kidney transplant recipients, namely nephrotoxicity, post-transplant diabetes mellitus, leukopenia, anemia, dyslipidemia, mouth ulcers, hypertension, and viral reactivations (cytomegalovirus and BK virus). Additionally, other post-kidney-transplantation drugs such as valganciclovir may also contribute to adverse events such as leukopenia. For each side effect, we propose preventive measures and outline appropriate treatment strategies.
abstract_id: PUBMED:29174434
Incidence and impact of adverse drug events contributing to hospital readmissions in kidney transplant recipients. Background: The incidence and impact of adverse drug events (ADEs) leading to hospitalization and as a predominant risk factor for late graft loss has not been studied in transplantation.
Methods: This was a longitudinal cohort study of adult kidney recipients transplanted between 2005 and 2010 and followed through 2013. There were 3 cohorts: no readmissions, readmissions not due to an adverse drug event, and adverse drug events contributing to readmissions. The rationale of the adverse drug events contribution to the readmission was categorized in terms of probability, preventability, and severity.
Results: A total of 837 patients with 963 hospital readmissions were included; 47.9% had at least one hospital readmission and 65.0% of readmissions were deemed as having an ADE contribute. The predominant causes of readmissions related to ADEs included non-opportunistic infections (39.6%), opportunistic infections (10.5%), rejection (18.1%), and acute kidney injury (11.8%). Over time, readmissions due to under-immunosuppression (rejection) significantly decreased (-1.6% per year), while those due to over-immunosuppression (infection, cancer, or cytopenias) significantly increased (2.1% increase per year [difference 3.7%, P = .026]). Delayed graft function, rejection, creatinine, graft loss, and death were all significantly greater in those with an ADE that contributed to a readmission compared the other two cohorts (P < .05).
Conclusion: These results demonstrate that ADEs may be associated with a significant increase in the risk of hospital readmission after kidney transplant and subsequent graft loss.
abstract_id: PUBMED:25874089
Late onset tacrolimus-induced life-threatening polyneuropathy in a kidney transplant recipient patient. A 59-year-old kidney recipient was diagnosed with a late onset of severe chronic inflammatory demyelinating polyradiculoneuropathy and almost fully recovered after stopping tacrolimus and one course of intravenous immunoglobulin treatment. Unique features of this patient are the unusually long time lapse between initiation of tacrolimus and the adverse effect (10 years), a strong causality link and several arguments pointing toward an inflammatory etiology. When facing new neurological signs and symptoms in graft recipients, it is important to bear in mind the possibility of a drug-induced adverse event. Discontinuation of the suspect drug and immunomodulation are useful treatment options.
abstract_id: PUBMED:2378152
Fungal involvement of the tongue and feces in dialysis-dependent patients. Thirty-eight patients regularly receiving dialysis and 3 patients with a renal transplant were investigated for possible colonization by yeasts. The tongue and stool were examined directly with Kimmig agar, and the resulting yeasts were then differentiated by means of rice agar and an auxanogram. Candida albicans was the organism most frequently found both on the tongue (47.5%) and in the stool (50%). We discuss the significance of our results.
abstract_id: PUBMED:36475448
SLCO1B3 T334G polymorphisms and mycophenolate mofetil-related adverse reactions in kidney transplant recipients. Background: The correlation between SLCO1B3 T334G polymorphisms and mycophenolate mofetil (MMF) adverse reactions in kidney recipients is unknown. Methods: A single-center, retrospective study was performed in which 111 patients were divided into four groups according to the type of adverse effect experienced. The clinical data and concentrations of MMF at different months after transplantation were statistically analyzed. Results: The frequency of the G allele in the gastrointestinal reaction group was significantly higher than that in the no adverse effects group (p < 0.05). A logistic regression model showed that the SLCO1B3 T334G genotype was an independent risk factor for gastrointestinal reactions caused by MMF. Conclusion: Patients with the SLCO1B3 T334G GG genotype were more likely to experience gastrointestinal reactions.
abstract_id: PUBMED:31236338
Characterizing the pharmacokinetics and pharmacodynamics of immunosuppressant medicines and patient outcomes in elderly renal transplant patients. This review examines what is currently known about the pharmacokinetics and pharmacodynamics of commonly prescribed immunosuppressant medicines, tacrolimus, cyclosporine, mycophenolate and prednisolone, in elderly renal transplant recipients, and reported patient outcomes in this cohort. Renal transplantation is increasing rapidly in the elderly; however, long-term patient outcomes are currently relatively poor compared to younger adults. Some studies have suggested that elderly recipients may have higher dose-adjusted exposure and/or lower clearance of the calcineurin inhibitors tacrolimus and cyclosporine, with one study reporting up to 50% reduction in tacrolimus exposure in the elderly. Elderly transplant recipients do not appear to have higher dosage-adjusted exposure to mycophenolic acid (MPA). The effects of ageing on the pharmacokinetics of prednisolone are unknown. Only one study has examined how aging affects drug target enzymes, reporting no difference in baseline inosine 5'-monophosphate dehydrogenase (IMPDH) activity and MPA-induced IMPDH activity in elderly compared to younger adult renal transplant recipients. In elderly transplant recipients, immunosenescence likely lowers the risk of acute rejection but increases the risk of drug-related adverse effects. Currently, the three main causes of death in elderly renal transplant recipients are cardiovascular disease, infection and malignancy. One study has shown that renal transplant recipients aged over 65 years are seven times more likely to die with a functioning graft compared with young adults (aged 18-29 years). This suggests that an optimal balance between immunosuppressant medicine efficacy and toxicity is not achieved in elderly recipients, and further studies are needed to foster long-term graft and patient survival. Lower maintenance immunosuppressant targets in elderly recipients may decrease patient susceptibility to drug side effects; however, further studies are required and appropriate targets need to be established.
Answer: Yes, tongue discoloration in an elderly kidney transplant recipient can be a treatment-related adverse event. In the case presented in the abstract from PUBMED:17062327, a 65-year-old man who had recently undergone a kidney transplant experienced tongue discoloration after completing a 14-day course of linezolid for an ampicillin-resistant enterococcal urinary tract infection. The dorsal aspect of his tongue was blackened centrally and browner peripherally, with normal pink mucosa on the periphery. The Naranjo probability scale, which was used to assess the likelihood of the tongue discoloration being a drug-related adverse event, yielded a score of 7 out of a maximum of 13 points, indicating that it was a probable cause. The patient's tongue discoloration improved moderately during his hospital stay and resolved 6 months after the discontinuation of linezolid. This case was reported to increase clinicians' awareness of the potential adverse event associated with linezolid treatment. |
Instruction: Newly acquired arthroscopic skills: Are they transferable during simulator training of other joints?
Abstracts:
abstract_id: PUBMED:30480024
Efficacy of a Virtual Arthroscopic Simulator for Orthopaedic Surgery Residents by Year in Training. Background: Virtual reality arthroscopic simulators are an attractive option for resident training and are increasingly used across training programs. However, no study has analyzed the utility of simulators for trainees based on their level of training/postgraduate year (PGY).
Purpose/hypothesis: The primary aim of this study was to determine the utility of the ArthroS arthroscopic simulator for orthopaedic trainees based on their level of training. We hypothesized that residents at all levels would show similar improvements in performance after completion of the training modules.
Study Design: Descriptive laboratory study.
Methods: Eighteen orthopaedic surgery residents performed diagnostic knee and shoulder tasks on the ArthroS simulator. Participants completed a series of training modules and then repeated the diagnostic tasks. Correlation coefficients (r2) were calculated for improvements in the mean composite score (based on the Imperial Global Arthroscopy Rating Scale [IGARS]) as a function of PGY.
Results: The mean improvement in the composite score for participants as a whole was 11.2 ± 10.0 points (P = .0003) for the knee simulator and 14.9 ± 10.9 points (P = .0352) for the shoulder simulator. When broken down by PGY, all groups showed improvement, with greater improvements seen for junior-level residents in the knee simulator and greater improvements seen for senior-level residents in the shoulder simulator. Analysis of variance for the score improvement variable among the different PGY groups yielded an f value of 1.640 (P = .2258) for the knee simulator data and an f value of 0.2292 (P = .917) for the shoulder simulator data. The correlation coefficient (r2) was -0.866 for the knee score improvement and 0.887 for the shoulder score improvement.
Conclusion: Residents training on a virtual arthroscopic simulator made significant improvements in both knee and shoulder arthroscopic surgery skills.
Clinical Relevance: The current study adds to mounting evidence supporting virtual arthroscopic simulator-based training for orthopaedic residents. Most significantly, this study also provides a baseline for evidence-based targeted use of arthroscopic simulators based on resident training level.
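The analysis above compares mean score improvements across postgraduate-year groups with analysis of variance and reports correlations between training year and improvement. The sketch below illustrates that style of analysis with invented IGARS improvements; it is not the study's code, and the numbers carry no clinical meaning.

```python
# Illustrative sketch (hypothetical scores): comparing mean score improvement
# across training-year groups with a one-way ANOVA and checking the linear
# association between training year and improvement with Pearson's r.
import numpy as np
from scipy import stats

# Hypothetical IGARS improvements for three postgraduate-year groups.
pgy1 = np.array([18.0, 15.5, 20.1, 17.2, 14.8])
pgy3 = np.array([12.3, 10.9, 13.5, 11.1, 12.8])
pgy5 = np.array([6.2, 8.1, 5.5, 7.4, 6.9])

f_stat, p_anova = stats.f_oneway(pgy1, pgy3, pgy5)

years = np.concatenate([np.full(5, 1), np.full(5, 3), np.full(5, 5)])
improvement = np.concatenate([pgy1, pgy3, pgy5])
r, p_corr = stats.pearsonr(years, improvement)

print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.4f}")
print(f"Pearson r (year vs. improvement) = {r:.3f}, p = {p_corr:.4f}")
```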
abstract_id: PUBMED:26318489
Newly acquired arthroscopic skills: Are they transferable during simulator training of other joints? Purpose: This randomized study investigates whether novices learning simulation-based arthroscopic skills in one anatomical joint environment can immediately transfer their learnt skills to another joint.
Methods: Medical students were randomized to a simulated diagnostic knee or shoulder arthroscopic task on benchtop training models. After nine task repetitions over 3 weeks on one model, each participant undertook the simulation task of the other anatomical joint. Performance was objectively measured using a validated electromagnetic motion analysis system and a validated global rating scale (GRS).
Results: Eighteen students participated; eight started the knee task and ten the shoulder task. All participants demonstrated a learning curve in all parameters during task repetition (time taken, hand path length, number of hand movements and GRS scores; p < 0.001) with learning effects >1 SD from initial performance (range 1.1-2.2 SD). When the groups swapped models, there was no immediate evidence of skill transfer, with a significant drop in performance between the final training episode and the transfer task (all parameters p < 0.003). In particular, the transfer task performance was no better than the first episode performance on that model by these novices.
Conclusion: This study showed that basic arthroscopic skills did not immediately transfer to an unfamiliar anatomical environment within a simulated setting. These findings have important clinical implications with regard to surgical training, as they potentially challenge the assumption that arthroscopic skills acquired in one joint are universally transferable to other joints. Future orthopaedic simulation training should aim to deliver exposure to a greater variety of arthroscopic procedures and joint environments. This would allow trainees to become familiar with the different arthroscopic settings before undertaking real surgery and consequently improve patient safety.
Level Of Evidence: Therapeutic, Level II.
abstract_id: PUBMED:34189609
Ten hours of simulator training in arthroscopy are insufficient to reach the target level based on the Diagnostic Arthroscopic Skill Score. Purpose: Simulator arthroscopy training has gained popularity in recent years. However, it remains unclear what level of competency surgeons may achieve in what time frame using virtual training. It was hypothesized that 10 h of training would be sufficient to reach the target level defined by experts based on the Diagnostic Arthroscopic Skill Score (DASS).
Methods: The training concept was developed by ten instructors affiliated with the German-speaking Society of Arthroscopy and Joint Surgery (AGA). The programme teaches the basics of performing arthroscopy; the main focus is on learning and practicing manual skills using a simulator. The training was based on a structured programme of exercises designed to help users reach defined learning goals. Initially, camera posture, horizon adjustment and control of the direction of view were taught in a virtual room. Based on these skills, further training was performed with a knee model. The learning progress was assessed by quantifying the exercise time, camera path length and instrument path length for selected tasks. At the end of the course, the learners' performance in diagnostic arthroscopy was evaluated using DASS. Participants were classified as novice or competent based on the number of arthroscopies performed prior to the assessment.
Results: Except for one surgeon, 131 orthopaedic residents and surgeons (29 women, 102 men) who participated in the seven courses agreed to anonymous data analysis. Fifty-eight of them were competents with more than ten independently performed arthroscopies, and 73 were novices, with fewer than ten independently performed arthroscopies. There were significant reductions in exercise time, camera path length and instrument path length for all participants after the training, indicating a rapid increase in performance. No difference in camera handling between the dominant and non-dominant sides was found in either group. The competents performed better than the novices in various tasks and achieved significantly better DASS values on the final performance test.
Conclusions: Our data have demonstrated that arthroscopic skills can be taught effectively on a simulator, but a 10-h course is not sufficient to reach the target level set by experienced arthroscopists. However, learning progress can be monitored more objectively during simulator training than in the operating room, and simulation may partially replace the current practice of arthroscopic training.
Level Of Evidence: III.
abstract_id: PUBMED:25026928
Validation of the ArthroS virtual reality simulator for arthroscopic skills. Purpose: Virtual reality simulator training has become important for acquiring arthroscopic skills. A new simulator for knee arthroscopy ArthroS™ has been developed. The purpose of this study was to demonstrate face and construct validity, executed according to a protocol used previously to validate arthroscopic simulators.
Methods: Twenty-seven participants were divided into three groups having different levels of arthroscopic experience. Participants answered questions regarding general information and the outer appearance of the simulator for face validity. Construct validity was assessed with one standardized navigation task. Face validity, educational value and user friendliness were further determined by giving participants three exercises and by asking them to fill out the questionnaire.
Results: Construct validity was demonstrated between experts and beginners. Median task times were not significantly different for all repetitions between novices and intermediates, and between intermediates and experts. Median face validity was 8.3 for the outer appearance, 6.5 for the intra-articular joint and 4.7 for surgical instruments. Educational value and user friendliness were perceived as nonsatisfactory, especially because of the lack of tactile feedback.
Conclusion: The ArthroS™ demonstrated construct validity between novices and experts, but did not demonstrate full face validity. Future improvements should be mainly focused on the development of tactile feedback. It is necessary that a newly presented simulator is validated to prove it actually contributes to proficiency of skills.
abstract_id: PUBMED:29063680
Development of a physical shoulder simulator for the training of basic arthroscopic skills. Background: Orthopaedic training programs are incorporating arthroscopic simulations into their residency curricula. There is a need for a physical shoulder simulator that accommodates lateral decubitus and beach chair positions, has realistic anatomy, allows for an objective measure of performance and provides feedback to trainees.
Methods: A physical shoulder simulator was developed for training basic arthroscopic skills. Sensors were embedded in the simulator to provide a means to assess performance. Subjects of varying skill level were invited to use the simulator and their performance was objectively assessed.
Results: Novice subjects improved their performance after practice with the simulator. A survey completed by experts recognized the simulator as a valuable tool for training basic arthroscopic skills.
Conclusions: The physical shoulder simulator helps train novices in basic arthroscopic skills and provides objective measures of performance. By using the physical shoulder simulator, residents could improve their basic arthroscopic skills, resulting in improved patient safety.
abstract_id: PUBMED:35608178
Effective Skill Transfer From Fundamentals of Arthroscopic Surgery Training to Shoulder Arthroscopic Simulator in Novices. Purpose: To investigate whether novices could improve performance on a shoulder arthroscopic simulator (high-fidelity) through short-term training on a Fundamentals of Arthroscopic Surgery Training (FAST) simulator (low-fidelity).
Methods: Twenty-eight novices with no experience in arthroscopy were recruited to perform a pre-test on a shoulder arthroscopic simulator. They were then randomized into two groups: the experimental group practiced five modules on the FAST simulator three times, and the control group did nothing. The experimental group performed a post-test immediately after FAST simulator practice, whereas the control group rested for 70 minutes between the pre-test and the post-test. All parameters were recorded by the simulator.
Results: The experimental group outperformed the control group in terms of total score, procedure time, camera path length, and grasper path length. However, there was no statistical difference in scratching of the humeral cartilage or glenoid cartilage. Significant differences were found in the improvement of both groups in total score, procedure time, and camera path length.
Conclusions: Arthroscopic skills gained after short-term training on the FAST simulator could be transferred to the shoulder arthroscopic simulator. This research provides important evidence of the benefits of the FAST simulator in a shoulder arthroscopy training program.
abstract_id: PUBMED:37296326
The number of arthroscopies performed by trainees does not deduce the level of their arthroscopic proficiency. Purpose: It is reasonable to question whether the case volume is a suitable proxy for the manual competence of an arthroscopic surgeon. The aim of this study was to evaluate the correlation between the number of arthroscopies previously performed and the arthroscopic skills acquired using a standardized simulator test.
Methods: A total of 97 resident and early-career orthopaedic surgeons who participated in arthroscopic simulator training courses were divided into five groups based on their self-reported number of arthroscopic surgeries: (1) none, (2) < 10, (3) 10 to 19, (4) 20 to 39 and (5) 40 to 100. Arthroscopic manual skills were evaluated with a simulator by means of the diagnostic arthroscopy skill score (DASS) before and after training. Seventy-five points out of 100 must be achieved to pass the test.
Results: In the pretest, only three trainees in group 5 passed the arthroscopic skill test, and all other participants failed. Group 5 (57 ± 17 points; n = 17) scored significantly higher than the other groups (group 1: 30 ± 14, n = 20; group 2: 35 ± 14, n = 24; group 3: 35 ± 18, n = 23; and group 4: 33 ± 17, n = 13). After two days of simulator training, trainees showed a significant increase in performance. In group 5, participants scored 81 ± 17 points, which was significantly higher than in the other groups (group 1: 75 ± 16; group 2: 75 ± 14; group 3: 69 ± 15; and group 4: 73 ± 13). While the number of self-reported arthroscopic procedures was not significantly associated with higher log odds of passing the test (p = 0.423), the points scored in the pretest were found to be a good predictor of whether a trainee would pass the test (p < 0.05). A positive correlation was observed between the points scored in the pretest and the posttest (p < 0.05, r = 0.59, r2 = 0.34).
Conclusions: The number of previously performed arthroscopies is not a reliable indicator of the skills level of orthopaedic residents. A reasonable alternative in the future would be to verify arthroscopic proficiency on the simulator by means of a score as a pass-fail examination.
Level Of Evidence: III.
abstract_id: PUBMED:33997080
Shoulder Arthroscopy Simulator Training Improves Surgical Procedure Performance: A Controlled Laboratory Study. Background: Previous simulation studies evaluated either dry lab (DL) or virtual reality (VR) simulation, correlating simulator training with the performance of arthroscopic tasks. However, these studies did not compare simulation training with specific surgical procedures.
Purpose/hypothesis: To determine the effectiveness of a shoulder arthroscopy simulator program in improving performance during arthroscopic anterior labral repair. It was hypothesized that both DL and VR simulation methods would improve procedure performance but that VR simulation would be more effective, based on the validated Arthroscopic Surgery Skill Evaluation Tool (ASSET) Global Rating Scale.
Study Design: Controlled laboratory study.
Methods: Enrolled in the study were 38 orthopaedic residents at a single institution, postgraduate years (PGYs) 1 to 5. Each resident completed a pretest shoulder stabilization procedure on a cadaveric model and was then randomized into 1 of 2 groups: VR or DL simulation. Participants then underwent a 4-week arthroscopy simulation program and completed a posttest. Sports medicine-trained orthopaedic surgeons graded the participants on completeness of the surgical repair at the time of the procedure, and a single, blinded orthopaedic surgeon, using the ASSET Global Rating Scale, graded participants' arthroscopy skills. The procedure step and ASSET grades were compared between simulator groups and between PGYs using paired t tests.
Results: There was no significant difference between the groups in pretest performance in either the procedural steps or ASSET scores. Overall procedural step scores improved when the two simulator groups were combined (P = .0424), but not within the individual simulation groups. The ASSET scores improved in both the DL (P = .0045) and VR (P = .0003) groups, with no significant difference between the groups.
Conclusion: A 4-week simulation program can improve arthroscopic skills and performance during a specific surgical procedure. This study provides additional evidence regarding the benefits of simulator training in orthopaedic surgery for both novice and experienced arthroscopic surgeons. There was no statistically significant difference between the VR and DL models, which did not support the authors' hypothesis that the VR simulator would be the more effective simulation tool.
Clinical Relevance: There may be a role for simulator training in the teaching of arthroscopic skills and learning of specific surgical procedures.
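As a rough illustration of the paired-comparison design described in this abstract, the sketch below (Python, fabricated example data, group sizes assumed to be 19 per arm) runs paired t-tests on pre- and post-training ASSET scores within each simulation arm and an independent-samples t-test on the gains between arms. It is not the authors' analysis code; every number and name is an assumption for demonstration.

```python
# Illustrative sketch only: simulated ASSET scores, not the study's data.
import numpy as np
from scipy.stats import ttest_rel, ttest_ind

rng = np.random.default_rng(1)
pre_vr = rng.normal(18, 4, 19)              # assumed VR arm, n = 19
post_vr = pre_vr + rng.normal(5, 3, 19)     # simulated post-training gain
pre_dl = rng.normal(18, 4, 19)              # assumed DL arm, n = 19
post_dl = pre_dl + rng.normal(4, 3, 19)

# Within-group improvement (paired t-tests, as in the study design)
print("VR pre vs post:", ttest_rel(pre_vr, post_vr))
print("DL pre vs post:", ttest_rel(pre_dl, post_dl))

# Between-group comparison of the improvement (independent-samples t-test on the gains)
print("VR gain vs DL gain:", ttest_ind(post_vr - pre_vr, post_dl - pre_dl))
```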
abstract_id: PUBMED:35695551
The frequency of assessment tools in arthroscopic training: a systematic review. Background: Multiple assessment tools are used in arthroscopic training and play an important role in feedback. However, there is no recognized standard for how these tools should be applied. Our study aimed to investigate the use of assessment tools in arthroscopic training and to determine whether there is an optimal way to apply them.
Methods: A search was performed using PubMed, Embase and Cochrane Library electronic databases for articles published in English from January 2000 to July 2021. Eligible for inclusion were primary research articles related to using assessment tools for the evaluation of arthroscopic skills and training environments. Studies that focussed only on therapeutic cases, did not report outcome measures of technical skills, or did not mention arthroscopic skills training were excluded.
Results: A total of 28 studies were included for review. Multiple assessment tools were used in arthroscopic training. The most common objective metric was completion time, reported in 21 studies. Technical parameters based on simulator or external equipment, such as instrument path length, hand movement, visual parameters and injury, were also widely used. Subjective assessment tools included checklists and global rating scales (GRS). Among these, the most commonly used GRS was the Arthroscopic Surgical Skill Evaluation Tool (ASSET). Most of the studies combined objective metrics and subjective assessment scales in the evaluation of arthroscopic skill training.
Conclusions: Overall, both subjective and objective assessment tools can be used as feedback for basic arthroscopic skill training, but there are still differences in the frequency of application in different contexts. Despite this, combined use of subjective and objective assessment tools can be applied to more situations and skills and can be the optimal way for assessment.
Level Of Evidence: Level III, systematic review of level I to III studies. Key messages: Both subjective and objective assessment tools can be used as feedback for basic arthroscopic skill training. Combined use of subjective and objective assessment tools can be applied to more situations and skills and can be the optimal way for assessment.
abstract_id: PUBMED:29366742
Utility of Modern Arthroscopic Simulator Training Models: A Meta-analysis and Updated Systematic Review. Purpose: To determine the utility of modern arthroscopic simulators in transferring skills learned on the model to the operating room.
Methods: A meta-analysis and systematic review of all English-language studies relevant to validated arthroscopic simulation models using PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines from 1999 to 2016 was performed. Data collected included the specific simulator model, the joint used, participant demographic characteristics, participant level of training, training session information, type and number of tasks, pre- and post-training assessments, and overall outcomes of simulator performance. Three independent reviewers analyzed all studies.
Results: Fifty-seven studies with 1,698 participants met the study criteria and were included. Of the studies, 25 (44%) incorporated an arthroscopic training program into the study methods whereas 32 (56%) did not. In 46 studies (81%), the studies' respective simulator models were used to assess arthroscopic performance, whereas 9 studies (16%) used Sawbones models, 8 (14%) used cadaveric models, and 4 (7%) evaluated subject performance on a live patient in the operating room. In 21 studies (37%), simulator performance was compared with experience level, with 20 of these (95%) showing that clinical experience correlated with simulator performance. In 25 studies (44%), task performance was evaluated before and after simulator training, with 24 of these (96%) showing improvement after training. All 4 studies that included live-patient arthroscopy reported improved operating room performance after simulator training compared with the performance of subjects not participating in a training program.
Conclusions: This review suggests that (1) training on arthroscopic simulators improves performance on arthroscopic simulators and (2) performance on simulators for basic diagnostic arthroscopy correlates with experience level. Limited data suggest that simulator training can improve basic diagnostic arthroscopy skills in vivo.
Level Of Evidence: Level IV, systematic review of Level I through IV studies.
Answer: According to PUBMED:26318489, newly acquired arthroscopic skills do not transfer immediately from one joint to another during simulator training. This randomized study found that medical students who learned simulation-based arthroscopic skills in one anatomical joint environment showed no immediate skill transfer when they swapped to a different joint model: performance on the transfer task was significantly lower and no better than novices' first attempt on that model. This suggests that basic arthroscopic skills acquired in one joint are not universally transferable to other joints within a simulated setting. The study concludes that future orthopaedic simulation training should expose trainees to a variety of arthroscopic procedures and joint environments before they undertake real surgery, thereby potentially improving patient safety.