Instruction: CT in searching for abscess after abdominal or pelvic surgery in patients with neoplasia: do abdomen and pelvis both need to be scanned?
Abstracts:
abstract_id: PUBMED:9216778
CT in searching for abscess after abdominal or pelvic surgery in patients with neoplasia: do abdomen and pelvis both need to be scanned? Purpose: This prospective study was undertaken to determine the incremental yield of combined abdominal and pelvic CT in searching for clinically suspected postoperative abscess in oncologic patients.
Method: One hundred seventeen oncologic patients underwent CT to exclude a clinically suspected abscess within 30 days of abdominal or pelvic surgery during an 8 month period. Scans were evaluated for the presence of ascites, loculated fluid collections, or other possible sources of fever. The clinical course and any intervention in the abdomen or pelvis within 30 days after CT were recorded.
Results: After abdominal surgery, 44 of 69 [64%; confidence interval (CI) 51-75%] patients had loculated fluid collections in the abdomen; no patient (0%; CI 0-5%) had a loculated fluid collection present only in the pelvis. After pelvic surgery, 22 of 48 (46%; CI 31-61%) patients had loculated fluid collections in the pelvis; no patient (0%; CI 0-7%) had a loculated collection present only in the abdomen. Loculated collections were present in both the abdomen and the pelvis in 4 of 69 (6%; CI 1.6-14%) patients after abdominal surgery and 3 of 48 (6%; CI 1.3-17%) after pelvic surgery.
Conclusion: Isolated pelvic abscesses after abdominal surgery and isolated abdominal abscesses after pelvic surgery appear to be very uncommon in oncologic patients. CT initially need be directed only to the region of surgery in this particular patient population.
abstract_id: PUBMED:28682968
Prognostic Impact of Intra-abdominal/Pelvic Inflammation After Radical Surgery for Locally Recurrent Rectal Cancer. Background: The influence of postoperative infectious complications, such as anastomotic leakage, on survival has been reported for various cancers, including colorectal cancer. However, it remains unclear whether intra-abdominal/pelvic inflammation after radical surgery for locally recurrent rectal cancer is relevant to its prognosis.
Objective: The purpose of this study was to evaluate factors associated with survival after radical surgery for locally recurrent rectal cancer.
Design: The prospectively collected data of patients were retrospectively evaluated.
Settings: This study was conducted at a single-institution tertiary care cancer center.
Patients: Between 1983 and 2012, patients who underwent radical surgery for locally recurrent rectal cancer with curative intent at the National Cancer Center Hospital were reviewed.
Main Outcome Measures: Factors associated with overall and relapse-free survival were evaluated.
Results: During the study period, a total of 180 patients were eligible for analyses. Median blood loss and operation time for locally recurrent rectal cancer were 2022 mL and 634 minutes. Five-year overall and 3-year relapse-free survival rates were 38.6% and 26.7%. Age (p = 0.002), initial tumor stage (p = 0.03), pain associated with locally recurrent rectal cancer (p = 0.03), CEA level (p = 0.004), resection margin (p < 0.001), intra-abdominal/pelvic inflammation (p < 0.001), and surgery period (p = 0.045) were independent prognostic factors associated with overall survival, whereas CEA level (p = 0.01), resection margin (p = 0.002), and intra-abdominal/pelvic inflammation (p = 0.001) were associated with relapse-free survival. Intra-abdominal/pelvic inflammation was observed in 45 patients (25.0%). A large amount of perioperative blood loss was the only factor associated with the occurrence of intra-abdominal/pelvic inflammation (p = 0.007).
Limitations: This study was limited by its retrospective nature and heterogeneous population.
Conclusions: Intra-abdominal/pelvic inflammation after radical surgery for locally recurrent rectal cancer is associated with poor prognosis. See Video Abstract at http://journals.lww.com/dcrjournal/Pages/videogallery.aspx.
abstract_id: PUBMED:35351741
Different uses of the breast implant to prevent empty pelvic complications following pelvic exenteration. Pelvic exenteration surgery is used as a standard procedure in recurrent pelvic cancers. Total pelvic exenteration (TPE) includes resection of the uterus, prostate, ureters, bladder and rectosigmoid colon from the pelvic space. Empty pelvis syndrome is a complication of the TPE procedure. Following TPE, complications such as haematoma, abscess leading to permanent pus discharge and chronic infections can occur. Herein, we present the case of a man in his 50s who was referred for pelvic pain, foul-smelling discharge and a non-functioning colostomy, and who had undergone low anterior resection for distal rectal cancer 1.5 years earlier. In this case, we performed TPE for the recurrent tumour. To prevent TPE complications, we used a breast implant for filling the pelvic cavity. The early and late postoperative course was uneventful.
abstract_id: PUBMED:22110836
Management of abdominal and pelvic abscess in Crohn's disease. Patients with Crohn's disease may develop an abdominal or pelvic abscess during the course of their illness. This process results from transmural inflammation and penetration of the bowel wall, which in turn leads to a contained perforation and subsequent abscess formation. Management of patients with Crohn's related intra-abdominal and pelvic abscesses is challenging and requires the expertise of multiple specialties working in concert. Treatment usually consists of percutaneous abscess drainage (PAD) under guidance of computed tomography in addition to antibiotics. PAD allows for drainage of infection and avoidance of a two-stage surgical procedure in most cases. It is unclear if PAD can be considered a definitive treatment without the need for future surgery. The use of immune suppressive agents such as anti-tumor necrosis factor-α in this setting may be hazardous and their appropriate use is controversial. This article discusses the management of spontaneous abdominal and pelvic abscesses in Crohn's disease.
abstract_id: PUBMED:18592800
Pelvic actinomycosis mimicking ovarian malignancy: three cases. Objective: Three cases of pelvic actinomycosis initially diagnosed as pelvic malignancy and treated surgically are reported.
Cases: The first case was a 38-year-old multiparous woman who was referred to our clinic because of bilateral ovarian solid masses. With the impression of ovarian carcinoma, a laparotomy was performed. During surgery adhesiolysis, total abdominal hysterectomy, bilateral salpingo-oophorectomy, infracolic omentectomy, appendectomy, peritoneal washings, and peritoneal abscess drainage were performed. The second patient was a 37-year-old woman who presented with a left-sided fixed solid mass highly suggestive of pelvic malignancy. Both ureters were found to be dilated with hydronephrosis in the right kidney supporting the diagnosis of retroperitoneal fibrosis. Excision of the mass, colectomy and temporary diverting colostomy and stent insertion to the left ureter were performed. Colostomy repair was performed five months later. On the fifth day postoperatively, fascial necrosis developed so a Bogota-bag was placed on the anterior abdominal wall and left for secondary healing. The third patient was a 51-year-old postmenopausal woman incidentally diagnosed as having a pelvic mass while having been investigated for constipation and nausea. She had had a colostomy one year before and a reanastomosis two months after. Total abdominal hysterectomy and bilateral salpingo-oophorectomy were performed. In all cases, histopathologic staining of the specimens revealed chronic inflammation containing actinomycosis abscesses confirmed with microbiologic identification.
Conclusion: Pelvic actinomycosis is an uncommon cause of a pelvic mass. However, it should be kept in mind in the differential diagnosis of pelvic masses, especially in patients with a history of IUD use, in order to avoid an unnecessarily extensive surgical procedure.
abstract_id: PUBMED:18481146
The oblique rectus abdominal myocutaneous flap for complex pelvic wound reconstruction. Purpose: The oblique rectus abdominal myocutaneous flap is a seldom used flap design based on perforating vessels exiting the rectus near the umbilicus. Compared to other flaps, the oblique rectus abdominal myocutaneous flap provides increased soft tissue to fill pelvic dead space, with the further advantage of intact skin to close perineal defects. Here we detail the oblique rectus abdominal myocutaneous flap in achieving closure of complex perineal wounds.
Methods: A review of indications and outcomes in 16 patients undergoing complex pelvic operations requiring reconstruction with this flap was undertaken.
Results: All patients had been previously treated with pelvic irradiation for cancer. Indications for flap reconstruction included abdominal perineal resection for anal/rectal cancer, pelvic sarcoma/sacral resection/exenteration, small bowel/colonic fistula resection, and total proctocolectomy with vaginal reconstruction. Median follow-up was 17 (range, 1-57) months. Complications included epidermal necrosis at the flap tip (n = 2), delayed perineal wound breakdown (n = 1), one abdominal wound infection, one small abdominal dehiscence, and four pelvic abscesses all managed nonoperatively. A single recurrent fistula required operative resection three months postoperatively. There were no cases of complete flap necrosis, vascular failure or persistently draining perineal sinus, and no mortalities related to the flap reconstruction.
Conclusions: The treatment of complex pelvic wounds, especially following pelvic radiation, is facilitated by the oblique rectus abdominal myocutaneous flap. This technique provides ample tissue for large pelvic wounds, including skin for perineal defects. Comparing our results to existing literature, the oblique rectus abdominal myocutaneous flap displays a favorable morbidity profile, providing a safe means of delivering well-vascularized tissue to the pelvic cavity and perineal floor.
abstract_id: PUBMED:19656015
Chronic pelvic pain reveals sacral osteomyelitis three years after abdominal hysterectomy. Background: Deep pelvic abscess is a well-known infective complication in gynecologic practice. However, sacral osteomyelitis has been reported rarely. We describe sacral infection presenting three years after abdominal hysterectomy and point out the difficulty in management.
Methods: Case report and review of the pertinent literature.
Results: A 46-year-old woman who had undergone abdominal hysterectomy three years before presented with an 8-month history of abdominopelvic pain recently intensifying in the sitting position without fever. Gynecologic, urinary, and rectal examination did not yield positive findings. An abdominopelvic computed tomography (CT) scan was normal except for sacral osteolysis. A neoplasm was suspected, but magnetic resonance imaging revealed an S2-S4 cystic collection with presacral extension. Neurologic examination did not show any focal deficits. A posterior CT-guided biopsy-aspiration yielded purulent fluid. Pathologic examination revealed inflammatory granulations without any malignant tumor. Abscess cultures grew three microorganisms. The patient's symptoms resolved completely after 3 months of antibiotic therapy.
Conclusions: Sacral osteomyelitis has not been reported previously after abdominal hysterectomy. Early diagnosis was made difficult by the absence of neurologic findings. Such postoperative infection should be considered after pelvic surgery. Minimally invasive needle aspiration may confirm the diagnosis and reduce the necessary extent of surgical intervention.
abstract_id: PUBMED:37849264
Pelvic exenteration for late complications of radiation-induced pelvic injury: a preliminary study. Objective: To investigate the safety and efficacy of total pelvic exenteration (TPE) for treating late complications of radiation-induced pelvic injury. Methods: This was a descriptive case series study. The inclusion criteria were as follows: (1) confirmed radiation-induced pelvic injury after radiotherapy for pelvic malignancies; (2) late complications of radiation-induced pelvic injury, such as bleeding, perforation, fistula, and obstruction, involving multiple pelvic organs; (3) TPE recommended by a multidisciplinary team; (4) patient in good preoperative condition and considered fit enough to tolerate TPE; and (5) patient extremely willing to undergo the procedure and accept the associated risks. The exclusion criteria were as follows: (1) preoperative or intraoperative diagnosis of tumor recurrence or metastasis; (2) had only undergone diversion or bypass surgery after laparoscopic exploration; and (3) incomplete medical records. Clinical and follow-up data of patients who had undergone TPE for late complications of radiation-induced pelvic injury between March 2020 and September 2022 at the Sixth Affiliated Hospital of Sun Yat-sen University were analyzed. Perioperative recovery, postoperative complications, perioperative deaths, and quality of life 1 year postoperatively were recorded. Results: The study cohort comprised 14 women, nine of whom had recto-vagino-vesical fistulas, two vesicovaginal fistulas, one ileo-vesical fistula and rectal necrosis, one ileo-vesical and rectovaginal fistulas, and one rectal ulcer and bilateral ureteral stenosis. The mean duration of surgery was 592.1±167.6 minutes and the median blood loss 550 (100-6000) mL. Ten patients underwent intestinal reconstruction, and four the Hartmann procedure. Ten patients underwent urinary reconstruction using Bricker's procedure and seven underwent pelvic floor reconstruction. The mean postoperative hospital stay was 23.6±14.9 days. Seven patients (7/14) had serious postoperative complications (Clavien-Dindo IIIa to IVb), including surgical site infections in eight, abdominopelvic abscesses in five, pulmonary infections in five, intestinal obstruction in four, and urinary leakage in two. Empty pelvis syndrome (EPS) was diagnosed in five of the seven patients who had not undergone pelvic floor reconstruction, and in none of those who had. One patient with EPS underwent reoperation because of a pelvic abscess, pelvic hemorrhage, and intestinal obstruction. There were no perioperative deaths. During 18.9±10.1 months of follow-up, three patients died, two of renal failure, which was a preoperative comorbidity, and one of COVID-19. The remaining patients had gradual and significant relief of symptoms during follow-up. QLQ-C30 assessment of postoperative quality of life showed gradual improvement in all functional domains and general health at 1, 3, and 6 months postoperatively (all P<0.05). Conclusions: TPE is a feasible procedure for treating late complications of radiation-induced pelvic injury combined with complex pelvic fistulas. TPE is effective in alleviating symptoms and improving quality of life. However, the indications for this procedure should be strictly controlled and the surgery carried out only by experienced surgeons.
abstract_id: PUBMED:3615864
Pelvic exenteration: role of CT in follow-up. Fifty-five computed tomography (CT) scans of the pelves and abdomens of 33 patients who had undergone pelvic exenteration were reviewed. There were 27 gynecologic and six colorectal malignancies. The interval between surgery and the first CT scan ranged from 2 weeks to 37 months (median, 8 months). CT findings included abnormal fluid collections (33.3%), abnormalities of the neovagina (30.3%) and presacral soft tissues (36.4%), increased hydronephrosis (54.5%), and lymphocysts (6.1%). Tumors recurred in 17 of 33 patients (51.5%) at a median interval of 9 months after surgery and had several CT manifestations. The most common of these was a soft-tissue mass of variable density and shape, but pelvic fluid collections as well as abnormalities of the neovagina and presacral soft-tissue layer were also associated with tumor recurrence. The surgical indications, methods, and potential post-operative complications of pelvic exenteration were reviewed and the role of CT in the follow-up of these patients was emphasized.
abstract_id: PUBMED:26857044
Pelvic actinomycosis: Diagnostic and therapeutic aspects. Objectives: Actinomycosis is a rare, little-known granulomatous suppurative disease, more common in women and favored by the use of an intrauterine contraceptive device (IUD). The pelvic location is the rarest and may extend to adjacent organs, making preoperative diagnosis difficult and the clinical presentation misleading. Early diagnosis of this condition determines the therapeutic strategy and avoids mutilating interventions, especially in young women.
Methods: We reviewed the records of women who consulted the department of obstetrics and gynecology at Ben Arous hospital (Tunisia) between January 2003 and December 2013 for a pelvic pain syndrome and in whom the diagnosis of actinomycosis was suspected on clinical and imaging findings and confirmed by pathology.
Results: Eight cases of gynecologic abdominopelvic actinomycosis were diagnosed during the study period. Seven patients were carriers of an intrauterine device, worn for an average of 5 years. Functional signs were essentially pelvic pain and fever. Physical examination mainly showed two clinical presentations: a pelvic or abdominopelvic tumor syndrome, and a picture of pelvic abscess or pelvic inflammatory disease. Radiological investigations allowed the diagnosis of actinomycosis to be suspected in only one patient, in whom percutaneous biopsy confirmed the histological diagnosis without resorting to a surgical procedure. The operative procedures performed varied as appropriate. The diagnosis of actinomycosis was made by pathology, without any case of bacterial isolation. All patients received antibiotic treatment with penicillin. The subsequent evolution was favorable.
Conclusion: The diagnosis of actinomycosis should be considered for any invasive abdominal mass of neoplastic appearance and in any clinical picture of genital infection, especially in patients who have worn an IUD for 5 years or more.
Answer: Based on the findings from the prospective study detailed in abstract PUBMED:9216778, it appears that isolated pelvic abscesses after abdominal surgery and isolated abdominal abscesses after pelvic surgery are very uncommon in oncologic patients. The study showed that after abdominal surgery, none of the patients had a loculated fluid collection present only in the pelvis. Similarly, after pelvic surgery, no patient had a loculated collection present only in the abdomen. Therefore, the conclusion drawn from this study is that CT initially need only be directed to the region of surgery in this particular patient population. This suggests that scanning both the abdomen and pelvis may not be necessary when searching for abscesses after surgery in patients with neoplasia, as long as the CT is directed to the region where the surgery was performed.
Instruction: Does early verbal fluency decline after STN implantation predict long-term cognitive outcome after STN-DBS in Parkinson's disease?
Abstracts:
abstract_id: PUBMED:30363383
Association of MRI Measurements with Cognitive Outcomes After STN-DBS in Parkinson's Disease. Objectives: Subthalamic nucleus deep brain stimulation (STN-DBS) is an effective treatment for improving the motor symptoms of Parkinson's disease (PD). Overall, cognitive function remains stable after STN-DBS in most patients. However, cognitive decline, specifically in the verbal fluency domain, is seen in a subset of STN-DBS patients. Currently, predictors of cognitive decline in PD patients treated with STN-DBS are not well known. Thus, identification of presurgical predictors might provide an important clinical tool for better risk-to-benefit assessment. This study explores whether whole brain white matter lesion (WML) volume, or hippocampal and forebrain volumes, measured quantitatively on MRI, are associated with cognitive changes following STN-DBS in PD patients.
Methods: We conducted a retrospective study using presurgical, and ≥ 6-month postsurgical neuropsychological (NP) evaluation scores from 43 PD patients with STN-DBS. Mean pre/post NP test scores for measures of executive function, attention, verbal fluency, memory, and visuospatial function were analyzed and correlated with WML volume, and brain volumetric data.
Results: Although cognitive measures of verbal fluency, executive function, attention, memory, and visuospatial function showed declines following STN-DBS, we observed limited evidence that white matter lesion burden or cortical atrophy contributed to cognitive change following STN-DBS.
Conclusions: These results suggest that post-STN-DBS cognitive changes may be unrelated to presurgical WML burden and presence of cortical atrophy.
abstract_id: PUBMED:25125047
Does early verbal fluency decline after STN implantation predict long-term cognitive outcome after STN-DBS in Parkinson's disease? Backgrounds: An early and transient verbal fluency (VF) decline and impairment in frontal executive function, suggesting a cognitive microlesion effect, may influence the cognitive repercussions related to subthalamic nucleus deep brain stimulation (STN-DBS).
Methods: Neuropsychological tests including semantic and phonemic verbal fluency were administered both before surgery (baseline), the third day after surgery (T3), at six months (T180), and at an endpoint multiple years after surgery (Tyears).
Results: Twenty-four patients (mean age, 63.5 ± 9.5 years; mean disease duration, 12 ± 5.8 years) were included. Both semantic and phonemic VF decreased significantly in the acute post-operative period (44.4 ± 28.2% and 34.3 ± 33.4%, respectively) and remained low at 6 months compared to pre-operative levels (decrease of 3.4 ± 47.8% and 10.8 ± 32.1%) (P < 0.05). Regression analysis showed the acute post-operative decline in phonemic VF to be an independent predictor of decreased phonemic VF at six months. Age was the only independent predictive factor for incident Parkinson's disease dementia (PDD) (F (4,19)=3.4, P<0.03).
Conclusion: An acute post-operative decline in phonemic VF can be predictive of a long-term phonemic VF deficit. The severity of this cognitive lesion effect does not predict the development of dementia which appears to be disease-related.
abstract_id: PUBMED:36048377
Pre-operative cognitive burden as predictor of motor outcome following bilateral subthalamic nucleus deep brain stimulation in Parkinson's disease. Introduction: The interrelationship between neurocognitive impairments and motor functions was observed in patients with advanced Parkinson's disease (PD). This study was conducted to identify pre-operative neurocognitive and clinical predictors of short-term motor outcome following subthalamic nucleus deep brain stimulation (STN-DBS).
Methods: All consecutive PD patients who were eligible for bilateral STN-DBS from 2009 to 2019 were evaluated before and at 1 year following surgery. Standard motor evaluation and neurocognitive tests including global cognition, memory, executive functions (attention and category fluency), confrontational speech, visuospatial abilities, and mood were conducted at baseline. The post-operative STN-DBS effects were assessed at 1 year following the surgery. Multiple regression analysis was applied to identify baseline independent predictors of post-operative STN-DBS effect.
Results: A total of 82 patients were analyzed. It was found that younger age at operation, higher levodopa responsiveness at baseline based on UPDRS-III total score, and better baseline verbal delayed memory and category fluency predicted post-operative motor outcome at 1 year following STN-DBS (F = 9.639, p < 0.001, R2 = .340).
Conclusion: Our findings demonstrated that baseline cognitive burden, especially cognitive processes related to frontostriatal circuits, was a significant clinical predictor of short-term motor outcomes following STN-DBS. Profile analysis of neurocognitive functions at baseline is recommended.
abstract_id: PUBMED:20362061
Patient-specific analysis of the relationship between the volume of tissue activated during DBS and verbal fluency. Deep brain stimulation (DBS) for the treatment of advanced Parkinson's disease involves implantation of a lead with four small contacts usually within the subthalamic nucleus (STN) or globus pallidus internus (GPi). While generally safe from a cognitive standpoint, STN DBS has been commonly associated with a decrease in the speeded production of words, a skill referred to as verbal fluency. Virtually all studies comparing presurgical to postsurgical verbal fluency performance have detected a decrease with DBS. The decline may be attributable in part to the surgical procedures, yet the relative contributions of stimulation effects are not known. In the present study, we used patient-specific DBS computer models to investigate the effects of stimulation on verbal fluency performance. Specifically, we investigated relationships of the volume and locus of activated STN tissue to verbal fluency outcome. Stimulation of different electrode contacts within the STN did not affect total verbal fluency scores. However, models of activation revealed subtle relationships between the locus and volume of activated tissue and verbal fluency performance. At ventral contacts, more tissue activation inside the STN was associated with decreased letter fluency performance. At optimal contacts, more tissue activation within the STN was associated with improved letter fluency performance. These findings suggest subtle effects of stimulation on verbal fluency performance, consistent with the functional nonmotor subregions/somatotopy of the STN.
abstract_id: PUBMED:30687215
Quantitative EEG and Verbal Fluency in DBS Patients: Comparison of Stimulator-On and -Off Conditions. Introduction: Deep brain stimulation of the subthalamic nucleus (STN-DBS) ameliorates motor function in patients with Parkinson's disease and allows reducing dopaminergic therapy. Besides its effects on motor function, STN-DBS influences many non-motor symptoms, among which decline of verbal fluency test performance is most consistently reported. The surgical procedure itself is the likely cause of this decline, while the influence of the electrical stimulation is still controversial. STN-DBS also produces widespread changes of cortical activity as visualized by quantitative EEG. The present study aims to link an alteration in verbal fluency performance by electrical stimulation of the STN to alterations in quantitative EEG. Methods: Sixteen patients with STN-DBS were included. All patients had a high density EEG recording (256 channels) while verbal fluency was tested in the stimulator on/off conditions. Phonemic, semantic, and alternating phonemic and semantic fluency were tested (Regensburger Wortflüssigkeits-Test). Results: On the group level, stimulation of the STN did not alter verbal fluency performance. EEG frequency analysis showed an increase of relative alpha2 (10-13 Hz) and beta (13-30 Hz) power in the parieto-occipital region (p ≤ 0.01). On the individual level, changes of verbal fluency induced by stimulation of the STN were disparate and correlated inversely with delta power in the left temporal lobe (p < 0.05). Conclusion: STN stimulation does not alter verbal fluency performance in a systematic way at group level. However, when in individual patients an alteration of verbal fluency performance is produced by electrical stimulation of the STN, it correlates inversely with left temporal delta power.
abstract_id: PUBMED:34678718
Anterior lead location predicts verbal fluency decline following STN-DBS in Parkinson's disease. Introduction: Verbal fluency (VF) decline is a well-documented cognitive effect of Deep Brain Stimulation of the subthalamic nucleus (STN-DBS) in patients with Parkinson's disease (PD). This decline may be associated with disruption to left-sided frontostriatal circuitry involving the anteroventral non-motor area of the STN. While recent studies have examined the impact of lead location in relation to functional STN subdivisions on VF outcomes, results have been mixed and methods have been limited by atlas-based location mapping.
Methods: Participants included 59 individuals with PD who underwent bilateral STN-DBS. Each participant's active contact location was determined in an atlas-independent fashion, relative to their individual MR-visualized STN midpoint. Multiple linear regression was used to examine lead location in each direction as a predictor of phonemic and semantic VF decline, controlling for demographic and disease variables.
Results: More anterior lead locations relative to the STN midpoint in the left hemisphere predicted greater phonemic VF decline (B = -2.34, B SE = 1.08, β = -0.29, sr2 = 0.08). Lead location was not a significant predictor of semantic VF decline.
Conclusion: Using an individualized atlas-independent approach, present findings suggest that more anterior stimulation of the left STN may uniquely contribute to post-DBS VF decline. This is consistent with models in which the anterior STN represents a "non-motor" functional subdivision with connections to frontal regions, e.g., the left dorsal prefrontal cortex. Future studies should investigate the effect of DBS lead trajectory on VF outcomes.
abstract_id: PUBMED:26831827
Verbal Fluency in Parkinson's Patients with and without Bilateral Deep Brain Stimulation of the Subthalamic Nucleus: A Meta-analysis. Objectives: Patients with Parkinson's disease often experience significant decline in verbal fluency over time; however, deep brain stimulation of the subthalamic nucleus (STN-DBS) is also associated with post-surgical declines in verbal fluency. The purpose of this study was to determine if Parkinson's patients who have undergone bilateral STN-DBS have greater impairment in verbal fluency compared to Parkinson's patients treated by medication only.
Methods: A literature search yielded over 140 articles and 10 articles met inclusion criteria. A total of 439 patients with Parkinson's disease who underwent bilateral STN-DBS and 392 non-surgical patients were included. Cohen's d, a measure of effect size, was calculated using a random effects model to compare post-treatment verbal fluency in patients with Parkinson's disease who underwent STN-DBS versus those in the non-surgical comparison group.
Results: The random effects model demonstrated a medium effect size for letter fluency (d=-0.47) and a small effect size for category fluency (d=-0.31), indicating individuals with bilateral STN-DBS had significantly worse verbal fluency performance than the non-surgical comparison group.
Conclusions: Individuals with Parkinson's disease who have undergone bilateral STN-DBS experience greater deficits in letter and category verbal fluency compared to a non-surgical group.
abstract_id: PUBMED:22846795
Early verbal fluency decline after STN implantation: is it a cognitive microlesion effect? Backgrounds: Worsening of verbal fluency is reported after subthalamic nucleus deep brain stimulation in Parkinson's disease. It is postulated that these changes could reflect a microlesion effect consecutive to the surgical procedure itself.
Methods: We evaluated verbal fluency in 26 patients (mean age, 57.9±8.5 years; mean disease duration, 11.4±3.5 years) both before surgery (baseline) and after surgery, on the third day (T3) and the tenth day (T10) after STN implantation (before turning on the stimulation), and at six months (T180).
Results: Number of total words and switches was significantly reduced at T3 and T10, while average cluster size was unchanged. Repeated post-operative neuropsychological testing demonstrated reliable improvement from T3 to T180 on verbal fluency.
Conclusion: This study provides evidence of a transient verbal fluency decline consecutive to a microlesion effect. Further studies are needed to determine a putative relationship between early and long-term verbal fluency impairment.
abstract_id: PUBMED:35005065
Subthalamic Nucleus Stimulation in Parkinson's Disease: 5-Year Extension Study of a Randomized Trial. Background: In Parkinson's disease (PD) long-term motor outcomes of subthalamic nucleus deep brain stimulation (STN-DBS) are well documented, while comprehensive reports on non-motor outcomes are fewer and less consistent.
Objective: To report motor and non-motor symptoms after 5-years of STN-DBS.
Methods: We performed an open 5-year extension study of a randomized trial that compared intraoperative verification versus mapping of STN using microelectrode recordings. Changes from preoperative to 5-years of STN-DBS were evaluated for motor and non-motor symptoms (MDS-UPDRS I-IV), sleep disturbances (PDSS), autonomic symptoms (Scopa-Aut), quality of life (PDQ-39) and cognition through a neuropsychological test battery. We evaluated whether any differences between the two randomization groups were still present, and assessed preoperative predictors of physical dependence after 5 years of treatment using logistic regression.
Results: We found lasting improvement of off-medication motor symptoms (total MDS-UPDRS III, bradykinetic-rigid symptoms and tremor), on-medication tremor, motor fluctuations, and sleep disturbances, but reduced performance across all cognitive domains, except verbal memory. Reduction of verbal fluency and executive function was most pronounced the first year and may thus be more directly related to the surgery than worsening in other domains. The group mapped with multiple microelectrode recordings had more improvement of bradykinetic-rigid symptoms and of PDQ-39 bodily discomfort sub-score, but also more reduction in word fluency. Older age was the most important factor associated with physical dependence after 5 years.
Conclusion: STN-DBS offers good long-term effects, including improved sleep, despite disease progression. STN-DBS surgery may negatively impact verbal fluency and executive function.
abstract_id: PUBMED:24231557
Cognitive predictors of cognitive change following bilateral subthalamic nucleus deep brain stimulation in Parkinson's disease. The beneficial effects of deep brain stimulation (DBS) of the subthalamic nucleus (STN) for the motor symptoms in advanced Parkinson's disease (PD) are well established. Early in PD, mild cognitive impairment is present in a proportion of patients. Hence, it can also be present in PD patients considered for DBS. The potential impact of even a modest decline post-surgically is a concern because it could result in impaired cognitive function. Therefore, attempts to determine which pre-operative cognitive measures predict post-operative cognitive change warrant further attention. We report our findings in a cohort of 30 routinely operated non-demented patients who underwent detailed neuropsychological assessments on average 7.1 months before and 9.4 months after STN DBS. We report the individual and group differences pre- and post-DBS. Stepwise regression analysis was used to analyse the best cognitive predictors of post-operative cognitive changes. We describe our data in relation to published normative data. Post-STN DBS, the immediate story recall component of verbal memory was the most affected cognitive function showing a significant decline in its group mean with a large effect size. The best predictors for this change were pre-surgical list learning and Full Scale Intelligence Quotient. These results suggest that non-demented patients, with even mild impairments in both general intellectual functions and list learning, may be at greater risk of decline in other aspects of verbal memory after STN DBS. Pre-existing mild executive dysfunction was not influenced post-operatively. These findings may help selection and consent for STN DBS.
Answer: Early verbal fluency decline after STN implantation has been observed and is considered a potential predictor of long-term cognitive outcomes following STN-DBS in Parkinson's disease. Specifically, an acute post-operative decline in phonemic verbal fluency has been shown to predict a long-term phonemic verbal fluency deficit. However, the severity of this cognitive lesion effect does not predict the development of dementia, which appears to be more closely related to the disease itself rather than the surgical intervention (PUBMED:25125047).
Moreover, a study found that more anterior lead locations relative to the STN midpoint in the left hemisphere predicted greater phonemic verbal fluency decline (PUBMED:34678718). This suggests that the specific location of the DBS lead can influence the extent of verbal fluency decline, with anterior placement being more detrimental.
While early verbal fluency decline is a concern, it is important to note that cognitive changes following STN-DBS may not be solely attributable to the surgical procedure or the stimulation effects. For instance, a study reported that post-STN-DBS cognitive changes might be unrelated to presurgical white matter lesion burden and presence of cortical atrophy (PUBMED:30363383). Additionally, another study indicated that cognitive predictors such as pre-surgical list learning and Full Scale Intelligence Quotient were the best predictors for post-operative cognitive changes, particularly in verbal memory (PUBMED:24231557).
In summary, while early verbal fluency decline after STN implantation may predict long-term phonemic verbal fluency deficits, it does not necessarily predict the development of dementia. Other factors, including lead placement and pre-existing cognitive function, also play a role in determining cognitive outcomes after STN-DBS in Parkinson's disease.
Instruction: Can clock drawing test help to differentiate between dementia of the Alzheimer's type and vascular dementia?
Abstracts:
abstract_id: PUBMED:26138809
Can clock drawing differentiate Alzheimer's disease from other dementias? Background: Studies have shown the clock-drawing test (CDT) to be a useful screening test that differentiates between normal, elderly populations, and those diagnosed with dementia. However, the results of studies which have looked at the utility of the CDT to help differentiate Alzheimer's disease (AD) from other dementias have been conflicting. The purpose of this study was to explore the utility of the CDT in discriminating between patients with AD and other types of dementia.
Methods: A review was conducted using MEDLINE, PsycINFO, and Embase. Search terms included clock drawing or CLOX and dementia or Parkinson's Disease or AD or dementia with Lewy bodies (DLB) or vascular dementia (VaD).
Results: Twenty studies were included. In most of the studies, no significant differences were found in quantitative CDT scores between AD and VaD, DLB, and Parkinson's disease dementia (PDD) patients. However, frontotemporal dementia (FTD) patients consistently scored higher on the CDT than AD patients. Qualitative analyses of errors differentiated AD from other types of dementia.
Conclusions: Overall, the CDT score may be useful in distinguishing between AD and FTD patients, but shows limited value in differentiating between AD and VaD, DLB, and PDD. Qualitative analysis of the type of CDT errors may be a useful adjunct in the differential diagnosis of the types of dementias.
abstract_id: PUBMED:12211117
Can clock drawing test help to differentiate between dementia of the Alzheimer's type and vascular dementia? A preliminary study. Objectives: the purpose of this preliminary study was to determine if clock drawing performance may help to differentiate between dementia of the Alzheimer's type (DAT) and vascular dementia (VD) patients.
Methods: eighty-eight community-dwelling outpatients were comprehensively evaluated and met DSM-IV criteria for DAT or VD. Cognitive evaluation included the Mini-Mental State Examination (MMSE) and the Cambridge Cognitive Examination (CAMCOG). CAMCOG derived clock drawings were blindly evaluated by the same investigator, according to Freedman's method for clock drawing, and a total score as well as subscores (contour, numbers, hands and center) were determined.
Results: There were no significant differences between DAT and VD patients in terms of demographic (age, gender, education) and cognitive (MMSE score, CAMCOG score) characteristics. On average, the VD group showed slightly poorer performance on each of the clock drawing test (CDT) measures studied. With application of the Bonferroni correction, only Freedman's total score and hands subscore were statistically different between groups (p<0.003, p<0.004, respectively). Stepwise logistic regression analyses showed that the only significant variable was Freedman's total score (B=-0.273, p=0.005). Stepwise discriminant analysis identified Freedman's total score as the only significant predictor of diagnosis (Wilks' lambda=0.903, p=0.003). This model correctly classified 65.9% overall into the respective DAT and VD groups.
Conclusions: CDT scored according to a comprehensive technique may be of value in differentiating DAT from VD patients. We hypothesize that the classificatory ability of Freedman's method might be attributed to its presumed sensitivity to impaired executive functioning which is more pronounced in VD compared with DAT patients.
abstract_id: PUBMED:34219737
Classifying Non-Dementia and Alzheimer's Disease/Vascular Dementia Patients Using Kinematic, Time-Based, and Visuospatial Parameters: The Digital Clock Drawing Test. Background: Advantages of digital clock drawing metrics for dementia subtype classification needs examination.
Objective: To assess how well kinematic, time-based, and visuospatial features extracted from the digital Clock Drawing Test (dCDT) can classify a combined group of Alzheimer's disease/Vascular Dementia patients versus healthy controls (HC), and classify dementia patients with Alzheimer's disease (AD) versus vascular dementia (VaD).
Methods: Healthy, community-dwelling control participants (n = 175), patients diagnosed clinically with Alzheimer's disease (n = 29), and vascular dementia (n = 27) completed the dCDT to command and copy clock drawing conditions. Thirty-seven dCDT command and 37 copy dCDT features were extracted and used with Random Forest classification models.
Results: When HC participants were compared to participants with dementia, optimal area under the curve was achieved using models that combined both command and copy dCDT features (AUC = 91.52%). Similarly, when AD versus VaD participants were compared, optimal area under the curve was achieved with models that combined both command and copy features (AUC = 76.94%). Subsequent follow-up analyses of a corpus of 10 variables of interest determined using a Gini Index found that groups could be dissociated based on kinematic, time-based, and visuospatial features.
Conclusion: The dCDT is able to operationally define graphomotor output that cannot be measured using traditional paper and pencil test administration in older healthy controls and participants with dementia. These data suggest that kinematic, time-based, and visuospatial behavior obtained using the dCDT may provide additional neurocognitive biomarkers that may be able to identify and track dementia syndromes.
abstract_id: PUBMED:29417704
Cognitive impairment in Parkinson's disease, Alzheimer's dementia, and vascular dementia: the role of the clock-drawing test. Aim: Cognitive impairment is present in several neurodegenerative disorders. The clock-drawing test (CDT) represents a useful screening instrument for assessing the evolution of cognitive decline. The aim of this study was to investigate the sensitivity of the CDT in monitoring and differentiating the evolution of cognitive decline in Alzheimer's dementia (AD), vascular dementia (VaD), and Parkinson's disease (PD).
Methods: This study involved 139 patients, including 39 patients with PD and mild cognitive impairment, 16 demented PD patients, 21 VaD patients with mild cognitive impairment, 17 patients with VaD, 33 patients with mild cognitive impairment due to AD, and 13 patients with probable AD. All participants completed the CDT. The Mini-Mental State Examination was administered to establish patients' cognitive functioning.
Results: Comparisons of quantitative and qualitative CDT scores showed significant differences between the various diseases. Impairment of executive functioning seems to be more pronounced in PD and VaD than in AD. Patients with AD committed more errors related to a loss of semantic knowledge, indicating a severely reduced capacity in abstract and conceptual thinking.
Conclusion: Results support the usefulness and sensitivity of the CDT in the detection of different dementia subtypes. Qualitative error analysis of the CDT may be helpful in differentiating PD, VaD, and AD, even in the early stages of each disease.
abstract_id: PUBMED:23634401
The use of the mini-mental state examination and the clock-drawing test for dementia in a tertiary hospital. Introduction: An early and quick identification of dementia is desirable to improve the overall care of affected persons in developing countries. The aim of this study was to evaluate the discriminative abilities of the Mini Mental State Examination (MMSE) and the Clock Drawing Test (CDT) in differentiating demented patients from controls and also in differentiating between the different types of dementia.
Patients And Methods: This study was designed to evaluate the patients with varied types and severities of dementia, who were diagnosed by using the Clinical Dementia Rating (CDR) scale. All the patients completed the MMSE and the simplified CDT.
Results: This study included 197 patients with an age range of 43-79 years. Fifty-one patients (25.9%) were diagnosed with Alzheimer Dementia (AD), 37 patients (18.8%) with Vascular Dementia (VD), 23 patients (11.7%) with Parkinson's Disease Dementia (PDD) and 86 patients (43.6%) with other variants of dementia. The total MMSE score of the enrolled patients was significantly lower as compared to that of the control subjects, with a non-significant difference between the varied diagnoses. The total CDT scores were significantly lower in the patients as compared to those in the controls, with significantly lower scores in the PDD group as compared to those in the AD group. The patients who had AD showed non-significantly higher CDT scores as compared to the patients who had vascular and other types of dementia.
Conclusion: A combined application of both the MMSE and the CDT can identify persons with cognitive impairment, and this may be a useful tool for the diagnosis of non-Alzheimer's types of dementia.
abstract_id: PUBMED:30906399
Usefulness of the Clock Drawing Test as a Cognitive Screening Instrument for Mild Cognitive Impairment and Mild Dementia: an Evaluation Using Three Scoring Systems. Background And Purpose: Although the clock drawing test (CDT) is a widely used cognitive screening instrument, there have been inconsistent findings regarding its utility with various scoring systems in patients with mild cognitive impairment (MCI) or dementia. The present study aimed to identify whether patients with MCI or dementia exhibited impairment on the CDT using three different scoring systems, and to determine which scoring system is more useful for detecting MCI and mild dementia.
Methods: Patients with amnestic mild cognitive impairment (aMCI), vascular mild cognitive impairment (VaMCI), mild Alzheimer's disease (AD), mild vascular dementia (VaD), and cognitively normal older adults (CN) were included. All participants were administered the CDT, the Korean-Mini Mental State Examination (K-MMSE), and the Clinical Dementia Rating scale. The CDT was scored using the 3-, 5-, and 15-point scoring systems.
Results: On all three scoring systems, all patient groups demonstrated significantly lower scores than the CN. However, while there were no significant differences among patients with aMCI, VaMCI, and AD, those with VaD exhibited the lowest scores. Area under the Receiver Operating Characteristic curves revealed that the three CDT scoring systems were comparable with the K-MMSE in differentiating aMCI, VaMCI, and VaD from CN. In differentiating AD from CN, however, the CDT using the 15-point scoring system demonstrated the most comparable discriminability with K-MMSE.
Conclusions: The results demonstrated that the CDT is a useful cognitive screening tool that is comparable with the Mini-Mental State Examination, and that simple CDT scoring systems are sufficient for differentiating patients with MCI and mild dementia from CN.
abstract_id: PUBMED:20712183
The four-point scoring system for the clock drawing test does not differentiate between Alzheimer's disease and vascular dementia. The purpose of this study was to explore the sensitivity and specificity of the Clock Drawing Test by using a widely employed four-point scoring system to discriminate between patients with Alzheimer's disease or vascular dementia. Receiver operating characteristic analysis indicated that the Clock Drawing Test was able to distinguish between normal elders and those with a dementia diagnosis. The cutoff score for differentiating patients with Alzheimer's disease from normal participants was ≤3. The cutoff score for differentiating those with vascular disease from normal participants was ≤3. Overall, the four-point scoring system demonstrated good sensitivity and specificity for identifying cognitive dysfunction associated with dementia; however, the current findings do not support the utility of the four-point scoring system in discriminating Alzheimer's disease and vascular dementia.
abstract_id: PUBMED:11180470
Clock drawing test: correlation with linear measurements of CT studies in demented patients. Objectives: To investigate a presumed correlation between clock drawing ratings and linear measurements of computerized tomography (CT) studies in demented patients.
Design: Blinded evaluations of clock drawing tests and CT studies of elderly dementia patients were conducted by a geriatric psychiatrist and a neuroradiologist.
Subjects: Fifty-one community-dwelling elderly subjects meeting the criteria for DSM-IV diagnosis of dementia (Alzheimer's type dementia: N=31; vascular dementia: N=15; "mixed" type dementia: N=5).
Materials: Mini-Mental State Examination (MMSE), Cambridge Cognitive Examination (CAMCOG), Clinical Dementia Rating (CDR). CAMCOG derived scored clock drawings were evaluated using adaptations of Shulman et al.'s and Freedman et al.'s methods. CT studies were evaluated using six different linear measurements of brain atrophy described in the literature.
Results: Of the CT linear measurements, only the Cerebro-Ventricular Index-2 (CVI-2; bicaudate index) significantly correlated with clock drawing ratings (CAMCOG's clock r=-0.407, p=0.003; Shulman's method r=0.357, p=0.01; Freedman's method r=-0.413, p=0.003) in the dementia group. There was no significant correlation between CVI-2 and demographic (age), cognitive (MMSE, CAMCOG) and clinical (duration of illness, CDR) ratings. Alzheimer's patients generally maintained a significant correlation between CVI-2 and clock drawings, but vascular dementia patients did not; CVI-2 also correlated significantly with the Praxis subtest of the CAMCOG in dementia and Alzheimer's patients but not in the vascular dementia group. Similarly, multiple stepwise regression analysis showed that only CVI-2, but not the other radiological measures studied, was selected as the significant variable correlating with clock drawing test ratings in the dementia group and Alzheimer's patients. Partial correlation analysis controlling for demographic and clinical variables showed that the controlled variables had no significant effect on the relationship between clock drawing ratings and CVI-2.
Conclusion: A single and easy to perform measure of caudate atrophy correlates specifically and consistently with impairments revealed in the clock drawing test and with a Praxis subtest, suggesting possible caudate involvement with clock drawings in dementia in general and of the Alzheimer's type in particular.
abstract_id: PUBMED:17690551
Diagnostic Performance of Clock Drawing Test by CLOX in an Asian Chinese population. Background/aims: Clock Drawing Tests are commonly used for cognitive screening, but their clinical utility has not yet been studied in Chinese Singaporeans. We examined the usefulness of a Clock Drawing Test, CLOX, in detecting dementia in our population and explored its performance in the dementia subtypes, Alzheimer's disease (AD), and the vascular composite group (VCG) of AD with cerebrovascular disease and vascular dementia.
Method: CLOX was administered to 73 subjects (49.3%) with dementia and 75 healthy controls (50.7%). Receiver operating characteristic analysis determined the diagnostic accuracy and optimal cut-off scores, stratified by education. Analysis of Variance was used to compare CLOX scores between AD and VCG.
Results: The diagnostic accuracy (area under the curve) was 84 and 85% for CLOX1 and CLOX2, respectively. Cut-offs at 10 for CLOX1 and 12 for CLOX2 yielded sensitivities of 75.3 and 75%, and specificities of 76 and 80%, respectively. The mean CLOX1 but not CLOX2 scores for AD (8.1) and VCG (5.5) remained significantly different (p = 0.002) after adjustment for the covariates age, gender, education, MMSE and dementia stage.
Conclusion: Our results support CLOX as a valid cognitive screen in Singaporean Chinese with adequate psychometric properties. In addition, CLOX may aid as an adjunct in differentiating AD from dementia with a vascular element, e.g. AD with cerebrovascular disease and vascular dementia.
abstract_id: PUBMED:22457687
Application of Choynowski's Memory Scale in assessment of patients with dementia. Introduction: Neuropsychological assessment of patients with dementia necessitates the use of varied memory tasks designed to measure different memory processes, including information memorization, retention and retrieval. A valid memory battery should be demonstrated to discriminate between demented and non-demented subjects and its scores should be related to the degree of intellectual impairment. The objective of this study was to evaluate the potential usefulness of Choynowski's Memory Scale in assessment of patients with dementia.
Material And Methods: Thirty-two patients with dementia and 64 age- and gender- matched healthy controls took part in the study. All participants were examined by means of Choynowski's Memory Scale, and the patients were additionally assessed by the Mini Mental State Examination and Clock Drawing Test (CDT).
Results: All subtests of Choynowski's Memory Scale were found to highly significantly (p ≤ 0.001) discriminate between patients with dementia and healthy controls. Except for Digit Span, all other subtests of Choynowski's battery were highly correlated with global mental status as assessed by the MMSE, with correlation coefficients ranging from 0.59 to 0.81. Most subtests of Choynowski's Memory Scale were also moderately or highly correlated with performance on the CDT, and the correlation coefficients between the total score on the Memory Scale and the CDT were r = 0.66 and r = 0.61 (p ≤ 0.001) for the free recall drawing and copying, respectively.
Conclusions: Choynowski's Memory Scale showed high discriminative properties and strong associations with the degree of intellectual impairment in dementia. The results encourage the use of this battery in clinical settings.
Answer: The clock drawing test (CDT) has been investigated as a tool to differentiate between dementia of the Alzheimer's type (DAT) and vascular dementia (VD), but the results have been mixed. Some studies have found that the CDT, particularly when scored using a comprehensive technique such as Freedman's method, may be of value in differentiating DAT from VD patients. This may be attributed to its sensitivity to impaired executive functioning, which is more pronounced in VD compared to DAT (PUBMED:12211117). However, other studies have indicated that the CDT, especially when using a simple four-point scoring system, does not effectively differentiate between Alzheimer's disease and vascular dementia (PUBMED:20712183).
Further research using digital clock drawing metrics suggests that kinematic, time-based, and visuospatial features extracted from the digital Clock Drawing Test (dCDT) can classify a combined group of Alzheimer's disease/Vascular Dementia patients versus healthy controls, and also differentiate dementia patients with Alzheimer's disease from those with vascular dementia (PUBMED:34219737). Additionally, qualitative error analysis of the CDT may be helpful in differentiating between different dementia subtypes, including PD, VaD, and AD, even in the early stages of each disease (PUBMED:29417704).
Overall, while the CDT may have some utility in distinguishing between Alzheimer's disease and vascular dementia, its effectiveness can vary depending on the scoring system used and the specific cognitive domains assessed. It appears that a more detailed analysis of CDT performance, possibly including qualitative error analysis or digital metrics, may enhance its discriminative power (PUBMED:26138809; PUBMED:23634401; PUBMED:30906399). Therefore, the CDT can be a helpful tool in the differential diagnosis of dementia types, but it should be used in conjunction with other assessments for a more accurate diagnosis.
Instruction: Thyroiditis de Quervain. Are there predictive factors for long-term hormone-replacement?
Abstracts:
abstract_id: PUBMED:23653018
Thyroiditis de Quervain. Are there predictive factors for long-term hormone-replacement? Background: Subacute thyroiditis is a usually self-limiting disease of the thyroid. However, approximately 0.5-15% of patients require permanent thyroxine substitution. The aim was to determine predictive factors for the necessity of long-term hormone replacement (LTH).
Patients, Methods: We retrospectively reviewed the records of 72 patients with subacute thyroiditis. Morphological and serological parameters as well as the type of therapy were tested as predictive factors for subsequent hypothyroidism.
Results: Mean age was 49 ± 11 years, and the female-to-male ratio was 4.5 : 1. Thyroid pain and signs of hyperthyroidism were the leading symptoms. Initial subclinical or overt hyperthyroidism was found in 20% and 37%, respectively. Within six months after onset, 15% and 1.3% of the patients developed subclinical or overt hypothyroidism, respectively. At latest follow-up, 26% were classified as requiring LTH. At onset the thyroid was enlarged in 64%, and at latest follow-up in 8.3%, with a significant reduction of thyroid volume after three months. At the endpoint, thyroid volume was lower in the LTH group than in the non-LTH group (41.7% vs. 57.2% of the sex-adjusted upper norm, p = 0.041). Characteristic ultrasonographic features occurred in both lobes in 74% of the patients. Serological and morphological parameters, as well as type of therapy, were not related to the need for LTH.
Conclusions: In this study the proportion of patients who received LTH was 26%. At the endpoint these patients had a lower thyroid volume compared with euthyroid patients. No predictive factors for LTH were found.
abstract_id: PUBMED:1183345
de Quervain's thyroiditis. Among 31 patients with de Quervain's thyroiditis (confirmed by biopsy in 15) there were seven who had an acute course. It is therefore suggested that the misleading term "subacute" thyroiditis for this form of inflammatory thyroid disease be avoided. It is nowadays the most frequent form of painful thyroiditis seen in clinical practice. A markedly elevated ESR with a normal peripheral leucocyte count is a typical finding. Generally the disease is completely cured within a few weeks to months; transition to an immunothyroiditis or hypothyroidism is rare. Conventional anti-inflammatory drugs, combined with thyroid hormone, give a good therapeutic response. Corticosteroids should be used only in exceptional instances.
abstract_id: PUBMED:14312927
SUBACUTE THYROIDITIS (DE QUERVAIN'S DISEASE) N/A
abstract_id: PUBMED:33644629
Prediction of Thyroid Hormone Replacement Following Thyroid Lobectomy: A Long-term Retrospective Study. Objective: Following thyroid lobectomy, patients are at risk for hypothyroidism. This study sought to determine the incidence of postlobectomy thyroid hormone replacement as well as predictive risk factors to better counsel patients.
Study Design: Retrospective cohort study.
Setting: Patients aged 18 to 75 years treated in a single academic institution who underwent thyroid lobectomy from October 2006 to September 2017.
Methods: Patients were followed for an average of 73 months. Demographic data, body mass index, size of removed and remnant lobe, preoperative thyroid-stimulating hormone (TSH) level, final thyroid pathology, and presence of thyroiditis were collected and analyzed. Risk factors were evaluated with chi-square analyses, t tests, logistic regression, and Kaplan-Meier analysis.
Results: Of the 478 patients reviewed, 369 were included in the analysis, 30% of whom eventually required thyroid hormone replacement. More than 39% started therapy >12 months postoperatively, with 90% treated within 36 months. Patient age ≥50 years and preoperative TSH ≥2.5 mIU/L were associated with odds ratios of 2.034 and 3.827, respectively, for thyroid hormone replacement. Malignancy on final pathology demonstrated an odds ratio of 7.76 for hormone replacement. Sex, body mass index, volume of resected and remaining lobes, and weight of resected lobe were not significant predictors.
Conclusion: Nearly a third of patients may ultimately require thyroid hormone replacement. Age at the time of surgery, preoperative TSH, and final pathology are strong, clinically relevant predictors of the need for future thyroid hormone replacement. After lobectomy, patients should have long-term thyroid function follow-up to monitor for delayed hypothyroidism.
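As a general statistical aside (the standard logistic-regression interpretation, not text from the abstract), the odds ratios reported above can be read as

\[ \mathrm{OR} = e^{\beta} = \frac{p_{1}/(1-p_{1})}{p_{0}/(1-p_{0})}, \]

where \beta is the regression coefficient for the predictor and p_1 and p_0 are the probabilities of requiring hormone replacement with and without the risk factor; the figure of 3.827 for preoperative TSH ≥2.5 mIU/L therefore means the odds of eventually needing replacement were roughly 3.8 times those of patients with a lower preoperative TSH, other factors held constant.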
abstract_id: PUBMED:10573824
Effect of long-term hormone substitution therapy on serum TSH level in postmenopausal women. Objective: Dysfunctions of the thyroid gland are among the most important endocrinological diseases. We report serum TSH levels in postmenopausal women before and during long-term hormone replacement therapy.
Material And Methods: 107 postmenopausal patients participated in this study. Criteria for inclusion were no known thyroid dysfunction and a request for hormone replacement. Before starting therapy, TSH serum levels were measured in each patient. If basal levels were within the normal range, TSH serum levels were followed over 4 years of hormone replacement therapy.
Results: More than 10% of the postmenopausal women showed pathological TSH levels in the absence of clinical symptoms, requiring further diagnostic work-up. During subsequent treatment cycles (4 years) serum TSH in euthyroid patients did not show significant changes. Women using hormone replacement therapy developed no new manifestations of thyroid disease.
Conclusion: In euthyroid women using long-term hormone replacement therapy, no changes in thyroid function attributable to the replacement therapy are to be expected.
abstract_id: PUBMED:3764385
Occurrence of fibrosis in subacute de Quervain thyroiditis Subacute thyroiditis of de Quervain is histologically characterized by an inflammatory reaction with histiocytes and giant cells around residues of colloid, producing a tubercle-like granulomatous picture. A variable degree of fibrosis occurs, but recovery is generally almost complete. Investigation of a series of thyroid glands with de Quervain's thyroiditis gave the impression of rather extensive and increasing fibrosis in most of these glands. To substantiate this impression we reviewed the histological slides of all our cases of de Quervain's thyroiditis diagnosed at the Department of Pathology of the University of Zurich between 1940-1950 and 1974-1984. In the majority of the glands of both periods we found rather extensive fibrosis involving more than 50% of the surface. In young patients the fibrosis seemed to be more extensive than in older subjects. There was no sex difference. A certain degree of fibrosis appears to be characteristic of de Quervain's thyroiditis. Differences of frequency and degree of fibrosis between the two periods could not be demonstrated.
abstract_id: PUBMED:13401203
Etiopathogenesis of a case of De Quervain's disease N/A
abstract_id: PUBMED:33656790
Atypical de Quervain's thyroiditis diagnosed as atypia of undetermined significance by cytology and suspicious for cancer by Afirma Genomic Sequencing Classifier. We report a case of atypical de Quervain's thyroiditis diagnosed as atypia of undetermined significance by cytology and suspicious for cancer by Afirma Genomic Sequencing Classifier. A 71-year-old male underwent thyroid ultrasound for goiter and was found to have two American Thyroid Association (ATA) 2015 high-suspicion nodules. The larger, 2.2-cm nodule was biopsied and the cytology showed atypical follicular cells and histiocytes. The Afirma Genomic Sequencing Classifier (detecting mRNA expression profile) result was ''suspicious'' (risk of malignancy ~50%) but Afirma Xpression Atlas (detecting specific mutations) did not find mutations in BRAF V600E, RET/PTC1, or RET/PTC3. The patient saw two endocrine surgeons and two endocrinologists who each recommended hemithyroidectomy. The patient chose to monitor the nodules. A new diagnostic ultrasound performed 3 months after the first one showed that the thyroid was significantly smaller and the previously seen nodules were no longer found. Re-examination of the cellular smears confirmed that the cytological findings were also compatible with de Quervain's thyroiditis. This case illustrates that atypical de Quervain's thyroiditis should be in the differential diagnosis of thyroid nodules for cytologists, radiologists, and clinicians. Furthermore, this case demonstrates that atypical de Quervain's thyroiditis can generate false positive results of molecular tests for indeterminate thyroid nodules.
abstract_id: PUBMED:35200445
Vaccine-Induced Subacute Thyroiditis (De Quervain's) after mRNA Vaccine against SARS-CoV-2: A Case Report and Systematic Review. De Quervain's thyroiditis, sometimes referred to as subacute thyroiditis (SAT), is the most common granulomatous disease of the thyroid, typically found after a viral infection in middle-aged women. The mRNA encoding for the angiotensin-converting enzyme-2 (ACE-2) receptor is expressed in follicular thyroid cells, making them a potential target for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Besides infection, SARS-CoV-2 vaccines have also been implicated in SAT pathogenesis. We present a case of a woman developing SAT following vaccination with Comirnaty by Pfizer Inc. (New-York, USA). We performed a systematic review of similar cases available in the literature to provide a better understanding of the topic. We searched the databases PubMed and Embase and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Patient records were then sorted according to the type of administered vaccine and a statistical analysis of the extracted data was performed. No statistically significant difference between mRNA vaccines and other vaccines in inducing SAT was found, nor was any found in terms of patient demographics, symptoms at presentation, initial, or follow-up blood tests. In our case report, we described the possible association between SARS-CoV-2 mRNA-based vaccine Comirnaty and SAT.
abstract_id: PUBMED:19012473
Long-term effects of growth hormone replacement therapy on thyroid function in adults with growth hormone deficiency. Background: Clinical studies on the effect of growth hormone (GH) on thyroid function in patients with GH deficiency are contradictory. Further, the majority of published observations are limited to the first 6-12 months of GH replacement therapy. The aim of our study was to estimate the incidence of clinically relevant hypothyroidism in a cohort of patients with adult GH deficiency (AGHD) during long-term therapy with recombinant human GH (rhGH).
Methods: The study was designed as a retrospective collection of data on thyroid function in 49 AGHD patients, of whom 44 (90%) had multiple hormone deficiencies. Thirty-seven patients (76%) were on stable levothyroxine (LT4) replacement therapy (HYPO), and 12 (24%) were euthyroid (EUT). Therapy with rhGH was started at a dose of 3.5 microg/kg body weight and adjusted according to insulin-like growth factor-I (IGF-I) levels. At baseline, 6 months, 12 months, and yearly thereafter we measured free triiodothyronine (fT3), free thyroxine (fT4), thyroid-stimulating hormone, and IGF-I. The study outcome was an fT4 level below the normal range (9 pmol/L), irrespective of fT3 or thyroid-stimulating hormone levels.
Results: During a follow-up of 115 patient-years, the mean fT4 level decreased significantly, although it remained within the normal range (p = 0.0242; month 48 vs. baseline). The largest decrease was between baseline and month 6, when fT4 decreased by 1.43 pmol/L (95% confidence interval, 0.33-2.53) per 1 unit (microg/kg body weight) increase in rhGH dose. The incidence of hypothyroidism was 1.2 (HYPO group) and 6.7 (EUT group) events per 100 patient-years.
Conclusion: We confirm that in patients with AGHD, rhGH therapy is associated with a small, although significant, decrement of fT4 in the first 6 months of replacement therapy. However, the incidence of hypothyroidism is low. Monitoring of thyroid function during rhGH therapy is advisable, particularly in the first year of therapy when the largest decrease in fT4 occurs.
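For readers less familiar with the unit, the incidence figures above follow the usual person-time definition; the numbers in the worked example below are hypothetical and are not taken from the study:

\[ \text{incidence rate} = \frac{\text{number of new events}}{\text{total patient-years at risk}} \times 100 \;\text{events per 100 patient-years}, \]

so, for instance, 2 new cases of hypothyroidism accruing over 30 patient-years of follow-up would correspond to (2/30) × 100 ≈ 6.7 events per 100 patient-years.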
Answer: According to the study reported in PUBMED:23653018, the aim was to determine predictive factors for the necessity of long-term hormone replacement (LTH) in patients with subacute thyroiditis, also known as de Quervain's thyroiditis. The retrospective review of 72 patients did not identify serological and morphological parameters or type of therapy as predictive factors for the need of LTH. However, it was found that at the endpoint of the study, patients who received LTH had a lower thyroid volume compared with euthyroid patients. Despite this observation, no definitive predictive factors for LTH were established in this study.
In contrast, another study on thyroid hormone replacement following thyroid lobectomy (PUBMED:33644629) identified age at the time of surgery, preoperative thyroid-stimulating hormone (TSH), and final pathology as strong predictors for the need for future thyroid hormone replacement. Although this study focused on postlobectomy patients rather than de Quervain's thyroiditis specifically, it suggests that certain factors such as age and preoperative TSH levels can be predictive in other thyroid-related conditions.
The other abstracts provided do not offer additional information on predictive factors for long-term hormone replacement in de Quervain's thyroiditis. PUBMED:1183345 and PUBMED:3764385 discuss the clinical presentation and histological characteristics of de Quervain's thyroiditis, respectively, but do not address predictive factors for LTH. PUBMED:10573824 discusses the effect of long-term hormone substitution therapy on serum TSH level in postmenopausal women, which is not directly related to de Quervain's thyroiditis. PUBMED:35200445 reports on vaccine-induced subacute thyroiditis after an mRNA vaccine against SARS-CoV-2, and PUBMED:19012473 discusses the long-term effects of growth hormone replacement therapy on thyroid function in adults with growth hormone deficiency, neither of which provide information on predictive factors for LTH in de Quervain's thyroiditis.
In conclusion, based on the abstract provided (PUBMED:23653018), there are no clearly established predictive factors for long-term hormone replacement in patients with de Quervain's thyroiditis.
Instruction: Are there differences between women's and men's antidepressant responses?
Abstracts:
abstract_id: PUBMED:31034852
Comparison of antidepressant and side effects in mice after intranasal administration of (R,S)-ketamine, (R)-ketamine, and (S)-ketamine. The N-methyl-d-aspartate receptor (NMDAR) antagonist (R,S)-ketamine produces rapid and sustained antidepressant effects in treatment-resistant patients with depression, although intranasal use of (R,S)-ketamine in ketamine abusers is popular. On March 5, 2019, a nasal spray of (S)-ketamine for treatment-resistant depression was approved as a new antidepressant by the US Food and Drug Administration. A clinical study of (R)-ketamine is underway. In a chronic social defeat stress (CSDS) model, we compared the antidepressant effects of (R,S)-ketamine, (R)-ketamine, and (S)-ketamine after a single intranasal administration. Furthermore, we also compared the side effects (i.e., locomotion, prepulse inhibition (PPI), abuse liability) of these three compounds in mice. The order of potency of antidepressant effects after a single intranasal administration was (R)-ketamine > (R,S)-ketamine > (S)-ketamine. In contrast, the order of locomotor activity and prepulse inhibition (PPI) deficits after a single intranasal administration was (S)-ketamine > (R,S)-ketamine > (R)-ketamine. In the conditioned place preference (CPP) test, both (S)-ketamine and (R,S)-ketamine increased CPP scores in mice after repeated intranasal administration, in a dose-dependent manner. In contrast, (R)-ketamine did not increase CPP scores in mice. These findings suggest that intranasal administration of (R)-ketamine would be a safer antidepressant than (R,S)-ketamine and (S)-ketamine.
abstract_id: PUBMED:34831522
Socio-Psychological Functions of Men and Women Triathlon Participation. Motivations to run marathons have been recognised by many researchers, but few have paid attention to triathletes. Mass triathlon participation is a new trend, which manifests itself as a human need to invoke strong emotions and seek them in difficult sports, as well as to travel to participate in such events. Therefore, the main goal of this study was to recognise the motivations to participate in triathlons among men and women respondents, and to evaluate the differences between them. The empirical research among triathletes (n = 1141) recognised the motives for participation in mass triathlon sporting events in accordance with four types of orientation: social, experience, factual, and result. The most important conclusions from the research indicate that women significantly more often displayed the will to feel unity and integration, as well as the desire to gain recognition in the eyes of others, compared with men. For men, the desire to feel equal was significantly more important than for women. Both men and women indicated the desire to maintain good physical condition and health, which turned out to be a significant factor. For men, Group B (the experience orientation) was deemed the most important, while for women the most important group of motives was Group D (the result orientation).
abstract_id: PUBMED:12411218
Are there differences between women's and men's antidepressant responses? Objective: The study examined a large data set to determine whether patients' sex affected the outcome of antidepressant treatment.
Method: Data for 1,746 patients aged 18-65 years who had been treated with tricyclic antidepressants, monoamine oxidase inhibitors (MAOIs), fluoxetine, or placebo were examined in a retrospective analysis to determine whether men and women differed in their responses to antidepressants. To examine the effect of menopausal status in the absence of data on individual patients' menopausal status, results for female patients younger or older than age 50, 52, 54, and 56 were compared.
Results: Men and women both younger and older than age 50 had equivalent response rates to tricyclics and fluoxetine. Women had a statistically superior response to MAOIs. Placebo response was equivalent across all groups.
Conclusions: Neither sex nor menopausal status may be relevant in antidepressant treatment of adult depressed patients up to 65 years of age. Although women had a statistically superior response to MAOIs, this difference may not be clinically relevant.
abstract_id: PUBMED:10624233
Men and women and their responses in spousal bereavement. In this study, the Grief Experience Inventory was used to examine grief responses among men and women associated with a Colorado hospice program. Using this inventory instrument, the findings document a broad range of emotional responses to grief among men and women, yet no significant differences were found. These findings appear to differ from the perceptions of bereavement counselors who often identify behavioral differences in the grief experiences of men and women.
abstract_id: PUBMED:34965407
CYP 450 enzymes influence (R,S)-ketamine brain delivery and its antidepressant activity. Esketamine, the S-stereoisomer of (R,S)-ketamine was recently approved by drug agencies (FDA, EMA), as an antidepressant drug with a new mechanism of action. (R,S)-ketamine is a N-methyl-d-aspartate receptor (NMDA-R) antagonist putatively acting on GABAergic inhibitory synapses to increase excitatory synaptic glutamatergic neurotransmission. Unlike monoamine-based antidepressants, (R,S)-ketamine exhibits rapid and persistent antidepressant activity at subanesthetic doses in preclinical rodent models and in treatment-resistant depressed patients. Its major brain metabolite, (2R,6R)-hydroxynorketamine (HNK) is formed following (R,S)-ketamine metabolism by various cytochrome P450 enzymes (CYP) mainly activated in the liver depending on routes of administration [e.g., intravenous (largely used for a better bioavailability), intranasal spray, intracerebral, subcutaneous, intramuscular or oral]. Experimental or clinical studies suggest that (2R,6R)-HNK could be an antidepressant drug candidate. However, questions still remain regarding its molecular and cellular targets in the brain and its role in (R,S)-ketamine's fast-acting antidepressant effects. The purpose of the present review is: 1) to review (R,S)-ketamine pharmacokinetic properties in humans and rodents and its metabolism by CYP enzymes to form norketamine and HNK metabolites; 2) to provide a summary of preclinical strategies challenging the role of these metabolites by modifying (R,S)-ketamine metabolism, e.g., by administering a pre-treatment CYP inducers or inhibitors; 3) to analyze the influence of sex and age on CYP expression and (R,S)-ketamine metabolism. Importantly, this review describes (R,S)-ketamine pharmacodynamics and pharmacokinetics to alert clinicians about possible drug-drug interactions during a concomitant administration of (R,S)-ketamine and CYP inducers/inhibitors that could enhance or blunt, respectively, (R,S)-ketamine's therapeutic antidepressant efficacy in patients.
abstract_id: PUBMED:24316345
R (-)-ketamine shows greater potency and longer lasting antidepressant effects than S (+)-ketamine. The N-methyl-D-aspartate (NMDA) receptor antagonist ketamine is one of the most attractive antidepressants for treatment-resistant major depressive disorder (MDD). Ketamine (or RS (±)-ketamine) is a racemic mixture containing equal parts of R (-)-ketamine and S (+)-ketamine. In this study, we examined the effects of R- and S-ketamine on depression-like behavior in juvenile mice after neonatal dexamethasone (DEX) exposure. In the tail suspension test (TST) and forced swimming test (FST), both isomers of ketamine significantly attenuated the increase in immobility time, seen in DEX-treated juvenile mice at 27 and 29 h respectively, after ketamine injections. In the 1% sucrose preference test (SPT), both isomers significantly attenuated the reduced preference for 1% sucrose consumption in DEX-treated juvenile mice, 48 h after a ketamine injection. Interestingly, when immobility times were tested by the TST and FST at day 7, R-ketamine, but not S-ketamine, significantly lowered the increases in immobility seen in DEX-treated juvenile mice. This study shows that a single dose of R-ketamine produced rapid and long-lasting antidepressant effects in juvenile mice exposed neonatally to DEX. Therefore, R-ketamine appears to be a potent and safe antidepressant relative to S-ketamine, since R-ketamine may be free of psychotomimetic side effects.
abstract_id: PUBMED:35893223
Pulmonary Embolism in Women: A Systematic Review of the Current Literature. Cardiovascular disease is the leading cause of death in women. Pulmonary embolism (PE) is the third most-common cause of cardiovascular death, after myocardial infarction (MI) and stroke. We aimed to evaluate the attributes and outcomes of PE specifically in women and explore sex-based differences. We conducted a systematic review of the literature using electronic databases PubMed and Embase up to 1 April 2022 to identify studies investigating PE in women. Of the studies found, 93 studies met the eligibility criteria and were included. The risk of PE in older women (especially >40 years of age) superseded that of age-matched men, although the overall age- and sex-adjusted incidence of PE was found to be lower in women. Risk factors for PE in women included age, rheumatologic disorders, hormone replacement therapy or oral contraceptive pills, pregnancy and postpartum period, recent surgery, immobilization, trauma, increased body mass index, obesity, and heart failure. Regarding pregnancy, a relatively higher incidence of PE has been observed in the immediate postpartum period compared to the antenatal period. Women with PE tended to be older, presented more often with dyspnea, and were found to have higher NT-proBNP levels compared to men. No sex-based differences in in-hospital mortality and 30-day all-cause mortality were found. However, PE-related mortality was higher in women, particularly in hemodynamically stable patients. These differences form the basis of future research and outlets for reducing the incidence, morbidity, and mortality of PE in women.
abstract_id: PUBMED:30341726
Treatment of Depression in Women. Women are more likely than men to experience depression throughout the life span. Sex differences in neurochemistry and brain structure, as well as societal factors may contribute to women's increased likelihood of depression. Pharmacological research targeting depression has historically excluded women, leading to a knowledge gap regarding effective antidepressant treatment in women. Antidepressant pharmacokinetics and pharmacodynamics are clearly different in men and women, necessitating a thoughtful approach to their prescription and management. Hormone changes associated with the menstrual cycle, pregnancy, and menopause also contribute to differences in depression and effective antidepressant use in women. Finally, it is important to consider potential interactions between antidepressant drugs and medications specifically used by women (oral contraceptives, tamoxifen, and estrogen).
abstract_id: PUBMED:9074729
Thionitrites as potent donors of nitric oxide: example of S-nitroso- and S,S'-dinitroso-dihydrolipoic acids Until recently, nitric oxide (NO.) was considered as a toxic radical, but it appears now as an essential messenger implicated in a wide range of biological processes, including immune system, cardiovascular system, and nervous system. An aspect of NO. metabolism in vivo is the formation of a variety of high and low molecular weight nitrosothiols. S-nitrosocysteine and S-nitrosoglutathione are among the biologically derived S-nitrosothiols that are postulated to be carriers of NO.. Although most of the S-nitrosothiols are unstable and spontaneously break down to produce NO. and a disulfide, some of them, including protein thiols, can show significant stability. These molecules are able to convey nitric oxide, that is, to keep, to carry, and then to generate NO. in physiological media, and might display pharmacological effects as potential vasodilators or neuroprotectors. Here, we present the development of new thionitrites R-S-NO having intrinsic antioxidant properties. We report the preparation, the characterization, and the stability studies in aqueous solutions of S-nitroso derivatives of dihydro-alpha-lipoic acid, known for its antioxidant properties.
abstract_id: PUBMED:34977228
Men's Attitude Towards Contraception and Sexuality, Women's Empowerment, and Demand Satisfied for Family Planning in India. Whilst the prevalence of unmet need and contraceptive use remained unchanged for 10 years (between 2005-2015) in India, gender restrictive norms and power imbalances also have persisted, preventing married women from meeting their family planning desires. Data for this study are from the 2015-6 National Family Household Survey, which contains information on fertility preferences and family planning for women in reproductive age. As a proxy for men's attitudinal norms, we aggregated men's perceptions regarding contraception (contraception is women's business, women who use contraception may become promiscuous) and control over their wife (if his wife refuses to have sex, men have the right to deny financial support, have sex with another woman, or beat wife) at district level. Using a three-level random intercepts model, we assessed individual and contextual-level associations of men's attitudinal norms and met need for contraception among sexually active women (aged 15-49) with any demand for family planning, while adjusting for women's empowerment indicators [education, job status, and adult marriage] and individual demographic factors. Our results indicate that men's attitudinal norms are negatively associated with women's contraceptive use; for instance, a 1 standard deviation increase in the proportion of men who believe that contraception is women's business was associated with a 12% reduced likelihood of contraceptive use (OR = 0.88, 95% CI 0.82-0.95). Similar associations remained or were stronger after considering only modern methods, or when excluding female sterilization. Furthermore, our contextual effects analysis revealed that women's higher education or wealth did not improve contraceptive uptake in communities with strong attitudinal norms, but working women or women married as children were more likely to use contraception in those communities. Our results suggest that men's attitudinal norms may be dominating over women's empowerment regarding family planning choices among reproductive age women. However, employment appeared to play a strong protective role associated with women's contraceptive use. It is important for programs seeking to transform gender equality and empower women in making contraceptive choices to consider women's employment opportunities and to also address male attitudinal norms in the context of the ecosystem in which men and women coexist and interact.
Answer: Yes, there are differences between women's and men's antidepressant responses. A study examining a large data set to determine whether patients' sex affected the outcome of antidepressant treatment found that men and women had equivalent response rates to tricyclics and fluoxetine, but women had a statistically superior response to monoamine oxidase inhibitors (MAOIs). However, the study concluded that neither sex nor menopausal status may be relevant in antidepressant treatment of adult depressed patients up to 65 years of age, and the difference in response to MAOIs may not be clinically relevant (PUBMED:12411218).
Additionally, pharmacological research targeting depression has historically excluded women, leading to a knowledge gap regarding effective antidepressant treatment in women. Antidepressant pharmacokinetics and pharmacodynamics are different in men and women, necessitating a thoughtful approach to their prescription and management. Hormone changes associated with the menstrual cycle, pregnancy, and menopause also contribute to differences in depression and effective antidepressant use in women (PUBMED:30341726).
Furthermore, the study of intranasal administration of (R,S)-ketamine, (R)-ketamine, and (S)-ketamine in mice suggested that (R)-ketamine could be a safer antidepressant than (R,S)-ketamine and (S)-ketamine, but the study did not specifically address sex differences in response (PUBMED:31034852). Another study on R (-)-ketamine and S (+)-ketamine showed that R (-)-ketamine had greater potency and longer-lasting antidepressant effects than S (+)-ketamine in juvenile mice, but again, sex differences were not the focus (PUBMED:24316345).
Overall, while there are differences in antidepressant responses between women and men, the clinical relevance of these differences may vary, and more research is needed to fully understand the implications for treatment.
Instruction: Can we distinguish between benign versus malignant compression fractures of the spine by magnetic resonance imaging?
Abstracts:
abstract_id: PUBMED:7502133
Can we distinguish between benign versus malignant compression fractures of the spine by magnetic resonance imaging? Study Design: The authors investigate the usefulness of magnetic resonance imaging in differentiating benign from malignant compression fractures by reviewing patients and a canine fracture model.
Objectives: To determine the sensitivity and specificity of magnetic resonance imaging in differentiating benign from malignant compression fractures of the spine and to identify distinguishing features on magnetic resonance imaging.
Summary Of Background Data: The differentiation between benign and malignant compression fractures of the thoracolumbar spine has important implications regarding patient treatment and prognosis. Plain radiographs, bone scans, and computed tomography are not accurate imaging modalities for this purpose.
Methods: Magnetic resonance imaging scans of 22 patients with confirmed lesions of the thoracolumbar spine were studied. There were 11 malignant and 11 benign lesions. Two experienced neuroradiologists blindly reviewed the magnetic resonance imaging scans and classified each lesion as benign or malignant. A canine study was performed to simulate a compression fracture model with a vertebral osteotomy in two dogs, and serial contrast-enhanced magnetic resonance imaging scans were performed 15, 30, 60 and 90 days after surgery.
Results: The rate of correct interpretation by the two neuroradiologists was 77% and 95%, respectively. The combined sensitivity was 88.5%, and the specificity was 89.5%. Magnetic resonance imaging reliably distinguished benign from malignant lesions based on the anatomic distribution and intensity of signal changes of bone and adjacent tissues, contrast enhancement characteristics, and changes over time. Only one malignant lesion was misinterpreted as benign by both neuroradiologists, whereas one additional malignant lesion was missed and three benign lesions were misinterpreted by one radiologist. In the canine study, signal changes and enhancement were found 60 days after surgery, but no signal changes or enhancement were noted on the scan 90 days after surgery.
Conclusions: Magnetic resonance imaging scans can detect malignant vertebral lesions early, but acutely healing compression fractures may mimic the findings of metastatic lesions. Contrast-enhanced and serial magnetic resonance imaging scans are helpful for additional differentiation between benign and malignant compression fractures. In addition to magnetic resonance imaging scans, other diagnostic tests and clinical findings should be correlated before biopsy or surgery of the suspected lesion.
abstract_id: PUBMED:19260246
Benign versus malignant compression fracture: a diagnostic accuracy of magnetic resonance imaging. Objective: To evaluate the accuracy, sensitivity, and specificity of various Magnetic Resonance Imaging (MRI) features in differentiating malignant from benign compression fracture of the spine.
Material And Method: Retrospective review of spinal MRI studies of patients with vertebral compression fracture, identified from the hospital database between June 2004 and February 2006, by two radiologists blinded to the clinical data. Various MRI features were evaluated for sensitivity, specificity, positive predictive value, and negative predictive value. Combinations of two, three, four, and five of the MRI features that were statistically significant (P value less than 0.005) were additionally evaluated for sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV).
Results: Fifty-eight spinal MRI studies were included, from 35 patients with metastatic vertebral compression fractures and 23 patients with benign vertebral compression fractures. MR imaging features suggestive of malignant vertebral compression fracture were a convex posterior border of the vertebral body, involvement of the pedicle or posterior element, an epidural mass, a paraspinal mass, and destruction of the bony cortex. Among these, involvement of the pedicle or posterior element was the most reliable finding (sensitivity 91.4% and specificity 82.6%) for the diagnosis of malignant vertebral compression fracture. A combination of two or more MRI features gave very high specificity and PPV.
Conclusion: Certain MR imaging characteristics can reliably distinguish malignant from benign compression fracture of the spine. Combination of several MRI features strongly affirmed the diagnosis of malignant compression fracture, especially in a patient where tissue biopsy is not justified.
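For reference, the diagnostic indices quoted above follow the standard 2 × 2 contingency-table definitions (general formulas, not text from the abstract):

\[ \mathrm{Sensitivity} = \frac{TP}{TP + FN}, \quad \mathrm{Specificity} = \frac{TN}{TN + FP}, \quad \mathrm{PPV} = \frac{TP}{TP + FP}, \quad \mathrm{NPV} = \frac{TN}{TN + FN}, \]

where, in this setting, TP and FN count malignant fractures classified correctly and incorrectly, and TN and FP count benign fractures classified correctly and incorrectly.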
abstract_id: PUBMED:22210011
Research synthesis: what is the diagnostic performance of magnetic resonance imaging to discriminate benign from malignant vertebral compression fractures? Systematic review and meta-analysis. Study Design: This study is a research synthesis of the published literature evaluating the performance of magnetic resonance imaging (MRI) for differentiation of malignant from benign vertebral compression fractures (VCFs).
Objective: Perform a systematic review and meta-analysis to summarize and combine the published data on MRI for discriminating malignant from benign VCFs.
Summary Of Background Data: The differentiation between benign and malignant VCFs in the spine is a challenging problem confronting spine practitioners.
Methods: MEDLINE, EMBASE, and other databases were searched by 2 independent reviewers to identify studies that reported the performance of MRI for discriminating malignant from benign VCF. Included studies were assessed for described MRI features and study quality. The sensitivity, specificity, and diagnostic odds ratio (OR) of each feature were pooled with a random-effects model weighted by the inverse of the variance of each individual estimate.
Results: A total of 31 studies with 1685 subjects met the selection criteria. All the studies focused on describing specific features rather than overall diagnostic performance. Signal intensity ratio on opposed phase (chemical shift) imaging 0.8 or more (OR = 164), apparent diffusion coefficient on echo planar diffusion-weighted images 1.5 × 10(-3) mm2/s or less with b value 500 s/mm2 (OR = 130), presence of other noncharacteristic vertebral lesions (OR = 55), presence of paraspinal mass (OR = 33), involvement of posterior element (OR = 28), involvement of pedicle (OR = 24), complete replacement of normal bone marrow in VCF (OR = 19), presence of epidural mass (OR = 13), and diffuse convexity of posterior vertebral border (OR = 10) were associated with malignant VCFs, whereas coexisting healed benign VCF (OR = 0.006), presence of "fluid sign" (OR = 0.08), presence of focal posterior vertebral border convexity/retropulsion (OR = 0.08), and band-like shape of abnormal signal (OR = 0.07) were associated with benign VCFs.
Conclusion: Several specific MRI features using signal intensity characteristics, morphological characteristics, quantitative techniques, and findings at other levels can be useful for distinguishing benign from malignant VCFs and can serve as inputs for a prediction model. Observer performance reliability has not been adequately assessed.
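As a methodological aside (standard meta-analytic formulas, not text from the review), the diagnostic odds ratio for a single imaging feature and the inverse-variance random-effects pooling referred to above take the form

\[ \mathrm{DOR} = \frac{TP \times TN}{FP \times FN}, \qquad \ln\widehat{\mathrm{DOR}}_{\mathrm{pooled}} = \frac{\sum_i w_i \, \ln \mathrm{DOR}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\operatorname{Var}(\ln \mathrm{DOR}_i) + \tau^2}, \]

where the sum runs over the included studies and \tau^2 is the estimated between-study variance; a DOR well above 1 (e.g. 164 for the opposed-phase signal intensity ratio) marks a feature that favours malignancy, while a DOR well below 1 (e.g. 0.08 for the fluid sign) marks a feature that favours a benign fracture.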
abstract_id: PUBMED:15796256
Magnetic resonance imaging characteristics of benign and malignant vertebral fractures. Background: Attempts to differentiate benign and malignant vertebral fractures may be difficult, particularly when there is no obvious evidence of malignancy. Since early diagnosis and appropriate management of malignant vertebral fractures are important, a reliable imaging modality is required.
Methods: From January 1996 to December 2002, 48 patients with malignant vertebral fractures and 50 patients with benign processes were studied. All patients underwent conventional magnetic resonance imaging (MRI) scanning for acute vertebral compression fractures within 2 months of presenting with the complaint. Seven MRI characteristics were used as criteria, including signal intensity, gadolinium enhancement, epidural compression, multiple compression fractures, associated paraspinal soft tissue mass, pedicle involvement, and posterior element involvement. The predictive value of each MRI characteristic for distinguishing malignant from benign osteoporotic vertebral fractures was tested by statistical analysis.
Results: Lesions with negative gadolinium enhancement favored a diagnosis of benign fracture. A uniform signal change in multiple involved vertebral lesions; round, smooth margins with marked epidural compression; a paraspinal soft tissue mass; and pedicle and posterior element involvement were probable malignant characteristics. Among them, an associated paraspinal soft tissue mass was found to be significant in predicting the probability of malignancy.
Conclusions: Certain MRI characteristics allow early differentiation of benign and malignant vertebral fractures.
abstract_id: PUBMED:22204209
Acute vertebral compression fracture: differentiation of malignant and benign causes by diffusion weighted magnetic resonance imaging. Objective: To evaluate the sensitivity, specificity and accuracy of diffusion weighted (DWI) magnetic resonance imaging (MRI) in the diagnosis and differentiation between benign (osteoporotic/infectious) and malignant vertebral compression fractures in comparison with histology findings and clinical follow up.
Methods: The study was conducted at the Radiology Department, Aga Khan University Hospital (AKUH), Karachi. It was a one-year cross-sectional study from 01/01/2009 to 01/01/2010. Forty patients with sixty-three vertebral compression fractures were included. Diffusion-weighted sequences and apparent diffusion coefficient (ADC) images were obtained on a 1.5 T MR scanner in all patients to identify the vertebral compression fractures and their benign or malignant causes. Imaging findings were compared with histopathologic results and clinical follow-up.
Results: Diffusion-weighted MR imaging was found to have 92% sensitivity, 90% specificity, and 85% accuracy in differentiating benign from malignant vertebral compression fractures, while the PPV and NPV were 78% and 90%, respectively.
Conclusion: Diffusion-weighted magnetic resonance imaging offers a safe, accurate and non-invasive modality to differentiate between benign and malignant vertebral compression fractures.
abstract_id: PUBMED:16775260
The utility of in-phase/opposed-phase imaging in differentiating malignancy from acute benign compression fractures of the spine. Background And Purpose: Benign and malignant fractures of the spine may have similar signal intensity characteristics on conventional MR imaging sequences. This study assesses whether in-phase/opposed-phase imaging of the spine can differentiate these 2 entities.
Methods: Twenty-five consecutive patients who were evaluated for suspected malignancy (lymphoma [4 patients], breast cancer [3], multiple myeloma [2], melanoma [2], prostate [2], and renal cell carcinoma [1]) or for trauma to the thoracic or lumbar spine were entered into this study. An 18-month clinical follow-up was performed. Patients underwent standard MR imaging with an additional sagittal in-phase (repetition time [TR], 90-185; echo time [TE], 2.4 or 6.5; flip angle, 90 degrees) and opposed-phase gradient recalled-echo sequence (TR, 90-185; TE, 4.6-4.7; flip angle, 90 degrees). Areas of abnormal signal intensity on the T1 and T2 sequences were identified on the in-phase/opposed-phase sequences. An elliptical region-of-interest measurement of the signal intensity was made in the abnormal region on both the in-phase and the opposed-phase images. The signal intensity ratio (SIR) was then computed as the signal intensity of the abnormal marrow on the opposed-phase images divided by that measured on the in-phase images.
Results: Twenty-one patients had 49 vertebral lesions, consisting of 20 malignant and 29 benign fractures. There was a significant difference (P < .001, Student t test) in the mean SIR for the benign lesions (mean, 0.58; SD, 0.02) compared with the malignant lesions (mean, 0.98; SD, 0.095). If an SIR of 0.80 is chosen as the cutoff, with >0.8 defined as malignant and <0.8 defined as benign, in-phase/opposed-phase imaging correctly identified 19 of 20 malignant lesions and 26 of 29 benign lesions (sensitivity, 0.95; specificity, 0.89).
Conclusion: There is significant difference in signal intensity between benign compression fractures and malignancy on in-phase/opposed-phase MR imaging.
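A short arithmetic check of the operating characteristics reported above, using only the counts given in the abstract:

\[ \mathrm{SIR} = \frac{SI_{\text{opposed-phase}}}{SI_{\text{in-phase}}}, \qquad \mathrm{Sensitivity} = \frac{19}{20} = 0.95, \qquad \mathrm{Specificity} = \frac{26}{29} \approx 0.897 \;(\text{reported as } 0.89), \]

consistent with the benign group clustering well below the 0.80 cutoff (mean SIR 0.58) and the malignant group above it (mean SIR 0.98).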
abstract_id: PUBMED:29093778
Incorporation of Whole Spine Screening in Magnetic Resonance Imaging Protocols for Low Back Pain: A Valuable Addition. Study Design: A retrospective review of lumbar magnetic resonance imaging (MRI) studies conducted at the Department of Radiodiagnosis & Imaging of a Tertiary Care Armed Forces Hospital between May 2014 and May 2016.
Purpose: To assess the advantages of incorporating sagittal screening of the whole spine in protocols for conventional lumbar spine MRI for patients presenting with low back pain.
Overview Of Literature: Advances in MRI have resulted in faster examinations, particularly for patients with low back pain. The additional detection of incidental abnormalities on MRI helps to improve patient outcomes by providing a swifter definitive diagnosis. Because low back pain is extremely common, any change to the diagnostic and treatment approach has a significant impact on health care resources.
Methods: We documented all additional incidental findings detected on sagittal screenings of the spine that were of clinical significance and would otherwise have been undiagnosed.
Results: A total of 1,837 patients who met our inclusion criteria underwent MRI of the lumbar spine. The mean age of the study population was 45.7 years; 66.8% were men and 33.2% women. Approximately 26.7% of the patients were diagnosed with incidental findings. These included determining the level of indeterminate vertebrae, incidental findings of space-occupying lesions of the cervicothoracic spine, myelomalacic changes, and compression fractures at cervicothoracic levels.
Conclusions: We propose that T2-weighted sagittal screening of the whole spine be included as a routine sequence when imaging the lumbosacral spine for suspected degenerative pathology of the intervertebral discs.
abstract_id: PUBMED:29858641
Proton density fat fraction (PDFF) MR imaging for differentiation of acute benign and neoplastic compression fractures of the spine. Objectives: To evaluate the diagnostic performance of proton density fat fraction (PDFF) magnetic resonance imaging (MRI) to differentiate between acute benign and neoplastic vertebral compression fractures (VCFs).
Methods: Fifty-seven consecutive patients with 46 acute benign and 41 malignant VCFs were prospectively enrolled in this institutional review board approved study and underwent routine clinical MRI with an additional six-echo modified Dixon sequence of the spine at a clinical 3.0-T scanner. All fractures were categorised as benign or malignant according to either direct bone biopsy or 6-month follow-up MRI. Intravertebral PDFF and PDFFratio (fracture PDFF/normal vertebrae PDFF) for benign and malignant VCFs were calculated using region-of-interest analysis and compared between both groups. Additional receiver operating characteristic and binary logistic regression analyses were performed.
Results: Both PDFF and PDFFratio of malignant VCFs were significantly lower compared to acute benign VCFs [PDFF, 3.48 ± 3.30% vs 23.99 ± 11.86% (p < 0.001); PDFFratio, 0.09 ± 0.09 vs 0.49 ± 0.24 (p < 0.001)]. The areas under the curve were 0.98 for PDFF and 0.97 for PDFFratio, yielding an accuracy of 96% and 95% for differentiating between acute benign and malignant VCFs. PDFF remained as the only imaging-based variable to independently differentiate between acute benign and malignant VCFs on multivariate analysis (odds ratio, 0.454; p = 0.005).
Conclusions: Quantitative assessment of PDFF derived from modified Dixon water-fat MRI has high diagnostic accuracy for the differentiation of acute benign and malignant vertebral compression fractures.
Key Points: • Chemical-shift-encoding based water-fat MRI can reliably assess vertebral bone marrow PDFF • PDFF is significantly higher in acute benign than in malignant VCFs • PDFF provides high accuracy for differentiating acute benign from malignant VCFs.
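As a hedged sketch of the quantity being measured (the standard chemical-shift water-fat imaging definition, not text from the study), the proton density fat fraction and the ratio used above are

\[ \mathrm{PDFF} = \frac{F}{F + W}, \qquad \mathrm{PDFF}_{\mathrm{ratio}} = \frac{\mathrm{PDFF}_{\text{fractured vertebra}}}{\mathrm{PDFF}_{\text{normal vertebrae}}}, \]

where F and W are the fat and water signal components estimated from the multi-echo modified Dixon acquisition; replacement of fatty marrow by tumour drives the fat fraction of a malignant fracture, and hence the ratio, toward zero, which is what the reported values (PDFF 3.48% vs 23.99%) reflect.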
abstract_id: PUBMED:27111110
Shape, texture and statistical features for classification of benign and malignant vertebral compression fractures in magnetic resonance images. Purpose: Vertebral compression fractures (VCFs) result in partial collapse of vertebral bodies. They usually are nontraumatic or occur with low-energy trauma in the elderly secondary to different etiologies, such as insufficiency fractures of bone fragility in osteoporosis (benign fractures) or vertebral metastasis (malignant fractures). Our study aims to classify VCFs in T1-weighted magnetic resonance images (MRI).
Methods: We used the median sagittal planes of lumbar spine MRIs from 63 patients (38 women and 25 men) previously diagnosed with VCFs. The lumbar vertebral bodies were manually segmented and statistical features of gray levels were computed from the histogram. We also extracted texture and shape features to analyze the contours of the vertebral bodies. In total, 102 lumbar VCFs (53 benign and 49 malignant) and 89 normal lumbar vertebral bodies were analyzed. The k-nearest-neighbor method, a neural network with radial basis functions, and a naïve Bayes classifier were used with feature selection. We compared the classification obtained by these classifiers with the final diagnosis of each case, including biopsy for the malignant fractures and clinical and laboratory follow up for the benign fractures.
Results: The results obtained show an area under the receiver operating characteristic curve of 0.97 in distinguishing between normal and fractured vertebral bodies, and 0.92 in discriminating between benign and malignant fractures.
Conclusions: The proposed classification methods based on shape, texture, and statistical features have provided high accuracy and may assist in the diagnosis of VCFs.
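To make the classification pipeline described above concrete, the following is a minimal, hypothetical Python sketch (scikit-learn) of the same general approach, comparing a k-nearest-neighbour and a naïve Bayes classifier with simple univariate feature selection and cross-validated ROC-AUC evaluation. It is not the authors' code: the feature matrix X and labels y are random placeholders standing in for the shape, texture, and gray-level features extracted from the segmented vertebral bodies, and the choices n_neighbors=5 and k=10 are arbitrary illustrative settings.

```python
# Hypothetical sketch (not the authors' code): classify vertebral bodies as
# benign vs malignant fracture from pre-extracted shape/texture/statistical features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(102, 20))    # placeholder: 102 fractured vertebral bodies x 20 features
y = rng.integers(0, 2, size=102)  # placeholder labels: 0 = benign, 1 = malignant

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("naive Bayes", GaussianNB())]:
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=10),  # keep the 10 most discriminative features
                          clf)
    # Out-of-fold probability of the "malignant" class gives an unbiased AUC estimate.
    proba = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
    print(f"{name}: cross-validated ROC AUC = {roc_auc_score(y, proba):.2f}")
```

In the study itself the features came from manually segmented vertebral bodies on T1-weighted sagittal images and a radial-basis-function network was also evaluated; the sketch only illustrates the feature-selection and cross-validated evaluation pattern.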
abstract_id: PUBMED:19965294
Malignant versus benign vertebral collapse: are new imaging techniques useful? Benign and malignant vertebral collapse is common in the middle-aged and elderly population. Differential diagnosis sometimes remains difficult on radiographs, computed tomography (CT) and magnetic resonance imaging (MRI) when marked edema is present. Established morphological criteria and newer methods such as positron emission tomography (PET)-CT and diffusion and perfusion MRI are helpful for reaching the correct diagnosis. Increased fluorodeoxyglucose (FDG) uptake points to a neoplastic cause of a fracture. Hyperintensity on diffusion-weighted images and high plasma flow are also associated with a malignant cause of a fracture. However, the combination of all criteria should be taken into account in the differential diagnosis.
Answer: Yes, magnetic resonance imaging (MRI) can be used to distinguish between benign and malignant compression fractures of the spine. Studies have shown that MRI can detect malignant vertebral lesions early, and certain MRI characteristics can reliably differentiate between benign and malignant compression fractures. These characteristics include the anatomic distribution and intensity of signal changes of bone and adjacent tissues, contrast enhancement characteristics, and changes over time (PUBMED:7502133). Specific MRI features such as the involvement of the pedicle or posterior element, epidural mass, paraspinal mass, and destruction of bony cortex have been associated with malignant vertebral compression fractures (PUBMED:19260246).
Moreover, a systematic review and meta-analysis have identified several MRI features that can be useful for distinguishing between benign and malignant vertebral compression fractures (VCFs), such as signal intensity ratio on opposed phase imaging, apparent diffusion coefficient on diffusion-weighted images, and the presence of paraspinal mass, among others (PUBMED:22210011). Diffusion-weighted MRI has also been found to have high sensitivity, specificity, and accuracy in differentiating between benign and malignant vertebral compression fractures (PUBMED:22204209). In-phase/opposed-phase imaging has been shown to have a significant difference in signal intensity between benign compression fractures and malignancy, aiding in differentiation (PUBMED:16775260).
Furthermore, proton density fat fraction (PDFF) MRI has demonstrated high diagnostic accuracy for differentiating acute benign and malignant vertebral compression fractures (PUBMED:29858641). Classification methods based on shape, texture, and statistical features extracted from MRI images have also provided high accuracy and may assist in the diagnosis of VCFs (PUBMED:27111110). While new imaging techniques such as PET-CT and diffusion and perfusion MRI can be helpful, the combination of all criteria should be taken into account for differential diagnosis (PUBMED:19965294).
In conclusion, MRI scans, particularly when combined with specific imaging features and possibly enhanced with advanced techniques, can be a valuable tool in differentiating between benign and malignant compression fractures of the spine.
Instruction: Can Radiographs Predict Outcome in Patients With Idiopathic Clubfeet Treated With the Ponseti Method?
Abstracts:
abstract_id: PUBMED:30607204
Treatment of non-idiopathic clubfeet with the Ponseti method: a systematic review. Purpose: Although non-idiopathic clubfeet were long thought to be resistant to non-surgical treatment methods, more studies documenting results on treatment of these feet with the Ponseti method are being published. The goal of this systematic review is to summarize current evidence on treatment of non-idiopathic clubfeet using the Ponseti method.
Methods: PubMed and Limo were searched, reference lists of eligible studies were screened and studies that met the inclusion criteria were included. Data on average number of casts, Achilles tendon tenotomy (ATT), initial correction, recurrence, successful treatment at final follow-up and complications were pooled. The Methodological Index for Non-Randomized Studies was used to assess the methodological quality of the selected studies.
Results: In all, 11 studies were included, yielding a total of 374 non-idiopathic and 801 idiopathic clubfeet. Non-idiopathic clubfeet required more casts (7.2 versus 5.4) and had a higher rate of ATT (89.4% versus 75.7%). Furthermore, these feet had a higher recurrence rate (43.3% versus 11.5%) and a lower rate of successful treatment at final follow-up (69.3% versus 95.0%). Complications were found in 20.3% of the non-idiopathic cohort. When comparing results between clubfeet associated with myelomeningocele and arthrogryposis, the first group presented with a lower number of casts (5.4 versus 7.2) and a higher rate of successful treatment at final follow-up (81.8% versus 58.2%).
Conclusion: The Ponseti method is a valuable and non-invasive option in the primary treatment of non-idiopathic clubfeet in young children. Studies with longer follow-up are necessary to evaluate its long-term effect.
Level Of Evidence: Level III - systematic review of Level-III studies. This work meets the requirements of the PRISMA guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).
abstract_id: PUBMED:29263755
Alteration in hypoplasia of the hindfoot structures during early growth in clubfeet treated using the Ponseti method. Purpose: Previous reports have demonstrated diminished size of the hindfoot bones in patients with idiopathic clubfoot deformity. However, no study has quantified the percentage of hypoplasia as a function of early growth, during the brace phase of Ponseti treatment.
Methods: We measured the dimensions of ossified structures on radiographs in patients with unilateral Ponseti-treated clubfeet to determine changes in the percentage of hypoplasia between two and four years of age.
Results: The degree of hypoplasia varied among the osseous structures in Ponseti-treated clubfeet at age two years, with greater hypoplasia being observed in the talus (7.3%), followed by calcaneus (4.9%) and the cuboid (4.8%). Overall, the degree of hypoplasia diminished by four years, such that the degree of hypoplasia was greatest in the talus (4.2%) and the calcaneus (4.2%) followed by the cuboid (0.6%). At four years of age, the greatest degree of hypoplasia persisted in the talus and calcaneus.
Conclusions: Changes occurred in the size of the ossification of hindfoot bones between two and four years of age, and the observed changes in the percentage of hypoplasia varied among the different structures. At four years of age, the greatest percentage of hypoplasia was observed in the talus and calcaneus at values similar to those previously reported in skeletally mature patients. The results suggested that the relative difference in size of the feet may be expected to remain constant in a child with a unilateral clubfoot after this age.
abstract_id: PUBMED:22354441
Mid-term results of idiopathic clubfeet treated with the Ponseti method. Aim: The Ponseti method is accepted worldwide for the treatment of congenital clubfoot. We report on our experience over a 7-year period. The purpose of the study was to evaluate the course of well-treated feet between primary correction and the age of 5 to 6 years, including relapse rate and functional results.
Material And Method: Between 1.1.2004 and 31.12.2005 we treated 71 patients with 102 idiopathic clubfeet with the Ponseti method. All patients were prospectively evaluated. We used the Pirani score. The patients' results were documented when the children started to walk and before primary school. The results were compared and statistically evaluated. We used the McKay score and measured the talocalcaneal angle on lateral and a.p. radiographs.
Results: 89% of clubfeet were successfully treated with the Ponseti method. At walking age, plantar flexion was between 30° and 50° (mean 42°) and dorsiflexion between 5° and 30° (mean 25°). Before primary school, plantar flexion was between 30° and 50° (mean 37.8°) and dorsiflexion between 0° and 25° (mean 13.9°). Using the McKay score, we had 91% excellent or good results. 31% of cases had surgical treatment of a relapse. In the relapse group, 82% had an excellent or good result according to the McKay score.
Conclusion: The Ponseti method is a very effective technique for treating idiopathic clubfeet. In the first 5 to 6 years of life there is a significant loss of range of motion. The relapse rate is comparable to that of other clubfoot treatment concepts. Relapse treatment within the Ponseti technique, with recasting, tibialis anterior tendon transfer and Achilles tendon lengthening, leads to good functional results.
abstract_id: PUBMED:36670703
Common Errors in the Management of Idiopathic Clubfeet Using the Ponseti Method: A Review of the Literature. Congenital talipes equinovarus is one of the most prevalent birth defects, affecting approximately 0.6 to 1.5 children per 1000 live births. Currently, the Ponseti method is the gold-standard treatment for idiopathic clubfeet, with good results reported globally. This literature review focuses on common errors encountered during different stages of the management of idiopathic clubfeet, namely diagnosis, manipulation, serial casting, Achilles tenotomy, and bracing. The purpose is to update clinicians and provide broad guidelines that can be followed to avoid and manage these errors to optimize short- and long-term outcomes of treatment of idiopathic clubfeet using the Ponseti method. A literature search was performed using the following keywords: "Idiopathic Clubfoot" (All Fields) AND "Management" OR "Outcomes" (All Fields). Databases searched included PubMed, EMBASE, Cochrane Library, Google Scholar, and SCOPUS (age range: 0-12 months). A full-text review of these articles was then performed looking for "complications" or "errors" reported during the treatment process. A total of 61 articles were included in the final review: 28 from PubMed, 8 from EMBASE, 17 from Google Scholar, 2 from Cochrane Library, and 6 from SCOPUS. We then grouped the errors encountered during the treatment process under the different stages of the treatment protocol (diagnosis, manipulation and casting, tenotomy, and bracing) to facilitate discussion and highlight solutions. While the Ponseti method is currently the gold standard in clubfoot treatment, its precise and intensive nature can present clinicians, health care providers, and patients with potential problems if proper diligence and attention to detail is lacking. The purpose of this paper is to highlight common mistakes made throughout the Ponseti treatment protocol from diagnosis to bracing to optimize care for these patients.
abstract_id: PUBMED:37215511
Effectiveness of Ponseti technique in management of arthrogrypotic clubfeet - a prospective study. Background: Clubfoot constitutes roughly 70 percent of all foot deformities in arthrogryposis syndrome and 98% of those in classic arthrogryposis. Treatment of arthrogrypotic clubfoot is difficult and challenging due to a combination of factors like stiffness of ankle-foot complex, severe deformities and resistance to conventional treatment, frequent relapses and the challenge is further compounded by presence of associated hip and knee contractures.
Method: A prospective clinical study was conducted using a sample of nineteen clubfeet in twelve arthrogrypotic children. During weekly visits Pirani and Dimeglio scores were assigned to each foot followed by manipulation and serial cast application according to the classical Ponseti technique. Mean initial Pirani score and Dimeglio score were 5.23 ± 0.5 and 15.79 ± 2.4 respectively. Mean Pirani and Dimeglio score at last follow up were 2.37 ± 1.9 and 8.26 ± 4.93 respectively. An average of 11.3 casts was required to achieve correction. Tendoachilles tenotomy was required in all 19 AMC clubfeet.
Result: The primary outcome measure was to evaluate the role of the Ponseti technique in the management of arthrogrypotic clubfeet. The secondary outcome measure was to study the possible causes of relapses and complications, along with the additional procedures required to manage clubfeet in AMC. Initial correction was achieved in 13 out of 19 arthrogrypotic clubfeet (68.4%). Relapse occurred in 8 out of 19 clubfeet. Five of those relapsed feet were corrected by re-casting ± tenotomy. 52.6% of arthrogrypotic clubfeet were successfully treated by the Ponseti technique in our study. Three patients who failed to respond to the Ponseti technique required some form of soft-tissue surgery.
Conclusion: Based on our results, we recommend the Ponseti technique as the first-line initial treatment for arthrogrypotic clubfeet. Although such feet require a higher number of plaster casts and a higher rate of tendo-achilles tenotomy, the eventual outcome is satisfactory. Although relapses are more frequent than in classical idiopathic clubfeet, most of them respond to re-manipulation and serial casting ± re-tenotomy.
abstract_id: PUBMED:25393569
Can Radiographs Predict Outcome in Patients With Idiopathic Clubfeet Treated With the Ponseti Method? Background: The aim of this study was to determine if radiographic measurements, taken before tenotomy, can predict outcome in children with idiopathic clubfoot treated by the Ponseti method.
Methods: A retrospective chart and radiographic review was performed on children with idiopathic clubfoot treated with the Ponseti method over a 10-year period with minimum 2-year follow-up who had a forced dorsiflexion lateral foot radiograph before tenotomy. All angles were measured in duplicate on the pretenotomy radiographs, including: foot dorsiflexion (defined as 90 degrees minus the angle between the tibial shaft and a plastic plate used to dorsiflex the foot), tibio-calcaneal, talo-calcaneal, and talo-first metatarsal angles. Clinical review of patient records identified different patient outcomes: no additional treatment required, relapse (additional casting and/or surgery required), recurrence (any additional surgery required), or reconstruction (surgery not including repeat tenotomy).
Results: Forty-five patients (71 feet) were included in the study. The median age at follow-up was 4.6 years. The intrareader reliability was acceptable for all measures. Thirteen of the 71 (18%) feet required additional surgery, occurring at a median age of 3.6 years. Of the 4 radiographic measures, only pretenotomy foot dorsiflexion predicted recurrence (hazard ratio=0.96, P=0.03). Youden's method identified 16.6 degrees of dorsiflexion as the optimal cutoff. Feet with at least that amount of dorsiflexion pretenotomy (n=21) experienced no recurrences; feet with less than that amount of dorsiflexion (n=50) experienced 13 recurrences (P=0.007).
Conclusions: Reduced foot dorsiflexion on lateral forced dorsiflexion pretenotomy radiograph was associated with an increased risk of recurrence. Radiographic dorsiflexion to 15 degrees past neutral before tenotomy appears to predict successful treatment via the Ponseti method.
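The cutoff selection described in this abstract uses Youden's method. The sketch below is an illustrative reconstruction, not the study's actual code: the function name, the toy data and the decision rule (flagging feet below a candidate dorsiflexion angle as at risk of recurrence) are assumptions; only the idea of maximizing Youden's J = sensitivity + specificity - 1 over candidate cutoffs comes from the abstract.

```python
# Illustrative sketch of Youden's method for choosing an optimal cutoff.
# Not the study's code; names and data are hypothetical.
import numpy as np

def youden_optimal_cutoff(dorsiflexion_deg, recurred):
    """dorsiflexion_deg: pretenotomy angles; recurred: 1 if the foot later recurred."""
    angles = np.asarray(dorsiflexion_deg, dtype=float)
    outcome = np.asarray(recurred, dtype=bool)
    best_j, best_cutoff = -1.0, None
    for cutoff in np.unique(angles):
        flagged = angles < cutoff                      # low dorsiflexion flags recurrence risk
        tp = np.sum(flagged & outcome)                 # flagged and truly recurred
        fn = np.sum(~flagged & outcome)
        tn = np.sum(~flagged & ~outcome)
        fp = np.sum(flagged & ~outcome)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1                            # Youden's J statistic
        if j > best_j:
            best_j, best_cutoff = j, float(cutoff)
    return best_cutoff, best_j

# Toy example: in this made-up data, every foot below 17 degrees recurred.
print(youden_optimal_cutoff([5, 10, 12, 14, 17, 20, 25, 30], [1, 1, 1, 1, 0, 0, 0, 0]))
```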
abstract_id: PUBMED:30170140
Gait kinetics in children with clubfeet treated surgically or with the Ponseti method: A meta-analysis. Background: Currently, the Ponseti method is the gold standard for treatment of clubfeet. For long-term functional evaluation of this method, gait analysis can be performed. Previous studies have assessed gait differences between Ponseti treated clubfeet and healthy controls.
Research Question/purpose: The aims of this systematic review were to compare the gait kinetics of Ponseti treated clubfeet with healthy controls and to compare the gait kinetics between clubfoot patients treated with the Ponseti method or surgically.
Methods: A systematic search was performed in Embase, Medline Ovid, Web of Science, Scopus, Cochrane, Cinahl ebsco, and Google scholar, for studies reporting on gait kinetics in children with clubfeet treated with the Ponseti method. Studies were excluded if they only used EMG or pedobarography. Data were extracted and a risk of bias was assessed. Meta-analyses and qualitative analyses were performed.
Results: Nine studies were included, of which five were included in the meta-analyses. The meta-analyses showed that ankle plantarflexor moment (95% CI -0.25 to -0.19) and ankle power (95% CI -0.89 to -0.60) were significantly lower in the Ponseti-treated clubfeet compared to the healthy controls. No significant difference was found in ankle dorsiflexor moment, plantarflexor moment, or ankle power between clubfeet treated with surgery and those treated with the Ponseti method.
Significance: Differences in gait kinetics are present when comparing Ponseti-treated clubfeet with healthy controls. However, there is no significant difference between surgically and Ponseti-treated clubfeet. These results give more insight into the possibilities for improving the gait pattern of patients treated for clubfeet.
abstract_id: PUBMED:36199722
Distal Tibial Epiphyseal Separation during Ponseti Casting in a Non-idiopathic Clubfoot- ACase Report. Introduction: Non-idiopathic clubfeet are more rigid compared to idiopathic clubfeet and usually require operative correction. Recent reports favor Ponseti casting in these feet. Iatrogenic fractures during and after casting have been reported in the literature but epiphyseal separation and subperiosteal ossification have not been reported earlier.
Case Report: A 3-year-old female child presented with untreated bilateral clubfeet and lumbosacral myelomeningocele. She was treated by Ponseti casting. During a casting session, we noticed swelling and deformity in the left leg and foot. Radiographs showed separation and displacement of the distal tibial and fibular epiphyses. She was treated by manipulation and casting, and final correction was achieved by bilateral tendoachilles tenotomy.
Conclusion: Ponseti casting for non-idiopathic clubfeet may result in epiphyseal displacement of the distal tibia and fibula; hence, any abnormal swelling or deformity needs to be evaluated by radiograph.
abstract_id: PUBMED:34415418
Long-term outcomes of the Ponseti method for treatment of clubfoot: a systematic review. Purpose: The Ponseti method has revolutionized the clubfoot treatment and has been adopted globally in the past couple of decades. However, most reported results of the Ponseti method are either short or midterm. Studies reporting long-term outcomes of the Ponseti method are limited. The following systematic review aimed to provide a comprehensive overview of the published articles on long-term outcomes of the Ponseti method.
Material And Methods: A literature search was performed for articles published in electronic database PubMed (includes Medline) and Cochrane for broad keywords: "Clubfoot"; "Ponseti method/technique"; "long term outcomes/results." Studies selected included full-text articles in English language on children less than one year with primary idiopathic clubfoot treated by the Ponseti method with mean ten year follow-up. Non-idiopathic causes or syndromic clubfoot and case reports/review articles/meta-analyses were excluded. The following parameters were included for analysis: number of patients/clubfeet, male/female, mean age at treatment, mean/range of follow-up, relapses, additional surgery, range of motion, various outcome scores, and radiological variables.
Results: Fourteen studies with 774 patients/1122 feet were included. The male:female ratio was 2.4:1. Mean follow-up recorded in the studies was 14.5 years. Relapses occurred in 47% of patients, with additional surgery being required in 79% of patients with relapses. Of these procedures, 86% were extra-articular while 14% were intra-articular. A plantigrade foot was achieved in the majority of patients, with mean ankle dorsiflexion of 11 degrees. The outcome scores were in general good, in contrast to the radiological angles, which were mostly outside the normal range, with talar flattening, navicular wedging and degenerative osteoarthritic changes occurring in 60%, 76%, and 30% of feet, respectively.
Conclusions: Long-term follow-up of infants with primary idiopathic clubfeet treated by the Ponseti method revealed relatively high relapse and additional surgery rates. Radiologically, the various angles were inconsistent with normal ranges, and anatomical deformations/degenerative changes were present in treated feet. Moreover, the relapse rates and the requirement for additional surgery increased with long-term follow-up. Despite this, the majority of feet were plantigrade and demonstrated good clinical results as measured by various outcome tools. There should be emphasis on long-term follow-up of children with clubfeet in view of late relapses and secondary late changes.
abstract_id: PUBMED:26704175
Radiographic Indicators of Surgery and Functional Outcome in Ponseti-Treated Clubfeet. Background: Evaluation of the results of treatment for clubfoot by the Ponseti technique is based on clinical and functional parameters. There is a need to establish universally recognized quantitative measurements to compare, better understand, and more precisely evaluate therapeutic outcome.
Methods: Nine angles were measured on standard radiographs of 145 children with idiopathic clubfeet treated between 2000 and 2010 with the Ponseti method. The average follow-up was 6.3 years. The need for additional surgery and the functional outcome were correlated with the radiologic measurements.
Results: Three radiologic parameters were identified as having significant (P ≤ .001) predictive value. The lateral tibiocalcaneal angle with the ankle at maximal dorsiflexion measuring fixed equinus was larger in the preoperated group (77 ± 12 degrees) compared to the nonoperated group (67 ± 14 degrees). Functional outcome was better with smaller angles. The lateral talocalcaneal angle with the ankle at maximal dorsiflexion measuring hindfoot rigidity was smaller (29 ± 8 degrees) in the preoperated group compared to the nonoperated group (35 ± 9 degrees). Functional outcome was better with larger angles. The lateral talo-first metatarsal angle with the ankle at maximal plantarflexion measuring foot cavus was larger in the preoperated group (31 ± 10 degrees) compared to the nonoperated group (22 ± 11 degrees), whereas functional outcome did not correlate with this angle.
Conclusions: The lateral view with the foot in maximal dorsiflexion was found to be the most significant and useful view to detect abnormality in Ponseti-treated clubfeet. In our practice now, we are using only this view in order to reduce the radiation exposure to the child.
Level Of Evidence: Level III, retrospective case series.
Answer: Yes, radiographs can predict outcomes in patients with idiopathic clubfeet treated with the Ponseti method. A study found that reduced foot dorsiflexion on lateral forced dorsiflexion pretenotomy radiographs was associated with an increased risk of recurrence. Specifically, feet with at least 16.6 degrees of dorsiflexion pretenotomy experienced no recurrences, while those with less dorsiflexion experienced recurrences. Radiographic dorsiflexion to 15 degrees past neutral before tenotomy therefore appears to predict successful treatment via the Ponseti method (PUBMED:25393569).
Additionally, another study identified three radiologic parameters with significant predictive value for the need for additional surgery and functional outcome. These parameters included the lateral tibiocalcaneal angle with the ankle at maximal dorsiflexion, the lateral talocalcaneal angle with the ankle at maximal dorsiflexion, and the lateral talo-first metatarsal angle with the ankle at maximal plantarflexion. The study concluded that the lateral view with the foot in maximal dorsiflexion was the most significant and useful view to detect abnormality in Ponseti-treated clubfeet (PUBMED:26704175).
Instruction: Are babies getting bigger?
Abstracts:
abstract_id: PUBMED:17939625
Are babies getting bigger? Secular trends in fetal growth in Israel--a retrospective hospital-based cohort study. Background: A paradoxical secular trend of an increase in preterm births and a decrease in low birth weights has been reported in many developed countries over the last 25 years.
Objective: To determine if this trend is true for Israeli neonates, and to add new information on secular trends in crown-heel length and head circumference.
Methods: A hospital-based historic cohort design was used. Anthropometric data for 32,062 infants born at Rabin Medical Center in 1986-1987, 1994-1996, and 2003-2004 were collected from the hospital's computerized registry and compared over time for absolute values and proportional trends.
Results: For the whole sample (gestational age 24-44 weeks) there was a significant increase in mean birth weight (by 41 g), crown-heel length (by 1.3 cm), and head circumference (by 0.1 cm) from 1986 to 2004 (P < 0.001). A similar trend was found on separate analysis of the post-term babies. Term infants showed an increase in mean length and head circumference (P < 0.001), but not weight, and moderately preterm infants (33-36 weeks) showed an increase in mean weight (81 g, P < 0.001) and mean length (1.0 cm, P < 0.001), but not head circumference. The proportion of post-term (42-44 weeks), preterm (24-36 weeks), very preterm (29-32 weeks), extremely preterm (24-28 weeks), low birth weight (< 2500 g) and very low birth weight (< 1500 g) infants decreased steadily and significantly over time (P < 0.002).
Conclusions: Babies born in our facility, term and preterm, are getting bigger and taller. This increase is apparently associated with a drop (not a rise) in the proportion of preterm infants. These results might reflect improvements in antenatal care and maternal determinants.
abstract_id: PUBMED:22589619
Comparison Between Immunological Markers in Cord Blood of Preterm and Term Babies in Hospital USM. A cross sectional pilot study using convenient sampling method was conducted to evaluate various immunological parameters in preterm babies and term babies. Cord blood from 36 preterm and 36 term babies was taken and the following parameters were determined: Immunoglobulin G, A and M, Complement 3 and 4 and NBT. The results showed that NBT was significantly reduced in preterm babies compared to term babies (7.5% versus 12.0%; p= 0.001). The complement levels, C3 (0.5114 versus 0.7192 g/l; p<0.001) and C4 (0.07 versus 0.14g/l; p<0.001) were significantly lower in preterm babies than in the term babies. The mean IgG level in preterm babies was significantly lower than in term babies (9.5583 versus 14.2806 g/l, p<0.001). IgM (0.1 versus 0.2g/l; p<0.001) and IgA (0.210 versus 0.225g/l; p=0.036l) levels were significantly lower in the preterm than in term babies. In conclusion, we found that NBT reduction, IgG, IgA, IgM, C3 and C4 levels were significantly lower in the preterm compared to term babies.
abstract_id: PUBMED:27517887
Predicting Protein-Protein Interactions Using BiGGER: Case Studies. The importance of understanding interactomes makes preeminent the study of protein interactions and protein complexes. Traditionally, protein interactions have been elucidated by experimental methods or, with lower impact, by simulation with protein docking algorithms. This article describes features and applications of the BiGGER docking algorithm, which stands at the interface of these two approaches. BiGGER is a user-friendly docking algorithm that was specifically designed to incorporate experimental data at different stages of the simulation, to either guide the search for correct structures or help evaluate the results, in order to combine the reliability of hard data with the convenience of simulations. Herein, the applications of BiGGER are described by illustrative applications divided in three Case Studies: (Case Study A) in which no specific contact data is available; (Case Study B) when different experimental data (e.g., site-directed mutagenesis, properties of the complex, NMR chemical shift perturbation mapping, electron tunneling) on one of the partners is available; and (Case Study C) when experimental data are available for both interacting surfaces, which are used during the search and/or evaluation stage of the docking. This algorithm has been extensively used, evidencing its usefulness in a wide range of different biological research fields.
abstract_id: PUBMED:26692633
Pedagogy with babies: perspectives of eight nursery managers. The last 30 years have seen a significant increase in babies attending nursery, with corresponding questions about the aims and organisation of practice. Research broadly agrees on the importance of emotionally consistent, sensitive and responsive interactions between staff and babies. Policy objectives for nursery and expectations of parents and staff give rise to different and sometimes conflicting aims for such interactions; for example attachments to staff, peer interactions or early learning. Research shows marked variations of pedagogy aims and organisation with babies in nurseries in different national and cultural contexts. It also demonstrates variation between nurseries in similar contexts and between staff in their beliefs and values about work with babies. This paper reports on an exploratory study of the beliefs, aspirations and approaches of eight managers concerning pedagogy with babies in two similar English local authorities. These managers spoke of the importance of being responsive to the concerns and priorities of parents, whilst being sensitive to the demands of the work on their staff. The main finding was of the contradictions and confusions managers felt were inherent in the work, arising from both conflicting policy objectives and personal beliefs and aspirations; sometimes their own and sometimes those of individual staff and parents. Urban, Vandenbroeck, Van Laere, Lazzari, and Peeters' [(2012). Towards competent systems in early childhood education and care. Implications for policy and practice. European Journal of Education, 47(4), 508-526.] concept of the 'competent system' is used to recommend a grounded approach to the development of a more culturally, socially and individually responsive pedagogy with babies than appears to exist at present.
abstract_id: PUBMED:35080002
Getting Lost in People With Dementia: A Scoping Review. Background: Many people with dementia suffer from getting lost, which not only impacts their daily lives but also affects their caregivers and the general public. The concept of getting lost in dementia has not been clarified in the literature.
Purpose: This scoping review was designed to provide a deeper understanding of the overall phenomenon of getting lost in people with dementia, with the results intended to provide caregivers with more complete information and to inform research and practice related to getting lost in dementia.
Methods: A systematic review method was used, and articles were retrieved from electronic databases including PubMed, Embase, Airiti Library, Cochrane Library, and Gray literature. Specific keywords, MeSH terms, and Emtree terms were used to search for articles on dementia and getting lost. A total of 10,523 articles published from 2011-2020 that matched the search criteria were extracted. After screening the topics and deleting repetitions, 64 articles were selected for further analysis. These articles were classified and integrated based on the six-step literature review method proposed by Arksey and O'Malley.
Results: The key findings of the review included: (1) The concept of getting lost in dementia is diverse and inseparable from wandering; (2) More than half of the assessment tools related to getting lost in dementia include the concept of wandering; (3) The factors identified as affecting getting lost in dementia include the patient's personal traits, disease factors, care factors, and environmental factors; (4) Getting lost in dementia negatively affects patients as well as their caregivers and the general public; (5) Most of the articles in this review were quantitative studies and were conducted in Western countries.
Conclusions / Implications For Practice: The scoping review approach may assist care providers to fully understand the phenomenon of getting lost in dementia, clarify its causes and consequences, and identify the limitations in the literature. The findings may be referenced in the creation of healthcare policies promoting related preventive measures and care plans as well as used to guide future academic research.
abstract_id: PUBMED:25664058
Blood and urine 8-iso-PGF2α levels in babies of different gestational ages. Objective: We measured cord blood and urine 8-iso-prostaglandin F2α (8-iso-PGF2α) levels in babies of different gestational ages to determine lipid peroxidation status.
Methods: Babies at gestational ages of 28-43 weeks were divided into group A (28-32 weeks), group B (33-36 weeks), group C (37-41 weeks), and group D (42-43 weeks). 8-iso-PGF2α in umbilical cord blood (UCB) at birth and in urine at 6 hours after birth was tested by ELISA.
Results: UCB and urine 8-iso-PGF2α levels in group C were 130.09 ± 31.73 pg/ml and 27.14 ± 6.73 pg/ml, respectively. UCB 8-iso-PGF2α levels in group A and B were 188.42 ± 59.34 pg/ml and 189.37 ± 68.46 pg/ml, and urine 8-iso-PGF2α were 32.14 ± 7.32 pg/ml and 30.46 ± 8.83 pg/ml, respectively. Blood and urine 8-iso-PGF2α levels in group D (post-term) were 252.01 ± 46.42 pg/ml and 44.00 ± 8.50 pg/ml. For all babies, UCB and urine iso-PGF2α levels were significantly correlated (r = 0.65, P < 0.01).
Conclusions: We established blood and urine iso-PGF2α levels in normal full-term babies. Urine 8-iso-PGF2α levels may reflect the extent of lipid peroxidation in babies. In pre-term and post-term babies, there was evidence for increased lipid peroxidation.
abstract_id: PUBMED:27407308
USEFULNESS OF EVALUATION OF ANTIMEASLES ANTIBODIES IN PRETERM BABIES. As per WHO recommendations, measles vaccine is administered at the age of 9 months, which is based on studies demonstrating seroconversion (from positive to negative) at this age. However, this contention may not hold good in preterm babies since they may have lower initial levels of passively transferred IgG antimeasles antibodies of maternal origin. To explore this possibility, 50 preterm babies (gestational age less than 37 weeks) were studied for antimeasles antibodies. Serum samples were collected at birth and then at 3 months and 5 months of age in all the cases. Antimeasles antibody assay was done in all the serum samples using ELISA kits. At birth, 32% of infants were positive for antimeasles antibodies, whereas 60% were weakly positive and 8% were negative. At 3 months of age, 50% were seronegative, 2% positive and 40% weakly positive. Seronegativity was found to be 98% at 5 months, with only 2% remaining positive. Since seroconversion is seen to occur in the vast majority of preterm infants at the age of 5 months, antimeasles vaccine should be administered at this age to this subset of more vulnerable babies.
abstract_id: PUBMED:27688543
Problems in Diagnosis of HIV Infection in Babies. Serological diagnosis of human immunodeficiency virus (HIV) infection in babies born to HIV-infected mothers is difficult because of the presence of maternal anti-HIV antibody for up to 18 months. Conventional enzyme-linked immunosorbent assay (ELISA) and western blot may be positive in uninfected cases. Various other modalities which have been adopted include detection of HIV-specific IgA, IgM and IgE, detection of p24 antigen, viral culture and detection of HIV nucleic acid by polymerase chain reaction (PCR). Viral culture or PCR positivity within the first 48 hours of life indicates intrauterine infection. An early diagnosis of HIV infection in babies born to HIV-infected mothers is essential so that definitive antiretroviral therapy (ART) can be instituted, and unnecessary drug toxicity avoided if the infant is found negative. Though viral culture and DNA-PCR have a sensitivity of >95% after one month of age, some cases cannot be diagnosed during this period. Other tests, such as viral RNA detection by reverse transcription polymerase chain reaction (RT-PCR) and combinations of tests, will be required.
abstract_id: PUBMED:32220824
An anatomy of waste generation flows in construction projects using passive bigger data. Understanding waste generation flow is vital to any evidence-based effort by policy-makers and practitioners to successfully manage construction project waste. Previous research has found that accumulative waste generation in construction projects follows an S-curve, but improving our understanding of waste generation requires its investigation at a higher level of granularity. Such efforts, however, are often constrained by lack of quality "bigger" data, i.e. data that is bigger than normal small data. This research aims to provide an anatomy of waste generation flow in building projects by making use of a large set of data on waste generation in 19 demolition, 59 foundation, and 54 new building projects undertaken in Hong Kong between 2011 and 2019. We know that waste is generated in far from a steady stream as it is always impacted by contingent factors. However, we do find that peaks of waste generation in foundation projects appear when project duration is at 50-85%, and in new building projects at 40-70% of total project time. Our research provides useful information for waste managers in developing their waste management plans, arranging waste hauling logistics, and benchmarking waste management performance.
abstract_id: PUBMED:35856049
Effectiveness of multistimulation approach on feeding habits of low-birth-weight babies-A randomized control trial. Introduction: Low- birth- weight neonates face oral feeding difficulties due to hemodynamic instability, immaturity of central nervous systems, and incomplete development of oral functions. Use of several interventions might help in improvement of the feeding ability of neonates. The objective of the study was to evaluate the effect of the multistimulation approach in low-birth-weight babies on the oral feeding performance, oral intake volume, weight gain and transition time from tube to total oral intake.
Methods: A randomized, parallel-group, multiple-arm trial was conducted, and a total of 44 low-birth-weight babies were randomized into three parallel groups with a 2:1:1 ratio. Hemodynamically stable babies were included in the trial. Of the two intervention groups, one received an oral stimulation program and the other received tactile stimulation, while the control group received routine newborn procedures for the same duration of time. Oral feeding performance was determined by Oral Feeding Skills (OFS) on a daily basis for five days after providing ten days of intervention. Neonates were monitored until hospital discharge.
Results: Infants in the stimulation groups had significantly better oral feeding performance than infants in the control group in terms of mean proficiency, transfer rate and overall transfer of feeding volume. There was a substantial increase in mean feeding score, daily weight, oral intake volume, and early transition time in both intervention groups compared to control. There was no significant difference in feeding behaviours between the oromotor and multistimulation groups, but the multistimulation group gained more weight compared to the oromotor group.
Conclusions: Infants exposed to the stimulation programme had better feeding skills and a shorter transition period from tube feeding to oral feeding; however, the babies who received multistimulation gained greater weight than babies who received only oromotor stimulation. The study recommends that multistimulation, in the form of oromotor and tactile stimulation, can be used as an effective NICU procedure for maintaining an infant's ability to take feeds orally before being discharged from the hospital.
Answer: Yes, babies are getting bigger. A retrospective hospital-based cohort study in Israel found a significant increase in mean birth weight, crown-heel length, and head circumference for infants born at Rabin Medical Center from 1986 to 2004. This trend was observed for the whole sample, which included gestational ages from 24 to 44 weeks. Specifically, there was an increase in mean birth weight by 41 grams, crown-heel length by 1.3 centimeters, and head circumference by 0.1 centimeters. The study also noted a decrease in the proportion of preterm, very preterm, extremely preterm, low birth weight, and very low birth weight infants over time, suggesting improvements in antenatal care and maternal determinants (PUBMED:17939625).
Instruction: Does the direction of tumescent solution delivery matter in endovenous laser ablation of the great saphenous vein?
Abstracts:
abstract_id: PUBMED:26289048
Does the direction of tumescent solution delivery matter in endovenous laser ablation of the great saphenous vein? Background: The aim of this study was to compare the two different directions of tumescent solution delivery (from distal to proximal knee to the saphenofemoral junction [SFJ] or proximal to distal SFJ to the knee) in terms of differences in tumescent volume, number of punctures, and pain and comfort scores of patients.
Methods: A total of 100 patients were treated with endovenous laser ablation (EVLA) under local anesthesia between August 2013 and October 2013. These 100 patients were divided into two groups. In group 1, tumescent solution was delivered in a proximal to distal direction. In group 2, the tumescent solution was delivered in a distal to proximal direction. In each group, the great saphenous vein (GSV) diameter, delivered total energy, treated GSV length, delivered tumescent volume, number of punctures, and pain and comfort scores were recorded for each patient.
Results: All patients were treated unilaterally. EVLA was performed with 100% technical success in all patients. There was no statistically significant difference between group 1 and group 2 in GSV diameter, delivered total energy, or treated GSV length. Average tumescent volume, number of punctures, and pain scores in group 2 were lower than in group 1 (p = 0.0001; p < 0.05). Also, the average comfort score was higher in group 2 than in group 1 (p = 0.0001; p < 0.05).
Conclusions: We believe that delivering the tumescent solution in a distal to proximal direction increases the comfort of both patient and surgeon with lower tumescent volume during the EVLA of the GSV.
abstract_id: PUBMED:24459131
Is the temperature of tumescent anesthesia applied in the endovenous laser ablation important? comparison of different temperatures for tumescent anesthesia applied during endovenous ablation of incompetent great saphenous vein with a 1470 nm diode laser. Introduction: We aimed to investigate whether the temperature of tumescent anesthesia is important, if so, to establish an opinion about the ideal temperature.
Materials And Methods: Endovenous laser ablations were performed in 72 patients; 35 patients (Group A) received tumescent anesthesia at +4℃, while the other 37 patients (Group B) received tumescent anesthesia at room temperature. The groups were compared in terms of intraoperative pain, postoperative regional pain, ecchymosis, paresthesia, skin burns and necrosis. At month 1, the great saphenous vein was evaluated for recanalization, and patient satisfaction was assessed.
Results: The survey on intraoperative pain showed that patients receiving tumescent anesthesia at +4℃ experienced much less pain. Interestingly, statistical analysis showed that this difference was not significant (p = 0.072). No skin burns or necrosis occurred in either group, whereas ecchymosis and paresthesia were the most frequently observed side effects in both groups, but no significant difference was found between the groups. There was no significant difference between pain levels on postoperative days and no significant difference between the groups in terms of satisfaction with endovenous laser ablation procedure and postoperative satisfaction. All venous segments treated with endovenous laser ablation in both groups were occluded. At month 1 no recanalization was observed.
Conclusion: We conclude that the temperature of tumescent anesthesia solution is not important, while the proper administration of tumescent solution in adequate amounts ensuring delivery of the fluid to all segments appears to be a more significant determinant for the success of the procedure.
abstract_id: PUBMED:28860790
Evaluation of pain during endovenous laser ablation of the great saphenous vein with ultrasound-guided femoral nerve block. Background: Endoluminal laser ablation is now considered the method of choice for treating greater saphenous vein insufficiency. General anesthesia and peripheral nerve blocks with sedation have the risk of post-procedural delay in discharge and prolonged immobilization with the risk of deep vein thrombosis. The main pain experienced by patients during the procedure is during the laser ablation and the multiple needle punctures given along and around the great saphenous vein. The aim of our study was to evaluate the safety and efficacy of blocking the femoral nerve only under ultrasound-guidance without sedation, to reduce or prevent pain during injectable tumescent anesthesia in endovenous laser ablation of the greater saphenous vein.
Methods: Sixty patients in two groups underwent endovenous laser ablation for the greater saphenous vein insufficiency at an outpatient clinic. All patients received tumescent anesthesia. However, one group received a femoral nerve block (FNB) under ultrasound guidance before the procedure. All patients were asked to record the pain or discomfort, using the visual analog score, from the start of the procedure until the end of the great saphenous vein laser ablation. The length of the great saphenous vein and duration of the procedure were also recorded. The results were analyzed using statistical methods.
Results: No complications from FNB were observed. The pain associated with application of tumescent anesthesia and laser ablation was more intense in the group without an FNB (P < 0.001). There was no significant difference between the two groups in the length of the great saphenous vein or procedure duration.
Conclusion: Ultrasound-guided FNB (without other peripheral nerve blocks) is a safe, adequate, and effective option to decrease and/or eliminate the intraoperative discomfort associated with tumescent anesthesia injections and laser ablation during endoluminal laser ablation of the greater saphenous vein.
abstract_id: PUBMED:38182555
Laser-sclerosing foam hybrid treatment, a non-tumescent technique for insufficient great saphenous vein ablation. Objective: We aim to report on the Laser-Sclerosing Foam Hybrid Treatment (LSFHT) and its outcomes when used on patients with great saphenous vein (GSV) insufficiency.
Methods: This was a single-center retrospective cohort study of patients with GSV insufficiency who were treated with the LSFHT technique, a surgical procedure that combines sclerosing foam and endovenous ablation and avoids the use of tumescent anesthesia. Occlusion rates and complications were reported.
Results: 139 legs from 106 patients were operated on, achieving a 100% occlusion rate, while only one small burn and 2 cases of popliteal vein thrombosis occurred.
Conclusion: The study suggests that the LSFHT is a feasible fast procedure that proved both effective and safe for the treatment of GSV insufficiency.
abstract_id: PUBMED:28956693
A description of the 'smile sign' and multi-pass technique for endovenous laser ablation of large diameter great saphenous veins. Aims To report on great saphenous vein diameter distribution of patients undergoing endovenous laser ablation for lower limb varicose veins and the ablation technique for large diameter veins. Methods We collected retrospective data of 1929 (943 left leg and 986 right leg) clinically incompetent great saphenous vein diameters treated with endovenous laser ablation over five years and six months. The technical success of procedure, complications and occlusion rate at short-term follow-up are reported. Upon compression, larger diameter veins may constrict asymmetrically rather than concentrically around the laser fibre (the 'smile sign'), requiring multiple passes of the laser into each dilated segment to achieve complete ablation. Results Of 1929 great saphenous veins, 334 (17.31%) had a diameter equal to or over 15 mm, which has been recommended as the upper limit for endovenous laser ablation by some clinicians. All were successfully treated and occluded upon short-term follow-up. Conclusion We suggest that incompetent great saphenous veins that need treatment can always be treated with endovenous laser ablation, and open surgery should never be recommended on vein diameter alone.
abstract_id: PUBMED:25940645
Endovenous laser ablation is an effective treatment for great saphenous vein incompetence in teenagers. Objectives: The current knowledge of chronic venous disease in teenagers and its treatment is very limited. The aim of the study is to present our experience and the available literature data on the treatment of varicose veins in teenagers with endovenous laser ablation of the great saphenous vein.
Methods: Five patients, aged 15-17 years, were qualified for surgery, based on typical signs and symptoms of chronic venous disease. Minimally invasive treatment with endovenous laser ablation of the great saphenous vein was applied.
Results: The technical success of surgery was achieved in all patients. Over a 2-year follow-up we did not observe any case of recanalisation of the great saphenous vein, recurrence of varicose veins, or serious complications, such as deep vein thrombosis or pulmonary embolism. One patient presented with resolving of post-operative bruising, and two cases of local numbness were transient.
Conclusions: Endovenous laser ablation of the great saphenous vein in the treatment of chronic venous disease in teenagers is effective and safe. The method provides excellent cosmetic effects, very short recovery time and high levels of patient satisfaction.
abstract_id: PUBMED:27306991
Defining the optimum tumescent anaesthesia solution in endovenous laser ablation. Objectives To produce a tumescent anaesthesia solution with physiological pH for endovenous thermal ablation and evaluate its influence on peri- and postoperative pain, clinical and quality of life outcomes, and technical success. Methods Tumescent anaesthetic solution (0.1% lidocaine with 1:2,000,000 epinephrine) was titrated to physiological pH by buffering with 2 ml incremental aliquots of 8.4% sodium bicarbonate. Patients undergoing great saphenous vein endovenous laser ablation and ambulatory phlebectomy were studied before and after introduction of buffered tumescent anaesthetic. Primary outcome was perioperative pain measured on a 10 cm visual analogue scale. Secondary outcomes were daily pain scores during the first postoperative week, complications, time to return to normal activity, patient satisfaction, generic and disease-specific quality of life, and technical success. Patients were assessed at baseline, and at 1, 6 and 12 weeks following the procedure. Results A physiological pH was achieved with the addition of 10 ml of 8.4% sodium bicarbonate to 1 l of standard tumescent anaesthetic solution. Sixty-two patients undergoing great saphenous vein endovenous laser ablation with phlebectomy were recruited before and after the introduction of buffered tumescent anaesthetic solution. Baseline and operative characteristics were well matched. The buffered solution was associated with significantly lower (median (interquartile range)) periprocedural pain scores (1 (0.25-2.25) versus 4 (3-6), p < 0.001) and postoperative pain score at the end of the treatment day (1.8 (0.3-2.8) versus 3.0 (1.2-5.2), p = 0.033). There were no significant differences in postoperative pain scores between the groups at any other time. There were no significant differences in other clinical outcomes between the groups. Both groups demonstrated significant improvements in generic and disease-specific quality of life, with no intergroup differences. Both groups demonstrated 100% ultrasonographic technical success at all time points. Conclusions Buffering of tumescent anaesthetic solution during endovenous thermal ablation is a simple, safe, inexpensive and effective means of reducing perioperative and early postoperative pain.
abstract_id: PUBMED:23512897
A comparison of three tumescent delivery systems in endovenous laser ablation of the great saphenous vein. Different systems for delivering tumescent solution exist in endovenous laser ablation (EVLA). This study evaluated three different tumescent delivery systems in patients with primary varicose veins due to great saphenous vein reflux who were treated with EVLA. In this prospective non-randomized study, 60 patients with isolated GSV varicose veins were divided into three groups. All patients received EVLA treatment. Three different tumescent solution delivery systems were used. The systems consisted of a needle and a syringe in Group 1, a needle connected to an infusion bag system in Group 2, and a peristaltic infiltration pump in Group 3. Tumescent delivery durations were 6.56 SD 1.18 minutes in Group 1, 6.05 SD 2.19 minutes in Group 2, and 5.19 SD 1.15 minutes in Group 3 (P = 0.014). In the outcomes of the study there was no significant difference between groups. Although peristaltic pump systems might provide shorter tumescent delivery durations without hand fatigue, the shorter duration (about 1 minute) does not have any practical importance, and the system is not cost-effective. For delivering tumescent solutions in EVLA procedures, there was no major superiority of one system over the others.
abstract_id: PUBMED:27295103
Endovenous ablation of saphenous vein varicosis. In the past 15 years, minimally invasive endovenous treatments of varicose veins have been widely accepted. The efficacy of the different endovenous methods and the minimal postoperative side effects are meanwhile well documented in a large number of evidence-based publications. The recent NICE guidelines (2013) on varicose vein treatment recommend, in the case of saphenous vein insufficiency, endovenous thermal ablation with radiofrequency or laser first, then ultrasound-guided sclerotherapy, and classic surgical treatment with stripping and high ligation as the third line.
abstract_id: PUBMED:27126643
Which endovenous ablation method does offer a better long-term technical success in the treatment of the incompetent great saphenous vein? Review. Objective: The aim of this review article was to evaluate the long-term technical success rates of the known endovenous ablation procedures in the treatment of the incompetence of the great saphenous vein.
Methods: A literature search was conducted in the PubMed database up to 5 January 2016. All publications with four to five years of follow-up were eligible. Meta-analysis was performed using the IVhet model.
Results: Eight hundred and sixty-two unique publications were found; 17 of them were appropriate for meta-analysis. Overall, 1420 limbs were included in the trial, 939 for endovenous laser ablation, 353 for radiofrequency ablation and 128 for ultrasound guided foam sclerotherapy. Overall, technical success rates were 84.8% for endovenous laser ablation, 88.7% for radiofrequency ablation and 32.8% for ultrasound guided foam sclerotherapy. There were no significant differences between endovenous laser ablation, radiofrequency ablation and ultrasound guided foam sclerotherapy regarding the great saphenous vein reopening (p = 0.66; OR: 0.22; 95% of CI: 0.08-0.62 for radiofrequency ablation vs. endovenous laser ablation; p = 0.96; OR: 0.11; 95% of CI: 0.06-0.20 for endovenous laser ablation vs. ultrasound guided foam sclerotherapy; p = 0.93; OR: 3.20; 95% of CI: 0.54-18.90 for ultrasound guided foam sclerotherapy vs. radiofrequency ablation).
Conclusion: Both endovenous laser ablation and radiofrequency ablation are efficient for great saphenous vein occlusion in the long term. In the absence of large, long-term trials, the efficacy and reliability of ultrasound guided foam sclerotherapy for treating great saphenous vein reflux is not established.
Answer: Yes, the direction of tumescent solution delivery does matter in endovenous laser ablation (EVLA) of the great saphenous vein (GSV). A study comparing two different directions of tumescent solution delivery—proximal to distal (from the saphenofemoral junction [SFJ] to the knee) and distal to proximal (from the knee to the SFJ)—found that delivering the tumescent solution in a distal to proximal direction resulted in lower average tumescent volume, fewer punctures, and lower pain scores. Additionally, the average comfort score was higher in the group where the solution was delivered in a distal to proximal direction (PUBMED:26289048). This suggests that the direction of tumescent solution delivery can impact the comfort of the patient and the efficiency of the procedure.
Instruction: Is the Quebec provincial administrative database a valid source for research on chronic non-cancer pain?
Abstracts:
abstract_id: PUBMED:26105572
Is the Quebec provincial administrative database a valid source for research on chronic non-cancer pain? Purpose: The objective of this study was to evaluate the validity of diagnostic codes recorded in the Régie de l'assurance maladie du Québec (RAMQ) administrative database for identifying patients suffering from various types of chronic non-cancer pain.
Methods: The validity of published International Classification of Diseases, Ninth Revision, coding algorithms for identifying patients with particular chronic pain syndromes in the RAMQ database was tested using pain specialist-established diagnostic data of 561 patients enrolled in the Quebec Pain Registry, which was used as the reference standard. Modified versions of these algorithms (i.e., adaptation of the number of healthcare encounters) were also tested. For each algorithm, sensitivity, specificity, positive/negative predictive values, and their respective 95% confidence intervals (95%CI) were calculated.
Results: In the RAMQ database, some previously published algorithms and modified versions of these algorithms were found to be valid for identifying patients suffering from chronic lumbar pain (sensitivity: 0.65, 95%CI: 0.59-0.71; specificity: 0.83, 95%CI: 0.79-0.87), chronic back pain (sensitivity: 0.70, 95%CI: 0.64-0.76; specificity: 0.73, 95%CI: 0.68-0.78), and chronic neck/back pain (sensitivity: 0.71, 95%CI: 0.65-0.76; specificity: 0.78, 95%CI: 0.72-0.82). Algorithms to identify patients with other types of chronic pain showed low sensitivity: complex regional pain syndrome (≤0.07), fibromyalgia (≤0.42), and neuropathic pain (≤0.39).
Conclusions: Our study provides evidence supporting the value of the RAMQ administrative database for conducting research on certain types of chronic pain disorders including back and neck pain. Users should, however, be cautious about the limitations of this database for studying other types of chronic pain syndromes such as complex regional pain syndrome, fibromyalgia, and neuropathic pain.
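The validation arithmetic described in this abstract can be sketched in a few lines. This is only an illustrative reconstruction: the abstract does not state which confidence-interval method was used, so the Wilson score interval below is an assumption, and the function and variable names are hypothetical. The sketch assumes every denominator is non-zero.

```python
# Hedged sketch: sensitivity/specificity of a claims-based case-finding algorithm
# against a clinician-established reference standard, with Wilson 95% CIs (assumed method).
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return (centre - half, centre + half)

def algorithm_validity(algo_flags, reference_flags):
    """Both arguments: lists of 0/1 case flags, one entry per registry patient."""
    pairs = list(zip(algo_flags, reference_flags))
    tp = sum(1 for a, r in pairs if a and r)          # algorithm and reference both positive
    fn = sum(1 for a, r in pairs if not a and r)
    tn = sum(1 for a, r in pairs if not a and not r)
    fp = sum(1 for a, r in pairs if a and not r)
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "ppv": (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "npv": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }
```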
abstract_id: PUBMED:25924096
Accuracy of Self-reported Prescribed Analgesic Medication Use: Linkage Between the Quebec Pain Registry and the Quebec Administrative Prescription Claims Databases. Objectives: The validity of studies conducted with patient registries depends on the accuracy of the self-reported clinical data. As of now, studies about the validity of self-reported use of analgesics among chronic pain (CP) populations are scarce. The objective of this study was to assess the accuracy of self-reported prescribed analgesic medication use. This was attained by comparing the data collected in the Quebec Pain Registry (QPR) database to those contained in the Quebec administrative prescription claims database (Régie de l'assurance maladie du Québec [RAMQ]).
Methods: To achieve the linkage between the QPR and the RAMQ databases, the first 1285 patients who were consecutively enrolled in the QPR between October 31, 2008 and January 27, 2010 were contacted by mail and invited to participate in a study in which they had to provide their unique RAMQ health insurance number. Using RAMQ prescription claims as the reference standard, κ coefficients, sensitivity, specificity, and their respective 95% confidence intervals were calculated for each therapeutic class of prescribed analgesic drugs that the participants reported taking currently and in the past 12 months.
Results: A total of 569 QPR patients responded to the postal mailing, provided their unique health insurance number, and gave informed consent for the linkage (response proportion=44%). Complete RAMQ prescription claims over the 12 months before patient enrollment into the QPR were available for 272 patients, who constituted our validated study population. Regarding current self-reported prescribed analgesic use, κ coefficients measuring agreement between the 2 sources of information ranged from 0.66 to 0.78 for COX-2-selective nonsteroidal anti-inflammatory drugs, anticonvulsants, antidepressants, skeletal muscle relaxants, synthetic cannabinoids, opiate agonists/partial agonists/antagonists, and antimigraine agents therapeutic classes. For the past 12-month self-reported prescribed analgesic use, QPR patients were less accurate regarding anticonvulsants (κ=0.59), opiate agonists/partial agonists/antagonists (κ=0.57), and antimigraine agents use (κ=0.39).
Discussion: Information about current prescribed analgesic medication use as reported by CP patients was accurate for the main therapeutic drug classes used in CP management. Accuracy of the past year self-reported prescribed analgesic use was somewhat lower but only for certain classes of medication, the concordance being good on all the others.
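The agreement statistic reported in this linkage study is Cohen's kappa; a minimal sketch of its calculation for one therapeutic class is shown below. The toy data and function name are hypothetical and are not drawn from the QPR/RAMQ linkage itself.

```python
# Minimal Cohen's kappa for agreement between self-report and claims data (illustrative).
def cohens_kappa(self_report, claims):
    """Equal-length lists of 0/1 flags: drug class reported as used vs. dispensed in claims."""
    n = len(self_report)
    observed = sum(s == c for s, c in zip(self_report, claims)) / n
    p_self = sum(self_report) / n          # proportion reporting use
    p_claims = sum(claims) / n             # proportion with a claim
    expected = p_self * p_claims + (1 - p_self) * (1 - p_claims)   # chance agreement
    return (observed - expected) / (1 - expected)

# Toy example (8 patients): observed agreement 0.75, chance agreement 0.5, kappa 0.5.
print(cohens_kappa([1, 1, 0, 0, 1, 0, 1, 0], [1, 0, 0, 0, 1, 0, 1, 1]))
```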
abstract_id: PUBMED:9589544
"Whiplash associated disorders: redefining whiplash and its management" by the Quebec Task Force. A critical evaluation. Study Design: The two publications of the Quebec Task Force on Whiplash-Associated Disorders were evaluated by the authors of this report for methodologic error and bias.
Objectives: To determine whether the conclusions and recommendations of the Quebec Task Force on Whiplash-Associated Disorders regarding the natural history and epidemiology of whiplash injuries are valid.
Summary Of The Background Data: In 1995, the Quebec Task Force authored a text (published by the Societe de l'Assurance Automobile du Quebec) and a pullout supplement in Spine entitled "Whiplash-Associated Disorders: Redefining Whiplash and its Management." The Quebec Task Force concluded that whiplash injuries result in "temporary discomfort," are "usually self-limited," and have a "favorable prognosis," and that the "pain [resulting from whiplash injuries] is not harmful."
Methods: The authors of the current report reviewed the text and the supplement for methodologic flaws that may have threatened the validity of the conclusions and recommendations of the Quebec Task Force.
Results: Five distinct and significant categories of methodologic error were found. They were: selection bias, information bias, confusing and unconventional use of terminology, unsupported conclusions and recommendations, and inappropriate generalizations from the Quebec Cohort Study.
Conclusion: The validity of the conclusions and recommendations of the Quebec Task Force regarding the natural course and epidemiology of whiplash injuries is questionable. This lack of validity stems from the presence of bias, the use of unconventional terminology, and conclusions that are not concurrent with the literature the Task Force accepted for review. Although the Task Force set out to redefine whiplash and its management, striving for the desirable goal of clarification of the numerous contentious issues surrounding the injury, its publications instead have confused the subject further.
abstract_id: PUBMED:31154033
ICD-10 Codes for the Study of Chronic Overlapping Pain Conditions in Administrative Databases. Chronic overlapping pain conditions (COPCs) are a set of painful chronic conditions characterized by high levels of co-occurrence. It has been hypothesized that COPCs co-occur in many cases because of common neurobiological vulnerabilities. In practice, most research on COPCs has focused upon a single index condition with little effort to assess comorbid painful conditions. This likely means that important phenotypic differences within a sample are obscured. The International Classification of Diseases (ICD) coding system contains many diagnostic classifications that may be applied to individual COPCs, but there is currently no agreed upon set of codes for identifying and studying each of the COPCs. Here we seek to address this issue through three related projects 1) we first compile a set of ICD-10 codes from expert panels for ten common COPCs, 2) we then use natural language searches of medical records to validate the presence of COPCs in association with the proposed expert codes, 3) finally, we apply the resulting codes to a large administrative medical database to derive estimates of overlap between the ten conditions as a demonstration project. The codes presented can facilitate administrative database research on COPCs. PERSPECTIVE: This article presents a set of ICD-10 codes that researchers can use to explore the presence and overlap of COPCs in administrative databases. This may serve as a tool for estimating samples for research, exploring comorbidities, and treatments for individual COPCs, and identifying mechanisms associated with their overlap.
abstract_id: PUBMED:33987504
Identifying cases of chronic pain using health administrative data: A validation study. Background: Most prevalence estimates of chronic pain are derived from surveys and vary widely, both globally (2%-54%) and in Canada (6.5%-44%). Health administrative data are increasingly used for chronic disease surveillance, but their validity as a source to ascertain chronic pain cases is understudied.
Aim: The aim of this study was to derive and validate an algorithm to identify cases of chronic pain as a single chronic disease using provincial health administrative data.
Methods: A reference standard was developed and applied to the electronic medical records data of a Newfoundland and Labrador general population sample participating in the Canadian Primary Care Sentinel Surveillance Network. Chronic pain algorithms were created from the administrative data of patient populations with chronic pain, and their classification performance was compared to that of the reference standard via statistical tests of selection accuracy.
Results: The most performant algorithm for chronic pain case ascertainment from the Medical Care Plan Fee-for-Service Physicians Claims File was one anesthesiology encounter ever recording a chronic pain clinic procedure code OR five physician encounter dates recording any pain-related diagnostic code in 5 years with more than 183 days separating at least two encounters. The algorithm demonstrated 0.703 (95% confidence interval [CI], 0.685-0.722) sensitivity, 0.668 (95% CI, 0.657-0.678) specificity, and 0.408 (95% CI, 0.393-0.423) positive predictive value. The chronic pain algorithm selected 37.6% of a Newfoundland and Labrador provincial cohort.
Conclusions: A health administrative data algorithm was derived and validated to identify chronic pain cases and estimate disease burden in residents attending fee-for-service physician encounters in Newfoundland and Labrador.
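The case definition described in the Results is essentially a small rule over a patient's claims history. A minimal sketch of how such a rule could be applied is shown below; the field names and code sets are placeholders, since the study's actual procedure and diagnostic code lists are not reproduced in the abstract.

```python
from datetime import date

# Placeholder code sets: the study's real chronic pain clinic procedure
# codes and pain-related diagnostic codes are not listed in the abstract.
CHRONIC_PAIN_CLINIC_PROCEDURES = {"CP_CLINIC"}
PAIN_DIAGNOSTIC_CODES = {"PAIN_DX"}

def is_chronic_pain_case(claims):
    """Apply the abstract's case definition to one patient's claims.

    `claims` is a list of dicts with keys 'date', 'specialty', 'procedure',
    'diagnosis'. Criterion 1: any anesthesiology encounter recording a
    chronic pain clinic procedure code. Criterion 2: at least five encounter
    dates with a pain-related diagnosis within 5 years, with more than
    183 days separating at least two of them.
    """
    if any(c["specialty"] == "anesthesiology"
           and c["procedure"] in CHRONIC_PAIN_CLINIC_PROCEDURES
           for c in claims):
        return True

    pain_dates = sorted({c["date"] for c in claims
                         if c["diagnosis"] in PAIN_DIAGNOSTIC_CODES})
    for i, start in enumerate(pain_dates):
        window = [d for d in pain_dates[i:] if (d - start).days <= 5 * 365]
        if len(window) >= 5 and (window[-1] - window[0]).days > 183:
            return True
    return False

# Toy patient with five pain-coded visits spread over one year:
toy = [{"date": date(2019, m, 1), "specialty": "gp",
        "procedure": "", "diagnosis": "PAIN_DX"} for m in (1, 3, 6, 9, 12)]
print(is_chronic_pain_case(toy))  # True
```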
abstract_id: PUBMED:25142365
Costs of occupational injuries and diseases in Québec. Problem: Occupational injuries and diseases are costly for companies and for society as a whole. This study estimates the overall costs of occupational injuries and diseases in Québec, both human and financial, during the period from 2005 to 2007.
Method: The human capital method is used to estimate lost productivity. A health indicator, the disability-adjusted life year (DALY), is used in combination with the value of a statistical life (VSL) to estimate, in monetary terms, the pain and suffering costs resulting from occupational injuries.
Results: The costs of occupational injuries and diseases occurring in a single year in Québec are estimated at $4.62 billion, on average, for the 2005-2007 period. Of this amount, approximately $1.78 billion is allocated to financial costs and $2.84 billion to human costs. The average cost per case is $38,355. In view of the limitations identified in the study, it can be argued that this is an underestimation of the costs. Result analysis allows the injury/disease descriptors and industries for which the costs are highest to be identified.
Practical Applications: The results of these estimates are a relevant source of information for helping to determine research directions in OHS and prevention. The methodology used can be replicated for the purposes of estimating the costs of injuries and diseases in other populations.
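The headline figures above can be cross-checked with simple arithmetic: the human-cost share of the total, and the number of cases implied by the average cost per case. The sketch below merely restates that arithmetic; it is not taken from the paper.

```python
total_cost = 4.62e9      # average annual cost, 2005-2007 (CAD)
financial = 1.78e9       # financial (productivity and related) component
human = 2.84e9           # human (pain and suffering, DALY x VSL) component
cost_per_case = 38_355   # average cost per case (CAD)

print(f"human share of total: {human / total_cost:.1%}")              # ~61.5%
print(f"implied cases per year: {total_cost / cost_per_case:,.0f}")   # ~120,000
```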
abstract_id: PUBMED:36638305
The Canadian version of the National Institutes of Health minimum dataset for chronic low back pain research: reference values from the Quebec Low Back Pain Study. Abstract: The National Institutes of Health (NIH) minimum dataset for chronic low back pain (CLBP) was developed in response to the challenge of standardizing measurements across studies. Although reference values are critical in research on CLBP to identify individuals and communities at risk of poor outcomes such as disability, no reference values have been published for the Quebec (Canada) context. This study aimed to (1) provide reference values for the Canadian version of the NIH minimum dataset among individuals with CLBP in Quebec, both overall and stratified by gender, age, and pain impact stratification (PIS) subgroups, and (2) assess the internal consistency of the minimum dataset domains (pain interference, physical function, emotional distress or depression, sleep disturbance, and PIS score). We included 2847 individuals living with CLBP who completed the baseline web survey of the Quebec Low Back Pain Study (age: 44.0 ± 11.2 years, 48.1% women) and were recruited through social media and healthcare settings. The mean score was 6.1 ± 1.8 for pain intensity. Pain interference, physical function, emotional distress or depression, sleep disturbance, and PIS scores were 12.9 ± 4.1, 14.4 ± 3.9, 9.8 ± 4.4, 13.0 ± 3.6, and 26.4 ± 6.6, respectively. Emotional distress or depression showed floor effects. Good-to-excellent internal consistency was found overall and by language, gender, and age subgroups for all domains (alpha: 0.81-0.93) and poor-to-excellent internal consistency for PIS subgroups (alpha: 0.59-0.91). This study presents reference values and recommendations for using the Canadian version of the NIH minimum dataset for CLBP that can be useful for researchers and clinicians.
abstract_id: PUBMED:28280406
Development and Implementation of a Registry of Patients Attending Multidisciplinary Pain Treatment Clinics: The Quebec Pain Registry. The Quebec Pain Registry (QPR) is a large research database of patients suffering from various chronic pain (CP) syndromes who were referred to one of five tertiary care centres in the province of Quebec (Canada). Patients were monitored using common demographics, identical clinical descriptors, and uniform validated outcomes. This paper describes the development, implementation, and research potential of the QPR. Between 2008 and 2013, 6902 patients were enrolled in the QPR, and data were collected prior to their first visit at the pain clinic and six months later. More than 90% of them (mean age ± SD: 52.76 ± 4.60, females: 59.1%) consented that their QPR data be used for research purposes. The results suggest that, compared to patients with serious chronic medical disorders, CP patients referred to tertiary care clinics are more severely impaired in multiple domains including emotional and physical functioning. The QPR is also a powerful and comprehensive tool for conducting research in a "real-world" context with 27 observational studies and satellite research projects which have been completed or are underway. It contains data on the clinical evolution of thousands of patients and provides the opportunity of answering important research questions on various aspects of CP (or specific pain syndromes) and its management.
abstract_id: PUBMED:7732471
The Quebec Back Pain Disability Scale. Measurement properties. Study Design: The Quebec Back Pain Disability Scale is a 20-item self-administered instrument designed to assess the level of functional disability in individuals with back pain. The scale was administered as part of a larger questionnaire to a group of 242 back pain patients. Follow-up data were obtained after several days and after 2 to 6 months.
Objectives: The goal of this study was to determine whether the Quebec scale is a reliable, valid, and responsive measure of disability in back pain, and to compare it with other disability scales.
Summary Of Background Data: A number of functional disability scales for back pain are being used, but their conceptual validity is uncertain. Unlike most published instruments, the Quebec scale was constructed using a conceptual approach to disability assessment and empirical methods of item development, analysis, and selection.
Methods: The authors calculated test-retest and internal consistency coefficients, evaluated construct validity of the scale, and tested its responsiveness against a global index of change. Direct comparisons with the Roland, Oswestry, and SF-36 scales were carried out.
Results: Test-retest reliability was 0.92, and Cronbach's alpha coefficient was 0.96. The scale correlated as expected with other measures of disability, pain, medical history, and utilization variables, work-related variables, and socio-demographic characteristics. Significant changes in disability over time, and differences in change scores between patients that were expected to differ in the direction of change, were found.
Conclusions: The Quebec scale can be recommended as an outcome measure in clinical trials, and for monitoring the progress of individual patients participating in treatment or rehabilitation programs.
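The reliability figures quoted above (test-retest 0.92, Cronbach's α 0.96) follow standard definitions; the short sketch below computes Cronbach's α from item scores using the usual item-variance formula, with fabricated responses rather than the study's data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns of equal length.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    Population variances are used for simplicity.
    """
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Three fabricated 0-5 items answered by six respondents (not study data):
items = [[0, 1, 2, 3, 4, 5],
         [1, 1, 2, 4, 4, 5],
         [0, 2, 2, 3, 5, 5]]
print(round(cronbach_alpha(items), 2))
```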
abstract_id: PUBMED:24825473
The use of palliative sedation: A comparison of attitudes of French-speaking physicians from Quebec and Switzerland. Objective: Previous literature has suggested that laws and regulations may impact the use of palliative sedation. Our present study compares the attitudes of French-speaking physicians practicing in the Quebec and Swiss environments, where different laws are in place regarding physician-assisted suicide.
Method: Data were drawn from two prior studies, one by Blondeau and colleagues and another by Beauverd and coworkers, employing the same two-by-two experimental design with length of prognosis and type of suffering as independent variables. Both the effect of these variables and the effect of their interaction on Swiss and Quebec physicians' attitudes toward sedation were compared. The written comments of respondents were submitted to a qualitative content analysis and summarized in a comparative perspective.
Results: The analysis of variance showed that only the type of suffering had an effect on physicians' attitudes toward sedation. The results of the Wilcoxon test indicated that the attitudes of physicians from Quebec and Switzerland tended to be different for two vignettes: long-term prognosis with existential suffering (p = 0.0577) and short-term prognosis with physical suffering (p = 0.0914). In both cases, the Swiss physicians were less prone to palliative sedation.
Significance Of Results: The attitudes of physicians from Quebec and Switzerland toward palliative sedation, particularly regarding prognosis and type of suffering, seem similar. However, the results suggest that physicians from Quebec could be slightly more open to palliative sedation, even though most were not in favor of this practice as an answer to end-of-life existential suffering.
Answer: The Quebec provincial administrative database, specifically the Régie de l'assurance maladie du Québec (RAMQ) database, has been evaluated for its validity as a source for research on chronic non-cancer pain. According to a study that tested the validity of International Classification of Diseases, Ninth Revision (ICD-9) coding algorithms against pain specialist-established diagnostic data from the Quebec Pain Registry, the RAMQ database was found to be a valid source for identifying patients with certain types of chronic pain disorders, including chronic lumbar pain, chronic back pain, and chronic neck/back pain. The sensitivity and specificity of these algorithms were found to be reasonably high for these conditions (PUBMED:26105572).
However, the study also noted that the RAMQ database had limitations and showed low sensitivity for identifying other types of chronic pain syndromes such as complex regional pain syndrome, fibromyalgia, and neuropathic pain. Therefore, while the database can be a valuable resource for conducting research on some chronic pain disorders, caution should be exercised when studying other types of chronic pain syndromes due to the potential for under-identification (PUBMED:26105572).
In summary, the Quebec provincial administrative database is a valid source for research on certain types of chronic non-cancer pain, but its validity varies depending on the specific pain syndrome being studied. Researchers should be aware of the database's limitations and consider them when designing studies and interpreting results. |
Instruction: Are there high-risk subgroups for isolated locoregional failure in patients who had T1/2 breast cancer with one to three positive lymph nodes and received mastectomy without radiotherapy?
Abstracts:
abstract_id: PUBMED:10369079
Synergistic cooperation between c-Myc and Bcl-2 in lymph node progression of T1 human breast carcinomas. The overexpression of Bcl-2, an anti-apoptotic oncogene, identifies human T1 breast cancer patients who have an increased risk of lymph-node metastasis. We examined in these patients (n = 142) whether the c-Myc oncogene influences metastatic progression in conjunction or not with Bcl-2 expression and the loss of apoptosis in tumors. The association between Bcl-2 and lymph-node metastasis was only significant when c-Myc was concomitantly expressed (chi2 test, p = 0.008). Moreover, very large associations (pOR = 6.4) between c-Myc and lymph-node metastasis were observed among Bcl-2 positive tumors and tumors with loss of apoptosis (pOR = 8.4). In contrast, the metastatic advantage linked to Bcl-2 was decreased (pOR = 2) when c-Myc was not coexpressed. It is concluded that the synergism between Bcl-2 and c-Myc oncogenes may promote metastasis in breast tumors, linked to loss of apoptosis.
abstract_id: PUBMED:9816145
Apoptosis loss and bcl-2 expression: key determinants of lymph node metastases in T1 breast cancer. The Bcl-2 proto-oncogene extends cell survival but does not confer any proliferative advantage to cells that express it. Thus, the loss of apoptosis may have a role in progression by allowing the acquisition of additional mutations. To determine whether apoptosis loss at diagnosis is associated with the metastatic advantage of ductal breast carcinomas and to examine the relationship between Bcl-2 expression, p53, and tumor cell death status, we examined tumor samples from 116 patients diagnosed with T1 (2 cm or less) breast cancer with (n = 49) or without (n = 67) lymph node metastases. Apoptosis loss in histological sections was defined as <1% of tumor nuclei stained with terminal deoxynucleotidyl transferase labeled with biotin. We studied the expression of Bcl-2 and p53 by immunohistochemistry and, in 37 cases, p53 mutations by single-strand conformational polymorphism analysis and cycle sequencing. Multivariate logistic regression modeling was used to estimate prevalence odds ratios (pORs) for apoptosis loss and presence of lymph node metastases. Patients with marked apoptosis loss in their tumor cells were about 5 times more likely to present lymph node metastases than those with no apoptosis loss in their tumor cells (adjusted pOR, 4.7; 95% confidence interval, 1.4-15.6; trend test, P = 0.008). Bcl-2 expression was strongly associated with both apoptosis loss (pOR, 6.9; trend test, P < 0.0001) and presence of lymph node metastases (pOR, 5.7; trend test, P = 0.002). These associations were more evident in histological grade I and II tumors than in poorly differentiated histological grade III tumors and in p53-negative tumors than in p53-positive tumors. This study demonstrates for the first time that the lymphatic progression of T1 human breast cancer is strongly related to apoptosis loss.
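Both abstracts above report prevalence odds ratios. In the unadjusted case these reduce to the cross-product of a 2 × 2 exposure-by-outcome table; the adjusted values come from multivariate logistic regression, which the sketch below does not attempt to reproduce. The counts are invented for illustration.

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio with a Woolf 95% confidence interval.

    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts: Bcl-2-positive vs -negative tumors by nodal status.
print(odds_ratio(a=30, b=20, c=19, d=47))
```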
abstract_id: PUBMED:34029847
Photobiomodulation therapy combined with radiotherapy in the treatment of triple-negative breast cancer-bearing mice. This work investigated the effect of photobiomodulation therapy (PBM) combined with radiotherapy (RT) on triple-negative breast cancer (TNBC)-bearing mice. Female BALB/c mice received 4T1 cells into a mammary fat pad. Local RT was performed with a total dose of 60 Gy divided into 4 consecutive sessions of 15 Gy. For PBM, a red laser was used in three different protocols: (i) single exposure delivering 150 J/cm² (24 h after the last RT session), and (ii) radiant exposure of 150 J/cm² or (iii) fractionated radiant exposure of 37.5 J/cm² (after each RT session). Tumor volume, complete blood cell count, clinical condition, metastasis, and survival of animals were monitored during 3 weeks post-RT. Our results demonstrated that regardless of the protocol, PBM arrested the tumor growth, improved the clinical condition, and prevented hemolytic anemia. Besides, although PBM groups have exhibited a high neutrophil:lymphocyte ratio (NLR), they decreased the number of lung metastases and enhanced mouse survival. Worthy of note, PBM should be used along with the RT sessions in higher radiant exposures, since PBM at 150 J/cm² per session significantly extended the survival rate. Together, these data suggest PBM could be a potential ally to RT to fight TNBC.
abstract_id: PUBMED:33058428
A three-dimensional in vivo model of breast cancer using a thermosensitive chitosan-based hydrogel and the 4T1 cell line in Balb/c. Two-dimensional (2D) models of breast cancer still show limited success, whereas three-dimensional (3D) models provide conditions more similar to the tumor for growth of cancer cells. In this regard, a 3D in vivo model of breast cancer using 4T1 cells and a chitosan-based thermosensitive hydrogel was designed. Chitosan/β-glycerol phosphate hydrogel (Ch/β-GP) was prepared with a final ratio of 2% and 10%. The hydrogel properties were examined by Fourier transformed infrared spectroscopy, MTT assay, pH, scanning electron microscopy, and biodegradability assay. The 3D model of breast cancer was induced by subcutaneous injection of 1 × 10⁶ 4T1 cells in 100 μl hydrogel, and the 2D model by injection of 1 × 10⁶ 4T1 cells in 100 μl phosphate-buffered saline (PBS). After 3 weeks, induced tumors were evaluated by size and weight determination, ultrasound, hematoxylin and eosin and Masson's trichrome staining, and evaluation of cancer stem cells with CD44 and CD24 markers. The results showed that hydrogel with physiological pH had no cytotoxicity. In the 3D model, tumor size and weight increased significantly (p ≤ .001) in comparison with the 2D model. Histological and ultrasound analysis showed that the 3D tumor model was more similar to breast cancer. Expression of CD44 and CD24 markers in the 3D model was higher than in the 2D model (p ≤ .001). This 3D in vivo model of breast cancer mimicked the native tumor and showed malignant tissue properties. Therefore, the use of such models can be effective in various cancer studies, especially in the field of cancer stem cells.
abstract_id: PUBMED:32433929
Flavonoid, stilbene and diarylheptanoid constituents of Persicaria maculosa Gray and cytotoxic activity of the isolated compounds. Persicaria maculosa (Polygonaceae) has been used as an edible and medicinal plant since ancient times. As a result of multistep chromatographic purifications, chalcones [2'-hydroxy-3',4',6'-trimethoxychalcone (1), pashanone (2), pinostrobin chalcone (3)], flavanones [6-hydroxy-5,7-dimethoxyflavanone (4), pinostrobin (5), onysilin (6), 5-hydroxy-7,8-dimethoxyflavanone (7)], flavonol [3-O-methylgalangin (8)], stilbene [persilben (9)], diarylheptanoids [1,7-diphenylhept-4-en-3-one (10), dihydroyashabushiketol (12), yashabushidiol B (13)] and 3-oxo-α-ionol-glucoside (11) were isolated from P. maculosa. The present paper reports for the first time the occurrence of diarylheptanoid-type constituents in the family Polygonaceae. Cytotoxicity of 1-5, 7 and 9-11 on 4T1 mouse triple negative breast cancer cells was assayed by MTT test. None of the tested compounds reduced the cell viability to less than 80% of the control. On non-tumorigenic D3 human brain endothelial cells, a decrease in cell viability was observed for 1 and 2. In further impedance measurements on 4T1 and D3 cells, a concentration-dependent decrease in the cell index of both cell types was demonstrated for 1, while 2 proved to be toxic only to endothelial cells.
abstract_id: PUBMED:30426312
Immunoelectrochemical detection of the human epidermal growth factor receptor 2 (HER2) via gold nanoparticle-based rolling circle amplification. The authors describe an adapted rolling circle amplification (RCA) method for the determination of human epidermal growth factor receptor 2 (HER2). This method (which is termed immunoRCA) combines an immunoreaction with DNA-based signal amplification. Gold nanoparticles (AuNPs) were loaded with antibodies against HER2 and DNA, which then fulfill the functions of recognizing HER2 and achieving signal amplification. The DNA serves as a primer to trigger RCA. This results in formation of a long DNA containing hundreds of copies of the circular DNA sequence on the electrode surface. Then, molybdate is added, which reacts with the phosphate group of the long DNA to generate the redox-active molybdophosphate. This, in turn, results in an increased current and, thus, in strongly increased sensitivity of the immunoassay. The change in current intensity is linearly related to the logarithm of the HER2 concentration in the range from 1 to 200 pg·mL⁻¹, and the detection limit is 90 fg·mL⁻¹ (at an S/N ratio of 3). The method was applied to the determination of HER2 in serum samples from breast cancer patients, and the results correlated well with those obtained by an ELISA. The method was further successfully applied to the determination of HER2 in HER2-expressing mouse breast cancer 4T1 cells. Conceivably, this strategy may be adapted to other DNA amplification methods and also may be used for the determination of other proteins and biomarkers by using the appropriate antibodies. Graphical abstract: Schematic presentation of an adapted rolling circle amplification (RCA) strategy for the electrochemical detection of human epidermal growth factor receptor 2 (HER2), termed "immunoRCA" utilizing gold nanoparticles (AuNPs). Ab stands for antibody, Phi29 is a DNA polymerase, dNTP represents deoxynucleotides, and SWV stands for square wave voltammetry.
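The quantification step above rests on a calibration of current change against log-concentration. The sketch below shows one way such a calibration could be fitted and inverted for an unknown sample; the calibration points are invented, since the actual calibration data are not given in the abstract.

```python
import math

# Invented calibration points: HER2 concentration (pg/mL) vs current change (uA).
conc = [1, 10, 50, 100, 200]
delta_i = [0.8, 1.9, 2.7, 3.1, 3.4]

# Least-squares fit of delta_i = slope * log10(conc) + intercept.
x = [math.log10(c) for c in conc]
n = len(x)
mx, my = sum(x) / n, sum(delta_i) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, delta_i))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

def concentration_from_current(di):
    """Invert the fitted calibration curve for an unknown sample."""
    return 10 ** ((di - intercept) / slope)

print(round(concentration_from_current(2.5), 1))  # pg/mL, illustrative
```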
abstract_id: PUBMED:21948234
Phase I trial of adoptive cell transfer with mixed-profile type-I/type-II allogeneic T cells for metastatic breast cancer. Purpose: Metastatic breast cancer (MBC) response to allogeneic lymphocytes requires donor T-cell engraftment and is limited by graft-versus-host disease (GVHD). In mice, type-II-polarized T cells promote engraftment and modulate GVHD, whereas type-I-polarized T cells mediate more potent graft-versus-tumor (GVT) effects. This phase I translational study evaluated adoptive transfer of ex vivo costimulated type-I/type-II (T1/T2) donor T cells with T-cell-depleted (TCD) allogeneic stem cell transplantation (AlloSCT) for MBC.
Experimental Design: Patients had received anthracycline, taxane, and antibody therapies, had been treated for metastatic disease, and had a human leukocyte antigen (HLA)-identical sibling donor. Donor lymphocytes were costimulated ex vivo with anti-CD3/anti-CD28 antibody-coated magnetic beads in interleukin (IL)-2/IL-4-supplemented media. Patients received reduced intensity conditioning, donor stem cells and T1/T2 cells, and monitoring for toxicity, engraftment, GVHD, and tumor response; results were compared with historical controls, identically treated except for T1/T2 product infusions.
Results: Mixed type-I/type-II CD4(+) T cells predominated in T1/T2 products. Nine patients received T1/T2 cells at dose level 1 (5 × 10(6) cells/kg). T-cell donor chimerism reached 100% by a median of 28 days. Seven (78%) developed acute GVHD. At day +28, five patients had partial responses (56%) and none had MBC progression; thereafter, two patients had continued responses. Donor T-cell engraftment and tumor responses appeared faster than in historical controls, but GVHD rates were similar and responders progressed early, often following treatment of acute GVHD.
Conclusion: Allogeneic T1/T2 cells were safely infused with TCD-AlloSCT, appeared to promote donor engraftment, and may have contributed to transient early tumor responses.
abstract_id: PUBMED:30652350
A novel bioreactor for combined magnetic resonance spectroscopy and optical imaging of metabolism in 3D cell cultures. Purpose: Fluorescence lifetime imaging microscopy (FLIM) of endogenous fluorescent metabolites permits the measurement of cellular metabolism in cell, tissue and animal models. In parallel, magnetic resonance spectroscopy (MRS) of dynamic nuclear (hyper)polarized (DNP) ¹³C-pyruvate enables measurement of metabolism at larger in vivo scales. Presented here are the design and initial application of a bioreactor that connects these 2 metabolic imaging modalities in vitro, using 3D cell cultures.
Methods: The model fitting for FLIM data analysis and the theory behind a model for the diffusion of pyruvate into a collagen gel are detailed. The device is MRI-compatible, including an optical window, a temperature control system and an injection port for the introduction of contrast agents. Three-dimensional printing, computer numerical control machining and laser cutting were used to fabricate custom parts.
Results: Performance of the bioreactor is demonstrated for 4T1 murine breast cancer cells under glucose deprivation. Mean nicotinamide adenine dinucleotide (NADH) fluorescence lifetimes were 10% longer and hyperpolarized ¹³C lactate:pyruvate (Lac:Pyr) ratios were 60% lower for glucose-deprived 4T1 cells compared to 4T1 cells in normal medium. Looking at the individual components of the NADH fluorescent lifetime, τ1 (free NADH) showed no significant change, while τ2 (bound NADH) showed a significant increase, suggesting that the increase in mean lifetime was due to a change in bound NADH.
Conclusion: A novel bioreactor that is compatible with, and can exploit the benefits of, both FLIM and ¹³C MRS in 3D cell cultures for studies of cell metabolism has been designed and applied.
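The mean-lifetime comparison above is conventionally based on an amplitude-weighted average of the free and bound NADH components from a bi-exponential fit; the sketch below assumes that convention (the paper's exact fitting model is not stated in the abstract) and uses illustrative values only.

```python
def mean_lifetime(a1, tau1, a2, tau2):
    """Amplitude-weighted mean lifetime of a bi-exponential FLIM decay.

    tau1 ~ free NADH, tau2 ~ protein-bound NADH; a1, a2 are amplitude
    fractions from the fit. All numbers below are illustrative.
    """
    return (a1 * tau1 + a2 * tau2) / (a1 + a2)

control = mean_lifetime(a1=0.8, tau1=0.4, a2=0.2, tau2=2.5)    # ns
deprived = mean_lifetime(a1=0.8, tau1=0.4, a2=0.2, tau2=3.0)   # longer bound lifetime
print(f"{(deprived - control) / control:.1%} change in mean lifetime")
```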
abstract_id: PUBMED:36775221
Silver sulfide coated alginate radioenhancer for enhanced X-ray radiation therapy of breast cancer. A wide range of high-Z nanomaterials are fabricated to decrease radiation dose by sensitizing cells to irradiation through various mechanisms such as ROS generation enhancement. Alginate-coated silver sulfide nanoparticles (Ag2S@Alg) were synthesized and characterized by SEM, TEM, DLS, XRD, EPS, FT-IR, and UV-vis analysis techniques. Cytotoxicity of the nanoparticles was tested against HFF-2, MCF-7, and 4T1 cell lines for biocompatibility and radio-enhancement ability evaluation, respectively. Moreover, the hemolysis assay demonstrated that the nanoparticles were biocompatible and nontoxic. In vitro intracellular ROS generation and calcein AM/PI co-staining revealed induction of cancer cell death by the nanoradiosensitizer Ag2S@Alg. Further, histopathology results emphasized the tumor ablation capability of Ag2S@Alg. Silver anticancer properties were also recognized and combined with its radiosensitizing effect under X-ray irradiation.
abstract_id: PUBMED:24950740
High efficiency of alphaviral gene transfer in combination with 5-fluorouracil in a mouse mammary tumor model. Background: The combination of virotherapy and chemotherapy may enable efficient tumor regression that would be unachievable using either therapy alone. In this study, we investigated the efficiency of transgene delivery and the cytotoxic effects of alphaviral vector in combination with 5-fluorouracil (5-FU) in a mouse mammary tumor model (4T1).
Methods: Replication-deficient Semliki Forest virus (SFV) vectors carrying genes encoding fluorescent proteins were used to infect 4T1 cell cultures treated with different doses of 5-FU. The efficiency of infection was monitored via fluorescence microscopy and quantified by fluorometry. The cytotoxicity of the combined treatment with 5-FU and alphaviral vector was measured using an MTT-based cell viability assay. In vivo experiments were performed in a subcutaneous 4T1 mouse mammary tumor model with different 5-FU doses and an SFV vector encoding firefly luciferase.
Results: Infection of 4T1 cells with SFV prior to 5-FU treatment did not produce a synergistic anti-proliferative effect. An alternative treatment strategy, in which 5-FU was used prior to virus infection, strongly inhibited SFV expression. Nevertheless, in vivo experiments showed a significant enhancement in SFV-driven transgene (luciferase) expression upon intratumoral and intraperitoneal vector administration in 4T1 tumor-bearing mice pretreated with 5-FU: here, we observed a positive correlation between 5-FU dose and the level of luciferase expression.
Conclusions: Although 5-FU inhibited SFV-mediated transgene expression in 4T1 cells in vitro, application of the drug in a mouse model revealed a significant enhancement of intratumoral transgene synthesis compared with 5-FU untreated mice. These results may have implications for efficient transgene delivery and the development of potent cancer treatment strategies using alphaviral vectors and 5-FU.
Answer: The provided abstracts do not directly address the specific question of high-risk subgroups for isolated locoregional failure in patients with T1/2 breast cancer with one to three positive lymph nodes who underwent mastectomy without radiotherapy. The abstracts focus on various aspects of breast cancer research, including the role of Bcl-2 and c-Myc in lymph node progression of T1 breast carcinomas (PUBMED:10369079), the association of apoptosis loss with metastatic advantage (PUBMED:9816145), the effects of photobiomodulation therapy combined with radiotherapy in triple-negative breast cancer-bearing mice (PUBMED:34029847), the development of a three-dimensional in vivo model of breast cancer (PUBMED:33058428), cytotoxic activity of compounds from Persicaria maculosa on 4T1 mouse triple-negative breast cancer cells (PUBMED:32433929), the detection of HER2 in breast cancer cells (PUBMED:30426312), a phase I trial of adoptive cell transfer for metastatic breast cancer (PUBMED:21948234), a novel bioreactor for metabolic imaging in 3D cell cultures (PUBMED:30652350), the use of silver sulfide coated alginate as a radioenhancer for breast cancer radiation therapy (PUBMED:36775221), and the efficiency of alphaviral gene transfer combined with 5-fluorouracil in a mouse mammary tumor model (PUBMED:24950740).
To answer the question about high-risk subgroups for isolated locoregional failure, one would need to look at clinical studies or trials that specifically evaluate the outcomes of patients with T1 breast cancer with one to three positive lymph nodes who have undergone mastectomy without subsequent radiotherapy. Such studies would typically examine factors such as tumor size, grade, hormone receptor status, HER2 status, and other molecular markers to identify patients who may be at increased risk of locoregional recurrence and could potentially benefit from additional treatments such as radiotherapy. None of the provided abstracts contain this specific information. |
Instruction: Evaluation of arteriovenous malformations (AVMs) with transcranial color-coded duplex sonography: does the location of an AVM influence its sonographic detection?
Abstracts:
abstract_id: PUBMED:16239654
Evaluation of arteriovenous malformations (AVMs) with transcranial color-coded duplex sonography: does the location of an AVM influence its sonographic detection? Objective: The clinical value of transcranial color-coded duplex sonography (TCCS) in the evaluation of arteriovenous malformations (AVMs) has not yet been fully investigated. In this study, 54 intracranial AVMs confirmed by angiography were prospectively examined over 6 years. The purpose of the study was to describe their typical sonographic features and to define sensitivity for diagnosis with regard to the location of an AVM.
Methods: Transcranial color-coded duplex sonographic findings for 54 patients with intracranial AVMs are presented. The vessels of the circle of Willis were identified by location, course, and direction of flow on color flow images.
Results: In accordance with digital subtraction angiography, the intracranial AVMs could be visualized in 42 cases (sensitivity, 77.8%). The pathologic vessels were coded in different shades of blue and red, corresponding to varying blood flow directions in the AVM. The major feeding vessels could be easily identified. Hemodynamic parameters showing increased systolic and diastolic flow velocities and a decreased pulsatility index were better attainable with TCCS than with conventional transcranial Doppler sonography. Arteriovenous malformations located near the cortex, that is, in the parietal, frontal, occipital, and cerebellar regions of the brain, could not be visualized. In contrast, AVMs located in the basal regions were very easy to image (sensitivity, 88.9%). Additionally, TCCS proved useful for follow-up examinations postoperatively or after embolization.
Conclusions: Transcranial color-coded duplex sonography is a valuable noninvasive method for the diagnosis and long-term follow-up of intracranial AVMs. Arteriovenous malformations located in the axial imaging plane can be more easily detected. Nevertheless, TCCS should not be used as a screening method.
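The figures in this abstract are simple proportions; the sketch below reproduces the overall-sensitivity arithmetic and adds a Wilson score interval, which is one common (assumed, not stated) way of attaching a confidence interval to such an estimate.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

detected, total = 42, 54   # AVMs visualized / AVMs examined
print(f"overall sensitivity: {detected / total:.1%}")   # 77.8%
print(wilson_ci(detected, total))                       # interval around that estimate
```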
abstract_id: PUBMED:29249075
The characteristics of transcranial color-coded duplex sonography in children with cerebral arteriovenous malformation presenting with headache. Purpose: Cerebral arteriovenous malformations (AVM) are uncommon lesions. They most often present in childhood as intracranial hemorrhage. The aim of this report is to present the use of transcranial color-coded duplex sonography (TCCS) in the detection of AVMs in children suffering from headache.
Methods: This report describes five pediatric patients with headache and cerebral AVM which were initially discovered by TCCS. Diagnosis was confirmed by magnetic resonance imaging and digital subtraction angiography.
Results: In all patients, TCCS showed saccular enlargement of the vessels with a multicolored pattern corresponding to the different directions of blood flow. Spectral analysis showed significantly elevated systolic and diastolic flow velocities and a low resistance index.
Conclusions: In this report, we describe TCCS as a valuable non-invasive, harmless, low-cost, widely available method for the detection and follow-up of hemodynamic changes of AVMs in children with headache, before and after treatment.
abstract_id: PUBMED:10460438
Transcranial color-coded duplex sonography. Transcranial color-coded duplex sonography (TCCS) enables the reliable assessment of intracranial stenoses, occlusions, and cross-flow through the circle of Willis without using potentially hazardous compression tests. Transpulmonary ultrasound contrast agents (UCAs) increase the number of conclusive TCCS investigations, which suggests that UCAs may provide the conclusive evaluation of intracranial arteries in most patients with ischemic cerebrovascular disease. Further, contrast-enhanced TCCS may become an important tool both for the management of acute ischemic stroke by assessing intracranial hemodynamics and the displacement and diameter changes in supratentorial ventricles. TCCS is useful for the detection and monitoring of intracranial vasospasm, may visualize larger supratentorial hematomas with subcortical location and hemorrhagic transformation of ischemic infarcts, and provides the incidental detection of cerebral aneurysms and arteriovenous malformations. Second-generation UCAs and new ultrasound machines are very likely to further increase the frequency of conclusive TCCS studies. Power-based three-dimensional, contrast-enhanced TCCS is an important further development, which would make the method much less operator dependent. Site-targeted UCAs appear to provide a new and exciting method for ultrasonic diagnosis and management of patients with ischemic cerebrovascular disease.
abstract_id: PUBMED:16392059
Transcranial color-coded duplex ultrasonography of arteriovenous malformations Purpose: Using transcranial color coded duplex sonography (TCCS) it is possible to visualize intracranial arteriovenous malformations (AVMs). The purpose of this study is to describe their typical ultrasonographic features and to define sensitivity for diagnosis with regard to the localization of an AVM.
Materials And Methods: Over a period of six years we prospectively examined 54 intracranial AVMs confirmed by angiography. Using TCCS, the vessels of the circle of Willis were identified by location, course and direction of flow on color flow images. The examination was performed during the first three years of the study using the Acuson 128 XP 10 system, equipped with a sector transducer with a 2.0/2.5-MHz imaging frequency for the transcranial examination, and with a 7.0 MHz linear transducer for the extracranial examination. During the second three years of the study, transcranial examination was performed with an Acuson Sequoia 512 ultrasound system equipped with a 2-4 MHz phased array transducer.
Results: In accordance with digital subtraction angiography, the intracranial AVMs could be visualized in 42 cases (77.8 %). The major feeding vessels of the AVMs could be easily identified due to typical hemodynamic parameters showing increased systolic and diastolic flow velocities and decreased pulsatility index. We failed to visualize AVMs localized near the cortex, i. e. in the parietal, frontal, occipital and cerebellar regions of the brain. In contrast, 88.9 % of AVMs localized in the basal regions were very easy to image. Additionally, TCCS was useful for postoperative or postinterventional follow-up, although only a limited number of patients could be examined by TCCS in the post-treatment period.
Conclusion: TCCS is a noninvasive method for the diagnosis of intracranial AVMs and is potentially valuable for their long-term follow-up. However, further research is needed to establish TCCS as an imaging modality in the follow-up after treatment of AVMs. The method can be regarded as a useful supplement to the palette of established, noninvasive diagnostic techniques such as MRI and MRA. However, since TCCS cannot rule out an AVM, angiography is still the method of choice for the definitive diagnosis.
abstract_id: PUBMED:12418138
Evaluation of blood supply dynamics and the possibilities of imaging cerebral arteriovenous malformations (AVM) by means of transcranial color-coded duplex sonography (TCCS). The present study was carried out in 12 patients (7 males and 5 females) aged 16-56 years (mean age 35.9) with arteriovenous malformation diagnosed on cerebral angiography. The examination was performed by means of an ATL ULTRAMARK 9 with a low-frequency (2 MHz to 3 MHz) transducer. The possibility of imaging arteriovenous malformations by transcranial colour-coded duplex sonography and the blood flow values in the studied intracranial arteries were assessed. Statistical analysis was applied to the following blood flow parameters: mean velocity, peak systolic velocity, end diastolic velocity, pulsatility index (PI), resistance index (RI), pulsatility transmission index (PTI) and relative flow velocity (RFV). The malformations were imaged using TCCS in 9 (75%) patients. In the two-dimensional B-mode image the angiomas displayed echogenicity similar to the surrounding brain parenchyma and could not be precisely delineated. Colour-coded imaging made it possible to depict the vessel loops in the AVM nidus. In two other patients the presence of angiomas was confirmed by blood flow disturbances in the feeder vessel: increased blood flow, decreased pulsatility and resistance indices, and collateral circulation. Collateralization by the contralateral internal carotid artery via the anterior communicating artery was disclosed in 9 cases. The TCCS results corresponded closely to the angiographic findings. Collateral circulation via the posterior communicating artery ipsilateral to the AVM was found in 3 of 5 patients in whom it was diagnosed on cerebral angiography. Doppler findings in the patients were similar to those of conventional TCD examination. In the feeding vessels, a significant increase in blood flow velocity and decreased pulsatility and resistance indices were observed. The RFV value for the end diastolic velocity was higher than the RFV for the peak systolic velocity (p < 0.001) and mean velocity. These findings suggest that the relative increase in end diastolic velocity in the feeding vessel exceeds that of the mean and peak systolic velocities. It is thought that TCCS can be successfully used in the early diagnosis of cerebral arteriovenous malformations and as an instrument in follow-up examinations.
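The Doppler indices listed above are derived from the spectral velocity envelope. The definitions below are the standard ones for the pulsatility and resistance indices; RFV is sketched here as the feeder-to-contralateral velocity ratio, which is an assumption, since the abstract does not spell out its exact definition.

```python
def pulsatility_index(v_sys, v_dia, v_mean):
    """Gosling pulsatility index: PI = (Vs - Vd) / Vmean."""
    return (v_sys - v_dia) / v_mean

def resistance_index(v_sys, v_dia):
    """Pourcelot resistance index: RI = (Vs - Vd) / Vs."""
    return (v_sys - v_dia) / v_sys

def relative_flow_velocity(v_feeder, v_contralateral):
    """RFV taken here as feeder velocity over the homologous contralateral
    vessel's velocity (an assumption, not the paper's stated definition)."""
    return v_feeder / v_contralateral

# Illustrative feeder values in cm/s (not patient data):
vs, vd, vm = 140.0, 90.0, 110.0
print(pulsatility_index(vs, vd, vm), resistance_index(vs, vd))
print(relative_flow_velocity(110.0, 55.0))
```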
abstract_id: PUBMED:7491654
Transcranial color-coded duplex sonography in cerebral arteriovenous malformations. Background And Purpose: It is well known that significant changes in cerebral hemodynamics may occur during the treatment of cerebral arteriovenous malformations with the complication of intracerebral hemorrhage and parenchymal edema. We used transcranial color-coded duplex sonography to study alterations in blood flow velocities during staged embolization.
Methods: Forty-one patients aged 40 +/- 13 years (mean +/- SD) with angiographically proven cerebral arteriovenous malformations were studied. The blood flow velocities of the anterior, middle, and posterior cerebral arteries were measured in 16 patients with supratentorial arteriovenous malformations, both before the first and then after each successive embolization (three to seven treatments).
Results: In 29 of 41 patients (71%), transcranial color-coded duplex sonography satisfactorily revealed the malformations and their main feeders. After the final embolization, we found a reduction in the peak flow velocity in treated feeders of 23 +/- 28% compared with the values before the first embolization. The untreated feeders showed an increase in peak flow velocities of 12 +/- 23% as an expression of increased collateral flow. After the treatment of the supplying feeders, we observed a reduction in flow velocity of 25 +/- 13% in seven patients, with cross-filling of the arteriovenous malformation through the contralateral anterior cerebral artery and the anterior communicating artery.
Conclusions: The technical advantage of transcranial color-coded duplex sonography compared with transcranial Doppler sonography is that it allows the exact identification of different feeding arteries in arteriovenous malformations. Repeated measurements during stepwise embolization with corrected insonation angle are easily achieved, and noninvasive quantification of hemodynamic changes is possible. The method may be helpful in the planning of the different steps of embolization.
abstract_id: PUBMED:15228761
Clinical applications of transcranial color-coded duplex sonography. Transcranial color-coded duplex sonography (TCCS), in contrast to "blind" conventional transcranial Doppler sonography (TCD), enables a sonographer to outline the intracranial bony and parenchymal structures, visualize the basal cerebral arteries in color, and measure angle-corrected blood flow velocities in a specific site of the artery in question. This makes measurements of flow velocity more valid than those obtained with conventional TCD. TCCS is becoming a reliable tool for detecting the occlusion and narrowing of major intracranial arterial trunks. TCCS can image the collateral flow through the anterior and posterior communicating arteries in patients with unilateral, high-grade stenosis or occlusion of the extracranial internal carotid artery, without using potentially dangerous compression tests. Large and medium-sized arteriovenous malformations can also be detected with TCCS. The rapid sonographic assessment of cerebral hemodynamics in a neurosurgical patient with increased intracranial pressure can guide further management. The use of sonographic contrast agents can increase the number of conclusive TCCS studies in patients with insufficient acoustic windows.
abstract_id: PUBMED:24449730
Comparison of transcranial color-coded real-time sonography and contrast-enhanced color-coded sonography for detection and characterization of intracranial arteriovenous malformations. Objectives: To compare the diagnostic value of transcranial color-coded real-time sonography and contrast-enhanced color-coded sonography in detection and characterization of intracranial arteriovenous malformations.
Methods: Thirty-one patients highly suspected to have an intracranial arteriovenous malformation were imaged with real-time and contrast-enhanced sonography. With digital subtraction angiography as the reference standard, the ability to detect the malformations and accurately determine their size and location was compared between the two imaging techniques.
Results: One cavernous hemangioma and 30 intracranial arteriovenous malformations were imaged with real-time and contrast-enhanced sonography, all of which were confirmed by angiography. The detectability of contrast-enhanced sonography, especially for optimizing visualization of malformations located in the frontal, parietal, and occipital lobes, was higher than that of real-time sonography, although the overall number of malformations was too small to demonstrate significance. The sizes of the malformations (6 in the frontal lobe, 1 in the parietal lobe, and 1 in the occipital lobe) were underestimated by real-time sonography compared to angiography, whereas there was agreement in the sizes between contrast-enhanced sonography and angiography. The detection rates for the 30 arteriovenous malformations on contrast-enhanced and real-time sonography were 96.7% (29 of 30) and 70.0% (21 of 30), respectively (P = .008). Moreover, contrast-enhanced sonography was significantly superior to real-time sonography for detection of feeding arteries (real-time: 59.5% [22 of 37] versus contrast-enhanced: 83.7% [31 of 37]; P = .004). Although the feeding arteries showed increased peak systolic and end-diastolic velocities after contrast agent injection, there were no statistically significant differences in the velocities before and after injection.
Conclusions: Transcranial contrast-enhanced color-coded sonography is superior to color-coded real-time sonography for detection of intracranial arteriovenous malformations, particularly for lesions located in the frontal, parietal, and occipital lobes of the brain.
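Because each lesion was imaged with both techniques, the 96.7% versus 70.0% comparison is a paired one. An exact McNemar-style test on the discordant pairs is one plausible way to obtain a P value of that magnitude; the abstract does not state which test was actually used, and the split of discordant pairs below is an assumption consistent with the reported counts.

```python
from math import comb

def exact_mcnemar(b, c):
    """Two-sided exact McNemar test on discordant pairs.

    b = lesions detected only by technique A, c = only by technique B.
    Under H0 the discordant pairs follow Binomial(b + c, 0.5).
    """
    n, k = b + c, min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)

# 29/30 vs 21/30 detections: assume 8 lesions were seen only on
# contrast-enhanced sonography and none only on real-time sonography.
print(exact_mcnemar(8, 0))  # ~0.008
```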
abstract_id: PUBMED:12766355
Transcranial color duplex sonography in cerebrovascular disease: a systematic review. Transcranial color-coded duplex sonography (TCCS) enables the reliable assessment of intracranial stenoses (sensitivity 94-100%, specificity 99-100%), occlusions (middle cerebral artery: sensitivity 93-100%, specificity 98-100%) and cross-flow through the anterior (sensitivity 98%, specificity 100%) and posterior (sensitivity 84%, specificity 94%) communicating arteries without using potentially dangerous compression tests, as well as the midline shift in hemispheric infarcts. Ultrasound contrast agents (UCAs) increase the number of conclusive TCCS studies and allow the definite evaluation of intracranial arteries in most patients. TCCS is also useful for diagnosis and monitoring of vasospasm and detection of supratentorial hematomas (sensitivity 94%, specificity 95%) and aneurysms (sensitivity per patient 40-78%, specificity 90-91%), and may identify arteriovenous malformations. New developments are (1) UCAs that may increase the number of conclusive TCCS studies, (2) cerebral perfusion assessment, (3) measurement of arteriovenous cerebral transit time, which might enable the detection of small-vessel disease, and (4) site-targeted UCAs that may improve diagnosis and local drug delivery.
abstract_id: PUBMED:24309152
Transcranial color-coded Doppler assessment of cerebral arteriovenous malformation hemodynamics in patients treated surgically or with staged embolization. Objective: The etiology of hemodynamic disturbances following embolization or surgical resection of arteriovenous malformations (AVMs) has not been fully explained. The aim of the study was the assessment of the selected hemodynamic parameters in patients treated for cerebral AVMs using transcranial color-coded Doppler sonography (TCCS).
Materials And Methods: Forty-six adult patients (28 males, 18 females, aged 41 ± 13 years, mean ± SD) diagnosed with AVMs who were consecutively admitted to the Department of Neurosurgery between 2000 and 2012 and treated surgically or with staged embolization were enrolled in the study. All patients were examined with TCCS, assessing mean flow velocity (Vm), the pulsatility index (PI) and vasomotor reactivity (VMR) in all main intracranial arteries. The examined parameters were assessed in the vessel groups (feeding, ipsilateral and contralateral to the AVM) and they were compared between the examinations, i.e. at admission, within 24 h after the first embolization or surgical resection (I control), and before the second embolization (II control).
Results: In feeders which were completely obliterated or surgically resected--I control examination showed a nonsignificant Vm decrease. The difference between Vm before embolization and II control examination was significant (102.0 ± 47.8 cm/s vs 54.3 ± 19.4 cm/s, p<0.01). A significant increase in PI (0.72 ± 0.18 vs 0.94 ± 0.24, p<0.01) and VMR (1.80 ± 0.59 vs 2.78 ± 0.78, p<0.01) of feeding vessels was observed in I control. No further increase in PI or in VMR was observed. In embolized feeding vessels after partial AVM embolization I control examination showed a significant decrease in Vm (116.1 ± 32.6 cm/s vs 93.4 ± 33.0 cm/s, p<0.01). No further significant decrease in Vm was noted. The pulsatility index increased significantly (I control, 0.54 ± 0.11 vs 0.66 ± 0.15, p<0.01) and then decreased nonsignificantly (II control). No statistically significant differences were found in VMR values between pretreatment, I and II control examinations. Both Vm in the ipsilateral internal carotid artery and the ratio of Vm of the embolized vessel to Vm of the corresponding contralateral vessel were significantly higher in I control examination compared to II control examination (111.8 ± 44.0 cm/s vs 101.3 ± 40.6 cm/s, p<0.01; 1.63 ± 0.61 vs 1.37 ± 0.62, p<0.01; respectively). No statistically significant correlation was observed between the decrease in Vm or the increase in PI in the embolized vessels and the reduction of AVM volume. In the nonembolized feeding vessels after partial AVM embolization II control examination revealed the increase in Vm and a significant decrease in PI (0.71 ± 0.21 vs 0.62 ± 0.16, p<0.01) compared to I examination. No statistically significant changes in the VMR value in the nonembolized feeders between the pretreatment, I and II control examinations were noted.
Conclusions: A decrease in Vm and an increase in the PI are observed in the embolized feeding vessels after the first complete embolization or surgical resection, with the PI returning to normal values before Vm does. The observed decrease in Vm and increase in the PI in embolized AVM feeders after complete or partial embolization do not correlate with the extent of embolization. In these vessels, a relative increase in blood flow velocity is maintained within the first 24 h following embolization as compared to contralateral vessels. The increase in Vm is not related to disturbances in VMR. Blood redistribution to the nonembolized AVM feeders is observed after partial AVM embolization.
Answer: Yes, the location of an arteriovenous malformation (AVM) does influence its sonographic detection when using transcranial color-coded duplex sonography (TCCS). Studies have shown that AVMs located near the cortex, such as in the parietal, frontal, occipital, and cerebellar regions, could not be visualized using TCCS. In contrast, AVMs located in the basal regions of the brain were much easier to image, with a sensitivity of 88.9% (PUBMED:16239654; PUBMED:16392059). This suggests that the axial imaging plane is more favorable for the detection of AVMs with TCCS.
Furthermore, TCCS has been described as a valuable non-invasive method for the detection and follow-up of hemodynamic changes of AVMs in children with headache, before and after treatment (PUBMED:29249075). It has also been useful for the detection and monitoring of intracranial vasospasm, the visualization of larger supratentorial hematomas with a subcortical location, and the incidental detection of cerebral aneurysms and AVMs (PUBMED:10460438).
The sensitivity of TCCS in visualizing intracranial AVMs confirmed by angiography was reported to be 77.8%, with the major feeding vessels of the AVMs being easily identified due to typical hemodynamic parameters showing increased systolic and diastolic flow velocities and decreased pulsatility index (PUBMED:16392059). Additionally, TCCS has been shown to be effective in evaluating blood supply dynamics and imaging AVMs in a study of 12 patients, with a 75% success rate in imaging the malformations (PUBMED:12418138).
In summary, TCCS is a noninvasive diagnostic tool that can be used for the diagnosis and follow-up of intracranial AVMs, but its effectiveness is influenced by the location of the AVM, with basal region AVMs being more readily detected than those near the cortex. |
Instruction: Are culture-expanded autogenous bone cells a clinically reliable option for sinus grafting?
Abstracts:
abstract_id: PUBMED:19077153
Are culture-expanded autogenous bone cells a clinically reliable option for sinus grafting? Objectives: This prospective clinical study was designed to examine the healing process during the first 12 months after sinus grafting (SG) with autogenous culture-expanded bone cells (ABC) and bovine bone mineral (BBM) histomorphometrically and radiologically.
Material And Methods: Twenty-two sinuses of 12 patients (mean age 56.2+/-9.3 years) were grafted. Four weeks before SG, bone biopsies were obtained with a trephine burr and the bone cells were isolated and expanded. Every sinus was grafted with BBM and ABC. After 6 months, a biopsy was taken from each sinus and implants (n=82) were placed. These were uncovered after another 6 months and fitted with dentures. The percent newly formed bone (NB) and the NB-to-BBM contact area were determined on undecalcified histologic sections. The sinus graft volume was evaluated by dental CT after SG (CT 1), after implant placement (CT 2) and after implant uncovery (CT 3).
Results: Postoperative healing was uneventful. The NB was 17.9+/-4.6% and the contact area 26.8+/-13.1%. The graft volume (in mm³) was 2218.4+/-660.9 at the time of CT 1, 1694+/-470.4 at the time of CT 2 and 1347.9+/-376.3 at the time of CT 3 (P<.01). Three implants were lost after uncovery. Reimplantation and prosthodontic rehabilitation were successful throughout.
Conclusions: These results suggest that SG with ABC and BBM in a clinical setting provides a bony implant site which permits implant placement and will tolerate functional loading.
abstract_id: PUBMED:15533135
Bone formation following sinus grafting with autogenous bone-derived cells and bovine bone mineral in minipigs: preliminary findings. Bone formation in a sinus grafted with a cell-free scaffold requires the presence of local progenitor cells that differentiate into osteoblasts. The purpose of this study was to examine the effect of culture expanded autogenous bone-derived cells (ABC) with bovine bone mineral (BBM) on bone formation after single-stage sinus grafting in minipigs. Bone biopsies from the iliac crest were harvested 4 weeks prior to sinus grafting and ABC were culture expanded in vitro. The sinuses of five adult minipigs were grafted. In one sinus of each minipig the space between Schneider's membrane (SM) and the sinus wall was grafted with ABC (325,000 cells per sinus, on average) and BBM. In the other sinus, BBM alone was used. The animals were sacrificed after 12 weeks. One block of each grafted area was prepared by saw cutting and the amount of newly formed bone was analysed by micro-computed tomography (µCT). The addition of ABC to BBM significantly increased the amount of newly formed bone compared with BBM alone on µCT analysis (ABC+BBM: 29.86+/-6.45% vs. BBM: 22.51+/-7.28% (P=0.016)). Bone formation was increased near SM (ABC+BBM: 20.7+/-4.5% vs. BBM: 15.43+/-3.62% (P=0.009)) and in the middle zone of the grafting material (ABC+BBM: 31.63+/-7.74% vs. BBM: 22.5+/-7.91% (P=0.001)), but not near the local host bone (ABC+BBM: 37.23+/-8.23% vs. BBM: 28.42+/-12.54% (P=0.15)). These preliminary findings indicate that supplementation of cell-free grafting material with culture expanded ABC can stimulate bone formation in areas with low bone-forming capacity.
abstract_id: PUBMED:28650796
Comparison of Bovine Bone-Autogenic Bone Mixture Versus Platelet-Rich Fibrin for Maxillary Sinus Grafting: Histologic and Histomorphologic Study. Numerous grafting materials have been used to augment the maxillary sinus floor for the long-term stability and success of implant-supported prostheses. To enhance bone formation, adjunctive blood-borne growth factor sources have gained popularity in recent years. The present study compared the use of platelet-rich fibrin (PRF) and a bovine-autogenous bone mixture for maxillary sinus floor elevation. A split-face model was used to apply 2 different filling materials for maxillary sinus floor elevation in 22 healthy adult sheep. In group 1, a bovine and autogenous bone mixture was used; in group 2, PRF was used. The animals were killed at 3, 6, and 9 months. Histologic and histomorphologic examinations revealed new bone formation in group 1 at the third and sixth months. In group 2, new bone formation was observed only at the sixth month, and residual PRF remnants were identified. At the ninth month, host bone and new bone could not be distinguished from each other in group 1, and bone formation was found to be proceeding in group 2. PRF remnants still existed at the ninth month. In conclusion, the bovine bone and autogenous bone mixture is superior to PRF as a grafting material in sinus-lifting procedures.
abstract_id: PUBMED:30894978
Sinus lift: 3 years follow up comparing autogenous bone block versus autogenous particulated grafts. Background/purpose: The aim of this prospective randomized controlled clinical trial was to compare vertical bone gain and bone resorption after sinus graft procedures performed either with particulate or with autogenous bone block.
Material And Methods: Forty-one patients underwent sinus graft procedures with autogenous bone and were randomly assigned to one of two groups. The first group of 22 patients was treated with autogenous bone block with or without particulated bone, while in the second group of 19 patients sinus floor elevation was performed only with particulated autogenous bone. Linear measurements were recorded before surgery with a computed tomography scan, at surgery, and at 36 months after sinus lift grafting with a second computed tomography scan. To detect statistical differences, the Student t test was applied. Differences were considered significant if P values were < 0.05.
Results: There was a statistically significant difference in bone gain for the group treated with bone block grafts.
Conclusion: As a general clinical guideline, the clinician should prefer, wherever feasible, en bloc bone grafts for sinus floor augmentation procedures.
abstract_id: PUBMED:33057980
Autogenous bone augmentation from the zygomatic alveolar crest: a volumetric retrospective analysis in the maxilla. Background: Autogenous bone augmentation is the gold standard for the treatment of extended bone defects prior to implantation. Bone augmentation from the zygomatic crest is a valuable option with several advantages, but the current literature for this treatment is scant. The aim of this study was to evaluate the increase in bone volume after locoregional bone augmentation using autogenous bone from the zygomatic alveolar crest as well as the complications and success rate.
Results: Analysis of the augmented bone volume in seven patients showed a maximum volume gain of 0.97 cm³. An average of 0.54 cm³ of autogenous bone (SD 0.24 cm³; median: 0.54 cm³) was augmented. Implantation following bone augmentation was possible in all cases. Complications occurred in three patients.
Conclusion: The zygomatic alveolar crest is a valuable donor site for autogenous alveolar onlay grafting in a locoregional area such as the maxillary front. Low donor site morbidity, good access, and its suitable convexity make it a beneficial choice for autogenous bone augmentation.
abstract_id: PUBMED:24516817
Assessment of the autogenous bone graft for sinus elevation. Objectives: The posterior maxillary region often provides a limited bone volume for dental implants. Maxillary sinus elevation via inserting a bone graft through a window opened in the lateral sinus wall has become the most common surgical procedure for increasing the alveolar bone height in place of dental implants in the posterior maxillary region. The purpose of this article is to assess the change of bone volume and the clinical effects of dental implant placement in sites with maxillary sinus floor elevation and autogenous bone graft through the lateral window approach.
Materials And Methods: In this article, the analysis data were collected from 64 dental implants placed in 24 patients, involving 29 posterior maxillary sites with deficient bone volume, from June 2004 to April 2011 at the Department of Oral and Maxillofacial Surgery, Inha University Hospital. Panoramic views were taken before the surgery, after the surgery, 6 months after the surgery, and at the time of the final follow-up. The influence of the factors on the grafted bone material resorption rate was evaluated according to the patient characteristics (age and gender), graft material, implant installation stage, implant size, implant placement region, local infection, surgical complication, and residual alveolar bone height.
Results: The bone graft resorption rate of male patients at the final follow-up was significantly higher than that of female patients. Sites grafted with autogenous bone alone were significantly more resorbed than sites grafted with autogenous bone combined with Bio-Oss. The implant installation stage and residual alveolar height showed a significant correlation with the resorption rate of the maxillary sinus bone graft material. The implant success rate and survival rate were 92.2% and 100%, respectively.
Conclusion: Maxillary sinus elevation procedure with autogenous bone graft or autogenous bone in combination with Bio-Oss is a predictable treatment method for implant rehabilitation.
abstract_id: PUBMED:26017402
High-Resolution Three-Dimensional Computed Tomography Analysis of the Clinical Efficacy of Cultured Autogenous Periosteal Cells in Sinus Lift Bone Grafting. Background And Purpose: Sinus lift (SL) using cultured autogenous periosteal cells (CAPCs) combined with autogenous bone and platelet-rich plasma (PRP) was performed to evaluate the effect of cell administration on bone regeneration, by using high-resolution three-dimensional computed tomography (CT).
Materials And Methods: SL with autogenous bone and PRP plus CAPC [CAPC(+)SL] was performed in 23 patients. A piece of periosteum taken from the mandible was cultured in M199 medium with 10% fetal bovine serum (FBS) for 6 weeks. As control, 16 patients received SL with autogenous bone and PRP [CAPC(-)SL]. Three-dimensional CT imaging was performed before and 4 months and 1 year after SL, and stratification was performed based on CT numbers (HUs) corresponding to soft tissue and cancellous or cortical bone.
Results: The augmented bone in CAPC(+)SL revealed an increase in HUs corresponding to cancellous bone as well as a decrease in HUs corresponding to grafted cortical bone. In addition, HUs corresponding to cancellous bone in the graft bed were increased in CAPC(+)SL but were decreased in CAPC(-)SL. Insertion torque during implant placement was significantly higher in CAPC(+)SL.
Conclusion: By promoting bone anabolic activity both in augmented bone and graft bed, CAPCs are expected to aid primary fixation and osseointegration of implants in clinical applications.
abstract_id: PUBMED:25052732
Osteogenic efficacy of bone marrow concentrate in rabbit maxillary sinus grafting. Maxillary sinus grafting is required to increase bone volume in the atrophic posterior maxilla to facilitate dental implant placement. Grafting with autogenous bone (AB) is ideal, but additional bone harvesting surgery is unpleasant. Alternatively, bone substitutes have been used but they limit new bone formation. The strategy of single-visit clinical stem cell therapy using bone marrow aspirate concentrate (BMAC) to facilitate new bone formation has been proposed. This study aimed to assess the bone regeneration capacity of autologous BMAC mixed with bovine bone mineral (BBM) in maxillary sinus grafting. Twenty-four white New Zealand rabbits were used and their maxillary sinuses were randomly assigned for grafting with 4 different materials. Rates of new bone apposition in augmented sinuses were measured and bone histomorphometry was examined. A significant increase in the quantity of nucleated cells and colony-forming unit-fibroblasts was confirmed in BMAC. Mesenchymal stem cells in BMAC retained their in vitro multi-differentiation capability. Higher rates of mineral apposition in the early period were detected in BBM + BMAC and AB than in BBM alone, though the differences were not statistically significant. Graft volume/tissue volumes in BBM and BBM + BMAC were found to be higher than those in AB and sham.
abstract_id: PUBMED:24471029
Porcine study on the efficacy of autogenous tooth bone in the maxillary sinus. Objectives: This study sought to elucidate the effect of autogenous tooth bone material by experimenting on minipig's maxillary sinus and performing histological and histomorphometric analyses.
Materials And Methods: Five 18-24 month-old male minipigs were selected, and right maxillary sinuses were grafted with bone graft material made of their respective autogenous teeth extracted eight weeks earlier. The left sides were grafted with synthetic hydroxyapatite as control groups. All minipigs were sacrificed at 12 weeks after bone graft, which was known to be 1 sigma (σ) period for pigs. Specimens were evaluated histologically under a light microscope after haematoxylin-eosin staining followed by semi-quantitative study via histomorphometric analysis. The ratio of new bone to total area was evaluated using digital software for calculation of area.
Results: All specimens were available, except one on the right side (experimental group), which was lost during specimen preparation. This study demonstrated new bone at the periphery of the existing bone in both groups, showing evidence of bone remodeling; however, encroachment of new bone on the central part of the graft at the 1 σ period was observed only in the autogenous tooth bone group (experimental group). Histomorphometric analysis showed more new bone formation in the experimental group compared to the control group. Although the difference was not statistically significant (P>0.05), the mean percentage areas of new bone for the experimental and control groups were 57.19%±11.16% and 34.07%±13.09%, respectively.
Conclusion: The novel bone graft material made from autogenous teeth is a good alternative to autogenous bone, offering comparable performance and outperforming synthetic hydroxyapatite bone graft materials in terms of bone regeneration capacity. Augmentation with autogenous tooth bone materials will reduce donor site morbidity without compromising the safety of the autogenous bone graft.
abstract_id: PUBMED:22099314
Maxillary sinus lift with solely autogenous bone compared to a combination of autogenous bone and growth factors or (solely) bone substitutes. A systematic review. Literature regarding the outcome of maxillary sinus floor elevation to create sufficient bone fraction to enable implant placement was systematically reviewed. Bone fraction and implant survival rate were assessed to determine whether grafting material or applied growth factor affected bone fraction. Trials where sinus floor elevations with autogenous bone (controls) were compared with autogenous bone combined with growth factors or bone substitutes, or solely with bone substitutes (test groups) were identified; 12 of 1124 fulfilled all inclusion criteria. Meta-analyses comparing the bone fraction after applying: autogenous bone; autologous bone with growth factors (platelet rich plasma); or autogenous bone and bone substitutes (bovine hydroxyapatite, bioactive glass, corticocancellous pig bone) revealed no significant differences in bone formation after 5 months. A significantly higher bone fraction was found in the autogenous bone group compared to the sole use of β-tricalciumphosphate (P=0.036). The one-year overall implant survival rate showed no significant difference between implants. Bone substitutes combined with autogenous bone provide a reliable alternative for autogenous bone as sole grafting material to reconstruct maxillary sinus bony deficiencies, for supporting dental implants after 5 months. Adding growth factors (platelet rich plasma) to grafting material and the sole use of β-tricalciumphosphate did not promote bone formation.
Answer: Yes, culture-expanded autogenous bone cells (ABC) appear to be a clinically reliable option for sinus grafting. A prospective clinical study examining the healing process after sinus grafting with ABC and bovine bone mineral (BBM) found that the procedure provided a bony implant site that permits implant placement and tolerates functional loading (PUBMED:19077153). Additionally, a study on minipigs showed that the addition of ABC to BBM significantly increased the amount of newly formed bone compared with BBM alone, indicating that supplementation of cell-free grafting material with culture-expanded ABC can stimulate bone formation in areas with low bone-forming capacity (PUBMED:15533135).
Furthermore, a study using high-resolution three-dimensional computed tomography analysis of sinus lift bone grafting with cultured autogenous periosteal cells (CAPCs) combined with autogenous bone and platelet-rich plasma (PRP) demonstrated that CAPCs promote bone anabolic activity both in augmented bone and graft bed, which is expected to aid primary fixation and osseointegration of implants in clinical applications (PUBMED:26017402).
These findings suggest that the use of culture-expanded autogenous bone cells is a viable and effective option for enhancing bone regeneration in sinus grafting procedures, which is crucial for the successful placement and integration of dental implants. |
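Note (added illustration, not part of the source abstracts or answer): the graft volumes reported in PUBMED:19077153 (2218.4 mm³ at CT 1, 1694 mm³ at CT 2, and 1347.9 mm³ at CT 3) imply a progressive resorption of the grafted material. The short Python sketch below simply computes the percentage volume loss at each follow-up relative to CT 1; it is a minimal arithmetic illustration, not an analysis performed in the study.

# Percentage graft-volume loss implied by the three CT time points in
# PUBMED:19077153 (mean values, mm³). Illustrative arithmetic only.
ct_volumes = {
    "CT 1 (after grafting)": 2218.4,
    "CT 2 (implant placement)": 1694.0,
    "CT 3 (implant uncovery)": 1347.9,
}
baseline = ct_volumes["CT 1 (after grafting)"]
for label, volume in ct_volumes.items():
    loss_pct = (baseline - volume) / baseline * 100
    print(f"{label}: {volume:.1f} mm³ ({loss_pct:.1f}% below CT 1)")

Running this yields roughly a 23.6% loss by CT 2 and 39.2% by CT 3, which is the arithmetic behind the reported significant decrease in graft volume over the first 12 months.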
Instruction: Does the cue help?
Abstracts:
abstract_id: PUBMED:32351374
Cue Valence Influences the Effects of Cue Uncertainty on ERP Responses to Emotional Events. Individuals often predict consequences, particularly emotional consequences, according to emotional or non-emotional signals conveyed by environmental cues (i.e., emotional and non-emotional cues, respectively). Some of these cues signify the consequences with certainty (i.e., certain cues), whereas others do not (i.e., uncertain cues). Several event-related potential (ERP) studies regarding non-emotional cues have suggested that the effects of cue uncertainty on attention to emotional events occur in both perception and evaluation processes. However, due to the limitations of previous studies, it is unclear what the effects of cue uncertainty would be in an emotional cue condition. Moreover, it is uncertain whether the effects of cue uncertainty are affected by cue valence (i.e., emotional and non-emotional cues). To address these questions, we asked participants to view cues and then to view emotional (positive or negative) pictures. The cues either did or did not indicate the emotional content of the picture. In the emotional cue condition, happy and fearful faces were used as certain cues indicating upcoming positive and negative pictures, respectively, and neutral faces were used as uncertain cues. In the non-emotional cue condition, scrambled faces outlined in red and blue indicated upcoming positive and negative pictures, respectively, and scrambled faces outlined in green served as uncertain cues. The results showed that for negative pictures, ERP responses in a time range between 60 and 1,000 ms were shifted to a more negative direction in a certain condition than in the uncertain condition when the cues were emotional. However, the effect was the reverse for positive pictures. This effect of cue uncertainty was similar in the non-emotional cue-negative condition. In contrast, there was no effect of cue uncertainty in the non-emotional cue-positive condition. Therefore, the findings indicate that cue uncertainty modulates attention toward emotional events when the events are signified by emotional cues. The findings may also suggest that cue valence modulates the effects of cue uncertainty on attention to emotional events.
abstract_id: PUBMED:38274505
Exploring retro-cue effects on visual working memory: insights from double-cue paradigm. In the realm of visual working memory research, the retro-cue paradigm helps us study retro-cue effects such as retro-cue benefit (RCB) and retro-cue cost (RCC). RCB reflects better performance with cued items, while RCC indicates poorer performance with uncued items. Despite consistent evidence for RCB, it is still uncertain whether it remains when previously uncued items are cued afterward. Additionally, research findings have been inconsistent. This study combines prior experiments by controlling the proportion of cue types and the number of memory items. In addition, a contralateral delay activity (CDA) index was used to assess the status of items after the cue appeared. Results showed better performance under the double-cue condition (involving two cues pointing inconsistently with only the second cue being valid) compared to the neutral-cue condition, and better performance under the single-cue condition compared to the double-cue condition. EEG data revealed that after the appearance of the second cue in the double-cue condition, there was a significant increase in CDA wave amplitude compared to the single-cue condition. Behavioral results suggest that RCB occurs under the double-cue condition, but to a lesser extent than under the single-cue condition. EEG outcomes indicate that individuals did not remove the uncued item from their visual working memory after the first cue. Instead, they kept it in a passive state and then shifted it to an active state after the appearance of the second cue.
abstract_id: PUBMED:32705659
A comparison of methods of assessing cue combination during navigation. Mobile organisms make use of spatial cues to navigate effectively in the world, such as visual and self-motion cues. Over the past decade, researchers have investigated how human navigators combine spatial cues, and whether cue combination is optimal according to statistical principles, by varying the number of cues available in homing tasks. The methodological approaches employed by researchers have varied, however. One important methodological difference exists in the number of cues available to the navigator during the outbound path for single-cue trials. In some studies, navigators have access to all spatial cues on the outbound path and all but one cue is eliminated prior to execution of the return path in the single-cue conditions; in other studies, navigators only have access to one spatial cue on the outbound and return paths in the single-cue conditions. If navigators can integrate cues along the outbound path, single-cue estimates may be contaminated by the undesired cue, which will in turn affect the predictions of models of optimal cue integration. In the current experiment, we manipulated the number of cues available during the outbound path for single-cue trials, while keeping dual-cue trials constant. This variable did not affect performance in the homing task; in particular, homing performance was better in dual-cue conditions than in single-cue conditions and was statistically optimal. Both methodological approaches to measuring spatial cue integration during navigation are appropriate.
abstract_id: PUBMED:35023591
Environmental cue difference and training duration modulate spatial learning and cue preference in detour task. In this study, we investigated how different environmental cues and the proficiency of body motion influenced detour learning behaviour and cue preference in cue conflict situations. Domestic chicks were trained to detour around an obstacle and follow a fixed route to rejoin their partners. When the environmental cue was red versus blue vertical stripes, the chicks learned the detour task more quickly, and as the number of training trials after route acquisition increased, they switched their preference from the environmental cue to a body-motion cue in the cue conflict test. On the other hand, when the environmental cue was vertical versus horizontal blue stripes, the chicks learned the detour task more slowly and showed a dependence on the body-motion cue regardless of the number of training trials performed after route acquisition. When the environmental cue was removed, most chicks could still successfully detour according to the specific route on which they had been trained. Furthermore, a significant difference in detour latency was found between chicks using the environmental cue and chicks using the body-motion cue, suggesting separate neuronal circuits responsible for processing the two types of information. Our results demonstrated that young domestic chicks could use both environmental and body-motion cues to memorize the route during the detour learning task; however, the detour route preference could be dynamically modulated by the difference in environmental cues and the number of training trials they received.
abstract_id: PUBMED:33920907
The Impact of Shape-Based Cue Discriminability on Attentional Performance. With rapidly developing technology, visual cues have become a powerful tool for deliberately guiding attention and affecting human performance. Using cues to manipulate attention introduces a trade-off between increased performance at cued locations and decreased performance at uncued locations. For higher efficacy of visual cues designed to purposely direct the user's attention, it is important to know how manipulation of cue properties affects attention. In this verification study, we addressed how varying cue complexity impacts the allocation of spatial endogenous covert attention in space and time. To gradually vary cue complexity, the discriminability of the cue was systematically modulated using a shape-based design. Performance was compared in attended and unattended locations in an orientation-discrimination task. We evaluated additional temporal costs due to processing of a more complex cue by comparing performance at two different inter-stimulus intervals. From preliminary data, attention scaled with cue discriminability, even for supra-threshold cue discriminability. Furthermore, individual cue processing times partly impacted performance for the most complex, but not for simpler, cues. We conclude that, first, cue complexity expressed by discriminability modulates endogenous covert attention at supra-threshold cue discriminability levels, with increasing benefits and decreasing costs; second, it is important to consider the temporal processing costs of complex visual cues.
abstract_id: PUBMED:32906716
A Scoping Review on Cue Reactivity in Methamphetamine Use Disorder. The experience of craving via exposure to drug-related cues often leads to relapse in drug users. This study consolidated existing empirical evidence of cue reactivity to methamphetamine to provide an overview of the current literature and to inform the directions for future research. The best-practice methodological framework for conducting scoping reviews by Arksey and O'Malley was adopted. Studies that have used a cue paradigm or reported on cue reactivity in persons with a history of methamphetamine use were included. Databases such as Medline, EMBASE, PsycINFO and CINAHL were searched using key terms, in addition to citation checking and hand searching. The search resulted in a total of 32 original research articles published between 2006 and 2020. Three main themes with regard to cue reactivity were identified and synthesized: (1) effects of cue exposure, (2) individual factors associated with cue reactivity, and (3) strategies that modulate craving or reactivity to cues. Exposure to methamphetamine-associated cues elicits significant craving and other autonomic reactivity. Evidence suggests that drug cue reactivity is strongly associated with indices of drug use and other individual-specific factors. Future work should focus on high-quality studies to support evidence-based interventions for reducing cue reactivity and to examine cue reactivity as an outcome measure.
abstract_id: PUBMED:33693413
The Role of Cue Utilization and Cognitive Load in the Recognition of Phishing Emails. Phishing emails represent a major threat to online information security. While the prevailing research is focused on users' susceptibility, few studies have considered the decision-making strategies that account for skilled detection. One relevant facet of decision-making is cue utilization, where users retrieve feature-event associations stored in long-term memory. High degrees of cue utilization help reduce the demands placed on working memory (i.e., cognitive load), and invariably improve decision performance (i.e., the information-reduction hypothesis in expert performance). The current study explored the effect of cue utilization and cognitive load when detecting phishing emails. A total of 50 undergraduate students completed: (1) a rail control task; (2) a phishing detection task; and (3) a survey of the cues used in detection. A cue utilization assessment battery (EXPERTise 2.0) then classified participants with either higher or lower cue utilization. As expected, higher cue utilization was associated with a greater likelihood of detecting phishing emails. However, variation in cognitive load had no effect on phishing detection, nor was there an interaction between cue utilization and cognitive load. Further, the findings revealed no significant difference in the types of cues used across cue utilization groups or performance levels. These findings have implications for our understanding of cognitive mechanisms that underpin the detection of phishing emails and the role of factors beyond the information-reduction hypothesis.
abstract_id: PUBMED:37754902
Cue Sources and Cue Utilization Patterns of Social Mentalizing during Two-Person Interactions. Social mentalizing plays a crucial role in two-person interactions. Depending on the target of inference and the content being inferred, social mentalizing primarily exists in two forms: first-order mentalizing and second-order mentalizing. Our research aims to investigate the cue sources and cue utilization patterns of social mentalizing during two-person interactions. Our study created an experimental situation of a two-person interaction and used the "Spot the difference" game with multi-stage tasks to address our research question. Our study was divided into two experiments, Experiment 1 and Experiment 2, which examined the cue sources and cue utilization patterns of first- and second-order mentalizing, respectively. The results of the experiments showed that (1) self-performance and other-performance are significant cues utilized by individuals during social mentalizing, and (2) individuals employ discrepancies to modulate the relationship between self-performance and first-order mentalizing as well as to adjust the relationship between other-performance and second-order mentalizing. The results of this study further complement the dual-processing model of mindreading and the anchoring and adjustment hypothesis during social inference.
abstract_id: PUBMED:31134173
Cue-Reminders to Prevent Health-Risk Behaviors: A Systematic Review. Introduction: It has been proposed that the use of cue-reminders may increase the effectiveness of interventions that aim to prevent health-risk behaviors (i.e., having unsafe sex, unhealthy dietary intake, lack of physical activity, and substance use). The aim of this systematic review was to explore whether there is evidence supporting this proposition, and to explore how cue-reminders are applied in health-risk behavior interventions to date. Method: We systematically reviewed (non-)randomized trials that examined differences in health-risk behaviors between an experimental group receiving an intervention with exposure to a cue-reminder and a control group receiving the intervention without such a cue. Results: Six studies were eligible for inclusion. The studies differed in sample and research design, and in how the cue-reminder was applied. One study demonstrated a positive and small effect, and one study found a negative medium effect of the cue-reminder. In the remaining studies, the effect sizes were positive but non-significant. Discussion: It is unclear whether complementing health-risk behavior interventions with cue-reminders increases the effectiveness of these interventions. Further investigation and experimentation into the efficiency and effectiveness of cue-reminders is needed before health-risk behavior interventions are complemented with cue-reminders.
abstract_id: PUBMED:28642725
Pigeons and the Ambiguous-Cue Problem: A Riddle that Remains Unsolved. The ambiguous-cue task is composed of two-choice simultaneous discriminations involving three stimuli: positive (P), ambiguous (A), and negative (N). Two different trial types are presented: PA and NA. The ambiguous cue (A) serves as an S- in PA trials, but as an S+ in NA trials. When using this procedure, it is typical to observe less accurate performance in PA trials than in NA trials. This is called the ambiguous-cue effect. Recently, it was reported in starlings that the ambiguous-cue effect decreases when the stimuli are presented on an angled (120°) panel. The hypothesis is that the angled panel facilitates perception of the two cues from each discrimination as a compound, precluding value transfer via a second-order conditioning mechanism. In this experiment, we used pigeons and a flat panel. Nevertheless, our data were quite similar to the previous data in starlings. We conclude that the form of the panel cannot explain the ambiguous-cue effect. Several alternatives to be explored in future experiments are suggested. The riddle of the ambiguous-cue problem still remains unsolved.
Answer: The research on cue utilization in various contexts suggests that cues can indeed help in different ways, depending on the specific task and the nature of the cues involved.
For instance, in the context of emotional events, cue valence and certainty can influence attentional processes, with emotional cues modulating attention toward emotional events and cue uncertainty affecting ERP responses differently for positive and negative pictures (PUBMED:32351374). In visual working memory, retro-cue effects such as retro-cue benefit and retro-cue cost have been observed, with cues helping to improve performance on cued items, although the extent of this benefit may vary under different conditions (PUBMED:38274505).
In navigation tasks, cues are used by organisms to navigate effectively, and the combination of spatial cues can lead to statistically optimal performance in homing tasks (PUBMED:32705659). Similarly, in a detour task, environmental cues and training duration were found to modulate spatial learning and cue preference, with chicks learning to use both environmental and body-motion cues to memorize routes (PUBMED:35023591).
The discriminability of shape-based cues has been shown to impact attentional performance, with more discriminable cues leading to better performance in an orientation-discrimination task (PUBMED:33920907). In the context of methamphetamine use disorder, cue reactivity to drug-related cues can elicit significant craving and autonomic reactivity, suggesting that cues can have a powerful influence on behavior (PUBMED:32906716).
In the detection of phishing emails, higher cue utilization was associated with a greater likelihood of detecting phishing emails, indicating that cues can aid in skilled detection (PUBMED:33693413). During two-person interactions, cues from self-performance and other performance are utilized in social mentalizing, with individuals using discrepancies to modulate relationships between performance and mentalizing (PUBMED:37754902).
However, the effectiveness of cue-reminders in preventing health-risk behaviors remains unclear, with mixed results observed in different studies (PUBMED:31134173). Additionally, the ambiguous-cue problem in pigeons demonstrates that cues do not always lead to the expected outcomes, with the ambiguous-cue effect remaining a challenge to understand (PUBMED:28642725).
In summary, cues can be helpful in guiding attention, improving memory and learning, aiding in decision-making, and influencing behavior, but their effectiveness can vary depending on the context and the specific characteristics of the cues. The effectiveness of cues also depends on how they are perceived and utilized by individuals, and whether they are integrated with other cues or information available in the environment. While cues can be beneficial in many scenarios, their impact is not universally positive or straightforward, and further research is needed to fully understand the conditions under which cues help and the mechanisms by which they exert their influence. |
Instruction: Alcohol use among college athletes: do intercollegiate, club, or intramural student athletes drink differently?
Abstracts:
abstract_id: PUBMED:25412412
Alcohol use among college athletes: do intercollegiate, club, or intramural student athletes drink differently? Background: Varsity student athletes are a high-risk drinking group, exhibiting a greater propensity to binge drink than their non-sport peers. Moreover, as intercollegiate athletic involvement increases, so too does alcohol consumption. There is little research, however, which examines drinking behaviors of students who participate in nonvarsity athletics.
Objectives: Identify differences in alcohol-related behaviors and associated consequences among U.S. varsity, club, and intramural athletes, and nonathlete college students.
Methods: Secondary data analysis of the 2011 National College Health Assessment (n = 29,939).
Results: Intramural athletes binge drank more frequently (M = 1.1, SD = 1.7) than club athletes (M = 1.0, SD = 1.6), intercollegiate athletes (M = 0.9, SD = 1.5), and nonathletes (M = 0.6, SD = 1.3) and also experienced greater alcohol-related consequences. Intramural athletes consumed the most during their last drinking episode (M = 4.1, SD = 4.0) and reached the highest blood alcohol concentration (BAC) (M = 0.062, SD = 0.09).Compared to club and varsity athletes [M = 0.8, SD = 1.4; t (8,131) = -9.6, p < .001], intramural-only athletes reported binge drinking significantly more frequently (M = 1.2, SD = 1.7) and also reached significantly higher BACs during most recent drinking episode (M = 0.064, SD = 0.08) than organized sport athletes [M = 0.057, SD = 0.08; t (8,050) = -3.0, p = .003].
Conclusions: Intramural athletes represent a higher-risk drinking group than other athlete and nonathlete college students. Future research should investigate factors contributing to drinking differences among different athlete groups.
abstract_id: PUBMED:25767148
Examining Drinking Patterns and High-Risk Drinking Environments Among College Athletes at Different Competition Levels. This study examined drinking patterns of three different college student groups: (a) intercollegiate athletes, (b) intramural/club athletes, and (c) nonathletes. Additionally, we investigated whether a relationship exists between drinking setting and risk of increased drinking. We analyzed data on the athletic involvement, drinking behaviors, and drinking settings of 16,745 undergraduate students. The findings revealed that drinking patterns for intramural/club athletes remained relatively consistent at all quantity levels; however, intercollegiate athletes consumed alcohol in higher quantities. Further, intramural/club athletes drank in almost every drinking setting, whereas intercollegiate athletes were more limited. The drinking patterns and settings suggest a stronger social motivation for drinking among intramural/club athletes than among intercollegiate athletes and point to a need to specify competition level when studying college athletes.
abstract_id: PUBMED:23157197
The culture of high-risk alcohol use among club and intramural athletes. Objective: The purpose of this study was to examine the drinking patterns of club and intramural college athletes and compare their alcohol consumption, perceived norms around the excessive use of alcohol, experience of negative consequences, and employment of protective strategies with those of campus varsity athletes.
Participants: A total of 442 undergraduate students attending a private, suburban institution in the Northeast participated in the American College Health Association National College Health Assessment-II Web survey in spring 2011. Thirty-five students identified themselves as varsity athletes, 76 identified as club sport athletes, and 196 students identified themselves as intramural athletes.
Methods: Survey responses were analyzed using Statistical Package for the Social Sciences. The Pearson's correlation coefficient and test for independence were applied to identify significant relationships between athlete status and identified variables related to alcohol use.
Results: Results indicated that there were significant correlations between athlete status and all variables, to varying degrees.
Conclusions: These findings have implications for campus health promotion professionals and athletics program coordinators seeking to address high-risk alcohol use among college athletes.
abstract_id: PUBMED:38206658
Comparing Drinking Game Motives, Behaviors, and Consequences among Varsity Athletes, Recreational Athletes, and Non-Student-Athletes: A Multisite University Study. Objective: Among college students, student-athletes are at increased risk for heavy alcohol consumption, participation in risky drinking practices (e.g., playing drinking games [DG]), and adverse alcohol-related consequences relative to non-student-athletes. Within the student-athlete population, level of sports participation (e.g., recreational or varsity sports) can affect alcohol use behaviors and consequences but our understanding of the extent to which level of sports participation influences engagement in DG is limited. Thus, in the present study, we examined differences in frequency of participation in DG, typical drink consumption while playing DG, negative DG consequences, and motives for playing DG among varsity, recreational, and non-student-athletes.
Method: College students (N=7,901 across 12 U.S. colleges/universities) completed questionnaires on alcohol use attitudes, behaviors, and consequences.
Results: Student-athletes (recreational or varsity sports) were more likely to have participated in DG within the past month than non-student-athletes. Among students who reported past month DG play, recreational athletes played more often and endorsed more enhancement/thrills motives for playing DG than non-student-athletes, and student-athletes (recreational or varsity) endorsed higher levels of competition motives for playing DG than non-student-athletes.
Conclusions: These findings shed light on some risky drinking patterns and motives of recreational athletes who are often overlooked and under-resourced in health research and clinical practice. Recreational and varsity student-athletes could benefit from alcohol screening and prevention efforts, which can include provision of competitive and alcohol-free social activities and promotion of alcohol protective behavioral strategies to help reduce recreational athletes' risk for harm while playing DG.
abstract_id: PUBMED:21979810
Religiosity, alcohol use, and sex behaviors among college student-athletes. College student-athletes tend to consume more alcohol, engage in sex, and report more sex partners than nonathlete students. The current study examined the relationship between religiosity (e.g., influence of religious beliefs and church attendance) and alcohol use and sex behavior among college student-athletes. Most of the student-athletes (n=83) were religious. Influence of religious beliefs was a significant predictor of less alcohol use and less sexual activity (i.e., oral and vaginal sex, number of sex partners). However, increased church attendance was not found to be a protective factor. Findings suggest that religious beliefs may contribute to reduction of alcohol use and sexual risk among college student-athletes. Consideration should be given to incorporating religiosity aspects in sexual and alcohol risk-reduction interventions for student-athletes.
abstract_id: PUBMED:30540548
Ethnic, gender, and seasonal difference in heavy drinking and protective behavioral strategies among student-athletes. Relations among gender, ethnicity, athlete seasonal status, alcohol consumption, and protective behavioral strategies (PBS) were examined among student-athletes. The national sample (N = 670, mean age = 18.90 years) included Black (n = 199), Hispanic (n = 236), and White (n = 235) college student-athletes who use alcohol. There were significant gender and ethnic differences in alcohol consumption as well as gender differences in use of protective behavioral strategies. Within-group gender differences in alcohol use and PBS were present for White and Hispanic but not Black student-athletes. Implications for tailored prevention/intervention efforts and future directions are discussed.
abstract_id: PUBMED:33256426
Comparison of Pregaming Alcohol Use and Consequences by Season Status and Sex in College Student Athletes. As student athletes exhibit unique alcohol use patterns based on being in- versus out-of-season and biological sex, we aimed to explore student athlete (N = 442) alcohol use, pregaming behaviors, and associated negative outcomes. Results suggest being out-of-season and male are positively associated with negative alcohol-related consequences, and male athletes report greater numbers of pregame specific alcohol-related consequences than female athletes (p < .05). Female athletes indicated significantly higher estimated blood alcohol concentrations than male athletes on pregaming nights. No differences emerged between in- and out-of-season athletes on pregame consequences. Results suggest that further emphasis on the role season status and sex has on pregaming behaviors and experiencing negative outcomes may be an important next step toward enhancing prevention and intervention approaches.
abstract_id: PUBMED:27610821
Perceived norms and alcohol use among first-year college student-athletes' different types of friends. Objective: To describe first-year college student-athletes' friendship contexts and test whether their perceptions of alcohol use and approval by different types of friends are associated with their own alcohol use.
Participants: First-year student-athletes (N = 2,622) from 47 colleges and universities participating in National Collegiate Athletic Association (NCAA) sports during February-March 2013.
Methods: Student-athletes completed online surveys during the baseline assessment of an alcohol and other drug prevention program evaluation. Analyses tested whether perceptions of friends' alcohol use (descriptive norms) and perceptions of friends' approval of alcohol use (injunctive norms) predicted their alcohol use.
Results: Both use and approval perceptions by upperclassmen, same-team, and most influential friends significantly predicted alcohol use. By contrast, only perceived use by first-year, nonteam, and less influential friends significantly predicted alcohol use.
Conclusions: Athletics departments' alcohol policies and prevention programming for first-year student-athletes should address the potential influence of different types of friends on alcohol use.
abstract_id: PUBMED:29979953
Association between sex, race/ethnicity, season, day of week, and alcohol use and related risks in college student athletes and nonathletes. Objective: To examine personal and social constructs (eg, sex, race) and external influences (ie, competition schedules) that are associated with the pattern and amount of alcohol consumption by student athletes compared to nonathletes. Participants: Students (n = 2,984; 57% female, 61% white, 16% athlete) on a single campus were surveyed in Fall 2011 and Spring 2012. Methods: Anonymous surveys included demographics, alcohol use, and related problems. Results: Student athletes consumed more alcohol than nonathletes, with male and white athletes posing the highest levels of risk for heavy consumption. Expanded examination of in- and out-of-season athletes found out-of-season athletes drink more frequently and more per occasion than their in-season peers. Athletes were more likely than nonathletes to drink on specific days of the week and daily patterns differed between in- and out-of-season athletes. Conclusions: The risk of athlete status is moderated by sex, race, season, and day of the week.
abstract_id: PUBMED:34957907
Examining Sex Differences in Associations between Pain and Alcohol Use among College Student-Athletes. Background: College student-athletes are a high-risk population for both pain and alcohol use. Although a growing literature indicates that pain motivates alcohol consumption, no studies have tested associations between pain and alcohol in college student-athletes. Methods and Results: Among National Collegiate Athletic Association Division I student-athletes at a large public university (N = 65; 48% Female), nearly all student-athletes (97%) reported alcohol use and 65% reported pain due to an athletic injury. Pain intensity was positively associated with greater motivation to reduce drinking prior to receiving a brief alcohol intervention. Among females, pain intensity was associated with lower self-reported alcohol consumption. Among male athletes, pain-related interference was associated with greater alcohol-related problems and lower perceived ability to deal with alcohol-related situations. Conclusion: Results provide initial evidence that pain may be important to consider when addressing alcohol use among student-athletes.
Answer: Yes, intercollegiate, club, or intramural student athletes do drink differently. According to the research, intramural athletes binge drank more frequently and experienced greater alcohol-related consequences compared to club athletes, intercollegiate athletes, and nonathletes. Intramural athletes also consumed the most during their last drinking episode and reached the highest blood alcohol concentration (BAC) (PUBMED:25412412).
In another study, it was found that intercollegiate athletes consumed alcohol in higher quantities, while intramural/club athletes drank in almost every drinking setting, suggesting a stronger social motivation for drinking among intramural/club athletes (PUBMED:25767148). Additionally, club and intramural athletes were found to have significant correlations with all variables related to alcohol use, including excessive use, experience of negative consequences, and employment of protective strategies, to varying degrees (PUBMED:23157197).
Furthermore, recreational athletes, which include club and intramural athletes, were more likely to have participated in drinking games within the past month than non-student-athletes, played more often, and endorsed more enhancement/thrills motives for playing drinking games (PUBMED:38206658). These findings indicate that the level of sports participation affects alcohol use behaviors and consequences among college athletes.
Overall, the research suggests that intramural athletes represent a higher-risk drinking group than other athlete and nonathlete college students, and there are differences in alcohol-related behaviors and associated consequences among U.S. varsity, club, and intramural athletes (PUBMED:25412412). |
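Note (added illustration, not part of the source abstracts or answer): the group statistics quoted from PUBMED:25412412 (intramural-only binge-drinking frequency M = 1.2, SD = 1.7 versus organized-sport athletes M = 0.8, SD = 1.4) can also be expressed as a standardized effect size. The Python sketch below computes Cohen's d under a hypothetical equal-group-size assumption, since per-group sample sizes are not given in the abstract; it is illustrative only.

from math import sqrt

# Summary statistics reported in PUBMED:25412412 (binge-drinking frequency)
m_intramural, sd_intramural = 1.2, 1.7   # intramural-only athletes
m_organized, sd_organized = 0.8, 1.4     # club/varsity (organized-sport) athletes

# Pooled SD assuming equal group sizes (assumption made for illustration only)
pooled_sd = sqrt((sd_intramural ** 2 + sd_organized ** 2) / 2)
cohens_d = (m_intramural - m_organized) / pooled_sd
print(f"Pooled SD ≈ {pooled_sd:.2f}; Cohen's d ≈ {cohens_d:.2f}")

The result (d ≈ 0.26) suggests that, although the difference is highly significant in such a large sample, the standardized magnitude of the group difference is small.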
Instruction: Positive urine cytology but negative white-light cystoscopy: an indication for fluorescence cystoscopy?
Abstracts:
abstract_id: PUBMED:18793301
Positive urine cytology but negative white-light cystoscopy: an indication for fluorescence cystoscopy? Objective: To evaluate the possible benefit of fluorescence cystoscopy (FC) in detecting cytologically 'confirmed' lesions when assessing urothelial carcinoma of the bladder, as negative white-light cystoscopy in cases of a positive cytological finding represents a diagnostic dilemma.
Patients And Methods: From January 1996 to December 2006, 348 patients, who had cystoscopy for surveillance or due to suspicion of urothelial carcinoma, presented with an entirely negative white-light cystoscopy at our hospital. However, 77 of the 348 patients (22.2%) were diagnosed with a positive cytological finding. All patients had white-light cystoscopy first and a bladder-wash cytological specimen was obtained, then FC, followed by cold-cup biopsies and/or transurethral resection of the bladder tumour.
Results: In the 77 patients with a positive cytological specimen, FC enabled the detection of the precise site of malignancy within the bladder in 63 (82%). The malignant or premalignant lesions comprised 18 moderate dysplasias, 27 carcinomas in situ (CIS), and 18 pTa-1/G1-3 tumours. Moreover, using FC, malignant or premalignant lesions were detected in 43 of 271 patients (15.9%) who had a negative cytological specimen (15 moderate dysplasias, six CIS, 22 pTa-1/G1-3).
Conclusion: This study shows that FC is beneficial in the detection of malignant and premalignant lesions, if there is negative white-light cystoscopy but positive urine cytology. The immediate identification of the exact site of a malignant lesion during FC enables the physician to diagnose and treat these patients more accurately and with no delay.
abstract_id: PUBMED:30281893
Effect of blue-light cystoscopy on contemporary performance of urine cytology. Objective: To evaluate the performance of urine cytology based on contemporary data, including the effect of enhanced cystoscopic techniques.
Materials And Methods: Individual patient data were obtained from three prospective studies: the Photocure (PC) B305 and the PC B308 studies, evaluating the use of blue-light cystoscopy with hexaminolevulinate (BLC-H), and the Cxbladder monitoring study, evaluating the Cxbladder monitor test for the detection of recurrent urothelial carcinoma. The specificity and sensitivity of cytology in each study and for the overall cohort were calculated.
Results: A total of 1,487 urine samples from 1,375 patients were included in the analysis; overall, 615 tumours were detected, corresponding to 41% of the cytological specimens. The pooled sensitivity and specificity for cytology were 40.8% and 92.8%, respectively. The pooled sensitivity was 11.4% for low-grade/World Health Organization (WHO) grade 1 disease and 54.3% for high-grade/WHO grade 3 disease. There were no differences in cytology sensitivity based on the type of cystoscopy used, with sensitivity of 41.3% and 40.4% in white-light cystoscopy (WLC) and BLC-H, respectively. Subgroup analysis including carcinoma in situ (CIS) showed a trend towards lower cytology sensitivity in BLC-H (54.5%) vs WLC (69.2%).
Conclusions: Based on analysis of contemporary data, the sensitivity of cytology for detecting high-grade tumours and CIS remains low. On a per-patient analysis, cytology sensitivity was not affected by the use of advanced cystoscopic techniques except in patients with CIS. The use of cytology as the main adjunct to cystoscopy in patients at high risk can lead to missed opportunities for early detection of recurrence and for determining which patients are not responding to intravesical therapies such as bacille Calmette-Guérin.
abstract_id: PUBMED:19076151
Hexylaminolaevulinate 'blue light' fluorescence cystoscopy in the investigation of clinically unconfirmed positive urine cytology. Objective: To investigate the value of photodynamic diagnosis (PDD) using hexylaminolaevulinate (Hexvix, PhotoCure, Oslo, Norway) in the investigation of patients with positive urine cytology who have no evidence of disease after standard initial investigations.
Patients And Methods: Twenty-three patients referred with positive urine cytology but no current histological evidence of cancer were investigated between April 2005 and January 2007 with PDD, using Hexvix and the D-light system (Karl Storz, Tuttlingen, Germany) to detect fluorescence. The bladder was mapped initially under white light and then under 'blue-light'. Biopsies were taken from abnormal urothelium detected by white light, fluorescence, or both. All cytological specimens were reviewed by a reference cytopathologist unaware of the result of the PDD.
Results: Twenty-five PDD-assisted cystoscopies were carried out on 23 patients (20 men/3 women; median age 64 years, range 24-80 years). Of the 23 patients, 17 (74%) were previously untreated for transitional cell carcinoma (TCC), whilst six were under surveillance for previous TCC. Nineteen of the 23 (83%) cytology specimens were confirmed as suspicious or positive by the reference pathologist. TCC of the bladder or preneoplastic lesions were diagnosed in six patients, i.e. six (26%) of those investigated and six of 19 (32%) with confirmed positive cytology. Four of the six were under surveillance for previous bladder tumour. Additional pathology was detected by fluorescence in five of the six patients, including two carcinoma in situ (CIS), one CIS + G3pT1 tumour, and two dysplasia. Diagnoses in PDD-negative cases included one upper tract TCC and four patients with stones. In addition, one patient had CIS diagnosed on both white light and PDD 6 months later.
Conclusion: Additional pathology was detected by HAL fluorescence cystoscopy in 32% of patients with confirmed positive urinary cytology. PDD is a key step in the management of patients with positive urinary cytology and no evidence of disease on conventional tests.
abstract_id: PUBMED:21396840
Fluorescence and white light cystoscopy for detection of carcinoma in situ of the urinary bladder. Objectives: To understand the additional benefits of HAL compared with conventional cystoscopy at the patient level and to explore relationships of urine cytology and CIS.
Methods: We reanalyzed pooled data from 3 phase III studies comparing hexaminolevulinate (HAL, Hexvix) fluorescence cystoscopy with white light (WL) cystoscopy for detecting CIS.
Results: Of 551 patients, 174 had at least one CIS lesion detected by HAL, WL, or random biopsy. The CIS detection rate of HAL was 0.87 vs. 0.75 for WL (P = 0.006). By multivariate Poisson regression, female patients had fewer CIS lesions (P < 0.0001) while older patients (≥ 65) had a higher number of CIS lesions detected by HAL (P = 0.04). HAL was less likely to detect CIS in patients previously treated with chemotherapy or BCG (P = 0.01 and 0.03, respectively), after adjusting for age. CIS was unifocal in 44% and multifocal in 56%. Multifocal CIS was associated with positive cytology more frequently than unifocal (65% vs. 45%; P = 0.016) whereas a negative cytology was more frequently associated with unifocal CIS. Patients with positive urine cytology had twice as many CIS lesions detected by HAL as patients with negative urine cytology (P = 0.02).
Conclusions: HAL cystoscopy had a higher CIS detection rate than WL cystoscopy. The average number of CIS lesions detected was associated with baseline clinical characteristics. Cytology was positive more frequently in multifocal CIS suggesting that HAL may be particularly useful in this setting to optimize detection of the extent of CIS.
abstract_id: PUBMED:16153203
Urine cytology after flexible cystoscopy. Objective: To correlate urine cytology findings before and after flexible cystoscopy.
Patients And Methods: A total of 153 patients undergoing surveillance for bladder tumour provided voided urine for cytology before and immediately after flexible cystoscopy.
Results: Of the 153 patients, 116 had negative urine cytology before and after (96%) a visibly normal cystoscopy and 37 had positive urine cytology before and after cystoscopy that showed recurrent tumour.
Conclusions: Urine cytology immediately after flexible cystoscopy correlates well with results of urine cytology before cystoscopy.
abstract_id: PUBMED:24128299
Reflex fluorescence in situ hybridization assay for suspicious urinary cytology in patients with bladder cancer with negative surveillance cystoscopy. Objective: To assess the ability of reflex UroVysion fluorescence in situ hybridization (FISH) testing to predict recurrence and progression in patients with non-muscle-invasive bladder cancer (NMIBC) with suspicious cytology but negative cystoscopy.
Patients And Methods: Patients under NMIBC surveillance were followed with office cystoscopy and urinary cytology every 3-6 months. Between March 2007 and February 2012, 500 consecutive patients with suspicious cytology underwent reflex FISH analysis. Clinical and pathological data were reviewed retrospectively. Predictors for recurrence, progression and findings on subsequent cystoscopy (within 2-6 months after FISH) were evaluated using univariate and multivariate Cox regression.
Results: In all, 243 patients with suspicious cytology also had negative surveillance cystoscopy. Positive FISH was a significant predictor of recurrence (hazard ratio [HR] = 2.35, 95% confidence interval [CI]: 1.42-3.90, P = 0.001) in multivariate analysis and for progression (HR = 3.01, 95% CI: 1.10-8.21, P = 0.03) in univariate analysis, compared with negative FISH. However, positive FISH was not significantly associated with evidence of tumour on subsequent surveillance cystoscopy compared with negative FISH (odds ratio = 0.8, 95% CI: 0.26-2.74, P = 1).
Conclusions: Positive FISH predicts recurrence and progression in patients under NMIBC surveillance with suspicious cytology but negative cystoscopy. However, there was no association between the FISH result and tumour recurrence in the immediate follow-up period. Reflex FISH testing for suspicious cytology might have limited ability to modify surveillance strategies in NMIBC.
abstract_id: PUBMED:24166285
The role of urine markers, white light cystoscopy and fluorescence cystoscopy in recurrence, progression and follow-up of non-muscle invasive bladder cancer. Non-muscle invasive bladder cancer (NMIBC) accounts for approximately 70% of all bladder cancer cases and represents a heterogeneous pathological entity, characterized by a variable natural history and oncological outcome. The combination of cystoscopy and urine cytology is considered the gold standard in the initial diagnosis of bladder cancer, despite the limited sensitivity. The first step in NMIBC management is transurethral resection of the bladder tumour (TURBT). This procedure is marked by a significant risk of leaving residual disease. The primary landmark in NMIBC is the high recurrence rate. Fluorescence cystoscopy improves the bladder cancer detection rate, especially for flat lesions, and improves the recurrence-free survival by decreasing residual tumour. Progression to muscle invasive tumours constitutes the second important landmark in NMIBC evolution. Stage, grade, associated CIS and female gender are the major prognostic factors in this regard. The evolution to MIBC has a major negative impact upon the survival rate and quality of life of these patients. Fluorescence cystoscopy improves the detection rate of bladder cancer but does not improve the progression-free survival. Urine markers such as ImmunoCyt and UroVysion (FISH) also have limited additional value in the diagnosis and prognosis of NMIBC patients. Major drawbacks are the requirement of a specialized laboratory and the additional costs. In this review, the risks of recurrence and progression are analysed and discussed. The impact of white light cystoscopy, fluorescence cystoscopy and urine markers is reviewed. Finally, the means and recommendations regarding follow-up are discussed.
abstract_id: PUBMED:31331861
Management of Patients with Normal Cystoscopy but Positive Cytology or Urine Markers. This presentation considers follow-up after successful transurethral resection of a high-grade non-muscle-invasive tumour, with normal cystoscopy, followed by bacillus Calmette-Guérin (BCG) therapy. Focusing on two possible outcomes, a positive cytology but a negative urinary biomarker result, versus positive biomarkers but a negative cytology, we discuss what the evidence and guidelines recommend and which test is more robust. PATIENT SUMMARY: Bladder cancer is usually assessed by examination of tissue taken from the bladder, either by surgery or by biopsy; however, trace elements in the urine, known as biomarkers, can also provide an assessment. The challenge arises when the two methods do not agree: the tissue sample is positive for cancer, but the biomarker is negative, or the reverse. For now, these authors conclude that the tissue examination is more reliable than the biomarker result.
abstract_id: PUBMED:34037496
The diagnostic challenge of suspicious or positive malignant urine cytology findings when cystoscopy findings are normal: an outpatient blue-light flexible cystoscopy may solve the problem. Purpose: To investigate whether outpatient blue-light flexible cystoscopy could solve the diagnostic challenge of positive or suspicious urine cytology findings despite normal white-light flexible cystoscopy results and normal findings on computerized tomography urography, in patients investigated for urothelial cancer.
Material And Methods: In a multicentre study, a total of 70 examinations were performed with the use of blue-light flexible cystoscopy (photodynamic diagnosis) after intravesical instillation of the fluorescence agent hexaminolevulinate. The examination started with a conventional white-light flexible cystoscopy and then the settings were switched to use blue light. Suspicious lesions were biopsied. Afterwards, the patients were interviewed regarding their experience of the examinations.
Results: Bladder cancer was diagnosed in 29 out of 70 (41%) cases; among them, 14/29 (48%) had malignant lesions seen only in blue light. The majority had carcinoma in situ (21/29). Normal findings were seen in 41 cases that underwent blue-light flexible cystoscopy. During the further course, malignancy of the bladder was detected in six cases (9%) and malignancy of the upper urinary tract was detected in one case (1%). The majority of patients (93%) preferred the blue-light flexible cystoscopy performed at the outpatient clinic instead of the transurethral resection under general anaesthesia.
Conclusion: Blue-light flexible cystoscopy at the outpatient clinic may be a useful tool to solve unclear cases of a malignant or suspicious urinary cytology suggestive of bladder cancer. The procedure was well tolerated by the patients.
abstract_id: PUBMED:34055868
Comparing CxBladder to Urine Cytology as Adjunct to Cystoscopy in Surveillance of Non-muscle Invasive Bladder Cancer-A Pilot Study. Purpose: Guidelines advocate cystoscopy surveillance (CS) for non-muscle invasive bladder cancer (NMIBC) post-resection. However, cystoscopy is operator dependent and may miss upper tract lesions or carcinoma in-situ (CIS). Urine cytology is a common adjunct but lacks sensitivity and specificity in detecting recurrence. A new mRNA biomarker (CxBladder) was compared with urine cytology as an adjunct to cystoscopy in detecting a positive cystoscopy finding during surveillance cystoscopy in our center. Materials and Methods: Consenting patients older than 18 undergoing CS for NMIBC provided paired urine samples for cytology and the CxBladder test. Patients with positive cystoscopy findings underwent repeat transurethral resection of bladder tumor (TURBT). Results: Thirty-five patients were enrolled from April to June 2019. Seven contaminated urine samples were excluded. The remaining cohort of 28 patients comprised 23 (82%) males and 5 (18%) females, with a mean age of 66.69 years (range 36-89). Eight (29%) patients with a positive cystoscopy finding underwent TURBT. All 8 patients also had a positive CxBladder result. This shows that CxBladder had a sensitivity and negative predictive value (NPV) of 100%, a specificity of 75%, and a positive predictive value (PPV) of 62% in predicting a positive cystoscopy finding. TURBT histopathological findings showed low-grade Ta NMIBC in one patient (4%), and 7 (25%) patients had inflammatory changes. Urine cytology was positive in only one patient with a positive cystoscopy finding, giving a sensitivity of merely 13% and an NPV of 74%, while specificity and PPV were 100% in predicting a positive cystoscopy finding. Conclusion: CxBladder had a high NPV and sensitivity, which accurately predicted suspicious cystoscopy findings leading to further investigation. It has great potential for use as an adjunct to cystoscopy for surveillance of NMIBC.
Answer: Yes, positive urine cytology but negative white-light cystoscopy is an indication for fluorescence cystoscopy. Studies have shown that fluorescence cystoscopy (FC) is beneficial in detecting malignant and premalignant lesions in cases where white-light cystoscopy fails to identify abnormalities despite positive urine cytology. In a study with 77 patients who had positive cytological findings but negative white-light cystoscopy, FC enabled the detection of the precise site of malignancy within the bladder in 82% of the cases (PUBMED:18793301). This immediate identification of the exact site of a malignant lesion during FC allows for more accurate diagnosis and treatment without delay.
Furthermore, hexylaminolaevulinate 'blue light' fluorescence cystoscopy has been shown to detect additional pathology in 32% of patients with confirmed positive urinary cytology when conventional tests showed no evidence of disease (PUBMED:19076151). This suggests that photodynamic diagnosis using fluorescence cystoscopy is a key step in managing patients with positive urinary cytology and no evidence of disease on conventional tests.
Additionally, fluorescence cystoscopy with hexaminolevulinate (HAL) has been reported to have a higher carcinoma in situ (CIS) detection rate than white light cystoscopy, and patients with positive urine cytology had twice as many CIS lesions detected by HAL as patients with negative urine cytology (PUBMED:21396840).
In an outpatient setting, blue-light flexible cystoscopy was able to diagnose bladder cancer in 41% of cases where there were positive or suspicious urine cytology findings despite normal white-light flexible cystoscopy results, with 48% of malignant lesions seen only in blue light (PUBMED:34037496).
Therefore, the evidence supports the use of fluorescence cystoscopy as an important diagnostic tool in cases where there is a discrepancy between positive urine cytology and negative findings on white-light cystoscopy. |
Instruction: Is participation in food and income assistance programmes associated with obesity in California adults?
Abstracts:
abstract_id: PUBMED:20701819
Is participation in food and income assistance programmes associated with obesity in California adults? Results from a state-wide survey. Objective: Public assistance programmes may increase risk of obesity among adults. The current study assessed whether participation in the Supplemental Nutrition Assistance Program (SNAP; formerly the Food Stamp Program), Supplemental Security Income (SSI) or California Work Opportunities and Responsibilities to Kids (CalWorks) was associated with obesity, independent of socio-economic status and food insecurity.
Design: A cross-sectional analysis of the 2007 Adult California Health Interview Survey. Outcome measures included BMI and obesity. Distribution of BMI and prevalence of obesity were compared by participation in each programme, using weighted linear and binomial regression models in which BMI or obesity was the outcome, respectively, and programme participation was the predictor.
Setting: A population survey of various health measures.
Subjects: Non-institutionalized adults (n 7741) whose household income was ≤130% of the federal poverty level.
Results: The prevalence of obesity was 27.4%. After adjusting for sociodemographic characteristics, food insecurity and participation in other programmes, the prevalence of obesity was 30% higher in SNAP participants (95% CI 6%, 59%; P=0.01) than in non-participants. This association was more pronounced among men than women. SSI participation was related to an adjusted 50% higher prevalence of obesity (95% CI 27%, 77%; P<0.0001) compared with no participation. SNAP and SSI participants also reported higher soda consumption than non-participants of any programme. CalWorks participation was not associated with obesity after multivariable adjustment.
Conclusions: Participation in SNAP or SSI was associated with obesity independent of food insecurity or socio-economic status. The suggestion that these associations may be mediated by dietary quality warrants further investigation among low-income populations.
abstract_id: PUBMED:35199832
Food insecurity and ultra-processed food consumption: the modifying role of participation in the Supplemental Nutrition Assistance Program (SNAP). Background: Ultra-processed foods contribute to risks of obesity and cardiometabolic disease, and higher intakes have been observed in low-income populations in the United States. Consumption of ultra-processed foods may be particularly higher among individuals experiencing food insecurity and participating in the Supplemental Nutrition Assistance Program (SNAP).
Objectives: Using data from the 2007-2016 NHANES, we examined the associations between food insecurity, SNAP participation, and ultra-processed food consumption.
Methods: The study population comprised 9190 adults, aged 20-65 y, with incomes ≤300% of the federal poverty level (FPL). Food insecurity was assessed using the Household Food Security Survey Module and SNAP participation over the past 12 mo was self-reported. Dietary intake was measured from two 24-h dietary recalls. Ultra-processed food consumption (percentage of total energy intake) was defined using the NOVA food classification system. Linear regression models were used to examine the associations between food insecurity, SNAP participation, and ultra-processed food consumption, adjusting for sociodemographic and health characteristics.
Results: More severe food insecurity was associated with higher intakes of ultra-processed foods (P-trend = 0.003). The adjusted means of ultra-processed food intake ranged from 52.6% for adults with high food security to 55.7% for adults with very low food security. SNAP participation was also associated with higher intakes of ultra-processed foods (adjusted mean: 54.7%), compared with income-eligible participants (adjusted mean: 53.0%). Furthermore, the association between food insecurity and ultra-processed foods was modified by SNAP participation (P-interaction = 0.02). Among income-eligible nonparticipants and income-ineligible nonparticipants, more severe food insecurity was associated with higher consumption of ultra-processed foods. Among SNAP participants, the association between food insecurity and consumption of ultra-processed foods was nonsignificant.
Conclusion: In a nationally representative sample of adults, food insecurity and SNAP participation were both associated with higher levels of ultra-processed food consumption.
abstract_id: PUBMED:26039391
Association between food assistance program participation and overweight. Objective: To investigate the association between food assistance program participation and overweight/obesity according to poverty level. Methods: A cross-sectional analysis of data from 46,217 non-pregnant and non-lactating women in Lima, Peru was conducted; these data were obtained from nationally representative surveys from the years 2003, 2004, 2006, and 2008-2010. The dependent variable was overweight/obesity, and the independent variable was food assistance program participation. Poisson regression was used to stratify the data by family socioeconomic level, area of residence (Lima versus the rest of the country; urban versus rural), and survey year (2003-2006 versus 2008-2010). The models were adjusted for age, education level, urbanization, and survey year. Results: Food assistance program participation was associated with an increased risk of overweight/obesity in women living in homes without poverty indicators [prevalence ratio (PR) = 1.29; 95% confidence interval (CI) 1.06-1.57]. When stratified by area of residence, similar associations were observed for women living in Lima and urban areas; no associations were found between food assistance program participation and overweight/obesity among women living outside of Lima or in rural areas, regardless of the poverty status. Conclusions: Food assistance program participation was associated with overweight/obesity in non-poor women. Additional studies are required in countries facing both aspects of malnutrition.
abstract_id: PUBMED:18462559
Food Stamp Program participation but not food insecurity is associated with higher adult BMI in Massachusetts residents living in low-income neighbourhoods. Objective: Food-insecure populations employ multiple strategies to ensure adequate household food supplies. These strategies may increase the risk of overweight and obesity. However, existing literature reports conflicting associations between these strategies and BMI. The objective of the present study was to examine whether food insecurity and strategies for managing food insecurity are associated with BMI in adults.
Design, Setting And Subjects: In 2005, RTI International and Project Bread conducted a representative survey of 435 adult residents of low-income census tracts in Massachusetts. Food insecurity was assessed using the US Department of Agriculture's eighteen-item Household Food Security Module.
Results: The prevalence of overweight and obesity was 51% and 25%, respectively. After adjusting for age, sex, sociodemographic characteristics and food insecurity, both participation in the Food Stamp Program (FSP) and participation in any federal nutrition programme 12 months prior to the survey were each associated with an approximate 3.0 kg/m2 higher adult BMI. In the subset of current FSP participants (n 77), participation for ≥6 months was associated with an 11.3 kg/m2 lower BMI compared with participation for <6 months. Respondents who consumed fast foods in the previous month had a mean BMI that was 2.4 kg/m2 higher than those who did not. Food insecurity was not associated with BMI after adjustment for sociodemographic characteristics and FSP participation.
Conclusions: Participation in federal nutrition programmes and consumption of fast food were each associated with higher adult BMI independent of food insecurity and other sociodemographic factors. However, prolonged participation in the FSP was associated with lower BMI.
abstract_id: PUBMED:26603573
Household food insecurity as a determinant of overweight and obesity among low-income Hispanic subgroups: Data from the 2011-2012 California Health Interview Survey. An estimated 78% of Hispanics in the United States (US) are overweight or obese. Household food insecurity, a condition of limited or uncertain access to adequate food, has been associated with obesity rates among Hispanic adults in the US. However, the Hispanic group is multi-ethnic and therefore associations between obesity and food insecurity may not be constant across Hispanic country of origin subgroups. This study sought to determine if the association between obesity and food insecurity among Hispanics is modified by Hispanic ancestry across low-income (≤200% of poverty level) adults living in California. Data are from the cross-sectional 2011-12 California Health Interview Survey (n = 5498). Rates of overweight or obesity (BMI ≥ 25), Calfresh receipt (California's Supplemental Nutrition Assistance Program), and acculturation were examined for differences across subgroups. Weighted multiple logistic regressions examined if household food insecurity was significantly associated with overweight or obesity and modified by country of origin after controlling for age, education, marital status, country of birth (US vs. outside of US), language spoken at home, and Calfresh receipt (P < .05). Significant differences across subgroups existed for prevalence of overweight or obesity, food security, Calfresh receipt, country of birth, and language spoken at home. Results from the adjusted logistic regression models found that food insecurity was significantly associated with overweight or obesity among Mexican-American women (β (SE) = 0.22 (0.09), p = .014), but not Mexican-American men or Non-Mexican groups, suggesting Hispanic subgroups behave differently in their association between food insecurity and obesity. By highlighting these factors, we can promote targeted obesity prevention interventions, which may contribute to more effective behavior change and reduced chronic disease risk in this population.
abstract_id: PUBMED:23364918
Food insecurity, food assistance and weight status in US youth: new evidence from NHANES 2007-08. Objective: To investigate food assistance participation as a risk factor for overweight and obesity in youth, and food insecurity as an effect modifier.
Methods: The sample included youth ages 4-17, in families ≤200% of the federal poverty line in the 2007-2008 National Health and Nutrition Examination Survey (n = 1321). Food insecurity was measured with the US Department of Agriculture survey module. Food assistance participation was assessed for Supplemental Nutrition Assistance Program, Special Supplemental Nutrition Program for Women, Infants and Children and school meals. Body size was classified by age- and sex-specific body mass index (BMI) percentile, BMI z-score and waist circumference percentile. Regression models with direct covariate adjustment and programme-specific propensity scores, stratified by food insecurity, estimated associations between food assistance participation and body size.
Results: Food assistance participation was not associated with increased body size among food-insecure youth in models with direct covariate adjustment or propensity scores. Compared with low-income, food-secure youth not participating in food assistance, BMI z-scores were higher among participants in models with direct covariate adjustment (0.27-0.38 SD and 0.41-0.47 SD, for boys and girls, respectively). Using propensity scores, results were similar for boys, but less so for girls.
Conclusions: Food assistance programme participation is associated with increased body size in food-secure youth, but not food-insecure youth. Using both direct covariate adjustment and a propensity score approach, self-selection bias may explain some, but not all, of the associations. Providing healthy food assistance that improves diet quality without contributing to excessive intake remains an important public health goal.
abstract_id: PUBMED:17374668
Participation in food assistance programs modifies the relation of food insecurity with weight and depression in elders. The relation of food insecurity in elders with outcomes such as overweight and depression, and the influence of participation in food assistance programs on these relations, has not been established. The aim of this study was to examine the relation between food insecurity and weight and depression in elders, and determine whether participation in food assistance programs modifies the effect of food insecurity on weight and depression. Two longitudinal data sets were used: the Health and Retirement Study (1996-2002) and the Asset and Health Dynamics Among the Oldest Old (1995-2002). The relation of food insecurity and participation in food assistance programs was assessed by multilevel linear regression analysis. Food insecurity was positively related to weight and depression among elders. Some analyses supported that food-insecure elders who participated in food assistance programs were less likely to be overweight and depressed than those who did not participate in food assistance programs. This finding implies that food assistance programs can have both nutritional and non-nutritional impacts. The positive impact of participation in food assistance programs of reducing or preventing poor outcomes resulting from food insecurity will improve elders' quality of life, save on their healthcare expenses, and help to meet their nutritional needs.
abstract_id: PUBMED:24530319
Supplemental Nutrition Assistance Program participation did not help low income Hispanic women in Texas meet the dietary guidelines. Objective: Low-income Hispanic women are at greater risk for dietary deficiencies and obesity. We assessed the association between Supplemental Nutrition Assistance Program participation and dietary intake among 661 Hispanic women aged 26-44 years living in Texas.
Methods: Cross-sectional data was collected using standard methods. Analysis of variance and logistic regression examined the influence of Supplemental Nutrition Assistance Program on diet after adjusting for household characteristics, body mass index, and food security status.
Results: Most women did not meet recommended dietary guidelines. Supplemental Nutrition Assistance Program participants consumed higher amounts of total sugars, sweets-desserts, and sugar-sweetened beverages than Supplemental Nutrition Assistance Program nonparticipants. High sodium intakes and low dairy consumption were observed in both groups. Only 27% of low-income eligible women received Supplemental Nutrition Assistance Program benefits.
Discussion: Low-income Hispanic women participating in Supplemental Nutrition Assistance Program reported less healthful dietary patterns than nonparticipants. This may contribute to the increased obesity prevalence and related comorbidities observed in this population.
Conclusion: Supplemental Nutrition Assistance Program should play an important role in enhancing the overall dietary quality of low-income households. Policy initiatives such as limiting the purchase of sugar-sweetened beverages and education to enable women to reduce consumption of high sodium processed foods deserve consideration as means to improve the dietary quality of Supplemental Nutrition Assistance Program participants. Effective measures are needed to increase Supplemental Nutrition Assistance Program participation rates among Hispanics.
abstract_id: PUBMED:28820019
The Supplemental Nutrition Assistance Program and frequency of sugar-sweetened soft drink consumption among low-income adults in the US. Background: The Supplemental Nutrition Assistance Program (SNAP) was designed to help low-income people purchase nutritious foods in the US. In recent years, there has been a consistent call for banning purchases of sugar drinks in SNAP.
Aim: The aim of this study was to examine the association between SNAP participation and the frequency of sugar-sweetened soft drink (SSD) consumption among low-income adults in the US.
Method: Data came from the 2009-2010 National Health and Nutrition Examination Survey. Low-income adults aged ≥20 years with a household income ≤250% of the Federal Poverty Level (N = 1200) were categorized into two groups based on the household's SNAP receipt: SNAP recipients (n = 393) and non-recipients (n = 807). Propensity-score matching was used to minimize observable differences between these two groups that may explain the difference in SSD consumption, generating the final sample of 393 matched pairs (SNAP recipients, n = 393; non-recipients, n = 393). An ordinal logistic regression was conducted on the matched sample.
Results: SNAP recipients were more likely to report higher levels of SSD consumption, compared with non-recipients (adjusted odds ratio (AOR) = 1.55, 95% confidence interval (CI) = 1.17-2.07). Male gender (AOR = 1.69, 95% CI = 1.17-2.46), younger age (AOR = 0.97, 95% CI = 0.96-0.99), lower education level (AOR = 2.28, 95% CI = 1.33-3.89), and soda availability in homes (AOR = 2.24, 95% CI = 1.77-2.83) were also associated with higher levels of SSD consumption among low-income adults.
Conclusions: SNAP participation was associated with frequent SSD consumption. To reduce SSD consumption, strategic efforts need to focus on educating people about the harms of SSD and promoting nutritious food choices with SNAP benefits.
abstract_id: PUBMED:12840184
Food stamp program participation is positively related to obesity in low income women. This study examined the relationship between Food Stamp Program (FSP) participation and the obesity of low income individuals using data from the National Longitudinal Survey of Youth 1979. Obesity was defined as body mass index ≥ 30 kg/m2. The data were arranged as a panel with multiple observations per individual, and the models of obesity included current and long-term FSP participation, additional demographic, socioeconomic and environmental characteristics and individual fixed effects. Individual fixed effects were used to take into account unobserved differences across individuals that did not vary over time. In ordinary least squares models, current and long-term FSP participation were significantly related to the obesity of low income women (P < 0.05), but not of low income men. For low income women, current participation in the FSP was associated with a 9.1% increase in the predicted probability of current obesity. Participation in the FSP in each of the previous five years compared to no participation over that time period was associated with approximately a 20.5% increase in the predicted probability of current obesity. These models did not control for food insecurity, and this omission potentially complicates the interpretation of the FSP participation variables.
Answer: Yes, participation in food and income assistance programs is associated with obesity in California adults. Specifically, participation in the Supplemental Nutrition Assistance Program (SNAP) and Supplemental Security Income (SSI) was found to be associated with a higher prevalence of obesity among California adults whose household income was ≤130% of the federal poverty level. After adjusting for sociodemographic characteristics, food insecurity, and participation in other programs, SNAP participants had a 30% higher prevalence of obesity compared to non-participants, and this association was more pronounced among men than women. Similarly, SSI participation was related to a 50% higher prevalence of obesity compared to those not participating in the program. Both SNAP and SSI participants also reported higher soda consumption than non-participants of any program. However, participation in California Work Opportunities and Responsibilities to Kids (CalWorks) was not associated with obesity after multivariable adjustment (PUBMED:20701819). |
Instruction: Spinal immobilization on a flat backboard: does it result in neutral position of the cervical spine?
Abstracts:
abstract_id: PUBMED:1854072
Spinal immobilization on a flat backboard: does it result in neutral position of the cervical spine? Study Objectives: To determine the amount of occipital padding required to achieve neutral position of the cervical spine when a patient is immobilized on a flat backboard. Neutral position was defined as the normal anatomic position of the head and torso that one assumes when standing looking straight ahead.
Design: Descriptive with hypothesis testing of selected descriptive elements.
Setting: University campus and hospital.
Subjects: One hundred healthy young adults with no history of back disease.
Interventions: Volunteers were measured in standing and supine positions.
Measurements: Occipital offset; height; weight; and head, neck, and chest circumferences were measured for each subject.
Main Results: The amount of occipital offset required to achieve neutral position varied from 0 to 3.75 in. (mean, 1.5 in.). Mean occipital offset for men (1.67 in.) was significantly greater than that for women (1.31 in.). Easily obtained body measurements did not accurately predict occipital offset.
Conclusion: Immobilization on a flat backboard would place 98% of our study subjects in relative cervical extension. Occipital padding would place a greater percentage of patients in neutral position and increase patient comfort during transport.
abstract_id: PUBMED:27683694
Cervical spine immobilization in the elderly population. Background: Immobilization of the cervical spine is a cornerstone of spinal injury management. In the context of suspected cervical spine injury, patients are immobilized in a 'neutral position' based on the head and trunk resting on a flat surface. It is hypothesized that the increased thoracic kyphosis and loss of cervical lordosis seen in elderly patients may require alternative cervical immobilization, compared with the 'neutral position'.
Methods: To investigate this, an audit of pan-scan CT performed on consecutive major trauma patients aged over 65 years was carried out over a 6-month period. Utilizing the pan-CT's localizing scout film, a novel measurement, the 'chin-brow horizontal' angle, was independently measured by a senior spine surgeon (RJM) and a neurosurgeon (PJR) with the gantry used as a horizontal zero-degree reference. The benefit of the 'chin-brow horizontal' angle in the trauma setting is that it can be assessed from the bedside whilst the patient is immobilized against a flat surface.
Results: During the 6-month study period, 58 patients were identified (30 male, 28 female), with an average age of 77.6 years (minimum 65, maximum 97). The 'chin-brow horizontal' angles varied widely, ranging from +15.8 degrees in flexion to -30.5 degrees in extension (mean -12.4 degrees in extension, standard deviation 9.31 degrees). The interobserver correlation was 0.997 (95% CI: 0.995-0.998).
Conclusions: These findings suggest that, due to degenerative changes commonly seen in elderly patients, the routine use of the 'neutral position' adopted for cervical spine immobilization may not be appropriate in this population. We suggest that consideration be taken in cervical spine immobilization, with patients assessed on an individual basis including the fracture morphology, to minimize the risk of fracture displacement and worsened neurological deficit.
abstract_id: PUBMED:7473965
Pediatric cervical-spine immobilization: achieving neutral position? This study was designed to evaluate prospectively the ability of current spine-immobilization devices to achieve radiographic-neutral positioning of the cervical spine in pediatric trauma patients. All trauma patients who required spinal immobilization and a lateral cervical spine radiograph were included in the study. A lateral cervical spine radiograph was obtained while the child was immobilized. The Cobb angle (C2-C6) was measured using a handheld goniometer. The method of immobilization, age at injury, and Cobb angle were compared. One hundred and eighteen patients with an average age of 7.9 years were enrolled. The majority were males (71%). The most frequent mechanisms of injury included motor vehicle accidents (35%) and falls (32%). The average Glasgow Coma Scale score was 14. Although 31% of the children complained of neck pain, 92% were without neurologic deficits. The Cobb angles ranged from 27 degrees of kyphosis to 27 degrees of lordosis, and only 12 of the patients presented in a neutral position (0 degrees). Greater than 5 degrees of kyphosis or lordosis was observed in 60% of the children. Thirty-seven percent of the patients had 10 degrees or greater angulation. The most frequent methods of immobilization included a collar, backboard, and towels (40%), and a collar, backboard, and blocks (20%), but these techniques provided < 5 degrees of kyphosis or lordosis in only 45% and 26% of the children, respectively. No single method or combination of methods of immobilization consistently placed the children in the neutral position. (ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:32430091
Remaining Cervical Spine Movement Under Different Immobilization Techniques. Background: Immobilization of the cervical spine by Emergency Medical Services (EMS) personnel is a standard procedure. In most EMS, multiple immobilization tools are available. The aim of this study is the analysis of residual spine motion under different types of cervical spine immobilization.
Methods: In this explorative biomechanical study, different immobilization techniques were performed on three healthy subjects. The test subjects' heads were then passively moved to cause standardized spinal motion. The primary endpoints were the remaining range of motion for flexion, extension, bending, and rotation measured with a wireless human motion detector.
Results: In the case of immobilization of the test person (TP) on a straight (0°) vacuum mattress, the remaining rotation of the cervical spine could be reduced from 7° to 3° by additional headblocks. Also, the remaining flexion and extension were reduced from 14° to 3° and from 15° to 6°, respectively. The subjects' immobilization was best on a spine board using a headlock system and the Spider Strap belt system (MIH-Medical; Georgsmarienhütte, Germany). However, the remaining cervical spine extension increased from 1° to 9° if a Speedclip belt system was used (Laerdal; Stavanger, Norway). The additional use of a cervical collar was not advantageous in reducing cervical spine movement with a spine board or vacuum mattress.
Conclusions: The remaining movement of the cervical spine is minimal when the patient is immobilized on a spine board with a headlock system and a Spider Strap harness system or on a vacuum mattress with additional headblocks. The remaining movement of the cervical spine could not be reduced by the additional use of a cervical collar.
abstract_id: PUBMED:31030223
Analysis of cervical spine immobilization during patient transport in emergency medical services. Purpose: It remains controversial how to immobilize the cervical spine (CS) in trauma patients. Therefore, we analyzed different CS immobilization techniques during prehospital patient transport.
Methods: In this explorative, biomechanical analysis of immobilization techniques conducted in a standardized setting, we recorded CS motion during patient transport using a wireless human motion tracker on a volunteer. To interpret spinal movement, a benchmark called motionscore (MS) was developed based on the biomechanics of the injured spine.
Results: We found the best spinal motion restriction using a spine board, head blocks and immobilization straps with and without a cervical collar (CC) (MS 45 vs. 27). Spinal motion restriction on a vacuum mattress with CC and head blocks was superior to no CC or head blocks (MS 103 vs. 152). An inclined vacuum mattress was more effective with head blocks than without (MS 124 vs. 187). Minimal immobilization with an ambulance cot, CC, pillow and tape was slightly superior to a vacuum mattress with CC and head blocks (MS 92 vs. 103). Minimal immobilization without CC showed the lowest spinal motion restriction (MS 517).
Conclusions: We suggest an immobilization procedure customized to the individual situation. A spine board should be used whenever spinal motion restriction is indicated and the utilization is possible. In some cases, CS immobilization by a vacuum mattress with CC and head blocks could be more beneficial. In an unstable status of the patient, minimal immobilization may be performed using an ambulance cot, pillow, CC and tape to minimize time on scene caused by immobilization.
abstract_id: PUBMED:34319432
A two-handed airway maneuver of mandibular advancement and mouth opening in the neutral neck position for immobilization of the cervical spine. Purpose: Immobilization of the cervical spine after trauma is recommended as standard care to prevent secondary injury. We tested the hypothesis that a two-handed airway maneuver, consisting of mandibular advancement and mouth opening in the neutral neck position, would minimize changes in the angle of the cervical vertebrae at the C0/4 level and tidal volume in non-obese patients under anesthesia with neuromuscular blockade.
Methods: Twenty consecutive patients without cervical spine injury undergoing general anesthesia were enrolled and evaluated. The primary variable was change in the angle of the cervical vertebrae at the C0/4 level during mask ventilation using the modified two-handed technique. Secondary variables included changes in the angles of the cervical vertebrae at each level between C0 and C4, anterior movement of the vertebral bodies, change in the angle between the head and neck, change in the pharyngeal airway space, and tidal volume during mask ventilation.
Results: The two-handed airway maneuver of mandibular advancement and mouth opening resulted in statistically significant changes in the angle of the cervical spine at the C0/4 level (3.2 ± 3.0 degrees, P < 0.001) and the C3/4 level (1.4 ± 2.2 degrees, P = 0.01). The two-handed airway maneuver provided adequate mask ventilation without anterior movement of the vertebral bodies.
Conclusion: Our study suggests that a two-handed airway maneuver of mandibular advancement and mouth opening in the neutral neck position results in only slight change in the cervical vertebral angle at the C0/4 level in non-obese patients under general anesthesia with neuromuscular blockade.
abstract_id: PUBMED:11265899
Accuracy of visual determination of neutral position of the immobilized pediatric cervical spine. Background: The definition of neutral position for the immobilized pediatric cervical spine is not well standardized. In this study, we attempted to determine whether 1) physicians and/or paramedics could accurately assess visually if the cervical spine was in a neutral position, 2) the visual assessments of the observers were in agreement, and 3) a radiographic Cobb angle would correlate with the visual determination.
Methods: Children presenting to a pediatric emergency department (ED) in full spinal immobilization were randomly selected (convenience sample) for this prospective study. The emergency physician and transporting paramedic independently determined positioning of the cervical spine. A radiologist, blinded to clinical information, determined Cobb angles from radiographs of the immobilized cervical spines.
Results: Of the 59 children studied, the evaluation of cervical spine position by the physician and paramedic correlated in 88% of the cases. For the 22 children with non-neutral Cobb angles (definition of neutral: between 5 degrees flexion and 5 degrees extension), observers agreed in 100% of the cases. However, in 21 of these cases (95%) the position was observed as neutral.
Conclusions: Although visual determinations of neutral position of the cervical spine by two observers may correlate, radiographic studies demonstrate that neutral position was not achieved in 37% of the cases.
abstract_id: PUBMED:33909105
The position of the head during treatment in the emergency room: an explorative analysis of immobilization of the cervical spine. Background: Immobilization of the cervical spine is a standard procedure in emergency medicine, mostly achieved via a cervical collar. In the emergency room, other forms of immobilization are utilized as cervical collars have certain drawbacks. The present study aimed to provide preliminary data on the efficiency of immobilization in the emergency room by analyzing the residual spinal motion of the patient's head on different kinds of head rests.
Methods: In the present study biomechanical motion data of the cervical spine of a test subject were analyzed. The test subject was placed in a supine position on a mobile stretcher (Stryker M1 Roll-In System, Kalamazoo, MI, USA) wearing a cervical collar (Perfit ACE, Ballerup, Denmark). Three different head rests were tested: standard pillow, concave pillow and cavity pillow. The test subject carried out a predetermined motion protocol: right side inclination, left side inclination, flexion and extension. The residual spinal motion was recorded with wireless motion trackers (inertial measurement unit, Xsens Technologies, Enschede, The Netherlands). The first measurement was performed without a cervical collar or positioning on the pillows to measure the physiological baseline motion. Subsequently, three measurements were taken with the cervical collar applied and the pillows in place. From these measurements, a motion score was calculated that can represent the motion of the cervical spine.
Results: When the test subject's head was positioned on a standard pillow, the physiological motion score was reduced from 69 to 40. When the test subject's head was placed on a concave pillow, the motion score was reduced from 69 to 35. When the test subject's head was placed on a cavity pillow, the motion score was reduced from 69 to 59. The observed differences in the overall motion score of the cervical spine are mainly due to reduced flexion and extension rather than rotation or lateral inclination.
Conclusion: The motion score of the cervical spine using motion sensors can provide important information for future analyses. The results of the present study suggest that trauma patients can be immobilized in the early trauma phase with a cervical collar and a head rest. The application of a cervical collar combined with positioning on a concave pillow may achieve good immobilization of the cervical spine in trauma patients in the early trauma phase.
abstract_id: PUBMED:8780473
Optimal positioning for cervical immobilization. Study Objective: We hypothesized that optimal positioning of the head and neck to protect the spinal cord during cervical spine immobilization can be determined with reference to external landmarks. In this study we sought to determine the optimal position for cervical spine immobilization using magnetic resonance imaging (MRI) and to define this optimal position in a clinically reproducible fashion.
Methods: Our subjects were 19 healthy adult volunteers (11 women, 8 men). In each, we positioned the head to produce various degrees of neck flexion and extension. This positioning was followed by quantitative MRI of the cervical spine.
Results: The mean ratio of spinal canal and spinal cord cross-sectional areas was smallest at C6 but exceeded 2.0 at all levels from C2 to T1 (P < .05). At the C5 and C6 levels, the maximal area ratio was most consistently obtained with slight flexion (cervical-thoracic angle of 14 degrees) (P < .05). For a patient lying flat on a backboard, this corresponds to raising the occiput 2 cm. More extreme flexion or extension produced variable results.
Conclusion: In healthy adults, a slight degree of flexion equivalent to 2 cm of occiput elevation produces a favorable increase in spinal canal/spinal cord ratio at levels C5 and C6, a region of frequent unstable spine injuries.
abstract_id: PUBMED:38086180
Pediatric trauma population spine immobilization during resuscitation: A call for improved guidelines. Introduction: This review aims to evaluate current practices regarding spine immobilization in pediatric trauma patients, assessing their efficacy, reliability, and impact on clinical outcomes to guide future research and improved evidence-based practice guidelines.
Methods: PubMed, ProQuest, Embase, Google Scholar, and Cochrane were queried for studies pertaining to spinal immobilization practices in pediatric trauma patients. Articles were separated into studies that explored the efficacy and reliability of immobilization techniques and studies that explored clinical outcomes of spine immobilization. Outcomes evaluated included frequency of spinal imaging, self-reported pain level, emergency department length of stay (ED-LOS), and ED disposition.
Results: Six articles were included, with two studies examining clinical outcomes and four studies evaluating the efficacy and reliability of immobilization techniques. Immobilized children were significantly more likely to undergo cervical spine imaging (OR 8.2, p < 0.001), to be admitted to the floor (OR 4.0, p < 0.001), to be taken to the ICU or OR (OR 5.3, p < 0.05), and to report a higher median pain score. Older children were significantly more likely to be immobilized. No immobilization techniques consistently achieved neutral positioning, and patients most often presented in a flexed position. Lapses in immobilization occurred in 71.4% of patients.
Conclusion: Immobilized pediatric patients underwent more cervical radiographs, had higher hospital and ICU admission rates, and had higher mean pain scores than those without immobilization. Immobilization was inconsistent across age groups and often resulted in lapses and improper alignment. Further research is needed to identify the most appropriate immobilization techniques for pediatric patients and when to use them.
Answer: Spinal immobilization on a flat backboard does not consistently result in a neutral position of the cervical spine. A study found that immobilization on a flat backboard would place 98% of the subjects in relative cervical extension, and the amount of occipital offset required to achieve a neutral position varied significantly among individuals (PUBMED:1854072). This suggests that a flat backboard without proper adjustments does not accommodate the natural curvature of the cervical spine for most people.
Additionally, in the elderly population, the routine use of the 'neutral position' for cervical spine immobilization may not be appropriate due to degenerative changes, such as increased thoracic kyphosis and loss of cervical lordosis, which are common in this age group (PUBMED:27683694). Similarly, pediatric patients often did not achieve a radiographically neutral position with current spine-immobilization devices, with only 12 out of 118 patients presenting in a neutral position (PUBMED:7473965).
Furthermore, different immobilization techniques have been analyzed, and it was found that the remaining movement of the cervical spine is minimal when the patient is immobilized on a spine board with a headlock system and a Spider Strap harness system or on a vacuum mattress with additional headblocks. However, the additional use of a cervical collar did not reduce cervical spine movement when used with a spine board or vacuum mattress (PUBMED:32430091).
In conclusion, spinal immobilization on a flat backboard without additional adjustments or equipment does not typically result in a neutral position of the cervical spine, and individual assessment and adjustment are necessary to achieve proper immobilization (PUBMED:1854072; PUBMED:27683694; PUBMED:7473965; PUBMED:32430091). |
Instruction: Is knowledge a barrier to implementing low back pain guidelines?
Abstracts:
abstract_id: PUBMED:18373587
Is knowledge a barrier to implementing low back pain guidelines? Assessing the knowledge of Israeli family doctors. Objectives: To measure knowledge of Israeli low back pain (LBP) clinical practice guidelines among different subgroups of primary care doctors, prior to designing an intervention programme to enhance guideline adherence in practice.
Study Design: Confidential mailed survey questionnaire.
Setting: Family practices in the Haifa and western Galilee district, Israel.
Participants: Random sample of 163 primary care doctors. A total of 134 doctors (82%) completed the questionnaire.
Main Outcome Measures: A Multiple Choice Questionnaire measuring knowledge of the LBP guidelines. Instrument reliability and inter-item reliability were tested in a pilot phase. Content validity was assured by having the Israeli LBP guideline authors involved in a consensus procedure.
Results: Distribution of test scores significantly differentiated professional levels and background variables, demonstrating the instrument's reliability. Cronbach's alpha was above 0.91. The average test score was 67.7 [standard deviation (SD) 16.2]; family doctors had average scores of 75.2 (SD 9.8), general practitioners (GPs) 57.9 (SD 19), and family practice residents 67.4 (SD 13.2). The difference between the average test scores of family doctors, GPs and residents was significant (P < 0.001). Significant differences were also found for specific variables including the doctor's age, country of medical training and self-reported familiarity with the LBP guidelines.
Conclusions: Striking differences exist between subgroups of primary care doctors regarding their knowledge of LBP guidelines. These differences will require the design of multiple interventions tailored to each subgroup.
abstract_id: PUBMED:33461555
Knowledge of and adherence to radiographic guidelines for low back pain: a survey of chiropractors in Newfoundland and Labrador, Canada. Background: Low back pain (LBP) rarely requires routine imaging of the lumbar spine in the primary care setting, as serious spinal pathology is rare. Despite evidence-based clinical practice guidelines recommending delaying imaging in the absence of red flags, chiropractors commonly order imaging outside of these guidelines. The purpose of this study was to survey chiropractors to determine their level of knowledge of, adherence to, and beliefs about clinical practice guidelines related to the use of lumbar radiography for LBP in Newfoundland and Labrador (NL), Canada.
Methods: A cross-sectional survey of chiropractors in NL (n = 69) was conducted between May and June 2018, including questions on demographics, awareness of radiographic guidelines, and beliefs about radiographs for LBP. We assessed behavioural simulation using clinical vignettes to determine levels of adherence to LBP guideline recommendations.
Results: The response rate was 77% (n = 53). Half of the participants stated they were aware of current radiographic guideline recommendations, and one quarter of participants indicated they did not use guidelines to inform clinical decisions. The majority of participants agreed that x-rays of the lumbar spine are useful for patients with suspected pathology, are indicated when a patient is non-responsive to 4 weeks of conservative treatment for LBP, and when there are neurological signs associated with LBP. However, a small proportion indicated that there is a role for full spine x-rays (~ 21%), x-rays to evaluate patients with acute LBP (~ 13%), and that patient expectations play a role in decision making (4%). Adherence rate to radiographic guidelines measured using clinical vignettes was 75%.
Conclusions: While many chiropractors in this sample reported being unsure of specific radiographic guidelines, the majority of respondents adhered to guideline recommendations measured using clinical vignettes. Nonetheless, a small proportion still hold beliefs about radiographs for LBP that are discordant with current radiographic guidelines. Future research should aim to determine barriers to guideline uptake in this population in order to design and evaluate tailored knowledge translation strategies to reduce unnecessary LBP imaging.
abstract_id: PUBMED:24604903
Physical therapists' clinical knowledge of multidisciplinary low back pain treatment guidelines. Background: Numerous clinical practice guidelines (CPGs) have been developed to assist clinicians in care options for low back pain (LBP). Knowledge of CPGs has been marginal across health-related professions.
Objective: The aims of this study were: (1) to measure US-based physical therapists' knowledge of care recommendations associated with multidisciplinary LBP CPGs and (2) to determine which characteristics were associated with more correct responses.
Design: A cross-sectional survey was conducted.
Methods: Consenting participants attending manual therapy education seminars read a clinical vignette describing a patient with LBP and were asked clinical decision-making questions regarding care, education, and potential referral. Descriptive statistics illustrated response accuracy, and binary logistic regression determined adjusted associations between predictor variables and appropriate decisions.
Results: A total of 1,144 of 3,932 surveys were eligible for analysis. Correct responses were 55.9% for imaging, 54.7% for appropriate medication, 62.0% for advice to stay active, 92.7% for appropriate referral with failed care, and 16.6% for correctly answering all 4 questions. After adjustment, practicing in an outpatient facility was significantly associated with a correct decision on imaging. Female participants were more likely than male participants to correctly select proper medications, refer the patient to another health care professional when appropriate, and answer all 4 questions correctly. Participants reporting caseloads of greater than 50% of patients with LBP were more likely to select proper medications, give advice to stay active, and answer all 4 questions correctly. Participants attending more continuing education were more likely to give advice to stay active, and older, more experienced participants were more likely to appropriately refer after failed care.
Limitations: There was potential selection bias, which limits generalizability.
Conclusions: The survey identified varied understanding of CPGs when making decisions that were similar in recommendation to the CPGs. No single predictor for correct responses for LBP CPGs was found.
abstract_id: PUBMED:33998767
What does it take to facilitate the integration of clinical practice guidelines for the management of low back pain into practice? Part 2: A strategic plan to activate dissemination. Low back pain (LBP) is the leading cause of disability worldwide among all musculoskeletal disorders despite an intense focus in research efforts. Researchers and decision makers have produced multiple clinical practice guidelines for the rehabilitation of LBP, which contain specific recommendations for clinicians. Adherence to these recommendations may have several benefits, such as improving the quality of care for patients living with LBP, by ensuring that the best evidence-based care is being delivered. However, clinicians' adherence to recommendations from these guidelines is low and numerous implementation barriers and challenges, such as complexity of information and sheer volume of guidelines have been documented. In a previous paper, we performed a systematic review of the literature to identify high-quality clinical practice guidelines on the management of LBP, and developed a concise yet comprehensive infographic that summarizes the recommendations from these guidelines. Considering the wealth of scientific evidence, passive dissemination alone of this research knowledge is likely to have limitations to help clinicians implement these recommendations into routine practice. Thus, an active and engaging dissemination strategy, aimed at improving the implementation and integration of specific recommendations into practice is warranted. In this paper, we argue that a conceptual framework, such as the theoretical domains framework, could facilitate the implementation of these recommendations into clinical practice. Specifically, we present a systematic approach that could serve to guide the development of a theory-informed knowledge translation intervention as a means to overcome implementation challenges in rehabilitation of LBP.
abstract_id: PUBMED:24491612
Physical therapist vs. family practitioner knowledge of simple low back pain management in the U.S. Air Force. The purpose of this study was to compare knowledge in managing low back pain (LBP) between physical therapists and family practice physicians. Fifty-four physical therapists and 130 family practice physicians currently serving in the U.S. Air Force completed standardized examinations assessing knowledge, attitudes, the usefulness of clinical practice guidelines, and management strategies for patients with LBP. Beliefs of physical therapists and family practice physicians about LBP were compared using relative risks and independent t tests. Scores related to knowledge, attitudes, and the usefulness of clinical practice guidelines were generally similar between the groups. However, physical therapists were more likely to recommend the correct drug treatments for patients with acute LBP compared to family practice physicians (85.2% vs. 68.5%; relative risk: 1.24 [95% confidence interval: 1.06-1.46]) and believe that patient encouragement and explanation is important (75.9% vs. 56.2%; relative risk: 1.35 [95% confidence interval: 1.09-1.67]). In addition, physical therapists showed significantly greater knowledge regarding optimal management strategies for patients with LBP compared to family practice physicians. The results of this study may have implications for health policy decisions regarding the utilization of physical therapists to provide care for patients with LBP without a referral.
abstract_id: PUBMED:23047043
Therapist knowledge, adherence and use of low back pain guidelines to inform clinical decisions--a national survey of manipulative and sports physiotherapists in New Zealand. Identifying factors which influence guideline-informed clinical decisions by therapists will help tailor implementation strategies to improve guideline use. The aims of this study were to investigate the extent to which current physiotherapy practice in New Zealand adheres to low back pain (LBP) guidelines and the factors which influence the use of guidelines to inform clinical decisions for patients with non-specific low back pain (NSLBP). A cross-sectional on-line survey of NZ physiotherapists (n = 1039) was conducted which included the guideline adherence measures, therapists' treatment orientation about NSLBP and a question on the perceived helpfulness of guidelines in decisions for patients with NSLBP. Data from 170 physiotherapists were analysed descriptively, and univariate and multivariate associations were conducted for therapist factors (predictor variables) which predicted guidelines being helpful in decisions for management of patients with NSLBP (Y|N). The majority of respondents provided advice which was broadly in line with guideline recommendations [work (60%), activity (87.6%), and bed rest (63%)]. A lower biomedical belief orientation for LBP, higher reported LBP caseload and postgraduate qualifications demonstrated significant univariate associations (P ≤ 0.20) for guidelines being helpful to inform decisions for a patient with NSLBP. The only significant (P = 0.043) predictor variable in the multivariate model was the therapists' biomedical treatment orientation (Exp (B): odds ratio: 1.56). Differences between behaviours and beliefs in guideline use were found. A lower focus on a biomedical model for LBP influenced usage of LBP guidelines to inform clinical decisions for patients with LBP. Implications for improving guideline usage are discussed.
abstract_id: PUBMED:19564770
Orthopaedists' and family practitioners' knowledge of simple low back pain management. Study Design: Comparative knowledge survey.
Objective: This study compared the knowledge of orthopaedic surgeons and family practitioners in managing simple low back pain (LBP) with reference to currently published guidelines.
Summary Of Background Data: LBP is the most prevalent of musculoskeletal conditions. It affects nearly everyone at some point in time and about 4% to 33% of the population at any given point. Treatment guidelines for LBP should be based on evidence-based medicine and updated to improve patient management and outcome. Studies in various fields have assessed the impact of publishing guidelines on patient management, but little is known about the physicians' knowledge of the guidelines.
Methods: Orthopedic surgeons and family practitioners participating in their annual professional meetings were requested to answer a questionnaire regarding the management of simple low back pain. Answers were scored based on the national guidelines for management of low back pain.
Results: One hundred forty family practitioners and 253 orthopaedists responded to the questionnaire. The mean family practitioners' score (69.7) was significantly higher than the orthopaedists' score (44.3) (P < 0.0001). No relation was found between the results and physician demographic factors, including seniority. Most orthopaedists incorrectly responded that they would send their patients for radiologic evaluations. They would also preferentially prescribe cyclo-oxygenase-2-specific nonsteroidal anti-inflammatory drugs, despite the guidelines recommendations to use paracetamol or nonspecific nonsteroidal anti-inflammatory drugs. Significantly less importance was attributed to patient encouragement and reassurance by the orthopaedists as compared with family physicians.
Conclusion: Both orthopaedic surgeons' and family physicians' knowledge of treating LBP is deficient. Orthopedic surgeons are less aware of current treatment than family practitioners. Although the importance of publishing guidelines and keeping them up-to-date and relevant for different disciplines in different countries cannot be overstressed, disseminating the knowledge to clinicians is also very important to ensure good practice.
abstract_id: PUBMED:33111468
Tailored training for physiotherapists on the use of clinical practice guidelines: A mixed methods study. Introduction: Clinical practice guidelines (CPG) are vehicles for translating evidence into practice, but effective CPG-uptake requires targeted training. This mixed methods research project took a staged evidence-based approach to develop and test a tailored training programme (TTP) that addressed organisational and individual factors influencing CPG-uptake by South African physiotherapists treating patients with low back pain in primary healthcare settings.
Methods: This multi-stage mixed methods study reports the development, contextualisation and expert content validation of a TTP to improve CPG-uptake. Finally, the TTP was evaluated for its feasibility and acceptability in its current format.
Results: The TTP (delivered online and face-to-face) contained minimal theory, focusing instead on practical activities, clinical scenarios and discussions. Pre-TTP, physiotherapists expressed skepticism about the relevance of CPG in daily practice. However, post-TTP they demonstrated improved knowledge, confidence, and commitment to CPG-uptake.
Discussion: The phased construction of the TTP addressed South African primary healthcare physiotherapists' needs and concerns, using validated evidence-based educational approaches. The TTP content, delivered by podcasts and face-to-face contact, was feasible and acceptable in terms of physiotherapists' time constraints, and it appeared to be effective in improving all outcome domains. This TTP is now ready for delivery to a wider audience.
abstract_id: PUBMED:16012543
Knowledge, practice and attitudes to back pain among doctors, physiotherapists and chiropractors Background: In Norway, only doctors, physiotherapists and chiropractors are authorised to examine and treat patients suffering from low back pain. This study compares knowledge, attitudes and practice among these professional groups.
Material And Methods: All 1105 doctors, physiotherapists and chiropractors in three Norwegian counties received a questionnaire with one section about knowledge, one about the action that respondents would recommend for various diagnoses, and one focused on attitudes towards back pain.
Results And Interpretations: The chiropractors had the largest number of back pain patients in their practice and expressed the highest degree of professional interest in the field. We found no essential differences regarding knowledge between the groups. 77% of the physiotherapists would refer acute sciatica to a doctor, while only 24% of the chiropractors would do so. 65% of the doctors and 10% of the chiropractors would refer a chronic low back pain patient for physical therapy. Doctors and physiotherapists have a longstanding tradition of mutual cooperation, while chiropractors only seldom seem to refer to other care providers. One in five doctors and chiropractors, and 13% of the physiotherapists, refer a patient with acute unspecific low back pain for plain X-ray, which is not in accordance with the guidelines. Very few health care providers regarded low back pain as disabling. There are also few professionals who believe it possible to find an exact cause for the back pain in the individual patient.
abstract_id: PUBMED:21811035
Use of low-back pain guidelines by occupational therapists: a qualitative study of barriers and facilitators. Objective: Identify barriers and facilitators related to the use of low-back pain guidelines as perceived by occupational therapists (OTs).
Methods And Participants: A qualitative approach was used. Nine OTs with varied characteristics were randomly selected. Participants were asked to use the guidelines with at least two clients. Semi-structured interviews were done to identify barriers and facilitators related to use. Transcripts of the interviews were analyzed.
Results: Participants agreed with the guidelines, found them compatible with their current practice, and believed using them would help in preventing persistent disability. Some questioned if their work organization, based on group therapy, would limit their use of the guidelines. Because of the guidelines' biopsychosocial perspective, they anticipated problems regarding client expectations and collaboration with other health professionals.
Conclusions: Participating OTs raised few barriers related to the guidelines themselves. Barriers were mostly related to client expectations, work organization and peers. It appears the guidelines would be easily implemented by OTs, but environmental elements could limit this.
Answer: Yes, knowledge appears to be a barrier to implementing low back pain (LBP) guidelines. Studies have shown that there are significant differences in knowledge of LBP guidelines among various subgroups of primary care doctors, with family doctors scoring higher on average than general practitioners and family practice residents (PUBMED:18373587). Additionally, a survey of chiropractors revealed that while many adhered to guideline recommendations, a small proportion still held beliefs about radiographs for LBP that were discordant with current guidelines (PUBMED:33461555). Similarly, a study of physical therapists found varied understanding of multidisciplinary LBP treatment guidelines, with no single predictor for correct responses (PUBMED:24604903).
Moreover, research has indicated that there are barriers to guideline uptake, such as complexity of information and the volume of guidelines, which necessitate active and engaging dissemination strategies to improve implementation (PUBMED:33998767). Comparisons between physical therapists and family practice physicians have also shown differences in knowledge and management strategies for LBP, with physical therapists demonstrating greater knowledge in some areas (PUBMED:24491612).
Furthermore, a national survey of manipulative and sports physiotherapists in New Zealand found that a lower biomedical belief orientation for LBP, higher reported LBP caseload, and postgraduate qualifications were associated with the perception that guidelines were helpful in informing clinical decisions (PUBMED:23047043). In contrast, orthopaedic surgeons were found to be less aware of current treatment guidelines for LBP than family practitioners (PUBMED:19564770).
A mixed methods study also highlighted the need for tailored training programs to address organizational and individual factors influencing guideline uptake by physiotherapists (PUBMED:33111468). Lastly, a qualitative study identified barriers related to client expectations, work organization, and collaboration with other health professionals as perceived by occupational therapists (PUBMED:21811035).
In summary, knowledge is a barrier to implementing LBP guidelines, and there is a need for targeted interventions and training to improve guideline adherence among healthcare professionals. |
Instruction: Rapid sequence induction in prehospital emergency medicine: is it safe?
Abstracts:
abstract_id: PUBMED:25776045
SPEEDBOMB: a simple and rapid checklist for Prehospital Rapid Sequence Induction. Prehospital emergency medical services often operate in the most challenging and austere environments. Checklist use for complex tasks in these circumstances is useful but must make task completion simpler, faster and more effective. The SPEEDBOMB checklist for Prehospital Rapid Sequence Induction (PRSI) management rapidly addresses critical steps in the RSI process, is designed to improve checklist compliance and patient safety, and is adaptable for local circumstances.
abstract_id: PUBMED:33683378
In-cabin rapid sequence induction: experience from alpine air rescue on reducing prehospital time. The survival of severely injured patients depends on rapid and efficient prehospital treatment. Despite all efforts over the last decades and an improved network of rescue helicopters, the time delay between the accident and admission to the trauma room has not been reduced. A certain proportion of severely injured patients need induction of anesthesia even before arrival in hospital (typically as rapid sequence induction, RSI). Given the medical and technical progress in video laryngoscopy and in the air rescue resources used in German-speaking countries, carrying out induction of anesthesia and airway management in the cabin of the rescue helicopter, i.e. during transport, appears under certain conditions to be an option for reducing prehospital time. The aspects dealt with in this article are essential for safe execution. A procedure that has been tried and trusted for some time is presented as an example; however, in-cabin RSI should only be carried out by pretrained teams using a clear standard operating procedure.
abstract_id: PUBMED:9248907
Prehospital emergency rapid sequence induction of anaesthesia. Objective: To determine the number of and reasons for rapid sequence inductions done by accident and emergency (A&E) doctors out of hospital as part of the activities of the MEDIC 1 Flying Squad. "Rapid sequence induction" was defined as any attempted endotracheal intubation accompanied by use of drugs to assist intubation and ventilation, including opiates, benzodiazepines, intravenous and topical anaesthetics, and neuromuscular blocking drugs.
Methods: Retrospective study of all MEDIC 1 and A&E records over the period 1 February 1993 to 28 February 1996 (37 months). The anaesthetic technique used, drugs used, complications, difficulties, reasons for induction out of hospital, and grade of doctor performing the technique were determined.
Results: Various anaesthetic techniques were used to secure the airway definitively by endotracheal intubation. Several difficulties were encountered in the prehospital setting, all of which were dealt with successfully.
Conclusions: The lack of complications related to rapid sequence induction in prehospital care suggests that this technique is safe when done by A&E doctors on appropriate patients.
abstract_id: PUBMED:35656724
Describing the Challenges of Prehospital Rapid Sequence Intubation by Macintosh Blade Video Laryngoscopy Recordings. Study Objective: Structured review of video laryngoscopy recordings from physician team prehospital rapid sequence intubations (RSIs) may provide new insights into why prehospital intubations are difficult. The aim was to use laryngoscope video recordings to give information on timings, observed features of the airway, laryngoscopy technique, and laryngoscope performance. This was to both describe prehospital airways and to investigate which factors were associated with increased time taken to intubate.
Methods: Sydney Helicopter Emergency Medical Service (HEMS; the aeromedical wing of New South Wales Ambulance, Australia) has a database recording all intubations. The database comprises free-text case detail, airway dataset, scanned case sheet, and uploaded laryngoscope video. The teams of critical care paramedic and doctor use protocol-led intubations with a C-MAC Macintosh size four laryngoscope and intubation adjunct. First-pass intubation rate is approximately 97%. Available video recordings and their database entries were retrospectively analyzed for pre-specified qualitative and quantitative factors.
Results: Prehospital RSI video recordings were available for 385 cases from January 2018 through July 2020. Timings revealed a median of 58 seconds of apnea from laryngoscope entering mouth to ventilations. Median time to intubate (laryngoscope passing lips until tracheal tube inserted) was 35 seconds, interquartile range 28-46 seconds. Suction was required prior to intubation in 29% of prehospital RSIs. Fogging of the camera lens at time of laryngoscopy occurred in 28%. Logistic regression revealed longer time to intubate was associated with airway soiling, Cormack-Lehane Grade 2 or 3, multiple bougie passes, or change of bougie.
Conclusion: Video recordings averaging 35 seconds for first-pass success prehospital RSI with an adjunct give bed-side "definitions of difficulty" of 30 seconds for no glottic view, 45 seconds for no bougie placement, and 60 seconds for no endotracheal tube placement. Awareness of apnea duration can help guide decision making for oxygenation. All emergency intubators need to be cognizant of the need for suctioning. Improving the management of bloodied airways and bougie usage may reduce laryngoscopy duration and be a focus for training. Video screen fogging and missed recordings from some patients may be something manufacturers can address in the future.
abstract_id: PUBMED:9750807
Rapid sequence anesthetic induction via prehospital tracheal intubation. The choice of sedation for emergency intubation remains controversial. This lack of consensus has led to various sedation protocols being used in the French prehospital care setting. A review of the literature suggests that the etomidate-suxamethonium combination is probably the best choice for rapid sequence intubation in the prehospital setting. Its benefits include protection against myocardial and cerebral ischaemia, decreased risk of pulmonary aspiration, and a stable haemodynamic profile. Randomized studies are needed to substantiate the advantages of the etomidate-suxamethonium combination for rapid sequence intubation in the prehospital setting.
abstract_id: PUBMED:11310456
Prehospital rapid sequence induction by emergency physicians: is it safe? Objectives: To determine if there were differences in practice or intubation mishap rate between anaesthetists and accident and emergency physicians performing rapid sequence induction of anaesthesia (RSI) in the prehospital setting.
Methods: All patients who underwent RSI by a Helicopter Emergency Medical Service (HEMS) doctor from 1 May 1997 to 30 April 1999 were studied by retrospective analysis of in-flight run sheets. Intubation mishaps were classified as repeat attempts at intubation, repeat drug administration and failed intubation.
Results: RSI was performed on 359 patients by 10 anaesthetists (202 patients) and nine emergency physicians (157 patients). Emergency physicians recorded a larger number of patients as having Cormack and Lehane grade 3 or 4 laryngoscopy than anaesthetists (p<0.0001) but were less likely to use a gum elastic bougie to assist intubation (p=0.024). Patients treated by emergency physicians did not have a significantly different pulse, blood pressure, oxygen saturation or end tidal CO2 to patients treated by anaesthetists at any time after intubation. Emergency physicians were more likely to anaesthetise patients with a Glasgow Coma Score >12 than anaesthetists (p=0.003). There were two failed intubations (1%) in the anaesthetist group and four (2.5%) in the emergency physician group. Repeat attempts at intubation and repeat drug administration occurred in <2% of each group.
Conclusions: RSI performed by emergency physicians was not associated with a significantly higher failure rate or an increased number of intubation mishaps than RSI performed by anaesthetists. Emergency physicians were able to safely administer sedative and neuromuscular blocking drugs in the prehospital situation. It is suggested that emergency physicians can safely perform rapid sequence induction of anaesthesia and intubation.
abstract_id: PUBMED:30176690
Rapid Sequence Induction. Rapid Sequence Induction and Intubation (RSI or RSII) is a standard technique for emergency airway management and anaesthesia. The aim of RSI is to prevent aspiration by fast endotracheal intubation without the use of facemask ventilation. Today, only a few European countries have specific guidelines for RSI. In daily practice, head-up positioning is standard and provides some advantages compared with other positions. A gastric tube should be left in place; it is not necessary to remove it. If no gastric tube is in place, it can be positioned after intubation. An opioid should be administered prior to RSI since it may reduce the dosage of the hypnotic drug and, therefore, its side effects.
abstract_id: PUBMED:33985541
Rapid sequence induction: where did the consensus go? Background: Rapid Sequence Induction (RSI) was introduced to minimise the risk of aspiration of gastric contents during emergency tracheal intubation. It consisted of induction with the use of thiopentone and suxamethonium with the application of cricoid pressure. This narrative review describes how traditional RSI has been modified in the UK and elsewhere, aiming to deliver safe and effective emergency anaesthesia outside the operating room environment. Most of the key aspects of traditional RSI - training, technique, drugs and equipment have been challenged and often significantly changed since the procedure was first described. Alterations have been made to improve the safety and quality of the intervention while retaining the principles of rapidly securing a definitive airway and avoiding gastric aspiration. RSI is no longer achieved by an anaesthetist alone and can be delivered safely in a variety of settings, including in the pre-hospital environment.
Conclusion: The conduct of RSI in current emergency practice is far removed from the original descriptions of the procedure. Despite this, the principles - rapid delivery of a definitive airway and avoiding aspiration, are still highly relevant and the indications for RSI remain relatively unchanged.
abstract_id: PUBMED:34553502
Mapping haemodynamic changes with rapid sequence induction agents in the emergency department. Objective: Patients intubated in the ED are at an increased risk of post-intubation hypotension. However, evidence regarding the most appropriate induction agent is lacking. The present study aims to describe and compare the haemodynamic effect of propofol, ketamine and thiopentone during rapid sequence induction.
Methods: This is an observational study using data prospectively collected from the Australian and New Zealand Emergency Department Airway Registry between June 2012 and March 2019. The distribution of induction agents across medical and trauma patients were obtained with descriptive statistics. The relationship between induction agent, dose and change in pre- and post-intubation systolic blood pressure (SBP) was described using multivariable logistic regression. The SBP pre- and post-intubation was the primary measure of haemodynamic stability.
Results: From the 5063 intubation episodes, 2229 met the inclusion criteria. Of those, 785 (35.2%) patients were induced with thiopentone, 773 (34.7%) with propofol and 671 (30.1%) with ketamine. Of the included population, 396 (17.8%) patients experienced a reduction in pre-intubation SBP exceeding 20%. Both propofol (P = 0.01) and ketamine (P = 0.01) had an independent and dose-dependent association with hypotension, noting that a higher proportion of patients induced with ketamine had a shock index exceeding 0.9.
Conclusion: Propofol was associated with post-intubation hypotension and it is recommended clinicians consider using the lowest effective dose to reduce this risk. Reflecting its perceived haemodynamic stability, patients who received ketamine were more likely to have a higher shock index; however, there was also an association with post-intubation hypotension.
abstract_id: PUBMED:12454544
Role of rapid sequence induction for intubation in the prehospital setting: helpful or harmful? Use of rapid sequence induction for intubation was introduced to the prehospital environment in the hope of enhancing patient outcome by improving early definitive airway management. Varying success has been achieved in both air and ground transport emergency medical services systems, but concern persists about the potential to cause patients harm. Individual emergency medical services systems must determine the need for rapid sequence induction for intubation and their ability to implement a rapid sequence induction for intubation protocol effectively with minimal adverse events. Therefore, the value of rapid sequence induction for intubation depends on each emergency medical services system's ability to establish personnel requirements and ongoing training, expertise in airway management skills, medical direction and supervision, and a quality assurance program. If these principles are strictly adhered to, rapid sequence induction for intubation may be safely used as an advanced airway management technique in the prehospital setting.
Answer: Rapid sequence induction (RSI) in prehospital emergency medicine has been studied and implemented with varying degrees of success, and the question of its safety is addressed in several of the provided abstracts.
One study suggests that RSI performed by accident and emergency (A&E) doctors in the prehospital setting is safe when done on appropriate patients, as indicated by the lack of complications related to RSI in their retrospective study (PUBMED:9248907). Similarly, another study concludes that RSI performed by emergency physicians was not associated with a significantly higher failure rate or an increased number of intubation mishaps than RSI performed by anaesthetists, suggesting that emergency physicians can safely perform RSI (PUBMED:11310456).
The use of checklists, such as the SPEEDBOMB checklist for Prehospital Rapid Sequence Induction (PRSI), is designed to improve compliance and patient safety, indicating an awareness of the importance of safety measures in the process (PUBMED:25776045). Additionally, the use of video laryngoscopy recordings to review prehospital RSIs by physician teams has provided insights into the challenges and factors associated with increased time taken to intubate, which can help improve safety and training (PUBMED:35656724).
However, there are also concerns about the potential for harm, as noted in a study that discusses the need for individual emergency medical services (EMS) systems to determine their ability to implement RSI protocols effectively with minimal adverse events (PUBMED:12454544). The choice of sedation and the implementation of specific protocols, such as the association of etomidate-suxamethonium, are also discussed in the context of safety and the need for randomized studies to substantiate the advantages of certain drug combinations (PUBMED:9750807).
In summary, the evidence suggests that with proper training, adherence to protocols, and the use of safety measures such as checklists and video laryngoscopy reviews, RSI can be performed safely in the prehospital setting by emergency physicians. However, the safety of RSI also depends on the specific EMS system's design, personnel requirements, ongoing training, medical direction, and quality assurance programs. |
Instruction: Can diffusion-weighted imaging be used to differentiate brain abscess from other ring-enhancing brain lesions?
Abstracts:
abstract_id: PUBMED:22747861
Diffusion weighted MR imaging of ring enhancing brain lesions. Objective: To evaluate the role of diffusion weighted imaging in differentiating the cause of ring enhancing brain lesions.
Study Design: Analytical, descriptive study.
Place And Duration Of Study: Department of Radiology, The Aga Khan University Hospital, Karachi, from March 2007 to July 2011.
Methodology: Diffusion weighted imaging (DWI) was performed on 37 patients with ring enhancing lesions on post-contrast brain MRI scans. These lesions were characterized as neoplastic lesions or abscess cavities on the basis of diffusion restriction. The findings were correlated with histopathology, which was obtained in all patients. Sensitivity, specificity, positive and negative predictive values and diagnostic accuracy of DWI were calculated. Mean ADC values of abscesses and neoplastic lesions were also compared using the t-test.
Results: DWI had a sensitivity of 94.73%, specificity of 94.44%, positive predictive value of 94.73%, negative predictive value of 94.44%, and diagnostic accuracy of 94.5% in differentiating brain abscess from neoplastic brain lesions. Mean ADC values in the central cavity and in the wall of neoplastic lesions and brain abscesses differed significantly, with p-values of 0.001 and 0.025, respectively.
Conclusion: Diffusion weighted imaging is non-invasive method with high sensitivity and specificity which can help in differentiation of ring enhancing neoplastic lesions and brain abscesses. This modality should be read in conjunction with conventional imaging.
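For readers who want to see how diagnostic-accuracy figures like those above are derived, a minimal Python sketch follows. The counts are hypothetical, chosen only so the output lands close to the reported percentages; they are not the study's actual 2x2 table.

# Minimal sketch of diagnostic-accuracy metrics from a 2x2 confusion matrix.
# The counts are illustrative assumptions, not the study's raw data.

def diagnostic_metrics(tp, fn, fp, tn):
    """Return sensitivity, specificity, PPV, NPV and accuracy for a 2x2 table."""
    sensitivity = tp / (tp + fn)               # abscesses correctly showing restricted diffusion
    specificity = tn / (tn + fp)               # neoplastic lesions correctly without restriction
    ppv = tp / (tp + fp)                       # probability of abscess given a positive DWI call
    npv = tn / (tn + fn)                       # probability of neoplasm given a negative DWI call
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, ppv, npv, accuracy

# Hypothetical example: 19 abscesses and 18 neoplastic lesions, one misclassification per group.
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=18, fn=1, fp=1, tn=17)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} PPV={ppv:.2%} NPV={npv:.2%} accuracy={acc:.2%}")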
abstract_id: PUBMED:24933524
Can diffusion-weighted imaging be used to differentiate brain abscess from other ring-enhancing brain lesions? A meta-analysis. Aim: To explore the role of diffusion-weighted imaging (DWI) in the discrimination of brain abscess from other ring-enhancing brain lesions through meta-analysis.
Materials And Methods: The PubMed, OVID, and China National Knowledge Infrastructure (CNKI) databases, from January 1995 to March 2013, were searched for studies evaluating the diagnostic performance of DWI in the discrimination of brain abscess lesions. Using the data collected, pooled sensitivities and specificities across studies were determined, positive and negative likelihood ratios (LR) were calculated, and summary receiver operating characteristic (SROC) curves were constructed.
Results: A total of 11 studies fulfilled all of the inclusion criteria and were considered for the analysis. The pooled sensitivity values and pooled specificity values including 95% confidence intervals (CI) were 0.95 (0.87-0.98) and 0.94 (0.88-0.97). The pooled positive LR (95% CI) was 4.13 (2.55-6.7); the pooled negative LR (95% CI) was 0.01 (0-1.7); and the area under the curve of the symmetric SROC was 0.98.
Conclusions: DWI has high sensitivity and specificity for the differentiation of brain abscess from other intracranial cystic mass lesions.
abstract_id: PUBMED:12485251
Hemorrhagic brain metastases with high signal intensity on diffusion-weighted MR images. A case report. Diffusion-weighted MR imaging has been applicable to the differential diagnosis of abscesses and necrotic or cystic brain tumors. However, restricted water diffusion is not necessarily specific for brain abscess. We describe ring-enhancing metastases of lung carcinoma characterized by high signal intensity on diffusion-weighted MR images. The signal pattern probably reflected intralesional hemorrhage. The present report adds to the growing literature regarding the differential diagnosis of ring-enhancing brain lesions.
abstract_id: PUBMED:16116554
Importance of diffusion-weighted imaging in the diagnosis of cystic brain tumors and intracerebral abscesses. Objective: It is often difficult to decide whether a cystic brain lesion is a tumor or an abscess by means of conventional MRI techniques. The immediate diagnosis of a brain abscess is important for the patient's outcome. Our goal was to study the ability of diffusion-weighted imaging and calculation of the apparent diffusion coefficient (ADC) to differentiate between these two pathologies.
Patients And Methods: Ten patients (five men, five women) with cystic brain lesions were examined with MRI. The ADC maps were calculated for each subject and the ADC value of each lesion was measured. Histology revealed glioblastoma multiforme in six patients and abscess in four patients.
Results: All brain abscesses showed markedly hyperintense signal changes on diffusion-weighted imaging, whereas the appearance of glioblastoma varied from slightly hyperintense to hypointense signal conversion. The mean ADC value in the six patients with cystic brain tumor was 2.05 x 10^-3 mm^2/s (1.38-2.88 x 10^-3 mm^2/s). The mean ADC value in the four patients with brain abscess was 0.57 x 10^-3 mm^2/s (0.38-0.77 x 10^-3 mm^2/s).
Conclusion: Diffusion-weighted imaging and calculation of ADC maps constitute a helpful tool to differentiate between cystic brain tumors and brain abscesses.
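As a hedged illustration of how ADC values like those reported above can be compared between groups, the Python sketch below applies a two-sample t-test. The measurements are hypothetical, chosen only to fall within the reported ranges; they are not the study's data.

# Minimal sketch: comparing ADC values (in 10^-3 mm^2/s) between cystic brain tumors
# and brain abscesses with Welch's two-sample t-test. Values are hypothetical.
from scipy.stats import ttest_ind

adc_tumors = [1.38, 1.72, 1.95, 2.10, 2.31, 2.88]   # six cystic brain tumors
adc_abscesses = [0.38, 0.52, 0.61, 0.77]            # four brain abscesses

t_stat, p_value = ttest_ind(adc_tumors, adc_abscesses, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A low ADC (marked diffusion restriction) favours abscess, while a high ADC favours a
# necrotic or cystic tumor, consistent with the group means reported in the abstract.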
abstract_id: PUBMED:20861784
Restricted diffusion in a ring-enhancing mucoid metastasis with histological confirmation: case report. We present a case of restricted diffusion in a ring-enhancing cerebellar metastasis in a 58-year-old man. Diffusion imaging showed restriction with low apparent diffusion coefficient values within the cavity. A diagnosis of abscess was suggested based on the radiological findings. A suspicious lung nodule was found on systemic evaluation, and histological examination of the brain lesion confirmed metastatic adenocarcinoma with mucoid content, confirmed by further specific pathological tests. We discuss the reason for the diffusion findings and the importance of correct interpretation of this technique in the clinical setting. Our case supports previous hypotheses relating restricted diffusion to mucoid content in metastases.
abstract_id: PUBMED:12450032
Use of diffusion-weighted magnetic resonance imaging in differentiating purulent brain processes from cystic brain tumors. Object: Brain abscesses and other purulent brain processes represent potentially life-threatening conditions for which immediate correct diagnosis is necessary to administer treatment. Distinguishing between cystic brain tumors and abscesses is often difficult using conventional imaging methods. The authors' goal was to study the ability of diffusion-weighted (DW) magnetic resonance (MR) imaging to differentiate between these two pathologies in patients within the clinical setting.
Methods: Diffusion-weighted MR imaging studies and calculation of the apparent diffusion coefficient (ADC) values were completed in a consecutive series of 16 patients harboring surgically verified purulent brain processes. This study group included 11 patients with brain abscess (one patient had an additional subdural hematoma and another also had ventriculitis), two with subdural empyema, two with septic embolic disease, and one patient with ventriculitis. Data from these patients were compared with similar data obtained in 16 patients matched for age and sex, who harbored surgically verified neoplastic cystic brain tumors. In patients with brain abscess, subdural empyema, septic emboli, and ventriculitis, these lesions appeared hyperintense on DW MR images, whereas in patients with tumor, the lesion was visualized as a hypointense area. The ADC values calculated in patients with brain infections (mean 0.68 x 10^-3 mm^2/s) were significantly lower than those measured in patients with neoplastic lesions (mean 1.63 x 10^-3 mm^2/s; p < 0.05).
Conclusions: Diffusion-weighted MR imaging can be used to identify infectious brain lesions and can help to differentiate between brain abscess and cystic brain tumor, thus making it a strong additional imaging modality in the early diagnosis of central nervous system purulent brain processes.
abstract_id: PUBMED:34745401
Differential diagnosis of a ring-enhancing brain lesion in the setting of metastatic cancer and a mycotic aneurysm. A diagnostic challenge arises when a patient presents with a ring-enhancing lesion of the brain in the setting of both metastatic cancer and a source of infection. We report a case depicting this dilemma in an 80-year-old man with a history of metastatic oral squamous cell carcinoma who presented for left-sided hemiparesis. Computed tomography and magnetic resonance imaging revealed a ring-enhancing lesion of the right parietal vertex without signs of stroke. He was also found to have an aneurysm of the right common carotid artery with abnormal surrounding soft tissue density and gas, findings suspicious for a mycotic aneurysm. The likelihood of the brain lesion being an abscess formed by septic embolization was raised, leading to the recommendation to surgically explore the brain lesion and repair the aneurysm. Nevertheless, a high index of suspicion for a brain abscess and mycotic aneurysm is necessary in this type of clinical scenario.
abstract_id: PUBMED:27864577
Diagnosing necrotic meningioma: a distinctive imaging pattern in diffusion MRI and MR spectroscopy. The differential diagnosis of necrotic meningiomas includes brain abscess and malignant neoplasms. We report and discuss hereby the work-up of two patients diagnosed with necrotic meningioma using diffusion-weighted imaging, magnetic resonance spectroscopy, resective surgery, and histopathology. The purpose of the present article is to add to the scant literature on the use of advanced imaging modalities in the routine investigation of brain lesions and their utility in arriving at the final diagnosis.
abstract_id: PUBMED:32757543
Cerebral abscesses imaging: A practical approach. Brain abscesses (BAs) are focal infections of the central nervous system (CNS) that start as a localised area of weakening of the brain parenchyma (cerebritis) and develop into a collection of pus surrounded by a capsule. Pyogenic (bacterial) BAs represent the majority of all BAs; in some cases, the diagnostic and therapeutic management can be challenging. Imaging has a primary role in differentiating BAs from other lesions. Conventional magnetic resonance imaging (cMRI) is essential for the identification of the lesion, its localisation and its morphological features. However, cMRI does not allow reliable differentiation of BAs from other intracranial mass lesions such as necrotic tumours. Advanced sequences, such as diffusion-weighted imaging (DWI), perfusion-weighted imaging (PWI) and proton MR spectroscopy (1H-MRS), are very useful in the differential diagnosis from other brain lesions, such as non-pyogenic abscesses or necrotic tumours, and provide essential information on structural, vascular and metabolic characteristics allowing greater neuroradiological confidence. The aim of this pictorial review is to provide a practical approach showing the added value of more advanced MRI techniques in their diagnostic management.
abstract_id: PUBMED:11138630
Diffusion-weighted imaging in the evaluation of intracranial lesions. Conventional magnetic resonance (MR) imaging is able to show pathology early on and to provide the radiologist with some degree of lesion characterization based on the relaxation time of different tissues. Many times, however, conventional MR imaging is not capable of depicting abnormalities at a time when early therapy may be successful, or of differentiating among different types of lesions before surgery. Diffusion-weighted imaging (DWI), a technique that is relatively new, is rapidly gaining popularity. Its increased use stems from the fact that many of the newer MR units are echo-planar capable. Although DWI may be obtained without echo-planar techniques, most DWI is now obtained by using gradients capable of very fast rising times. Echo-planar DWI may be obtained in a matter of seconds and, thus, is much less sensitive to bulk motion than other imaging techniques. Although DWI has been used extensively for the evaluation of acute cerebral infarctions, new uses for it are being explored constantly. In this article we address the nature of DWI and its use in the stroke patient as well as in other clinical situations where we believe it is useful.
Answer: Yes, diffusion-weighted imaging (DWI) can be used to differentiate brain abscess from other ring-enhancing brain lesions. Studies have demonstrated that DWI has high sensitivity and specificity for this purpose. For instance, one study showed that DWI had a sensitivity of 94.73%, specificity of 94.44%, positive predictive value of 94.73%, and negative predictive value of 94.44% in differentiating brain abscess from neoplastic brain lesions (PUBMED:22747861). A meta-analysis also confirmed that DWI has high sensitivity and specificity for the differentiation of brain abscess from other intracranial cystic mass lesions, with pooled sensitivity values of 0.95 and pooled specificity values of 0.94 (PUBMED:24933524).
Furthermore, DWI and the calculation of apparent diffusion coefficient (ADC) values have been shown to be helpful tools in distinguishing between cystic brain tumors and brain abscesses. Brain abscesses typically show markedly hyperintense signal changes on DWI, whereas the appearance of glioblastoma varies from slightly hyperintense to hypointense signal conversion. The mean ADC value of brain abscesses is significantly lower than that of cystic brain tumors (PUBMED:16116554, PUBMED:12450032).
However, it is important to note that restricted water diffusion is not necessarily specific for brain abscess, as there have been reports of ring-enhancing metastases of lung carcinoma characterized by high signal intensity on DWI, which reflected intralesional hemorrhage (PUBMED:12485251). Additionally, mucoid content in metastasis can also lead to restricted diffusion (PUBMED:20861784).
In conclusion, while DWI is a valuable non-invasive method with high diagnostic accuracy for differentiating brain abscess from other ring-enhancing lesions, it should be interpreted in conjunction with conventional imaging and clinical context to ensure accurate diagnosis. |
Instruction: American tegumentary leishmaniasis: Is antimonial treatment outcome related to parasite drug susceptibility?
Abstracts:
abstract_id: PUBMED:16991093
American tegumentary leishmaniasis: Is antimonial treatment outcome related to parasite drug susceptibility? Background: Antimonials are the first drug of choice for the treatment of American tegumentary leishmaniasis (ATL); however, their efficacy is not predictable, and this may be linked to parasite drug resistance. We aimed to characterize the in vitro antimony susceptibility of clinical isolates of Peruvian patients with ATL who were treated with sodium stibogluconate and to correlate this in vitro phenotype with different treatment outcomes.
Methods: Thirty-seven clinical isolates were obtained from patients with known disease and treatment histories. These isolates were typed, and the susceptibility of intracellular amastigotes to pentavalent (SbV) and trivalent (SbIII) antimonials was determined.
Results: We observed 29 SbV-resistant isolates among 4 species of subgenus Viannia, most of which exhibited primary resistance; no isolates resistant only to SbIII; and 3 combinations of in vitro phenotypes: (1) parasites sensitive to both drugs, (2) parasites resistant to both drugs, and (3) parasites resistant to SbV only (the majority of isolates fell into this category).
Conclusion: Antimony insensitivity might occur in a stepwise fashion (first to SbV and then to SbIII). Our data question the definition of true parasite resistance to antimonials. Further studies of treatment efficacy should apply standardized protocols and definitions and should also consider host factors.
abstract_id: PUBMED:33011651
Activity of paromomycin against Leishmania amazonensis: Direct correlation between susceptibility in vitro and the treatment outcome in vivo. Paromomycin is an aminoglycoside antibiotic approved in 2006 for the treatment of visceral leishmaniasis caused by Leishmania donovani in Southeast Asia. Although this drug is not approved for the treatment of visceral and cutaneous leishmaniasis in Brazil, it is urgent and necessary to evaluate its potential as an alternative treatment against the species responsible for these clinical forms of the disease. In Brazil, Leishmania amazonensis is responsible for cutaneous and diffuse cutaneous leishmaniasis. The diffuse cutaneous form of the disease is difficult to treat and frequent relapses are reported, mainly when treatment is interrupted. Here, we evaluated the in vitro paromomycin susceptibility of an L. amazonensis clinical isolate from a patient with cutaneous leishmaniasis and of the reference strain L. amazonensis M2269, as well as the drug's in vivo efficacy in a murine experimental model. Although neither line had ever been exposed to paromomycin, a significant difference in susceptibility between them was found. Paromomycin was highly active in vitro against the clinical isolate in both forms of the parasite, while it was less active against the reference strain. In vivo studies in mice infected with each of these lines demonstrated that paromomycin reduces lesion size and parasite burden, and a direct correlation between in vitro susceptibility and the in vivo effectiveness of this drug was found. Our findings indicate that paromomycin efficacy in vivo depends on the intrinsic susceptibility of the parasite. Beyond that, this study contributes to the evaluation of the potential use of paromomycin in the chemotherapy of cutaneous leishmaniasis caused by L. amazonensis in Brazil.
abstract_id: PUBMED:36803859
Drug resistance in Leishmania: does it really matter? Treatment failure (TF) jeopardizes the management of parasitic diseases, including leishmaniasis. From the parasite's point of view, drug resistance (DR) is generally considered central to TF. However, the link between TF and DR, as measured by in vitro drug susceptibility assays, is unclear, with some studies revealing an association between treatment outcome and drug susceptibility and others not. Here we address three fundamental questions aiming to shed light on these ambiguities. First, are the right assays being used to measure DR? Second, are the parasites studied, which are generally those that adapt to in vitro culture, actually appropriate? Finally, are other parasite factors - such as the development of quiescent forms that are recalcitrant to drugs - responsible for TF without DR?
abstract_id: PUBMED:16182864
Genes and susceptibility to leishmaniasis. Leishmania are digenetic protozoa which inhabit two highly specific hosts, the sandfly where they grow as motile, flagellated promastigotes in the gut, and the mammalian macrophage where they grow intracellularly as non-flagellated amastigotes. Leishmaniasis is the outcome of an evolutionary 'arms race' between the host's immune system and the parasite's evasion mechanisms which ensure survival and transmission in the population. The spectrum of disease manifestations and severity reflects the interaction between the genome of the host and that of the parasite, and the pathology is caused by a combination of host and parasite molecules. This chapter examines the genetic basis of host susceptibility to disease in humans and animal models. It describes the genetic tools used to map and identify susceptibility genes, and the lessons learned from murine and human cutaneous leishmaniasis.
abstract_id: PUBMED:37505650
In Vitro Drug Susceptibility of a Leishmania (Leishmania) infantum Isolate from a Visceral Leishmaniasis Pediatric Patient after Multiple Relapses. The parasitic protozoan Leishmania (Leishmania) infantum is the etiological agent of human visceral leishmaniasis in South America, an infectious disease associated with malnutrition, anemia, and hepatosplenomegaly. In Brazil alone, around 2700 cases are reported each year. Treatment failure can occur as a result of drug, host, and/or parasite-related factors. Here, we isolated a Leishmania species from a pediatric patient with visceral leishmaniasis who did not respond to chemotherapy, experienced a total of nine therapeutic relapses, and underwent a splenectomy. The parasite was confirmed as L. (L.) infantum after sequencing of the ribosomal DNA internal transcribed spacer, and the clinical isolate, in both promastigote and amastigote forms, was submitted to in vitro susceptibility assays with all the drugs currently used in the chemotherapy of leishmaniasis. The isolate was susceptible to meglumine antimoniate, amphotericin B, pentamidine, miltefosine, and paromomycin, similarly to another strain of this species that had previously been characterized. These findings indicate that the multiple relapses observed in this pediatric patient were not due to a decrease in the drug susceptibility of this isolate; therefore, immunophysiological aspects of the patient should be further investigated to understand the basis of treatment failure in this case.
abstract_id: PUBMED:23416123
Assessment of drug resistance related genes as candidate markers for treatment outcome prediction of cutaneous leishmaniasis in Brazil. The great public health problem posed by leishmaniasis has been substantially worsened in recent years by the emergence of clinical failure. In Brazil, the poor prognosis observed for patients infected by Leishmania braziliensis (Lb) or L. guyanensis (Lg) may be related to parasite drug resistance. In the present study, 19 Lb and 29 Lg isolates were obtained from infected patients with different treatment outcomes. Translated amino acid sequence polymorphisms from four described antimony resistance related genes (AQP1, hsp70, MRPA and TRYR) were tested as candidate markers for antimonial treatment failure prediction. Possibly due to the low intraspecific variability observed in Lg samples, none of the prediction models had good prognosis values. Most strikingly, one mutation (T579A) found in hsp70 of Lb samples could predict 75% of the antimonial treatment failure clinical cases. Moreover, a multiple logistic regression model showed that the change from adenine to guanine at position 1735 of the hsp70 gene, which is responsible for the T579A mutation, significantly increased the odds that Lb clinical isolates were associated with treatment failure (OR=7.29; CI 95%=[1.17, 45.25]; p=0.0331). The use of molecular markers to predict treatment outcome presents practical and economic advantages as it allows the development of rapid assays to monitor the emergence of drug resistant parasites that can be clinically applied to aid the prognosis of cutaneous leishmaniasis in Brazil.
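To make the reported odds ratio concrete, here is a minimal Python sketch of how an odds ratio and its Wald 95% confidence interval are computed from a 2x2 table. The counts are entirely hypothetical, and the study itself used a multiple logistic regression model, so its OR of 7.29 is an adjusted estimate rather than a crude table calculation like this one.

import math

# Hypothetical 2x2 table: rows = mutation present/absent, columns = treatment failure/cure.
# These counts are illustrative only; they are not the data behind the reported OR.
a, b = 9, 3    # mutation present: failures, cures
c, d = 4, 10   # mutation absent:  failures, cures

odds_ratio = (a * d) / (b * c)

# Wald 95% CI computed on the log-odds scale.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")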
abstract_id: PUBMED:26253089
Lack of correlation between the promastigote back-transformation assay and miltefosine treatment outcome. Objectives: Widespread antimony resistance in the Indian subcontinent has forced a shift in visceral leishmaniasis therapy, primarily towards miltefosine and secondarily also towards paromomycin. In vitro selection of miltefosine resistance in Leishmania donovani turned out to be quite challenging. Although no increase in IC50 was detected in the standard intracellular amastigote susceptibility assay, promastigote back-transformation remained positive at high miltefosine concentrations, suggesting a more 'resistant' phenotype. This observation was explored in a large set of Nepalese clinical isolates from miltefosine cure and relapse patients to assess its predictive value for patient treatment outcome.
Methods: The predictive value of the promastigote back-transformation for treatment outcome of a set of Nepalese L. donovani field isolates (n = 17) derived from miltefosine cure and relapse patients was compared with the standard susceptibility assays on promastigotes and intracellular amastigotes.
Results: In-depth phenotypic analysis of the clinical isolates revealed no correlation between the different susceptibility assays, nor any clear link to the actual treatment outcome. In addition, the clinical isolates proved to be phenotypically heterogeneous, as reflected by the large variation in drug susceptibility among the established clones.
Conclusions: This in vitro laboratory study shows that miltefosine treatment outcome is not necessarily exclusively linked with the susceptibility profile of pre-treatment isolates, as determined in standard susceptibility assays. The true nature of miltefosine treatment failures still remains ill defined.
abstract_id: PUBMED:22518860
Novel approach to in vitro drug susceptibility assessment of clinical strains of Leishmania spp. Resistance to antimonial drugs has been documented in Leishmania isolates transmitted in South America, Europe, and Asia. The frequency and distribution of resistance to these and other antileishmanial drugs are unknown. Technical constraints have limited the assessment of drug susceptibility of clinical strains of Leishmania. Susceptibility of experimentally selected lines and 130 clinical strains of Leishmania panamensis, L. braziliensis, and L. guyanensis to meglumine antimoniate and miltefosine was determined on the basis of parasite burden and percentage of infected U-937 human macrophages. Reductions of infection at single predefined concentrations of meglumine antimoniate and miltefosine and 50% effective doses (ED(50)s) were measured and correlated. The effects of 34°C and 37°C incubation temperatures and different parasite-to-host cell ratios on drug susceptibility were evaluated at 5, 10, and 20 parasites/cell. Reduction of the intracellular burden of Leishmania amastigotes in U-937 cells exposed to the predefined concentrations of meglumine antimoniate or miltefosine discriminated sensitive and experimentally derived resistant Leishmania populations and was significantly correlated with ED(50) values of clinical strains (for meglumine antimoniate, ρ = -0.926 and P < 0.001; for miltefosine, ρ = -0.906 and P < 0.001). Incubation at 37°C significantly inhibited parasite growth compared to that at 34°C in the absence of antileishmanial drugs and resulted in a significantly lower ED(50) in the presence of drugs. Susceptibility assessment was not altered by the parasite-to-cell ratio over the range evaluated. In conclusion, measurement of the reduction of parasite burden at a single predetermined drug concentration under standardized conditions provides an efficient and reliable strategy for susceptibility evaluation and monitoring of clinical strains of Leishmania.
abstract_id: PUBMED:31231359
Novel Loci Controlling Parasite Load in Organs of Mice Infected With Leishmania major, Their Interactions and Sex Influence. Leishmaniasis is a serious health problem in many countries, and continues expanding to new geographic areas including Europe and the USA. This disease, caused by parasites of Leishmania spp. and transmitted by phlebotomine sand flies, causes up to 1.3 million new cases each year and, despite efforts toward its functional dissection and treatment, causes 20,000-50,000 deaths annually. Dependence of susceptibility to leishmaniasis on sex and the host's genes was observed in humans and in mouse models. Several laboratories have defined in mice a number of Lmr (Leishmania major response) genetic loci that control functional and pathological components of the response to and outcome of L. major infection. However, the development of its most aggressive form, visceral leishmaniasis, which is lethal if untreated, is not yet understood. Visceral leishmaniasis is caused by infection and inflammation of internal organs. Therefore, we analyzed the genetics of parasite load, spread to internal organs, and ensuing visceral pathology. Using a new PCR-based method of quantification of parasites in tissues, we describe a network-like set of interacting genetic loci that control parasite load in different organs. Quantification of Leishmania parasites in lymph nodes, spleen and liver from infected F2 hybrids between BALB/c and the recombinant congenic strains CcS-9 and CcS-16 allowed us to map two novel parasite load controlling Leishmania major response loci, Lmr24 and Lmr27. We also detected a parasite-controlling role of the previously described loci Lmr4, Lmr11, Lmr13, Lmr14, Lmr15, and Lmr25, and describe 8 genetic interactions between them. Lmr14, Lmr15, Lmr25, and Lmr27 controlled parasite load in liver and lymph nodes. In addition, Leishmania burden in lymph nodes but not liver was influenced by Lmr4 and Lmr24. In spleen, parasite load was controlled by Lmr11 and Lmr13. We detected a strong effect of sex on some of these genes. We also mapped additional genes controlling splenomegaly and hepatomegaly. This resulted in a systematized insight into the genetic control of spread and load of Leishmania parasites and visceral pathology in the mammalian organism.
abstract_id: PUBMED:28594938
The enemy within: Targeting host-parasite interaction for antileishmanial drug discovery. The state of antileishmanial chemotherapy is strongly compromised by the emergence of drug-resistant Leishmania. The evolution of drug-resistant phenotypes has been linked to the parasites' intrinsic genome instability, with frequent gene and chromosome amplifications causing fitness gains that are directly selected by environmental factors, including the presence of antileishmanial drugs. Thus, even though the unique eukaryotic biology of Leishmania and its dependence on parasite-specific virulence factors provide valid opportunities for chemotherapeutical intervention, all strategies that target the parasite in a direct fashion are likely prone to select for resistance. Here, we review the current state of antileishmanial chemotherapy and discuss the limitations of ongoing drug discovery efforts. We finally propose new strategies that target Leishmania viability indirectly via mechanisms of host-parasite interaction, including parasite-released ectokinases and host epigenetic regulation, which modulate host cell signaling and transcriptional regulation, respectively, to establish permissive conditions for intracellular Leishmania survival.
Answer: The relationship between antimonial treatment outcome and parasite drug susceptibility in American tegumentary leishmaniasis (ATL) is not straightforward. A study aimed to characterize the in vitro antimony susceptibility of clinical isolates from Peruvian patients with ATL treated with sodium stibogluconate found that there was no correlation between in vitro susceptibility to both pentavalent (SbV) and trivalent (SbIII) antimonials and the clinical outcome of therapy. The study observed a variety of in vitro phenotypes, including parasites sensitive to both drugs, resistant to both drugs, and resistant to SbV only, which was the majority. The findings suggest that antimony insensitivity might occur in a stepwise fashion and question the definition of true parasite resistance to antimonials (PUBMED:16991093).
In contrast, another study on the activity of paromomycin against Leishmania amazonensis showed a direct correlation between susceptibility in vitro and the treatment outcome in vivo. The study found that paromomycin was highly active in vitro against a clinical isolate and that in vivo studies in mice infected with this line demonstrated that paromomycin reduced lesion size and parasite burden. This indicates that paromomycin efficacy in vivo is dependent on the intrinsic susceptibility of the parasite (PUBMED:33011651).
Furthermore, the relationship between drug resistance and treatment failure is complex: treatment failure can arise from various factors, of which drug resistance as measured by in vitro susceptibility assays is only one, and the link between the two is not always clear. Other parasite factors, such as the development of quiescent forms that are recalcitrant to drugs, may be responsible for treatment failure without drug resistance (PUBMED:36803859).
In summary, while some studies have found a correlation between in vitro drug susceptibility and treatment outcome, others have not, suggesting that the relationship between antimonial treatment outcome and parasite drug susceptibility in ATL is complex and may involve multiple factors beyond just the in vitro susceptibility of the parasite (PUBMED:16991093; PUBMED:33011651; PUBMED:36803859). |
Instruction: Unipolar versus Bipolar Hemiarthroplasty for Displaced Femoral Neck Fractures in the Elderly: Is There a Difference?
Abstracts:
abstract_id: PUBMED:31773262
Bipolar versus monopolar hemiarthroplasty for displaced femur neck fractures: a meta-analysis study. Introduction: Hemiarthroplasty is commonly performed to treat femoral neck fractures. Still, there is a lack of consensus concerning the best component for hemiarthroplasty: unipolar or bipolar implants. The most recent meta-analysis on this topic is outdated, and an update of the current evidence is required. The purpose of this study is to conduct a meta-analysis comparing unipolar versus bipolar implants for hemiarthroplasty, focusing on clinical scores, perioperative data, further complications and mortality rate.
Materials And Methods: In September 2019, the main databases were accessed: all the clinical trials comparing unipolar versus bipolar hemiarthroplasty for displaced femoral neck fractures were considered for inclusion. For the methodological quality assessment, we referred to the PEDro score. For the statistical analysis, we referred to the Review Manager 5.3 (The Nordic Cochrane Collaboration, Copenhagen). For implant survivorship, we referred to the STATA/MP software version 14.1 (StataCorp, College Station, Texas).
Results: A total of 27 articles were considered for inclusion, consisting of 16 randomized and 11 non-randomized clinical trials. A total of 4511 patients were enrolled, undergoing a mean 21.26-month follow-up. A statistically significant reduction in acetabular erosion was observed in the bipolar group (OR 3.16, P < 0.0001). Although not statistically significant, the bipolar group showed a reduction in the mean Harris hip score, reduced surgical duration and hospitalization, and reduced dislocation and revision rates. Concerning mortality, a reduction across all follow-ups in favor of the bipolar group was detected, but without statistical significance.
Conclusions: This meta-analysis evidenced a reduction in acetabular erosion after bipolar hemiarthroplasty compared to unipolar implants. No statistically significant difference concerning the other endpoints of interest was detected. Current evidence concerning this topic is controversial, and further randomized clinical trials are required.
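The meta-analysis above reports pooled effect estimates such as the OR of 3.16 for acetabular erosion. As a rough illustration only — this is not the authors' RevMan analysis, and the event counts are hypothetical placeholders — the sketch below shows how study-level odds ratios can be pooled with an inverse-variance, DerSimonian-Laird random-effects model in Python.

```python
# Minimal sketch of inverse-variance pooling of study-level odds ratios with a
# DerSimonian-Laird random-effects adjustment. Counts are hypothetical placeholders.
import numpy as np

# Each row: (events_unipolar, total_unipolar, events_bipolar, total_bipolar)
studies = np.array([
    [12, 150, 4, 148],
    [ 9, 200, 3, 210],
    [15, 180, 6, 175],
], dtype=float)

a, n1, c, n2 = studies.T
b, d = n1 - a, n2 - c                      # non-events in each arm

log_or = np.log((a * d) / (b * c))         # study log odds ratios
var = 1 / a + 1 / b + 1 / c + 1 / d        # approximate variances
w = 1 / var                                # fixed-effect weights

# DerSimonian-Laird between-study variance tau^2
fixed = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - fixed) ** 2)
df = len(log_or) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1 / (var + tau2)                    # random-effects weights
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])

print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

Swapping the placeholder counts for the 2x2 tables extracted from the included trials would reproduce the kind of pooled estimate and confidence interval quoted in the abstract.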
abstract_id: PUBMED:31178145
Unipolar versus bipolar hemiarthroplasty for displaced femoral neck fractures: A pooled analysis of 30,250 participants data. Purpose: To assess the clinical outcomes of unipolar versus bipolar hemiarthroplasty for displaced intracapsular femoral neck fractures in older patients and to report whether bipolar implants yield better long-term functional results.
Methods: We searched PubMed, Scopus, EBSCO, and Cochrane Library for relevant randomized clinical trials (RCTs) and observational studies, comparing unipolar and bipolar hemiarthroplasty. Data were extracted from eligible studies and pooled as relative risk (RR) or mean difference (MD) with corresponding 95% confidence intervals (CI) using RevMan software for Windows.
Results: A total of 30 studies were included (13 RCTs and 17 observational studies). Analyses included 30,250 patients with a mean age of 79 years and a mean follow-up time of 24.6 months. The overall pooled estimates showed that bipolar was superior to unipolar hemiarthroplasty in terms of hip function, range of motion and reoperation rate, but at the expense of longer operative time. In the longer term, the unipolar group had higher rates of acetabular erosion compared to the bipolar group. There was no significant difference in terms of hip pain, implant-related complications, intraoperative blood loss, mortality, six-minute walk times, medical outcomes, or hospital stay and, consequently, cost.
Conclusions: Bipolar hemiarthroplasty is associated with better range of motion, lower rates of acetabular erosion and lower reoperation rates compared to the unipolar hemiarthroplasty but at the expense of longer operative time. Both were similar in terms of mortality, and surgical or medical outcomes. Future large studies are recommended to compare both methods regarding the quality of life.
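The pooled analysis above combines study-level relative risks (RR) and mean differences (MD) with 95% confidence intervals. The following sketch, again using hypothetical numbers rather than data from the review, shows how a single study's RR and MD are computed before any pooling step.

```python
# Minimal sketch of the two effect measures described in the methods above:
# a relative risk from 2x2 counts and a mean difference from summary statistics.
# All numbers are hypothetical placeholders.
import numpy as np

# Relative risk (e.g., reoperation) with a 95% CI on the log scale
e1, n1 = 18, 400        # events / total, unipolar arm
e2, n2 = 10, 420        # events / total, bipolar arm
rr = (e1 / n1) / (e2 / n2)
se_log_rr = np.sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)
rr_ci = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)

# Mean difference (e.g., operative time in minutes) with a 95% CI
m1, sd1, k1 = 78.0, 15.0, 200   # mean, SD, n in bipolar arm
m2, sd2, k2 = 70.0, 14.0, 205   # mean, SD, n in unipolar arm
md = m1 - m2
se_md = np.sqrt(sd1 ** 2 / k1 + sd2 ** 2 / k2)
md_ci = md + np.array([-1.96, 1.96]) * se_md

print(f"RR = {rr:.2f} (95% CI {rr_ci[0]:.2f}-{rr_ci[1]:.2f})")
print(f"MD = {md:.1f} min (95% CI {md_ci[0]:.1f} to {md_ci[1]:.1f})")
```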
abstract_id: PUBMED:35779144
Patients with femoral neck fractures treated by bipolar hemiarthroplasty have superior to unipolar hip function and lower erosion rates and pain: a systematic review and meta-analysis of randomized controlled studies. Purpose: We assessed acetabular erosion, hip function, quality of life (QoL), pain, deep infection, mortality, re-operation and dislocation rates in patients with displaced femoral neck fractures (dFNFs) treated with unipolar versus bipolar hemiarthroplasty at different postoperative time points.
Methods: Relevant Randomized Controlled Trials (RCTs) were identified following a comprehensive literature search in the Medline, Cochrane Central and Scopus databases, from inception until August 31st, 2021, and analyzed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses.
Results: Database research retrieved 120 studies; sixteen met eligibility criteria, providing 1813 (1814 hips) evaluable patients. Acetabular erosion was significantly higher for unipolar group at 6 and 12 months (p = 0.02 and p = 0.01 respectively). Patients in the bipolar group presented significantly better hip function at 12 and 24 months (p = 0.02 and p = 0.04 respectively). Postoperative pain was significantly less in the bipolar group at 12, 24 and 48 months (p = 0.01). No statistically significant differences were found regarding the postoperative rates of deep infection, mortality, re-operation and dislocation.
Conclusion: This study showed that patients with dFNFs treated with bipolar hemiarthroplasty have lower acetabular erosion rates at 6 and 12 months postoperatively, better hip function at 12 and 24 months, better QoL and less pain, when compared with unipolar. No statistically significant difference could be established regarding deep infection, mortality, re-operation and dislocation rates.
abstract_id: PUBMED:29549451
Reasons for revision of failed hemiarthroplasty: Are there any differences between unipolar and bipolar? Background: Hemiarthroplasty (HA) is an effective procedure for treatment of femoral neck fracture. However, it is debatable whether unipolar or bipolar HA is the most suitable implant.
Objective: The purpose of this study was to compare the causes of failure and longevity in both types of HA.
Materials And Methods: We retrospectively reviewed 133 cases that underwent revision surgery of HA between 2002 and 2012. The causes of revision surgery were identified and stratified into early (≤ 5 years) failure and late (> 5 years) failure. Survival analyses were performed for each implant type.
Results: The common causes for revision were aseptic loosening (49.6%), infection (22.6%) and acetabular erosion (15.0%). Unipolar and bipolar HA were not different in causes for revision, but the unipolar group had a statistically significantly higher number of acetabular erosion events compared with the bipolar group (p = 0.002). In the early period, 24 unipolar HA (52.9%) and 28 bipolar HA (34.1%) failed. There were no statistically significant differences in the numbers of revised HA in each period between the two groups (p = 0.138). The median survival times in the unipolar and bipolar groups were 84.0 ± 24.5 and 120.0 ± 5.5 months, respectively. However, the survival times of both implants were not statistically significantly different.
Conclusions: Aseptic loosening was the most common reason for revision surgery after hemiarthroplasty in both early and late failures. Unipolar and bipolar hemiarthroplasty were not different in terms of causes of failure and survivorship, except that bipolar hemiarthroplasty had far fewer acetabular erosion events.
abstract_id: PUBMED:25476243
Bipolar versus unipolar hemiarthroplasty for displaced femoral neck fractures in the elder patient: a systematic review and meta-analysis of randomized trials. Objective: To assess the safety and efficacy of bipolar compared with unipolar hemiarthroplasty for the treatment of femoral neck fractures in patients aged more than 65 years.
Methods: We searched databases including PubMed Central, MEDLINE (from 1966), EMBASE (from 1980) and the Cochrane Central Register of Controlled Trials database. Only prospective randomized controlled trials (RCTs) that compare bipolar hemiarthroplasty with unipolar hemiarthroplasty for the treatment of femoral neck fracture in the elder patient were included. RevMan 5.2 from the Cochrane Collaboration was applied to perform the meta-analysis.
Results: Six relevant RCTs with a total of 982 patients were retrieved. From this meta-analysis, mortality rates showed no statistical difference between the two treatments, 14.7% for bipolar versus 13.8% for unipolar. The acetabular erosion rates were significantly different between the two groups (P=0.01), 1.2% in the bipolar versus 5.5% in the unipolar group. Overall complication rates, dislocation rates, infection rates and reoperation rates between the two groups showed no statistical difference (P>0.05). Neither of the two treatments appeared to be superior regarding clinical function as assessed by Harris hip scores or return to pre-injury state rates (P>0.05).
Conclusions: Both bipolar and unipolar hemiarthroplasty for the treatment of elderly patients with displaced femoral neck fractures achieve similar and satisfactory clinical outcomes in short-term follow-up. Unipolar hemiarthroplasty seems to be a more cost-effective option for elderly patients.
abstract_id: PUBMED:38481377
Revision rate following unipolar versus bipolar hemiarthroplasty. Introduction: There has been much debate on the use of bipolar or unipolar femoral heads in hemiarthroplasty for the treatment of femoral neck fractures. The outcome of these implants should be studied in the American Joint Replacement Registry (AJRR).
Methods: All primary femoral neck fractures treated with hemiarthroplasty between January 2012 and June 2020 were searched in the AJRR. All cause-revision of unipolar and bipolar hemiarthroplasty and reasons for revision were assessed for these patients until June of 2023.
Results: There were no differences in the number of and reason for all-cause revisions between unipolar and bipolar hemiarthroplasty (p = 0.41). Bipolar hemiarthroplasty had more revisions at 6 months postoperatively (p = 0.0281), but unipolar hemiarthroplasty had more revisions between 2 and 3 years (p = 0.0003), and after 3 years (p = 0.0085), as analysed with a Cox model. Patients with older age (HR = 0.999; 95% CI, 0.998-0.999; p = 0.0006) and higher Charlson Comorbidity Index (HR = 0.996; 95% CI, 0.992-0.999; p = 0.0192) had a significant increase in revision risk.
Conclusions: We suggest that surgeons should consider using a bipolar prosthesis when performing hemiarthroplasty for femoral neck fracture in patients expected to live >2 years post-injury.
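The registry study above analyses time to revision with a Cox proportional-hazards model and reports hazard ratios (HR). A minimal sketch of such a fit is shown below using the lifelines package; the column names and the simulated data are placeholders for illustration, not AJRR variables.

```python
# Minimal sketch of a Cox proportional-hazards fit of the kind described above.
# Data are simulated placeholders; real analyses would use registry follow-up data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "months_to_revision_or_censor": rng.exponential(36, n),
    "revised": rng.integers(0, 2, n),      # 1 = revised, 0 = censored
    "age": rng.normal(80, 8, n),
    "charlson_index": rng.integers(0, 10, n),
    "bipolar": rng.integers(0, 2, n),      # 1 = bipolar head, 0 = unipolar
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_revision_or_censor", event_col="revised")
print(np.exp(cph.params_))   # hazard ratios for age, comorbidity and head type
```

Exponentiating the fitted coefficients gives hazard ratios of the form quoted in the abstract (values near 1 indicating little change in revision risk per unit of the covariate).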
abstract_id: PUBMED:34246481
Unipolar versus bipolar hemiarthroplasty for hip fractures in patients aged 90 years or older: A bi-centre study comparing 209 patients. Background: This study aimed to evaluate the outcome of unipolar and bipolar hemiarthroplasty to treat hip fractures in patients aged ≥ 90 years.
Methods: We conducted this study from 2007 to 2018 based on the electronic databases of two hospitals. Patients aged ≥ 90 years, treated for Arbeitsgemeinschaft Osteosynthese 31-B3 type fractures, were included. One hospital conducted the treatment only with unipolar prostheses; the other hospital used only bipolar prostheses. We assessed 23 peri‑ and postoperative variables including any revision, dislocation, and survival. The follow-up was completed after a minimum of 2 years postoperatively. At follow-up, the functional status was evaluated via telephone using the Parker score for every living patient.
Results: One-hundred unipolar prostheses, and 109 bipolar prostheses were examined. The patients' mean age was 92.9 years (range 90-102). Dementia was differently distributed between the groups (p < 0.001), with a lower survival risk (Odds Ratio 1.908; Confidence Interval 1.392 - 2.615; log rank <0.001). Based on this result, unipolar demonstrated significantly higher mortality rates compared with bipolar prostheses (log rank < 0.001). No effects were found for dislocation, revision and overall complication rate. At follow-up, 37 patients were available for functional status. The mean Parker score was 3.7 (range 0-9), with no effect.
Conclusions: Intracapsular hip fractures in patients aged ≥ 90 years can be treated with unipolar or bipolar hemiarthroplasty. The type of prostheses did not influence dislocation, revision, general complication, or functional status. The groups were significantly affected by dementia, a risk factor for shorter survival.
abstract_id: PUBMED:27022343
Cemented versus uncemented hemiarthroplasty in patients with displaced femoral neck fractures. Objective: This study compared functional and perioperative outcomes between cemented and uncemented bipolar hemiarthroplasty in patients older than 65 years with subcapital displaced femoral neck fractures.
Methods: Fifty one patients with displaced femoral neck fracture were enrolled in this study. Twenty nine patients underwent uncemented bipolar hemiarthroplasty and 22 underwent cemented bipolar hemiarthroplasty. Physical examination and radiographs were performed at the first and sixth months after operation and results were recorded. The patients' pain and function were measured with Visual analogue Scale and with Harris Hip Score (HHS), respectively and then compared with each other.
Results: The mean duration of follow-up was 18.9 and 19.5 months in the cemented and uncemented groups, respectively. All patients were followed up for at least 6 months. Mean operation and bleeding times were longer in the cemented group compared to the uncemented group (P>0.05). The mean pain score was significantly lower in the cemented group compared to the uncemented group (P=0.001). Hip functional outcome based on the HHS was better in the cemented group (P=0.001). The intraoperative and postoperative complication rate was higher in the uncemented group (P<0.05).
Conclusion: Although greater intraoperative bleeding and longer surgery times were seen with cemented bipolar hemiarthroplasty in older patients with femoral neck fractures compared to uncemented bipolar hemiarthroplasty, cemented bipolar hemiarthroplasty can cause fewer complications and improve patients' function in less time.
abstract_id: PUBMED:26558663
Unipolar Versus Bipolar Hemiarthroplasty for Displaced Femoral Neck Fractures in Elderly Patients. Hip replacement using hemiarthroplasty (HA) is a common surgical procedure in elderly patients with femoral neck fractures. However, questions remain regarding the choice of unipolar or bipolar HA. A meta-analysis of randomized, controlled trials (RCTs) was performed to determine whether bipolar HA was associated with lower rates of dislocation, reoperation, acetabular erosion, mortality, and general complications, as well as lower Harris Hip Scores, compared with unipolar HA. The authors searched PubMed and the Cochrane Register of Controlled Trials database, and 8 RCTs (including a total of 1100 patients) were selected for meta-analysis. Risk ratios (RRs) and weighted mean differences (WMDs) from each trial were pooled using random-effects or fixed-effects models depending on the heterogeneity of the included studies. There were no differences in dislocation (RR=1.20; 95% confidence interval [CI], 0.47 to 3.07), reoperation (RR=0.64; 95% CI, 0.33 to 1.26), acetabular erosion (RR=2.29; 95% CI, 0.85 to 6.12), mortality (RR=0.85; 95% CI, 0.63 to 1.13), and general complications (RR=1.05; 95% CI, 0.70 to 1.56). The authors found no difference in postoperative Harris Hip Scores between patients undergoing unipolar vs bipolar HA (WMD=-1.32; 95% CI, -3.29 to 0.65; P=.19). Unipolar and bipolar HA achieved similar clinical outcomes in patients with displaced femoral neck fractures.
abstract_id: PUBMED:21519068
Unipolar versus bipolar uncemented hemiarthroplasty for elderly patients with displaced intracapsular femoral neck fractures. Purpose: To compare outcomes of unipolar versus bipolar uncemented hemiarthroplasty and determine factors affecting outcomes.
Methods: 433 and 274 elderly patients with displaced intracapsular femoral neck fractures underwent uncemented unipolar hemiarthroplasty and uncemented bipolar hydroxyapatite-coated hemiarthroplasty, respectively. Surgical options were based on the patient's general condition and preinjury mobility status. In the respective groups, the mortality of 377 and 242 patients and the mobility of 270 and 217 patients were reviewed at the 4-month follow-up. Patient age, sex, American Society of Anesthesiologists [ASA] score, mobility, and mortality of the 2 groups were compared.
Results: Patients who underwent unipolar hemiarthroplasty were significantly older, less fit, and less mobile (p<0.001). Patients who underwent bipolar hemiarthroplasty achieved better outcomes for mortality and mobility (p<0.001). Among patients who were able to walk unaided before injury, more of those having bipolar hemiarthroplasty were able to do so at month 4 than those having unipolar hemiarthroplasty (13% vs. 33%, p<0.001). Mortality within 4 months was associated with age, male gender, and ASA score. When patients were stratified according to age groups, mortality within 4 months was lower in patients having bipolar hemiarthroplasty.
Conclusion: In elderly patients with femoral neck fractures who were fit and physiologically young, uncemented bipolar hemiarthroplasty seemed to achieve better functional outcome.
Answer: The comparison between unipolar and bipolar hemiarthroplasty for treating displaced femoral neck fractures in the elderly has been the subject of several studies, and the results suggest that there are indeed differences between the two approaches.
Bipolar hemiarthroplasty has been associated with a reduction in acetabular erosion compared to unipolar implants, as evidenced by a meta-analysis that showed a statistically significant reduction in acetabular erosion in the bipolar group (PUBMED:31773262). Another pooled analysis of data from 30,250 participants found that bipolar hemiarthroplasty was superior to unipolar in terms of hip function, range of motion, and reoperation rate, although it required a longer operative time (PUBMED:31178145). Similarly, a systematic review and meta-analysis of randomized controlled studies indicated that patients treated with bipolar hemiarthroplasty had lower acetabular erosion rates, better hip function, better quality of life, and less pain compared to those treated with unipolar hemiarthroplasty (PUBMED:35779144).
In terms of reasons for revision of failed hemiarthroplasty, a study found that unipolar and bipolar hemiarthroplasty were not different in causes for revision, but the unipolar group had a statistically significantly higher number of acetabular erosion events compared with the bipolar group (PUBMED:29549451). A systematic review and meta-analysis of randomized trials also found that the acetabular erosion rates were significantly different between the two groups, with lower rates in the bipolar group (PUBMED:25476243).
However, a study from the American Joint Replacement Registry (AJRR) suggested that there were no differences in the number and reason for all-cause revisions between unipolar and bipolar hemiarthroplasty overall, although bipolar hemiarthroplasty had more revisions at 6 months postoperatively, whereas unipolar had more revisions between 2 and 3 years and after 3 years (PUBMED:38481377). A bi-centre study comparing outcomes in patients aged 90 years or older found no differences in dislocation, revision, or overall complication rates between unipolar and bipolar prostheses, and the type of prosthesis did not influence functional status (PUBMED:34246481).
In summary, while both unipolar and bipolar hemiarthroplasty are used to treat displaced femoral neck fractures in the elderly, the evidence suggests that bipolar hemiarthroplasty may offer some advantages over unipolar hemiarthroplasty, particularly in terms of lower rates of acetabular erosion, better hip function, and potentially better quality of life and less pain postoperatively (PUBMED:31773262; PUBMED:31178145; PUBMED:35779144). However, the choice between unipolar and bipolar hemiarthroplasty may also depend on the patient's expected lifespan post-injury, as suggested by the AJRR study, which recommended considering bipolar prosthesis for patients expected to live more than 2 years post-injury (PUBMED:38481377).
Additionally, a study comparing cemented and uncemented bipolar hemiarthroplasty found that cemented bipolar hemiarthroplasty could cause fewer complications and improve patients' function in less time compared to uncemented bipolar hemiarthroplasty (PUBMED:27022343). Another meta-analysis found no significant differences between unipolar and bipolar hemiarthroplasty in terms of dislocation, reoperation, mortality, and general complications, suggesting that both approaches achieve similar clinical outcomes (PUBMED:26558663).
Lastly, a study comparing outcomes of unipolar versus bipolar uncemented hemiarthroplasty in elderly patients found that those who underwent bipolar hemiarthroplasty achieved better outcomes for mortality and mobility, particularly among patients who were able to walk unaided before injury (PUBMED:21519068).
In conclusion, while both unipolar and bipolar hemiarthroplasty are viable options for treating displaced femoral neck fractures in the elderly, bipolar hemiarthroplasty may offer some clinical benefits, particularly in terms of hip function and acetabular erosion. However, the choice of implant should be individualized based on the patient's specific circumstances, including their general health condition, preinjury mobility status, and expected lifespan. Further research and randomized clinical trials are recommended to continue evaluating the long-term outcomes and quality of life implications of both types of hemiarthroplasty. |
Instruction: Nonoperative management of blunt renal trauma: is routine early follow-up imaging necessary?
Abstracts:
abstract_id: PUBMED:18768088
Nonoperative management of blunt renal trauma: is routine early follow-up imaging necessary? Background: There is no consensus on the role of routine follow-up imaging during nonoperative management of blunt renal trauma. We reviewed our experience with nonoperative management of blunt renal injuries in order to evaluate the utility of routine early follow-up imaging.
Methods: We reviewed all cases of blunt renal injury admitted for nonoperative management at our institution between 1/2002 and 1/2006. Data were compiled from chart review, and clinical outcomes were correlated with CT imaging results.
Results: 207 patients were identified (210 renal units). American Association for the Surgery of Trauma (AAST) grades I, II, III, IV, and V were assigned to 35 (16%), 66 (31%), 81 (39%), 26 (13%), and 2 (1%) renal units, respectively. 177 (84%) renal units underwent routine follow-up imaging 24-48 hours after admission. In three cases of grade IV renal injury, a ureteral stent was placed after serial imaging demonstrated persistent extravasation. In no other cases did follow-up imaging independently alter clinical management. There were no urologic complications among cases for which follow-up imaging was not obtained.
Conclusion: Routine follow-up imaging is unnecessary for blunt renal injuries of grades I-III. Grade IV renovascular injuries can be followed clinically without routine early follow-up imaging, but urine extravasation necessitates serial imaging to guide management decisions. The volume of grade V renal injuries in this study is not sufficient to support or contest the need for routine follow-up imaging.
abstract_id: PUBMED:28382258
Trends in nonoperative management of traumatic injuries - A synopsis. Nonoperative management of both blunt and penetrating injuries can be challenging. During the past three decades, there has been a major shift from operative to increasingly nonoperative management of traumatic injuries. Greater reliance on nonoperative, or "conservative" management of abdominal solid organ injuries is facilitated by the various sophisticated and highly accurate noninvasive imaging modalities at the trauma surgeon's disposal. This review discusses selected topics in nonoperative management of both blunt and penetrating trauma. Potential complications and pitfalls of nonoperative management are discussed. Adjunctive interventional therapies used in treatment of nonoperative management-related complications are also discussed.
Republished With Permission From: Stawicki SPA. Trends in nonoperative management of traumatic injuries - A synopsis. OPUS 12 Scientist 2007;1(1):19-35.
abstract_id: PUBMED:30505349
Safety of selective nonoperative management for blunt splenic trauma: the impact of concomitant injuries. Background: Nonoperative management for blunt splenic injury is the preferred treatment. To improve the outcome of selective nonoperative therapy, the current challenge is to identify factors that predict failure. Little is known about the impact of concomitant injury on outcome. Our study has two goals. First, to determine whether concomitant injury affects the safety of selective nonoperative treatment. Secondly we aimed to identify factors that can predict failure.
Methods: From our prospective trauma registry we selected all nonoperatively treated adult patients with blunt splenic trauma admitted between 01.01.2000 and 12.21.2013. All concurrent injuries with an AIS ≥ 2 were scored. We grouped and compared patients sustaining solitary splenic injuries and patients with concomitant injuries. To identify specific factors that predict failure we used a multivariable regression analysis.
Results: A total of 79 patients were included. Failure of nonoperative therapy (n = 11) and complications only occurred in patients sustaining concomitant injury. Furthermore, ICU stay as well as hospitalization time were significantly prolonged in the presence of associated injury (4 versus 13 days, p < 0.05). Mortality was not seen. Multivariable analysis revealed the presence of a femur fracture and higher age as predictors of failure.
Conclusions: Nonoperative management for hemodynamically normal patients with blunt splenic injury is feasible and safe, even in the presence of concurrent (non-hollow organ) injuries or a contrast blush on CT. However, associated injuries are related to prolonged intensive care unit- and hospital stay, complications, and failure of nonoperative management. Specifically, higher age and the presence of a femur fracture are predictors of failure.
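The study above identifies predictors of failed nonoperative management with a multivariable regression. A minimal sketch of that kind of logistic model, fitted to simulated placeholder data rather than the authors' registry, is given below; the variable names are assumptions for illustration.

```python
# Minimal sketch of a multivariable logistic model for failure of nonoperative
# management (NOM). Data and covariates are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "age": rng.normal(45, 18, n),
    "femur_fracture": rng.integers(0, 2, n),
    "contrast_blush": rng.integers(0, 2, n),
})
# Simulated outcome loosely driven by age and femur fracture, mirroring the
# predictors the abstract reports
logit = -4 + 0.03 * df["age"] + 1.2 * df["femur_fracture"]
df["nom_failure"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["age", "femur_fracture", "contrast_blush"]])
model = sm.Logit(df["nom_failure"].astype(int), X).fit(disp=0)
print(np.exp(model.params))   # odds ratios for each candidate predictor
```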
abstract_id: PUBMED:26814761
Nonoperative Management of Blunt Splenic Trauma: Also Feasible and Safe in Centers with Low Trauma Incidence and in the Presence of Established Risk Factors. Background: Treatment of blunt splenic trauma has undergone dramatic changes over the last few decades. Nonoperative management (NOM) is now the preferred treatment of choice, when possible. The outcome of NOM has been evaluated. This study evaluates the results following the management of blunt splenic injury in adults in a Swedish university hospital with a low blunt abdominal trauma incidence.
Method: Fifty patients with blunt splenic trauma were treated at the Department of Surgery, Lund University Hospital from January 1994 to December 2003. One patient was excluded due to a diagnostic delay of > 24 h. Charts were reviewed retrospectively to examine demographics, injury severity score (ISS), splenic injury grade, diagnostics, treatment and outcome measures.
Results: Thirty-nine patients (80%) were initially treated nonoperatively (NOM), and ten (20%) patients underwent immediate surgery (operative management, OM). Only one (3%) patient failed NOM and required surgery nine days after admission (failure of NOM, FNOM). The patients in the OM group had higher ISS (p < 0.001), a higher grade of splenic injury (p < 0.001), and were hemodynamically unstable to a greater extent (p < 0.001). This was accompanied by increased transfusion requirements (p < 0.001), longer stay in the ICU (p < 0.001) and higher costs (p = 0.001). Twenty-seven patients were successfully treated without surgery. No serious complication was found on routine radiological follow-up.
Conclusion: Most patients in this study were managed conservatively with a low failure rate of NOM. NOM of blunt splenic trauma could thus be performed in a seemingly safe and effective manner, even in the presence of established risk factors. Routine follow-up with CT scan did not appear to add clinically relevant information affecting patient management.
abstract_id: PUBMED:31110844
Blunt Spleen and Liver Trauma. Blunt abdominal trauma is an important cause of pediatric morbidity and mortality. The spleen and liver are the most common abdominal organs injured. Trauma to either organ can result in life-threatening bleeding. Controversy exists regarding which patients should be imaged and the correct imaging modality depending on the level of clinical suspicion for injury. Nonoperative management of blunt abdominal trauma is the standard of care for hemodynamically stable patients. However, the optimal protocol to maximize patient safety while minimizing resource utilization is a matter of debate. Adjunctive therapies for pediatric spleen and liver trauma are also an area of ongoing research. A review of the current literature on the diagnosis, management, and follow-up of pediatric spleen and liver blunt trauma is presented.
abstract_id: PUBMED:20579668
Early outcomes of deliberate nonoperative management for blunt thoracic aortic injury in trauma. Objective: Traumatic blunt aortic injury has traditionally been viewed as a surgical emergency, whereas nonoperative therapy has been reserved for nonsurgical candidates. This study reviews our experience with deliberate, nonoperative management for blunt thoracic aortic injury.
Methods: A retrospective chart review with selective longitudinal follow-up was conducted for patients with blunt aortic injury. Surveillance imaging with computed tomography angiography was performed. Nonoperative patients were then reviewed and analyzed for survival, evolution of aortic injury, and treatment failures.
Results: During the study period, 53 patients with an average age of 45 years (range, 18-80 years) were identified, with 28% presenting to the Stanford University School of Medicine emergency department and 72% transferred from outside hospitals. Of the 53 patients, 29 underwent planned, nonoperative management. Of the 29 nonoperative patients, in-hospital survival was 93% with no aortic deaths in the remaining patients. Survival was 97% at a median of 1.8 years (range, 0.9-7.2 years). One patient failed nonoperative management and underwent open repair. Serial imaging was performed in all patients (average = 107 days; median, 31 days), with 21 patients having stable aortic injuries without progression and 5 patients having resolved aortic injuries.
Conclusions: This experience suggests that deliberate, nonoperative management of carefully selected patients with traumatic blunt aortic injury may be a reasonable alternative in the polytrauma patient; however, serial imaging and long-term follow-up are necessary.
abstract_id: PUBMED:36376123
Natural history and nonoperative management of penetrating cerebrovascular injury. Introduction: There is a modern precedent for nonoperative management of select penetrating cerebrovascular injuries (PCVIs); however, there is minimal data to guide management.
Patients And Methods: This study assessed treatments, radiographic injury progression, and outcomes for all patients with PCVIs managed at an urban Level I trauma center from 2016 to 2021 that underwent initial nonoperative management (NOM).
Results: Fourteen patients were included. There were 11,635 trauma admissions, 378 patients with blunt cerebrovascular injury, and 18 patients with operatively-managed PCVI during this timeframe. All patients received antithrombotic therapy, but this was delayed in some due to concomitant injuries. Three patients had stroke (21%): two before antithrombotic initiation, and one with unclear timing relative to treatment. Three patients underwent endovascular interventions. On follow-up imaging, 14% had injury resolution, 36% were stable, 21% worsened, and 29% had no follow-up vascular imaging. One patient died (7%), one had a bleeding complication (7%), and no patient required delayed operative intervention.
Discussion: Early initiation of antithrombotic therapy, early surveillance imaging, and selective use of endovascular interventions are important for nonoperative management of PCVI.
abstract_id: PUBMED:30568373
Nonoperative Management of Blunt Splenic Trauma: Outcomes of Gelfoam Embolization of the Splenic Artery. Context: Nonoperative management (NOM) is the standard of care in hemodynamically stable trauma patients with blunt splenic injury. Gelfoam splenic artery embolization (SAE) is a treatment option used in trauma patients.
Aims: The primary aim of this study was to retrospectively examine the use and outcomes of Gelfoam SAE in adult patients with blunt splenic injury.
Settings And Design: One hundred and thirty-two adult patients with blunt splenic injury admitted to a Level 1 trauma center between January 2014 and December 2015 were included in the study. Patients treated with Gelfoam SAE, NOM, and splenectomies were reviewed. Descriptive statistics including patient age, Glasgow Coma Scale, Injury Severity Score (ISS), hospital days, Intensive Care Unit (ICU) days, splenic grade, and amount of blood products administered were recorded. Complications, defined as any additional factors that contributed to the patient's overall length of hospital stay, were compared between the three groups. Technical aspects of Gelfoam SAE and associated complications were reviewed.
Subjects And Methods: Gelfoam SAE was performed in 25 (18.9%) of the 132 patients. Gelfoam SAE patients had fewer ICU days compared with those patients who had a splenectomy or NOM. There was no statistical difference in complications between patients who underwent Gelfoam SAE and those who did not. Patients who underwent Gelfoam SAE tended to have fewer complications, including deep venous thromboses, PE, and infections, and 64% of the Gelfoam group had no complications.
Statistical Analysis: Statistical analysis included descriptives, ANOVA, and nonparametric tests as appropriate.
Conclusion: Gelfoam SAE can be used for blunt splenic injury for intermediate ISS and splenic grade as it reduced hospital and ICU days.
abstract_id: PUBMED:34756739
Preinjury warfarin does not cause failure of nonoperative management in patients with blunt hepatic, splenic or renal injuries. Background: For patients sustaining major trauma, preinjury warfarin use may make adequate haemostasis difficult. This study aimed to determine whether preinjury warfarin would result in more haemostatic interventions (transarterial embolization [TAE] or surgeries) and a higher failure rate of nonoperative management for blunt hepatic, splenic or renal injuries.
Methods: This was a retrospective cohort study from the Taiwan National Health Insurance Research Database (NHIRD) from 2003 to 2015. Patients with hepatic, splenic or renal injuries were identified. The primary outcome measurement was the need for invasive procedures to stop bleeding. One-to-two propensity score matching (PSM) was used to minimize selection bias.
Results: A total of 37,837 patients were enrolled in the study, and 156 (0.41%) had preinjury warfarin use. With proper 1:2 PSM, patients who received warfarin preinjury were found to require more haemostatic interventions (39.9% vs. 29.1%, p = 0.016). The differences between the two study groups were that patients with preinjury warfarin required more TAE than the controls (16.3% vs 8.2%, p = 0.009). No significant increases were found in the need for surgeries (exploratory laparotomy (5.2% vs 3.6%, p = 0.380), hepatorrhaphy (9.2% vs 7.2%, p = 0.447), splenectomy (13.1% vs 13.7%, p = 0.846) or nephrectomy (2.0% vs 0.7%, p = 0.229)). Seven out of 25 patients (28.0%) in the warfarin group required further operations after TAE, which was not significantly different from that in the nonwarfarin group (four out of 25 patients, 16.0%, p = 0.306). Conclusion: Preinjury warfarin increases the need for TAE but not surgeries. With proper haemostasis with TAE and resuscitation, nonoperative management can still be applied to patients with preinjury warfarin sustaining blunt hepatic, splenic or renal injuries. Patients with preinjury warfarin had a higher risk for surgery after TAE.
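The preceding abstract reduces selection bias with one-to-two propensity score matching. The sketch below illustrates the general idea — estimate propensity scores with a logistic model, then match each treated patient to the two nearest unused controls — on simulated placeholder data; the covariates are assumptions for illustration, not NHIRD variables.

```python
# Minimal sketch of 1:2 nearest-neighbour propensity-score matching.
# Data and covariates are simulated placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "warfarin": (rng.random(n) < 0.05).astype(int),   # exposure of interest
    "age": rng.normal(55, 18, n),
    "male": rng.integers(0, 2, n),
    "injury_severity": rng.integers(1, 6, n),
})

X = df[["age", "male", "injury_severity"]]
df["ps"] = LogisticRegression(max_iter=1000).fit(X, df["warfarin"]).predict_proba(X)[:, 1]

treated = df[df["warfarin"] == 1]
controls = df[df["warfarin"] == 0].copy()

matched_control_idx = []
for _, row in treated.iterrows():
    # take the two unused controls with the closest propensity scores
    nearest = (controls["ps"] - row["ps"]).abs().nsmallest(2).index
    matched_control_idx.extend(nearest)
    controls = controls.drop(nearest)

matched = pd.concat([treated, df.loc[matched_control_idx]])
print(matched.groupby("warfarin")[["age", "injury_severity"]].mean())  # balance check
```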
abstract_id: PUBMED:34399602
Nonoperative Management of Blunt Hepatic Trauma: Comparison of Level I and II Trauma Centers. Introduction: Most blunt liver injuries are treated with nonoperative management (NOM), and angiointervention (AI) has become a common adjunct. This study evaluated the use of AI, blood product utilization, pharmacological venous thromboembolic prophylaxis (VTEp), and outcomes in severe blunt liver trauma managed nonoperatively at level I versus II trauma centers.
Methods: American College of Surgeons Trauma Quality Improvement Program (TQIP) study (2013-2016), including adult patients with severe blunt liver injuries (AIS score ≥ 3) treated with NOM, was conducted. Epidemiological and clinical characteristics, severity of liver injury (AIS), use of AI, blood product utilization, and VTEp were collected. Outcomes included survival, complications, failure of NOM, blood product utilization, and length of stay (LOS).
Results: The study included 2825 patients: 2230 (78.9%) in level I and 595 (21.1%) in level II centers. There was no difference in demographics, clinical presentation, or injury severity between centers. Angiointervention was used in 6.4% in level I and 7.2% in level II centers (P = .452). Level II centers were less likely to use LMWH for VTEp (P = .003). There was no difference in mortality or failure of NOM. In level II centers, there was significantly higher 24-hour blood product utilization (PRBC P = .015 and platelets P = .002), longer ventilator days (P = .012), and longer ICU (P < .001) and hospital LOS (P = .024). The incidence of ventilator-associated pneumonia was significantly higher in level II centers (P = .003).
Conclusion: Utilization of AI and NOM success rates is similar in level I and II centers. However, the early blood utilization, ventilator days, and VAP complications are significantly higher in level II centers.
Answer: Based on the available abstracts, routine early follow-up imaging may not be necessary for all cases of blunt renal trauma managed nonoperatively. Specifically, a study reviewing nonoperative management of blunt renal injuries found that routine follow-up imaging did not independently alter clinical management in most cases. It was deemed unnecessary for blunt renal injuries of grades I-III. However, for grade IV renovascular injuries, while routine early follow-up imaging may not be required, urine extravasation necessitates serial imaging to guide management decisions. The study did not have enough volume of grade V renal injuries to support or contest the need for routine follow-up imaging (PUBMED:18768088).
In general, the trend in nonoperative management of traumatic injuries has shifted towards less invasive approaches, with sophisticated imaging modalities aiding in the decision-making process. The role of follow-up imaging is to monitor for potential complications and guide further management if necessary (PUBMED:28382258).
For blunt splenic trauma, another study indicated that nonoperative management is safe even in the presence of concomitant injuries, although associated injuries are related to prolonged intensive care and hospital stay, complications, and failure of nonoperative management. Specific factors such as higher age and the presence of a femur fracture were predictors of failure (PUBMED:30505349).
In summary, while routine early follow-up imaging may not be necessary for all cases of blunt renal trauma, particularly for lower-grade injuries, it may be required in certain situations such as grade IV injuries with urine extravasation or in the presence of complicating factors. Clinical judgment and individual patient assessment remain crucial in determining the need for follow-up imaging during nonoperative management of blunt renal trauma. |
Instruction: Better functional status in American than Canadian patients with heart disease: an effect of medical care?
Abstracts:
abstract_id: PUBMED:7594019
Better functional status in American than Canadian patients with heart disease: an effect of medical care? Objectives: This study compared functional status in Americans and Canadians with and without prior symptoms of heart disease to separate the effects of medical care from nonmedical factors.
Background: Coronary angiography and revascularization are used more often in the United States than in Canada, yet rates of mortality and myocardial infarction are similar in the two countries. Recent data suggest that functional status after myocardial infarction is better among Americans than Canadians, but it is uncertain whether this difference is due to medical care or nonmedical factors.
Methods: Quality of life was measured in patients enrolled in seven American and one Canadian site in the Bypass Angioplasty Revascularization Investigation. Prior symptoms of heart disease were defined as angina, myocardial infarction or congestive heart failure before the episode of illness leading to randomization. Functional status was measured with the Duke Activity Status Index and overall emotional and social health using Medical Outcome Study measures on the basis of patient status before the index episode of acute ischemic heart disease.
Results: Quality of life was generally better in the 934 Americans than in the 278 Canadians, with overall health rated as excellent or very good in 30% of Americans versus 20% of Canadians (p = 0.0001), higher median Duke Activity Status Index scores (16 vs. 13.5, p = 0.03) but equivalent emotional health (76 vs. 76, p = 0.74) and social health scores (100 vs. 80, p = 0.07). Among the 350 patients without prior symptoms of heart disease, Americans and Canadians had similar overall health, Duke Activity Status Index and emotional and social health scores. However, of the 860 patients with previous symptoms of heart disease, Americans had higher overall health (p = 0.0001) and Duke Activity Status Index scores (p = 0.0008) but similar emotional and social health scores. The results were essentially unchanged after statistical adjustment for potential confounding factors.
Conclusions: The functional status of patients without prior symptoms of heart disease is similar in Americans and Canadians. However, among patients with previous symptomatic heart disease, functional status is higher in Americans than in Canadians. This difference may be due to different patterns of medical management of heart disease in the two countries.
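The abstract above compares the proportion of patients rating their overall health as excellent or very good (30% of 934 Americans versus 20% of 278 Canadians, p = 0.0001). As a rough illustration of that kind of comparison, the sketch below runs a two-sample proportions z-test on counts reconstructed from the reported percentages; the reconstruction is approximate and not the authors' original analysis.

```python
# Minimal sketch of a two-sample comparison of proportions; the counts are
# back-calculated from the percentages and group sizes quoted in the abstract.
from statsmodels.stats.proportion import proportions_ztest

count = [round(0.30 * 934), round(0.20 * 278)]   # "excellent/very good" responders
nobs = [934, 278]
z, p = proportions_ztest(count, nobs)
print(f"z = {z:.2f}, p = {p:.4f}")
```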
abstract_id: PUBMED:37440176
Using Functional Status at the Time of Palliative Care Consult to Identify Opportunities for Earlier Referral. Background: In order to improve early access to palliative care, strategies for monitoring referral practices in real-time are needed. Objective: To evaluate how Australia-Modified Karnofsky Performance Status (AKPS) at the time of initial palliative care consult differs between serious illnesses and could be used to identify opportunities for earlier referral. Methods: We retrospectively evaluated data from an inpatient palliative care consult registry. Serious illnesses were classified using ICD-10 codes. AKPS was assessed by palliative care clinicians during consult. Results: The AKPS distribution varied substantially between the different serious illnesses (p < 0.001). While patients with cancer and heart disease often had preserved functional status, the majority of patients with dementia, neurological, lung, liver, and renal disease were already completely bedbound at the time of initial palliative care consult. Conclusion: Measuring functional status at the time of palliative care referral could be helpful for monitoring referral practices and identifying opportunities for earlier referral.
abstract_id: PUBMED:7720023
Native-American elders. Health care status. This article reviews current data relevant to the health care status of elderly Native Americans, a population cohort encompassing American Indians and Alaskan Natives/Aleutians. Several topics are addressed, including the history of Native American health policy, heart disease, diabetes mellitus, cancer, oral health, nutrition, long-term care, and the circumstances of urban Native American elders.
abstract_id: PUBMED:11642684
Functional status, health problems, age and comorbidity in primary care patients. Objectives: To determine the relationship between functional status and health problems, age and comorbidity in primary care patients.
Methods: Patients from 60 general practitioners who visited their general practitioner were recruited and asked to complete a written questionnaire, including a list of 25 health problems and the SF-36 to measure functional status. The response rate was 67% (n = 4,112). Differences between subgroups were tested with p < 0.01.
Results: Poorer functional status was associated with increased age (except for vitality) and increased co-morbidity. Patients with asthma/bronchitis/COPD, severe heart disease/infarction, chronic back pain, arthrosis of the knees, hips or hands, or an 'other disease' had poorer scores on at least five dimensions of functional status. Patients with hypertension, diabetes mellitus or cancer did not differ from patients without these conditions on more than one dimension of functional status. In the multiple regression analysis, age had a negative effect on functional status (standardised beta-coefficients between -0.03 and -0.34) except for vitality. Co-morbidity had a negative effect on physical role constraints (-0.15) and bodily pain (-0.09). All health problems had effects on dimensions of functional status (coefficients between -0.04 and -0.13). General health and the physical dimensions of functional status were better predicted by health problems, age and co-morbidity (between 6.4 and 16.5% of variation explained) than the mental dimensions of functional status (between 1.1 and 3.2%).
Conclusion: Higher age was a predictor of poorer functional status, but there was little evidence for an independent effect of co-morbidity on functional status. Health problems had differential impact on functional status among primary care patients.
abstract_id: PUBMED:11092161
Attitudes about racism, medical mistrust, and satisfaction with care among African American and white cardiac patients. The authors examine determinants of satisfaction with medical care among 1,784 (781 African American and 1,003 white) cardiac patients. Patient satisfaction was modeled as a function of predisposing factors (gender, age, medical mistrust, and perception of racism) and enabling factors (medical insurance). African Americans reported less satisfaction with care. Although both black and white patients tended not to endorse the existence of racism in the medical care system, African American patients were more likely to perceive racism. African American patients were significantly more likely to report mistrust. Multivariate analysis found that the perception of racism and mistrust of the medical care system led to less satisfaction with care. When perceived racism and medical mistrust were controlled, race was no longer a significant predictor of satisfaction.
abstract_id: PUBMED:37587236
Early Functional Status Change After Cardiopulmonary Resuscitation in a Pediatric Heart Center: A Single-Center Retrospective Study. Children with cardiac disease are at significantly higher risk for in-hospital cardiac arrest (CA) compared with those admitted without cardiac disease. CA occurs in 2-6% of patients admitted to a pediatric intensive care unit (ICU) and 4-6% of children admitted to the pediatric cardiac-ICU. Treatment of in-hospital CA with cardiopulmonary resuscitation (CPR) results in return of spontaneous circulation in 43-64% of patients and survival rate that varies from 20 to 51%. We aimed to investigate the change in functional status of survivors who experienced an in-hospital CA using the functional status scale (FSS) in our heart center by conducting a retrospective study of all patients 0-18 years who experienced CA between June 2015 and December 2020 in a free-standing university-affiliated quaternary children's hospital. Of the 165 CA patients, 61% (n = 100) survived to hospital discharge. The non-survivors had longer length from admission to CA, higher serum lactate levels peri-CA, and received higher number of epinephrine doses. Using FSS, of the survivors, 26% developed new morbidity, and 9% developed unfavorable outcomes. There was an association of unfavorable outcomes with longer CICU-LOS and number of epinephrine doses given. Sixty-one-percent of CA patients survived to hospital discharge. Of the survivors, 26% developed new morbidity and 91% had favorable outcomes. Future multicenter studies are needed to help better identify modifiable risk factors for development of poor outcomes and help improve outcomes of this fragile patient population.
abstract_id: PUBMED:15808779
American College of Cardiology and American Heart Association methodology for the selection and creation of performance measures for quantifying the quality of cardiovascular care. The ability to quantify the quality of cardiovascular care critically depends on the translation of recommendations for high-quality care into the measurement of that care. As payers and regulatory agencies increasingly seek to quantify healthcare quality, the implications of the measurement process on practicing physicians are likely to grow. This statement describes the methodology by which the American College of Cardiology and the American Heart Association approach creating performance measures and devising techniques for quantifying those aspects of care that directly reflect the quality of cardiovascular care. Methods for defining target populations, identifying dimensions of care, synthesizing the literature, and operationalizing the process of selecting measures are proposed. It is hoped that new sets of measures will be created through the implementation of this approach, and consequently, through the use of such measurement sets in the context of quality improvement efforts, the quality of cardiovascular care will improve.
abstract_id: PUBMED:8292638
Medical evaluation of African American women entering drug treatment. This study examined the records of 252 admissions to an inpatient drug rehabilitation program for African American women between July 1989 and July 1991 to determine the prevalence and treatability of the medical conditions found on screening evaluation. All but 0.7% of subjects were on General Relief, Medicare, or Medicaid, or had no payment source. The results showed a high prevalence of lifestyle-related problems such as sexually transmitted diseases, anemia, and dental disease. Significant medical illnesses such as heart disease, abdominal surgical conditions, and breast masses were also found, along with a high level of subacute somatic discomfort. Only 58% of patients referred to specialists kept the initial appointment. These results suggest that medical evaluation of impoverished African American women seeking rehabilitation for addiction may reveal many other health problems, but that non-compliance severely limits the effectiveness of treatment. The role of the medical screening evaluation in determining fitness to participate in an inpatient program, detecting undiagnosed medical conditions, and patient education is discussed.
abstract_id: PUBMED:31403203
The Influence of Smoking Status on the Health Profiles of Older Chinese American Men. Objective: To examine the influence of smoking status on the health profiles of community-dwelling older Chinese American men in the greater Chicago, IL, area.
Design: This study utilized a cross-sectional study design to analyze data obtained from the larger Population Study of Chinese Elderly in Chicago (PINE).
Setting: A population-based study conducted in Chicago.
Participants: Baseline data from Chinese American men who participated in PINE (N = 1492).
Measures: Demographic characteristics measured included age, education years, marital status, income, health insurance coverage, and smoking pack-years. Self-reported smoking status included never smoker, current smoker, and former smoker. Health profile indicators included perceived health status, past 12-month changes in health, chronic medical conditions (heart diseases, stroke, cancer, diabetes, hypertension, high cholesterol, thyroid disease, and osteoarthritis), quality of life, and depression and anxiety.
Results: The mean age of the study sample was 72.5 years. Of the sample, 65% reported a smoking history, with 25.1% current smokers and 40.1% former smokers. Current smokers were younger, less educated, and uninsured. Former smokers had the poorest overall health profiles. Compared to former smokers, current smokers were less likely to have heart disease (odds ratio [OR] = 0.59; 95% confidence interval [CI] = 0.39-0.90), hypertension (OR = 0.54; 95% CI = 0.41-0.72), high cholesterol (OR = 0.74; 95% CI = 0.56-0.99), thyroid disease (OR = 0.44; 95% CI = 0.21-0.90), depression (rate ratio [RR] = 0.76; 95% CI = 0.58-0.99), and anxiety (RR = 0.72; 95% CI = 0.59-0.89), and they had fewer overall chronic medical conditions (RR = 0.79; 95% CI = 0.70-0.88) after controlling for demographic factors and smoking pack-year history. Compared to never smokers, former and current smokers reported poorer self-rated health (OR = 1.58; 95% CI = 1.11-2.26) and lower perceived quality of life (OR = 2.11; 95% CI = 1.04-4.29).
Conclusions: Consistent with prior research, smoking rates were elevated among this sample of older Chinese men. Counter to study hypotheses, former smokers had worse overall health. Study findings suggest the need for health promotion interventions for both current and former smokers. J Am Geriatr Soc 67:S577-S583, 2019.
abstract_id: PUBMED:22994918
Cross-sectional and longitudinal associations of functional and health status with institutional care use: results from the Survey of Health and Living Status of the Elderly in Taiwan. Aim: This study evaluated the cross-sectional and longitudinal associations of functional and health status with institutional care, and examined determinants of institutional care use over time.
Methods: Data of this study were obtained from the Survey of Health and Living Status of the Elderly in Taiwan (SHLSET), which was launched in 1989 and involved a nationally representative sample of nearly-old and old Taiwanese. The baseline data in this present study were collected in 1999, and followed in 2003 and 2007.
Results: Participants with institutional care use had a higher activities of daily living (ADL) score, more self-reported diseases and poorer self-reported health status than participants without institutional care use (all P<0.05). Cross-sectional analysis showed that a higher ADL score, having heart diseases and having a stroke were positively associated with institutional care use (P<0.05); whereas the number of self-reported diseases and poor self-reported health status were not associated with institutional care use. Longitudinal analysis showed that increased ADL scores and the number of self-reported diseases over 4- and 8 years were associated with an increased likelihood of subsequent institutional care use (all P<0.05). Worsening health status over 4 years was associated with an increased likelihood of subsequent institutional care use, but this association did not exist over 8 years.
Conclusions: Only ADL and ADL deterioration over time are cross-sectionally and longitudinally associated with increased institutional care use. Declining functional status is a major determinant of institutional care use for Taiwanese aged over 53 years.
Answer: The study comparing functional status in American and Canadian patients with heart disease found that Americans had generally better quality of life and higher median Duke Activity Status Index scores, suggesting better functional status among those with previous symptomatic heart disease (PUBMED:7594019). This difference may be attributed to different patterns of medical management of heart disease in the two countries, as the study concluded that among patients with prior symptoms of heart disease, functional status is higher in Americans than in Canadians (PUBMED:7594019).
While the study does not definitively prove that the observed differences in functional status are solely due to medical care, it does suggest that medical management practices could be a contributing factor. It is important to note that the study controlled for potential confounding factors and found the results to be essentially unchanged, strengthening the argument that medical care may play a role in the better functional status observed in American patients with heart disease (PUBMED:7594019).
However, it is also necessary to consider other studies and factors that could influence functional status. For instance, the study on functional status at the time of palliative care consult in Australia suggests that measuring functional status could help identify opportunities for earlier referral to palliative care (PUBMED:37440176). This indicates that the timing and nature of care, not just the presence of medical interventions, can impact functional status. Additionally, the study on functional status, health problems, age, and comorbidity in primary care patients shows that health problems have differential impacts on functional status (PUBMED:11642684), which implies that the relationship between medical care and functional status may also be influenced by the specific health issues and comorbidities present in patients.
In conclusion, while the study provides evidence that better functional status in American patients with heart disease may be an effect of medical care, it is important to consider the broader context of healthcare practices, patient characteristics, and other factors that could also influence functional status. |
Instruction: Do all girls with apparent idiopathic precocious puberty require gonadotropin-releasing hormone agonist treatment?
Abstracts:
abstract_id: PUBMED:32082242
Influence of Gonadotropin Hormone Releasing Hormone Agonists on Interhemispheric Functional Connectivity in Girls With Idiopathic Central Precocious Puberty. Purpose: The pubertal growth suppressive effects of gonadotropin-releasing hormone agonists (GnRHa) are well known, although it remains unclear whether long-term GnRHa treatment influences the brain function of treated children. The present study investigated the differences in homotopic resting-state functional connectivity patterns between girls with idiopathic central precocious puberty (ICPP) with and without GnRHa treatment using voxel-mirrored homotopic connectivity (VMHC). Methods: Eighteen girls with ICPP who underwent 12 months of GnRHa treatment, 40 treatment-naïve girls with ICPP, and 19 age-matched girls with premature thelarche underwent resting-state functional magnetic resonance imaging on a 3T MRI scanner. The VMHC method was used to explore differences in resting-state interhemispheric functional connectivity. The levels of serum pubertal hormones, including luteinizing hormone (LH), follicle-stimulating hormone, and estradiol, were assessed. Correlation analyses among the results of clinical laboratory examinations, neuropsychological scales, and VMHC values of different brain regions were performed with the data of the GnRHa-treated group. Results: Significant decreases in VMHC of the lingual, calcarine, superior temporal, and middle frontal gyri were identified in the untreated group compared with the control group. Medicated patients showed decreased VMHC in the superior temporal gyrus when compared with the controls. Compared with the unmedicated group, the medicated group showed a significant increase in VMHC in the calcarine and middle occipital gyrus. Moreover, a positive correlation was observed between basal LH levels and VMHC of the middle occipital gyrus in medicated patients. Conclusions: These findings indicate that long-term treatment with GnRHa was associated with increased interhemispheric functional connectivity within several areas responsible for memory and visual processing in patients with ICPP. Higher interhemispheric functional connectivity in the middle occipital gyrus was related to higher basal LH production in the girls who underwent treatment. The present study adds to the growing body of research on the effects of GnRHa on brain function.
abstract_id: PUBMED:9661615
Use of an ultrasensitive recombinant cell bioassay to determine estrogen levels in girls with precocious puberty treated with a luteinizing hormone-releasing hormone agonist. Although treatment of girls with precocious puberty should ideally restore estradiol levels to the normal prepubertal range, treatment effectiveness has usually been monitored by gonadotropin levels because estradiol RIAs have lacked sufficient sensitivity to monitor treatment effectiveness. We hypothesized that a recently developed ultrasensitive recombinant cell bioassay for estradiol would have sufficient sensitivity to demonstrate a dose-dependent suppression of estradiol during LH-releasing hormone agonist treatment and to determine whether currently used doses are able to suppress estradiol levels to the normal prepubertal range. Twenty girls with central precocious puberty were assigned randomly to receive deslorelin for 9 months at a dose of 1, 2, or 4 micrograms/kg/day. A significant dose-response relationship was observed, with mean +/- SD estradiol levels of 16.7 +/- 6.1, 7.9 +/- 1.6, and 6.5 +/- 0.7 pmol/L at the doses of 1, 2, and 4 micrograms/kg/day, respectively (P < 0.01). The highest dose suppressed estradiol levels to just above the 95% confidence limits for normal prepubertal girls (< 0.07-6.3 pmol/L). We conclude that the ultrasensitive bioassay for estradiol has sufficient sensitivity for monitoring the response to LH-releasing hormone agonist treatment of central precocious puberty. Additionally, the observation that the deslorelin dose of 4 micrograms/kg/day did not fully restore estradiol levels to the normal prepubertal range suggests that some girls with precocious puberty may require higher doses to receive the maximal benefit of treatment. We suggest that restoration of estradiol levels to the normal prepubertal range should be the ultimate biochemical measure of efficacy, as estradiol is the key hormone that accelerates growth rate, bone maturation rate, and breast development in girls with precocious puberty.
abstract_id: PUBMED:34456870
The Diagnostic Utility of the Basal Luteinizing Hormone Level and Single 60-Minute Post GnRH Agonist Stimulation Test for Idiopathic Central Precocious Puberty in Girls. Objective: The present study aimed to assess the diagnostic utility of the Luteinizing hormone (LH) levels and single 60-minute post gonadotropin-releasing hormone (GnRH) agonist stimulation test for idiopathic central precocious puberty (CPP) in girls.
Methods: Data from 1,492 girls diagnosed with precocious puberty who underwent GnRH agonist stimulation testing between January 1, 2016, and October 8, 2020, were retrospectively reviewed. LH levels and LH/follicle-stimulating hormone (FSH) ratios were measured by immuno-chemiluminescence assay before and at several timepoints after GnRH analogue stimulation testing. Mann-Whitney U test, Spearman's correlation, χ2 test, and receiver operating characteristic (ROC) analyses were performed to determine the diagnostic utility of these hormone levels.
Results: The 1,492 subjects were split into two groups: an idiopathic CPP group (n = 518) and a non-CPP group (n = 974). Basal LH levels and LH/FSH ratios, as well as the values measured at 30, 60, 90, and 120 minutes after GnRH analogue stimulation, were significantly different between the two groups. Spearman's correlation analysis showed the strongest correlation between peak LH and LH levels at 60 minutes after GnRH agonist stimulation (r = 0.986, P < 0.001). ROC curve analysis revealed that the 60-minute LH/FSH ratio yielded the highest consistency, with an area under the ROC curve (AUC) of 0.988 (95% confidence interval [CI], 0.982-0.993) and a cut-off point of 0.603 mIU/L (sensitivity 97.3%, specificity 93.0%). The cut-off points of basal LH and LH/FSH were 0.255 mIU/L (sensitivity 68.9%, specificity 86.0%) and 0.07 (sensitivity 73.2%, specificity 89.5%), respectively, with AUCs of 0.823 (95% CI, 0.799-0.847) and 0.843 (95% CI, 0.819-0.867), respectively.
Conclusions: A basal LH value greater than 0.535 mIU/L can be used to diagnose CPP without a GnRH agonist stimulation test. A single 60-minute post-stimulus gonadotropin result of LH and LH/FSH can be used instead of a GnRH agonist stimulation test, or samples can be taken only at 0, 30, and 60 minutes after a GnRH agonist stimulation test. This reduces the number of blood draws required compared with the traditional stimulation test, while still achieving a high level of diagnostic accuracy.
abstract_id: PUBMED:767354
Effect of cyproterone acetate therapy on gonadotropin response to synthetic luteinizing hormone-releasing hormone (LRH) in girls with idiopathic precocious puberty. Cyproterone acetate (CA), 50 mg daily, was orally administered for 8 to 35 months to 7 girls with idiopathic precocious puberty. Plasma levels of FSH and LH, cortisol, testosterone, estradiol, and progesterone were measured in 6 patients before treatment. Compared with control subjects of the same chronological age, significantly higher values were found for gonadotropins, testosterone, and estradiol. After treatment, no significant variation was observed in FSH and LH levels; testosterone was reduced in the majority of cases without a significant decline in mean values; estradiol fell significantly and returned to the prepubertal range. The plasma gonadotropin pattern following exogenously administered luteinizing hormone-releasing hormone (LRH, 100 micrograms iv) was characterized before treatment by an exaggerated LH response both in terms of maximum level (32.02 +/- 4.35 SE mIU/ml; prepubertal controls: 16.20 +/- 1.45 SE mIU/ml) and maximum increment above baseline values (25.36 +/- 2.84 SE mIU/ml; prepubertal controls: 13.78 +/- 1.71 mIU/ml); the plasma FSH response was similar to that of prepubertal subjects. Treatment with CA caused a significant reduction of the mean LH response (P less than .025 in comparison with pre-treatment values for both maximum level and maximum increment), whereas the effect on the FSH response was minimal. In all patients examined, gonadotropin release from the pituitary after the injection of synthetic LRH was still evident after several months of therapy.
abstract_id: PUBMED:29280737
A Critical Appraisal of the Effect of Gonadotropin-Releasing Hormone Analog Treatment on Adult Height of Girls with Central Precocious Puberty. Central precocious puberty (CPP) is a diagnosis that pediatric endocrinologists worldwide increasingly make in girls aged 6-8 years and is mostly idiopathic. Part of the reason for increasing referral and diagnosis is the perception among doctors as well as patients that treatment of CPP with long-acting gonadotropin-releasing hormone analogues (GnRHa) promotes the height of the child. Although the timing and tempo of puberty do influence statural growth and achieved adult height, the extent of this effect is variable, depends on several factors, and is modest in most cases. Studies investigating GnRHa treatment in girls with idiopathic CPP demonstrate that treatment is able to restore adult height compromised by precocious puberty. However, reports on untreated girls with precocious puberty demonstrate that some of these girls achieve their target height without treatment as well, thus blurring the net effect of GnRHa treatment on height in girls with CPP. Clinical studies on the effect of treating girls with idiopathic CPP on adult stature suffer from a weak evidence base, due mainly to the lack of well-designed randomized controlled studies and our limited ability to predict the adult height of a child with narrow precision. This is particularly true for girls in whom the age of pubertal onset is close to the physiological age of puberty, who constitute the majority of cases treated with GnRHa nowadays. The heterogeneous nature of pubertal tempo (progressive vs. nonprogressive), leading to different height outcomes, also complicates the interpretation of the results in both treated and untreated cases. This review will attempt to summarize and critically appraise the available data in the field.
abstract_id: PUBMED:27215137
Luteinizing Hormone Secretion during Gonadotropin-Releasing Hormone Stimulation Tests in Obese Girls with Central Precocious Puberty. Objective: Girls with precocious puberty have high luteinizing hormone (LH) levels and advanced bone age. Obese children enter puberty at earlier ages than do non-obese children. We analyzed the effects of obesity on LH secretion during gonadotropin-releasing hormone (GnRH) tests in girls with precocious puberty.
Methods: A total of 981 subjects with idiopathic precocious puberty who had undergone GnRH stimulation testing between 2008 and 2014 were included in the study. Subjects were divided into three groups based on body mass index (BMI). Auxological data and gonadotropin levels after the GnRH stimulation test were compared.
Results: In Tanner stage 2 girls, peak stimulated LH levels on GnRH test were 11.9±7.5, 10.4±6.4, and 9.1±6.1 IU/L among normal-weight, overweight, and obese subjects, respectively (p=0.035 for all comparisons). In Tanner stage 3 girls, peak stimulated LH levels were 14.9±10.9, 12.8±7.9, and 9.6±6.0 IU/L, respectively (p=0.022 for all comparisons). However, in Tanner stage 4 girls, peak stimulated LH levels were not significantly different among normal, overweight, and obese children. On multivariate analysis, BMI standard deviation score was significantly and negatively associated with peak LH (β=-1.178, p=0.001).
Conclusion: In girls with central precocious puberty, increased BMI was associated with slightly lower peak stimulated LH levels at early pubertal stages (Tanner stages 2 and 3). This association was not observed in Tanner stage 4 girls.
abstract_id: PUBMED:6458765
Short-term treatment of idiopathic precocious puberty with a long-acting analogue of luteinizing hormone-releasing hormone. A preliminary report. The uncoupling of pituitary stimulation and response observed in adults during administration of the luteinizing hormone-releasing hormone analogue, D-Trp6-Pro9-NEt-LHRH (LHRHa) suggested that this drug might be useful in treating precocious puberty. We treated five girls with idiopathic precocious puberty (ages two to eight) for eight weeks with daily subcutaneous injections of LHRHa. The patients had Tanner II to IV pubertal development, advanced bone age, an estrogen effect on vaginal smear, measurable basal gonadotropin levels with pulsed nocturnal secretion, and a pubertal gonadotropin response to LHRH. Irregular vaginal bleeding was present in three patients. LHRHa significantly decreased basal (P less than 0.025) and LHRH-stimulated (P less than 0.01) gonadotropin levels as well as serum estradiol (P less than 0.05). The vaginal maturation-index score, which reflects the estrogen effect, fell by 25 per cent. Eight weeks after stopping treatment, all hormonal values and the vaginal maturation index had returned to pretreatment levels. These favorable short-term results will need further study before the benefits and risks of chronic treatment with LHRHa can be adequately assessed.
abstract_id: PUBMED:6415479
Long-term treatment of central precocious puberty with a long-acting analogue of luteinizing hormone-releasing hormone. Effects on somatic growth and skeletal maturation. The gonadotropin-releasing hormone-like agonist D-Trp6-Pro9-NEt-LHRH (LHRHa) has been shown to induce a reversible short-term suppression of gonadotropins and gonadal steroids in patients with central precocious puberty. Since accelerated statural growth and bone maturation are clinical features of precocity not well controlled by conventional therapies, we examined the effects of prolonged LHRHa therapy for 18 consecutive months on growth and skeletal maturation in nine girls with neurogenic or idiopathic precocious puberty. Suppression of gonadotropin pulsations and gonadal steroids was maintained in all subjects. Growth velocity fell from a mean rate (+/- S.E.M.) of 9.35 +/- 0.64 cm per year during the 19 months before treatment to 4.58 +/- 0.60 cm per year during treatment (P less than 0.001). Bone age advanced a mean of 9.4 +/- 2.3 months during treatment. These changes resulted in a mean increase of 3.3 cm in predicted height (P less than 0.01). Complete suppression of the pituitary-gonadal axis can be maintained by LHRHa therapy, resulting in slowing of excessively rapid growth and skeletal maturation and in increased predicted adult height in girls with precocious puberty.
abstract_id: PUBMED:6434582
Variable response to a long-acting agonist of luteinizing hormone-releasing hormone in girls with McCune-Albright syndrome. Six girls with McCune-Albright syndrome were treated for at least 2 months with the long-acting LHRH agonist D-Trp6-Pro9-NEt-LHRH, which previously was found to be an effective treatment for true precocious puberty. Nocturnal and LHRH-stimulated serum gonadotropin levels and plasma estradiol levels were measured before treatment and after 2-3 months of treatment. Five of the six girls had no decrease in serum gonadotropin or plasma estradiol levels during therapy, and their pubertal signs were unaffected by treatment. All five of these girls had serum gonadotropin levels that were within or below the normal prepubertal range. The sixth girl, who had gonadotropin levels in the normal pubertal range before treatment, had decreased serum gonadotropin and plasma estradiol levels during 1 yr of LHRH analog therapy. This was associated with cessation of menses and regression of secondary sexual changes. The failure of LHRH analog to modify the course of precocious puberty in the five patients with prepubertal serum gonadotropin concentrations is further evidence that the mechanism of precocious puberty in most girls with McCune-Albright syndrome differs from that in patients with true precocious puberty.
abstract_id: PUBMED:17048429
Bone mineral density and body composition in Thai Precocious Puberty girls treated with GnRH agonist. Treatment of true precocious puberty (PP) with a GnRH agonist can improve final adult height by suppressing gonadotropin and sex hormone levels, thereby delaying the fusion of the long bone epiphyseal growth plates. However, deprivation of estrogen may affect the acquisition of peak bone mass, especially in individuals with low calcium intake. Ten Thai girls with idiopathic true PP were evaluated for bone mineral density (BMD) and body composition by DXA scanner (Hologic, Inc) before and after GnRH agonist therapy for 1 year. During treatment, all children were allowed to consume a normal diet without extra calcium supplementation. In addition, serum calcium, phosphate, alkaline phosphatase and osteocalcin were also measured. The results showed that the GnRH agonist improved predicted adult height (PAH) from 149.4 +/- 5.4 to 153.6 +/- 6.8 cm (p < 0.001). Serum osteocalcin, a marker of bone formation, decreased from 184.2 +/- 66.7 to 108.6 +/- 35.3 ng/mL (p = 0.012). However, the treatment had no negative effects on lumbar spine BMD or total BMD, but increased the percentage of fat mass from 25.7 +/- 5.2 to 31.6 +/- 5.5% (p = 0.007). In conclusion, treatment with a GnRH agonist in Thai girls with true PP for 1 year can improve PAH without negative effects on BMD, but longer periods of treatment need to be studied.
Answer: No, not all girls with apparent idiopathic precocious puberty (IPP) require gonadotropin-releasing hormone agonist (GnRHa) treatment. The decision to treat with GnRHa is based on several factors, including the age of onset, rate of progression, bone age advancement, and potential psychological effects. Some girls with precocious puberty may achieve their target height without treatment, and the net effect of GnRHa treatment on height in girls with CPP can be variable and modest in most cases (PUBMED:29280737). Additionally, the treatment aims to restore adult height compromised by precocious puberty, but the extent of this effect depends on several factors (PUBMED:29280737).
Moreover, the diagnostic utility of basal luteinizing hormone (LH) levels and single 60-minute post-GnRH agonist stimulation tests can help differentiate between girls who have idiopathic CPP and those who do not, potentially reducing the need for unnecessary treatment in some cases (PUBMED:34456870). It is also important to note that some girls with precocious puberty may require higher doses of GnRHa to suppress estradiol levels to the normal prepubertal range, suggesting that treatment should be individualized (PUBMED:9661615).
In summary, while GnRHa treatment can be beneficial for managing IPP, not all girls with the condition require such treatment, and careful assessment and monitoring are necessary to determine the appropriate therapeutic approach for each individual case. |
Instruction: Does prostate HistoScanning™ play a role in detecting prostate cancer in routine clinical practice?
Abstracts:
abstract_id: PUBMED:24224648
Does prostate HistoScanning™ play a role in detecting prostate cancer in routine clinical practice? Results from three independent studies. Objectives: To evaluate the ability of prostate HistoScanning™ (PHS; Advanced Medical Diagnostics, Waterloo, Belgium) to detect, characterize and locally stage prostate cancer, by comparing it with transrectal ultrasonography (TRUS)-guided prostate biopsies, transperineal template prostate biopsies (TTBs) and whole-mount radical prostatectomy specimens.
Subjects And Methods: Study 1. We recruited 24 patients awaiting standard 12-core TRUS-guided biopsies of the prostate to undergo PHS immediately beforehand. We compared PHS with the TRUS-guided biopsy results in terms of their ability to detect cancer within the whole prostate and to localize it to the correct side and to the correct region of the prostate. Lesions that were suspicious on PHS were biopsied separately. Study 2. We recruited 57 patients awaiting TTB to have PHS beforehand. We compared PHS with the TTB pathology results in terms of their ability to detect prostate cancer within the whole gland and to localize it to the correct side and to the correct sextant of the prostate. Study 3. We recruited 24 patients awaiting radical prostatectomy for localized prostate cancer to undergo preoperative PHS. We compared PHS with standardized pathological analysis of the whole-mount prostatectomy specimens in terms of their measurement of total tumour volume within the prostate, tumour volume within prostate sextants and volume of index lesions identified by PHS.
Results: The PHS-targeted biopsies had an overall cancer detection rate of 38.1%, compared with 62.5% with standard TRUS-guided biopsies. The sensitivity and specificity of PHS for localizing tumour to the correct prostate sextant, compared with standard TRUS-guided biopsies, were 100 and 5.9%, respectively. The PHS-targeted biopsies had an overall cancer detection rate of 13.4% compared with 54.4% for standard TTB. PHS had a sensitivity and specificity for cancer detection in the posterior gland of 100 and 13%, respectively, and for the anterior gland, 6 and 82%, respectively. We found no correlation between total tumour volume estimates from PHS and radical prostatectomy pathology (Pearson correlation coefficient -0.096). Sensitivity and specificity of PHS for detecting tumour foci ≥0.2 mL in volume were 63 and 53%.
Conclusions: These three independent studies in 105 patients suggest that PHS does not reliably identify and characterize prostate cancer in the routine clinical setting.
abstract_id: PUBMED:28753891
Evaluation of Prostate HistoScanning as a Method for Targeted Biopsy in Routine Practice. Background: Prostate HistoScanning (PHS) is a tissue characterization system used to enhance prostate cancer (PCa) detection via transrectal ultrasound imaging.
Objective: To assess the impact of supplementing systematic transrectal biopsy with up to three PHS true targeting (TT) guided biopsies on the PCa detection rate and preclinical patient assessment.
Design, Setting, And Participants: This was a prospective study involving a cohort of 611 consecutive patients referred for transrectal prostate biopsy following suspicion of PCa. PHS-TT guided cores were obtained from up to three PHS lesions of ≥0.5cm3 per prostate and only one core per single PHS lesion. Histological outcomes from a systematic extended 12-core biopsy (Bx) scheme and additional PHS-TT guided cores were compared.
Outcome Measurements And Statistical Analysis: Comparison of PHS results and histopathology was performed per sextant. The χ2 and Mann-Whitney test were used to assess differences. Statistical significance was set at p<0.05.
Results And Limitations: PHS showed lesions of ≥0.5cm3 in 312 out of the 611 patients recruited. In this group, Bx detected PCa in 59% (185/312) and PHS-TT in 87% (270/312; p<0.001). The detection rate was 25% (944/3744 cores) for Bx and 68% (387/573 cores) for PHS-TT (p<0.001). Preclinical assessment was significantly better when using PHS-TT: Bx found 18.6% (58/312) and 8.3% (26/312), while PHS-TT found 42.3% (132/312) and 20.8% (65/312) of Gleason 7 and 8 cases, respectively (p<0.001). PHS-TT attributed Gleason score 6 to fewer patients (23.4%, 73/312) than Bx did (32.4%, 101/312; p=0.0021).
Conclusions: Patients with a suspicion of PCa may benefit from addition of a few PHS-TT cores to the standard Bx workflow.
Patient Summary: Targeted biopsies of the prostate are proving to be equivalent to or better than standard systematic random sampling in many studies. Our study results support supplementing the standard schematic transrectal ultrasound-guided biopsy with a few guided cores harvested using the ultrasound-based prostate HistoScanning true targeting approach in cases for which multiparametric magnetic resonance imaging is not available.
abstract_id: PUBMED:33825986
A systematic review and meta-analysis of Histoscanning™ in prostate cancer diagnostics. Context: The value of Histoscanning™ (HS) in prostate cancer (PCa) imaging is much debated, although it has been used in clinical practice for more than 10 years now.
Objective: To summarize the data on HS from various PCa diagnostic perspectives to determine its potential.
Materials And Methods: We performed a systematic search using 2 databases (Medline and Scopus) on the query "Histoscan*". The primary endpoint was HS accuracy. The secondary endpoints were: correlation of lesion volume by HS and histology, ability of HS to predict extracapsular extension or seminal vesicle invasion.
Results: HS improved the cancer detection rate "per core", OR = 16.37 (95% CI 13.2; 20.3), p < 0.0001, I2 = 98%, and "per patient", OR = 1.83 (95% CI 1.51; 2.21), p < 0.0001, I2 = 95%. The pooled accuracy was markedly low: sensitivity - 0.2 (95% CI 0.19-0.21), specificity - 0.12 (0.11-0.13), AUC 0.12. Eight of 10 studies showed no additional value for HS. The pooled accuracy against histology after RP was relatively better, yet still very low: sensitivity - 0.56 (95% CI 0.5-0.63), specificity - 0.23 (0.18-0.28), AUC 0.4. Nine of 12 studies did not show any benefit of HS.
Conclusion: This meta-analysis does not show incremental value of prostate Histoscanning over conventional TRUS in prostate cancer screening and targeted biopsy. HS proved to be slightly more accurate in predicting extracapsular extension on RP, but the available data do not allow us to draw any conclusions about its effectiveness in practice. Histoscanning is a modification of ultrasound for prostate cancer visualization, and the available data suggest low accuracy in screening for and detection of prostate cancer.
abstract_id: PUBMED:25860379
Controversial evidence for the use of HistoScanning™ in the detection of prostate cancer. Introduction: Given the growing body of literature since the first description of HistoScanning™ in 2008, there is an unmet need for a contemporary review.
Evidence Acquisition: Studies addressing HistoScanning™ in prostate cancer (PCa) were considered for inclusion in the current review. To identify eligible reports, we relied on a bibliographic search of the PubMed database conducted in January 2015.
Evidence Synthesis: Twelve original articles were available for inclusion in the current review. The existing evidence was reviewed according to the following three topics: prediction of final pathology at radical prostatectomy, prediction of disease stage, and application at prostate biopsy.
Conclusions: High sensitivity and specificity of HistoScanning™ for predicting cancer foci ≥0.5 ml at final pathology were achieved in the pilot study. These results were called into question when HistoScanning™-derived tumor volume was found not to correlate with final pathology results. Additionally, HistoScanning™ was unable to provide reliable staging information regarding either extraprostatic extension or seminal vesicle invasion prior to radical prostatectomy. Conflicting data also exist regarding the use of HistoScanning™ at prostate biopsy. Specifically, the most encouraging results were recorded in a small patient cohort, whereas in larger studies HistoScanning™ predicted positive biopsies poorly. Finally, the combination of HistoScanning™ and conventional ultrasound achieved lower detection rates than systematic biopsy. Currently, the evidence is at best weak and it remains questionable whether HistoScanning™ might improve the detection of PCa.
abstract_id: PUBMED:25501797
Prostate histoscanning true targeting guided prostate biopsy: initial clinical experience. Objective: To evaluate the feasibility of prostate histoscanning true targeting (PHS-TT) guided transrectal ultrasound (TRUS) biopsy.
Methods: This was a prospective, single-center pilot study performed from February 2013 to September 2013. All consecutive patients scheduled for prostate biopsy were included in the study, and all procedures were performed by a single surgeon aided by the specialized true targeting software. Initially, the patients underwent PHS to map abnormal areas within the prostate that were ≥0.2 cm(3). TRUS-guided biopsies targeting the abnormal areas were performed with specialized software. Routine bisextant biopsies were also taken. The final histopathology of the target cores was compared with that of the bisextant cores.
Results: A total of 43 patients underwent combined 'targeted PHS-guided' and 'standard 12-core systematic' biopsies. The mean volume of the abnormal areas detected by PHS was 4.3 cm(3). The overall cancer detection rate was 46.5% (20/43), with systematic cores and target cores detecting cancer in 44% (19/43) and 26% (11/43), respectively. The mean percentage cancer/core length of the PHS-TT cores was significantly higher than that of the systematic cores (55.4% vs. 37.5%, p < 0.05). In biopsy-naïve patients, the cancer detection rate of the target cores (43.7% vs. 14.8%, p = 0.06) and their cancer positivity (30.1% vs. 6.8%, p < 0.01) were higher than in patients with prior biopsies.
Conclusion: PHS-TT is feasible and can be an effective tool for real-time guidance of prostate biopsies.
abstract_id: PUBMED:23594146
Prostate HistoScanning: a screening tool for prostate cancer? Objective: To evaluate Prostate HistoScanning as a screening tool for prostate cancer in a pilot study.
Methods: During a 6-month period, 94 men with a normal or suspicious digital rectal examination, normal or elevated prostate-specific antigen, or an increased prostate-specific antigen velocity were examined with Prostate HistoScanning. Based on these parameters and the HistoScanning analysis, 41 men were referred for prostate biopsy under computer-aided ultrasonographic guidance. The number of random biopsy cores varied depending on the prostate volume. Targeted biopsies were taken when computer-aided ultrasonography showed an area suspicious for malignancy. A logistic regression analysis was carried out to estimate the probability of a positive prostate biopsy based on the HistoScanning findings.
Results: On logistic regression analysis, after adjusting for age, digital rectal examination, serum prostate-specific antigen level, prostate volume and tumor lesion volume, every 1-mL increase in cancer volume estimated by HistoScanning was associated with a nearly threefold increase in the probability of a positive biopsy (odds ratio 2.9; 95% confidence interval 1.2-7.0; P-value 0.02). Prostate cancer was found in 17 of 41 men (41%). In patients with cancer, computer-aided ultrasonography-guided biopsy was 4.5-fold more likely to detect cancer than random biopsy. The prostate cancer detection rates for random biopsy and directed biopsy were 13% and 58%, respectively. HistoScanning-guided biopsy significantly decreased the number of biopsies necessary (P-value <0.0001).
Conclusions: Our findings suggest that Prostate HistoScanning might be helpful for the selection of patients in whom prostate biopsies are necessary. This imaging technique can be used to direct biopsies in specific regions of the prostate with a higher cancer detection rate.
abstract_id: PUBMED:26215749
True targeting-derived prostate biopsy: HistoScanning™ remained inadequate despite advanced technical efforts. Purpose: To verify the reliability of HistoScanning™-based, true targeting (TT)-derived prostate biopsy.
Methods: We relied on 40 patients with suspected prostate cancer who underwent standard and TT-derived prostate biopsy. Sensitivity, specificity, positive predictive value, negative predictive value and the area under the curve (AUC) were assessed for the prediction of biopsy results per octant by HistoScanning™, using different HistoScanning™ signal volume cutoffs (>0, >0.2 and >0.5 ml).
Results: Overall, 319 octants were analyzed. Of those, 64 (20.1 %) harbored prostate cancer. According to different HistoScanning™ signal volume cutoffs (>0, >0.2 and >0.5 ml), the AUCs for predicting biopsy results were: 0.51, 0.51 and 0.53, respectively. Similarly, the sensitivity, specificity, positive predictive and negative predictive values were: 20.7, 78.2, 17.4 and 81.6 %; 20.7, 82.0, 20.3 and 82.3 %; and 12.1, 94.6, 33.3 and 82.9 %, respectively.
Conclusions: Prediction of biopsy results based on HistoScanning™ signals and TT-derived biopsies was unreliable. Moreover, the AUC of TT-derived biopsies was low and did not improve when higher signal volume cutoffs were applied (>0.2 and >0.5 ml). We cannot recommend varying well-established biopsy standards or reducing the number of biopsy cores based on HistoScanning™ signals.
abstract_id: PUBMED:31356028
Fusion biopsy of the prostate. Aim: To compare the prostate cancer (PCa) detection rate, accuracy and safety of image-guided prostate fusion biopsy methods (cognitive fusion, software fusion and HistoScanning-guided biopsy) on the basis of published studies in patients aged 48 to 75 years with suspected prostate cancer undergoing primary or repeat biopsy, to identify the limitations of these methods, and to improve the efficiency of prostate fusion biopsy in a further clinical trial.
Materials And Methods: A search was carried out in the PubMed, Medline, Web of Science and eLibrary databases using the following queries: (prostate cancer OR prostate adenocarcinoma) AND (MRI or magnetic resonance) AND (targeted biopsy); (prostate cancer OR prostate adenocarcinoma) AND (PHS OR Histoscanning) AND (targeted biopsy) and (prostate cancer OR prostate adenocarcinoma) AND (MRI or magnetic resonance) AND (targeted biopsy) AND (cognitive registration), targeted prostate biopsy, prostate histoscanning, histoscanning, cognitive prostate biopsy.
Results: A total of 672 publications were found, of which 25 original scientific papers were included in the analysis (n=4634). In patients with an average age of 62.5 years (48-75) and an average PSA of 6.3 ng/ml (4.1-10.8), the PCa detection rate of cognitive fusion biopsy under MRI control (MR-fusion) was 32.5%, compared with 30% for HistoScanning-guided biopsy combined with systematic biopsy and 35% for the combination of methods (MR-fusion biopsy plus HistoScanning-guided biopsy). The accuracy of cognitive MR-fusion biopsy was 49.8% (20.8%-82%), the accuracy of software MR-fusion biopsy was 52.5% (26.5%-69.7%), and the accuracy of HistoScanning-guided targeted biopsy was 46.8% (26%-75.8%). The highest values were observed in patients undergoing primary biopsy (75.8%).
Discussion: Imaging methods are currently changing the approach to the diagnosis of PCa by improving the efficiency of prostate biopsy, the only formal method for verifying PCa. A common method for PCa diagnosis in 2018 was systematic prostate biopsy. However, due to its drawbacks, fusion biopsy under MRI or ultrasound control has been introduced into clinical practice with superior results. So far, there are insufficient scientific data to select a specific prostate fusion biopsy technique. According to the analysis, the incidence of complications did not increase when targeted biopsy was performed in addition to the systematic protocol.
Conclusion: The efficiency of cognitive MR-fusion biopsy is comparable to that of software MR-fusion biopsy. HistoScanning-guided biopsy has lower diagnostic value than software-based MR-guided targeted biopsy. The lack of solid conclusions in favor of a particular prostate fusion biopsy technique underscores the need for further research on this topic.
abstract_id: PUBMED:24291455
The PICTURE study -- prostate imaging (multi-parametric MRI and Prostate HistoScanning™) compared to transperineal ultrasound guided biopsy for significant prostate cancer risk evaluation. Objective: The primary objective of the PICTURE study is to assess the negative predictive value of multi-parametric MRI (mp-MRI) and Prostate HistoScanning™ (PHS) in ruling-out clinically significant prostate cancer.
Patients And Methods: PICTURE is a prospective diagnostic validating cohort study conforming to level 1 evidence. PICTURE will assess the diagnostic performance of multi-parametric Magnetic Resonance Imaging (mp-MRI) and Prostate HistoScanning™ (PHS) ultrasound. PICTURE will involve validating both index tests against a reference test, transperineal Template Prostate Mapping (TPM) biopsies, which can be applied in all men under evaluation. Men will be blinded to the index test results and both index tests will be reported prospectively prior to the biopsies being taken to ensure reporter blinding. Paired analysis of each of the index tests to the reference test will be done at patient level. Those men with an imaging lesion will undergo targeted biopsies to assess the clinical utility of sampling only suspicious areas. The study is powered to assess the negative predictive value of these imaging modalities in ruling-out clinically significant prostate cancer.
Discussion: The PICTURE study aims to assess the performance characteristics of two imaging modalities (mp-MRI and Prostate HistoScanning) for their utility in the prostate cancer pathway. PICTURE aims to identify if either imaging test may be useful for ruling out clinically significant disease in men under investigation, and also to examine if either imaging modality is useful for the detection of disease. Recruitment is underway and expected to complete in 2014.
abstract_id: PUBMED:29121982
Application of ultrasound imaging biomarkers (HistoScanning™) improves staging reliability of prostate biopsies. Objective: Imaging biomarkers such as HistoScanning™ augment the informative value of ultrasound. Analogous image guidance might improve the diagnostic accuracy of prostate biopsies and reduce misclassifications in preoperative staging and grading.
Results: Comparison of 77 image-guided versus 88 systematic prostate biopsies revealed that incorrect staging and Gleason misclassification occur less frequently in image-guided than in systematic prostate biopsies. Systematic prostate biopsies (4-36 cores, median 12 cores) tended to detect predominantly unilateral tumors (39% sensitivity, 90.9% specificity, 17.5% negative and 50% positive predictive values). Bilateral tumors were diagnosed more frequently by image-guided prostate biopsies (87.9% sensitivity, 72.7% specificity, 50% negative and 96.8% positive predictive values). Regarding the detection of lesions with high Gleason scores (≥ 3 + 4), systematic and image-guided prostate biopsies yielded sensitivity and specificity rates of 66.7% vs 93.5% and 86% vs 64.5%, as well as negative and positive predictive values of 71.2% vs 87% and 83.3% vs 79.6%, respectively. A potential reason for systematic prostate biopsies missing the correct laterality and the correct Gleason score was a mismatch between the biopsy template and the respective pathological cancer localization. This supports the need for improved detection techniques such as ultrasound imaging biomarkers and image-adapted biopsies.
Answer: The role of prostate HistoScanning™ (PHS) in detecting prostate cancer in routine clinical practice is debated and the evidence is mixed. Some studies suggest that PHS may not be reliable for identifying and characterizing prostate cancer. For instance, one study involving 105 patients across three independent studies found that PHS did not reliably detect or localize prostate cancer when compared with standard transrectal ultrasonography (TRUS)-guided biopsies, transperineal template prostate biopsies (TTBs), or whole-mount radical prostatectomy specimens (PUBMED:24224648). Another study concluded that PHS remained inadequate for prostate cancer diagnostics despite advanced technical efforts, with low sensitivity, specificity, and area under the curve (AUC) values (PUBMED:26215749).
Conversely, other studies have shown some potential benefits of PHS. A study involving 611 patients found that supplementing systematic transrectal biopsy with PHS true targeting (TT) guided biopsies improved the prostate cancer detection rate and preclinical patient assessment (PUBMED:28753891). Another study indicated that PHS might be helpful for selecting patients who require prostate biopsies, with a higher cancer detection rate when directing biopsies to specific regions of the prostate (PUBMED:23594146). Additionally, a pilot study reported that PHS true targeting guided prostate biopsy is feasible and can be an effective tool for real-time guidance of prostate biopsies (PUBMED:25501797).
However, a systematic review and meta-analysis of Histoscanning™ in prostate cancer diagnostics reported that the pooled accuracy of HS was markedly low, and most studies did not show additional value for HS (PUBMED:33825986). Another review highlighted controversial evidence for the use of HistoScanning™, with some studies showing high sensitivity and specificity, while others did not correlate well with final pathology results or provide reliable staging information (PUBMED:25860379).
In conclusion, while some studies suggest that PHS may have a role in enhancing prostate cancer detection and targeting biopsies, the overall evidence is mixed, with several studies questioning its reliability and accuracy. Further research is needed to clarify the role of PHS in routine clinical practice for detecting prostate cancer. |
Instruction: Is endoscopy useful for early diagnosis of ischaemic colitis after aortic surgery?
Abstracts:
abstract_id: PUBMED:9476191
Is endoscopy useful for early diagnosis of ischaemic colitis after aortic surgery? Results of a prospective trial. Background: Ischaemic colitis is an infrequent, but potentially fatal, complication of abdominal aortic surgery. Its presentation is often underestimated on account of a paucity of symptoms, thus the real incidence of ischaemic colitis may be higher.
Aim: To determine the prognostic value and sensitivity of endoscopy, early postoperative endoscopic findings were evaluated.
Methods: Over a period of three years a prospective study was undertaken in a consecutive series of 105 patients (mean age 68.9 years, range 51-85) undergoing routine rectosigmoidoscopy within 72 hours of aortic reconstructive surgery.
Results: Colonic ischaemia was found in 12 patients (11.4%): five had endoscopic evidence of mild ischaemic colitis, ulcerations were identified in five, and diffuse superficial necrosis in two. Seven of the 12 patients were symptomatic. Laparotomy was never deemed necessary and all patients were successfully treated with a conservative regimen. There were no deaths. Whether the operation was an elective reconstruction or an urgent procedure did not correlate with the development of colonic ischaemia, nor did aortic cross-clamp time, patency of the inferior mesenteric artery (or its ligation or reimplantation), or patency of the hypogastric arteries.
Conclusions: Rectosigmoidoscopy is effective for early diagnosis of ischaemic colitis. Early endoscopy should be routinely performed only for patients in whom impaired blood flow is suspected on the basis of the intraoperative objective assessment of the colon and in presence of symptoms.
abstract_id: PUBMED:16376117
Plasma D-lactate as a potential early marker for colon ischaemia after open aortic reconstruction. Background And Aim: The breakdown of mucosal barrier function due to intestinal hypo-perfusion is the earliest dysfunction of ischaemic colitis. Severe colon ischaemia after aortic reconstruction is associated with mortality rates up to 90%. Therefore, early detection and treatment of patients with extensive ischaemic colitis is of crucial importance. In experimental studies, both D-lactate and bacterial endotoxin have been reported as markers of intestinal mucosal barrier impairment. However, evidence of their value in clinical practice is lacking. The aim of this pilot prospective cohort study was to assess the association between ischaemia of the colon (assessed histologically) and plasma levels of D-lactate and endotoxin in patients undergoing open aortic reconstruction.
Patients And Methods: Twelve consecutive patients underwent surgery between February and April 2003. Six patients underwent emergency surgery and six patients elective aortic surgery. D-Lactate and endotoxin levels were measured in blood samples collected according to a standardised protocol. For histological examination biopsies were obtained by sigmoidoscopy on days 4-6 after surgery, or earlier if indicated clinically.
Results: As early as 2 h postoperatively, elevated plasma levels of D-lactate were measured in patients with histologically proven ischaemic colitis. The peak of D-lactate elevation occurred on postoperative days 1 and 2. The concentration of plasma endotoxin was not significantly different between patients with and without ischaemic colitis.
Conclusion: Our data suggest that plasma D-lactate levels are a useful marker for early detection of ischaemic colitis secondary to aortic surgery.
abstract_id: PUBMED:32002560
Colonic ischemia after open and endovascular aortic surgery: Epidemiology, Risk Factors, Diagnosis and Therapy. Despite the successful establishment of endovascular techniques, colonic ischemia continues to be a serious complication of aortic surgery. Risk factors for colonic ischemia include aortic aneurysm rupture, prolonged aortic clamping, perioperative hypotension, the need for catecholamine therapy, occlusion of the hypogastric arteries, and renal insufficiency. The clinical presentation of postoperative colonic ischemia is often nonspecific. Classic findings include abdominal pain, diarrhea, peranal bleeding, and rising inflammatory markers. No specific laboratory marker for colonic ischemia exists. The diagnostic gold standard is endoscopy; imaging methods such as sonography or computed tomography play only a supportive role. Transmural ischemia resulting in bowel wall necrosis is an indication for emergency surgery, predominantly colonic resection with stoma creation.
abstract_id: PUBMED:12136459
Utility of sigmoid intramucosal pH in the early diagnosis of ischemic colitis after aortic surgery. A 62-year-old man with grade III ischemia of the legs and occlusion of an aortofemoral shunt underwent axillofemoral bypass and bilateral profundoplasty. During surgery, an aneurysm at the aortic origin of the right common iliac artery ruptured, requiring ligation of the inferior vena cava, the iliac veins and the right common iliac artery. Upon transfer of the patient to the recovery unit, the sigmoid intramucosal pH (pHi) was 6.83 (arterial pH 7.35), the regional CO2 pressure (PrCO2) was 100 mmHg (arterial PCO2 35.2 mmHg), and the lactic acid concentration was 3.6 mmol/L. Ischemic colitis was suspected, and colonoscopy confirmed severe rectal and moderate sigmoid inflammation. An extended sigmoidectomy with colostomy was performed. The patient died of multiorgan failure 48 hours after surgery. Ischemic colitis is a severe complication of aortic surgery. Sigmoid pHi monitoring is non-invasive and highly useful for the early diagnosis of ischemic colitis.
abstract_id: PUBMED:12653054
Ischemic colitis in patients undergoing aortic replacement surgery: risk factors. Ischemic colitis (IC) is an important clinical problem and may present after aortic surgical procedures. The aim of this work was to establish risk factors for IC in patients undergoing aortic replacement surgery.
Material And Methods: A retrospective study of patients who underwent aortic replacement over a 3-year period was carried out. Patients were divided into two groups: patients without IC and patients with IC, the latter group subdivided into patients with and without gangrenous ischemic colitis. Multiple logistic regression was used to identify variables that were possible risk factors for IC.
Results: We included 101 patients in the study; ischemic colitis was present in 16.8% of all cases, 47.1% of which were of the gangrenous type. Metabolic acidosis was the most frequent alteration. Diagnosis was made by endoscopy in 94.1%. Mortality in the IC group was 18.2%, increasing to 62.5% in the gangrenous subgroup. Identified risk factors were ruptured aneurysm, previous colonic surgery, emergency surgery, and hemodynamic instability.
Conclusions: Ischemic colitis is most frequent after emergency surgery for ruptured aneurysm in the hemodynamically unstable patient with retroperitoneal hematoma. A high index of suspicion for IC must be maintained in all patients undergoing aortic surgical procedures to allow early detection and adequate treatment.
abstract_id: PUBMED:19238863
Diagnostic accuracy of sigmoidoscopy compared with histology for ischemic colitis after aortic aneurysm repair. Clinically relevant ischemic colitis (IC), causing diarrhea, systemic involvement, colon necrosis, and, ultimately, death by multiple organ failure, affects only a small proportion of patients after aortic reconstruction, with reported incidences of 2.7 to 3.3%. The key to treating and saving patients with this complication remains early detection and consistent treatment. The aim of this retrospective analysis of prospectively collected data was to compare the diagnostic accuracy of colonoscopy for detecting postoperative IC with histology and to evaluate the interobserver difference between two experienced surgeons. One hundred patients with infrarenal aortic aneurysms, operated on electively from March 2001 to December 2003, who underwent sigmoidoscopy by two independent surgeons on postoperative days 3 to 6 together with histologic sampling of the sigmoid mucosa, were included in the study. Patients with previous colon resection or inflammatory bowel disease were excluded from the study. All patients gave written informed consent. The study was approved by the Institutional Review Board. Histologic examination of the sigmoid mucosa revealed IC in 13 patients. The combined sensitivity of both investigators for detecting IC by sigmoidoscopy compared with histology was 84%, the specificity was 92.0%, the positive predictive value was 61.1%, the negative predictive value was 97.6%, and the diagnostic accuracy was 91.0%. There was no statistically significant difference between investigator 1 and investigator 2 (p=1.0) and between both investigators and histology (p=.380). Histology remains the gold standard for detecting IC after aortic surgery. Sigmoidoscopy, however, is a valid diagnostic tool allowing immediate clinical decision making with a negative predictive value of more than 94% and a diagnostic accuracy of 92%.
abstract_id: PUBMED:32827052
Outcome analysis and risk factors for postoperative colonic ischaemia after aortic surgery. Purpose: Colonic ischaemia (CI) represents a serious complication after aortic surgery. This study aimed to analyse risk factors and outcome of patients suffering from postoperative CI.
Methods: Data of 1404 patients who underwent aortic surgery were retrospectively analysed regarding CI occurrence. Co-morbidities, procedural parameters, colon blood supply, procedure-related morbidity and mortality as well as survival during follow-up (FU) were compared with patients without CI using matched-pair analysis (1:3).
Results: Thirty-five patients (2.4%) with CI were identified. Cardiovascular, pulmonary and renal comorbidity were more common in CI patients. Operation time was longer (283 ± 22 vs. 188 ± 7 min, p < 0.0001) and blood loss was higher (2174 ± 396 vs. 1319 ± 108 ml, p = 0.0049) in the CI group. Patients with ruptured abdominal aortic aneurysm (AAA) showed a higher rate of CI compared to patients with intact AAA (5.4 vs. 1.9%, p = 0.0177). CI was predominantly diagnosed by endoscopy (26/35), generally within the first 4 postoperative days (20/35). Twenty-eight patients underwent surgery, all finalised with stoma creation. Postoperative bilateral occlusion and/or relevant stenosis of hypogastric arteries were more frequent in CI patients (57.8 vs. 20.8%, p = 0.0273). In-hospital mortality was increased in the CI group (26.7 vs. 2.9%, p < 0.0001). Survival was significantly reduced in CI patients (median: 28.2 months vs. 104.1 months, p < 0.0001).
Conclusion: CI after aortic surgery is associated with considerable perioperative sequelae and reduced survival. Especially in patients at risk, such as those with rAAA, complicated intraoperative course, severe cardiovascular morbidity and/or perioperative deterioration of the hypogastric perfusion, vigilant postoperative multimodal monitoring is required in order to initiate diagnosis and treatment.
abstract_id: PUBMED:18295515
Serum procalcitonin (PCT) as a negative screening test for colonic ischemia after open abdominal aortic surgery. Background And Aim: We assessed serum procalcitonin (PCT) as a screening test for early detection of ischemic colitis.
Patients And Methods: Ninety-three patients (81 men and 12 women) undergoing elective aortic surgery were enrolled in this study. Their mean age was 70.3 ± 8.1 years. Serum procalcitonin was measured postoperatively.
Results: Four patients suffered from colon ischemia. Based on a cut-off value of serum PCT ≥ 2.0 ng/mL, fourteen patients had a false-positive but none had a false-negative result. Sensitivity was 100%, and specificity was 83.9% in detecting ischemic colitis. The negative predictive value was 100%.
Conclusion: Serum PCT is a non-invasive test that has a high negative predictive value in ruling out colon ischemia after aortic surgery.
abstract_id: PUBMED:21724100
Prognostic factors of ischemic colitis after infrarenal aortic surgery. Background: Postoperative ischemic colitis (POIC) remains a frequent and extremely severe complication of infrarenal abdominal aortic surgery. However, there is no consensus on its diagnosis and treatment because the incidence is very low. The aim of this retrospective study was to evaluate the prognostic factors of severe colitis after infrarenal aortic surgery.
Materials And Methods: We analyzed the intraoperative and perioperative data of patients who, between 1998 and 2004, underwent infrarenal abdominal aortic surgery and presented with confirmed POIC. We defined two distinct groups: an acute colitis group (operated POIC, perioperative deaths, or evolution toward a colonic stenosis secondarily operated on) and a moderate colitis group (recovery without sequelae and no surgery). The main goal was to identify the prognostic factors of acute colitis. Using Student's t-test or Fisher's exact test, the potential prognostic factors were compared between these two groups.
Results: Between 1998 and 2004, 679 patients underwent infrarenal abdominal aortic surgery. Among these patients, 28 cases of POIC were confirmed: 20 patients had acute POIC and eight had moderate POIC. Demographic and intraoperative data were similar in the two groups. Among the 20 patients with acute POIC, 17 were operated on, with a postoperative mortality rate of 58.8%. All of these patients had at least left-sided colitis. In 59% of the cases, Hartmann's procedure was performed, with a mortality rate of 50%. Early digestive symptoms (p = 0.05), use of vasopressors (p = 0.0377), diagnosis in the intensive care unit (p = 0.0095), and a pH <7.35 on day 1 (p = 0.0261) were independently associated with acute ischemic colitis. Moreover, endoscopy played an important role in establishing both the diagnosis and the prognosis.
Conclusion: This study identified significant prognostic factors of severe colitis. These factors could help guide the decision to operate on POIC, especially to limit the impact of multiple organ failure syndrome (MOFS).
abstract_id: PUBMED:17168275
Ischaemic colitis after endovascular repair of an infrarenal abdominal aortic aneurysm: a case report. Introduction: Endovascular repair of abdominal aortic aneurysms (EVAR) has proven to be an attractive and successful alternative to traditional open surgery in properly selected patients. As in open surgery, ischaemic colitis remains a feared complication, but the incidence and causes are not properly documented.
Objective: To present a case of endovascular aneurysm repair complicated by postoperative unilateral graft limb occlusion and ischaemic colitis.
Case Report: A 76-year-old woman presented with diffuse abdominal pain in the presence of an infrarenal abdominal aortic aneurysm of 5.5 cm. Based on CT and calibrated angiography, the patient was selected for endovascular repair, which was performed with an Excluder bifurcation graft (W. L. Gore & Associates Inc., Newark, USA). The endograft was placed successfully, but completion angiography showed kinking of the left graft limb at the level of the aortic bifurcation. The patient developed acute ischaemia of the left limb 3 days postoperatively. Treatment consisted of a femorofemoral cross-over graft. One week postoperatively she developed diarrhoea. A sigmoidoscopy was performed, showing grade I ischaemic colitis. In spite of the initially good result of conservative therapy, she had to undergo a left hemicolectomy with manual colorectal anastomosis and protective ileostomy 25 days later. Despite this intervention, the patient developed multiple organ failure and died after 2 months.
Conclusion: Postoperative ischaemic colitis has been described extensively after open aneurysm repair. The incidence after endovascular repair is not well described. From 1998 to 2005, we performed 52 endovascular procedures with a bifurcation endoprosthesis for the treatment of an infrarenal abdominal aortic aneurysm. We report one patient out of this series who developed ischaemic colitis after the procedure. Possible causes include cholesterol embolization and intraoperative exclusion of the inferior mesenteric artery, the consequences of which might have been aggravated in our patient by subsequent thrombosis of the left graft limb.
Answer: Yes, endoscopy is useful for the early diagnosis of ischaemic colitis after aortic surgery. Several studies have shown that endoscopy, particularly rectosigmoidoscopy, can effectively identify colonic ischaemia following aortic reconstructive surgery. A prospective study found that rectosigmoidoscopy within 72 hours post-surgery could detect colonic ischaemia in 11.4% of patients, and it was suggested that early endoscopy should be performed routinely for patients with suspected impaired blood flow based on intraoperative assessment or presence of symptoms (PUBMED:9476191). Another study indicated that colonic ischaemia is often diagnosed by endoscopy, typically within the first four postoperative days, and that vigilant postoperative multimodal monitoring is required for at-risk patients to initiate timely diagnosis and treatment (PUBMED:32827052).
Furthermore, the diagnostic accuracy of sigmoidoscopy for detecting postoperative ischemic colitis was found to be high, with a combined sensitivity of 84%, specificity of 92.0%, and a diagnostic accuracy of 91.0% when compared with histology (PUBMED:19238863). The study also highlighted that sigmoidoscopy allows for immediate clinical decision-making with a high negative predictive value and diagnostic accuracy.
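As a side note for readers who want to see how such figures relate to an underlying 2 × 2 table, the short Python sketch below recomputes them. The counts used (TP = 11, FP = 7, FN = 2, TN = 80) are an assumed reconstruction that is merely consistent with the 13 histology-positive cases and the metrics reported in PUBMED:19238863; they are not stated explicitly in that study.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy metrics."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    ppv = tp / (tp + fp)                        # positive predictive value
    npv = tn / (tn + fn)                        # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

# Assumed reconstruction: 11 true positives, 7 false positives, 2 false negatives, 80 true negatives.
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=11, fp=7, fn=2, tn=80)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, PPV {ppv:.1%}, "
      f"NPV {npv:.1%}, accuracy {acc:.1%}")
# Prints values close to the reported 84%, 92.0%, 61.1%, 97.6% and 91.0%.
```

The same helper can be reused to sanity-check the screening figures quoted for the other diagnostic studies in this document.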
In addition, the utility of sigmoid intramucosal pH has been demonstrated as a non-invasive and highly useful method for the early diagnosis of ischemic colitis after aortic surgery (PUBMED:12136459). This suggests that endoscopic techniques, along with other diagnostic tools, can play a significant role in the early detection of ischemic colitis, which is crucial given the high mortality rates associated with severe colon ischaemia after aortic reconstruction (PUBMED:16376117).
Overall, the evidence supports the use of endoscopy as a valuable tool for the early diagnosis of ischaemic colitis in patients who have undergone aortic surgery, which can lead to better management and outcomes for these patients. |
Instruction: Does left atrial appendage closure improve the success of pulmonary vein isolation?
Abstracts:
abstract_id: PUBMED:34557681
Practical Applications of Concomitant Pulmonary Vein Isolation and Left Atrial Appendix Closure Device Implantation. Pulmonary vein isolation (PVI) using a cryoballoon causes acute tissue edema of the ostial region of the pulmonary veins and the left atrium. In two cases, combining PVI with implantation of a left atrial appendage closure device led to malsizing of the device, device shouldering, and a paraprosthetic residual flow. (Level of Difficulty: Advanced.).
abstract_id: PUBMED:26133284
Does left atrial appendage closure improve the success of pulmonary vein isolation? Results of a randomized clinical trial. Purpose: The combination of left atrial appendage (LAA) occlusion with pulmonary vein isolation (PVI) potentially represents a comprehensive treatment for atrial fibrillation (AF), controlling symptoms while at the same time reducing the risk of stroke and the need for chronic anticoagulation. The aim of this randomized clinical trial was to assess the impact of LAA closure added to PVI in patients with high-risk AF.
Methods: Patients with a history of symptomatic paroxysmal or persistent AF refractory to ≥ 2 antiarrhythmic drugs, CHA2DS2-VASc score ≥ 2, and HAS-BLED score ≥ 3 were randomized to PVI-only (n = 44) or PVI with LAA closure (n = 45).
Results: Six patients in the PVI + LAA closure group crossed over to the PVI-only group due to failure of LAA closure device implantation. On-treatment comparisons at the 24-month follow-up revealed that 33 (66%) of the 50 patients in the PVI-only group and 23 (59%) of the 39 patients in the PVI + LAA closure group were AF-free on no antiarrhythmic drugs (p = 0.34). The PVI + LAA closure treatment was significantly associated with a higher AF burden during the blanking period: 9.7 ± 10.8 vs 4.2 ± 4.1% (p = 0.004). At the end of follow-up, there were no serious complications and no strokes or thromboembolic events in either group.
Conclusions: The combination of LAA closure device implantation with PVI was safe but was not observed to influence the success of PVI in patients with symptomatic refractory AF. Early AF after ablation, however, is increased by LAA closure.
Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier: NCT01695824.
abstract_id: PUBMED:35647151
Cryoballoon pulmonary vein isolation and left atrial appendage occlusion prior to atrial septal defect closure: A case report. Background: In patients who suffer from both atrial fibrillation (AF) and atrial septal defect (ASD), cryoballoon pulmonary vein isolation (PVI), sequential left atrial appendage (LAA) occlusion and ASD closure could be a strategy for effective prevention of stroke and right heart failure.
Case Summary: A 65-year-old man was admitted to our institution due to recurrent episodes of palpitations and shortness of breath for 2 years, which had been worsening over the last 48 h. He had a history of AF, ASD, coronary heart disease with stent implantation, and diabetes. Physical and laboratory examinations showed no abnormalities. The CHA2DS2-VASc score was 3, and the HAS-BLED score was 1. Echocardiography revealed a 25-mm secundum ASD. Pulmonary vein (PV) and LAA anatomy were assessed by cardiac computed tomography. PV mapping with a 10-pole Lasso catheter was performed following ablation of all four PVs, with complete PVI. Following the cryoballoon PVI, the patient underwent LAA occlusion under transesophageal echocardiographic monitoring. Lastly, a 34-mm JIYI ASD occluder device was implanted. A follow-up transesophageal echocardiography at 3 months showed proper position of both devices, and neither thrombus nor leakage was found.
Conclusion: Sequential cryoballoon PVI and LAA occlusion prior to ASD closure can be performed safely in AF patients with ASD.
abstract_id: PUBMED:34317353
Late Presentation of Pulmonary Artery-Left Atrial Appendage Fistula Formation After Left Atrial Appendage Device Closure. Atrial fibrillation is the most common arrhythmia in clinical practice with indication for anticoagulation in those patients whose annual risk for thromboembolism is >2%. Left atrial appendage closure is growing as an alternative to anticoagulation. We present a case of pulmonary artery-left atrial appendage fistula seen after left atrial appendage closure. (Level of Difficulty: Intermediate.).
abstract_id: PUBMED:32743641
Management of thrombus formation after electrical isolation of the left atrial appendage in patients with atrial fibrillation. Aims: Left atrial appendage (LAA) electrical isolation (LAAEI) in addition to pulmonary vein isolation is an emerging catheter-based therapy to treat symptomatic atrial fibrillation. Previous studies found high incidences of LAA thrombus formation after LAAEI. This study sought to analyse therapeutic strategies aiming at the resolution of LAA thrombi and prevention of thromboembolism.
Methods And Results: Left atrial appendage electrical isolation was conducted via creation of left atrial linear lesions or cryoballoon ablation. Follow-up including transoesophageal echocardiography was conducted. In patients with LAA thrombus, oral anticoagulation (OAC) was adjusted until thrombus resolution was documented. Percutaneous LAA closure (LAAC) with the use of a cerebral protection device was conducted in cases of medically refractory LAA thrombi. Left atrial appendage thrombus was documented in 54 of 239 analysed patients who had undergone LAAEI. Thrombus resolution was documented in 39/51 patients (72.2%) with available follow-up after adjustment of OAC. Twenty-nine patients underwent LAAC and 10 patients were kept on OAC after LAAEI. No thromboembolic events or further LAA thrombi were documented after 553 ± 443 days of follow-up in these patients. Persistent LAA thrombi despite adjustment of OAC were documented in 12/51 patients. One patient remained on OAC until the end of follow-up, while LAAC with a cerebral protection device was performed in 11 patients in the presence of LAA thrombus, without complications.
Conclusion: Left atrial appendage thrombus formation is common after LAAEI. Adjustment of OAC leads to LAA thrombus resolution in most patients. Left atrial appendage closure in the presence of LAA thrombi might be a feasible option in case of failed medical treatment.
abstract_id: PUBMED:35242821
Abutting Left Atrial Appendage and Left Superior Pulmonary Vein Predicts Recurrence of Atrial Fibrillation After Point-by-Point Pulmonary Vein Isolation. Introduction: The role of the spatial relationship between the left superior pulmonary vein (LSPV) and left atrial appendage (LAA) is unknown. We sought to evaluate whether an abutting LAA and LSPV play a role in AF recurrence after catheter ablation for paroxysmal AF.
Methods: Consecutive patients, who underwent initial point-by-point radiofrequency catheter ablation for paroxysmal AF at the Heart and Vascular Center of Semmelweis University, Budapest, Hungary, between January of 2014 and December of 2017, were enrolled in the study. All patients underwent pre-procedural cardiac CT to assess left atrial (LA) and pulmonary vein (PV) anatomy. Abutting LAA-LSPV was defined as cases when the minimum distance between the LSPV and LAA was less than 2 mm.
Results: We included 428 patients (60.7 ± 10.8 years, 35.5% female) in the analysis. AF recurrence rate was 33.4%, with a median recurrence-free time of 21.2 (8.8-43.0) months. In the univariable analysis, female sex (HR = 1.45; 95%CI = 1.04-2.01; p = 0.028), LAA flow velocity (HR = 1.01; 95%CI = 1.00-1.02; p = 0.022), LAA orifice area (HR = 1.00; 95%CI = 1.00-1.00; p = 0.028) and abutting LAA-LSPV (HR = 1.53; 95%CI = 1.09-2.14; p = 0.013) were associated with AF recurrence. In the multivariable analysis, abutting LAA-LSPV (adjusted HR = 1.55; 95%CI = 1.04-2.31; p = 0.030) was the only independent predictor of AF recurrence.
Conclusion: Abutting LAA-LSPV predisposes patients to have a higher chance for arrhythmia recurrence after catheter ablation for paroxysmal AF.
abstract_id: PUBMED:36636509
Right Atrial Appendage Thrombus in a Patient Undergoing Thoracoscopic Left Atrial Appendectomy for Atrial Fibrillation. Left atrial appendage (LAA) closure may prevent atrial fibrillation (AF)-induced thromboembolism. We describe a rare case of right atrial (RA) thrombus after thoracoscopic left atrial appendectomy and pulmonary vein isolation. Careful evaluation for the presence of RA thrombus in patients with persistent AF after LAA occlusion may be necessary. (Level of Difficulty: Intermediate.).
abstract_id: PUBMED:37693855
Safety and Efficacy of Cryoballoon Pulmonary Vein Isolation and Left Atrial Appendage Closure Combined Procedure and Half-Dose Rivaroxaban After Operation in Elderly Patients with Atrial Fibrillation. Background: To investigate the safety and effectiveness of cryo-balloon pulmonary vein isolation (PVI) and left atrial appendage closure (LAAC) combined procedure and half-dose rivaroxaban after operation in elderly patients with atrial fibrillation (AF).
Patients And Methods: A total of 203 AF patients presented for cryo-balloon PVI, and LAAC combined procedure was included from 2019 to 2021. Postoperative patients were anticoagulated with rivaroxaban with/without clopidogrel for 60 days, with oral rivaroxaban of 10 mg in the elderly group and 20 mg in the non-elderly group. Patients with AF ≥80 and <80 years were considered elderly and non-elderly groups, respectively. Scheduled follow-ups and transesophageal echocardiography were used to assess peri- and post-procedural safety and effectiveness.
Results: A total of 203 patients underwent the combined procedure, 83 in the elderly and 120 in the non-elderly groups. All patients successfully obtained PVI and satisfactory LAAC. During the perioperative period, one patient had puncture complications in the elderly group and one with thrombosis in the non-elderly group. Oral rivaroxaban was administered to 83.2% and 75% of patients in the elderly and non-elderly groups, respectively, and rivaroxaban was combined with clopidogrel anticoagulation in the remaining patients. The annual rates of composite clinical events were 8.4% and 9.2% in the elderly and non-elderly groups, respectively, with no statistically significant difference. Patients in both groups had complete sealing, and there was no displacement of devices, death and peripheral arterial thrombosis. Recurrence of AF occurred in 25 and 32 patients in the elderly and non-elderly groups, respectively, with no statistically significant difference. Besides, the two groups had no statistically significant difference in cerebral infarction/transient ischemic attack and device-related thrombosis (p > 0.05).
Conclusion: This study suggests that cryo-balloon PVI and LAAC combined procedure and half-dose rivaroxaban after the operation is safe and effective in treating elderly patients with AF.
abstract_id: PUBMED:33334448
Impact of Left Atrial Appendage Closure on LAA Thrombus Formation and Thromboembolism After LAA Isolation. Objectives: This study sought to evaluate the safety and effectiveness of electrical isolation of the left atrial appendage (LAAEI) as well as the status of left atrial appendage closure (LAAC) in these patients.
Background: Catheter-based LAAEI is increasingly performed for treatment of symptomatic atrial fibrillation and pulmonary vein isolation nonresponders. Previous studies indicate an increased incidence of thromboembolic events after LAAEI despite effective oral anticoagulation. Interventional LAAC may prevent cardioembolic events after LAAEI but data regarding safety, feasibility, and efficacy of LAAC in this clinical setting are scarce.
Methods: Consecutive patients who underwent LAAEI at 2 German tertiary care hospitals were analyzed.
Results: A total of 270 patients underwent LAAEI by radiofrequency ablation in 255 (94.4%), cryoballoon ablation in 12 (4.4%), and by a combination of both techniques in 3 cases (1.1%). Stroke or transient ischemic attack occurred in 24 of 244 (9.8%) individuals with available follow-up. LAA thrombus formation was found in 53 patients (19.6%). A total of 150 patients underwent LAAC after LAAEI. No LAA thrombus was documented in any patient who underwent LAAC. Of the patients who underwent LAAEI, 67.6% were in sinus rhythm after a mean of 682.7 ± 61.7 days. LAA flow after LAAEI but not arrhythmia recurrence was identified as an independent predictor of stroke and/or transient ischemic attack or LAA thrombus (p < 0.0001).
Conclusions: Sinus rhythm was documented in about two-third of patients undergoing LAAEI as treatment of therapy refractory atrial arrhythmias. LAAC potentially prevents LAA thrombus formation and thromboembolism.
abstract_id: PUBMED:30573121
Combination of Left Atrial Appendage Isolation and Ligation to Treat Nonresponders of Pulmonary Vein Isolation. Objectives: This study investigated the outcome of wide-area left atrial appendage isolation (WLAAI) and subsequent LAA ligation in patients with recurrent atrial arrhythmias after pulmonary vein isolation (PVI).
Background: LAA isolation and ligation may improve rhythm control and prevent LAA thrombus formation in patients with atrial fibrillation who do not respond to PVI.
Methods: Patients (n = 31, mean age: 69.7 ± 7.8 years, 18 men) with arrhythmia recurrence after established PVI undergoing WLAAI with subsequent LAA ligation (LARIAT+ device) were studied. The incidence of arrhythmia recurrence, intracardiac thrombus formation, thromboembolic events, as well as changes in P-wave duration and P-wave dispersion were assessed.
Results: All 31 patients underwent successful WLAAI, and successful LAA ligation was performed in 27 patients (87%). Over a median follow-up of 498 (interquartile range: 159 to 791) days, post-ligation arrhythmia recurrence was documented in 8 patients (26%). Kaplan-Meier estimate of 24-month arrhythmia-free survival after WLAAI/ligation was 69.7% (95% confidence interval: 53.9 to 90.1). Following WLAAI, LAA thrombus formation was seen in 11 patients (35.5%), but in no patient after LAA ligation. WLAAI/ligation significantly reduced P-wave duration (from 93 ± 20 ms to 72 ± 20 ms; p = 0.001) and P-wave dispersion (from 63 ± 37 ms to 38 ± 16 ms; p = 0.001).
Conclusions: WLAAI and subsequent LAA ligation in PVI nonresponders led to an estimated freedom from arrhythmia recurrence in 70% of the patients at 24 months. LAA ligation successfully prevented recurrence of cardiac thrombus formation in patients with WLAAI. Significant decreases in P-wave duration and P-wave dispersion occurred with WLAAI/ligation, suggesting favorable electrical remodelling.
Answer: The combination of left atrial appendage (LAA) closure with pulmonary vein isolation (PVI) has been explored as a comprehensive treatment strategy for atrial fibrillation (AF), aiming to control symptoms while reducing the risk of stroke and the need for chronic anticoagulation. However, the impact of LAA closure on the success of PVI is not straightforward.
A randomized clinical trial assessed the impact of adding LAA closure to PVI in patients with high-risk AF. The results showed that the combination of LAA closure device implantation with PVI was safe but did not influence the success of PVI in preventing AF recurrence. In fact, the study found that the PVI + LAA closure treatment was associated with a higher AF burden during the blanking period, although there were no serious complications, strokes, or thromboembolic events in either group at the end of the follow-up period (PUBMED:26133284).
Other studies have reported on the safety and efficacy of combining these procedures. For instance, a case report described a successful sequential cryoballoon PVI and LAA occlusion prior to atrial septal defect closure in a patient with AF, suggesting that this approach can be performed safely (PUBMED:35647151). Another study found that a combined procedure of cryoballoon PVI and LAA closure followed by half-dose rivaroxaban was safe and effective in elderly patients with AF (PUBMED:37693855).
However, there are also reports of complications associated with LAA closure, such as malsizing of the device, device shouldering, and paraprosthetic residual flow when PVI using cryoballoon causes acute tissue edema (PUBMED:34557681). Additionally, a case of pulmonary artery-left atrial appendage fistula formation after LAA device closure has been documented (PUBMED:34317353).
In summary, while LAA closure combined with PVI is generally safe, current evidence from a randomized clinical trial does not support an improvement in the success of PVI attributable to LAA closure (PUBMED:26133284). Further research may be needed to fully understand the potential benefits and risks of this combined approach. |
Instruction: Digital photoplethysmography in the diagnosis of suspected lower limb DVT: is it useful?
Abstracts:
abstract_id: PUBMED:10388643
Digital photoplethysmography in the diagnosis of suspected lower limb DVT: is it useful? Objective: to determine the role of digital photoplethysmography (D-PPG) in the diagnosis of deep-vein thrombosis (DVT), in comparison to the "gold standard" of either contrast ascending venography (ACV) or colour-flow duplex imaging (CFDI).
Method: prospective study of 100 hospital inpatients (103 legs) referred to the X-ray department for ACV or CFDI with clinically suspected lower limb DVT in a district general hospital. Each patient was assessed by either ACV or CFDI, and D-PPG.
Results: thirty-seven limbs were found to have DVT as demonstrated by ACV or CFDI. All patients with a venous refilling time (RT) of greater than 20 s and venous pump (VP) of greater than 35 had a normal ACV or CFDI. Using RT of less than 21 s as the optimal cut-off point, D-PPG achieved a sensitivity of 100%, negative-predictive value of 100%, specificity of 47% and positive-predictive value of 51%. By using VP of less than 36 as the optimal cut-off point, a sensitivity of 100%, a negative-predictive value of 100%, a specificity of 35% and positive-predictive value of 46% were achieved.
Conclusions: these results validate the use of portable D-PPG as a useful screening tool for the diagnosis of clinically suspected lower limb DVT. A positive test requires further confirmation by one of the "gold standard" methods, whereas a negative test effectively excludes DVT.
abstract_id: PUBMED:9213729
Digital photoplethysmography and digital "strain-gauge" plethysmography in the differential diagnosis of edema. Photoplethysmography is a noninvasive method for fast diagnosis of blood flow disturbances in the venous system of the lower limbs. It is suitable for discriminating normal from pathological venous findings. The technique has been evaluated in three groups of subjects with edema of the lower extremities. The value of this screening method in the diagnosis of vascular disorders and edema of the lower limbs is discussed. Venous occlusion plethysmography is used for discriminatory and quantitative evaluation of venous function. The diagnostic value of strain gauge plethysmography was assessed in three groups of subjects. Measured parameters were venous capacity and venous outflow. Digital photoplethysmography is suitable for the discrimination of venous edema of the lower limbs from edema caused by other factors, and digital "strain gauge" plethysmography for the discrimination of edema of the lower limbs caused by phlebothrombosis from edema caused by other factors.
abstract_id: PUBMED:18821414
Effect of postural changes on lower limb blood volume, detected with non-invasive photoplethysmography. This paper describes the effect of passive leg raising on blood volume change in the lower limb, using a dual probe photoplethysmography (PPG) system employing a tissue optics model. The normalized AC/DC ratio and DC value are introduced from the model to evaluate the dynamic pulsation and total blood volume changes due to postural effects. The AC and DC components of PPG signals were collected from a passive leg raising protocol. With the leg raised, the normalized AC/DC ratio significantly decreased when supine, while the normalized DC value increased significantly in both supine and reclining positions. The parameters from the stationary leg showed similar but smaller responses. These results demonstrate a local and systemic physiological phenomenon in the lower limb blood volume change caused by postural changes. The normalized AC/DC ratio and DC value derived from the tissue optics model could be applied to assess the blood volume change.
abstract_id: PUBMED:16088070
Photoplethysmography detection of lower limb peripheral arterial occlusive disease: a comparison of pulse timing, amplitude and shape characteristics. The assessment and diagnosis of lower limb peripheral arterial occlusive disease (PAOD) is important since it can lead progressively to disabling claudication, ischaemic rest pain and gangrene. Historically, the first assessment has been palpation of the peripheral pulse since it can become damped, delayed and diminished with disease. In this study we investigated the clinical value of objective photoplethysmography (PPG) pulse measurements collected simultaneously from the right and left great toes to diagnose disease in the lower limbs. In total, 63 healthy subjects and 44 patients with suspected lower limb disease were studied. Pulse wave analysis techniques extracted timing, amplitude and shape characteristics for both toes and for right-to-left toe differences. Normative ranges of pulse characteristics were then calculated for the healthy subject group. The relative diagnostic values of the different pulse features for detecting lower limb arterial disease were determined, referenced to the established ankle-brachial pressure index (ABPI) measurement. The ranges of pulse characteristics and degree of bilateral similarity in healthy subjects were established, and the degrees of pulse delay, amplitude reduction, and damping and bilateral asymmetry were quantified for different grades of disease. When pulse timing, amplitude and shape features were ranked in order of diagnostic performance, the shape index (SI) gave substantial agreement with ABPI (>90% accuracy, kappa 0.75). SI also detected higher grade disease, for legs with an ABPI less than 0.5, with a sensitivity of 100%. The simple-to-calculate timing differences between pulse peaks produced a diagnostic accuracy of 88% for all grades of arterial disease (kappa 0.70), and 93% for higher grade disease (kappa 0.77). These contrasted with the limited discriminatory value of PPG pulse amplitude. The low-cost and simplicity of this optical-based technology could offer significant benefits to healthcare, such as in primary care where non-invasive, accurate and simple-to-use (de-skilled) diagnostic techniques are desirable.
abstract_id: PUBMED:8472150
Photoplethysmography in the diagnosis of superficial venous valvular incompetence. Photoplethysmography was compared with clinical investigation combined with Doppler ultrasonography in the diagnosis of superficial venous valvular incompetence of the lower limb. In 268 consecutive patients, 536 limbs were investigated. A total of 22.1 per cent of the photoplethysmographic investigations were uninterpretable because they did not allow reliable determination of the refilling time. Agreement between clinical investigation combined with Doppler ultrasonography and photoplethysmography was found to be poor (kappa = 0.30). These results suggest that photoplethysmography is not the non-invasive method of choice for routine evaluation of superficial venous valvular incompetence of the leg.
abstract_id: PUBMED:16476612
Digital venous photoplethysmography in the seated position is a reproducible noninvasive measure of lower limb venous function in patients with isolated superficial venous reflux. Background: The value of photoplethysmography (PPG) has been questioned because of a lack of reproducibility. We performed this study to determine whether new digital technology has improved the reproducibility of PPG in the noninvasive assessment of lower limb venous function in patients with isolated superficial venous reflux.
Methods: This was a prospective study of 140 legs in 110 patients (65% female; median age [interquartile range], 45 years [36-59.25 years]; CEAP clinical grade C2/3, n = 114; C4-6, n = 26) who underwent repeated digital PPG measurements of refilling time (RT) in both the sitting and standing position after standard exercise regimens by a single observer. RT was measured in all patients 2 to 5 minutes apart and in a randomly selected subgroup of 30 patients (38 limbs) 1 to 2 weeks apart. RT variability was assessed by using Bland and Altman's coefficient of repeatability (CR-RT), where the CR-RT was 1.96 times the standard deviation of the mean difference in RT between two tests. Venous duplex scanning of both the deep and superficial veins was also performed, and a reverse flow of greater than 0.5 seconds was considered abnormal. Only patients with isolated superficial venous reflux were included in the study.
Results: The CR-RT of the tests on 140 limbs performed 2 to 5 minutes apart was 10 seconds overall, 3 seconds for RT up to 10 seconds, and 16 seconds for RT between 20 and 40 seconds. The CR-RT of the 38 tests performed 1 to 2 weeks apart was also 10 seconds. No systematic variation due to a nonrandom error was found between the measurements performed either 2 to 5 minutes or 1 to 2 weeks apart.
Conclusions: Digital PPG performed in the seated position in patients with isolated superficial venous reflux provides a reproducible method for the noninvasive assessment of lower limb venous function for both clinical and research purposes. However, the variation in precision of RT with the magnitude of the measurement must be taken into account when results are interpreted in individual patients.
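The coefficient of repeatability used in this study can be reproduced in a few lines of code. The sketch below follows the definition given in the abstract (1.96 times the standard deviation of the between-test differences in RT); the paired refilling times are hypothetical and serve only to illustrate the calculation.

```python
import numpy as np

def coefficient_of_repeatability(first, second):
    """Bland-Altman coefficient of repeatability: 1.96 x SD of the paired differences."""
    diffs = np.asarray(first, dtype=float) - np.asarray(second, dtype=float)
    return 1.96 * diffs.std(ddof=1)

# Hypothetical paired refilling times (seconds), measured a few minutes apart.
rt_first = [12.0, 25.0, 8.0, 30.0, 18.0, 22.0]
rt_second = [14.0, 21.0, 9.0, 34.0, 16.0, 24.0]
print(f"CR-RT = {coefficient_of_repeatability(rt_first, rt_second):.1f} s")
```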
abstract_id: PUBMED:20674063
Assessment of bilateral photoplethysmography for lower limb peripheral vascular occlusive disease using color relation analysis classifier. This paper proposes the assessment of bilateral photoplethysmography (PPG) for lower limb peripheral vascular occlusive disease (PVOD) using a color relation analysis (CRA) classifier. PPG signals are non-invasively recorded from the right and left sides at the big toe sites. With the time-domain technique, the right-to-left side difference is studied by comparing the subject's PPG data. The absolute bilateral differences construct various diminishing and damping patterns. These difference patterns in amplitude and shape distortion relate to the grades of PVOD, including the normal condition, lower-grade disease, and higher-grade disease. A CRA classifier is used to recognize the various patterns for PVOD assessment. Its concept is derived from the HSV color model and uses the hue, saturation, and value to depict the disease grades using the natural primary colors of red, green, and blue. PPG signals are obtained from 21 subjects aged 24-65 years using an optical measurement technique. The proposed CRA classifier is tested using the physiological measurements, and the tests reveal its practicality for monitoring PPG signals.
abstract_id: PUBMED:7600021
Photoplethysmography in the diagnosis of venous disease. Background: Photoplethysmography (PPG) has become a widely used method in the diagnosis of venous disease.
Objective: To describe briefly various aspects of PPG and its practical value.
Methods: Standard and quantitative PPG are described as well as various evaluation procedures.
Results: The refilling time is considered the most useful parameter; values higher than 22 seconds are considered normal. Quantitative PPG systems (calibrated or digital PPG) also permit evaluation of the drainage function.
Conclusion: While refilling time is presently the only quantitative parameter, it is expected that by introducing calibrated or digital PPG in diagnostic practice, venous function will be evaluated in a more comprehensive and quantitative form.
abstract_id: PUBMED:37233165
Radiation Exposure Reduction by Digital Variance Angiography in Lower Limb Angiography: A Randomized Controlled Trial. Background: digital variance angiography (DVA) provides higher image quality than digital subtraction angiography (DSA). This study investigates whether the quality reserve of DVA allows for radiation dose reduction during lower limb angiography (LLA), and compares the performance of two DVA algorithms.
Methods: this prospective block-randomized controlled study enrolled 114 peripheral arterial disease patients undergoing LLA into normal-dose (ND, 1.2 µGy/frame, n = 57) or low-dose (LD, 0.36 µGy/frame, n = 57) groups. DSA images were generated in both groups, while DVA1 and DVA2 images were generated in the LD group. Total and DSA-related radiation dose area products (DAP) were analyzed. Image quality was assessed on a 5-grade Likert scale by six readers.
Results: the total and DSA-related DAP were reduced by 38% and 61% in the LD group. The overall visual evaluation scores (median (IQR)) of LD-DSA (3.50 (1.17)) were significantly lower than the ND-DSA scores (3.83 (1.00), p < 0.001). There was no difference between ND-DSA and LD-DVA1 (3.83 (1.17)), but the LD-DVA2 scores were significantly higher (4.00 (0.83), p < 0.01). The difference between LD-DVA2 and LD-DVA1 was also significant (p < 0.001).
Conclusions: DVA significantly reduced the total and DSA-related radiation dose in LLA, without affecting the image quality. LD-DVA2 images outperformed LD-DVA1, therefore DVA2 might be especially beneficial in lower limb interventions.
abstract_id: PUBMED:32158896
An effective digital tourniquet utilising a fenestrated limb tourniquet. We describe a safe and effective digital tourniquet utilising a fenestrated limb tourniquet which would normally be used for venepuncture.
Answer: Digital photoplethysmography (D-PPG) has been studied for its utility in the diagnosis of suspected lower limb deep-vein thrombosis (DVT). A prospective study involving 100 hospital inpatients (103 legs) referred for either contrast ascending venography (ACV) or colour-flow duplex imaging (CFDI) also assessed each patient with D-PPG. The results showed that D-PPG had a sensitivity and negative-predictive value of 100% when using venous refilling time (RT) of less than 21 seconds or venous pump (VP) of less than 36 as the optimal cut-off points. However, the specificity and positive-predictive value were lower, at 47% and 51% for RT, and 35% and 46% for VP, respectively. These findings suggest that D-PPG can be a useful screening tool for lower limb DVT, as a negative test effectively excludes DVT, but a positive test requires further confirmation by more definitive methods (PUBMED:10388643).
In the context of differentiating the cause of edema in the lower limbs, digital photoplethysmography has been shown to be suitable for discriminating venous edema from edema caused by other factors (PUBMED:9213729). Additionally, photoplethysmography has been used to detect changes in lower limb blood volume due to postural changes (PUBMED:18821414), and to assess lower limb peripheral arterial occlusive disease (PAOD) by analyzing pulse timing, amplitude, and shape characteristics (PUBMED:16088070).
However, the utility of photoplethysmography in diagnosing superficial venous valvular incompetence has been questioned due to poor agreement with clinical investigation combined with Doppler ultrasonography (PUBMED:8472150). On the other hand, digital venous photoplethysmography has been found to be a reproducible noninvasive measure of lower limb venous function in patients with isolated superficial venous reflux (PUBMED:16476612).
In summary, digital photoplethysmography has demonstrated utility as a screening tool for lower limb DVT, with high sensitivity and negative predictive value, making it useful in ruling out the condition. However, its specificity is not as high, and positive results should be confirmed with more definitive diagnostic methods. |
Instruction: Does the use of fentanyl in epidural solutions for postthoracotomy pain management in neonates affect surgical outcome?
Abstracts:
abstract_id: PUBMED:16034755
Does the use of fentanyl in epidural solutions for postthoracotomy pain management in neonates affect surgical outcome? Background/purpose: Continuous epidural analgesia is routinely used to manage pain in infants undergoing resection of a congenital cystic adenomatoid malformation (CCAM) of the lung. Our aim was to determine if there is a difference in the length of stay (LOS), supplemental analgesic requirements, pain control, and the incidence of adverse respiratory events in infants receiving the 2 standard epidural solutions commonly used: bupivacaine 0.1% and bupivacaine 0.1% with fentanyl 2 to 5 microg/mL.
Methods: We retrospectively reviewed the charts of infants who received epidural infusions containing bupivacaine 0.1% (n = 18) and bupivacaine 0.1% with fentanyl 2 to 5 microg/mL (n = 10) after CCAM resection during a 12-month period. LOS, rescue opioid, and nonopioid analgesic use, incidence of respiratory depression, and pain scores were recorded.
Results: The LOS in patients receiving fentanyl in their epidural solution was 1 day longer than those receiving plain bupivacaine (median 4 vs 3 days, respectively). Nonopioid analgesic and rescue opioid use was greater in patients who did not have fentanyl in their epidural solutions. Pain ratings were not significantly different. The incidence of respiratory depression was greater in patients receiving epidural infusions containing fentanyl (50% vs 17%, respectively).
Conclusion: The addition of fentanyl to epidural infusions of bupivacaine in infants undergoing thoracotomy for resection of CCAM may prolong recovery and increase the incidence of adverse respiratory events without providing a significant analgesic benefit.
abstract_id: PUBMED:24385221
Preoperative ultrasound-guided suprascapular nerve block for postthoracotomy shoulder pain. Background: Acute postthoracotomy pain is a well-known potential problem, associated with pulmonary complications, ineffective respiratory rehabilitation, and delayed mobilization in the initial postoperative period, and it can be followed by chronic pain. The type of thoracotomy, intercostal nerve damage, muscle retraction, costal fractures, pleural irritation, and the incision scar are the mechanisms most often responsible.
Objective: Our aim was to assess whether preoperative ultrasound suprascapular nerve block with thoracic epidural analgesia was effective for postthoracotomy shoulder pain relief.
Methods: Thirty-six American Society of Anesthesiologists physical status I-III patients (2011-2012), with a diagnosis of lung cancer and scheduled for elective open-lung surgery, were prospectively included in the study. Eighteen of the patients received an ultrasound-guided suprascapular nerve block with 10-mL 0.5% levobupivacaine, using a 22-gauge spinal needle, 1 hour before the operation (group S); the 18 other patients had thoracic epidural analgesia only, and no nerve block was performed. Standard general anesthesia was administered. Degree of shoulder pain was assessed by a blinded observer when discharging patients from the recovery room, and thereafter at 1, 3, 6, 12, 24, 36, 48, and 72 hours on infusion at rest, and at 12, 24, 36, 48, and 72 hours on coughing. The same blinded observer also recorded the total amount of epidural levobupivacaine and fentanyl used by the 2 groups.
Results: In the suprascapular block group, the total amount of levobupivacaine (P = 0.0001) and fentanyl (P = 0.005) used postoperatively was statistically lower than in the epidural group. Visual analogue scale measurements in the suprascapular group were statistically significantly lower at 0, 1, 3, 6, 12, 24, 36, and 48 hours than those in the epidural group, both at rest and coughing.
Conclusion: Postthoracotomy shoulder pain reduces patient function and postsurgical rehabilitation potential after thoracotomy, and various studies have been conducted to explain the etiology and management of postthoracotomy shoulder pain. Theories of the etiology involve either a musculoskeletal origin or referred pain. In this study, we concluded that preoperative ultrasound-guided suprascapular nerve block combined with thoracic epidural analgesia could achieve effective shoulder pain relief for 72 hours postoperatively, both at rest and on coughing.
abstract_id: PUBMED:12401624
A randomized, double-blinded comparison of thoracic epidural ropivacaine, ropivacaine/fentanyl, or bupivacaine/fentanyl for postthoracotomy analgesia. Unlabelled: Epidural ropivacaine has not been compared with bupivacaine for postthoracotomy analgesia. Eighty patients undergoing elective lung surgery were randomized in a double-blinded manner to receive one of three solutions for high thoracic epidural analgesia. A continuous epidural infusion of 0.1 mL. kg(-1). h(-1) of either 0.2% ropivacaine, 0.15% ropivacaine/fentanyl 5 micro g/mL, or 0.1% bupivacaine/fentanyl 5 micro g/mL was started at admission to the intensive care unit. We assessed pain scores (rest and spirometry), IV morphine consumption, spirometry, hand grip strength, PaCO(2), heart rate, blood pressure, respiratory rate, and side effects (sedation, nausea, vomiting, and pruritus) for 48 h. Thoracic epidural ropivacaine/fentanyl provided adequate pain relief similar to bupivacaine/fentanyl during the first 2 postoperative days after posterolateral thoracotomy. The use of plain 0.2% ropivacaine was associated with worse pain control during spirometry, larger consumption of IV morphine, and increased incidence of postoperative nausea and vomiting. Morphine requirements were larger in the ropivacaine group, with no differences between bupivacaine/fentanyl and ropivacaine/fentanyl groups. Patients in the ropivacaine group experienced more pain and performed worse in spirometry than patients who received epidural fentanyl. There was no significant difference in motor block. We conclude that epidural ropivacaine/fentanyl offers no clinical advantage compared with bupivacaine/fentanyl for postthoracotomy analgesia.
Implications: Thoracic epidural ropivacaine/fentanyl provided adequate pain relief and similar analgesia to bupivacaine/fentanyl during the first 2 postoperative days after posterolateral thoracotomy. Plain 0.2% ropivacaine was associated with worse pain control and an increased incidence of postoperative nausea and vomiting. We conclude that epidural ropivacaine/fentanyl offers no clinical advantage compared with bupivacaine/fentanyl for postthoracotomy analgesia.
abstract_id: PUBMED:9725371
Prospective, randomized comparison of extrapleural versus epidural analgesia for postthoracotomy pain. Background: Thoracic epidural analgesia is considered the method of choice for postthoracotomy analgesia, but it is not suitable for every patient and is associated with some risks and side effects. We therefore evaluated the effects of an extrapleural intercostal analgesia as an alternative to thoracic epidural analgesia.
Methods: In a prospective, randomized study, pain control, recovery of ventilatory function, and pulmonary complications were analyzed in patients undergoing elective lobectomy or bilobectomy. Two groups of 15 patients each were compared: one received a continuous extrapleural intercostal nerve blockade (T3 through T6) with bupivacaine through an indwelling catheter, while the other was administered a combination of local anesthetics (bupivacaine) and opioid analgesics (fentanyl) through a thoracic epidural catheter.
Results: Both techniques were safe and highly effective in terms of pain relief and recovery of postoperative pulmonary function. However, minor differences were observed that, together with practical benefits, would favor extrapleural intercostal analgesia.
Conclusions: These results led us to suggest that extrapleural intercostal analgesia might be a valuable alternative to thoracic epidural analgesia for pain control after thoracotomy and should particularly be considered in patients who do not qualify for thoracic epidural analgesia.
abstract_id: PUBMED:21178605
Preemptive low-dose epidural ketamine for preventing chronic postthoracotomy pain: a prospective, double-blinded, randomized, clinical trial. Objectives: Chronic postthoracotomy pain is the most common long-term complication that occurs after a thoracotomy with a reported incidence of up to 80%. Although thoracic epidural analgesia is a widely used method for managing acute postthoracotomy pain, its effects seems questionable. The objective of this prospective, double-blinded, randomized, controlled trial was to assess the effect of preemptive low-dose epidural ketamine in addition to preemptive thoracic epidural analgesia on the incidence of chronic postthoracotomy pain.
Methods: We analyzed 133 patients who were randomized to preemptive thoracic epidural analgesia either with or without ketamine (Group K: 0.12% levobupivacaine, 2 μg/mL of fentanyl, 0.2 mg/mL ketamine, total volume of 500 mL vs. Group KF: 0.12% levobupivacaine, 2 μg/mL of fentanyl, total volume of 500 mL). Pain at the thoracotomy scar site during rest and movement (coughing) was assessed at 2 weeks and 3 months after surgery using a visual analog scale. The incidence of allodynia and numbness was also evaluated.
Results: There was no difference in the incidence of chronic postthoracotomy pain at 3 months between the 2 groups (67.7% in group K vs. 75% in group KF). The incidences of allodynia or numbness were not different between the 2 groups.
Discussion: The addition of preemptive low-dose epidural ketamine (1.2 mg/h) to preemptive thoracic epidural analgesia did not have any beneficial effects in preventing chronic postthoracotomy pain.
abstract_id: PUBMED:8512397
Thoracic versus lumbar epidural fentanyl for postthoracotomy pain. Thirty patients were prospectively randomized to receive either thoracic or lumbar epidural fentanyl infusion for postthoracotomy pain. Epidural catheters were inserted, and placement was confirmed with local anesthetic testing before operation. General anesthesia consisted of nitrous oxide, oxygen, isoflurane, intravenous fentanyl citrate (5 micrograms/kg), and vecuronium bromide. Pain was measured by a visual analogue scale (0 = no pain to 10 = worst pain ever). Postoperatively, patients received epidural fentanyl in titrated doses every 15 minutes until the visual analogue scale score was less than 4 or until a maximum fentanyl dose of 150 micrograms by bolus and an infusion rate of 150 micrograms/h was reached. The visual analogue scale score of patients who received thoracic infusion decreased from 8.8 +/- 0.5 to 5.5 +/- 0.7 (p < or = 0.05) by 15 minutes and to 3.5 +/- 0.4 (p < or = 0.05) by 45 minutes. The corresponding values in the lumbar group were 8.8 +/- 0.6 to 7.8 +/- 0.7 at 15 minutes and 5.3 +/- 0.9 at 45 minutes (p < or = 0.05). The infusion rate needed to maintain a visual analogue scale score of less than 4 was lower in the thoracic group (1.55 +/- 0.13 micrograms.kg-1 x h-1) than in the lumbar group (2.06 +/- 0.19 microgram.kg-1 x h-1) during the first 4 hours after operation (p < or = 0.05). The epidural fentanyl infusion rates could be reduced at 4, 24, and 48 hours after operation without compromising pain relief. Four patients in the lumbar group required naloxone hydrochloride intravenously.(ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:24999216
Acute perioperative pain in neonates: An evidence-based review of neurophysiology and management. Current literature lacks systematic data on acute perioperative pain management in neonates and focuses mainly on procedural pain management. In the current review, the neurophysiological basis of neonatal pain perception and the role of different analgesic drugs and techniques in perioperative pain management in neonates are systematically reviewed. Intravenous opioids such as morphine or fentanyl, as either intermittent boluses or continuous infusion, remain the most common modality for the treatment of perioperative pain. Paracetamol has a promising role in decreasing opioid requirements. However, routine use of ketorolac or other nonsteroidal anti-inflammatory drugs is not usually recommended. Epidural analgesia is safe in experienced hands and provides several benefits over systemic opioids, such as early extubation and early return of bowel function.
abstract_id: PUBMED:8424508
A randomized double-blind comparison of epidural fentanyl infusion versus patient-controlled analgesia with morphine for postthoracotomy pain. The authors conducted a prospective, randomized, double-blind comparison of an epidural fentanyl infusion versus patient-controlled analgesia (PCA) with morphine in the management of postthoracotomy pain. Thirty-six patients were randomized into one of two groups. The epidural group received an epidural fentanyl infusion, 10 micrograms/mL, and saline through their PCA machine. The PCA group received an epidural saline infusion and morphine, 1.0 mg/mL, through their PCA device. The infusions were escalated according to a study protocol when pain relief was deemed inadequate by the patients. Pain relief was evaluated by a visual analog pain scale (VAS), both at rest and during coughing, and by verbal rating scores (VRS) of pain relief. Degree of sedation and the frequency of nausea, vomiting, and pruritus were also noted. The VAS, VRS, degree of sedation, and side effects were evaluated every 2 h from 7 AM to 7 PM, for 72 h after surgery. Forced vital capacities were determined before surgery and at 24, 48, and 72 h after surgery. The VAS were significantly lower (P = 0.001), and the Total Pain Relief scores higher (P < 0.02) in the epidural group, signifying better analgesia. There were no differences in postoperative forced vital capacity between the two groups. More patients in the PCA group had greater degrees of sedation on postoperative day 1 (P = 0.005), whereas pruritus was more frequent (P < 0.02) in the epidural group. We conclude that an epidural fentanyl infusion is superior to that of PCA with morphine in the management of pain after thoracotomy.
abstract_id: PUBMED:35431742
Comparison of the effectiveness of intravenous fentanyl versus caudal epidural in neonates undergoing tracheoesophageal fistula surgeries. Background: Caudal epidural has become an inseparable part of pediatric pain relief as it depresses the stress response better than any other form of analgesia, resulting in the reduction in the need for systemic opioids; in addition, it facilitates early recovery and promotes good postoperative respiratory functions.
Aim: To evaluate the effectiveness of epidural analgesia in neonates undergoing tracheoesophageal fistula repair in terms of requirement of perioperative fentanyl opioid, postoperative neonatal infant pain score (NIPS), on-table extubation, duration of intubation, reintubation, perioperative hemodynamic response, and any other side effects.
Materials And Methods: In a comparative, prospective, single-blind, randomized trial, 30 neonates scheduled for tracheoesophageal surgeries were randomly allocated to two groups: group I, neonates receiving a caudal epidural block with ropivacaine 0.2%, 1 mg/kg bolus followed by an infusion of 0.1 mg/kg/h; group II, neonates receiving initial intravenous (IV) fentanyl 1 μg/kg and maintenance with 0.5 μg/kg/h IV boluses.
Results: None of the neonates in group I received opioids. There were statistically significant differences in the mean NIPS at the 30, 60, 90, 120, 150, and 240-min intervals between group I and group II. Further, 80% of neonates were extubated in group I compared to 50% in group II, which was statistically significant (P = 0.025). The duration of intubation was longer in group II than in group I, with a suggestive significance of P = 0.093.
Conclusion: Caudal epidural infusion provides adequate perioperative analgesia, promotes rapid weaning from the ventilator, and contributes to a successful outcome.
abstract_id: PUBMED:26530835
Continuous chloroprocaine infusion for thoracic and caudal epidurals as a postoperative analgesia modality in neonates, infants, and children. Background: Neonates and infants have decreased metabolic capacity for amide local anesthetics and increased risk of local anesthetic toxicity compared to the general population. Chloroprocaine is an ester local anesthetic that has an extremely short plasma half-life in infants as well as adults. Existing reports support the safety and efficacy of continuous chloroprocaine epidural infusions in neonates and young infants during the intraoperative period. Despite this, continuous chloroprocaine epidural infusion may be an under-utilized method of postoperative analgesia for this patient population. In particular, it may improve pain control in neonates and infants with incisions stretching many dermatomes or those with hepatic impairment.
Methods: We retrospectively reviewed our experience over 4 years with continuous chloroprocaine epidural infusions in neonates, infants, and children with a focus on the postoperative management of pain.
Results: Twenty-one pediatric patients received continuous 2-chloroprocaine epidural infusions for postoperative pain management from January 2010 to April 2014 for thoracic, abdominal, and limb procedures. The epidural infusion consisted of 1.5% chloroprocaine or 1.5% chloroprocaine with fentanyl. Tabulating the morphine and hydromorphone used for rescue analgesia, the median (interquartile range) opioid consumption (mg·kg⁻¹·day⁻¹ of intravenous morphine equivalents) for the first, second, and third 24-h postoperative periods were 0.02 (0-0.48), 0.30 (0-0.44), and 0.14 (0-0.29), respectively. Examining the total fentanyl usage, the median (interquartile range) fentanyl consumption (μg·kg⁻¹·day⁻¹) for first, second, and third 24-h postoperative periods were 3.89 (0.41-7.24), 0 (0.00-4.06), and 0 (0.00-0.51), respectively. The median N-PASS score assessed every 6 h from 0 to 72 h postoperatively was 0, 1, 2, 0, 0, 1, 0, 0, 0, 0, 1, 0, and 0, respectively. The median FLACC score assessed every 6 h from 0 to 72 h postoperatively was 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, and 0, respectively.
Conclusion: The results suggest that chloroprocaine offers an efficacious alternative to the amide local anesthetics for postoperative epidural analgesia in the pediatric population.
Answer: The use of fentanyl in epidural solutions for postthoracotomy pain management in neonates may affect surgical outcomes. A study found that the length of stay (LOS) in patients receiving fentanyl in their epidural solution was one day longer than those receiving plain bupivacaine (median 4 vs 3 days, respectively). Additionally, the incidence of respiratory depression was greater in patients receiving epidural infusions containing fentanyl (50% vs 17%, respectively). These findings suggest that the addition of fentanyl to epidural infusions of bupivacaine in infants undergoing thoracotomy for resection of congenital cystic adenomatoid malformation (CCAM) may prolong recovery and increase the incidence of adverse respiratory events without providing a significant analgesic benefit (PUBMED:16034755).
In contrast, another study on neonates undergoing tracheoesophageal fistula surgeries found that caudal epidural infusion with ropivacaine provided adequate perioperative analgesia and promoted rapid weaning from the ventilator, contributing to a successful outcome. None of the neonates in the caudal epidural group received opioids, and 80% were extubated compared to 50% in the group receiving intravenous fentanyl, which was statistically significant (PUBMED:35431742).
These findings indicate that while fentanyl can be used as part of an epidural analgesic regimen, its use may be associated with longer hospital stays and a higher incidence of respiratory depression in neonates. Alternative analgesic approaches, such as caudal epidural with ropivacaine, may offer benefits in terms of perioperative analgesia and recovery outcomes. |
Instruction: Does prophylactic treatment with proteolytic enzymes reduce acute toxicity of adjuvant pelvic irradiation?
Abstracts:
abstract_id: PUBMED:12413670
Does prophylactic treatment with proteolytic enzymes reduce acute toxicity of adjuvant pelvic irradiation? Results of a double-blind randomized trial. Purpose: Does prophylactic treatment with proteolytic enzymes reduce acute toxicity of adjuvant pelvic radiotherapy?
Material And Methods: Fifty-six patients with an indication for adjuvant pelvic irradiation after curative surgery were double-blind randomized. All patients took 3 x 4 capsules of study medication daily during radiotherapy. Twenty-eight patients in the enzyme group (EG) received capsules containing papain, trypsin and chymotrypsin, and 28 in the placebo group (PG) received placebo capsules. All patients were irradiated with 5 x 1.8 Gy weekly to 50.4 Gy using a four-field box technique after CT-based planning. The primary objective was the grade of diarrhea, nausea, vomiting, fatigue and epitheliolysis during radiotherapy. Secondary objectives were the number of supportive medications and treatment interruptions due to acute toxicity.
Results: None/mild diarrhea: 43% EG, 64% PG. Moderate/severe diarrhea: 57% EG, 36% PG (P = 0.11). Mean duration: 11 days in EG, 10 days in PG. None/mild nausea: 93% EG, 93% PG. Moderate/severe nausea: 7% EG, 7% PG. None/mild vomiting: 100% EG, 97% PG. None/mild fatigue: 82% EG, 93% PG. Moderate/severe fatigue: 18% EG, 7% PG (P = 0.23). None/mild epitheliolysis: 75% EG, 93% PG. Moderate/severe epitheliolysis: 25% EG, 7% PG (P = 0.16). Treatment interruption (mean days): 2.44 in EG, 1.46 in PG. Number of supportive medication: 29 in EG, 19 in PG.
Conclusions: The prophylactic use of proteolytic enzymes does not reduce acute toxicities, treatment interruptions, or the number of supportive medications, and therefore does not improve tolerance of adjuvant pelvic radiotherapy.
abstract_id: PUBMED:32985332
Prophylactic Faecalibacterium prausnitzii treatment prevents the acute breakdown of colonic epithelial barrier in a preclinical model of pelvic radiation disease. Every year, millions of people around the world benefit from radiation therapy to treat cancers localized in the pelvic area. Damage to healthy tissue in the radiation field can cause undesirable toxic effects leading to gastrointestinal complications called pelvic radiation disease. A change in the composition and/or function of the microbiota could contribute to radiation-induced gastrointestinal toxicity. In this study, we tested the prophylactic effect of a new generation of probiotic like Faecalibacterium prausnitzii (F. prausnitzii) on acute radiation-induced colonic lesions. Experiments were carried out in a preclinical model of pelvic radiation disease. Rats were locally irradiated at 29 Gray in the colon resulting in colonic epithelial barrier rupture. Three days before the irradiation and up to 3 d after the irradiation, the F. prausnitzii A2-165 strain was administered daily (intragastrically) to test its putative protective effects. Results showed that prophylactic F. prausnitzii treatment limits radiation-induced para-cellular hyperpermeability, as well as the infiltration of neutrophils (MPO+ cells) in the colonic mucosa. Moreover, F. prausnitzii treatment reduced the severity of the morphological change of crypts, but also preserved the pool of Sox-9+ stem/progenitor cells, the proliferating epithelial PCNA+ crypt cells and the Dclk1+/IL-25+ differentiated epithelial tuft cells. The benefit of F. prausnitzii was associated with increased production of IL-18 by colonic crypt epithelial cells. Thus, F. prausnitzii treatment protected the epithelial colonic barrier from colorectal irradiation. New-generation probiotics may be promising prophylactic treatments to reduce acute side effects in patients treated with radiation therapy and may improve their quality of life.
abstract_id: PUBMED:33358082
Pelvic irradiation and hematopoietic toxicity: A review of the literature Pelvic bone marrow is the site of nearly 50% of total hematopoiesis. Radiation therapy of pelvic lymph node areas, and of cancers located near the bony structures of the pelvis, exposes patients to hematological toxicity at rates in the range of 30% to 70%. This toxicity depends on many factors, including the presence or absence of concomitant chemotherapy and its type, the volume of irradiated bone, the doses received, and the initial hematopoietic reserve. Intensity-modulated radiation therapy allows the dose delivered to organs at risk to be optimised while providing optimal coverage of target volumes; this, however, requires that dose constraints be known precisely to limit the incidence of radiation side effects. This literature review focuses first on the pelvic lymph node areas and nearby bony volumes, then on the effects of irradiation on bone marrow and the current dosimetric constraints derived from them, and finally on hematological toxicities by cancer site and progress in reducing these toxicities.
abstract_id: PUBMED:33123482
Prophylactic Extended-Field Irradiation in Patients With Cervical Cancer: A Literature Review. Currently, the standard radiation field for locally advanced cervical cancer patients without evidence of para-aortic lymph node (PALN) metastasis is the pelvis. Due to the low accuracy of imaging in the diagnosis of PALN metastasis and the high incidence of PALN failure after pelvic radiotherapy, prophylactic pelvic and para-aortic irradiation, also called extended-field irradiation (EFI), is performed for patients with cervical cancer. In the era of concurrent chemoradiotherapy, randomized controlled trials are limited, and whether patients with cervical cancer can benefit from prophylactic EFI is still controversial. With conformal or intensity-modulated radiation therapy, patients tolerate prophylactic EFI very well. The severe toxicities of prophylactic EFI are not significantly higher than those of pelvic radiotherapy. We recommend delivering prophylactic EFI to cervical cancer patients with common iliac lymph nodes metastasis. Clinical trials are needed to investigate whether patients with ≥3 positive pelvic lymph nodes and FIGO stage IIIB disease can benefit from prophylactic EFI. According to the distribution of PALNs, it is reasonable to use the renal vein as the upper border of the radiation therapy field for patients treated with prophylactic EFI. The clinical target volume expansion of the node from the vessel should be smaller in the right para-caval region than in the left lateral para-aortic region. The right para-caval region above L2 or L3 may be omitted from the PALN target volume to reduce the dose to the duodenum. More clinical trials on prophylactic EFI in cervical cancer are needed.
abstract_id: PUBMED:33748442
Prospective observational study evaluating acute and delayed treatment related toxicities of prophylactic extended field volumetric modulated arc therapy with concurrent cisplatin in cervical cancer patients with pelvic lymph node metastasis. Purpose: To evaluate the treatment related acute and delayed toxicities of extended field Volumetric modulated arc therapy (VMAT) with concurrent chemotherapy in patients of locally advanced cervical cancer with pelvic lymph nodes.
Material And Methods: From 2014 to 2016, 15 patients with locally advanced cervical cancer and fluorodeoxyglucose positron emission tomography (FDG-PET)-positive pelvic lymph nodes were treated with extended-field simultaneous integrated boost (SIB)-VMAT, 45 Gy/55 Gy in 25 fractions over 5 weeks, and concurrent cisplatin. Acute toxicities were documented according to the Common Terminology Criteria for Adverse Events version 4 (CTCAE v.4). Dose-volume parameters and patient characteristics were analyzed for association with toxicities.
Results: The median age of patients at diagnosis was 48 years; 40% (6 patients) were stage IIB and 60% (9 patients) were stage IIIB. The median number of involved pelvic lymph nodes was 2 (range, 1-4), and the commonest location was the external iliac lymph node region (86%). The median number of concurrent chemotherapy cycles received was five. Treatment was well tolerated and there were no grade ≥ 3 acute toxicities. The commonest acute toxicities observed were vomiting (grade ≥ 2 in 13.3%) followed by nausea (grade ≥ 2 in 6%), and these were associated with the volume of bowel bag receiving 45 Gy. Constitutional symptoms (grade ≥ 2) were observed in 6% of patients and had no dosimetric associations. At a median follow-up of 43 months, delayed grade ≥ 1, 2, and 3 toxicities were observed in 80%, 0%, and 0% of patients, respectively, with diarrhea being the commonest.
Conclusion: Prophylactic para-aortic extended-field VMAT with concurrent chemotherapy for locally advanced cervical cancer is well tolerated, with an acceptable acute toxicity profile. Significant grade 3 acute/delayed toxicities were not observed in this cohort of patients.
abstract_id: PUBMED:31904435
Concurrent Pituicytoma, Meningioma, and Cavernomas After Cranial Irradiation for Childhood Acute Lymphoblastic Leukemia. Background: The majority of patients with acute lymphoblastic leukaemia develop disease relapse in the central nervous system in the absence of central nervous system-directed prophylactic therapy. In the past, prophylactic cranial irradiation was commonly used in the form of whole-brain radiotherapy in patients with acute lymphoblastic leukemia to prevent the development of intracranial diseases. However, in addition to the inherent risk of toxicity, this type of therapy has several delayed side effects including the development of secondary intracranial tumors.
Case Description: We report a rare case of a patient with concurrent pituicytoma, meningioma, and cavernomas 44 years after prophylactic cranial irradiation for childhood acute lymphoblastic leukemia. The patient presented with visual disturbance, headache, and features of hypopituitarism. Endoscopic transsphenoidal resection of the pituicytoma and meningioma was performed. Subsequent regrowth of the residual meningioma necessitated further surgery and adjuvant treatment with radiotherapy.
Conclusions: This case report highlights the unusual case of a patient with 3 concurrent intracranial lesions of distinct pathologies after prophylactic cranial irradiation therapy for childhood acute lymphoblastic leukemia.
abstract_id: PUBMED:28577033
Impact of acute hematological toxicity on treatment interruptions during cranio-spinal irradiation in medulloblastoma: a tertiary care institute experience. To analyze treatment interruptions due to acute hematological toxicity in patients of medulloblastoma receiving cranio-spinal irradiation (CSI). Prospectively collected data from case records of 52 patients of medulloblastoma treated between 2011 and 2014 was evaluated. Blood counts were monitored twice a week during CSI. Spinal irradiation was interrupted for patients with ≥grade 2 hematological toxicity and resumed after recovery to grade 1 level (TLC >3000; platelet count >75,000). Treatment interruptions and hematological toxicity were analyzed. Median age was 11 years. All patients received adjuvant CSI of 36 Gy, followed by boost of 18 Gy to posterior fossa, at 1.8 Gy per fraction. Concurrent chemotherapy was not given. Adjuvant chemotherapy was given after CSI for high risk patients. Spinal fields were interrupted in 73.1% of patients. Cause of first interruption was leucopenia in 92.1%, thrombocytopenia in 2.6%, and both in 5.3%. Median number of fractions at first interruption was 8, with 25% of interruptions in first week. Median duration for hematological recovery after nadir was 5 days for leucopenia and 3 days for thrombocytopenia. Half of the patients had at least 2 interruptions, and 19% subsequently developed grade 3 toxicity. On multivariate analysis, significant correlation with duration of delay was observed for pre-treatment haemoglobin, number of fractions at first interruption, grade and duration of recovery of leucopenia. Acute hematological toxicity with CSI is frequently under-reported. Patients with low pre-treatment hemoglobin, early onset leucopenia, profound leucopenia and prolonged recovery times are at a higher risk of having protracted courses of irradiation. Frequent monitoring of blood counts and timely interruption of spinal fields of irradiation at grade 2 level of hematological toxicity minimizes the risk of grade 3 and grade 4 toxicity, while reducing the interruptions in irradiation of the gross tumour bed.
abstract_id: PUBMED:32505868
Effect of prophylactic granulocyte-colony stimulating factor (G-CSF) on acute hematological toxicity in medulloblastoma patients during craniospinal irradiation (CSI). Objectives: Haematological toxicity and treatment breaks are common during cranio-spinal irradiation (CSI) due to irradiation of large volume of bone marrow. We conducted this study to see the effect of prophylactic granulocyte colony stimulating factor (GCSF) in reducing treatment breaks.
Patients And Methods: The study was conducted over a period of 15 months from August 2017 to November 2018. Histopathologically proven Medulloblastoma patients received prophylactic GCSF during CSI. Acute hematological toxicities and treatment breaks were noted and effect of age and pretreatment blood counts were analyzed by SPSS (Statistical Package for Social Sciences) version 23.
Results: A total of 28 patients were included in the study. During CSI, hematological toxicity leading to treatment breaks was observed in 11 (39.3%) patients, of whom ten had grade 3 toxicity and one had grade 2 toxicity. Younger age (<10 years) at diagnosis was significantly associated with the development of hematological toxicity (p = 0.028, chi-square). No correlation was found with pre-treatment blood counts.
Conclusion: Prophylactic use of GCSF may be effective in preventing radiation induced hematological toxicity and treatment breaks.
abstract_id: PUBMED:25006292
Primary anaplastic astrocytoma of the brain after prophylactic cranial irradiation in a case of acute lymphoblastic leukemia: Case report and review of the literature. A 6½-year-old boy developed acute lymphoblastic leukemia and was treated with systemic chemotherapy, an intrathecal triple-drug regimen, and prophylactic cranial irradiation. Ten years later, while the leukemia was in remission, he developed an anaplastic astrocytoma of the postero-superior cerebellum on the left side. He was treated with surgical excision followed by adjuvant three-dimensional conformal radiotherapy and is on salvage chemotherapy with temozolomide. It is possible that the anaplastic astrocytoma is a radiation-induced malignancy.
abstract_id: PUBMED:19467803
Dose-volume relationships for acute bowel toxicity in patients treated with pelvic nodal irradiation for prostate cancer. Purpose: To find correlation between dose-volume histograms (DVHs) of the intestinal cavity (IC) and moderate-severe acute bowel toxicity in men with prostate cancer treated with pelvic nodal irradiation.
Methods And Materials: The study group consisted of 191 patients with localized prostate cancer who underwent whole-pelvis radiotherapy with radical or adjuvant/salvage intent during January 2004 to November 2007. Complete planning/clinical data were available in 175 of these men, 91 of whom were treated with a conventional four-field technique (50.4 Gy, 1.8 Gy/fraction) and 84 of whom were treated with IMRT using conventional Linac (n = 26, 50.4 Gy, 1.8 Gy/fraction) or Helical TomoTherapy (n = 58, 50-54 Gy, 1.8-2 Gy/fraction). The IC outside the planning target volume (PTV) was contoured and the DVH for the first 6 weeks of treatment was recovered in all patients. The correlation between a number of clinical and DVH (V10-V55) variables and toxicity was investigated in univariate and multivariate analyses. The correlation between DVHs for the IC outside the PTV and DVHs for the whole IC was also assessed.
Results: Twenty-two patients experienced toxicity (3/22 in the IMRT/tomotherapy group). Univariate analyses showed a significant correlation between V20-V50 and toxicity (p = 0.0002-0.001), with a higher predictive value observed for V40-V50. Previous prostatectomy (p = 0.066) and abdominal/pelvic surgery (p = 0.12) also correlated with toxicity. Multivariate analysis that included V45, abdominal/pelvic surgery, and prostatectomy showed that the most predictive parameters were V45 (p = 0.002) and abdominal/pelvic surgery (p = 0.05, HR = 2.4).
Conclusions: Our avoidance IMRT approach drastically reduces the incidence of acute bowel toxicity. V40-V50 of IC and, secondarily, previous abdominal/pelvic surgery were the main predictors of acute bowel toxicity.
Answer: No, prophylactic treatment with proteolytic enzymes does not reduce acute toxicity of adjuvant pelvic irradiation. A double-blind randomized trial involving 56 patients who were undergoing adjuvant pelvic irradiation after curative surgery found that the prophylactic use of proteolytic enzymes did not reduce the grade of diarrhea, nausea, vomiting, fatigue, and epitheliolysis during radiotherapy. Additionally, it did not decrease the number of treatment interruptions and the number of supportive medications required, thus not improving the tolerance of adjuvant pelvic radiotherapy (PUBMED:12413670). |
Instruction: Access to heart failure care post emergency department visit: do we meet established benchmarks and does it matter?
Abstracts:
abstract_id: PUBMED:23622909
Access to heart failure care post emergency department visit: do we meet established benchmarks and does it matter? Background: The Canadian Cardiology Society recommends that patients should be seen within 2 weeks after an emergency department (ED) visit for heart failure (HF). We sought to investigate whether patients who had an ED visit for HF subsequently consult a physician within the current established benchmark, to explore factors related to physician consultation, and to examine whether delay in consultation is associated with adverse events (AEs) (death, hospitalization, or repeat ED visit).
Methods: Patients were recruited by nurses at 8 hospital EDs in Québec, Canada, and interviewed by telephone within 6 weeks of discharge and subsequently at 3 and 6 months. Clinical variables were extracted from medical charts by nurses. We used Cox regression in the analysis.
Results: We enrolled 410 patients (mean age 74.9 ± 11.1 years, 53% males) with a confirmed primary diagnosis of HF. Only 30% consulted with a physician within 2 weeks post-ED visit. By 4 weeks, 51% consulted a physician. Over the 6-month follow-up, 26% returned to the ED, 25% were hospitalized, and 9% died. Patients who were followed up within 4 weeks were more likely to be older and have higher education and a worse quality of life. Patients who consulted a physician within 4 weeks of ED discharge had a lower risk of AEs (hazard ratio 0.59, 95% CI 0.35-0.99).
Conclusion: Prompt follow-up post-ED visit for HF is associated with lower risk for major AEs. Therefore, adherence to current HF guideline benchmarks for timely follow-up post-ED visit is crucial.
abstract_id: PUBMED:35372819
Emergency Department/Urgent Care as Usual Source of Care and Clinical Outcomes in CKD: Findings From the Chronic Renal Insufficiency Cohort Study. Rationale & Objective: Having a usual source of care increases use of preventive services and is associated with improved survival in the general population. We evaluated this association in adults with chronic kidney disease (CKD).
Study Design: Prospective, observational cohort study.
Setting & Participants: Adults with CKD enrolled in the Chronic Renal Insufficiency Cohort (CRIC) Study.
Predictor: Usual source of care was self-reported as: 1) clinic, 2) emergency department (ED)/urgent care, 3) other.
Outcomes: Primary outcomes included incident end-stage kidney disease (ESKD), atherosclerotic events (myocardial infarction, stroke, or peripheral artery disease), incident heart failure, hospitalization events, and all-cause death.
Analytical Approach: Multivariable regression analyses to evaluate the association between usual source of care (ED/urgent care vs clinic) and primary outcomes.
Results: Among 3,140 participants, mean age was 65 years, 44% female, 45% non-Hispanic White, 43% non-Hispanic Black, and 9% Hispanic, mean estimated glomerular filtration rate 50 mL/min/1.73 m2. Approximately 90% identified clinic as usual source of care, 9% ED/urgent care, and 1% other. ED/urgent care reflected a more vulnerable population given lower baseline socioeconomic status, higher comorbid condition burden, and poorer blood pressure and glycemic control. Over a median follow-up time of 3.6 years, there were 181 incident end-stage kidney disease events, 264 atherosclerotic events, 263 incident heart failure events, 288 deaths, and 7,957 hospitalizations. Compared to clinic as usual source of care, ED/urgent care was associated with higher risk for all-cause death (HR, 1.53; 95% CI, 1.05-2.23) and hospitalizations (RR, 1.41; 95% CI, 1.32-1.51).
Limitations: Cannot be generalized to all patients with CKD. Causal relationships cannot be established.
Conclusions: In this large, diverse cohort of adults with moderate-to-severe CKD, those identifying ED/urgent care as usual source of care were at increased risk for death and hospitalizations. These findings highlight the need to develop strategies to improve health care access for this high-risk population.
abstract_id: PUBMED:28447144
Standardized collection of presenting complaints in the emergency room: Integration of coded presenting complaints into the electronic medical record system of an emergency department and their value for health care research Background: A patient's point of entry into emergency care is a symptom or complaint. To evaluate the subsequent processes in an emergency department until a diagnosis is made, this information has to be taken into account.
Objectives: We report the introduction of coded presenting complaints into the electronic medical record system of an emergency department and describe the patients based on these data.
Methods: The CEDIS presenting complaint list was integrated into the emergency department information system of an emergency department (38,000 patients/year). After 8 months, we performed an exploratory analysis of the most common presenting complaints. Furthermore, we identified the most frequent diagnoses for presenting complaint "shortness of breath" and the most frequent presenting complaints for the diagnosis of sepsis.
Results: After implementing the presenting complaint list, a presenting complaint code was assigned to each patient. In our sample (26,330 cases), "extremity pain and injury" comprised the largest group of patients (29.5%). "Chest pain-cardiac features" (3.7%) and "extremity weakness/symptoms of cerebrovascular accident" (2.4%) were the main cardiac and neurologic complaints, respectively. They were mostly triaged as urgent (>80%) and hospitalized in critical care units (>50%). The main diagnosis for presenting complaint "shortness of breath" was heart failure (25.1%), while the main presenting complaint for the diagnosis sepsis was "shortness of breath" (18.1%).
Conclusions: This classification, containing 171 presenting complaints, was implemented successfully without the need for extensive staff training. The documentation of coded presenting complaints enables symptom-based analysis of the health care provided in emergency departments.
abstract_id: PUBMED:32292429
Pediatric congenital heart diseases: Patterns of presentation to the emergency department of a tertiary care hospital. Objective: To observe presentation of Pediatric congenital cardiac defects to the Emergency Department (ED) of a tertiary care hospital in Pakistan.
Methods: This is a retrospective chart review of patients under the age of 16 years with congenital cardiac defects presenting to the Emergency Department of Aga Khan University Hospital over a period of eighteen months, from January 2012 to June 2013. The study population was divided into two groups: the first group comprised children whose congenital cardiac defects were undiagnosed, whereas the second group comprised children with previously diagnosed congenital cardiac defects presenting to the ED. In previously diagnosed cases, each visit was counted as a separate encounter.
Results: Out of 133 children, 44 (33.5%) were diagnosed with congenital cardiac disease for the first time in the ED (Group-1), while 89 (66.5%) children were previously diagnosed cases of congenital heart disease (Group-2). In Group-1, the main reasons for ED visits were cyanosis, cardiac failure, murmur evaluation, and cardiogenic shock, whereas in Group-2 the main presentations were cardiac failure, hypercyanotic spells, gastroenteritis, lower respiratory tract infection, and post-operative issues. There were 13 deaths in total.
Conclusion: High index of suspicion is necessary for early diagnosis and management of children with congenital heart disease in the pediatric emergency department.
abstract_id: PUBMED:38093494
The 'peptide for life' initiative in the emergency department study. Aims: Natriuretic peptide (NP) uptake varies in Emergency Departments (EDs) across Europe. The 'Peptide for Life' (P4L) initiative, led by Heart Failure Association, aims to enhance NP utilization for early diagnosis of heart failure (HF). We tested the hypothesis that implementing an educational campaign in Western Balkan countries would significantly increase NP adoption rates in the ED.
Methods And Results: This registry examined NP adoption before and after implementing the P4L-ED study across 10 centres in five countries: Bosnia and Herzegovina, Croatia, Montenegro, North Macedonia, and Serbia. A train-the-trainer programme was implemented to enhance awareness of NP testing in the ED, and centres without access received point-of-care instruments. Differences in NP testing between the pre-P4L-ED and post-P4L-ED phases were evaluated. A total of 2519 patients were enrolled in the study: 1224 (48.6%) in the pre-P4L-ED phase and 1295 (51.4%) in the post-P4L-ED phase. NP testing was performed in the ED on 684 patients (55.9%) during the pre-P4L-ED phase and on 1039 patients (80.3%) during the post-P4L-ED phase, indicating a significant absolute difference of 24.4% (95% CI: 20.8% to 27.9%, P < 0.001). The use of both NPs and echocardiography significantly increased from 37.7% in the pre-P4L-ED phase to 61.3% in the post-P4L-ED phase. There was an increased prescription of diuretics and SGLT2 inhibitors during the post-P4L-ED phase.
Conclusions: By increasing awareness and providing resources, the utilization of NPs increased in the ED, leading to improved diagnostic accuracy and enhanced patient care.
abstract_id: PUBMED:31390036
Identification of Emergency Care-Sensitive Conditions and Characteristics of Emergency Department Utilization. Importance: Monitoring emergency care quality requires understanding which conditions benefit most from timely, quality emergency care.
Objectives: To identify a set of emergency care-sensitive conditions (ECSCs) that are treated in most emergency departments (EDs), are associated with a spectrum of adult age groups, and represent common reasons for seeking emergency care and to provide benchmark national estimates of ECSC acute care utilization.
Design, Setting, And Participants: A modified Delphi method was used to identify ECSCs. In a cross-sectional analysis, ECSC-associated visits by adults (aged ≥18 years) were identified based on International Statistical Classification of Diseases, Tenth Revision, Clinical Modification diagnosis codes and analyzed with nationally representative data from the 2016 US Nationwide Emergency Department Sample. Data analysis was conducted from January 2018 to December 2018.
Main Outcomes And Measures: Identification of ECSCs and ECSC-associated ED utilization patterns, length of stay, and charges.
Results: An expert panel rated 51 condition groups as emergency care sensitive. Emergency care-sensitive conditions represented 16 033 359 of 114 323 044 ED visits (14.0%) in 2016. On average, 8 535 261 of 17 886 220 ED admissions (47.7%) were attributed to ECSCs. The most common ECSC ED visits were for sepsis (1 716 004 [10.7%]), chronic obstructive pulmonary disease (1 273 319 [7.9%]), pneumonia (1 263 971 [7.9%]), asthma (970 829 [6.1%]), and heart failure (911 602 [5.7%]) but varied by age group. Median (interquartile range) length of stay for ECSC ED admissions was longer than non-ECSC ED admissions (3.2 [1.7-5.8] days vs 2.7 [1.4-4.9] days; P < .001). In 2016, median (interquartile range) ED charges per visit for ECSCs were $2736 ($1684-$4605) compared with $2179 ($1118-$4359) per visit for non-ECSC ED visits (P < .001).
Conclusions And Relevance: This comprehensive list of ECSCs can be used to guide indicator development for pre-ED, intra-ED, and post-ED care and overall assessment of the adult, non-mental health, acute care system. Health care utilization and costs among patients with ECSCs are substantial and warrant future study of validation, variations in care, and outcomes associated with ECSCs.
abstract_id: PUBMED:27390973
Emergency department triage of acute heart failure triggered by pneumonia; when an intensive care unit is needed? Community acquired pneumonia (CAP) is a frequent triggering factor for decompensation of a chronic cardiac dysfunction, leading to acute heart failure (AHF). Patients with AHF exacerbated by CAP, are often admitted through the emergency department for ICU hospitalization, even though more than half the cases do not warrant any intensive care treatment. Emergency department physicians are forced to make disposition decisions based on subjective criteria, due to lack of evidence-based risk scores for AHF combined with CAP. Currently, the available risk models refer distinctly to either AHF or CAP patients. Extrapolation of data by arbitrarily combining these models, is not validated and can be treacherous. Examples of attempts to apply acuity scales provenient from different disciplines and the resulting discrepancies, are given in this review. There is a need for severity classification tools especially elaborated for use in the emergency department, applicable to patients with mixed AHF and CAP, in order to rationalize the ICU dispositions. This is bound to facilitate the efforts to save both lives and resources.
abstract_id: PUBMED:25904756
Editor's Choice- Call to action: Initiation of multidisciplinary care for acute heart failure begins in the Emergency Department. The Emergency Department is the first point of healthcare contact for most patients presenting with signs and symptoms of acute heart failure (AHF) and thus, plays a critical role in AHF management. Despite the increasing burden of AHF on healthcare systems in general and Emergency Departments in particular, there is little guidance for implementing care and disease management programmes. This has led to an urgent call for action to prioritize and improve the management of patients with AHF presenting to the Emergency Department. At a local level, hospitals are urged to develop and implement individual multidisciplinary AHF management programmes, which include detailed care pathways and the monitoring of management adherence, to ensure that care is based on the pathophysiology and causes of AHF. Multiple disciplines, including emergency medicine, hospital medicine, cardiology, nephrology and geriatrics, should provide input into the development of a multidisciplinary approach to AHF management in the ED and beyond, including in-hospital treatment, discharge and follow-up. This will ensure consensus of opinion and improve adherence. The benefits of standardized, multidisciplinary care have been shown in other areas of acute and chronic diseases and will also provide benefit for AHF patients presenting to Emergency Departments.
abstract_id: PUBMED:28803593
Predictors of obtaining follow-up care in the province of Ontario, Canada, following a new diagnosis of atrial fibrillation, heart failure, and hypertension in the emergency department. Objective: Patients with cardiovascular diseases are common in the emergency department (ED), and continuity of care following that visit is needed to ensure that they receive evidence-based diagnostic tests and therapy. We examined the frequency of follow-up care after discharge from an ED with a new diagnosis of one of three cardiovascular diseases.
Methods: We performed a retrospective cohort study of patients with a new diagnosis of heart failure, atrial fibrillation, or hypertension, who were discharged from 157 non-pediatric EDs in Ontario, Canada, between April 2007 and March 2014. We determined the frequency of follow-up care with a family physician, cardiologist, or internist within seven and 30 days, and assessed the association of patient, emergency physician, and family physician characteristics with obtaining follow-up care using cause-specific hazard modeling.
Results: There were 41,485 qualifying ED visits. Just under half (47.0%) had follow-up care within seven days, with 78.7% seen by 30 days. Patients with serious comorbidities (renal failure, dementia, COPD, stroke, coronary artery disease, and cancer) had a lower adjusted hazard of obtaining 7-day follow-up care (HRs 0.77-0.95) and 30-day follow-up care (HR 0.76-0.95). The only emergency physician characteristic associated with follow-up care was 5-year emergency medicine specialty training (HR 1.11). Compared to those whose family physician was remunerated via a primarily fee-for-service model, patients were less likely to obtain 7-day follow-up care if their family physician was remunerated via three types of capitation models (HR 0.72, 0.81, 0.85) or via traditional fee-for-service (HR 0.91). Findings were similar for 30-day follow-up care.
Conclusions: Only half of patients discharged from an ED with a new diagnosis of atrial fibrillation, heart failure, and hypertension were seen within a week of being discharged. Patients with significant comorbidities were less likely to obtain follow-up care, as were those with a family physician who was remunerated via primarily capitation methods.
abstract_id: PUBMED:31157039
Impact of a pharmaceutical care programme for patients with chronic disease initiated at the emergency department on drug-related negative outcomes: a randomised controlled trial. Background: The resolution of potential drug-related problems is a priority of pharmaceutical care programmes.
Objectives: To assess the clinical impact on drug-related negative outcomes of a pharmaceutical care programme focusing on the resolution of potential drug-related problems, initiated in the emergency department for patients with heart failure (HF) and/or chronic obstructive pulmonary disease (COPD).
Methods: A controlled trial in which older adults (≥65 years) receiving four or more medications and admitted to the emergency department for ≥12 hours for worsening of HF and/or COPD were randomised (1:1) to either a pharmaceutical care programme focusing on resolving potential drug-related problems initiated at the emergency department (intervention group (IG)) or standard care (control group). Comparisons between the groups were made for the proportion of patients with drug-related negative outcomes, the number of drug-related negative outcomes per patient, mean stay, patients readmitted within 180 days, and 180-day mortality.
Results: 118 patients were included, 59 in each group. Fewer patients in the IG had drug-related negative outcomes (37 (62.7%) vs 47 (79.7%) in the control group (p=0.042)). Fewer drug-related negative outcomes per patient occurred in the IG (56 (0.95 per patient) vs 85 (1.44 per patient) in the control group (p=0.01)). The mean stay was similar between groups (194.7 hours in the IG vs 242.5 hours in the control group (p=0.186)). No difference in revisits within 180 days was found (32 (54.24%) in the IG vs 22 (37.3%) in the control group (p=0.065)). 180-Day mortality was detected in 11 (18.6%) patients in the IG compared with 13 (22%) in the control group (p=0.647).
Conclusion: A pharmaceutical care programme focusing on resolving potential drug-related problems initiated at the emergency department has a favourable clinical impact, as it reduces the number and prevalence of drug-related negative outcomes. No difference was found in other outcome variables. Trial registration number: NCT02368548.
Answer: The study by Lee et al. (PUBMED:23622909) investigated whether patients with heart failure (HF) received follow-up care within the established benchmark of 2 weeks after an emergency department (ED) visit, as recommended by the Canadian Cardiology Society. The study found that only 30% of patients consulted with a physician within 2 weeks post-ED visit, and by 4 weeks, this increased to 51%. The study also revealed that prompt follow-up post-ED visit for HF was associated with a lower risk of adverse events (AEs), such as death, hospitalization, or repeat ED visit. Patients who consulted a physician within 4 weeks of ED discharge had a lower risk of AEs (hazard ratio 0.59, 95% CI 0.35-0.99), indicating that adherence to HF guideline benchmarks for timely follow-up post-ED visit is crucial for reducing the risk of major AEs.
Therefore, based on the findings of this study, it appears that the established benchmarks for post-ED follow-up care for HF are not being met for a significant proportion of patients. Moreover, the study demonstrates that meeting these benchmarks does matter, as timely physician consultation after an ED visit for HF is associated with better patient outcomes. This underscores the importance of ensuring that patients with HF receive prompt follow-up care after an ED visit to potentially reduce the risk of AEs and improve overall patient care. |
Instruction: Does "Touching Four" on the Worth 4-dot test indicate fusion in young children?
Abstracts:
abstract_id: PUBMED:8764793
Does "Touching Four" on the Worth 4-dot test indicate fusion in young children? A computer simulation. Purpose: "Touching four" dots on the Worth 4-dot test is used sometimes as an indication of fusion in young children. The authors examined the reliability of this test.
Methods: A computer simulation of the Worth 4-dot test generated images representing fusion, suppression, and alternate fixation. Sixteen children, ranging in age from 32 to 48 months, were examined using this test.
Results: None of the children could accurately describe the images verbally. Alternate fixation could not be distinguished from fusion by asking the subjects to touch the dots. Monocular suppression was identified accurately in all subjects.
Conclusion: Touching four dots on the Worth 4-dot test does not distinguish fusion from alternate fixation in children with normal ocular alignment. This has important implications regarding the diagnosis of monofixation syndrome and assessment of the response to a prism adaptation trial in young children.
abstract_id: PUBMED:23275822
Correlation between Worth Four Dot Test Results and Fusional Control in Intermittent Exotropia. Purpose: To compare the results of Worth 4-dot test (WFDT) performed in dark and light, and at different distances, with fusional control in patients with intermittent exotropia (IXT).
Methods: Dark and light WFDT was performed for new IXT subjects at different distances and the results were compared with level of office-based fusional control.
Results: Fifty IXT patients, including 17 male and 33 female subjects, participated in the study. A significant difference (P<0.05) was observed between levels of home- and office-based fusional control. A weak correlation was present between the results of WFDT and the level of office-based fusional control; the highest agreement (Kappa=0.088) was observed with dark WFDT performed at a distance of 4 m.
Conclusion: Evaluation of fusional state by far WFDT, especially in a dark room, shows modest correlation with office-based fusional control in IXT patients and can be used as an adjunct to more complex tests such as far stereoacuity.
abstract_id: PUBMED:32818097
Worth 4 Dot App for Determining Size and Depth of Suppression. Purpose: To describe and evaluate an iOS application suppression test, Worth 4 Dot App (W4DApp), which was designed and developed to assess size and depth of suppression.
Methods: Characteristics of sensory fusion were evaluated in 25 participants (age 12-69 years) with normal (n = 6) and abnormal (n = 19) binocular vision. Suppression zone size and classification of fusion were determined by W4DApp and by flashlight Worth 4 Dot (W4D) responses from 33 cm to 6 m. Measures of suppression depth were compared between the W4DApp, the flashlight W4D with neutral density filter bar and the dichoptic letters contrast balance index test.
Results: There was high agreement in classification of fusion between the W4DApp method and that derived from flashlight W4D responses from 33 cm to 6 m (α = 0.817). There were no significant differences in success rates or in reliability between the W4DApp and the flashlight W4D methods for determining suppression zone size. W4DApp suppression zone size strongly correlated with that determined with the flashlight W4D (rho = 0.964, P < 0.001). W4DApp depth of suppression measures showed significantly higher success rates (χ2 = 5.128, P = 0.043) and reliability (intraclass correlation analysis = 0.901) but no significant correlation with the depth of suppression calculated by flashlight W4D and neutral density bar (rho = 0.301, P = 0.399) or contrast balance index (rho = -0.018, P = 0.958).
Conclusions: The W4DApp has potential clinical benefit in measuring suppression zone size; however, further modifications are required to improve validity of suppression depth measures.
Translational Relevance: W4DApp iOS application will be a convenient tool for clinical determination of suppression characteristics.
abstract_id: PUBMED:27426739
Visual outcomes after spectacles treatment in children with bilateral high refractive amblyopia. Purpose: The aim was to investigate the visual outcomes of treatment with spectacles for bilateral high refractive amblyopia in children three to eight years of age.
Methods: Children with previously untreated bilateral refractive amblyopia were enrolled. Bilateral high refractive amblyopia was defined as visual acuity (VA) being worse than 6/9 in both eyes in the presence of 5.00 D or more of hyperopia, 5.00 D or more of myopia and 2.00 D or more of astigmatism. Full myopic and astigmatic refractive errors were corrected, and the hyperopic refractive errors were corrected within 1.00 D of the full correction. All children received visual assessments at four-weekly intervals. VA, Worth four-dot test and Randot preschool stereotest were assessed at baseline and every four weeks for two years.
Results: Twenty-eight children with previously untreated bilateral high refractive amblyopia were enrolled. The mean VA at baseline was 0.39 ± 0.24 logMAR and it significantly improved to 0.21, 0.14, 0.11, 0.05 and 0.0 logMAR at four, eight, 12, 24 weeks and 18 months, respectively (all p = 0.001). The mean stereoacuity (SA) was 1,143 ± 617 arcsec at baseline and it significantly improved to 701, 532, 429, 211 and 98 arcsec at four, eight, 12, 24 weeks and 18 months, respectively (all p = 0.001). The time interval for VA achieving 6/6 was significantly shorter in the eyes of low spherical equivalent (SE) (-2.00 D < SE < +2.00 D) than in those of high SE (SE > +2.00 D) (3.33 ± 2.75 months versus 8.11 ± 4.56 months, p = 0.0005). All subjects had normal fusion on Worth four-dot test at baseline and all follow-up visits.
Conclusion: Refractive correction with good spectacles compliance improves VA and SA in young children with bilateral high refractive amblyopia. Patients with greater amounts of refractive error will achieve resolution of amblyopia with a longer time.
abstract_id: PUBMED:28234793
Validity of the Worth 4 Dot Test in Patients with Red-Green Color Vision Defect. Purpose: The Worth four dot test uses red and green glasses for binocular dissociation, and although it has been believed that patients with red-green color vision defects cannot accurately perform the Worth four dot test, this has not been validated. Therefore, the purpose of this study was to demonstrate the validity of the Worth four dot test in patients with congenital red-green color vision defects who have normal or abnormal binocular vision.
Methods: A retrospective review of medical records was performed on 30 consecutive congenital red-green color vision defect patients who underwent the Worth four dot test. The type of color vision anomaly was determined by the Hardy Rand and Rittler (HRR) pseudoisochromatic plate test, Ishihara color test, anomaloscope, and/or the 100 hue test. All patients underwent a complete ophthalmologic examination. Binocular sensory status was evaluated with the Worth four dot test and Randot stereotest. The results were interpreted according to the presence of strabismus or amblyopia.
Results: Among the 30 patients, 24 had normal visual acuity without strabismus nor amblyopia and 6 patients had strabismus and/or amblyopia. The 24 patients without strabismus nor amblyopia all showed binocular fusional responses by seeing four dots of the Worth four dot test. Meanwhile, the six patients with strabismus or amblyopia showed various results of fusion, suppression, and diplopia.
Conclusions: Congenital red-green color vision defect patients of different types and variable degree of binocularity could successfully perform the Worth four dot test. They showed reliable results that were in accordance with their estimated binocular sensory status.
abstract_id: PUBMED:8455128
Worth vs Polarized four-dot test. A direct comparison between the Worth four-dot (W4D) and Polarized four-dot (P4D) flashlights is reported in a randomized trial on 107 unselected patients greater than 2.5 years old. The primary outcome variable was the interpretable response rate. Secondary outcomes were response time and age of test failure. There were 29 patients who failed to complete the W4D test, but only 10 patients who could not complete the P4D test, giving interpretable response rates of 73% and 91%, respectively (p < .001). The P4D test was found to be less dissociative and easier to administer. It also had a higher detection rate for fusion. We recommend its use as a tool in the clinical evaluation of binocular sensorial states.
abstract_id: PUBMED:21149094
The "worth" of the worth four dot test. Worth's four dot test was first described one hundred years ago. Despite many technological advances in equipment and techniques during the last century, this simple test is still used routinely by many strabismus specialists. It is an invaluable test when used in the evaluation of longstanding and acquired strabismus in adults and in the management of complex diplopia. Techniques using the test include selecting an optimal prism, assessing the effect of a prescribed prism or compensatory head posture on the range of binocular single vision, identifying non-organic responses, diffe1rentiating monocular from binocular diplopia, especially when they co-exist, and blurred from double vision in older patients with divergence paresis. It also can be used with prisms preoperatively to determine the risks of postoperative diplopia and give clues to the presence of torsion or a visual field defect.
abstract_id: PUBMED:8965247
Bagolini lenses vs the Polarized Four-Dot test. The applicability and accuracy of two minimally dissociative sensory tests were compared in a pediatric ophthalmology clinic. The Polarized Four-Dot (P4D) test and Bagolini striated lenses (BAG) were used to evaluate 133 patients who were at least 3 years old. The outcomes measured were test failure rate and mean age at the test failures. Test sensitivity and specificity were determined using the distance vectograph as a reference. The failure rate for the BAG test, 14%, was significantly higher than that for the P4D test, 5% (P = .0005). The mean age of the test failures also was higher for the BAG test (4.7 vs 3.8 years). Both sensitivity and specificity for the detection of central suppression were greater using the P4D test. The P4D test is simple to use, widely applicable, and accurate in the detection of peripheral fusion, central fusion, or suppression, without the confounding colors of the Worth Four-Dot test. We recommended the P4D test as the test of choice for routine evaluation of binocular fusion status.
abstract_id: PUBMED:28889226
Stereopsis and fusion in anisometropia according to the presence of amblyopia. Purpose: To evaluate the level of stereopsis and fusion in patients with anisometropia according to the presence of amblyopia.
Methods: We included 107 children with anisometropia, divided into groups with non-amblyopic anisometropia (NA, n = 72) and amblyopic anisometropia (AA, n = 35). Normal subjects without anisometropia were enrolled in the control group (n = 73). Main outcome measures were the level of stereopsis and sensory fusion as evaluated by Titmus stereotest and Worth 4-dot test, respectively, using anisometropic glasses.
Results: The degree of anisometropia in the NA, AA, and control groups was 2.54 diopters (D), 4.29 D, and 0.30 D, respectively (P = 0.014). Stereopsis (arcsec) was significantly worse in the AA group than the NA and control groups (641.71, 76.25, 54.52, respectively, P < 0.001), while no significant difference was found between the NA and control groups. The rate of fusion was significantly lower in the AA than the NA group (14.3% vs. 65.3%, P < 0.001), and was significantly lower in the NA than the control group (65.3% vs. 80.6%, P = 0.001).
Conclusions: The levels of stereopsis and sensory fusion with anisometropic glasses were significantly worse in the AA than in the NA group. The level of stereopsis in the NA group, however, did not differ significantly from that in the isometropic control, while the rate of fusion was significantly lower. Early prescription of anisometropic glasses is needed to improve visual acuity and binocularity in children with possible amblyopic anisometropia.
abstract_id: PUBMED:30336552
Two Experiments on the Psychological and Physiological Effects of Touching-Effect of Touching on the HPA Axis-Related Parts of the Body on Both Healthy and Traumatized Experiment Participants. Two experiments were conducted to measure both the psychological and physiological effects of touching on the HPA axis related parts of the body. HPA stands for the hypothalamus, pituitary, and adrenal. One experiment was conducted with a group of healthy experiment participants, and another was with a group of traumatized participants who had Adverse Childhood Experiences (ACE). In the experiments, the back of an experiment participant was touched, where a kidney-adrenal was supposed to reside, and both the psychological and physiological effects were measured. As a result, respiratory sinus arrhythmia (RSA), an indicator of the parasympathetic nerve system function and, especially, an indicator of the social engagement system increased, by a statistically significant degree, as a consequence of HPA touching in both the healthy and the trauma group, in comparison with the control. The traumatized participants had a lower RSA, and this was increased by HPA touching, accompanied by a decrease of the heart rate. It is worth noting that the social engagement function was possibly enhanced by HPA touching, especially in the trauma group, whose members tend to have difficulty being pro-social. This touching method is very simple, so it can be administered not only by oneself, but also by psycho-therapists and body workers in order to enhance both psychological and physiological well-being.
Answer: No, "Touching Four" on the Worth 4-dot test does not reliably indicate fusion in young children. A study using a computer simulation of the Worth 4-dot test found that none of the children aged 32 to 48 months could accurately describe the images verbally, and alternate fixation could not be distinguished from fusion by asking the subjects to touch the dots. However, monocular suppression was identified accurately in all subjects. The conclusion was that touching four dots on the Worth 4-dot test does not distinguish fusion from alternate fixation in children with normal ocular alignment, which has implications for the diagnosis of monofixation syndrome and assessment of the response to a prism adaptation trial in young children (PUBMED:8764793). |
Instruction: Minimalist transcatheter aortic valve replacement: The new standard for surgeons and cardiologists using transfemoral access?
Abstracts:
abstract_id: PUBMED:26704061
Transfemoral transcatheter aortic valve insertion-related intraoperative morbidity: Implications of the minimalist approach. Objectives: Transfemoral transcatheter aortic valve insertion may be performed in a catheterization laboratory (ie, the minimalist approach). It seems reasonable when considering this approach to avoid it in patients at risk for intraoperative morbidity that would require surgical intervention. We hypothesized that it would be possible to associate baseline characteristics with such morbidity, which would help heart teams select patients for the minimalist approach.
Methods: We reviewed the records of 215 consecutive patients who underwent transfemoral transcatheter aortic valve insertion with a current commercially available device from November 2008 through July 2015. Demographic characteristics of the patients included a mean age of 78.9 ± 10.6 years, female sex in 73 patients (34.0%), and a mean Society of Thoracic Surgeons predicted risk of mortality of 8.7% ± 5.4%. Valve prostheses were balloon-expandable in 126 patients (58.6%) and self-expanding in 89 patients (41.4%).
Results: Significant intraoperative morbidity occurred in 22 patients (10.2%) and included major vascular injury in 12 patients (5.6%), hemodynamic compromise requiring cardiopulmonary bypass support in 4 patients (1.9%), cardiac tamponade requiring intervention in 3 patients (1.4%), ventricular valve embolization in 2 patients (0.9%), and inability to obtain percutaneous access requiring open vascular access in 1 patient (0.5%). Intraoperative morbidity was similarly distributed across all valve types (P = .556) and sheath sizes (P = .369). There were no baseline patient characteristics predictive of intraoperative morbidity.
Conclusions: Patient and valve characteristics are not predictive of significant intraoperative morbidity during transfemoral transcatheter aortic valve insertion. The finding has implications for patient selection for the minimalist approach.
abstract_id: PUBMED:26318351
Minimalist transcatheter aortic valve replacement: The new standard for surgeons and cardiologists using transfemoral access? Background: A minimalist approach for transcatheter aortic valve replacement (MA-TAVR) utilizing transfemoral access under conscious sedation and transthoracic echocardiography is increasing in popularity. This relatively novel technique may necessitate a learning period to achieve proficiency in performing a successful and safe procedure. This report evaluates our MA-TAVR cohort with specific characterization between our early, midterm, and recent experience.
Methods: We retrospectively reviewed 151 consecutive patients who underwent MA-TAVR with surgeons and interventionists equally as primary operator at Emory University between May 2012 and July 2014. Our institution had performed 300 TAVR procedures before implementation of MA-TAVR. Patient characteristics and early outcomes were compared using Valve Academic Research Consortium 2 definitions among 3 groups: group 1 included the first 50 patients, group 2 included patients 51 to 100, and group 3 included patients 101 to 151.
Results: Median age for all patients was 84 years and similar among groups. The majority of patients were men (56%) and the median ejection fraction for all patients was 55% (interquartile range, 38.0%-60.0%). The majority of patients were high-risk surgical candidates with a median Society of Thoracic Surgeons Predicted Risk of Mortality of 10.0% and similar among groups. The overall major stroke rate was 3.3%, major vascular complications occurred in 3% of patients, and greater-than-mild paravalvular leak rate was 7%. In-hospital mortality and morbidity were similar among all 3 groups.
Conclusions: In a high-volume TAVR center, transition to MA-TAVR is feasible with acceptable outcomes and a diminutive procedural learning curve. We advocate for TAVR centers to actively pursue the minimalist technique with equal representation by cardiologists and surgeons.
abstract_id: PUBMED:34154746
Single-center experience of 105 minimalist transfemoral transcatheter aortic valve replacements and their outcome. Introduction: Transcatheter aortic valve replacement (TAVR) is increasing worldwide, and indications are expanding from high-risk to low-risk aortic stenosis patients. Studies have shown that minimalistic TAVR done under conscious sedation is safe and effective. We report the single-operator, single-center outcomes of 105 minimalist transfemoral, conscious-sedation TAVR patients, analyzed retrospectively.
Methods: All patients underwent TAVR in cardiac catheterization lab via percutaneous transfemoral, conscious sedation approach. A dedicated cardiac anesthetist team delivered the conscious sedation with a standard protocol described in the main text. The outcomes were analyzed as per VARC-2 criteria and compared with the latest low-risk TAVR trials.
Results: A total of 105 patients underwent transcatheter aortic valve replacement between July 2016 and February 2020. The mean age of the population was 73 years, and the mean STS score was 3.99 ± 2.59. All patients underwent a percutaneous transfemoral approach. A self-expanding valve was used in 40% of cases and a balloon-expandable valve in 60% (Sapien3™ in 31% and MyVal™ in 29%). One patient required conversion to surgical aortic valve replacement. The success rate was 99%. The outcomes were: all-cause mortality 0.9%, stroke rate 1.9%, and new pacemaker rate 5.7%; 87.6% of patients had no paravalvular leak. Mild and moderate paravalvular leaks were seen in 2.8% and 1.9%, respectively. The mean gradient decreased from 47.5 mmHg to 9 mmHg. The average ICU stay was 26.4 h, and the average hospital stay was 5.4 days. Our outcomes are comparable with the latest published low-risk trial.
Conclusion: Minimalist, conscious sedation, transfemoral transcatheter aortic valve replacement when done following a standard protocol is safe and effective.
abstract_id: PUBMED:36282201
Standard Transfemoral Transcatheter Aortic Valve Replacement. The introduction of the transcatheter aortic valve implantation procedure has revolutionized the standards of care in patients with aortic valve pathologies and has significantly increased the quality of the medical treatment provided. The durability and constant technical improvements in the modern transcatheter aortic valve implantation procedure have broadened the indications towards younger patient groups with low-risk profiles. Therefore, transcatheter aortic valve implantation now represents an effective alternative for surgical aortic valve replacement in a large number of cases. Currently, various technical methods for the transcatheter aortic valve implantation procedure are available. The contemporary transcatheter aortic valve implantation procedure focuses on optimization of postoperative results and reduction of complications such as paravalvular leakage and permanent pacemaker implantation. Another goal of transcatheter aortic valve implantation is the achievement of a valid lifetime concept with secure coronary access and conditions for future valve-in-valve interventions. In this case report, we demonstrate a standard transfemoral transcatheter aortic valve implantation procedure with a self-expandable supra-annular device, one of the most commonly performed methods.
abstract_id: PUBMED:34660747
Transcatheter Aortic Valve Implantation: All Transfemoral? Update on Peripheral Vascular Access and Closure. Transfemoral access remains the most widely used peripheral vascular approach for transcatheter aortic valve implantation (TAVI). Despite technical improvement and reduction in delivery sheath diameters of all TAVI platforms, 10-20% of patients remain not eligible to transfemoral TAVI due to peripheral artery disease. In this review, we aim at presenting an update of recent data concerning transfemoral access and percutaneous closure devices. Moreover, we will review peripheral non-transfemoral alternative as well as caval-aortic accesses and discuss the important features to assess with pre-procedural imaging modalities before TAVI.
abstract_id: PUBMED:30174904
Non-transfemoral access sites for transcatheter aortic valve replacement. Transfemoral access is currently the standard and preferred access site for transcatheter aortic valve replacement (TAVR), though novel approaches are emerging to expand treatment options for the increasing numbers of patients with a contraindication for the traditional route. Previous publications have provided comparisons between two TAVR access sites, primarily transfemoral versus one of the novel approaches, while others have compared three or four novel approaches. The aim of this report is to provide a comprehensive summary of publications that analyse and compare the six non-transfemoral access sites currently described in the literature. These include the transapical, transaortic, axillary/subclavian, brachiocephalic, transcarotid, and transcaval approaches. Though there remains little consensus as to the superiority or non-inferiority of TAVR approaches, and there have yet to be randomized clinical trials to support published findings, with careful patient and procedural selection, outcomes for novel approaches have been reported to be comparable to standard transfemoral access when performed by skilled physicians. As such, choice of procedure is primarily based on registry data and the judgement of surgical teams as to which approach is best in each individual case. As TAVR continues to be an increasingly widespread treatment, the search for the optimal access site will grow, and focus should be placed on the importance of educating surgeons as to all possible approaches so they may review and choose the most appropriate technique for a given patient.
abstract_id: PUBMED:25228962
Percutaneous management of vascular access in transfemoral transcatheter aortic valve implantation. Transcatheter aortic valve implantation (TAVI) using stent-based bioprostheses has recently emerged as a promising alternative to surgical valve replacement in selected patients. The main route for TAVI is retrograde access from the femoral artery using large sheaths (16-24 F). Vascular access complications are a clinically relevant issue in TAVI procedures since they are reported to occur in up to one fourth of patients and are strongly associated with adverse outcomes. In the present paper, we review the different types of vascular access site complications associated with transfemoral TAVI. Moreover, we discuss the possible optimal management strategies with particular attention to the relevance of early diagnosis and prompt treatment using endovascular techniques.
abstract_id: PUBMED:35985946
Using Intravascular Lithotripsy to Facilitate Transfemoral Arterial Access for Transcatheter Aortic Valve Implantation. Peripheral vascular assessment is important in pre-procedural planning for transcatheter aortic valve implantation (TAVI). While alternative vascular access sites have been used in patients with hostile iliofemoral anatomy, femoral access has been established as the superior access method for procedural outcomes. Intravascular lithotripsy (IVL) can facilitate transfemoral access for TAVI in patients with calcific stenoses of the iliofemoral arteries. This How-To-Do-It article describes the procedural planning and methods for performing IVL in these patients.
abstract_id: PUBMED:29588766
Overcoming the Challenges of the Transfemoral Approach in Transcatheter Aortic Valve Implantation. Transcatheter aortic valve implantation (TAVI) is performed through a retrograde transfemoral approach in approximately 80-90 % of cases thanks to the improvements in delivery catheter profile, size and steerability compared with the first generation devices. The aim of this review article is to describe the challenges of transfemoral TAVI and the options to overcome them. The difficulties may be related to the access itself or the placement of the valve using the transfemoral route. Comprehensive patient screening using multislice computed tomography and crossover techniques to prevent bleeding should result in low complication rates even for fully percutaneous procedures. Horizontal ascending aorta and severely calcified aortic valves remain a challenge for retrograde valve crossing, device advancement and accurate positioning during deployment. The buddy balloon technique is a simple option in the case of difficult aortic valve crossing with a delivery catheter, whereas an antegrade approach using the transapical route is a reasonable alternative in such anatomy.
abstract_id: PUBMED:32814109
The best way to transcatheter aortic valve implantation: From standard to new approaches. Transcatheter aortic valve implantation (TAVI) is a safe and beneficial treatment for patients with severe symptomatic aortic stenosis at high and intermediate surgical risk. The safety of the procedure continues to improve thanks to more refined procedural approaches and devices but also, and above all, to the accrual of procedural knowledge and expertise by the operators. The diversification of approaches and the possibility of tailoring the treatment to the individual needs and anatomical features of patients allow a rapid learning curve, even in the management of complications. Indeed, there are several approaches with which TAVI can be carried out: transfemoral arterial, subclavian, transcarotid, transaortic, transaxillary, transapical, and through right anterior thoracotomy. Although transfemoral venous TAVI is less common, it has already been carried out using caval-aortic punctures. This field is rapidly evolving, and it will be of paramount importance for interventional cardiologists and cardiothoracic surgeons to keep up to date with further developments. This review intends to give an in-depth and updated overview of both conventional and innovative TAVI approaches, with the aim of highlighting the relevant advantages, major disadvantages, safety aspects, and techniques.
Answer: Based on the provided abstracts, it appears that the minimalist approach to transcatheter aortic valve replacement (MA-TAVR) using transfemoral access is gaining popularity and is being considered as a new standard for surgeons and cardiologists. This approach utilizes conscious sedation and transthoracic echocardiography, and the transition to it has been shown to be feasible, with acceptable outcomes and a diminutive procedural learning curve in high-volume TAVR centers (PUBMED:26318351). Additionally, a single-center series of 105 minimalist, conscious sedation, transfemoral TAVR procedures reported the approach to be safe and effective when a standard protocol is followed, with outcomes comparable to recent low-risk TAVR trials (PUBMED:34154746).
Furthermore, the transfemoral approach remains the most widely used peripheral vascular approach for TAVR, and despite technical improvements, a significant proportion of patients are not eligible for transfemoral TAVR due to peripheral artery disease (PUBMED:34660747). However, for those who can undergo the procedure via this route, femoral access has been established as the superior method for procedural outcomes (PUBMED:35985946). The challenges of the transfemoral approach can be overcome with comprehensive patient screening, and the use of crossover techniques and intravascular lithotripsy (IVL) can facilitate access in patients with calcific stenoses of the iliofemoral arteries (PUBMED:29588766, PUBMED:35985946).
In summary, the minimalist approach to transfemoral TAVR is being advocated as a new standard due to its feasibility, safety, and effectiveness, with a minimal learning curve for high-volume centers and comparable outcomes to traditional TAVR methods. However, patient selection is crucial, and the approach may not be suitable for all patients, particularly those with significant peripheral artery disease (PUBMED:26318351, PUBMED:34154746, PUBMED:34660747, PUBMED:35985946, PUBMED:29588766).
Instruction: Cardiac effects of electrical stun guns: does position of barbs contact make a difference?
Abstracts:
abstract_id: PUBMED:18373757
Cardiac effects of electrical stun guns: does position of barbs contact make a difference? Background: The use of electrical stun guns has been rising among law enforcement authorities for subduing violent subjects. Multiple reports have raised concerns over their safety. The cardiovascular safety profile of these devices in relationship to the position of delivery on the torso has not been well studied.
Methods: We tested 13 adult pigs using a custom device built to deliver neuromuscular incapacitating (NMI) discharge of increasing intensity that matched the waveform of a commercially available stun gun (TASER(R) X-26, TASER International, Scottsdale, AZ, USA). Discharges with increasing multiples of output capacitances were applied in a step-up and step-down fashion, using two-tethered barbs at five locations: (1) Sternal notch to cardiac apex (position-1), (2) sternal notch to supraumbilical area (position-2), (3) sternal notch to infraumbilical area (position-3), (4) side to side on the chest (position-4), and (5) upper to lower mid-posterior torso (position-5). Endpoints included determination of maximum safe multiple (MaxSM), ventricular fibrillation threshold (VFT), and minimum ventricular fibrillation induction multiple (MinVFIM).
Results: Standard TASER discharges repeated three times did not cause ventricular fibrillation (VF) at any of the five locations. When the barbs were applied in the axis of the heart (position-1), MaxSM and MinVFIM were significantly lower than when applied away from the heart, on the dorsum (position-5) (4.31 +/- 1.11 vs 40.77 +/- 9.54, P< 0.001 and 8.31 +/- 2.69 vs 50.77 +/- 9.54, P< 0.001, respectively). The values of these endpoints at position-2, position-3, and position-4 were progressively higher and ranged in between those of position-1 and position-5. Presence of ventricular capture at a 2:1 ratio to the delivered TASER impulses correlated with induction of VF. No significant metabolic changes were seen after standard NMI TASER discharge. There was no evidence of myocardial damage based on serum cardiac markers, electrocardiography, echocardiography, and histopathologic findings confirming the absence of significant cardiac effects.
Conclusions: Standard TASER discharges did not cause VF at any of the positions. Induction of VF at higher output multiples appear to be sensitive to electrode distance from the heart, giving highest ventricular fibrillation safety margin when the electrodes are placed on the dorsum. Rapid ventricular capture appears to be a likely mechanism of VF induction by higher output TASER discharges.
abstract_id: PUBMED:38041825
Traumatic injuries by conducted electrical weapons: Case report of self-injury to the hand during stun gun training. The use of non-lethal weapons has spread worldwide, being introduced as an alternative to firearms in many countries such as the United States or the United Kingdom. Among non-lethal weapons, conducted electrical weapons have been adopted worldwide to control unruly suspected criminals or to neutralise violent situations. The stun gun belongs to this category and is the most widely available, with more than 140,000 units in use by police officers in the field in the US, and an additional 100,000 electrical stun guns owned by civilians worldwide. In Italy, the use of conducted electrical weapons by law enforcement has only recently been introduced, with private use and commercialisation still prohibited, mainly due to controversies related to the potential dangers of such devices. Before the official adoption, several experiments had to be carried out with mechanisms that reproduced the ballistics of the stun gun. Here we present the case of a man who suffered a self-inflicted injury to his hand during a ballistics exercise with a crossbow loaded with stun gun probes.
abstract_id: PUBMED:18450834
Cardiac stimulation with high voltage discharge from stun guns. The ability of an electrical discharge to stimulate the heart depends on the duration of the pulse, the voltage and the current density that reaches the heart. Stun guns deliver very short electrical pulses with minimal amount of current at high voltages. We discuss external stimulation of the heart by high voltage discharges and review studies that have evaluated the potential of stun guns to stimulate cardiac muscle. Despite theoretical analyses and animal studies which suggest that stun guns cannot and do not affect the heart, 3 independent investigators have shown cardiac stimulation by stun guns. Additional research studies involving people are needed to resolve the conflicting theoretical and experimental findings and to aid in the design of stun guns that are unable to stimulate the heart.
abstract_id: PUBMED:30489364
Electrical Stun Gun and Modern Implantable Cardiac Stimulators. The aim of the study is to investigate systematically the possible interactions between two types of stun guns and last-generation pacemakers and implantable defibrillators. Experimental measurements were performed on pacemakers and implantable defibrillators from five leading manufacturers, considering the effect of stun gun dart positioning, sensing modality, stun gun shock duration, and defibrillation energy level. More than 300 measurements were collected. No damage or permanent malfunction was observed in either pacemakers or implantable defibrillators. During the stun gun shock, most of the pacemakers entered into the noise reversion mode. However, complete inhibition of the pacing activity was also observed in some of the pacemakers and in all the implantable defibrillators. In implantable defibrillators, standard stun gun shock (duration 5 s) caused the detection of a shockable rhythm and the start of a charging cycle. Prolonged stun gun shocks (10-15 s) triggered the inappropriate delivery of defibrillation therapy in all the implantable defibrillators tested. Also in this case, no damage or permanent malfunction was observed. For pacemakers, in most cases, the stun guns caused them either to switch to the noise reversion mode or to exhibit partial or total pacing inhibition. For implantable defibrillators, in all cases, the stun guns triggered a ventricular fibrillation event detection. No risks resulted when the stun gun was used by a person wearing a pacemaker or an implantable defibrillator. This work provides novel and up-to-date evidence useful for the evaluation of risks to pacemaker/implantable defibrillator wearers due to stun guns.
abstract_id: PUBMED:33086265
Electrical Stun Gun and Modern Implantable Cardiac Stimulators: Update for a New Stun Gun Model. Abstract: In 2017, the Italian National Institute of Health conducted a study to evaluate the potential risks of Conducted Electrical Weapons (CEW, AKA "stun guns") for users bearing a pacemaker (PM) or an implantable cardioverter defibrillator (ICD). The study addressed two specific models of stun guns: the TASER model X2 and AXON model X26P. In 2019, the same experimental protocol and testing procedure was adopted to evaluate the risk for another model of stun gun, the MAGEN model 5 (MAGEN, Israel). The MAGEN 5 differs from the previous stun guns tested in terms of peak voltage generated, duration of the shock, and trigger modality for repeated shocks. This note is an update of the previous study results, including the measurements on the MAGEN 5 stun gun. Despite the differences between the stun gun models, the effects on the PM/ICD behavior were the same as previously observed for the TASER stun guns.
abstract_id: PUBMED:29992405
Non-contact mapping in cardiac electrophysiology. Catheter ablation of atrial and ventricular arrhythmias is now considered a standard technology for selected patients. In some patients, however, cure of the arrhythmia is hampered by the complexity of the arrhythmia or the way the arrhythmia presents in the electrophysiological laboratory: some focal atrial and ventricular arrhythmias are difficult to induce using electrical stimulation or medical provocation. Precise mapping of these arrhythmias is challenging or even impossible by contact mapping, while other arrhythmias are poorly tolerated and need early termination.In these scenarios, use of non-contact mapping technology can be an alternative to conventional mapping, since isopotential maps may require no more than one ectopic beat identical with the clinical focal arrhythmia to reconstruct its endocardial origin. This review article presents the technology of non-contact cardiac mapping, as well as various arrhythmias that have been successfully treated using this technology in the past. The possibilities and limitations of using non-contact cardiac mapping under various conditions are also presented.
abstract_id: PUBMED:19964798
Electrical parameters of projectile stun guns. Projectile stun guns have been developed as less-lethal devices that law enforcement officers can use to control potentially violent subjects, as an alternative to using firearms. These devices apply high voltage, low amperage, pulsatile electric shocks to the subject, which causes involuntary skeletal muscle contraction and renders the subject unable to further resist. In field use of these devices, the electric shock is often applied to the thorax, which raises the issue of cardiac safety of these devices. An important determinant of the cardiac safety of these devices is their electrical output. Here the outputs of three commercially available projectile stun guns were evaluated with a resistive load and in a human-sized animal model (a 72 kg pig).
abstract_id: PUBMED:33172120
Enhancing Electrical Contact with a Commercial Polymer for Electrical Resistivity Tomography on Archaeological Sites: A Case Study. This communication reports an improvement in the quality of the electrical data obtained from applying the electrical resistivity tomography method in archaeological studies. The electrical contact between ground and electrode is significantly enhanced by using a carbomer-based gel during the electrical resistivity tomography measurements. Not only does the gel promote the conservation of the building surface under investigation, but it also virtually eliminates the necessity of conventional spike electrodes, which in many archaeological studies are inadequate or not permitted. Results evidenced an enhancement in the quality of the electrical data on the order of thousands of units compared with measurements made without the carbomer-based gel. The potential and capabilities of this affordable gel make it appropriate to be applied to other geoelectrical studies beyond archaeological investigations. Moreover, it might solve the corrosion issues on conventional spike electrodes and electrical multicore cables usually provoked by the saltwater added in attempts to improve the electrical contact.
abstract_id: PUBMED:28903275
Effect of Electrical Contact on the Contact Residual Stress of a Microrelay Switch. This paper investigates the effect of electrical contact on the thermal contact stress of a microrelay switch. A three-dimensional elastic-plastic finite element model with contact elements is used to simulate the contact behavior between the microcantilever beam and the electrode. A model with thermal-electrical coupling and thermal-stress coupling is used in the finite element analysis. The effects of contact gap, plating film thickness and number of switching cycles on the contact residual stress, contact force, plastic deformation, and temperature rise of the microrelay switch are explored. The numerical results indicate that the residual stress increases with increasing contact gap or decreasing plating film thickness. The results also show that the residual stress increases as the number of switching cycles increases. A large residual stress inside the microcantilever beam can decrease the lifecycle of the microrelay.
abstract_id: PUBMED:38405686
Investigating the Impact of Stress on Pain: A Scoping Review on Sense of Control, Social-Evaluative Threat, Unpredictability, and Novelty (STUN Model). Background: Stress can have paradoxical effects on pain, namely hyperalgesia and hypoalgesia. Four situational characteristics activate the hypothalamic-pituitary-adrenal axis, leading to a physiological stress response: lacking Sense of control, social-evaluative Threat, Unpredictability and Novelty (STUN). This scoping review reports on the types of evidence published on the effects of STUN characteristics on pain outcomes.
Databases/data Treatment: Searches of primary electronic databases were performed to identify articles published on adults between 1990 and 2021 that contained search terms on pain and stress/STUN characteristics. A total of 329 articles were included in the analysis.
Results: Only 3.3% of studies examined simultaneously >1 STUN component. Almost all observational studies (177/180) examined the association between perceived stress and pain without measuring physiological stress responses. Of the 130 experimental studies, 78 (60.0%) manipulated stressful characteristics through nociception, and only 38.5% assessed if/how stress manipulation impacted perceived stress.
Conclusion: There is a clear lack of integration of the characteristics that trigger a physiological stress response in the pain field. Only 3.3% of studies examined more than one STUN component simultaneously, and unequal attention is given to individual components of the STUN framework. Recommendations for future research include selecting stress manipulations/measurements that more precisely induce/reflect the neurobiological mechanisms of stress responses, to ensure valid integration of scientific knowledge.
Answer: Yes, the position of barbs contact does make a difference in the cardiac effects of electrical stun guns. A study on adult pigs using a device that matched the waveform of a commercially available stun gun (TASER X-26) found that when the barbs were applied in the axis of the heart (position-1), the maximum safe multiple (MaxSM) and minimum ventricular fibrillation induction multiple (MinVFIM) were significantly lower than when applied away from the heart, on the dorsum (position-5). This suggests that the induction of ventricular fibrillation (VF) at higher output multiples is sensitive to electrode distance from the heart, with the highest ventricular fibrillation safety margin when the electrodes are placed on the dorsum. Standard TASER discharges did not cause VF at any of the positions tested, and there was no evidence of myocardial damage based on various assessments including serum cardiac markers and histopathologic findings (PUBMED:18373757).
Additional research involving people is needed to resolve conflicting theoretical and experimental findings and to aid in the design of stun guns that are unable to stimulate the heart (PUBMED:18450834). However, it is clear from the available evidence that the position of barb contact is a significant factor in the cardiac safety profile of electrical stun guns.
Instruction: Is lower 30-day mortality posthospital admission among blacks unique to the Veterans Affairs health care system?
Abstracts:
abstract_id: PUBMED:18049349
Is lower 30-day mortality posthospital admission among blacks unique to the Veterans Affairs health care system? Background: Several studies have reported lower risk-adjusted mortality for blacks than whites within the Veterans Affairs (VA) health care system, particularly for those age 65 and older. This finding may be a result of the VA's integrated health care system, which reduces barriers to care through subsidized comprehensive health care services. However, no studies have directly compared racial differences in mortality within 30 days of hospitalization between the VA and non-VA facilities in the US health care system.
Objective: To compare risk-adjusted 30-day mortality for black and white males after hospital admission to VA and non-VA hospitals, with separate comparisons for patients younger than age 65 and those age 65 and older.
Research Design: Retrospective observational study using hospital claims data from the national VA system and all non-VA hospitals in Pennsylvania and California.
Subjects: A total of 369,155 VA and 1,509,891 non-VA hospitalizations for a principal diagnosis of pneumonia, congestive heart failure, gastrointestinal bleeding, hip fracture, stroke, or acute myocardial infarction between 1996 and 2001.
Measures: Mortality within 30 days of hospital admission.
Results: Among those under age 65, blacks in VA and non-VA hospitals had similar odds ratios of 30-day mortality relative to whites for gastrointestinal bleeding, hip fracture, stroke, and acute myocardial infarction. Among those age 65 and older, blacks in both VA and non-VA hospitals had significantly reduced odds of 30-day mortality compared with whites for all conditions except pneumonia in the VA. The differences in mortality by race are remarkably similar in VA and non-VA settings.
Conclusions: These findings suggest that factors associated with better short-term outcomes for blacks are not unique to the VA.
abstract_id: PUBMED:11176839
Racial differences in mortality among men hospitalized in the Veterans Affairs health care system. Context: Racial disparities in health care delivery and outcomes may be due to differences in health care access and, therefore, may be mitigated in an equal-access health care system. Few studies have examined racial differences in health outcomes in such a system.
Objective: To study racial differences in mortality among patients admitted to hospitals in the Veterans Affairs (VA) system, a health care system that potentially offers equal access to care.
Design, Setting, And Participants: Cohort study of 28 934 white and 7575 black men admitted to 147 VA hospitals for 1 of 6 common medical diagnoses (pneumonia, angina, congestive heart failure, chronic obstructive pulmonary disease, diabetes, and chronic renal failure) between October 1, 1995, and September 30, 1996.
Main Outcome Measures: The primary outcome measure was 30-day mortality among black compared with white patients. Secondary outcome measures were in-hospital mortality and 6-month mortality.
Results: Overall mortality at 30 days was 4.5% in black patients and 5.8% in white patients (relative risk [RR], 0.77; 95% confidence interval [CI], 0.69-0.87; P =.001). Mortality was lower among blacks for each of the 6 medical diagnoses. Multivariate adjustment for patient and hospital characteristics had a small effect (RR, 0.75; 95% CI, 0.66-0.85; P<.001). Black patients also had lower adjusted in-hospital and 6-month mortality. These findings were consistent among all subgroups evaluated.
Conclusions: Black patients admitted to VA hospitals with common medical diagnoses have lower mortality rates than white patients. The survival advantage of black patients is not readily explained; however, the absence of a survival disadvantage for blacks may reflect the benefits of equal access to health care and the quality of inpatient treatment at VA medical centers.
abstract_id: PUBMED:22825806
Intensive care unit admitting patterns in the Veterans Affairs health care system. Background: Critical care resource use accounts for almost 1% of US gross domestic product and varies widely among hospitals. However, we know little about the initial decision to admit a patient to the intensive care unit (ICU).
Methods: To describe hospital ICU admitting patterns for medical patients after accounting for severity of illness on admission, we performed a retrospective cohort study of the first nonsurgical admission of 289,310 patients admitted from the emergency department or the outpatient clinic to 118 Veterans Affairs acute care hospitals between July 1, 2009, and June 30, 2010. Severity (30-day predicted mortality rate) was measured using a modified Veterans Affairs ICU score based on laboratory data and comorbidities around admission. The main outcome measure was direct admission to an ICU.
Results: Of the 31,555 patients (10.9%) directly admitted to the ICU, 53.2% had 30-day predicted mortality at admission of 2% or less. The rate of ICU admission for this low-risk group varied from 1.2% to 38.9%. For high-risk patients (predicted mortality >30%), ICU admission rates also varied widely. For a 1-SD increase in predicted mortality, the adjusted odds of ICU admission varied substantially across hospitals (odds ratio = 0.85-2.22). As a result, 66.1% of hospitals were in different quartiles of ICU use for low- vs high-risk patients (weighted κ = 0.50).
Conclusions: The proportion of low- and high-risk patients admitted to the ICU, variation in ICU admitting patterns among hospitals, and the sensitivity of hospital rankings to patient risk all likely reflect a lack of consensus about which patients most benefit from ICU admission.
abstract_id: PUBMED:35173019
Mortality among US veterans after emergency visits to Veterans Affairs and other hospitals: retrospective cohort study. Objective: To measure and compare mortality outcomes between dually eligible veterans transported by ambulance to a Veterans Affairs hospital and those transported to a non-Veterans Affairs hospital.
Design: Retrospective cohort study using data from medical charts and administrative files.
Setting: Emergency visits by ambulance to 140 Veteran Affairs and 2622 non-Veteran Affairs hospitals across 46 US states and the District of Columbia in 2001-18.
Participants: National cohort of 583 248 veterans (aged ≥65 years) enrolled in both the Veterans Health Administration and Medicare programs, who resided within 20 miles of at least one Veterans Affairs hospital and at least one non-Veterans Affairs hospital, in areas where ambulances regularly transported patients to both types of hospitals.
Intervention: Emergency treatment at a Veterans Affairs hospital.
Main Outcome Measure: Deaths in the 30 day period after the ambulance ride. Linear probability models of mortality were used, with adjustment for patients' demographic characteristics, residential zip codes, comorbid conditions, and other variables.
Results: Of 1 470 157 ambulance rides, 231 611 (15.8%) went to Veterans Affairs hospitals and 1 238 546 (84.2%) went to non-Veterans Affairs hospitals. The adjusted mortality rate at 30 days was 20.1% lower among patients taken to Veterans Affairs hospitals than among patients taken to non-Veterans Affairs hospitals (9.32 deaths per 100 patients (95% confidence interval 9.15 to 9.50) v 11.67 (11.58 to 11.76)). The mortality advantage associated with Veterans Affairs hospitals was particularly large for patients who were black (-25.8%), were Hispanic (-22.7%), and had received care at the same hospital in the previous year.
Conclusions: These findings indicate that within a month of being treated with emergency care at Veterans Affairs hospitals, dually eligible veterans had substantially lower risk of death than those treated at non-Veterans Affairs hospitals. The nature of this mortality advantage warrants further investigation, as does its generalizability to other types of patients and care. Nonetheless, the finding is relevant to assessments of the merit of policies that encourage private healthcare alternatives for veterans.
abstract_id: PUBMED:20650356
Divergent trends in survival and readmission following a hospitalization for heart failure in the Veterans Affairs health care system 2002 to 2006. Objectives: This study sought to determine recent trends over time in heart failure hospitalization, patient characteristics, treatment, rehospitalization, and mortality within the Veterans Affairs health care system.
Background: Use of recommended therapies for heart failure has increased in the U.S. However, it is unclear to what extent hospitalization rates and the associated mortality have improved.
Methods: We compared rates of hospitalization for heart failure, 30-day rehospitalization for heart failure, and 30-day mortality following discharge from 2002 to 2006 in the Veterans Affairs Health Care System. Odds ratios for outcome were adjusted for patient diagnoses within the past year, laboratory data, and for clustering of patients within hospitals.
Results: We identified 50,125 patients with a first hospitalization for heart failure from 2002 to 2006. Mean age did not change (70 years), but increases were noted for most comorbidities (mean Charlson score increased from 1.72 to 1.89, p < 0.0001). Heart failure admission rates remained constant at about 5 per 1,000 veterans. Mortality at 30 days decreased (7.1% to 5.0%, p < 0.0001), whereas rehospitalization for heart failure at 30 days increased (5.6% to 6.1%, p = 0.11). After adjustment for patient characteristics, the odds ratio for 2006 (vs. 2002) was 0.54 (95% confidence interval [CI]: 0.47 to 0.61) for mortality, but 1.21 (95% CI: 1.04 to 1.41) for heart failure rehospitalization at 30 days.
Conclusions: Recent mortality and rehospitalization rates in the Veterans Affairs Health Care System have trended in opposite directions. These results have implications for using rehospitalization as a measure of quality of care.
abstract_id: PUBMED:14567278
Can the Veterans Affairs health care system continue to care for the poor and vulnerable? Can the Veterans Affairs (VA) health care system, long an important part of the safety net for disabled and poor veterans, survive the loss of World War II veterans--once its largest constituency and still its most important advocates? A recent shift in emphasis from acute hospital-based care to care of chronic illness in outpatient settings, as well as changes in eligibility allowing many more nonpoor and nondisabled veterans into the VA system, will be key determinants of long-term survivability. Although allowing less needy veterans into the system runs the risk of diluting services to those most in need, the long-run effect may be to increase support among a larger and younger group of veterans, thereby enhancing political clout and ensuring survivability. It may be that the best way to maintain the safety net for veterans is to continue to cast it more widely.
abstract_id: PUBMED:17610440
Is thirty-day hospital mortality really lower for black veterans compared with white veterans? Objective: To examine the source of observed lower risk-adjusted mortality for blacks than whites within the Veterans Affairs (VA) system by accounting for hospital site where treated, potential under-reporting of black deaths, discretion on hospital admission, quality improvement efforts, and interactions by age group.
Data Sources: Data are from the VA Patient Treatment File on 406,550 hospitalizations of veterans admitted with a principal diagnosis of acute myocardial infarction, stroke, hip fracture, gastrointestinal bleeding, congestive heart failure, or pneumonia between 1996 and 2002. Information on deaths was obtained from the VA Beneficiary Identification Record Locator System and the National Death Index.
Study Design: This was a retrospective observational study of hospitalizations throughout the VA system nationally. The primary outcome studied was all-location mortality within 30 days of hospital admission. The key study variable was whether a patient was black or white.
Principal Findings: For each of the six study conditions, unadjusted 30-day mortality rates were significantly lower for blacks than for whites (p<.01). These results did not vary after adjusting for hospital site where treated or after more complete ascertainment of deaths, nor when comparing conditions for which hospital admission is discretionary versus non-discretionary. There were also no significant changes in the degree of difference in mortality by race following quality improvement efforts within the VA. Risk-adjusted mortality was consistently lower for blacks than for whites only within the population of veterans over age 65.
Conclusions: Black veterans have significantly lower 30-day mortality than white veterans for six common, high severity conditions, but this is generally limited to veterans over age 65. This differential by age suggests that it is unlikely that lower 30-day mortality rates among blacks within VA are driven by treatment differences by race.
abstract_id: PUBMED:9674628
Female veterans' use of Department of Veterans Affairs health care services. Objectives: As access of women to mental health services has become increasingly important, empirical research has begun to examine the determinants of mental health care utilization across gender. This article examines the effect of being an extreme minority on utilization of Department of Veterans Affairs (VA) health services by female veterans.
Methods: Data were collected on a representative national sample of veterans in 1992 as part of the National Survey of Veterans. These data included information on sociodemographic variables, military service variables, physical health and disability, and health services utilization. The authors examined whether women who used health services in 1992, and who were eligible for VA care, differed from men on the likelihood of using any VA health services and on the likelihood of use of VA outpatient and inpatient health services. In addition, we compared VA health care utilization among subgroups of veterans with physical and mental disorders, and compared self-reported reasons for choice of health care provider, across gender.
Results: Results indicated that female veterans were less likely than male veterans to use VA health services. This difference was explained by lower utilization by women of VA outpatient services, since inpatient admission rates were the same across gender. The lower outpatient utilization was specific to women with self-reported mental disorders. Women with physical conditions did not differ from men with similar conditions in their VA outpatient utilization. Finally, men and women did not differ on their reasons for choosing VA or non-VA care.
Conclusions: The authors conclude that extreme gender minority status appears to affect outpatient utilization rates at the VA among women with mental disorders, perhaps because of the more personal or sensitive nature of the services involved. Further research is needed to understand why certain women may be underutilizing VA outpatient services and on the consequences of minority gender status for health service utilization, more generally.
abstract_id: PUBMED:32936253
Trends in Readmission and Mortality Rates Following Heart Failure Hospitalization in the Veterans Affairs Health Care System From 2007 to 2017. Importance: The Centers for Medicare & Medicaid Services and the Veterans Affairs Health Care System provide incentives for hospitals to reduce 30-day readmission and mortality rates. In contrast with the large body of evidence describing readmission and mortality in the Medicare system, it is unclear how heart failure readmission and mortality rates have changed during this period in the Veterans Affairs Health Care System.
Objectives: To evaluate trends in readmission and mortality after heart failure admission in the Veterans Affairs Health Care System, which had no financial penalties, in a decade involving focus on heart failure readmission reduction (2007-2017).
Design, Setting, And Participants: This cohort study used data from all Veterans Affairs-paid heart failure admissions from January 2007 to September 2017. All Veterans Affairs-paid hospital admissions to Veterans Affairs and non-Veterans Affairs facilities for a primary diagnosis of heart failure were included, when the admission was paid for by the Veterans Affairs. Data analyses were conducted from October 2018 to March 2020.
Exposures: Admission for a primary diagnosis of heart failure at discharge.
Main Outcomes And Measures: Thirty-day all-cause readmission and mortality rates.
Results: A total of 164 566 patients with 304 374 hospital admissions were included. Among the 304 374 hospital admissions between 2007 and 2017, 298 260 (98.0%) were for male patients, and 195 205 (64.4%) were for white patients. The mean (SD) age was 70.8 (11.5) years. The adjusted odds ratio of 30-day readmission declined throughout the study period to 0.85 (95% CI, 0.83-0.88) in 2015 to 2017 compared with 2007 to 2008. The adjusted odds ratio of 30-day mortality remained stable, with an adjusted odds ratio of 1.01 (95% CI, 0.96-1.06) in 2015 to 2017 compared with 2007 to 2008. Stratification by left ventricular ejection fraction showed similar readmission reduction trends and no significant change in mortality, regardless of strata.
Conclusions And Relevance: In this analysis of an integrated health care system that provided guidance and nonfinancial incentives for reducing readmissions, such as public reporting of readmission rates, risk-adjusted 30-day readmission declined despite inclusion of clinical variables in risk adjustment, but mortality did not decline. Future investigations should focus on evaluating the effectiveness of specific approaches to readmission reduction to inform efficient and effective application in individual health systems, hospitals, and practices.
abstract_id: PUBMED:27650421
Did Massachusetts Health Reform Affect Veterans Affairs Primary Care Use? Massachusetts Health Reform (MHR), implemented in 2006, introduced new health insurance options that may have prompted some veterans already enrolled in the Veterans Affairs Healthcare System (VA) to reduce their reliance on VA health services. This study examined whether MHR was associated with changes in VA primary care (PC) use. Using VA administrative data, we identified 147,836 veterans residing in Massachusetts and neighboring New England (NE) states from October 2004 to September 2008. We applied difference-in-difference methods to compare pre-post changes in PC use among Massachusetts and other NE veterans. Among veterans not enrolled in Medicare, VA PC use was not significantly different following MHR for Massachusetts veterans relative to other NE veterans. Among VA-Medicare dual enrollees, MHR was associated with an increase of 24.5 PC visits per 1,000 veterans per quarter ( p = .048). Despite new non-VA health options through MHR, VA enrollees continued to rely on VA PC.
Answer: No, lower 30-day mortality posthospital admission among blacks is not unique to the Veterans Affairs (VA) health care system. Studies have shown that the lower risk-adjusted mortality for blacks compared to whites is observed both within the VA health care system and in non-VA hospitals. A retrospective observational study using hospital claims data from the national VA system and all non-VA hospitals in Pennsylvania and California found that among those under age 65, blacks in VA and non-VA hospitals had similar odds ratios of 30-day mortality relative to whites for several conditions. Among those age 65 and older, blacks in both VA and non-VA hospitals had significantly reduced odds of 30-day mortality compared with whites for all conditions except pneumonia in the VA. The differences in mortality by race were remarkably similar in VA and non-VA settings, suggesting that factors associated with better short-term outcomes for blacks are not unique to the VA (PUBMED:18049349).
Additionally, another study found that black patients admitted to VA hospitals with common medical diagnoses have lower mortality rates than white patients, and this survival advantage is not readily explained by differences in health care access, as the VA system potentially offers equal access to care. The study also noted that the absence of a survival disadvantage for blacks may reflect the benefits of equal access to health care and the quality of inpatient treatment at VA medical centers (PUBMED:11176839).
These findings indicate that the lower 30-day mortality for blacks is not an outcome exclusive to the VA health care system but is also present in non-VA settings, suggesting that other factors beyond the specific health care system may contribute to these observed racial differences in mortality outcomes.
Instruction: Do risk factors for childhood infections and malnutrition protect against asthma?
Abstracts:
abstract_id: PUBMED:14600053
Do risk factors for childhood infections and malnutrition protect against asthma? A study of Brazilian male adolescents. Objectives: We studied the association between early life conditions and asthma in adolescence.
Methods: We conducted a population-based birth cohort study involving 2250 male 18-year-olds residing in Brazil.
Results: Approximately 18% of the adolescents reported having asthma. Several childhood factors were found to be significantly associated with increased asthma risk: being of high socioeconomic status, living in an uncrowded household, and being breastfed for 9 months or longer.
Conclusions: The present results are consistent with the "hygiene hypothesis," according to which early exposure to infections provides protection against asthma. The policy implications of our findings are unclear given that risk factors for asthma protect against serious childhood diseases in developing countries.
abstract_id: PUBMED:1843723
Risk factors for infection after heart surgery. A consecutive series of 84 patients operated on by the same surgical team was studied to identify risk factors for postoperative infection. Female sex and longer antibiotic prophylaxis were significantly associated with higher infection risk; the risk of dying of infection was more pronounced among infected men than among infected women. Diabetes, undernutrition, low serum albumin on the first or fifth postoperative day, and a history of asthma or pneumonia did not correlate with infection risk, nor did antibiotic use before surgery or longer preoperative hospitalization.
abstract_id: PUBMED:17385469
Main risk factors of psychosomatic diseases. Aim: To identify the main risk factors (RF) for psychosomatic diseases and to design prediction models for each nosological entity.
Material And Methods: A standard linear regression model was used to design models predicting the risk of each psychosomatic disease. The study included 482 patients with arterial hypertension (AH, n = 96), ischemic heart disease (IHD, n = 99), duodenal ulcer (DU, n = 60), bronchial asthma (BA, n = 52), and diabetes mellitus type 1 (DM-1, n = 84) and type 2 (DM-2, n = 91).
Results: A stress factor was essential in the onset of all the diseases and was recorded in more than 90% of cases. The other leading prognostic RF were: in AH, heredity, age, physical inactivity, and a history of craniocerebral trauma; in IHD, AH, age over 50, dyslipidemia, and behavior pattern 1; in DU, alcoholism, malnutrition, and Helicobacter pylori infection; in BA, reactions to atopic and infectious allergens, physical and meteorological factors, and allergic diseases; in DM-1, heredity and prior viral infections; in DM-2, obesity, IHD, AH, and dyslipidemia.
Conclusion: A stress factor plays an important role in the development of all the diseases studied. This confirms the psychosomatic nature of these diseases and points to the need for psychotherapeutic and psychological support for such patients at early stages of the disease. The proposed models for predicting the risk of psychosomatic diseases can estimate the probability of disease onset.
abstract_id: PUBMED:9844332
Acute respiratory disease survey in Tripura in case of children below five years of age. This epidemiological study was carried out in urban and rural areas of West Tripura district to determine the incidence, causes, risk factors, morbidity, and mortality associated with acute respiratory infection (ARI), and the impact of simple case management, in children under 5 years of age. The annual attack rate (episodes per child) was higher in the urban area than in the rural area. The monthly incidence of ARI was 23% in the urban area and 17.65% in the rural area; the overall incidence was 20%. The incidence of pneumonia was 16 per 1000 children in the urban area and 5 per 1000 in the rural area, and was highest in the infant group; 3% of ARI cases in the rural area and 7% in the urban area developed pneumonia. Malnutrition was found in 54% of children in the urban area and 65% in the rural area, and malnourished children had a higher likelihood of developing respiratory infection, with a relative risk (RR) of 2.3 for pneumonia. Most children (59%) had previously been immunised with measles and diphtheria, pertussis and tetanus (DPT) vaccines; immunisation had a protective role against pneumonia, with an RR of 2.7 in the non-immunised group. Air pollution in the urban area was more strongly related to bronchial asthma than to pneumonia. Breastfeeding had a protective role against pneumonia and severe disease, whereas bottle-feeding carried a greater risk of pneumonia. Lower socio-economic status carried a greater risk of ARI episodes: ARI decreased as per capita income increased, and its magnitude increased as the literacy rate decreased. Administration of co-trimoxazole for pneumonia cases by trained health workers using simple case management strategies can reduce deaths from pneumonia significantly. Health education can change the health-care-seeking behaviour and attitudes of parents and other family members so that children with ARI are cared for at home, preventing pneumonia deaths.
abstract_id: PUBMED:22277111
Maternal and fetal origins of lung disease in adulthood. This review focuses on genetic and environmental influences that result in long term alterations in lung structure and function. Environmental factors operating during fetal and early postnatal life can have persistent effects on lung development and so influence lung function and respiratory health throughout life. Common factors affecting the quality of the intrauterine environment that can alter lung development include fetal nutrient and oxygen availability leading to intrauterine growth restriction, fetal intrathoracic space, intrauterine infection or inflammation, maternal tobacco smoking and other drug exposures. Similarly, factors that operate during early postnatal life, such as mechanical ventilation and high FiO(2) in the case of preterm birth, undernutrition, exposure to tobacco smoke and respiratory infections, can all lead to persistent alterations in lung structure and function. Greater awareness of the many prenatal and early postnatal factors that can alter lung development will help to improve lung development and hence respiratory health throughout life.
abstract_id: PUBMED:19965355
Imputed food insecurity as a predictor of disease and mental health in Taiwanese elementary school children. This study investigated the association between food insecurity and Taiwanese children's ambulatory medical care use for treating eighteen disease types linked to endocrine and metabolic disorders, nutrition, immunity, infections, asthma, mental health, injury, and poisoning. We used longitudinal data from the Taiwan National Health Insurance scheme (NHI) for 764,526 elementary children, and employed approximate NHI data to construct three indicators imputed to food insecurity: low birth weight status, economic status (poverty versus non-poverty), and time of year (summer break time versus semester time). We compared ambulatory care for these diseases between children with low birth weight and those without, and between children living in poverty and those not. A difference-in-differences method was adopted to examine the potential for a publicly funded lunch program to reduce the harmful health effects of food insecurity on poor children. We found that children in poverty were significantly more likely to have ambulatory visits linked with diabetes, inherited disorders of metabolism, iron deficiency anemias, ill-defined symptoms concerning nutrition, metabolism and development, as well as mental disorders. Children with low birth weight also had a significantly higher likelihood of using care for other endocrine disorders and nutritional deficiencies, in addition to the above diseases. The study failed to find any significant effect of the semester school lunch program on alleviating the harmful health effects of food insecurity for poor children, suggesting that a more intensive food program or other program approaches might be required to help poor children overcome food insecurity and its related health outcomes.
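The difference-in-differences estimate referred to above can be written out explicitly. As an illustrative formulation only (the abstract does not give the exact specification), with poverty status as the group contrast and semester versus summer break as the time contrast,

\[
\widehat{\mathrm{DiD}} = \left(\bar{Y}_{\mathrm{poor,\,semester}} - \bar{Y}_{\mathrm{poor,\,summer}}\right) - \left(\bar{Y}_{\mathrm{non\text{-}poor,\,semester}} - \bar{Y}_{\mathrm{non\text{-}poor,\,summer}}\right),
\]

where \(\bar{Y}\) is the mean ambulatory care use in each group-period cell. Under the usual parallel-trends assumption, this contrast isolates the effect of the school lunch program that is available only during the semester.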
abstract_id: PUBMED:17521883
Mother-child immunological interactions in early life affect long-term humoral autoreactivity to heat shock protein 60 at age 18 years. The presence of anti-heat shock protein 60 (Hsp60) antibodies in healthy individuals and the association of these antibodies with diseases such as arthritis and atherosclerosis are well documented. However, there is limited population-level data on interindividual variation in anti-Hsp60 levels. We investigated the influence of early-life factors on IgG reactivity to human Hsp60 at age 18 years. A population-based prospective birth cohort study included 5914 births in the city of Pelotas, Brazil, in 1982. Early-life exposures were documented during home visits in childhood. In 2000, 79% of all males in the cohort were traced. Sera from a systematic 20% sample (411 subjects) were analyzed. Anti-Hsp60 total IgG reactivity was determined by ELISA. Data were analyzed using analysis of variance and generalized linear models. Anti-Hsp60 reactivity was lognormally distributed and showed a significant direct correlation with low birthweight (p=0.039) and total duration of breastfeeding (p=0.018), of which only the latter remained significant after adjustment for potential confounders. Reactivity was not associated with asthma, pneumonia, diarrhea, or early-life malnutrition. Mother-child immunological interactions, rather than infection/disease factors seem to be associated with reactivity to Hsp60 later in life. This is in agreement with the hypothesis that maternal antibodies influence future antibody profile.
abstract_id: PUBMED:22895934
Optimal duration of exclusive breastfeeding. Background: Although the health benefits of breastfeeding are widely acknowledged, opinions and recommendations are strongly divided on the optimal duration of exclusive breastfeeding. Since 2001, the World Health Organization has recommended exclusive breastfeeding for six months. Much of the recent debate in developed countries has centred on the micronutrient adequacy, as well as the existence and magnitude of health benefits, of this practice.
Objectives: To assess the effects on child health, growth, and development, and on maternal health, of exclusive breastfeeding for six months versus exclusive breastfeeding for three to four months with mixed breastfeeding (introduction of complementary liquid or solid foods with continued breastfeeding) thereafter through six months.
Search Methods: We searched The Cochrane Library (2011, Issue 6), MEDLINE (1 January 2007 to 14 June 2011), EMBASE (1 January 2007 to 14 June 2011), CINAHL (1 January 2007 to 14 June 2011), BIOSIS (1 January 2007 to 14 June 2011), African Index Medicus (searched 15 June 2011), Index Medicus for the WHO Eastern Mediterranean Region (IMEMR) (searched 15 June 2011), LILACS (Latin American and Caribbean Health Sciences) (searched 15 June 2011). We also contacted experts in the field. The search for the first version of the review in 2000 yielded a total of 2668 unique citations. Contacts with experts in the field yielded additional published and unpublished studies. The updated literature review in December 2006 yielded 835 additional unique citations.
Selection Criteria: We selected all internally-controlled clinical trials and observational studies comparing child or maternal health outcomes with exclusive breastfeeding for six or more months versus exclusive breastfeeding for at least three to four months with continued mixed breastfeeding until at least six months. Studies were stratified according to study design (controlled trials versus observational studies), provenance (developing versus developed countries), and timing of compared feeding groups (three to seven months versus later).
Data Collection And Analysis: We independently assessed study quality and extracted data.
Main Results: We identified 23 independent studies meeting the selection criteria: 11 from developing countries (two of which were controlled trials in Honduras) and 12 from developed countries (all observational studies). Definitions of exclusive breastfeeding varied considerably across studies. Neither the trials nor the observational studies suggest that infants who continue to be exclusively breastfed for six months show deficits in weight or length gain, although larger sample sizes would be required to rule out modest differences in risk of undernutrition. In developing-country settings where newborn iron stores may be suboptimal, the evidence suggests that exclusive breastfeeding without iron supplementation through six months may compromise hematologic status. Based on the Belarusian study, six months of exclusive breastfeeding confers no benefit (versus three months of exclusive breastfeeding followed by continued partial breastfeeding through six months) on height, weight, body mass index, dental caries, cognitive ability, or behaviour at 6.5 years of age. Based on studies from Belarus, Iran, and Nigeria, however, infants who continue exclusive breastfeeding for six months or more appear to have a significantly reduced risk of gastrointestinal and (in the Iranian and Nigerian studies) respiratory infection. No significant reduction in risk of atopic eczema, asthma, or other atopic outcomes has been demonstrated in studies from Finland, Australia, and Belarus. Data from the two Honduran trials and from observational studies from Bangladesh and Senegal suggest that exclusive breastfeeding through six months is associated with delayed resumption of menses and, in the Honduran trials, more rapid postpartum weight loss in the mother.
Authors' Conclusions: Infants who are exclusively breastfed for six months experience less morbidity from gastrointestinal infection than those who are partially breastfed as of three or four months, and no deficits have been demonstrated in growth among infants from either developing or developed countries who are exclusively breastfed for six months or longer. Moreover, the mothers of such infants have more prolonged lactational amenorrhea. Although infants should still be managed individually so that insufficient growth or other adverse outcomes are not ignored and appropriate interventions are provided, the available evidence demonstrates no apparent risks in recommending, as a general policy, exclusive breastfeeding for the first six months of life in both developing and developed-country settings.
abstract_id: PUBMED:15384567
The optimal duration of exclusive breastfeeding: a systematic review. Although the health benefits of breastfeeding are acknowledged widely, opinions and recommendations are divided on the optimal duration of exclusive breastfeeding. We systematically reviewed available evidence concerning the effects on child health, growth, and development and on maternal health of exclusive breastfeeding for 6 months vs. exclusive breastfeeding for 3-4 months followed by mixed breastfeeding (introduction of complementary liquid or solid foods with continued breastfeeding) to 6 months. Two independent literature searches were conducted, together comprising the following databases: MEDLINE (as of 1966), Index Medicus (prior to 1966), CINAHL, HealthSTAR, BIOSIS, CAB Abstracts, EMBASE-Medicine, EMBASE-Psychology, Econlit, Index Medicus for the WHO Eastern Mediterranean Region, African Index Medicus, Lilacs (Latin American and Caribbean literature), EBM Reviews-Best Evidence, the Cochrane Database of Systematic Reviews, and the Cochrane Controlled Trials Register. No language restrictions were imposed. The two searches yielded a total of 2,668 unique citations. Contacts with experts in the field yielded additional published and unpublished studies. Studies were stratified according to study design (controlled trials vs. observational studies) and provenance (developing vs. developed countries). The main outcome measures were weight and length gain, weight-for-age and length-for-age z-scores, head circumference, iron status, gastrointestinal and respiratory infectious morbidity, atopic eczema, asthma, neuromotor development, duration of lactational amenorrhea, and maternal postpartum weight loss. Twenty independent studies meeting the selection criteria were identified by the literature search: 9 from developing countries (2 of which were controlled trials in Honduras) and 11 from developed countries (all observational studies). Neither the trials nor the observational studies suggest that infants who continue to be exclusively breastfed for 6 months show deficits in weight or length gain, although larger sample sizes would be required to rule out modest increases in the risk of undernutrition. The data are conflicting with respect to iron status but suggest that, at least in developing-country settings, where iron stores of newborn infants may be suboptimal, exclusive breastfeeding without iron supplementation through 6 months of age may compromise hematologic status. Based primarily on an observational analysis of a large randomized trial in Belarus, infants who continue exclusive breastfeeding for 6 months or more appear to have a significantly reduced risk of one or more episodes of gastrointestinal tract infection. No significant reduction in risk of atopic eczema, asthma, or other atopic outcomes has been demonstrated in studies from Finland, Australia, and Belarus. Data from the two Honduran trials suggest that exclusive breastfeeding through 6 months of age is associated with delayed resumption of menses and more rapid postpartum weight loss in the mother. Infants who are breastfed exclusively for 6 months experience less morbidity from gastrointestinal tract infection than infants who were mixed breastfed as of 3 or 4 months of age. No deficits have been demonstrated in growth among infants from either developing or developed countries who are exclusively breastfed for 6 months or longer.
Moreover, the mothers of such infants have more prolonged lactational amenorrhea and faster postpartum weight loss. Based on the results of this review, the World Health Assembly adopted a resolution to recommend exclusive breastfeeding for 6 months to its member countries. Large randomized trials are recommended in both developed and developing countries to ensure that exclusive breastfeeding for 6 months does not increase the risk of undernutrition (growth faltering), to confirm the health benefits reported thus far, and to investigate other potential effects on health and development, especially over the long term.
abstract_id: PUBMED:10868186
A pediatrician and his mothers and infants. Pediatricians are in a unique place in society by being able not only to care for the health and well-being of mothers and infants, who are their clinical responsibility, but also by being able to act as advocates for those patients who are often among the most vulnerable of our population. This article illustrates some of these points by referring to Australian Aboriginals from the vast desert areas of Western Australia. In remote areas of Western Australia, Aboriginal infants have high rates of low birth weight, failure to thrive and undernutrition. They also have high rates of respiratory, gastrointestinal and other infections. Aboriginal infant mortality has improved significantly over recent years, but Aboriginal health and mortality rates are still much worse than those of non-Aboriginal children and tend to be worst in more remote parts of the state. Overall, Aboriginal infants less than one year in age were hospitalized 9.5 times more frequently than non-Aboriginal infants for respiratory diseases (such as pneumonia, acute bronchiolitis and asthma); diarrheal diseases and skin infections were other very important causes of hospitalization for Aboriginal infants. Another poorly understood aspect of Aboriginal health is their widespread proneness to urinary tract infections. This is very important now in Australian Aboriginals in whom end-stage renal failure is becoming very prevalent. Rapid social and lifestyle changes have been very important in the poor health status of Aboriginals. They are also subject to severe socio-economic discrimination, underemployment, limited education, overcrowding, social depression and severely depressed housing conditions, relative inaccessibility to adequate and nutritious foodstuffs, and limited access to clinical services. Aboriginal people are prone to obesity, hypertension, type-2 diabetes mellitus and cardiovascular diseases. Overuse of alcohol and tobacco smoking have also become important challenges, particularly among adolescents and young adults. For the past twenty years or so, special programs have been developed to help overcome some of these problems; these include immunization programs, an extensive child health care program, special childhood screening programs, and oral rehydration therapy to reduce the high rates of mortality and morbidity associated with diarrheal diseases. These improvements have been achieved despite a set of socio-economic circumstances that face Aboriginal infants and children who live with adverse social factors. This was termed "Down and Out in 1996" in an editorial in The New Scientist (27 January 1996). A strategy that Australian Aboriginals are using now is to increase their own role through Aboriginal-controlled health and medical services including child health programs.
Answer: The relationship between risk factors for childhood infections and malnutrition and the protection against asthma is complex and not entirely clear-cut. According to a study conducted on Brazilian male adolescents, several childhood factors were found to be significantly associated with increased asthma risk, including being of high socioeconomic status, living in an uncrowded household, and being breastfed for 9 months or longer. These results are consistent with the "hygiene hypothesis," which suggests that early exposure to infections may provide protection against asthma (PUBMED:14600053). However, the policy implications of these findings are unclear, as the same risk factors that may protect against asthma also protect against serious childhood diseases in developing countries.
In contrast, another study found that risk factors such as diabetes, undernutrition, low albumin serum level, asthma, or pneumonia in the past did not correlate with infectious risk after heart surgery (PUBMED:1843723). This indicates that the relationship between these risk factors and asthma may not be straightforward.
Furthermore, a study on psychosomatic diseases identified reactions to atopic and infectious allergens, physical and meteorological factors, and allergic diseases as leading prognostic risk factors for bronchial asthma (PUBMED:17385469). This suggests that certain risk factors for infections may also be risk factors for asthma.
An epidemiological study in Tripura found that malnourished children have a higher likelihood of developing respiratory infections, and breastfeeding had a protective role in pneumonia and severe disease (PUBMED:9844332). This indicates that malnutrition and lack of breastfeeding, which are risk factors for infections, may not provide protection against respiratory conditions like asthma.
In summary, while some risk factors for childhood infections and malnutrition, such as exposure to infections, may be associated with a lower risk of asthma according to the hygiene hypothesis, other factors like malnutrition do not appear to offer such protection and may even increase the risk of respiratory conditions. The evidence suggests a complex interplay between these risk factors and the development of asthma. |
Instruction: Perceived difficulty of diabetes treatment in primary care: does it differ by patient ethnicity?
Abstracts:
abstract_id: PUBMED:12212017
Perceived difficulty of diabetes treatment in primary care: does it differ by patient ethnicity? Purpose: The purpose of this cross-sectional study was to determine the attitudes of internal medicine physicians toward treating diabetes in different patient ethnic groups and compared with treating common chronic medical conditions in primary care.
Methods: The survey instrument was administered to 55 internal medicine physicians. An e-mail message was sent to each physician with a hyperlink to a site where the survey could be completed. The instrument was a modified, quantitative 10-point scale designed to measure attitudes regarding the difficulty of treating diabetes.
Results: Diabetes was perceived to be more difficult to treat than hyperlipidemia and angina. African Americans with diabetes were perceived to be more difficult to treat than Caucasian patients. Difficulty in treating diabetes was comparable to that for hypertension, arthritis, and congestive heart failure. Physicians were confident about treatment efficacy for diabetes and changing diabetes outcomes, but not about the adequacy of time and resources for diabetes treatment.
Conclusions: Diabetes was perceived as a difficult disease to treat, African American patients were more difficult to treat, and time and resources were inadequate for diabetes treatment. To improve diabetes care, there is a need to address these attitudes and concerns of internal medicine physicians.
abstract_id: PUBMED:26340663
Patient-reported Communication Quality and Perceived Discrimination in Maternity Care. Background: High-quality communication and a positive patient-provider relationship are aspects of patient-centered care, a crucial component of quality. We assessed racial/ethnic disparities in patient-reported communication problems and perceived discrimination in maternity care among women nationally and measured racial/ethnic variation in the correlates of these outcomes.
Methods: Data for this analysis came from the Listening to Mothers III survey, a national sample of women who gave birth to a singleton baby in a US hospital in 2011-2012. Outcomes were reluctance to ask questions and barriers to open discussion in prenatal care, and perceived discrimination during the birth hospitalization, assessed using multinomial and logistic regression. We also estimated models stratified by race/ethnicity.
Results: Over 40% of women reported communication problems in prenatal care, and 24% perceived discrimination during their hospitalization for birth. Having hypertension or diabetes was associated with higher levels of reluctance to ask questions and higher odds of reporting each type of perceived discrimination. Black and Hispanic (vs. white) women had higher odds of perceived discrimination due to race/ethnicity. Higher education was associated with more reported communication problems among black women only. Although having diabetes was associated with perceptions of discrimination among all women, associations were stronger for black women.
Conclusions: Race/ethnicity was associated with perceived racial discrimination, but diabetes and hypertension were consistent predictors of communication problems and perceptions of discrimination. Efforts to improve communication and reduce perceived discrimination are an important area of focus for improving patient-centered care in maternity services.
abstract_id: PUBMED:31088212
Geographic and Race/Ethnicity Differences in Patient Perceptions of Diabetes. Objectives: The present study takes a culture-centered approach to better understand how the experiences of culture affect patient's perception of type 2 diabetes mellitus (T2DM). This study explores personal models of T2DM and compares personal models across regional and race/ethnicity differences.
Methods: In a practice-based research network, a cross-sectional survey was distributed to patients diagnosed with T2DM at medical centers in Nevada and Georgia. In analyses of covariance controlling for age, health literacy, and patient activation, geographic location and race/ethnicity were tested as predictors of 5 dimensions of illness representation.
Results: Among 685 patients, race/ethnicity was significantly associated with lower reported understanding of diabetes (P < .01) and less perceived longevity of diabetes (P < .001). Geographic location was significantly associated with seriousness of the disease (P < .005) and impact of diabetes (P < .001).
Conclusion: Non-Hispanic White Americans report greater understanding and perceive a longer disease course than non-Hispanic Black Americans and Asian Americans. Regionally, patients in Nevada perceive T2DM as more serious and as having more impact on their lives than patients living in Georgia. Primary care physicians should elicit patient perceptions of diabetes within the context of the patient's ethnic and geographic culture group to improve discussions about diabetes self-management. Specifically, primary care physicians should address the seriousness of a diabetes diagnosis and the chronic nature of the disease with patients who belong to communities with a higher prevalence of the disease.
abstract_id: PUBMED:21990234
The patient-perceived difficulty in diabetes treatment (PDDT) scale identifies barriers to care. Objective: The objective of this study is to describe the design and validation of a newly developed brief, treatment-focused scale for use with type 1 and type 2-diabetes, exploring patient-perceived difficulties that are associated with treatment.
Methods: The content of the construct was derived from consultation with experts, from existing instruments and the literature, as well as from diabetic patients. The original draft comprised 11 attributes. Based on an interim analysis, a 12th attribute was added. The final scale was tested on 988 diabetic patients from 25 practices in Israel. Respondents also completed a diabetes-specific quality of life (QoL) questionnaire and indicated their current perceived overall health status.
Results: The patient-perceived difficulty of diabetes treatment (PDDT) scale contains 12 items reflecting diabetes-treatment characteristics: adherence to self-monitoring of glucose schedule, frequency of self-monitoring of glucose, adherence to medication administration schedule, frequency of medication administration, multiple number of medications, synchronization between meals and medications, dependence on the medications, pain associated with treatment, diet restrictions, self-care, multiple healthcare providers, and costs of treatment. Response rate to all attributes was very high. Construct validity was shown by significant correlations between PDDT attributes and diabetes-specific quality of life (r = 0.31-0.46) and self-report adherence to recommended treatment (r = 0.14-0.28), as well as between overall perceived difficulty and diabetes-specific quality of life (r = 0.60). Furthermore, the PDDT items showed discriminant capabilities with respect to known groups of patients.
Conclusions: The PDDT scale is a simple and valid instrument that may assist in identifying potential barriers in adherence to recommended treatments and to new treatment options.
abstract_id: PUBMED:22354209
Patient race/ethnicity and shared medical record use among diabetes patients. Background: Previous studies have documented racial/ethnic differences in patients' use of websites providing shared electronic medical records between patients and health care professionals. Less is known about whether these are driven by patient-level preferences and/or barriers versus broader provider or system factors.
Methods: Cross-sectional study of diabetes patients in an integrated delivery system in 2008-2009. Primary measures were race/ethnicity and shared medical record (SMR) use. Covariates included sociodemographics (age, sex, income, education), health status (comorbidity, diabetes severity), and provider characteristics (encouragement of SMR, secure messaging use, clinic).
Results: The majority (62%) of Whites used the SMR, compared with 34% of Blacks, 37% of Asians, and 55% of other race/ethnicity (P<0.001). Most respondents (76%) stated that their provider had encouraged them to use the SMR, with no differences by race/ethnicity. Patients saw primary care providers who used a similar amount of secure messaging in their practices-except Asians, who were less likely to see high-messaging providers. In fully adjusted models, Blacks [odds ratio (OR), 0.18; 95% confidence interval (CI), 0.11-0.30] and Asians (OR, 0.40; 95% CI, 0.20-0.77) were significantly less likely than Whites to use the SMR. When restricted to individuals reporting at least occasional Internet use, this finding remained for Black respondents (OR, 0.25; 95% CI, 0.10-0.63).
Conclusions: Among diabetes patients, differences in SMR use by race/ethnicity were not fully explained by differences in age, sex, sociodemographics, health status, or provider factors-particularly for Black patients. There were few racial/ethnic differences in provider encouragement or provider secure messaging use that would have suggested disparities at the provider level.
abstract_id: PUBMED:20009094
Patient race/ethnicity and patient-physician race/ethnicity concordance in the management of cardiovascular disease risk factors for patients with diabetes. Objective: Patient-physician race/ethnicity concordance can improve care for minority patients. However, its effect on cardiovascular disease (CVD) care and prevention is unknown. We examined associations of patient race/ethnicity and patient-physician race/ethnicity concordance on CVD risk factor levels and appropriate modification of treatment in response to high risk factor values (treatment intensification) in a large cohort of diabetic patients.
Research Design And Methods: The study population included 108,555 adult diabetic patients in Kaiser Permanente Northern California in 2005. Probit models assessed the effect of patient race/ethnicity on risk factor control and treatment intensification after adjusting for patient and physician-level characteristics.
Results: African American patients were less likely than whites to have A1C <8.0% (64 vs. 69%, P < 0.0001), LDL cholesterol <100 mg/dl (40 vs. 47%, P < 0.0001), and systolic blood pressure (SBP) <140 mmHg (70 vs. 78%, P < 0.0001). Hispanic patients were less likely than whites to have A1C <8% (62 vs. 69%, P < 0.0001). African American patients were less likely than whites to have A1C treatment intensification (73 vs. 77%, P < 0.0001; odds ratio [OR] 0.8 [95% CI 0.7-0.9]) but more likely to receive treatment intensification for SBP (78 vs. 71%, P < 0.0001; 1.5 [1.3-1.7]). Hispanic patients were more likely to have LDL cholesterol treatment intensification (47 vs. 45%, P < 0.05; 1.1 [1.0-1.2]). Patient-physician race/ethnicity concordance was not significantly associated with risk factor control or treatment intensification.
Conclusions: Patient race/ethnicity is associated with risk factor control and treatment intensification, but patient-physician race/ethnicity concordance was not. Further research should investigate other potential drivers of disparities in CVD care.
abstract_id: PUBMED:22174966
Assessment of perceived health status in hypertensive and diabetes mellitus patients at primary health centers in oman. Objectives: This study aimed to assess the impact of diabetes mellitus and hypertension as well as other demographic and clinical characteristics on perceived health status in primary health centers in Oman.
Methods: In a cross-sectional retrospective study, 450 patients (aged ≥ 18 years) seen at six primary health centers in Wilayat A' Seeb in the Muscat region, Oman, were selected. Perceived health status of the physical (PSCC) and mental (MSCC) components of quality-of-life were assessed using the 12-item short form health survey (SF-12). The analyses were performed using univariate statistical techniques.
Results: The mean age of the participants was 54 ± 12 years and they were mostly female (62%). The presence of both diabetes mellitus and hypertension was associated with lower physical scores compared to those with diabetes alone (p = 0.001) but only marginally lower scores than those with hypertension alone (p = 0.066). No significant differences were found across the disease groups in mental scores (p = 0.578). Age was negatively correlated with physical scores (p < 0.001), whereas male gender (p < 0.001), being married (p < 0.001), being literate (p < 0.001), and higher income (p = 0.002) were all associated with higher physical scores. Moreover, longer disease duration was associated with lower physical scores (p < 0.001). With regard to mental status, male gender (p = 0.005), being married (p = 0.017), and higher income (p < 0.001) were associated with higher mental scores. Polypharmacy was associated with lower physical (p < 0.001) and mental (p = 0.005) scores.
Conclusions: The presence of both diseases was associated with lower physical scores of perceived health status. Health status was also affected by various demographic and clinical characteristics. However, the results should be interpreted in light of the study's limitations.
abstract_id: PUBMED:11427631
Does ethnicity influence perceived quality of life of patients on dialysis and following renal transplant? Background: Quality of life (QoL) as perceived by patients with end-stage renal disease (ESRD) is an important measure of patient outcome. There is a high incidence of ESRD in the Indo-Asian population in the UK and a lower rate of transplantation compared with white Europeans. The aim of this study was to determine whether perceived quality of life was influenced by treatment modality and ethnicity.
Methods: Sixty Indo-Asians treated with either peritoneal dialysis (n=20), hospital haemodialysis (n=20) or with a renal transplant (n=20) for >3 months were compared with 60 age-matched white Europeans closely matched for gender, diabetes and duration of renal replacement therapy. QoL was measured using the Kidney Disease and Quality of Life questionnaire (KDQOL-SF). The KDQOL-SF measures four QoL dimensions: physical health (PH), mental health (MH), kidney disease-targeted issues (KDI) and patient satisfaction (PS). Adequacy of treatment was measured by biochemistry, 24 h urine collection and dialysis kinetics. The number of comorbid conditions was scored. Social deprivation was calculated from the patient's postal address using Townsend scoring.
Results: QoL was significantly lower in Indo-Asians than white Europeans for PH, MH and KDI. This was not related to treatment adequacy, which was similar in both for each modality. Indo-Asians had a worse index of social deprivation than white Europeans (P=0.008). PH and KDI were related to social deprivation (P=0.007 and P=0.005, respectively). QoL (except PS) was inversely correlated with comorbidity. Dialysis patients had higher comorbidity than transplant patients (P<0.02). Comparing only those dialysis patients considered fit for transplantation (n=51) with transplant patients, comorbidity was similar, but differences in QoL persisted.
Conclusion: This study demonstrates a lower perceived QoL in Asians compared with white Europeans with ESRD. Analysis of QoL indicates that Asian patients in particular perceive kidney disease as a social burden, even if successfully transplanted.
abstract_id: PUBMED:20571929
Adherence to cardiovascular disease medications: does patient-provider race/ethnicity and language concordance matter? Background: Patient-physician race/ethnicity and language concordance may improve medication adherence and reduce disparities in cardiovascular disease (CVD) by fostering trust and improved patient-physician communication.
Objective: To examine the association of patient race/ethnicity and language and patient-physician race/ethnicity and language concordance on medication adherence rates for a large cohort of diabetes patients in an integrated delivery system.
Design: We studied 131,277 adult diabetes patients in Kaiser Permanente Northern California in 2005. Probit models assessed the effect of patient and physician race/ethnicity and language on adherence to CVD medications, after controlling for patient and physician characteristics.
Results: Ten percent of African American, 11% of Hispanic, 63% of Asian, and 47% of white patients had same race/ethnicity physicians. 24% of Spanish-speaking patients were linguistically concordant with their physicians. African American (46%), Hispanic (49%) and Asian (52%) patients were significantly less likely than white patients (58%) to be in good adherence to all of their CVD medications (p<0.001). Spanish-speaking patients were less likely than English-speaking patients to be in good adherence (51% versus 57%, p<0.001). Race concordance for African American patients was associated with adherence to all their CVD medications (53% vs. 50%, p<0.05). Language concordance was associated with medication adherence for Spanish-speaking patients (51% vs. 45%, p<0.05).
Conclusion: Increasing opportunities for patient-physician race/ethnicity and language concordance may improve medication adherence for African American and Spanish-speaking patients, though a similar effect was not observed for Asian patients or English-proficient Hispanic patients.
abstract_id: PUBMED:28638617
Clinical pharmacists in primary care: Provider satisfaction and perceived impact on quality of care provided. Purpose: The purpose of this study is to evaluate primary care provider satisfaction and perceived impact of clinical pharmacy services on the disease state management in primary care.
Methods: A cross-sectional survey with 24 items and 4 domains was distributed anonymously to pharmacy residency program directors across the United States who were requested to forward the survey to their primary care provider colleagues. Primary care providers were asked to complete the survey.
Results: A total of 144 primary care providers responded to the survey, with 130 reporting a clinical pharmacist within their primary care practice and 114 completing the entire survey. Primary care providers report pharmacists positively impact quality of care (mean = 5.5 on Likert scale of 1-6; standard deviation = 0.72), high satisfaction with pharmacy services provided (5.5; standard deviation = 0.79), and no increase in workload as a result of clinical pharmacists (5.5; standard deviation = 0.77). Primary care providers would recommend clinical pharmacists to other primary care practices (5.7; standard deviation = 0.59). Primary care providers perceived specific types of pharmacy services to have the greatest impact on patient care: medication therapy management (38.6%), disease-focused management (29.82%), and medication reconciliation (11.4%). Primary care providers indicated the most valuable disease-focused pharmacy services as diabetes (58.78%), hypertension (9.65%), and pain (11.4%).
Conclusion: Primary care providers report high satisfaction with and perceived benefit of clinical pharmacy services in primary care and viewed medication therapy management and disease-focused management of diabetes, hypertension, and pain as the most valuable clinical pharmacy services. These results can be used to inform development or expansion of clinical pharmacy services in primary care.
Answer: Yes, the perceived difficulty of diabetes treatment in primary care does differ by patient ethnicity. Internal medicine physicians perceive diabetes to be more difficult to treat in African American patients compared to Caucasian patients (PUBMED:12212017). Additionally, having diabetes is associated with higher levels of reluctance to ask questions and higher odds of reporting perceived discrimination in maternity care, with Black and Hispanic women having higher odds of perceived discrimination due to race/ethnicity (PUBMED:26340663). Non-Hispanic Black Americans and Asian Americans report lower understanding and perceive a shorter disease course for type 2 diabetes mellitus (T2DM) compared to Non-Hispanic White Americans (PUBMED:31088212). Furthermore, Black and Asian diabetes patients are significantly less likely than White patients to use shared medical records (SMRs), which is not fully explained by differences in sociodemographics, health status, or provider factors (PUBMED:22354209). Patient race/ethnicity is also associated with cardiovascular disease risk factor control and treatment intensification, but patient-physician race/ethnicity concordance was not significantly associated with these outcomes (PUBMED:20009094). These findings suggest that patient ethnicity does influence the perceived difficulty of diabetes treatment in primary care, and this may be due to a combination of communication barriers, perceived discrimination, and differences in patient engagement with healthcare technology. |
Instruction: Screening medical patients for distress and depression: does measurement in the clinic prior to the consultation overestimate distress measured at home?
Abstracts:
abstract_id: PUBMED:23339843
Screening medical patients for distress and depression: does measurement in the clinic prior to the consultation overestimate distress measured at home? Background: Medical patients are often screened for distress in the clinic using a questionnaire such as the Hospital Anxiety and Depression Scale (HADS) while awaiting their consultation. However, might the context of the clinic artificially inflate the distress score? To address this question we aimed to determine whether those who scored high on the HADS in the clinic remained high scorers when reassessed later at home.
Method: We analysed data collected by a distress and depression screening service for cancer out-patients. All patients had completed the HADS in the clinic (on computer or on paper) prior to their consultation. For a period, patients with a high score (total ≥ 15) also completed the HADS again at home (over the telephone) 1 week later. We used these data to determine what proportion remained high scorers and the mean change in their scores. We estimated the effect of 'regression to the mean' on the observed change.
Results: Of the 218 high scorers in the clinic, most [158 (72.5%), 95% confidence interval (CI) 66.6–78.4] scored high at reassessment. The mean fall in the HADS total score was 1.74 (95% CI 1.09–2.39), much of which could be attributed to the estimated change over time (regression to the mean) rather than the context.
Conclusions: Pre-consultation distress screening in clinic is widely used. Reassuringly, it only modestly overestimates distress measured later at home and consequently would result in a small proportion of unnecessary further assessments. We conclude it is a reasonable and convenient strategy.
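As a rough illustration of how such a regression-to-the-mean estimate can be obtained (an assumption-laden sketch; the abstract does not state the authors' exact method), if the clinic and home scores are treated as bivariate normal with equal variance and test-retest correlation \(r\), then for patients selected because of a high clinic score the expected fall attributable purely to regression to the mean is approximately

\[
\mathbb{E}[\text{fall} \mid \text{selected}] \approx (1 - r)\,\bigl(\bar{x}_{\text{clinic, selected}} - \mu\bigr),
\]

where \(\bar{x}_{\text{clinic, selected}}\) is the mean clinic score of the selected high scorers and \(\mu\) is the mean score of the whole screened population. Any fall beyond this amount would then be the part attributable to the clinic context itself.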
abstract_id: PUBMED:29332504
Changes in Distress Measured by the Distress Thermometer as Reported by Patients in Home Palliative Care in Germany. Aim: To identify changes in distress as reported by patients in a home palliative care program over a 2-week period.
Methods: Prospective study in West Germany with consecutive patients cared for at home by a palliative care specialty team. Exclusion criteria were patients under 18 years of age, mentally or physically not able to complete the assessment questionnaires, or unable to comprehend German language. Distress was measured using the distress thermometer (DT); sociodemographic and medical data were collected from the patients' records.
Results: One hundred three patients participated in the study (response rate of 69%) and 39 participants completed the DT at the 2-week follow-up (T1; response rate = 38%; mean age = 67; female = 54.4%; married = 67%; living at home with relatives = 60.2%; oncological condition = 91.3%; Karnofsky performance status [KPS] 0-40 = 18.9%, KPS 50-70 = 70.3%, KPS >80 = 10.8%).
Conclusion: The DT is a useful tool for screening severity and changes in psychological distress as well as sources of distress. The DT detected change in self-reported distress within a short treatment period, indicating success or failure of the palliative care treatment approaches.
abstract_id: PUBMED:24458691
Systematic screening for distress in oncology practice using the Distress Barometer: the impact on referrals to psychosocial care. Purpose: This study evaluates how patterns of psychosocial referral of patients with elevated distress differ in a 'systematic screening for distress' condition versus a 'usual practice' condition in ambulatory oncology practice.
Methods: The psychosocial referral process in a 2-week usual practice (N=278) condition was compared with a 2-week 'using the Distress Barometer as a screening instrument' (N=304) condition in an outpatient clinic with seven consulting oncologists.
Results: Out of all distressed patients in the usual practice condition, only 5.5% of patients detected with distress were actually referred to psychosocial counselling, compared with 69.1% of patients detected with distress in the condition with systematic screening using the Distress Barometer. Only 3.7% of patients detected with distress in the usual practice condition finally accepted this referral, compared with 27.6% of patients detected with distress in the screening condition.
Conclusions: Using the Distress Barometer as a self-report screening instrument prior to oncological consultation optimises detection of elevated distress in patients, and this results in a higher number of performed and accepted referrals, but cannot by itself guarantee actual psychosocial referral or acceptance of referral. There is not only a problem of poor detection of distress in cancer patients but also a need for better decision-making and communication between oncologists and patients about this issue.
abstract_id: PUBMED:29209376
Feasibility of Psychosocial Distress Screening and Management Program for Hospitalized Cancer Patients. Objective: Although the diagnosis and treatment of cancer is associated with psychosocial distress, routine distress screening is difficult in hospitalized oncology settings. We developed a consecutive screening program for psychosocial distress to promote psychiatric treatment of cancer patients and evaluated the feasibility of our program by Distress Thermometer (DT) and Hospital Anxiety and Depression Scale (HADS).
Methods: Among 777 cancer inpatients recruited from the Catholic Comprehensive Institute of Seoul St. Mary's Hospital, 499 agreed to complete primary distress screening through DT. We conducted secondary distress screening through HADS in 229 patients who had high scores of DT.
Results: Of the 499 participants, 270 patients with low scores of DT were included in the distress education program. 229 patients with high scores of DT received secondary distress screening through HADS. Among 115 patients with low scores of HADS, 111 patients received distress management. Among 114 patients with high scores in the secondary distress screening, 38 patients received psychiatric consultation service whereas 76 patients refused psychiatric consultation.
Conclusion: Using consecutive screening for psychosocial distress appeared to be feasible in an inpatient oncology setting. Nevertheless, the low participation rate of psychiatric consultation service in cancer patients with high distress level should be improved.
abstract_id: PUBMED:34670712
Relieving pain and distress symptoms in the outpatient burn clinic: The contribution of a medical clown. Context: High levels of pain and emotional distress characterize the experience of patients, at burn outpatient clinic and reflect on their accompanying persons and the medical personal.
Objectives: To examine the effect of a medical clown presence on: the patients' pain and distress levels as perceived by the patient and by their accompanying persons, and the emotional response of healthcare personnel.
Methods: A yearlong prospective observational comparative study in the burn outpatient clinic, operating twice a week, with a medical clown's presence once a week [Exposure Group - EG] versus clinic without clown presence [Non exposure Group- NEG]. Patients and accompanying persons filled pain [WBS, VAS] and emotional distress [SUDS] questionnaires regarding the patient's experience: before (T1) and after treatment (T2). The clinic personnel filled SUDS at the beginning and the end of the clinic working hours.
Results: Significantly lower WBS, VAS, and SUDS scores were reported at T2 in the EG as compared to the NEG both in patients and in the accompanying persons' evaluations. Personnel SUDS were affected in a similar manner.
Conclusion: Presence of a medical clown induced a positive atmosphere in the clinic. It is possible that the effect of humor through stress reduction mechanism lessened agony. Furthermore, the distraction the clown evoked played a role in the decrease of pain and emotional distress. We recommend implementing psychosocial oriented interventions such as those performed by a medical clown to improve the emotional atmosphere in the ambulatory clinic of patients, accompanying persons and healthcare personnel.
abstract_id: PUBMED:37227604
Implementing process improvements to enhance distress screening and management. Introduction: While distress is prevalent among individuals living with cancer, distress management has not been optimized across cancer care delivery despite standards for screening. This manuscript describes the development of an enhanced Distress Thermometer (eDT) and shares the process for deploying the (eDT) across a cancer institute by highlighting improvements at the provider, system, and clinic levels.
Methods: Focus groups and surveys were used at the provider-level to outline the problem space and to identify solutions to improve distress screening and management. Through stakeholder engagement, an eDT was developed and rolled out across the cancer institute. Technical EHR infrastructure changes were implemented at the system-level to improve the use of the distress screening findings and generate automated referrals for specialty services. Clinic workflows were adapted to improve screening and distress management using the eDT.
Results: Stakeholder focus group participants (n=17) and survey respondents (n=13) found the eDT to be feasible and acceptable for distress identification and management. System-level technical EHR changes resulted in high accuracy with patient identification for distress management, and 100% of patients with moderate to severe distress were connected directly to an appropriate specialty provider. Clinic-level workflow changes to expand eDT use improved compliance rates with distress screening from 85% to 96% over a 1-year period.
Conclusions: An eDT that provides more context to patient-reported problems improved identification of referral pathways for patients experiencing moderate to high distress during cancer treatment. Combining process improvement interventions across multiple levels in the cancer care delivery system enhanced the success of this project. These processes and tools could support improved distress screening and management across cancer care delivery settings.
abstract_id: PUBMED:38363002
A randomized controlled trial of a distress screening, consultation, and targeted referral system for family caregivers in oncologic care. Objective: Distress screening is standard practice among oncology patients, yet few routine distress screening programs exist for cancer caregivers. The objective of this study was to demonstrate the feasibility, acceptability, and preliminary efficacy of Cancer Support Source-CaregiverTM (CSS-CG, 33-item), an electronic distress screening and automated referral program with a consultation (S + C) to improve caregiver unmet needs, quality of life, anxiety, depression, and distress relative to Enhanced Usual Care (EUC; access to educational materials).
Method: 150 caregivers of patients with varying sites/stages of cancer were randomized to S + C or EUC and completed assessments at baseline, 3-months post-baseline, and 6-months post-baseline. A subset of participants (n = 10) completed in-depth qualitative interviews.
Results: S + C was feasible: among 75 caregivers randomized to S + C, 66 (88%) completed CSS-CG and consultation. Top concerns reported were: (1) patient's pain and/or physical discomfort; (2) patient's cancer progressing/recurring; and (3) feeling nervous or afraid. Differences between groups in improvements on outcomes by T2 and T3 were modest (ds < 0.53) in favor of S + C. Qualitative data underscored the helpfulness of S + C in connecting caregivers to support and helping them feel cared for and integrated into cancer care.
Conclusions: S + C is feasible, acceptable, and yields more positive impact on emotional well-being than usual care. Future studies will examine programmatic impact among caregivers experiencing higher acuity of needs, and benefits of earlier integration of S + C on caregiver, patient, and healthcare system outcomes.
abstract_id: PUBMED:33811568
Effect of a Moral Distress Consultation Service on Moral Distress, Empowerment, and a Healthy Work Environment. Background: Healthcare providers who are accountable for patient care safety and quality but who are not empowered to actualize them experience moral distress. Interventions to mitigate moral distress in the healthcare organization are needed.
Objective: To evaluate the effect on moral distress and clinician empowerment of an established, health-system-wide intervention, Moral Distress Consultation.
Methods: A quasi-experimental, mixed methods study using pre/post surveys, structured interviews, and evaluation of consult themes was used. Consults were requested by staff when moral distress was present. The purpose of consultation is to identify the causes of moral distress, barriers to action, and strategies to improve the situation. Intervention participants were those who attended a moral distress consult. Control participants were staff surveyed prior to the consult. Interviews were conducted after the consult with willing participants and unit managers. Moral distress was measured using the Moral Distress Thermometer. Empowerment was measured using the Global Empowerment Scale.
Results: Twenty-one consults were conducted. Analysis included 116 intervention and 30 control surveys, and 11 interviews. A small but significant decrease in moral distress was found among intervention participants, especially intensive care staff. Empowerment was unchanged. Interview themes support the consult service as an effective mode for open discussion of difficult circumstances and an important aspect of a healthy work environment.
Conclusions: Moral distress consultation is an organization-wide mechanism for addressing moral distress. Consultation does not resolve moral distress but helps staff identify strategies to improve the situation. Further studies including follow up may elucidate consultation effectiveness.
abstract_id: PUBMED:34359662
From Distress Screening to Uptake: An Italian Multicenter Study of Cancer Patients. Introduction: Little consideration is given to the referral and uptake of available supportive services after distress screening. However, identifying the reasons for accepting or refusing help is mandatory for implementing a screening policy. The present study explored the practical usefulness of and potential barriers to the application of distress management.
Methods: 406 cancer patients were consecutively selected and asked to complete the Distress Thermometer (DT) and Problem Check List (PL). All patients with a DT score ≥6 were invited for a post-DT telephone interview with a trained psychologist.
Results: Compared with those who accepted, the 112 patients who refused to take part were more often older, retired, at a more advanced stage of illness, and without previous experience of psychological intervention. Of the 78 patients with a score ≥6 who were referred to the Psycho-Oncology Service, 65.4% accepted the telephone interview. Twenty-two patients rejected the initial invitation immediately for various reasons including logistic difficulties, physical problems, and feeling embarrassed about opening up to a psychologist.
Conclusions: Our study confirms that screening per se is insufficient to deal with the problem of distress and that more emphasis should be placed on implementing referral and treatment.
abstract_id: PUBMED:25427711
Screening for distress in patients with cancer. Although distress is common among people with cancer, the current standard of care does not include consistent distress screening. To acquire or maintain accreditation, the American College of Surgeons Commission on Cancer will require cancer centers to have a distress screening program in place by 2015. When choosing tools to screen for distress in patients with cancer, researchers should evaluate the literature to ascertain that the tools have been tested for validity and reliability in the population of patients with cancer. Regardless of the distress screening tool chosen, studies support that screening patients for distress is beneficial to improving quality of life and outcomes by allowing referral to appropriate interventions.
Answer: The study by Walker et al. (PUBMED:23339843) aimed to determine whether distress scores obtained in the clinic prior to consultation overestimate distress measured at home. The study analyzed data from cancer out-patients who completed the Hospital Anxiety and Depression Scale (HADS) in the clinic and, if they scored high (total ≥ 15), completed it again at home over the telephone one week later. The results showed that of the 218 high scorers in the clinic, a majority (72.5%) remained high scorers at reassessment at home. The mean fall in the HADS total score was 1.74, much of which could be attributed to regression to the mean rather than to the context of the clinic. The study concluded that pre-consultation distress screening in the clinic only modestly overestimates distress measured later at home and would result in only a small proportion of unnecessary further assessments. Therefore, it is a reasonable and convenient strategy for screening medical patients for distress and depression.
Instruction: Is Alzheimer disease related to age-related macular degeneration?
Abstracts:
abstract_id: PUBMED:38036880
Impact of Oxysterols in Age-Related Disorders and Strategies to Alleviate Adverse Effects. Oxysterols or cholesterol oxidation products are a class of molecules with the sterol moiety, derived from oxidative reaction of cholesterol through enzymatic and non-enzymatic processes. They are widely reported in animal-origin foods and show significant involvement in the regulation of cholesterol homeostasis, lipid transport, cellular signaling, and other physiological processes. Reports of oxysterol-mediated cytotoxicity are abundant, and these molecules are consequently implicated in several age-related and lifestyle disorders such as cardiovascular diseases, bone disorders, pancreatic disorders, age-related macular degeneration, cataract, neurodegenerative disorders such as Alzheimer's and Parkinson's disease, and some types of cancers. In this chapter, we attempt to review a selection of physiologically relevant oxysterols, with a focus on their formation, properties, and roles in health and disease, while also delving into the potential of natural and synthetic molecules along with bacterial enzymes for mitigating oxysterol-mediated cell damage.
abstract_id: PUBMED:33867940
The Role of Inflammation and Infection in Age-Related Neurodegenerative Diseases: Lessons From Bacterial Meningitis Applied to Alzheimer Disease and Age-Related Macular Degeneration. Age-related neurodegenerative diseases, such as Alzheimer disease (AD) and age-related macular degeneration (AMD), are multifactorial and have diverse genetic and environmental risk factors. Despite the complex nature of the diseases, there is long-standing, and growing, evidence linking microbial infection to the development of AD dementia, which we summarize in this article. Also, we highlight emerging research findings that support a role for parainfection in the pathophysiology of AMD, a disease of the neurosensory retina that has been shown to share risk factors and pathological features with AD. Acute neurological infections, such as Bacterial Meningitis (BM), trigger inflammatory events that permanently change how the brain functions, leading to lasting cognitive impairment. Neuroinflammation likewise is a known pathological event that occurs in the early stages of chronic age-related neurodegenerative diseases AD and AMD and might be triggered as a parainfectious event. To date, at least 16 microbial pathogens have been linked to the development of AD; on the other hand, investigation of a microbe-AMD relationship is in its infancy. This mini-review article provides a synthesis of existing evidence indicating a contribution of parainfection in the aetiology of AD and of emerging findings that support a similar process in AMD. Subsequently, it describes the major immunopathological mechanisms that are common to BM and AD/AMD. Together, this evidence leads to our proposal that both AD and AMD may have an infectious aetiology that operates through a dysregulated inflammatory response, leading to deleterious outcomes. Last, it draws fresh insights from the existing literature about potential therapeutic options for BM that might alleviate neurological disruption associated with infections, and which could, by extension, be explored in the context of AD and AMD.
abstract_id: PUBMED:36430581
An Altered Neurovascular System in Aging-Related Eye Diseases. The eye has a complex and metabolically active neurovascular system. Repeated light injuries induce aging and trigger age-dependent eye diseases. Damage to blood vessels is related to the disruption of the blood-retinal barrier (BRB), altered cellular communication, disrupted mitochondrial functions, and exacerbated aggregated protein accumulation. Vascular complications, such as insufficient blood supply and BRB disruption, have been suggested to play a role in glaucoma, age-related macular degeneration (AMD), and Alzheimer's disease (AD), resulting in neuronal cell death. Neuronal loss can induce vision loss. In this review, we discuss the importance of the neurovascular system in the eye, especially in aging-related diseases such as glaucoma, AMD, and AD. Beneficial molecular pathways to prevent or slow down retinal pathologic processes will also be discussed.
abstract_id: PUBMED:35912080
Age-Related Eye Diseases in Individuals With Mild Cognitive Impairment and Alzheimer's Disease. Introduction: Alzheimer's disease (AD) and age-related eye diseases pose an increasing burden as the world's population ages. However, there is limited understanding on the association of AD/cognitive impairment, no dementia (CIND) with age-related eye diseases.
Methods: In this cross-sectional, memory clinic-based study of multiethnic Asians aged 50 and above, participants were diagnosed as AD (n = 216), cognitive impairment, no dementia (CIND) (n = 252), and no cognitive impairment (NCI) (n = 124) according to internationally accepted criteria. Retinal photographs were graded for the presence of age-related macular degeneration (AMD) and diabetic retinopathy (DR) using standard grading systems. Multivariable-adjusted logistic regression models were used to determine the associations between neurological diagnosis and odds of having eye diseases.
Results: Over half of the adults had at least one eye disease, with AMD being the most common (60.1%; n = 356), followed by DR (8.4%; n = 50). After controlling for age, sex, race, educational level, and marital status, persons with AD were more likely to have moderate DR or worse (OR = 2.95, 95% CI = 1.15-7.60) compared with NCI. In the fully adjusted model, the neurological diagnosis was not associated with AMD (OR = 0.75, 95% CI = 0.45-1.24).
Conclusion: Patients with AD have an increased odds of having moderate DR or worse, which suggests that these vulnerable individuals may benefit from specific social support and screening for eye diseases.
abstract_id: PUBMED:25305550
Involvement of oxysterols in age-related diseases and ageing processes. Ageing is accompanied by increasing vulnerability to major pathologies (atherosclerosis, Alzheimer's disease, age-related macular degeneration, cataract, and osteoporosis) which can have similar underlying pathoetiologies. All of these diseases involve oxidative stress, inflammation and/or cell death processes, which are triggered by cholesterol oxide derivatives, also named oxysterols. These oxidized lipids result either from spontaneous and/or enzymatic oxidation of cholesterol on the steroid nucleus or on the side chain. The ability of oxysterols to induce severe dysfunctions in organelles (especially mitochondria) plays key roles in RedOx homeostasis, inflammatory status, lipid metabolism, and in the control of cell death induction, which may at least in part contribute to explain the potential participation of these molecules in ageing processes and in age related diseases. As no efficient treatments are currently available for most of these diseases, which are predicted to become more prevalent due to the increasing life expectancy and average age, a better knowledge of the biological activities of the different oxysterols is of interest, and constitutes an important step toward identification of pharmacological targets for the development of new therapeutic strategies.
abstract_id: PUBMED:36570623
Neuroprotection for Age-Related Macular Degeneration. Age-related macular degeneration (AMD) is a leading cause of blindness worldwide. Early to intermediate AMD is characterized by the accumulation of lipid- and protein-rich drusen. Late stages of the disease are characterized by the development of choroidal neovascularization, termed "exudative" or "neovascular AMD," or retinal pigment epithelium (RPE) cell and photoreceptor death, termed "geographic atrophy" (GA) in advanced nonexudative AMD. Although we have effective treatments for exudative AMD in the form of anti-VEGF agents, they have no role for patients with GA. Neuroprotection strategies have emerged as a possible way to slow photoreceptor degeneration and vision loss in patients with GA. These approaches include reduction of oxidative stress, modulation of the visual cycle, reduction of toxic molecules, inhibition of pathologic protein activity, prevention of cellular apoptosis or programmed necrosis (necroptosis), inhibition of inflammation, direct activation of neurotrophic factors, delivery of umbilical tissue-derived cells, and RPE replacement. Despite active investigation in this area and significant promise based on preclinical studies, many clinical studies have not yielded successful results. We discuss selected past and current neuroprotection trials for AMD, highlight the lessons learned from these past studies, and discuss our perspective regarding remaining questions that must be answered before neuroprotection can be successfully applied in the field of AMD research.
abstract_id: PUBMED:31995026
Neovascular Age-Related Macular Degeneration and its Association with Alzheimer's Disease. In developed countries, people of advanced age go permanently blind most often due to age-related macular degeneration, while at the global level, this disease is the third major cause of blindness, after cataract and glaucoma, according to the World Health Organisation. The number of individuals believed to suffer from the disease throughout the world has been approximated at 50 million. Age-related macular degeneration is classified as non-neovascular (dry, non-exudative) and neovascular (wet, exudative). The exudative form is less common than the non-exudative as it accounts for approximately 10 percent of the cases of the disease. However, it can be much more aggressive and could result in a rapid and severe loss of central vision. Similarly, with age-related macular degeneration, Alzheimer's disease is a late-onset, neurodegenerative disease affecting millions of people worldwide. Both of them are associated with age and share several features, including the presence of abnormal extracellular deposits associated with neuronal degeneration, drusen, and plaques, respectively. The present review article highlights the pathogenesis, clinical features, and imaging modalities used for the diagnosis of neovascular age-related macular degeneration. A thorough overview of the effectiveness of anti-VEGF agents as well as of other treatment modalities that have either lost favour or, are rarely used, is provided in detail. Additionally, the common histologic, immunologic, and pathogenetic features of Alzheimer's disease and age-related macular degeneration are discussed in depth.
abstract_id: PUBMED:34210002
Interactions between Apolipoprotein E Metabolism and Retinal Inflammation in Age-Related Macular Degeneration. Age-related macular degeneration (AMD) is a multifactorial retinal disorder that is a major global cause of severe visual impairment. The development of an effective therapy to treat geographic atrophy, the predominant form of AMD, remains elusive due to the incomplete understanding of its pathogenesis. Central to AMD diagnosis and pathology are the hallmark lipid and proteinaceous deposits, drusen and reticular pseudodrusen, that accumulate in the subretinal pigment epithelium and subretinal spaces, respectively. Age-related changes and environmental stressors, such as smoking and a high-fat diet, are believed to interact with the many genetic risk variants that have been identified in several major biochemical pathways, including lipoprotein metabolism and the complement system. The APOE gene, encoding apolipoprotein E (APOE), is a major genetic risk factor for AMD, with the APOE2 allele conferring increased risk and APOE4 conferring reduced risk, in comparison to the wildtype APOE3. Paradoxically, APOE4 is the main genetic risk factor in Alzheimer's disease, a disease with features of neuroinflammation and amyloid-beta deposition in common with AMD. The potential interactions of APOE with the complement system and amyloid-beta are discussed here to shed light on their roles in AMD pathogenesis, including in drusen biogenesis, immune cell activation and recruitment, and retinal inflammation.
abstract_id: PUBMED:36620455
The function of p53 and its role in Alzheimer's and Parkinson's disease compared to age-related macular degeneration. The protein p53 is the main human tumor suppressor. Since its discovery, extensive research has been conducted, which led to the general assumption that the purview of p53 is also essential for additional functions, apart from the prevention of carcinogenesis. In response to cellular stress and DNA damages, p53 constitutes the key point for the induction of various regulatory processes, determining whether the cell induces cell cycle arrest and DNA repair mechanisms or otherwise cell death. As an implication, aberrations from its normal functioning can lead to pathogeneses. To this day, neurodegenerative diseases are considered difficult to treat, which arises from the fact that in general the underlying pathological mechanisms are not well understood. Current research on brain and retina-related neurodegenerative disorders suggests that p53 plays an essential role in the progression of these conditions as well. In this review, we therefore compare the role and similarities of the tumor suppressor protein p53 in the pathogenesis of Alzheimer's (AD) and Parkinson's disease (PD), two of the most prevalent neurological diseases, to the age-related macular degeneration (AMD) which is among the most common forms of retinal degeneration.
abstract_id: PUBMED:34758100
Assessing bidirectional associations between cognitive impairment and late age-related macular degeneration in the Age-Related Eye Disease Study 2. Introduction: We aimed to investigate bidirectional associations between cognitive impairment and late age-related macular degeneration (AMD).
Methods: Participants in the Age-Related Eye Disease Study 2 (AREDS2) received annual eye examinations and cognitive function testing (e.g., Modified Telephone Interview for Cognitive Status [TICS-M]). We examined bidirectional associations between cognitive impairment (e.g., a TICS-M score < 30) and late AMD at 5 and 10 years.
Results: Five thousand one hundred eighty-nine eyes (3157 participants; mean age 72.7 years) were analyzed and followed for a median of 10.4 years. Eyes of participants with cognitive impairment at baseline were more likely to progress to late AMD at 5 years (hazard ratio [HR], 1.24; 95% confidence interval [CI], 1.08-1.43) and 10 years (HR, 1.20; 95% CI, 1.05-1.37) than eyes of participants without cognitive impairment. Worse baseline AMD severity was not associated with developing cognitive impairment.
Discussion: Cognitive impairment is associated with late AMD progression in AREDS2. Our finding highlights the importance of eyecare for people with cognitive impairment.
Answer: Yes, Alzheimer's disease (AD) is related to age-related macular degeneration (AMD). Both conditions are age-related neurodegenerative diseases and share several features, including the presence of abnormal extracellular deposits associated with neuronal degeneration, such as drusen in AMD and plaques in AD (PUBMED:31995026). There is evidence linking microbial infection and inflammation to the development of both AD and AMD, suggesting that these conditions may have an infectious etiology that operates through a dysregulated inflammatory response (PUBMED:33867940). Additionally, the eye has a complex neurovascular system, and vascular complications like blood-retinal barrier (BRB) disruption have been suggested to play a role in both AMD and AD, resulting in neuronal cell death (PUBMED:36430581). Furthermore, a study found that patients with AD have an increased odds of having moderate diabetic retinopathy or worse, although the neurological diagnosis was not associated with AMD in the fully adjusted model (PUBMED:35912080). The involvement of oxysterols in oxidative stress, inflammation, and cell death processes is also implicated in both AD and AMD, as these oxidized lipids can induce dysfunctions that contribute to the pathogenesis of age-related diseases (PUBMED:25305550). Lastly, a study assessing bidirectional associations found that cognitive impairment is associated with late AMD progression, highlighting the interconnectedness of these conditions (PUBMED:34758100). |
Instruction: Carbohydrate challenge tests: do you need to measure methane?
Abstracts:
abstract_id: PUBMED:22561536
Carbohydrate challenge tests: do you need to measure methane? Objective: Breath tests that measure hydrogen (H2) have been judged reliable for the detection of lactose maldigestion (LM) and fructose malabsorption (FM). Recently, methane (CH4) testing has been advocated and measurement of CH4 in addition to H2 has been shown to increase the diagnostic accuracy for LM.
Purpose: This study was designed to consider the additional yield from CH4 measurement in patients tested for LM and FM.
Methods: Patients reported for testing after an overnight fast, not smoking and with their prior evening meal carbohydrate restricted. After challenge with 50 g lactose or 25 g fructose in water, end-alveolar breath samples collected over a 4-hour duration were analyzed for H2 and CH4. Diagnostic positivity was compared using a cutoff level of 20 ppm increase above fasting baseline for H2 alone, which is consistent with consensus guidelines, versus H2 plus twice CH4, which recognizes that CH4 consumes twice the hydrogen.
Results: There were 406 LM tests performed in 93 men and 313 women. Of those tested, 124 (30%) had a positive test for H2 and 139 (34%) had a positive test for H2 + CH4 ×2. There were 178 FM tests performed in 31 men and 147 women. Of those tested, 17 (9%) had a positive test for H2 and 42 (23%) had a positive test for H2 + CH4 ×2.
Conclusion: If H2 alone was measured without additional CH4 analysis, 4% of patients with LM and 14% patients with FM would not have been identified.
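A minimal sketch (not from the study) of how the two positivity rules in the abstract above could be applied to a single patient's breath series: the 20 ppm rise over fasting baseline and the CH4 ×2 weighting come from the abstract, while the function names and the sample values are hypothetical.

BASELINE_RISE_CUTOFF_PPM = 20  # consensus cutoff for a positive test

def h2_only_positive(h2_samples, h2_baseline):
    """Positive if any post-challenge H2 value rises >= 20 ppm above baseline."""
    return any(h2 - h2_baseline >= BASELINE_RISE_CUTOFF_PPM for h2 in h2_samples)

def h2_plus_2ch4_positive(h2_samples, ch4_samples, h2_baseline, ch4_baseline):
    """Positive if the combined H2 + 2*CH4 rise reaches the same 20 ppm cutoff."""
    combined_baseline = h2_baseline + 2 * ch4_baseline
    return any(
        (h2 + 2 * ch4) - combined_baseline >= BASELINE_RISE_CUTOFF_PPM
        for h2, ch4 in zip(h2_samples, ch4_samples)
    )

# Hypothetical 4-hour series sampled every 30 minutes (ppm):
h2 = [4, 6, 9, 12, 14, 15, 16, 17]
ch4 = [8, 9, 10, 12, 14, 15, 16, 17]
print(h2_only_positive(h2, h2_baseline=4))                             # False: max H2 rise is 13 ppm
print(h2_plus_2ch4_positive(h2, ch4, h2_baseline=4, ch4_baseline=8))   # True: combined rise is 31 ppm

This illustrates the kind of patient the abstract describes as missed by H2 measurement alone but identified once CH4 is weighted in.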
abstract_id: PUBMED:464732
Complete degradation of carbohydrate to carbon dioxide and methane by syntrophic cultures of Acetobacterium woodii and Methanosarcina barkeri. Methanosarcina barkeri (strain MS) grew and converted acetate to CO2 and methane after an adaptation period of 20 days. Growth and metabolism were rapid, with gas production comparable to that of cells grown on H2 and CO2. After an intermediary growth cycle under an H2 and CO2 atmosphere, acetate-adapted cells were capable of growth on acetate with formation of methane and CO2. When acetate-adapted Methanosarcina barkeri was co-cultured with Acetobacterium woodii on fructose or glucose as substrate, a complete conversion of the carbohydrate to gases (CO2 and CH4) was observed.
abstract_id: PUBMED:23578814
In vitro methane formation and carbohydrate fermentation by rumen microbes as influenced by selected rumen ciliate species. Ciliate protozoa contribute to ruminal digestion and emission of the greenhouse gas methane. Individual species of ciliates co-cultured with mixed prokaryote populations were hypothesized to utilize carbohydrate types differently. In an in vitro batch culture experiment, 0.6 g of pure cellulose or xylan was incubated for 24 h in 40-mL cultures of Entodinium caudatum, Epidinium ecaudatum, and Eudiplodinium maggii with accompanying prokaryotes. Irrespective of ciliate species, gas formation (mL) and short-chain fatty acid (SCFA) concentrations (mmol/L) were higher with xylan (71; 156) than with cellulose (52; 105). Methane did not differ (7.9% of total gas). The SCFA profiles resulting from fermentation of the carbohydrates were similar before and after removing the ciliates from the mixed microbial population. However, absolute methane production (mL per 24 h) was lower by 50% on average after removing E. caudatum and E. maggii. Methanogen copies were less without E. maggii, but not without E. ecaudatum. Within 3 weeks part of this difference was compensated. Butyrate proportion was higher in cultures with E. maggii and E. ecaudatum than with E. caudatum and only when fermenting xylan. In conclusion, the three ciliate species partly differed in their response to carbohydrate type and in supporting methane formation.
abstract_id: PUBMED:34803954
Changes in Carbohydrate Composition in Fermented Total Mixed Ration and Its Effects on in vitro Methane Production and Microbiome. The purpose of this experiment was to investigate the changes of carbohydrate composition in fermented total mixed diet and its effects on rumen fermentation, methane production, and rumen microbiome in vitro. The concentrate-to-forage ratio of the total mixed ration (TMR) was 4:6, and TMR was ensiled with lactic acid bacteria and fibrolytic enzymes. The results showed that different TMRs had different carbohydrate compositions and subfractions, fermentation characteristics, and bacterial community diversity. After fermentation, the fermented total mixed ration (FTMR) group had lower contents of neutral detergent fiber, acid detergent fiber, starch, non-fibrous carbohydrates, and carbohydrates. In addition, lactic acid content and relative abundance of Lactobacillus in the FTMR group were higher. Compared with the TMR group, the in vitro ammonia nitrogen and total volatile fatty acid concentrations and the molar proportion of propionate and butyrate were increased in the FTMR group. However, the ruminal pH, molar proportion of acetate, and methane production were significantly decreased in the FTMR group. Notably, we found that the relative abundance of ruminal bacteria was higher in FTMR than in TMR samples, including Prevotella, Coprococcus, and Oscillospira. At the same time, we found that the diversity of methanogens in the FTMR group was lower than that in the TMR group. The relative abundance of Methanobrevibacter significantly decreased, while the relative abundances of Methanoplanus and vadinCA11 increased. The relative abundances of Entodinium and Pichia significantly decreased in the FTMR group compared with the TMR group. These results suggest that FTMR can be used as an environmentally cleaner technology in animal farming due to its ability to improve ruminal fermentation, modulate the rumen microbiome, and reduce methane emissions.
abstract_id: PUBMED:7824863
Carbohydrate malabsorption: quantification by methane and hydrogen breath tests. Background: Previous studies in small series of healthy adults have suggested that parallel measurement of hydrogen and methane resulting from gut fermentation may improve the precision of quantitative estimates of carbohydrate malabsorption. Systematic, controlled studies of the role of simultaneous hydrogen and methane measurements using end-expiratory breath test techniques are not available.
Methods: We studied seven healthy, adult methane and hydrogen producers and seven methane non-producers by means of end-expiratory breath test techniques. Breath gas concentrations and gastrointestinal symptoms were recorded at intervals for 12h after ingestion of 10, 20 and 30 g lactulose.
Results: In the seven methane producers the excretion pattern was highly variable; the integrated methane responses were disproportional and not reliably reproducible. However, quantitative estimates of carbohydrate malabsorption on the basis of individual areas under the methane and hydrogen excretion curves (AUCs) tended to improve in methane producers after ingestion of 20 g lactulose by simple addition of AUCs of methane to the AUCs of the hydrogen curves. Estimates were no more precise in methane producers than similar estimates in non-producers. Gastrointestinal symptoms increased significantly with increasing lactulose dose; correlation with total hydrogen and methane excretion was weak.
Conclusions: Our study suggests that in methane producers, simple addition of methane and hydrogen excretion improves the precision of semiquantitative measurements of carbohydrate malabsorption. The status of methane production should, therefore, be known to interpret breath tests semiquantitatively. The weak correlation between hydrogen and methane excretion and gas-related abdominal complaints suggests that other factors than net production of these gases may be responsible for the symptoms.
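A hedged illustration of the "simple addition of AUCs" idea described in the abstract above: trapezoidal areas under hydrogen and methane excretion curves, with the methane area added to the hydrogen area for methane producers. The time points and gas concentrations below are made up and do not reproduce the study's data.

def auc_trapezoid(times, values):
    """Area under the curve by the trapezoidal rule (times in hours, values in ppm)."""
    return sum(
        (values[i] + values[i + 1]) / 2 * (times[i + 1] - times[i])
        for i in range(len(times) - 1)
    )

times = [0, 2, 4, 6, 8, 10, 12]      # hours after lactulose ingestion (hypothetical)
h2    = [5, 10, 30, 45, 35, 20, 10]  # breath H2, ppm (hypothetical)
ch4   = [10, 12, 18, 25, 22, 15, 12] # breath CH4, ppm (hypothetical)

h2_auc = auc_trapezoid(times, h2)
combined_auc = h2_auc + auc_trapezoid(times, ch4)  # estimate used for methane producers
print(h2_auc, combined_auc)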
abstract_id: PUBMED:7964127
The possible role of breath methane measurement in detecting carbohydrate malabsorption. To evaluate the possibility that measurement of breath methane (CH4) enhances the accuracy of breath hydrogen (H2) testing to diagnose carbohydrate malabsorption, breath CH4 concentration of healthy subjects was studied. Fasting breath CH4 concentration measured three times over a 30-minute period in 44 CH4-producing volunteers ranged from 5 to 120 ppm. Fluctuation of breath CH4 excretion exceeded 100% increase over fasting in 1 of 9 subjects who ingested a nonabsorbable, carbohydrate-free solution. Out of 13 subjects who had a false negative breath H2 response to lactulose, 11 had a CH4 percentage increase greater than 100%. In 11 of 32 lactose-intolerant patients with a negative breath H2 test, CH4 percentage increase after lactose challenge was greater than 100%. These data suggested that in methanogenic individuals, breath CH4 measurement might enhance the accuracy of H2 breath testing in detecting carbohydrate malabsorption.
abstract_id: PUBMED:16418921
Effect of the carbohydrate composition of feed concentrates on methane emission from dairy cows and their slurry. Dietary carbohydrate effects on methane emission from cows and their slurry were measured on an individual animal basis. Twelve dairy cows were fed three of six diets each (n = 6 per diet) of a forage-to-concentrate ratio of 1 : 1 (dry matter basis), and designed to cover the cows' requirements. The forages consisted of maize and grass silage, and hay. Variations were exclusively accomplished in the concentrates which were either rich in lignified or non-lignified fiber, pectin, fructan, sugar or starch. To measure methane emission, cows were placed into open-circuit respiration chambers and slurry was stored for 14 weeks in 60-L barrels with slurry being intermittently connected to this system. The enteric and slurry organic matter digestibility and degradation was highest when offering Jerusalem artichoke tubers rich in fructan, while acid-detergent fiber digestibility and degradation were highest in cows and slurries with the soybean hulls diet rich in non-lignified fiber. Multiple regression analysis, based on nutrients either offered or digested, suggested that, when carbohydrate variation is done in concentrate, sugar enhances enteric methanogenesis. The methane emission from the slurry accounted for 16.0 to 21.9% of total system methane emission. Despite a high individual variation, the methane emission from the slurry showed a trend toward lower values when the diet was characterized by lignified fiber, a diet where enteric methane release also had been lowest. The study disproved the assumption that a lower enteric methanogenesis, associated with a higher excretion of fiber, will inevitably lead to compensatory increases in methane emission during slurry storage.
abstract_id: PUBMED:11381982
Effect of moisture content on anaerobic digestion of dewatered sludge: ammonia inhibition to carbohydrate removal and methane production. The purpose of this study is to investigate the effect of moisture content on anaerobic digestion of dewatered sewage sludge under mesophilic conditions. The moisture contents of sludge fed to reactors were 97.0%, 94.6%, 92.9%, 91.1% and 89.0%. The VS removal efficiency changed from 45.6% to 33.8%, as the moisture content of sludge fed to digester decreased from 97.0% to 89.0%. The carbohydrate removal efficiency also decreased from 71.1% to 27.8%. Methane production decreased when the moisture content of sludge was lower than 91.1%. The number of glucose-consuming acidogenic bacteria decreased from 3.1 × 10^8 to 3.1 × 10^6 MPN/mL as the moisture content decreased from 91.1% to 89.0%. The numbers of hydrogenotrophic and acetoclastic methanogenic bacteria decreased by one order of magnitude when the moisture content was lower than 91.1%. The decrease in numbers of glucose-consuming acidogenic bacteria and methanogenic bacteria was found to correspond to the decrease in the carbohydrate removal efficiency and the accumulation of propionic acid. Batch experiments showed that acetoclastic methanogenic bacteria were acclimated to high ammonia concentrations, whereas glucose-consuming acidogenic bacteria were inhibited.
abstract_id: PUBMED:28892691
Combined pretreatment of electrolysis and ultra-sonication towards enhancing solubilization and methane production from mixed microalgae biomass. This study investigated the effect of combining pretreatment methods, ultra-sonication and electrolysis, at minimum energy input to recover the maximal carbohydrate and solubilization (in terms of sCOD) from mixed microalgae biomass. The composition of soluble chemical oxygen demand (COD), protein, and carbohydrate revealed that the hydrolysis method had a positive impact on the quantities released and thus enhanced methane yields. As a result, the combination of these two pretreatments showed the greatest yields of soluble protein and carbohydrate, 279 and 309 mg/L, corresponding to recovery of nearly 85% and 90% of their total content. BMP tests showed a peak methane production yield of 257 mL/g VS added for the hydrolysate of the combined pretreatment, compared with 138 mL/g VS added in the control experiment.
abstract_id: PUBMED:22361068
Evaluation of methane production and macronutrient degradation in the anaerobic co-digestion of algae biomass residue and lipid waste. Algae biomass residue was co-digested with lipid-rich fat, oil, and grease waste (FOG) to evaluate the effect on methane yield and macronutrient degradation. Co-digestion of algae biomass residue and FOG, each at 50% of the organic loading, allowed for an increased loading rate up to 3 g VS/L·d, resulting in a specific methane yield of 0.54 L CH4/g VS·d and a volumetric reactor productivity of 1.62 L CH4/L·d. Lipids were the key contributor to methane yields, accounting for 68-83% of the total methane potential. Co-digestion with algae biomass residue fractions of 33%, 50%, and 67% all maintained lipid degradations of at least 60% when the organic loading rate was increased to 3 g VS/L·d, while synergetic effects on carbohydrate and protein degradation were less evident with increased loading.
Answer: Based on the provided abstracts, it appears that measuring methane (CH4) in addition to hydrogen (H2) in breath tests can improve the diagnostic accuracy for detecting lactose maldigestion (LM) and fructose malabsorption (FM). The study in abstract PUBMED:22561536 found that if only H2 was measured without additional CH4 analysis, 4% of patients with LM and 14% of patients with FM would not have been identified. This suggests that including CH4 measurement can lead to a more comprehensive identification of carbohydrate malabsorption.
Furthermore, the abstract PUBMED:7824863 indicates that in methane producers, the addition of methane and hydrogen excretion improves the precision of semi-quantitative measurements of carbohydrate malabsorption. This implies that knowing the status of methane production is important for interpreting breath tests semi-quantitatively.
Additionally, the abstract PUBMED:7964127 suggests that in methanogenic individuals, breath CH4 measurement might enhance the accuracy of H2 breath testing in detecting carbohydrate malabsorption.
In conclusion, the evidence from these studies supports the notion that measuring methane in carbohydrate challenge tests is beneficial and can increase the diagnostic yield for identifying carbohydrate malabsorption disorders such as LM and FM. |
Instruction: Can the Seattle heart failure model be used to risk-stratify heart failure patients for potential left ventricular assist device therapy?
Abstracts:
abstract_id: PUBMED:19285613
Can the Seattle heart failure model be used to risk-stratify heart failure patients for potential left ventricular assist device therapy? Background: According to results of the REMATCH trial, left ventricular assist device therapy in patients with severe heart failure has resulted in a 48% reduction in mortality. A decision tool will be necessary to aid in the selection of patients for destination left ventricular assist devices (LVADs) as the technology progresses for implantation in ambulatory Stage D heart failure patients. The purpose of this analysis was to determine whether the Seattle Heart Failure Model (SHFM) can be used to risk-stratify heart failure patients for potential LVAD therapy.
Methods: The SHFM was applied to REMATCH patients with the prospective addition of inotropic agents and intra-aortic balloon pump (IABP) +/- ventilator.
Results: The SHFM was highly predictive of survival (p = 0.0004). One-year SHFM-predicted survival was similar to actual survival for both the REMATCH medical (30% vs 28%) and LVAD (49% vs 52%) groups. The estimated 1-year survival with medical therapy for patients in REMATCH was 30 +/- 21%, but with a range of 0% to 74%. The 1- and 2-year estimated survival was ≤50% for 81% and 98% of patients, respectively. There was no evidence that the benefit of the LVAD varied in the lower vs higher risk patients.
Conclusions: The SHFM can be used to risk-stratify end-stage heart failure patients, provided known markers of increased risk are included, such as inotrope use and IABP +/- ventilator support. The SHFM may facilitate identification of high-risk patients to evaluate for potential LVAD implantation by providing an estimate of 1-year survival with medical therapy.
abstract_id: PUBMED:29252053
Renal risk stratification in left ventricular assist device therapy. Introduction: Left ventricular assist device (LVAD) therapy has greatly reduced mortality for patients with advanced heart failure (HF), both as a bridge to heart transplantation and as destination therapy. However, among other comorbidities, LVAD recipients face a risk of renal dysfunction, related to either the residual effects of HF or to LVAD support, which complicates the management of these patients and increases the risk of an adverse clinical outcome, including death.
Areas Covered: The authors summarize the current understanding of pre-LVAD predictors of post-LVAD renal dysfunction and need for renal replacement therapy (RRT), including emerging data about the risk conferred by proteinuria. The authors also discuss dynamics changes in renal function after LVAD placement, the importance of perioperative hemodynamic management in lowering renal risk, and the challenges of managing LVAD patients requiring chronic RRT.
Expert Commentary: A requirement for RRT before or after LVAD placement portends a high risk of mortality, suggesting a need to identify patients at high risk for post-LVAD RRT. Proteinuria and reduced renal function prior to LVAD placement predict RRT and should be included in the risk assessment of patients being considered for LVAD therapy.
abstract_id: PUBMED:30422006
Neurological complications associated with left ventricular assist device therapy. Introduction: Associated with significant morbidity and mortality, neurological complications in adult patients with left ventricular assist devices (LVAD) approaches a prevalence as high as 25%. As the number of individuals using LVAD support grows, it is increasingly important for providers to understand the hematologic and hemodynamic changes associated with LVAD implantation, the risk factors for neurological complications and their mitigation strategies. Areas covered: PubMed searches were completed using the terms 'Left ventricular assist device and stroke' (994 results) then 'Left ventricular assist device and stroke risk factors' (199 results). Results were filtered by 'humans' (178 results). The manuscript focuses on the risk factors and mitigation strategies for stroke identified in the literature following LVAD implantation and managing this complication. Expert commentary: There is little consensus on how to accurately predict stroke risk in the LVAD population. While some recent large-scale clinical trials identified a limited number of risk factors, further research is warranted to generate reliable predictive models and treatment protocols for these patients. This should include developing novel agents and monitoring techniques to individualize anticoagulation therapy while safely balancing the risk of bleeding, thrombosis and stroke. A multi-specialty commitment is necessary to further standardize the management of these patients.
abstract_id: PUBMED:28602376
Left Ventricular Assist Device in Older Adults. Left ventricular assist devices (LVADs) are an effective therapy for a growing and aging population against a background of limited donor supply. Selecting the proper patient involves assessment of indications, risk factors, scores for overall outcomes, assessment for right ventricular failure, and optimal timing of implantation. LVAD implantation carries a 5% to 10% perioperative mortality and complications of bleeding, thrombosis, stroke, infection, right ventricular failure, and device failure. As LVAD engineering technology evolves, so will the risk-prediction scores. Hence, more large-scale prospective data from multiple centers will continually be required to aid in patient selection, reduce complications, and improve long-term outcomes.
abstract_id: PUBMED:30592069
A novel, highly discriminatory risk model predicting acute severe right ventricular failure in patients undergoing continuous-flow left ventricular assist device implant. Various risk models with differing discriminatory power and predictive accuracy have been used to predict right ventricular failure (RVF) after left ventricular assist device (LVAD) placement. There remains an unmet need for a contemporary risk score for continuous flow (CF)-LVADs. We sought to independently validate and compare existing risk models in a large cohort of patients and develop a simple, yet highly predictive risk score for acute, severe RVF. Data from the Mechanical Circulatory Support Research Network (MCSRN) registry, consisting of patients who underwent CF-LVAD implantation, were randomly divided into equal-sized derivation and validation samples. RVF scores were calculated for the entire sample, and the need for a right ventricular assist device (RVAD) was the primary endpoint. Candidate predictors from the derivation sample were subjected to backward stepwise logistic regression until the model with lowest Akaike information criterion value was identified. A risk score was developed based on the identified variables and their respective regression coefficients. Between May 2004 and September 2014, 734 patients underwent implantation of CF-LVADs [HeartMate II LVAD, 76% (n = 560), HeartWare HVAD, 24% (n = 174)]. A RVAD was required in 4.5% (n = 33) of the patients [Derivation cohort, n = 15 (4.3%); Validation cohort, n = 18 (5.2%); P = 0.68)]. 19.5% of the patients (n = 143) were female, median age at implant was 59 years (IQR, 49.4-65.3), and median INTERMACS profile was 3 (IQR, 2-3). RVAD was required in 4.5% (n = 33) of the patients. Correlates of acute, severe RVF in the final model included heart rate, albumin, BUN, WBC, cardiac index, and TR severity. Areas under the curves (AUC) for most commonly used risk predictors ranged from 0.61 to 0.78. The AUC for the new model was 0.89 in the derivation and 0.92 in the validation cohort. Proposed risk model provides very high discriminatory power predicting acute severe right ventricular failure and can be reliably applied to patients undergoing placement of contemporary continuous flow left ventricular assist devices.
abstract_id: PUBMED:35325091
HVAD to HeartMate 3 left ventricular assist device exchange: Best practices recommendations. The HeartWare HVAD System (Medtronic) is a durable implantable left ventricular assist device that has been implanted in approximately 20,000 patients worldwide for bridge to transplant and destination therapy indications. In December 2020, Medtronic issued an Urgent Medical Device Communication informing clinicians of a critical device malfunction in which the HVAD may experience a delay or failure to restart after elective or accidental discontinuation of pump operation. Moreover, evolving retrospective comparative effectiveness studies of patients supported with the HVAD demonstrated a significantly higher risk of stroke and all-cause mortality when compared with a newer generation of a commercially available durable left ventricular assist device. Considering the totality of this new information on HVAD performance and the availability of an alternate commercially available device, Medtronic halted the sale and distribution of the HVAD System in June 2021. The decision to remove the HVAD from commercial distribution now requires the use of the HeartMate 3 left ventricular assist system (Abbott, Inc) if a patient previously implanted with an HVAD requires a pump exchange. The goal of this document is to review important differences in the design of the HVAD and HeartMate 3 that are relevant to the medical management of patients supported with these devices, and to assess the technical aspects of an HVAD-to-HeartMate 3 exchange. This document provides the best available evidence that supports best practices. (J Thorac Cardiovasc Surg 2022;-:1-8).
abstract_id: PUBMED:35341579
HVAD to HeartMate 3 left ventricular assist device exchange: Best practices recommendations. The HeartWare HVAD System (Medtronic) is a durable implantable left ventricular assist device that has been implanted in approximately 20,000 patients worldwide for bridge to transplant and destination therapy indications. In December 2020, Medtronic issued an Urgent Medical Device Communication informing clinicians of a critical device malfunction in which the HVAD may experience a delay or failure to restart after elective or accidental discontinuation of pump operation. Moreover, evolving retrospective comparative effectiveness studies of patients supported with the HVAD demonstrated a significantly higher risk of stroke and all-cause mortality when compared with a newer generation of a commercially available durable left ventricular assist device. Considering the totality of this new information on HVAD performance and the availability of an alternate commercially available device, Medtronic halted the sale and distribution of the HVAD System in June 2021. The decision to remove the HVAD from commercial distribution now requires the use of the HeartMate 3 left ventricular assist system (Abbott, Inc) if a patient previously implanted with an HVAD requires a pump exchange. The goal of this document is to review important differences in the design of the HVAD and HeartMate 3 that are relevant to the medical management of patients supported with these devices, and to assess the technical aspects of an HVAD-to-HeartMate 3 exchange. This document provides the best available evidence that supports best practices.
abstract_id: PUBMED:28396040
Risk Assessment and Comparative Effectiveness of Left Ventricular Assist Device and Medical Management in Ambulatory Heart Failure Patients: The ROADMAP Study 2-Year Results. Objectives: The authors sought to provide the pre-specified primary endpoint of the ROADMAP (Risk Assessment and Comparative Effectiveness of Left Ventricular Assist Device and Medical Management in Ambulatory Heart Failure Patients) trial at 2 years.
Background: The ROADMAP trial was a prospective nonrandomized observational study of 200 patients (97 with a left ventricular assist device [LVAD], 103 on optimal medical management [OMM]) that showed that survival with improved functional status at 1 year was better with LVADs compared with OMM in a patient population of ambulatory New York Heart Association functional class IIIb/IV patients.
Methods: The primary composite endpoint was survival on original therapy with improvement in 6-min walk distance ≥75 m.
Results: Patients receiving LVAD versus OMM had lower baseline health-related quality of life, reduced Seattle Heart Failure Model 1-year survival (78% vs. 84%; p = 0.012), and were predominantly INTERMACS (Interagency Registry for Mechanically Assisted Circulatory Support) profile 4 (65% vs. 34%; p < 0.001) versus profiles 5 to 7. More LVAD patients met the primary endpoint at 2 years: 30% LVAD versus 12% OMM (odds ratio: 3.2 [95% confidence interval: 1.3 to 7.7]; p = 0.012). Survival as treated on original therapy at 2 years was greater for LVAD versus OMM (70 ± 5% vs. 41 ± 5%; p < 0.001), but there was no difference in intent-to-treat survival (70 ± 5% vs. 63 ± 5%; p = 0.307). In the OMM arm, 23 of 103 (22%) received delayed LVADs (18 within 12 months; 5 from 12 to 24 months). LVAD adverse events declined after year 1 for bleeding (primarily gastrointestinal) and arrhythmias.
Conclusions: Survival on original therapy with improvement in 6-min walk distance was superior with LVAD compared with OMM at 2 years. Reduction in key adverse events beyond 1 year was observed in the LVAD group. The ROADMAP trial provides risk-benefit information to guide patient- and physician-shared decision making for elective LVAD therapy as a treatment for heart failure. (Risk Assessment and Comparative Effectiveness of Left Ventricular Assist Device and Medical Management in Ambulatory Heart Failure Patients [ROADMAP]; NCT01452802).
abstract_id: PUBMED:32998831
The impact of uncorrected mild aortic insufficiency at the time of left ventricular assist device implantation. Objective: The study objective was to investigate the progression of uncorrected mild aortic insufficiency and its impact on survival and functional status after left ventricular assist device implantation.
Methods: We retrospectively reviewed 694 consecutive patients who underwent implantation of a continuous-flow left ventricular assist device between January 2006 and March 2018. Pre-left ventricular assist device transthoracic echocardiography identified 111 patients with mild aortic insufficiency and 493 patients with trace or no aortic insufficiency. To adjust for differences in preoperative factors, propensity score matching was used, resulting in 101 matched patients in each of the mild aortic insufficiency and no aortic insufficiency groups.
Results: Although both groups showed similar survival (P = .58), the mild aortic insufficiency group experienced a higher incidence of readmission caused by heart failure (hazard ratio, 2.62; 95% confidence interval, 1.42-4.69; P < .01). In the mixed-effects model, pre-left ventricular assist device mild aortic insufficiency was a significant risk factor for both moderate or greater aortic insufficiency and worsening New York Heart Association functional status (P < .01).
Conclusions: Patients with uncorrected mild aortic insufficiency had a higher risk of progression to moderate or greater aortic insufficiency after left ventricular assist device implantation with worse functional status and higher incidence of readmission caused by heart failure compared with patients without aortic insufficiency. Further investigations into the safety and efficacy of concomitant aortic valve procedures for mild aortic insufficiency at the time of left ventricular assist device implant are warranted to improve patients' quality of life, considering the longer left ventricular assist device use as destination therapy and bridge to transplant with the new US heart allocation system.
abstract_id: PUBMED:26793331
Chronic outpatient management of patients with a left ventricular assist device. The use of mechanical circulatory support (MCS) as treatment for advanced heart failure (HF) has grown exponentially over the past 15 years. The continuous flow left ventricular assist device (CF-LVAD) has become the most used form of MCS in advanced HF, especially since approval of use as destination therapy (DT) and with the lack of organ availability. Long-term survival has improved and diligent outpatient management is thus particularly critical to achieve optimal outcomes. This review will discuss outpatient management strategies for patients with HF and a left ventricular assist device (LVAD).
Answer: Yes, the Seattle Heart Failure Model (SHFM) can be used to risk-stratify heart failure patients for potential left ventricular assist device (LVAD) therapy. A study applied the SHFM to patients from the REMATCH trial, which included the prospective addition of inotropic agents and intra-aortic balloon pump (IABP) +/- ventilator support. The SHFM was found to be highly predictive of survival (p = 0.0004), with one-year SHFM-predicted survival being similar to actual survival for both the medical therapy and LVAD groups in the REMATCH trial. The SHFM may facilitate the identification of high-risk patients to evaluate for potential LVAD implantation by providing an estimate of 1-year survival with medical therapy (PUBMED:19285613). |
Instruction: Are obesity and anthropometry risk factors for diabetic retinopathy?
Abstracts:
abstract_id: PUBMED:21482643
Are obesity and anthropometry risk factors for diabetic retinopathy? The diabetes management project. Purpose: To investigate the relationship between anthropometric parameters and diabetic retinopathy (DR) in adults with diabetes.
Methods: Five hundred participants with diabetes were recruited prospectively from ophthalmology clinics in Melbourne, Australia. Each underwent an eye examination, anthropometric measurements, and standardized interview-administered questionnaires, and fasting blood glucose and serum lipids were analyzed. Two-field fundus photographs were taken and graded for DR. Height; weight; body mass index (BMI); waist, hip, neck, and head circumferences; and skinfold measurements were recorded.
Results: A total of 492 patients (325 men, 66.1%) aged between 26 and 90 years (median, 65) were included in the analysis: 171 (34.8%), 187 (38.0%), and 134 (27.2%) with no DR, nonproliferative DR (NPDR), and proliferative DR (PDR), respectively. After multiple adjustments, higher BMI (odds ratio [OR], 1.06; 95% confidence interval [CI], 1.01-1.11; P = 0.02) was significantly associated with any DR. Obese people were 6.5 times more likely to have PDR than were those with normal weight (OR, 6.52; 95% CI, 1.49-28.6; P = 0.013). Neck circumference (OR, 1.05; 95% CI, 1.00-1.10; P = 0.03) and waist circumference (OR, 1.12; 95% CI, 1.03-1.22; P = 0.01) were significantly associated with any DR. BMI (OR, 1.04; 95% CI, 1.00-1.08; P = 0.04) and neck circumference (OR, 1.04; 95% CI, 1.01-1.08; P = 0.04) were also positively associated with increasing severity levels of DR.
Conclusions: Persons with diabetes with higher BMI and larger neck circumference are more likely to have DR and more severe stages of DR. These data suggest that obesity is an independent risk factor for DR.
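As a purely arithmetic note on interpreting the per-unit odds ratios reported above: under the fitted logistic model, an OR of 1.06 per BMI unit compounds multiplicatively, so the implied OR for a k-unit BMI difference is 1.06^k (holding the other covariates fixed and assuming linearity in BMI). A minimal sketch:

# The abstract reports OR = 1.06 per BMI unit for any DR.
or_per_unit = 1.06
for k in (1, 5, 10):
    print(f"BMI difference of {k} units -> implied OR ~ {or_per_unit ** k:.2f}")
# 1 -> 1.06, 5 -> ~1.34, 10 -> ~1.79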
abstract_id: PUBMED:35996907
Burden and Risk Factors of Diabetic Retinopathy Among Diabetic Patients Attending a Multispecialty Tertiary Eye Hospital in Nepal. Introduction: As the number of people with diabetes mellitus is increasing because of urbanization, changes in dietary habits, and sedentary lifestyles, the number of diabetic retinopathy cases is also expected to increase in the future. We aimed to find out the prevalence of diabetic retinopathy and associated risk factors among diabetic patients in the tertiary eye hospital.
Materials And Methods: This is an observational cross-sectional study enrolling 420 diabetic patients visiting the multispecialty tertiary eye hospital between March 2020 and February 2021. Anthropometry measurements, laboratory risk profiles, and blood pressure were recorded. Results: The prevalence of any diabetic retinopathy, proliferative diabetic retinopathy, and diabetic macular edema was 30.96%, 6.19%, and 5.95%, respectively. The duration of DM (p=0.001), hypertension (p=0.04), high SBP (p=0.023), abdominal obesity (p=0.015), high LDL cholesterol (p=0.011), low HDL cholesterol (p=0.012), and creatinine (p=0.001) were associated with DR in our study.
Conclusion: A holistic approach should target control of modifiable risk factors such as blood sugar, blood pressure, lipid profile, kidney function, and obesity to prevent DR. Anthropometric assessment of waist-to-height ratio and waist circumference should be included in the holistic health promotion strategy in Nepal, as BMI may not be a risk factor for DR in Nepalese people.
abstract_id: PUBMED:27779095
Associations between diabetic retinopathy and systemic risk factors. Introduction: Diabetes mellitus is a systemic disease with complications that include sight-threatening diabetic retinopathy. It is essential to understand the risk factors of diabetic retinopathy before effective prevention can be implemented. The aim of this review was to examine the association between diabetic retinopathy and systemic risk factors.
Methods: A PubMed literature search was performed up to May 2016 to identify articles reporting associations between diabetic retinopathy and systemic risk factors; only publications written in English were included. Relevant articles were selected and analysed.
Results: Patients with diabetic retinopathy were more likely to have poor glycaemic control as reflected by a higher glycated haemoglobin, longer duration of diabetes, and use of insulin therapy for treatment. For other systemic risk factors, hypertension was positively associated with prevalence and progression of diabetic retinopathy. No clear association between obesity, hyperlipidaemia, gender, or smoking and diabetic retinopathy has been established, as studies have reported inconsistent findings. Myopia was a protective factor for the development of diabetic retinopathy. Several genetic polymorphisms were also found to be associated with an increased risk of development of diabetic retinopathy.
Conclusions: Good glycaemic and blood pressure control remain the most important modifiable risk factors to reduce the risk of progression of diabetic retinopathy and vision loss.
abstract_id: PUBMED:26676661
Risk Factors and Comorbidities in Diabetic Neuropathy: An Update 2015. Distal symmetric sensorimotor polyneuropathy (DSPN) is the most common neurological manifestation in diabetes. Major risk factors of DSPN include diabetes duration, hyperglycemia, and age, followed by prediabetes, hypertension, dyslipidemia, and obesity. Height, smoking, insulin resistance, hypoinsulinemia, and others represent an additional risk. Importantly, hyperglycemia, hypertension, dyslipidemia, obesity, and smoking are modifiable. Stringent glycemic control has been shown to be effective in type 1, but not to the same extent in type 2 diabetes. Antilipidemic treatment, especially with fenofibrate, and multi-factorial intervention have produced encouraging results, but more experience is necessary. The major comorbidities of DSPN are depression, autonomic neuropathy, peripheral arterial disease, cardiovascular disease, nephropathy, retinopathy, and medial arterial calcification. Knowledge of risk factors and comorbidities has the potential to enrich the therapeutic strategy in clinical practice as part of the overall medical care for patients with neuropathy. This article provides an updated overview of DSPN risk factors and comorbidities.
abstract_id: PUBMED:24548738
Risk factors associated with retinal vein occlusion. Aims: Retinal vein occlusion (RVO) is the most frequent retinal vascular disease after diabetic retinopathy in which arterial risk factors are much more relevant than venous factors. The objective was to evaluate the role of risk factors in the development of the first episode of RVO.
Subjects And Methods: One hundred patients with RVO [mean age 56 years, 42% female, and mean body mass index (BMI) 27.5 kg/m^2] were recruited consecutively from the outpatient clinic of a tertiary hospital in Valencia (Spain). All subjects underwent clinical assessment including anthropometric and blood pressure measurements and laboratory tests including homocysteine, antiphospholipid antibodies (aPLAs) and thrombophilia studies. In half of the subjects, a carotid ultrasonography was performed. Three control populations matched by age, sex and BMI from different population-based studies were used to compare the levels and prevalence of arterial risk factors. One cohort of young patients with venous thromboembolic disease was used to compare the venous risk factors.
Results: Blood pressure levels and the prevalence of hypertension were significantly higher in the RVO population when compared with those for the general populations. There was also a large proportion of undiagnosed hypertension within the RVO group. Moreover, carotid evaluation revealed that a large proportion of patients with RVO had evidence of subclinical organ damage. In addition, homocysteine levels and prevalence of aPLAs were similar to the results obtained in our cohort of venous thromboembolic disease.
Conclusions: The results indicate that hypertension is the key factor in the development of RVO, and that RVO can be the first manifestation of an undiagnosed hypertension. Furthermore, the majority of these patients had evidence of atherosclerotic disease. Among the venous factors, a thrombophilia study does not seem to be useful and only the prevalence of hyperhomocysteinaemia and aPLAs is higher than in the general population.
abstract_id: PUBMED:3743309
Risk factors for diabetic retinopathy: a population-based study in Rochester, Minnesota. Retinopathy is an important sequela of diabetes mellitus, but clinical risk factors for this condition have rarely been assessed in a geographically defined population. In this population-based study, the 1135 Rochester, Minnesota, residents with diabetes mellitus initially diagnosed between 1945 and 1969 (incidence cohort) were followed through their complete medical records in the community to January 1, 1982. Because most of the cases of diabetic retinopathy in Rochester residents developed in patients with non-insulin-dependent diabetes mellitus (NIDDM), risk factors for diabetic retinopathy were examined in this group (N = 1031). A proportional hazards model identified the following risk factors for diabetic retinopathy in NIDDM: elevated initial fasting blood glucose level, marked obesity, and earlier age at onset of diabetes. Stratified analyses indicated that duration of diabetes was also significantly associated with an increased risk of retinopathy. Two secular trends, increasing detection of "mild" NIDDM and decreasing risk of diabetic retinopathy, had a major effect on retinopathy risk assessment. These data also suggest that insulin therapy is not an independent risk factor for diabetic retinopathy.
abstract_id: PUBMED:30598943
The prevalence and risk factors of diabetic retinopathy in selected primary care centers during the 3-year screening intervals. Objectives: This study aimed to determine the prevalence and progression of diabetic retinopathy (DR) and its risk factors in patients with diabetes attending primary care centers.
Methods: This study was a cross-sectional chart review that was conducted in three randomly selected primary care centers. A total of 250 patients with diabetes had three consecutive annual screenings for DR from April 2014 to April 2017. At the initial visit, the ophthalmological findings were recorded. For three successive yearly screening, the screening results were assessed to estimate the changes that occurred in the prevalence, incidence, and progression of DR in addition to the degree of association with the most predictable risk factors.
Results: The initial prevalence of DR was 15.2%. The findings over three consecutive screening intervals revealed a steady increase in the prevalence of DR. There was no significant association between DR and known risk factors including sex, type of diabetes mellitus (DM), obesity, and smoking. On the other hand, the duration of DM, hemoglobin A1c level, uncontrolled diabetes, hypertension, dyslipidemia, nephropathy, insulin treatment, and age were identified as strong predictors of DR among diabetics in this study.
Conclusion: DR, a serious microvascular complication of DM, is an asymptomatic disease with a slow onset and gradual progression. Primary prevention is highly recommended to control the risk factors that will delay the onset and progression of DR.
abstract_id: PUBMED:30113147
The Prevalence and Risk Factors for Diabetic Retinopathy in Shiraz, Southern Iran. Globally, diabetic retinopathy (DR) is one of the leading causes of blindness, which diminishes quality of life. This study aimed to describe the prevalence of DR and its associated risk factors. This cross-sectional study was carried out among 478 diabetic patients in a referral center in Fars province, Iran. The mean±standard deviation age of the participants was 56.64±12.45 years, and the prevalence of DR was 32.8%. In multivariable analysis, lower education levels (adjusted odds ratio [aOR], 0.43; 95% confidence interval [CI], 0.24 to 0.76), being overweight (aOR, 1.70; 95% CI, 1.02 to 2.83) or obese (aOR, 1.88; 95% CI, 1.09 to 3.26), diabetes duration of 10 to 20 years (aOR, 2.35; 95% CI, 1.48 to 3.73) and over 20 years (aOR, 5.63; 95% CI, 2.97 to 10.68), receiving insulin (aOR, 1.99; 95% CI, 1.27 to 3.10), and having chronic diseases (aOR, 1.71; 95% CI, 1.02 to 2.85) were significantly associated with DR. In conclusion, longer diabetes duration and obesity or having chronic diseases are strongly associated with DR, suggesting that control of these risk factors may reduce both the prevalence and impact of retinopathy in Iran.
abstract_id: PUBMED:24336029
Kidney and eye diseases: common risk factors, etiological mechanisms, and pathways. Chronic kidney disease is an emerging health problem worldwide. The eye shares striking structural, developmental, and genetic pathways with the kidney, suggesting that kidney disease and ocular disease may be closely linked. A growing number of studies have found associations of chronic kidney disease with age-related macular degeneration, diabetic retinopathy, glaucoma, and cataract. In addition, retinal microvascular parameters have been shown to be predictive of chronic kidney disease. Chronic kidney disease shares common vascular risk factors including diabetes, hypertension, smoking, and obesity, and pathogenetic mechanisms including inflammation, oxidative stress, endothelial dysfunction, and microvascular dysfunction, with ocular diseases supporting the 'Common Soil Hypothesis.' In this review, we present major epidemiological evidence for these associations and explore underlying pathogenic mechanisms and common risk factors for kidney and ocular disease. Understanding the link between kidney and ocular disease can lead to the development of new treatment and screening strategies for both diseases.
abstract_id: PUBMED:29951551
Prevalence of Chronic Complications, Their Risk Factors, and the Cardiovascular Risk Factors among Patients with Type 2 Diabetes Attending the Diabetic Clinic at a Tertiary Care Hospital in Sri Lanka. Diabetes imposes a heavy burden on patients and the healthcare system. Assessment of disease burden is important in taking necessary precautions and management decisions. We aimed to determine the prevalence of macro- and microvascular complications, their risk factors, and coronary artery disease (CAD) risk factors among patients with type 2 diabetes mellitus (T2DM). A descriptive cross-sectional single-centre study was carried out among 3000 patients with T2DM attending the diabetic clinic at the National Hospital of Sri Lanka from January to July 2016. The study population had 72.7% females and 27.3% males. Mean age and disease duration were 58.3 ± 10.3 and 10.8 ± 7 years, respectively. Prevalence of CAD, stroke, and peripheral vascular disease were 10.6%, 1.1%, and 4.7% while diabetic retinopathy, neuropathy, nephropathy, diabetic foot, and lower extremity amputation (LEA) were 26.1%, 62.6%, 50.8%, 2.6%, and 1.3%, respectively. Prevalence of overweight/obesity, hypertension, dyslipidemia, and smoking were 80%, 77.6%, 76.7%, and 11%, respectively. Increased age, disease duration, and HbA1c were risk factors for microvascular disease and diabetic foot, while age was the only risk factor for macrovascular complications. The occurrence of CAD, peripheral neuropathy, diabetic foot, and LEA was significantly higher among males than among females. This study highlights the major burden of chronic complications and high prevalence of CAD risk factors in this population.
Answer: Yes, obesity and anthropometry are risk factors for diabetic retinopathy. Studies have shown that higher body mass index (BMI) is significantly associated with any diabetic retinopathy (DR) and with increasing severity levels of DR. Specifically, obese individuals were found to be 6.5 times more likely to have proliferative diabetic retinopathy (PDR) than those with normal weight (PUBMED:21482643). Additionally, larger neck circumference and waist circumference were also significantly associated with any DR (PUBMED:21482643).
In Nepal, abdominal obesity was associated with DR, and it was suggested that anthropometric assessment of waist to height and waist circumference should be included in health promotion strategies, as BMI may not be a risk factor for DR in the Nepalese population (PUBMED:35996907).
In addition, a population-based study in Rochester, Minnesota, identified marked obesity as a risk factor for diabetic retinopathy in non-insulin-dependent diabetes mellitus (NIDDM) (PUBMED:3743309), and a study in Shiraz, Iran, found that being overweight or obese was significantly associated with DR (PUBMED:30113147).
However, some studies have reported inconsistent findings: a study in primary care centers found no significant association between DR and known risk factors including obesity (PUBMED:30598943). This highlights that while there is a general trend supporting obesity and certain anthropometric measures as risk factors for DR, these associations may vary across populations and study designs.
Overall, the evidence suggests that obesity and certain anthropometric measures are risk factors for diabetic retinopathy, and managing these factors may help in the prevention and management of DR. |
Instruction: Late recurrence after surgery for cholangiocarcinoma: implications for follow-up?
Abstracts:
abstract_id: PUBMED:36125544
Usefulness of hepatobiliary scintigraphy for predicting late complications in patients with choledochal cysts. Purpose: Hepatobiliary scintigraphy is a minimally invasive imaging method that evaluates bile flow dynamics. At our hospital, it has been performed for postoperative evaluation of patients with choledochal cysts (CC). This study evaluated the usefulness of biliary scintigraphy for predicting late complications in patients with CCs.
Methods: The study included pediatric patients with CC who underwent surgery at Chiba University Hospital from 1978 to 2020, followed by postoperative biliary scintigraphy and subsequent radiologic evaluation. The patients were divided into two groups according to the presence or absence of "biliary cholestasis" on biliary scintigraphy.
Results: The study included 108 patients, with a median age at surgery of 2 years and 11 months. The median follow-up period was 5203 days, with 11 hepatolithiasis cases and 8 cholangitis cases. No patients had cholangiocarcinoma. Twelve patients were considered to have "cholestasis" following biliary scintigraphy evaluation. There was no significant difference in the occurrence of hepatolithiasis between the cholestasis and non-cholestasis groups (p = 0.47), but cholangitis was significantly more common in the cholestasis group (p = 0.016).
Conclusion: Biliary cholestasis on postoperative hepatobiliary scintigraphy was a risk factor for cholangitis in patients with CCs. These particular patients should be monitored carefully.
abstract_id: PUBMED:18842505
Late recurrence after surgery for cholangiocarcinoma: implications for follow-up? Background: Biliary tract cancer is uncommon, but has a high rate of early recurrence and a poor prognosis. There is only limited information on patients surviving more than 5 years after resection.
Methods: We report a patient who developed recurrence 8 years after resection of cholangiocarcinoma. Descriptions of late recurrence after excision of cholangiocarcinoma are reviewed.
Results: Few long-term survivors with biliary tract cancer have been reported. The survivors tend to have well differentiated or papillary tumors. The present case had no recurrence for 8 years despite poor prognostic factors including poor differentiation, invasion through the muscle wall and perineural invasion. It has been suggested that tumor cells left after the first operation grow and present as late recurrence. There is a need to differentiate a new primary and field change from recurrence of the previous tumor.
Conclusions: Long-term follow-up after resection of cholangiocarcinoma is needed because late recurrence after 5 years occurs. The mortality rate between 5 and 10 years after resection of cholangiocarcinoma ranges from 6% to 43% in different series. Early detection of local recurrence may give an opportunity for further surgical resection.
abstract_id: PUBMED:30606203
Specific risk factors contributing to early and late recurrences of intrahepatic cholangiocarcinoma after curative resection. Background: Most intrahepatic cholangiocarcinoma (ICC) patients experienced tumor recurrences even after curative resection, but the optimal cut-off time point and the specific risk factors for early and late recurrences of ICC have not been clearly defined. The objective of the current study was to define specific risk factors for early and late recurrences of ICC after radical hepatectomy.
Methods: Included in this study were 259 ICC patients who underwent curative surgery at our hospital between January 2005 and December 2009. Recurrences in these patients were followed-up prospectively. Piecewise regression model and the minimum P value approach were used to estimate the optimal cut-off time point for early and late recurrences. Then, Cox's proportional hazards regression model was used to identify specific independent risk factors for early and late recurrences.
Results: Early and late recurrences occurred in 130 and 74 patients, respectively, and the 12th month was confirmed as the optimal cut-off time point for early and late recurrences. Cox's proportional hazards regression model showed that microvascular invasion (HR = 2.084, 95% CI 1.115-3.897, P = 0.021), multiple tumors (HR = 2.071, 95% CI 1.185-3.616, P = 0.010), abnormal elevation of serum CA19-9 (HR = 1.619, 95% CI 1.076-2.437, P = 0.021), and negative hepatitis B status (HR = 1.650, 95% CI 1.123-2.427, P = 0.011) were independent risk factors for early recurrence, whereas HBV-DNA level > 10^6 IU/mL (HR = 1.785, 95% CI 1.015-3.141, P = 0.044) and a history of hepatolithiasis (HR = 2.538, 95% CI 1.165-5.533, P = 0.010) contributed independently to late recurrence.
Conclusion: Specific risk factors and mechanisms may relate to early and late recurrences of ICC after curative resection.
abstract_id: PUBMED:20152355
Long-term outcomes after hepaticojejunostomy for choledochal cyst: a 10- to 27-year follow-up. Introduction: Choledochal cyst (CC) is closely associated with anomalous arrangement of the pancreaticobiliary duct, which is considered a high-risk factor for biliary tract malignancy. Early diagnosis and early treatment for CC could lead to a good prognosis. This study investigated late complications and long-term outcomes after surgery for CC.
Patients And Methods: Fifty-six patients with CC and over 10 years of postoperative follow-up were analyzed retrospectively. All patients had undergone total resection of the extrahepatic bile duct and hepaticojejunostomy.
Results: Six patients showed liver dysfunction in the first 10 years after surgery, but all returned to normal thereafter. Dilatation of the intrahepatic bile ducts persisted postoperatively in 6 patients and was still apparent more than 10 years later in 3. Recurrent abdominal pain was encountered in 3 patients; 1 had pancreas divisum with a pancreatic stone, and 1 had adhesive small bowel obstruction. Two patients developed biliary tract malignancy. A 14-year-old girl died of recurrent common bile duct cancer 2 years after the initial resection of a CC containing adenocarcinoma. A 26-year-old man with repeated cholangitis owing to multiple intrahepatic bile stones developed cholangiocarcinoma 26 years after the initial resection of CC. The event-free survival rate and overall survival rate were 89% (50/56) and 96% (54/56), respectively.
Conclusions: Choledochal cyst generally has an excellent prognosis with early total resection and reconstruction. Long-term surveillance for the development of malignancy is still essential, especially if there is ongoing dilatation of the intrahepatic bile duct or biliary stones.
abstract_id: PUBMED:35877262
Intensive Follow-Up Program and Oncological Outcomes of Biliary Tract Cancer Patients after Curative-Intent Surgery: A Twenty-Year Experience in a Single Tertiary Medical Center. Aim: The aim of this research was to assess the impact of an intensive follow-up program on BTC patients who had received surgery with curative intent at a tertiary referral hospital.
Methods: BTC patients were followed-up every three months during the first two years after their first surgery and every six months from the third to the fifth post-operative year.
Results: A total of 278 BTC patients who received R0/R1 surgery were included. A total of 17.7% of patients underwent a second surgery following disease relapse, and none of these patients experienced additional disease relapse.
Conclusions: An intensive follow-up after surgical resection may help in the early identification of disease relapse, leading to early treatment and prolonged survival in selected cases.
abstract_id: PUBMED:8784405
Surgical treatment of hepatolithiasis: long-term results. Background: Hepatolithiasis is a common disease in East Asia and is prevalent in Taiwan. Surgical and nonsurgical procedures for management of hepatolithiasis have been discussed, but long-term follow-up results of surgical treatment of hepatolithiasis are rarely reported.
Methods: We conducted a retrospective study of case records of patients with hepatolithiasis who underwent surgical or nonsurgical percutaneous transhepatic cholangioscopy treatment. Of 614 patients with hepatolithiasis seen between January 1984 and December 1988, 427 underwent follow-up after surgical (380) or percutaneous transhepatic cholangioscopy (47) treatment for 4 to 10 years and constituted the basis of this study.
Results: Over 4 to 10 years of follow-up, the long-term results in the 427 patients with hepatolithiasis treated surgically or nonsurgically were as follows: recurrent stones, 29.6% (105 of 355); repeated operation, 18.7% (80 of 427); secondary biliary cirrhosis, 6.8% (29 of 427); late development of cholangiocarcinoma, 2.8% (12 of 427); and mortality, 10.3% (44 of 427). The patients with hepatectomy had a better quality of life (symptom-free) with a lower recurrent stone rate (9.5%), lower mortality rate (2.1%), and lower incidence of secondary biliary cirrhosis (2.1%) and cholangiocarcinoma (0%) than did the nonhepatectomy group (p < 0.01). The patients without residual stones after choledochoscopy had a better quality of life than did the residual stone group (p < 0.01).
Conclusions: Long-term follow-up study of hepatolithiasis after surgical treatment revealed a high recurrent stone rate (29.6%) that required repeated surgery and a high mortality rate (10.3%) resulting from repeated cholangitis, secondary biliary cirrhosis, and late development of cholangiocarcinoma. Patients who received hepatectomy or without residual stones after choledochoscopy had a good prognosis and quality of life.
abstract_id: PUBMED:32690464
Choledochal cysts: Management and long-term follow-up. Background: Choledochal cysts are congenital anomalies that can occur at any level of the biliary tree. They carry long-term risk of biliary complications and cancer development. Complete excision of all involved bile ducts is recommended.
Methods: Patients treated between 1995 and 2019 were reviewed retrospectively.
Results: Sixty patients (46 female and 14 male) with a median age of 41 years (range 13-83) were included in the study. Mild abdominal pain was the most common presenting symptom (60%). The majority of patients had Todani type I cysts (67%). Concomitant biliary malignancy was diagnosed in five patients (9%). Eight patients (13%) were followed up conservatively. Twenty-five patients were treated by excision of the extrahepatic bile ducts and Roux-en-Y hepaticojejunostomy; liver resection was added in seven, pancreatoduodenectomy was performed in three, and liver transplantation in one. There was no perioperative mortality. Postoperative complications developed in 17 patients (34%), two requiring surgical treatment. Four of the five patients with malignancies died at a median of 42 months (range 6-95) following surgery. A median follow-up of 62 months (range 8-280) was available for 45 surgically treated patients, 19 of whom were followed up for more than 10 years. None of the patients developed malignancy during follow-up. Four patients (17%) were readmitted for anastomotic strictures requiring treatment.
Conclusion: The majority of choledochal cysts are Todani type-I and early cyst excision is the mainstay of management, which may decrease the risk of malignant transformation.
abstract_id: PUBMED:30167887
Austrian consensus guidelines on imaging requirements prior to hepatic surgery and during follow-up in patients with malignant hepatic lesions. Rapid advances in imaging technology have improved the detection, characterization and staging of colorectal liver metastases, hepatocellular carcinoma and cholangiocarcinoma. A variety of imaging modalities are available and play a pivotal role in the work-up of patients, particularly as imaging findings determine resectability. Surgery often represents the only measure that can render long-term survival possible. Imaging is also indispensable for the assessment of responses to neoadjuvant treatment and for the detection of recurrence. At a consensus meeting held in June 2017 in Vienna, Austria, Austrian experts in the fields of surgery and radiology discussed imaging requirements prior to and after hepatic surgery for malignant liver lesions. This consensus was refined by online voting on a total of 47 items. Generally, the degree of consensus was high. The recommendations relate to the type of preferred preoperative imaging modalities, technical settings with respect to computed tomography and magnetic resonance imaging, use of contrast agents, reporting, postoperative follow-up, and long-term follow-up. Taking local resources into account, these consensus recommendations can be implemented in daily clinical practice at specialized centers as well as outpatient diagnostic institutes in Austria.
abstract_id: PUBMED:32385663
Cholangiocarcinoma Following Bariatric Surgery: a Prospective Follow-Up Single-Center Audit. Background: Cholangiocarcinoma (CC) incidence is rising worldwide. Obesity and its related metabolic impairments are associated with primitive liver malignancies including CC. While bariatric surgery (BS) is associated with decreased risk of incident cancer, few data are available regarding CC incidence, presentation, and management issues after BS.
Methods: We retrospectively reviewed collected data on 1911 consecutive patients undergoing BS from 2010 to 2019.
Results: We recorded three cases (0.16%) of CC during the postoperative follow-up. All cases had undergone a Roux-en-Y gastric bypass (RYGB) for class III obesity with metabolic diseases (i.e., type 2 diabetes mellitus, hypertension, dyslipidemia, and obstructive sleep apnea) and had no personal or familial history of biliary disease. The patients presented with jaundice or pruritus at 8, 12, and 13 months after RYGB, which led to the diagnosis of metastatic CC in all cases. In this palliative setting, without access to the pancreato-biliary system, biliary drainage was ensured by a percutaneous trans-hepatic biliary drain. Chemotherapy was initiated in two patients. The patients died 2, 11, and 17 months after diagnosis, respectively.
Conclusion: The incidence of post-BS CC appears low, but the prognosis is poor because of the advanced stage at diagnosis. These cases illustrate the difficulty of achieving both timely diagnosis and optimal management of CC, especially in patients who have undergone RYGB, in whom access to the excluded anatomy is limited.
abstract_id: PUBMED:19855260
Twenty-five-year follow-up for liver resection: the personal series of Dr. Joseph G. Fortner. Background: Liver resection is performed for many types of malignancies. Few reports document actual long-term survival.
Methods: Long-term follow-up of a series of 548 liver resections performed between 1970 and 1992 was performed and is presented.
Results: All patients were followed for at least 15 years after surgery. Of the 476 cancer patients, the 5-, 10-, 15-, and 20-year survival rates were 38%, 25%, 20%, and 17%, respectively. Within this group, 108 and 88 patients were actual 10- and 15-year survivors, respectively. Median survival time in months varied by tumor type: metastatic neuroendocrine (81 months), biliary cancer (cholangiocarcinoma) (63 months), gallbladder cancer (47 months), metastatic colorectal cancer (40 months), and hepatocellular carcinoma (27 months). Survivors of each tumor type living more than 25 years were documented. Patients disease-free 10 years after resection for metastatic colorectal cancer or gallbladder cancer were usually considered cured. Patients with diagnoses of hepatocellular carcinoma, cholangiocarcinoma, or other metastases (including neuroendocrine tumors or sarcomas) continued to recur and die of disease.
Conclusion: Liver resection can be performed with long-term survival and potential curative outcome in a variety of primary and metastatic cancers of the liver.
Answer: Late recurrence after surgery for cholangiocarcinoma is a significant concern, as it can occur even in patients who have been disease-free for extended periods. The literature suggests that long-term follow-up is necessary due to the possibility of late recurrence, which can happen more than 5 years after resection (PUBMED:18842505). The mortality rate between 5 and 10 years after resection ranges from 6% to 43% in different series, indicating that late recurrence is not uncommon and can have serious implications for patient survival (PUBMED:18842505).
Specific risk factors have been identified for early and late recurrences of intrahepatic cholangiocarcinoma (ICC) after curative resection. For late recurrences, a history of hepatolithiasis and HBV-DNA levels greater than 10^6 IU/mL have been identified as independent risk factors (PUBMED:30606203). This suggests that patients with these risk factors may require more vigilant follow-up.
Moreover, the presence of biliary cholestasis on postoperative hepatobiliary scintigraphy has been identified as a risk factor for cholangitis in patients with choledochal cysts, which could potentially be extrapolated to suggest that similar imaging techniques might be useful in monitoring for late complications in cholangiocarcinoma patients (PUBMED:36125544).
An intensive follow-up program after surgical resection of biliary tract cancer, including cholangiocarcinoma, may help in the early identification of disease relapse, leading to early treatment and potentially prolonged survival (PUBMED:35877262). This underscores the importance of a structured follow-up program.
In summary, the implications for follow-up after surgery for cholangiocarcinoma include the need for long-term, possibly lifelong, monitoring due to the risk of late recurrence. This follow-up should be tailored to the individual patient's risk factors and may benefit from the use of imaging modalities that can detect early signs of recurrence or complications. An intensive follow-up program may improve outcomes by facilitating early intervention upon disease relapse (PUBMED:18842505; PUBMED:30606203; PUBMED:36125544; PUBMED:35877262). |
Instruction: Is gait speed improving performance of the EuroSCORE II for prediction of early mortality and major morbidity in the elderly?
Abstracts:
abstract_id: PUBMED:24819199
Is gait speed improving performance of the EuroSCORE II for prediction of early mortality and major morbidity in the elderly? Background: The aim of this study was to verify if gait speed can be an incremental predictor for mortality and/or major morbidity in combination with EuroSCORE II.
Methods: A single-center prospective cohort study of 150 patients aged 70 years or older undergoing cardiac surgery between August 2012 and April 2013. Slow gait speed was defined as a time of ≥6 seconds to walk 5 meters. The logistic EuroSCORE and EuroSCORE II were used for risk stratification.
Results: The study group had a mean age of 77.7±5.2 years, and the mean time taken to walk 5 meters was 4.9±1.01 (range 3.0-8.6) seconds. Slow gait speed was recorded in 21 patients (14%), who were classified as frail; the other 129 patients (86%) were classified as active. The logistic EuroSCORE risk was not significantly different between the two groups (P=0.528). The EuroSCORE II risk, however, was significantly higher for the frail group (P=0.023). There was no mortality, and the rate of major morbidity did not differ significantly between the frail (28.6%) and active (17.1%) groups (P=0.209); slow gait speed could not be identified as an independent predictor. Nevertheless, frailty demonstrated incremental value in improving the performance of the logistic EuroSCORE model for predicting early mortality and/or major morbidity in this elderly patient population. This was not the case for EuroSCORE II.
Conclusions: We confirm the incremental value of frailty, evaluated by gait speed, in improving the mortality and morbidity prediction of the logistic EuroSCORE model in elderly patients undergoing cardiac surgery. We could not confirm this for the new EuroSCORE II model.
abstract_id: PUBMED:21050978
Gait speed as an incremental predictor of mortality and major morbidity in elderly patients undergoing cardiac surgery. Objectives: The purpose of this study was to test the value of gait speed, a clinical marker for frailty, to improve the prediction of mortality and major morbidity in elderly patients undergoing cardiac surgery.
Background: It is increasingly difficult to predict the elderly patient's risk posed by cardiac surgery because existing risk assessment tools are incomplete.
Methods: A multicenter prospective cohort of elderly patients undergoing cardiac surgery was assembled at 4 tertiary care hospitals between 2008 and 2009. Patients were eligible if they were 70 years of age or older and were scheduled for coronary artery bypass and/or valve replacement or repair. The primary predictor was slow gait speed, defined as a time taken to walk 5 m of ≥ 6 s. The primary end point was a composite of in-hospital post-operative mortality or major morbidity.
Results: The cohort consisted of 131 patients with a mean age of 75.8 ± 4.4 years; 34% were female patients. Sixty patients (46%) were classified as slow walkers before cardiac surgery. Slow walkers were more likely to be female (43% vs. 25%, p = 0.03) and diabetic (50% vs. 28%, p = 0.01). Thirty patients (23%) experienced the primary composite end point of mortality or major morbidity after cardiac surgery. Slow gait speed was an independent predictor of the composite end point after adjusting for the Society of Thoracic Surgeons risk score (odds ratio: 3.05; 95% confidence interval: 1.23 to 7.54).
Conclusions: Gait speed is a simple and effective test that may identify a subset of vulnerable elderly patients at incrementally higher risk of mortality and major morbidity after cardiac surgery.
abstract_id: PUBMED:38198923
Gait speed assessment as a prognostic tool for morbidity and mortality in vulnerable older adult patients following vascular surgery. Introduction: Predicting the risk associated with vascular surgery in older adult patients has become increasingly challenging, primarily due to limitations in existing risk assessment tools. This study aimed to evaluate the utility of gait speed, a clinical indicator of frailty, in enhancing the prediction of mortality and morbidity in older adult patients undergoing vascular surgery.
Methods: A single-center prospective cohort study was conducted, involving older adult patients undergoing vascular surgery at four tertiary care hospitals between 2021 and 2022. Eligible patients were aged 80 years or older and scheduled for surgical treatment of peripheral arterial disease of the lower limbs (IIb Leriche-Le Fontaine). The primary factor of interest was slow gait speed, defined as taking more than 6 s to walk 5 meters. The primary outcomes were in-hospital postoperative mortality and major morbidity.
Results: The cohort comprised 131 patients with a mean age of 82.8 ± 1.4 years, with 34 % being female. Before vascular surgery, 60 patients (46 %) were categorized as slow walkers. Slow walkers were more likely to be female (43 % vs. 25 %, p < 0.03) and diabetic (50 % vs. 28 %, p < 0.01). Among the patients, 30 (23 %) experienced the primary composite outcome of mortality or major morbidity following vascular surgery. After adjusting for the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP®) Surgical Risk Calculator, slow gait speed independently predicted the composite outcome (odds ratio: 3.05; 95 % confidence interval: 1.23 to 7.54).
Conclusions: Gait speed is a straightforward and effective test that can help identify a subgroup of frail older adult patients at an elevated and incremental risk of mortality and major morbidity after vascular surgery. While gait speed remains a valuable clinical indicator of frailty, it is important to recognize that the broader context of mobility plays a pivotal role in postoperative outcomes.
abstract_id: PUBMED:33558164
Performance of EuroSCORE II and Society of Thoracic Surgeons risk scores in elderly patients undergoing aortic valve replacement surgery. Background: In cardiac surgery, risk is estimated with models such as EuroSCORE II and the Society of Thoracic Surgeons (STS) score. Performance of these scores may vary across various patient age ranges.
Aim: To assess the effect of patient age on performance of the EuroSCORE II and STS scores, regarding postoperative mortality after surgical aortic valve replacement.
Methods: In a prospective cohort of patients, we assessed risk stratification of EuroSCORE II and STS scores for discrimination of in-hospital mortality with the area under the receiver operating characteristic curve (AUROC) and calibration with the Hosmer-Lemeshow test. Two groups of patients were compared: elderly (aged >75 years) and younger patients.
Results: Of 1229 patients included, 635 (51.7%) were elderly. Mean EuroSCORE II score was 3.7±4.4% and mean STS score was 2.1±1.5%. Overall in-hospital mortality was 4.8% and was higher in the elderly compared with younger patients (6.6% vs. 2.8%; log-rank P=0.014). AUROC for the EuroSCORE II score was lower in elderly than in younger patients (0.731 vs. 0.784; P=0.025). Similarly, AUROC for the STS score was lower in elderly versus younger patients (0.738 vs. 0.768; P=0.017). In elderly patients, EuroSCORE II and STS scores were not adequately calibrated and significantly underestimated mortality. Age was independently associated with mortality, regardless of EuroSCORE II or STS score.
Conclusions: In this cohort, EuroSCORE II and STS scores did not perform as well in elderly patients as in younger patients. Elderly patients may be at increased postoperative risk, regardless of risk score.
abstract_id: PUBMED:23665983
Comparison of standard Euroscore, logistic Euroscore and Euroscore II in prediction of early mortality following coronary artery bypass grafting Objective: EuroSCORE is the most widely used risk prediction system. Standard EuroSCORE, which had been published in 1999, was revised as a Logistic EuroSCORE in 2003. Further, it was reconsidered and published as EuroSCORE II in 2011. In this study we compared Standard, Logistic EuroSCORE and EuroSCORE II in prediction of early mortality following coronary artery bypass grafting.
Methods: We retrospectively analyzed 406 patients who underwent coronary artery bypass grafting between 2011 and 2012. The standard EuroSCORE, logistic EuroSCORE, and EuroSCORE II were compared using ROC analysis.
Results: In the overall population, the mean standard EuroSCORE was 3.25±1.05, the mean logistic EuroSCORE was 2.48±0.58, the mean EuroSCORE II was 1.30±0.09, and overall mortality was 2.46% (10/406). The area under the curve (AUC) was 0.992 (95% CI: 0.978-0.998) for the standard EuroSCORE, 0.992 (95% CI: 0.977-0.998) for the logistic EuroSCORE, and 0.990 (95% CI: 0.975-0.997) for EuroSCORE II. In high-risk patients (standard EuroSCORE ≥ 6), the AUC was 0.870 (95% CI: 0.707-0.961) for the standard EuroSCORE, 0.857 (95% CI: 0.691-0.954) for the logistic EuroSCORE, and 0.961 (95% CI: 0.829-0.998) for EuroSCORE II.
Conclusion: The standard EuroSCORE, logistic EuroSCORE, and EuroSCORE II are similarly successful in predicting mortality. EuroSCORE II may be better in high-risk patients, which needs confirmation in large prospective studies.
abstract_id: PUBMED:23536616
The new EuroSCORE II does not improve prediction of mortality in high-risk patients undergoing cardiac surgery: a collaborative analysis of two European centres. Objectives: Prediction of operative risk in adult patients undergoing cardiac surgery remains a challenge, particularly in high-risk patients. In Europe, the EuroSCORE is the most commonly used risk-prediction model, but is no longer accurately calibrated to be used in contemporary practice. The new EuroSCORE II was recently published in an attempt to improve risk prediction. We sought to assess the predictive value of EuroSCORE II compared with the original EuroSCOREs in high-risk patients.
Methods: Patients who underwent surgery between 1 April 2006 and 31 March 2011 with a preoperative logistic EuroSCORE ≥ 10 were identified from prospective cardiac surgical databases at two European institutions. Additional variables included in EuroSCORE II, but not in the original EuroSCORE, were retrospectively collected through patient chart review. The C-statistic to predict in-hospital mortality was calculated for the additive EuroSCORE, logistic EuroSCORE and EuroSCORE II models. The Hosmer-Lemeshow test was used to assess model calibration by comparing observed and expected mortality in a number of risk strata. The fit of EuroSCORE II was compared with the original EuroSCOREs using Akaike's Information Criterion (AIC).
Results: A total of 933 patients were identified; the median additive EuroSCORE was 10 (interquartile range [IQR] 9-11), median logistic EuroSCORE 15.3 (IQR 12.0-24.1) and median EuroSCORE II 9.3 (5.8-15.6). There were 90 (9.7%) in-hospital deaths. None of the EuroSCORE models performed well with a C-statistic of 0.67 for the additive EuroSCORE and EuroSCORE II, and 0.66 for the logistic EuroSCORE. Model calibration was poor for the EuroSCORE II (chi-square 16.5; P = 0.035). Both the additive EuroSCORE and logistic EuroSCORE had a numerically better model fit, the additive EuroSCORE statistically significantly so (difference in AIC was -5.66; P = 0.017).
Conclusions: The new EuroSCORE II does not improve risk prediction in high-risk patients undergoing adult cardiac surgery when compared with original additive and logistic EuroSCOREs. The key problem of risk stratification in high-risk patients has not been addressed by this new model. Future iterations of the score should explore more advanced statistical methods and focus on developing procedure-specific algorithms. Moreover, models that predict complications in addition to mortality may prove to be of increasing value.
abstract_id: PUBMED:35873132
Performance of the EuroSCORE II Model in Predicting Short-Term Mortality of General Cardiac Surgery: A Single-Center Study in Taiwan. Background: The latest European System for Cardiac Operative Risk Evaluation (EuroSCORE) II is a well-accepted risk evaluation system for mortality in cardiac surgery in Europe.
Objectives: To determine the performance of this new model in Taiwanese patients.
Methods: Between January 2012 and December 2014, 657 patients underwent cardiac surgery at our institution. The EuroSCORE II scores of all patients were determined preoperatively. The short-term surgical outcomes of 30-day and in-hospital mortality were evaluated to assess the performance of the EuroSCORE II.
Results: Of the 657 patients [192 women (29.22%); age 63.5 ± 12.68 years], the 30-day mortality rate was 5.48%, and the in-hospital mortality rate was 9.28%. The discrimination power of this new model was good in all populations, regardless of 30-day mortality or in-hospital mortality. Good accuracy was also noted in different procedures related to coronary artery bypass grafting, and good calibration was noted for cardiac procedures (p value > 0.05). When predicting surgical death within 30 days, the EuroSCORE II overestimated the risk (observed to expected: 0.79), but in-hospital mortality was underestimated (observed to expected: 1.33). The predictive ability [area under the curve (AUC) of the receiver operating characteristic (ROC) curve] and calibration of the EuroSCORE II for 30-day mortality (0.792) and in-hospital mortality (0.825) suggested that in-hospital mortality is a better endpoint for the EuroSCORE II.
Conclusions: The new EuroSCORE II model performed well in predicting short-term outcomes among patients undergoing general cardiac surgeries. For short-term outcomes, in-hospital mortality was better than 30-day mortality as an indicator of surgical results, suggesting that it may be a better endpoint for the EuroSCORE II.
abstract_id: PUBMED:30570081
Physical frailty and gait speed in community elderly: a systematic review. Objective: To identify the outcomes of studies on gait speed and its use as a marker of physical frailty in community elderly.
Method: Systematic review of the literature performed in the following databases: LILACS, SciELO, MEDLINE/PubMed, ScienceDirect, Scopus and ProQuest. The studies were evaluated by STROBE statement, and the PRISMA recommendations were adopted.
Results: There were 6,303 studies, and 49 of them met the inclusion criteria. Of the total number of studies, 91.8% described the way of measuring gait speed. Of these, 28.6% used the distance of 4.6 meters, and 34.7% adopted values below 20% as cutoff points for reduced gait speed, procedures in accordance with the frailty phenotype. Regarding the outcomes, in 30.6% of studies, there was an association between gait speed and variables of disability, frailty, sedentary lifestyle, falls, muscular weakness, diseases, body fat, cognitive impairment, mortality, stress, lower life satisfaction, lower quality of life, napping duration, and poor performance in quantitative parameters of gait in community elderly.
Conclusion: The results reinforce the association between gait speed, physical frailty and health indicator variables in community elderly.
abstract_id: PUBMED:35579398
Validation for EuroSCORE II in the Indonesian cardiac surgical population: a retrospective, multicenter study. Background: In 2011, the European System for Cardiac Operative Risk (EuroSCORE) II was created as an improvement of the additive/logistic EuroSCORE for the prediction of mortality after cardiac surgery.
Objective: To validate EuroSCORE II in predicting the mortality of open cardiac surgery patients in Indonesia.
Methods: We performed a multi-center retrospective study of cardiac surgery patients from three participating centers (Dr. Sardjito Hospital, Kariadi Hospital, and Abdul Wahab Sjahranie Hospital) between January 1st, 2016, and December 31st, 2020. Discrimination and calibration tests were performed.
Results: The observed mortality rate was 9.5% (73 out of 767 patients). The median EuroSCORE II value was 1.13%. The area under the curve for EuroSCORE II was 0.71 (95% CI: 0.65-0.77), suggesting fair discriminatory power. Calibration analysis suggested that EuroSCORE II underestimated postoperative mortality. Gender, age, chronic pulmonary disease, limited mobility, NYHA, and critical pre-operative state were significant predictors of post-cardiac surgery mortality in our population.
Conclusion: This study suggested that the EuroSCORE II was a poor predictor for postoperative mortality in Indonesian patients who underwent cardiac surgery procedures. Therefore, EuroSCORE II may not be suitable for mortality risk prediction in Indonesian populations, and surgical planning should be decided on an individual basis.
abstract_id: PUBMED:36155751
Performance of EuroSCORE II in Latin America: a systematic review Background: EuroSCORE II is a mortality risk score for cardiac surgery in adults. This version is widely validated and compared with other scores in Europe, North America, and Asia.
Aim: To determine the performance of the EuroSCORE II for the prediction of mortality in cardiac surgeries in Latin America.
Material And Methods: A systematic review was carried out of studies from Latin American countries evaluating the performance of EuroSCORE II in cardiac surgery. The inclusion criteria were studies of patients older than 18 years from Latin America, published in English, Spanish, and/or Portuguese between 2012 and 2020, with the term "EuroSCORE II" in the title. Observed mortality and EuroSCORE II-estimated mortality data were extracted. Calibration was determined by the observed/estimated mortality ratio, and discrimination was evaluated using receiver operating characteristic (ROC) curves.
Results: Four articles met the inclusion criteria, including 8372 patients. The average patient age was 62 years, and 34% were women. The observed and EuroSCORE II-estimated mortality rates were 7.08% and 3.89%, respectively. The average area under the ROC curve was 0.77, and the observed/estimated mortality ratio was 2.04.
Conclusions: In these studies, EuroSCORE II underestimated mortality in cardiac surgery.
Answer: The studies suggest that gait speed, as an indicator of frailty, can be an incremental predictor of mortality and major morbidity in elderly patients undergoing cardiac surgery, but its impact on improving the performance of the EuroSCORE II specifically is mixed.
A study by Afilalo et al. (PUBMED:21050978) found that gait speed is a simple and effective test that may identify a subset of vulnerable elderly patients at incrementally higher risk of mortality and major morbidity after cardiac surgery. This suggests that gait speed could potentially enhance the predictive accuracy of existing risk models.
Another study (PUBMED:38198923) also supports the use of gait speed as a prognostic tool for morbidity and mortality in vulnerable older adult patients following vascular surgery. It found that slow gait speed independently predicted the composite outcome of mortality or major morbidity after adjusting for the ACS NSQIP® Surgical Risk Calculator.
However, a single-center study of 150 elderly patients undergoing cardiac surgery (PUBMED:24819199) found that while frailty demonstrated by slow gait speed had incremental value in improving the performance of the logistic EuroSCORE model for predicting early mortality and/or major morbidity, this was not the case for EuroSCORE II. The authors could not confirm the incremental value of gait speed for the new EuroSCORE II model.
In the context of EuroSCORE II's performance, other studies (PUBMED:33558164, PUBMED:23665983, PUBMED:23536616, PUBMED:35873132) have evaluated its predictive accuracy and calibration in different populations and settings. Some found that EuroSCORE II may not be as effective in certain patient groups, such as high-risk patients or specific regional populations (PUBMED:35579398, PUBMED:36155751), suggesting that there may be limitations to its generalizability and accuracy.
In summary, while gait speed has been shown to be a valuable predictor of outcomes in elderly patients undergoing cardiac surgery, its role in specifically improving the performance of EuroSCORE II is not conclusively supported across all studies. Additional research may be needed to determine how gait speed could be integrated into the EuroSCORE II model to enhance its predictive power for elderly patients. |
Instruction: Is Breast Conserving Therapy a Safe Modality for Early-Stage Male Breast Cancer?
Abstracts:
abstract_id: PUBMED:26718092
Is Breast Conserving Therapy a Safe Modality for Early-Stage Male Breast Cancer? Introduction: Male breast cancer (MBC) is a rare disease and lacks data-based treatment guidelines. Most men are currently treated with modified radical mastectomy (MRM) or simple mastectomy (SM). We compared the oncologic treatment outcomes of early-stage MBC to determine whether breast conservation therapy (BCT) is appropriate.
Materials And Methods: We searched the Surveillance, Epidemiology, and End Results database for MBC cases. That cohort was narrowed to cases of stage I-II, T1-T2N0 MBC with surgical and radiation therapy (RT) data available. The patients had undergone MRM, SM, or breast conservation surgery (BCS) with or without postoperative RT. We calculated the actuarial 5-year cause-specific survival (CSS).
Results: We identified 6263 MBC cases and included 1777 men with stage I or II, T1-T2, node-negative disease, who had the required treatment information available. MRM without RT was the most common treatment (43%). Only 17% underwent BCS. Of the BCS patients, 46% received adjuvant RT to complete the traditional BCT. No deaths were recorded in the BCT group, regardless of stage, or in the 3 stage I surgical groups if the men had received RT. The actuarial 5-year CSS was 100% in each BCT group. MRM alone resulted in an actuarial 5-year CSS of 97.3% for stage I and 91.2% for stage II disease.
Conclusion: The results from our study suggest that BCT for early-stage MBC yields comparable survival compared with more invasive treatment modalities (ie, MRM or SM alone). This could shift the treatment paradigm to less-invasive interventions and might have the added benefit of increased functional and psychological outcomes. Further prospective studies are needed to confirm our conclusions.
abstract_id: PUBMED:9381091
Quality of treatment in operable breast carcinoma. Comparison of the years before 1987, 1987-1990 and 1991-1994 Background: Has progress in the treatment of breast cancer been translated into routine practice? What can be further ameliorated? We present a first step in quality assurance by examining the quality of care in early-stage breast cancer during recent years.
Methods: Retrospective analysis of actual care in 300 patients with operable invasive breast cancer. Analysis and comparison of treatment in 3 time-periods based on date of diagnosis (before 1987, 1987-1991, 1991-6/1994).
Results: Staging, surgical treatment and histopathological analysis have become more complete over these years. There is, however, no tendency to diagnose smaller tumors in our series. The percentage of patients undergoing breast-conserving surgery has not increased since 1987. Overall, 25% of cancers were treated by breast-conserving surgery. The rate of ipsilateral breast recurrences after breast-conserving surgery was 19% if the breast was irradiated, and 67% when radiation had been omitted (median follow-up 50 months). Adjuvant systemic therapy is now given to many node negative patients. Combined adjuvant therapy (endocrine plus chemotherapy) was rarely used. Early consultation of medical oncology has increased in recent years.
Conclusion: Progress in the treatment of early-stage breast cancer has only partially been translated into clinical practice. To ensure that treatment decisions conform to the most recent standards, quality controls are necessary. The simplest form of quality control is a multidisciplinary approach, which should be used early, in every case, and if necessary, repeatedly.
abstract_id: PUBMED:18210656
How to treat male breast cancer. The prevalence of breast cancer in males in Europe is estimated to be 1 or less per 100,000. Male breast cancer has a peak incidence at the age of 71 years. There are no randomized data on the optimal therapy for male breast cancer patients, which limits firm conclusions. The preferred primary surgical therapy is modified radical/simple mastectomy, but breast-conserving surgery has also been used in males. Post-operative radiotherapy should be used on a more routine basis, as males have shorter breast-anatomical distances and are diagnosed at a later stage compared with females. The preferred adjuvant therapy modality to date has been tamoxifen for patients with endocrine-responsive disease. The use of aromatase inhibitors in males is more controversial, since they may not deplete estradiol levels sufficiently. Different chemotherapy regimens have been used in the adjuvant and metastatic settings. In institutional and review comparisons, the use of adjuvant therapy has been demonstrated to result in improved outcomes.
abstract_id: PUBMED:35810531
The role of postoperative radiation therapy in stage I-III male breast cancer: A population-based study from the surveillance, epidemiology, and End Results database. Background: This study aimed to investigate the role of postoperative radiation therapy in a large population-based cohort of patients with stage I-III male breast cancer (MaBC).
Methods: Patients with stage I-III breast cancer treated with surgery were selected from the Surveillance, Epidemiology, and End Results cancer database from 2010 to 2015. Multivariate logistic regression identified the predictors of radiation therapy administration. Multivariate Cox regression model was used to evaluate the predictors of survival.
Results: We identified 1321 patients. Age, stage, positive regional nodes, surgical procedure, and HER2 status were strong predictors of radiation therapy administration. There was no difference in overall survival (OS) between patients who received radiation therapy and those who did not (P = 0.46); however, after propensity score matching, radiation therapy was associated with improved OS (P = 0.04). In the multivariate analysis of the unmatched cohort, the factors associated with better OS were administration of radiation therapy and chemotherapy. In the subset analysis of the unmatched cohort, postoperative radiation therapy was associated with improved OS in men undergoing breast-conserving surgery (BCS) who had four or more positive nodes or larger primary tumours (T3/T4). Furthermore, we found no benefit of radiation therapy after mastectomy (MS), regardless of the type of axillary surgery. In older MaBC patients with T1-2N1 disease who underwent MS, radiation therapy showed no significant effect, regardless of chemotherapy.
Conclusion: Postoperative radiation therapy could improve the survival of MaBC patients undergoing BCS who have four or more positive nodes or larger primary tumours. Moreover, it should be considered carefully in patients undergoing MS and in older T1-2N1 patients.
abstract_id: PUBMED:33721121
An updated review of epidemiology, risk factors, and management of male breast cancer. Unlike female breast cancer, male breast cancer (MBC) is rare and not well understood. Prospective data on the management of MBC are lacking, and the majority of treatment strategies are adopted from established guidelines for breast cancer in women. The understanding of the biology, clinical presentation, genetics, and management of MBC is evolving, but a large knowledge gap remains due to the rarity of this disease. Older age, high estradiol levels, Klinefelter syndrome, radiation exposure, gynecomastia, family history of breast cancer, and BRCA2 and BRCA1 mutations are some of the known risk factors for MBC. Routine screening mammography is not recommended for asymptomatic men. Diagnostic mammography with or without ultrasound should be considered if there is suspicion of a breast mass. The majority of men with early-stage breast cancer undergo mastectomy, whereas breast-conserving surgery (BCS) with sentinel lymph node biopsy (SLNB) remains an alternative option in selected cases. Since the majority of MBC is hormone receptor positive (HR+), adjuvant hormonal therapy is required. Tamoxifen for a total of 5 to 10 years is the mainstay of adjuvant hormonal therapy. The role of neoadjuvant and adjuvant chemotherapy in early-stage disease is uncertain, and chemotherapy is not commonly used. The role of gene recurrence scores such as Oncotype DX and MammaPrint is evolving, and they can be used as an aid in decisions about adjuvant chemotherapy. The majority of metastatic MBC is treated with hormonal therapy, using either tamoxifen, a gonadotropin-releasing hormone (GnRH) agonist with an aromatase inhibitor (AI), or fulvestrant. Chemotherapy is reserved for patients with visceral crisis or rapidly growing tumors.
abstract_id: PUBMED:30847663
Lateral thoracoaxillar dermal-fat flap for breast-conserving surgery: changes in indication and long-term results. Background: Oncoplastic breast-conserving surgery faces the challenge of achieving both local control and a good cosmetic appearance of the preserved breast. In 1999, we developed the lateral thoracoaxillar dermal-fat flap (LTDF) as an oncoplastic procedure to fill the defect left by breast-conserving surgery.
Methods: A total of 2338 breast cancer patients underwent surgery from January 2000 to December 2017. Mastectomy was performed in 706 patients (30%), and breast-conserving surgery (BCS) was performed in 1634 patients (70%). The LTDF was adopted in 487/1634 (30%) of BCS cases to fill the large defect left by partial resection. We divided all patients into three groups: total mastectomy (BT group), partial resection with LTDF (LTDF group), and partial resection without LTDF (BP group), and compared their clinical characteristics and recurrence rates.
Results: The indication rate for LTDF increased to 40% by 2010 and then decreased to 20%-30% in the most recent period as the frequency of breast reconstruction increased. The LTDF group included significantly higher proportions of stage II disease and cases treated with neoadjuvant chemotherapy than the BP or BT groups. We found no marked difference in local recurrence or distant metastasis between the LTDF and BP groups. However, the rate of distant metastasis was significantly higher in the BT group than in the BP or LTDF groups. Concerning the complications of LTDF, we observed a few Grade 3-4 complications requiring surgical management, namely one case of dislocation of the LTDF, three cases of bleeding, and five cases each of skin necrosis and fat necrosis.
Conclusions: We report satisfactory long-term outcomes in 487 cases treated with the LTDF. The LTDF is a suitable oncoplastic technique for BCS.
abstract_id: PUBMED:35117123
Current state of surgical management for male breast cancer. Management guidelines for male breast cancer have long been extrapolated from those for female breast cancer, which are based on large randomised controlled trials. Because there are no randomised controlled trials of male breast cancer management, mainly owing to the rarity of the disease, the only available evidence comes from retrospective studies, which are subject to selection bias and small sample sizes. Male breast cancer, while similar to female breast cancer in many respects, has some important differences that can affect management choices. Most cancers are oestrogen and progesterone receptor positive, and they are usually more advanced at presentation than female breast cancers. This is likely due to less breast parenchyma in male patients and delay to diagnosis. The classical management option for male patients with breast cancer is mastectomy, due to the small tumour-to-breast ratio and the often central position of the tumour. Breast-conserving surgery is still useful in selected cases and has similar outcomes when compared to mastectomy in these patients. For patients with clinically negative lymph nodes, sentinel lymph node biopsy offers the same prognosis as axillary lymph node dissection, but with less associated morbidity. Endocrine therapy is of particular use, owing to high levels of receptor positivity. Adjuvant endocrine therapy seems to significantly improve the overall survival of male patients with breast cancer, and although no prospective evidence exists for neoadjuvant hormonal therapy, there is hope that it is a useful management option as well. Radiotherapy is also useful in an adjuvant setting, particularly when combined with endocrine therapy. Better identification of patients, less delay from presentation to diagnosis and more collaborative efforts are key to improving the management, prognosis and outcomes of patients with male breast cancer.
abstract_id: PUBMED:31026796
Breast-Conserving Surgery in Patients With Mammary Paget's Disease. Background: We aimed to analyze the association between Paget's disease (PD) and breast cancer (BC) subtypes and compare the effect of breast-conserving surgery (BCS) as a local treatment with mastectomy for PD.
Materials And Methods: Data of patients with International Classification of Diseases for Oncology, third edition (ICD-O-3) histologic type codes 8540-8543 who were treated from 1973 to 2014 were retrieved from the Surveillance, Epidemiology, and End Results database of the National Cancer Institute. A chi-square test was used to identify differences in categorical data among different groups. Overall survival (OS) was analyzed using the Kaplan-Meier method, log-rank test, Cox proportional hazards models, sequential landmark analysis, and propensity score-matched analysis.
Results: The study cohort included 5398 patients. Triple-negative BC accounted for the smallest proportion of patients in the PD-only (1/22, 4.54%), Paget's disease with ductal carcinoma in situ (PD-DCIS) (3/48, 6.25%), and Paget's disease with invasive ductal carcinoma (PD-IDC) (23/352, 6.53%) groups. According to the results of the log-rank test and Cox analysis, the 10-year OS rates were similar for the BCS and mastectomy subgroups among patients with PD-DCIS or PD-IDC. Furthermore, there were no significant differences in survival benefits among the different surgeries after propensity score matching. Landmark analyses for OS of patients with PD-DCIS or PD-IDC surviving more than 1, 3, and 5 years showed no significant differences in survival. There were statistically significant differences in 10-year OS rates between patients with PD-DCIS or PD-IDC who did and did not receive radiation therapy following BCS (both P < 0.001).
Conclusions: For patients with PD-DCIS or PD-IDC, breast conservation therapy with lumpectomy and radiation is an effective local treatment strategy, compared with mastectomy.
abstract_id: PUBMED:30761438
Is Breast-Conserving Therapy Appropriate for Male Breast Cancer Patients? A National Cancer Database Analysis. Background: Current treatment guidelines for male breast cancer are predominantly guided by female-only clinical trials. With scarce research, it is unclear whether breast-conserving therapy (BCT) is equivalent to mastectomy in men. We sought to compare overall survival (OS) among male breast cancer patients who underwent BCT versus mastectomy.
Methods: We performed a retrospective analysis of 8445 stage I-II (T1-2 N0-1 M0) male breast cancer patients from the National Cancer Database (2004-2014). Patients were grouped according to surgical therapy and radiation therapy (RT). BCT was defined as partial mastectomy followed by RT. Multivariable and inverse probability of treatment-weighted (IPTW) Cox proportional hazards models were used to compare OS between treatment groups, controlling for demographic and clinicopathologic characteristics.
Results: Most patients underwent total mastectomy (61.2%), whereas 18.2% underwent BCT, 12.4% underwent total mastectomy with RT, and 8.2% underwent partial mastectomy alone. In multivariable and IPTW models, partial mastectomy alone, total mastectomy alone, and total mastectomy with RT were associated with worse OS compared with BCT (p < 0.001 for all). Ten-year OS was 73.8% for BCT and 56.3%, 58.0% and 56.3% for the other treatment approaches, respectively. Older age, higher T/N stage, higher histological grade, and triple-negative receptor status were associated with poorer OS (p < 0.05). Subgroup analysis by stage demonstrated similar results.
Conclusions: In this national sample of male breast cancer patients, BCT was associated with greater survival. The underlying mechanisms of this association warrant further study, because more routine adoption of BCT in male breast cancer appears to translate into clinically meaningful improvements in survival.
abstract_id: PUBMED:22492066
Appropriate evaluation of and risk factors for radiation dermatitis in breast cancer patients receiving hypofractionated whole-breast irradiation after breast-conserving surgery. Background: In grading radiation-induced dermatitis (RID), there are not only inter-evaluator differences but also intra-evaluator variations. We retrospectively analyzed the advantages of establishing a more precise evaluation method using photographs to minimize intra-evaluator variations, and we analyzed RID risk factors.
Methods: We analyzed 301 breasts, including those of 3 patients with bilateral breast cancer who underwent hypofractionated whole-breast irradiation (WBI) after breast-conserving surgery. Four radiation oncologists (A, B, C and D) evaluated photographs taken before, during and after radiation therapy and graded RID using two methods.
Results: The percentages of maximum grades between the two methods varied widely. Kappa statistics revealed that the inter- and intra-evaluator agreements were mostly fair. In multivariate analysis, age (≤60 years), boost irradiation, concurrent hormonal therapy and chemotherapy prior to WBI were statistically significant risk factors for ≥ grade 2 RID according to two evaluators (B and D), two evaluators (A and B), one evaluator (B) and one evaluator (D), respectively.
Conclusions: The assessment of serial skin change in photographs is useful for judging RID. No risk factor was statistically significant for all evaluators because of wide intra-evaluator variations and large inter-evaluator differences. More objective criteria are needed for appropriate evaluation of RID.
Answer: Breast conserving therapy (BCT) appears to be a safe modality for early-stage male breast cancer (MBC) based on the available evidence. A study using the Surveillance, Epidemiology, and End Results database found that BCT for early-stage MBC yielded comparable survival to more invasive treatments such as modified radical mastectomy (MRM) or simple mastectomy (SM). Specifically, the actuarial 5-year cause-specific survival (CSS) was 100% in each BCT group, suggesting that BCT could be a less invasive alternative with similar oncologic outcomes (PUBMED:26718092).
Furthermore, another population-based study from the same database indicated that postoperative radiation therapy could improve the survival of MBC patients undergoing BCS, especially those with four or more positive nodes or larger primary tumors (T3/T4). However, the benefit of radiation therapy in patients undergoing mastectomy was not evident in certain subgroups, such as older T1-2N1 patients (PUBMED:35810531).
The management of MBC is often extrapolated from female breast cancer guidelines due to the rarity of the disease in males and the lack of randomized data. The preferred primary surgical therapy has traditionally been modified radical/simple mastectomy, but BCS has been used in males as well. Adjuvant therapy, including tamoxifen for hormone-responsive disease, is commonly used, and the role of radiation therapy is emphasized due to anatomical considerations and later stage at diagnosis in males compared to females (PUBMED:18210656).
An updated review of MBC management also supports the use of BCS with sentinel lymph node biopsy in selected cases. The majority of MBC are hormone receptor positive, and adjuvant hormonal therapy, primarily tamoxifen, is a mainstay of treatment. The role of chemotherapy in early-stage MBC is uncertain and not commonly used (PUBMED:33721121).
In conclusion, the evidence suggests that BCT is a safe and effective treatment modality for early-stage MBC, offering comparable survival outcomes to more invasive surgical options. However, due to the rarity of MBC and the lack of randomized controlled trials, further prospective studies are needed to confirm these findings and to optimize treatment guidelines for this patient population. |
Instruction: Does Type 2 Diabetes Genetic Testing and Counseling Reduce Modifiable Risk Factors?
Abstracts:
abstract_id: PUBMED:25876740
Does Type 2 Diabetes Genetic Testing and Counseling Reduce Modifiable Risk Factors? A Randomized Controlled Trial of Veterans. Objective: We examined the clinical utility of supplementing type 2 diabetes mellitus (DM) risk counseling with DM genetic test results and counseling.
Research Design And Methods: In this randomized controlled trial, non-diabetic overweight/obese veteran outpatients aged 21 to 65 years received DM risk estimates for lifetime risk, family history, and fasting plasma glucose, followed by either genetic test results (CR+G; N = 303) or control eye disease counseling (CR+EYE; N = 298). All participants received brief lifestyle counseling encouraging weight loss to reduce the risk of DM.
Results: The mean age was 54 years, 53% of participants were black, and 80% were men. There was no difference between arms in weight (estimated mean difference between CR+G and CR+EYE at 3 months = 0.2 kg, 95% CI: -0.3 to 0.7; at 6 months = 0.4 kg, 95% CI: -0.3 to 1.1), insulin resistance, perceived risk, or physical activity at 3 or 6 months. Calorie and fat intake were lower in the CR+G arm at 3 months (p's ≤ 0.05) but not at 6 months (p's > 0.20).
Conclusions: Providing patients with genetic test results was not more effective in changing patient behavior to reduce the risk of DM compared to conventional risk counseling.
Trial Registration: ClinicalTrials.gov NCT01060540 http://clinicaltrials.gov/show/NCT01060540.
abstract_id: PUBMED:27296809
Impact of Genetic Testing and Family Health History Based Risk Counseling on Behavior Change and Cognitive Precursors for Type 2 Diabetes. Family health history (FHH) in the context of risk assessment has been shown to positively impact risk perception and behavior change. The added value of genetic risk testing is less certain. The aim of this study was to determine the impact of Type 2 Diabetes (T2D) FHH and genetic risk counseling on behavior and its cognitive precursors. Subjects were non-diabetic patients randomized to counseling that included FHH +/- T2D genetic testing. Measurements included weight, BMI, fasting glucose at baseline and 12 months and behavioral and cognitive precursor (T2D risk perception and control over disease development) surveys at baseline, 3, and 12 months. 391 subjects enrolled of which 312 completed the study. Behavioral and clinical outcomes did not differ across FHH or genetic risk but cognitive precursors did. Higher FHH risk was associated with a stronger perceived T2D risk (pKendall < 0.001) and with a perception of "serious" risk (pKendall < 0.001). Genetic risk did not influence risk perception, but was correlated with an increase in perception of "serious" risk for moderate (pKendall = 0.04) and average FHH risk subjects (pKendall = 0.01), though not for the high FHH risk group. Perceived control over T2D risk was high and not affected by FHH or genetic risk. FHH appears to have a strong impact on cognitive precursors of behavior change, suggesting it could be leveraged to enhance risk counseling, particularly when lifestyle change is desirable. Genetic risk was able to alter perceptions about the seriousness of T2D risk in those with moderate and average FHH risk, suggesting that FHH could be used to selectively identify individuals who may benefit from genetic risk testing.
abstract_id: PUBMED:22852560
Examining the impact of genetic testing for type 2 diabetes on health behaviors: study protocol for a randomized controlled trial. Background: We describe the study design, procedures, and development of the risk counseling protocol used in a randomized controlled trial to evaluate the impact of genetic testing for diabetes mellitus (DM) on psychological, health behavior, and clinical outcomes.
Methods/design: Eligible patients are aged 21 to 65 years with body mass index (BMI) ≥27 kg/m² and no prior diagnosis of DM. At baseline, conventional DM risk factors are assessed, and blood is drawn for possible genetic testing. Participants are randomized to receive conventional risk counseling for DM with either eye disease counseling or genetic test results. The counseling protocol was pilot tested to identify an acceptable graphical format for conveying risk estimates and to match the length of the eye disease counseling to that of the genetic counseling. Risk estimates are presented with a vertical bar graph denoting risk level with colors and descriptors. After receiving either genetic counseling regarding risk for DM or control counseling on eye disease, brief lifestyle counseling for prevention of DM is provided to all participants.
Discussion: A standardized risk counseling protocol is being used in a randomized trial of 600 participants. Results of this trial will inform policy about whether risk counseling should include genetic counseling.
Trial Registration: ClinicalTrials.gov Identifier NCT01060540.
abstract_id: PUBMED:22933432
Personalized genetic risk counseling to motivate diabetes prevention: a randomized trial. Objective: To examine whether diabetes genetic risk testing and counseling can improve diabetes prevention behaviors.
Research Design And Methods: We conducted a randomized trial of diabetes genetic risk counseling among overweight patients at increased phenotypic risk for type 2 diabetes. Participants were randomly allocated to genetic testing versus no testing. Genetic risk was calculated by summing 36 single nucleotide polymorphisms associated with type 2 diabetes. Participants in the top and bottom score quartiles received individual genetic counseling before being enrolled with untested control participants in a 12-week, validated, diabetes prevention program. Middle-risk quartile participants were not studied further. We examined the effect of this genetic counseling intervention on patient self-reported attitudes, program attendance, and weight loss, separately comparing higher-risk and lower-risk result recipients with control participants.
Results: The 108 participants enrolled in the diabetes prevention program included 42 participants at higher diabetes genetic risk, 32 at lower diabetes genetic risk, and 34 untested control subjects. Mean age was 57.9 ± 10.6 years, 61% were men, and average BMI was 34.8 kg/m(2), with no differences among randomization groups. Participants attended 6.8 ± 4.3 group sessions and lost 8.5 ± 10.1 pounds, with 33 of 108 (30.6%) losing ≥5% body weight. There were few statistically significant differences in self-reported motivation, program attendance, or mean weight loss when higher-risk recipients and lower-risk recipients were compared with control subjects (P > 0.05 for all but one comparison).
Conclusions: Diabetes genetic risk counseling with currently available variants does not significantly alter self-reported motivation or prevention program adherence for overweight individuals at risk for diabetes.
abstract_id: PUBMED:32688241
Gender differences in the modifiable risk factors associated with the presence of prediabetes: A systematic review. Background: Prediabetes is a risk state for the future development of type 2 diabetes. Previously, it was evident that the risk factors for diabetes differ by gender. However, conclusive evidence regarding the gender difference in modifiable risk factors associated with the presence of pre-diabetes is still lacking.
Aims: To systematically identify and summarize the available literature on whether the modifiable risk factors associated with prediabetes displays similar relationship in both the genders.
Methods: A systematic search was performed on electronic databases i.e. PubMed, EBSCOhost, and Scopus using "sex", "gender", "modifiable risk factors" and "prediabetes" as keywords. Reference list from identified studies was used to augment the search strategy. Methodological quality and results from individual studies were summarized in tables.
Results: Gender differences in risk factor associations were observed across the reviewed studies. Overall, the reported associations between risk factors and prediabetes were apparently stronger among men. In particular, abdominal obesity, dyslipidemia, smoking and alcohol drinking habits were risk factors that showed prominent associations among men, whereas hypertension and poor diet quality appeared to be stronger risk factors among women. General obesity showed a strong association in both genders, while physical activity was not significantly associated with the risk of prediabetes in either gender.
Conclusions: Evidence suggests the existence of gender differences in the risk factors associated with prediabetes, which demands that future researchers analyze data separately by gender. Considering and implementing gender differences in health policies and diabetes prevention programs may improve the quality of care and reduce the prevalence of diabetes among prediabetic subjects.
abstract_id: PUBMED:22013171
Design of a randomized trial of diabetes genetic risk testing to motivate behavior change: the Genetic Counseling/lifestyle Change (GC/LC) Study for Diabetes Prevention. Background: The efficacy of diabetes genetic risk testing to motivate behavior change for diabetes prevention is currently unknown.
Purpose: This paper presents key issues in the design and implementation of one of the first randomized trials (The Genetic Counseling/Lifestyle Change (GC/LC) Study for Diabetes Prevention) to test whether knowledge of diabetes genetic risk can motivate patients to adopt healthier behaviors.
Methods: Because individuals may react differently to receiving 'higher' vs 'lower' genetic risk results, we designed a 3-arm parallel group study to separately test the hypotheses that: (1) patients receiving 'higher' diabetes genetic risk results will increase healthy behaviors compared to untested controls, and (2) patients receiving 'lower' diabetes genetic risk results will decrease healthy behaviors compared to untested controls. In this paper we describe several challenges to implementing this study, including: (1) the application of a novel diabetes risk score derived from genetic epidemiology studies to a clinical population, (2) the use of the principle of Mendelian randomization to efficiently exclude 'average' diabetes genetic risk patients from the intervention, and (3) the development of a diabetes genetic risk counseling intervention that maintained the ethical need to motivate behavior change in both 'higher' and 'lower' diabetes genetic risk result recipients.
Results: Diabetes genetic risk scores were developed by aggregating the results of 36 diabetes-associated single nucleotide polymorphisms. Relative risk for type 2 diabetes was calculated using Framingham Offspring Study outcomes, grouped by quartiles into 'higher', 'average' (middle two quartiles) and 'lower' genetic risk. From these relative risks, revised absolute risks were estimated using the overall absolute risk for the study group. For study efficiency, we excluded all patients receiving 'average' diabetes risk results from the subsequent intervention. This post-randomization allocation strategy was justified because genotype represents a random allocation of parental alleles ('Mendelian randomization'). Finally, because it would be unethical to discourage participants to participate in diabetes prevention behaviors, we designed our two diabetes genetic risk counseling interventions (for 'higher' and 'lower' result recipients) so that both groups would be motivated despite receiving opposing results.
Limitations: For this initial assessment of the clinical implementation of genetic risk testing we assessed intermediate outcomes of attendance at a 12-week diabetes prevention course and changes in self-reported motivation. If effective, longer term studies with larger sample sizes will be needed to assess whether knowledge of diabetes genetic risk can help patients prevent diabetes.
Conclusions: We designed a randomized clinical trial designed to explore the motivational impact of disclosing both higher than average and lower than average genetic risk for type 2 diabetes. This design allowed exploration of both increased risk and false reassurance, and has implications for future studies in translational genomics.
abstract_id: PUBMED:22302620
Genetic counseling as a tool for type 2 diabetes prevention: a genetic counseling framework for common polygenetic disorders. Advances in genetic epidemiology have increased understanding of common, polygenic preventable diseases such as type 2 diabetes. As genetic risk testing based on this knowledge moves into clinical practice, we propose that genetic counselors will need to expand their roles and adapt traditional counseling techniques for this new patient set. In this paper, we present a genetic counseling intervention developed for a clinical trial [Genetic Counseling/Lifestyle Change for Diabetes Prevention, ClinicalTrials.gov identifier: NCT01034319] designed to motivate behavioral changes for diabetes prevention. Seventy-two phenotypically high-risk participants received counseling that included their diabetes genetic risk score, general education about diabetes risk factors, and encouragement to participate in a diabetes prevention program. Using two validated genetic counseling scales, participants reported favorable perceived control and satisfaction with the counseling session. Our intervention represents one model for applying traditional genetic counseling principles to risk testing for polygenetic, preventable diseases, such as type 2 diabetes.
abstract_id: PUBMED:37939919
Causal associations between modifiable risk factors and intervertebral disc degeneration. Background: Intervertebral disc degeneration (IVDD) is a common degenerative condition, which is thought to be a major cause of lower back pain (LBP). However, the etiology and pathophysiology of IVDD are not yet completely clear.
Purpose: To examine potential causal effects of modifiable risk factors on IVDD.
Study Design: Bidirectional Mendelian randomization (MR) study.
Patient Sample: Genome-wide association studies (GWAS) with sample sizes between 54,358 and 766,345 participants.
Outcome Measures: Outcomes included (1) modifiable risk factors associated with IVDD, used in the forward MR; and (2) modifiable risk factors that were determined to have a causal association with IVDD in the reverse MR, including smoking, alcohol intake, standing height, education level, household income, sleeplessness, hypertension, hip osteoarthritis, HDL, triglycerides, apolipoprotein A-I, type 2 diabetes, fasting glucose, HbA1c, BMI and obesity trait.
Methods: We obtained genetic variants associated with 33 exposure factors from genome-wide association studies. Summary statistics for IVDD were obtained from the FinnGen consortium. The risk factors of IVDD were analyzed by inverse variance weighting method, MR-Egger method, weighted median method, MR-PRESSO method and multivariate MR Method. Reverse Mendelian randomization analysis was performed on risk factors found to be caustically associated with IVDD in the forward Mendelian randomization analysis. The heterogeneity of instrumental variables was quantified using Cochran's Q statistic.
Results: Genetic predisposition to smoking (OR=1.221, 95% CI: 1.068-1.396), alcohol intake (OR=1.208, 95% CI: 1.056-1.328) and standing height (OR=1.149, 95% CI: 1.072-1.231) were associated with increased risk of IVDD. In addition, education level (OR=0.573, 95% CI: 0.502-0.654) and household income (OR=0.614, 95% CI: 0.445-0.847) had a protective effect on IVDD. Sleeplessness (OR=1.799, 95% CI: 1.162-2.783), hypertension (OR=2.113, 95% CI: 1.132-3.944) and type 2 diabetes (OR=1.069, 95% CI: 1.024-1.115) were three important risk factors causally associated with IVDD. In addition, we demonstrated that increased levels of triglycerides (OR=1.080, 95% CI: 1.013-1.151), fasting glucose (OR=1.189, 95% CI: 1.007-1.405), and HbA1c (OR=1.308, 95% CI: 1.017-1.683) could significantly increase the odds of IVDD. Hip osteoarthritis, HDL, apolipoprotein A-I, BMI and obesity trait factors showed bidirectional causal associations with IVDD; therefore, we considered the causal associations between these risk factors and IVDD to be uncertain.
Conclusions: This MR study provides evidence of complex causal associations between modifiable risk factors and IVDD. It is noteworthy that metabolic disturbances appear to have a more significant effect on IVDD than biomechanical alterations, as individuals with type 2 diabetes, elevated triglycerides, fasting glucose, and elevated HbA1c are at higher risk for IVDD, and the causal association of obesity-related characteristics with IVDD incidence is unclear. These findings provide new insights into potential therapeutic and prevention strategies. Further research is needed to clarify the mechanisms of these risk factors on IVDD.
abstract_id: PUBMED:36999014
Causal associations between modifiable risk factors and pancreatitis: A comprehensive Mendelian randomization study. Background: The pathogenesis of pancreatitis involves diverse environmental risk factors, some of which have not yet been clearly elucidated. This study systematically investigated the causal effects of genetically predicted modifiable risk factors on pancreatitis using the Mendelian randomization (MR) approach.
Methods: Genetic variants associated with 30 exposure factors were obtained from genome-wide association studies. Summary-level statistical data for acute pancreatitis (AP), chronic pancreatitis (CP), alcohol-induced AP (AAP) and alcohol-induced CP (ACP) were obtained from FinnGen consortia. Univariable and multivariable MR analyses were performed to identify causal risk factors for pancreatitis.
Results: Genetic predisposition to smoking (OR = 1.314, P = 0.021), cholelithiasis (OR = 1.365, P = 1.307E-19) and inflammatory bowel disease (IBD) (OR = 1.063, P = 0.008), as well as higher triglycerides (OR = 1.189, P = 0.016), body mass index (BMI) (OR = 1.335, P = 3.077E-04), whole body fat mass (OR = 1.291, P = 0.004) and waist circumference (OR = 1.466, P = 0.011), were associated with increased risk of AP. The effect of obesity traits on AP was attenuated after correcting for cholelithiasis. Genetically driven smoking (OR = 1.595, P = 0.005), alcohol consumption (OR = 3.142, P = 0.020), cholelithiasis (OR = 1.180, P = 0.001), autoimmune diseases (OR = 1.123, P = 0.008), IBD (OR = 1.066, P = 0.042), type 2 diabetes (OR = 1.121, P = 0.029), and higher serum calcium (OR = 1.933, P = 0.018), triglycerides (OR = 1.222, P = 0.021) and waist-to-hip ratio (OR = 1.632, P = 0.023) increased the risk of CP. Cholelithiasis, triglycerides and the waist-to-hip ratio remained significant predictors in the multivariable MR. Genetically predicted alcohol drinking was associated with increased risk of AAP (OR = 15.045, P = 0.001) and ACP (OR = 6.042, P = 0.014). After adjustment for alcohol drinking, genetic liability to IBD had a similar significant causal effect on AAP (OR = 1.137, P = 0.049), while testosterone (OR = 0.270, P = 0.002), triglycerides (OR = 1.610, P = 0.001) and hip circumference (OR = 0.648, P = 0.040) were significantly associated with ACP. Genetically predicted higher education and household income levels could lower the risk of pancreatitis.
Conclusions: This MR study provides evidence of complex causal associations between modifiable risk factors and pancreatitis. These findings provide new insights into potential therapeutic and prevention strategies.
abstract_id: PUBMED:27600142
Elucidating the link between the modifiable risk factors of Alzheimer's disease and neuroinflammation. Increased worldwide longevity through medical interventions, although beneficial, has allowed the age-related Alzheimer's disease (AD) to become an epidemic of the 21st century. AD pathology involves adverse activation of microglia, the immune cells of the brain and resulting chronic neuroinflammation. Certain diets, physical inactivity and Type 2 diabetes mellitus have been identified as the risk factors for developing AD, which may increase the risk of AD by neuroimmune mechanisms primarily through the overactivation of microglia. Thus, modifying these risk factors may represent an alternative therapeutic strategy for lowering the incidence of AD. We highlight the link between select modifiable risk factors and neuroimmune mechanisms, and demonstrate that by controlling microglial activation and neuroinflammation the prevalence of AD may be decreased.
Answer: Based on the information provided in the abstracts, Type 2 Diabetes Genetic Testing and Counseling does not appear to significantly reduce modifiable risk factors. A randomized controlled trial of veterans found that providing patients with genetic test results was not more effective in changing patient behavior to reduce the risk of diabetes mellitus (DM) compared to conventional risk counseling (PUBMED:25876740). Another study examining the impact of Type 2 Diabetes (T2D) Family Health History (FHH) and genetic risk counseling on behavior and its cognitive precursors found that while FHH risk was associated with a stronger perceived T2D risk, genetic risk did not influence risk perception, and behavioral and clinical outcomes did not differ across FHH or genetic risk (PUBMED:27296809).
Furthermore, a randomized trial of diabetes genetic risk counseling among overweight patients at increased phenotypic risk for type 2 diabetes showed that diabetes genetic risk counseling with currently available variants does not significantly alter self-reported motivation or prevention program adherence for overweight individuals at risk for diabetes (PUBMED:22933432).
These findings suggest that while genetic testing and counseling may impact cognitive precursors such as risk perception, they do not necessarily lead to significant changes in modifiable risk factors or behaviors associated with the prevention of Type 2 Diabetes. |
Instruction: Is off-pump coronary artery bypass surgery safe for left main coronary artery stenosis?
Abstracts:
abstract_id: PUBMED:28473107
Current Interventions for the Left Main Bifurcation. Contemporary clinical trials, registries, and meta-analyses, supported by recent results from the EXCEL (Everolimus-Eluting Stents or Bypass Surgery for Left Main Coronary Artery Disease) and NOBLE (Percutaneous Coronary Angioplasty Versus Coronary Artery Bypass Grafting in Treatment of Unprotected Left Main Stenosis) trials, have established percutaneous coronary intervention of left main coronary stenosis as a safe alternative to coronary artery bypass grafting in patients with low and intermediate SYNTAX (Synergy Between Percutaneous Coronary Intervention With Taxus and Cardiac Surgery) scores. As left main percutaneous coronary intervention gains acceptance, it is imperative to increase awareness for patient selection, risk scoring, intracoronary imaging, vessel preparation, and choice of stenting techniques that will optimize procedural and patient outcomes.
abstract_id: PUBMED:21160702
Percutaneous coronary intervention for unprotected left main coronary artery stenosis. Hemodynamically significant left main coronary artery (LMCA) stenosis is found in around 4% of diagnostic coronary angiograms and is known as unprotected LMCA stenosis if the left anterior descending and left circumflex arteries have no patent grafts. Previous randomized studies have demonstrated a significant reduction in mortality when revascularization by coronary artery bypass graft (CABG) surgery was undertaken compared with medical treatment. Therefore, current practice guidelines do not recommend percutaneous coronary intervention (PCI) for such a lesion because of the proven benefit of surgery and the high rates of restenosis with the use of bare metal stents. However, with the advent of drug-eluting stents (DES), the long-term outcomes of PCI with DES to treat unprotected LMCA stenoses have been acceptable. Therefore, apart from the current guidelines, PCI for treatment of unprotected LMCA stenosis is often undertaken in individuals who are at very high risk for CABG or refuse to undergo a sternotomy. Future randomized studies comparing CABG versus PCI using DES for treatment of unprotected LMCA stenosis would be a great advance in clinical knowledge for the adoption of appropriate treatment.
abstract_id: PUBMED:25340285
Drug-eluting stents in unprotected left main coronary artery disease. Though coronary artery bypass graft surgery (CABG) has traditionally been the cornerstone of therapy in patients with unprotected left main coronary artery (ULMCA) disease, recent evidence supports the use of percutaneous coronary intervention in appropriate patients. Indeed, in patients with ULMCA disease, drug-eluting stents (DES) have shown a similar incidence of hard end points, fewer periprocedural complications and lower stroke rates compared with CABG, though at the cost of increased revascularization over time. Furthermore, the availability of newer, more efficacious and safer DES, as well as improvements in diagnostic tools, percutaneous techniques and, importantly, better patient selection, has made percutaneous coronary intervention a viable alternative to CABG in left main patients with low disease complexity; however, even in this interventional era characterized by efficacious DES, patients with ULMCA disease remain a challenging high-risk population in which outcomes strongly depend on clinical characteristics, anatomical disease complexity and extent, and operator experience. This review summarizes the role of DES in patients with ULMCA disease.
abstract_id: PUBMED:29336945
Current treatment of significant left main coronary artery disease: A review. Though infrequent, left main stenosis has a major prognostic impact. The management of left main disease has evolved over the last few decades with the growing evidence of the efficacy and safety of percutaneous interventions, as attested by the most recent trials. However, mastery of the technical aspects of left main bifurcation stenting is essential in ensuring optimal results. This review focuses on recent data concerning left main angioplasty results as well as the current technical approaches.
abstract_id: PUBMED:31297258
The Impact of the Risk Factors in the Evolution of the Patients with Left Main Coronary Artery Stenosis Treated with PCI or CABG. The aim of our study was to identify the cardiovascular risk factors present in patients with left main coronary artery disease (LMCAD) that influenced the evolution of these patients treated with either percutaneous coronary intervention (PCI) or coronary artery bypass grafting (CABG). We performed a descriptive observational clinical study in which, over three years, we followed the evolution of 81 patients who were diagnosed with left main coronary artery disease and who were treated either by interventional revascularization with stent implantation or by surgical revascularization with aortocoronary bypass. In our study, the risk factors according to which the evolution of the patients was observed were diabetes, smoking, age and gender. The primary endpoint was mortality from any cause, and the other clinical endpoints were a reduction in left ventricular ejection fraction, symptomatic ischemic heart disease manifested by angina pectoris, non-procedural myocardial infarction, and the need for repeat revascularization.
abstract_id: PUBMED:25838879
Iatrogenic left main coronary artery stenosis following aortic and mitral valve replacement. Iatrogenic coronary artery disease following prosthetic valve implantation is a rare complication. This may result from mechanical injury in the intraoperative period. The use of a balloon-tip perfusion catheter presumably provides the initial insult through local vessel wall hypoxia. Once the diagnosis of coronary ostial stenosis is established, the procedure of choice is coronary artery bypass surgery. We report a case of a young woman who underwent aortic and mitral valve replacement for infective endocarditis. She was then diagnosed with ostial left main stem coronary stenosis after presenting with atypical symptoms. The patient eventually underwent coronary artery bypass surgery.
abstract_id: PUBMED:28352405
Real World Application of Stenting of Unprotected Left Main Coronary Stenosis: A Single-Center Experience. Background: The aim of this study was to summarize our single-center real-world experience with percutaneous coronary intervention (PCI) stenting of unprotected left main coronary artery (ULMCA). PCI-stenting of the ULMCA, while controversial, is emerging as an alternative to coronary artery bypass graft (CABG) surgery in select patients and clinical situations.
Methods: Between January 2005 and December 2008, PCI-stenting was performed on 125 patients with ULMCA lesions at our institution. Clinical and procedural data were recorded at the time of procedure, and patients were followed prospectively (mean 1.7 years; range 1 day-4.1 years) for outcomes, including death, myocardial infarction (MI), and target vessel revascularization (TVR).
Results: The majority of cases were urgent or emergent (82.5%), 50.4% of patients were non-surgical candidates, and 63.2% had 3 vessel disease. Many emergent patients presented in shock (62.1%), were not surgical candidates (89.7%), and had high mortality (20.7% in-hospital, 44.8% long-term). Mortality in the elective group was 6.3%. Cumulative death and TVR rates were 28.8% and 13.6%, respectively. Independent predictors of mortality were ejection fraction (EF) ≤ 35% (HR 2.4, CI 1.1 - 5.4) and left main bifurcation (HR 2.7, CI 1.2 - 5.7).
Conclusions: PCI-stenting is a viable option in patients with LMCA disease and extends options to patients who are poor candidates for CABG. Elective PCI in low-risk CABG patients results in good long-term survival. Cumulative TVR is 13.6%. EF ≤ 35% and left main bifurcation are independently associated with increased mortality.
abstract_id: PUBMED:30808396
Internal thoracic artery patch repair of a saccular left main coronary artery aneurysm. Background: A saccular aneurysm located at the bifurcation of the left main coronary artery (LMCA) is an extremely rare condition. A major cause of left main coronary aneurysm is atherosclerosis, and common complications include thrombosis, embolism, and rupture. Despite the serious nature of this condition, the ideal operative approach to LMCA aneurysm (LMCAA) has not been established. Furthermore, little is known about resection of the saccular aneurysm and closure using a small internal thoracic artery patch.
Case Presentation: Here, we present the case of a 66-year-old woman who had significant stenosis in the left anterior descending artery and a saccular aneurysm at the bifurcation of the LMCA, which was repaired using a small internal thoracic artery patch during coronary artery bypass grafting. Postoperative multislice computed tomography revealed the complete disappearance of the aneurysm and a successful repair with no luminal stenosis of the internal thoracic artery patch. In addition, the left internal thoracic artery graft was found to be patent.
Conclusions: Resection of the saccular LMCA aneurysm and closure using a small internal thoracic artery patch is safe and offer excellent results.
abstract_id: PUBMED:23813548
Unprotected left main coronary stenting as alternative therapy to coronary bypass surgery in high surgical risk acute coronary syndrome patients. Acute coronary syndrome has a high mortality rate that dramatically increases in the presence of left main coronary artery (LMCA) disease. Over the past decades, coronary artery bypass graft (CABG) surgery has been commonly accepted as the standard of care for patients with LMCA stenosis and is still considered the first-line treatment in current practice guidelines. Percutaneous coronary intervention (PCI) of protected and unprotected LMCA has gained popularity and is increasingly utilized with comparable outcomes to CABG in randomized controlled trials. In-stent restenosis and the need for revascularization provide the main obstacle to LMCA revascularization. The advent of better PCI equipment, stents, ablative devices, intravascular ultrasound, hemodynamic support devices and antithrombotic agents have ignited a renewed interest in the practice of LMCA PCI, especially for high surgical risk patients who are neither candidates nor agreeable to CABG surgery. Herein, we review the studies comparing unprotected LMCA stenting with CABG surgery in regard to 3 main endpoints: mortality, major adverse events and the incidence of repeat revascularization.
abstract_id: PUBMED:29572751
Operator Experience and Outcomes After Left Main Percutaneous Coronary Intervention. Purpose Of Review: This review was performed with the goal of summarizing the role of operator experience in the treatment of severe left main stenosis by percutaneous intervention techniques.
Recent Findings: The Evaluation of XIENCE versus Coronary Artery Bypass Surgery for Effectiveness of Left Main Revascularization (EXCEL) trial demonstrated that percutaneous coronary intervention and coronary artery bypass grafting had similar clinical outcomes for severe left main disease. However, PCI of the left main coronary stenosis is considered to be a high-risk intervention because of the large area of myocardium at jeopardy that can quickly cause hemodynamic compromise. Operator experience and familiarity with the use of hemodynamic support devices, plaque modification techniques, and intravascular imaging tools is associated with better clinical outcomes. In patients with severe left main stenosis undergoing percutaneous coronary intervention by high-volume operators, the clinical outcomes are superior.
Answer: The provided abstracts do not directly address the safety of off-pump coronary artery bypass surgery (OPCAB) for left main coronary artery stenosis. The abstracts focus on percutaneous coronary intervention (PCI) as an alternative to coronary artery bypass grafting (CABG) for the treatment of left main coronary artery stenosis, discussing the use of drug-eluting stents (DES), patient selection, and operator experience in the context of PCI (PUBMED:28473107, PUBMED:21160702, PUBMED:25340285, PUBMED:29336945, PUBMED:31297258, PUBMED:25838879, PUBMED:28352405, PUBMED:30808396, PUBMED:23813548, PUBMED:29572751).
However, it is important to note that the safety and efficacy of OPCAB, a technique where CABG is performed without the use of a heart-lung machine, would be determined by factors such as the patient's overall health, the complexity of the coronary artery disease, the surgeon's experience, and the presence of comorbid conditions. Traditional CABG has been the standard treatment for left main coronary artery stenosis, and while the abstracts suggest that PCI is a viable alternative in certain cases, they do not provide specific information on OPCAB outcomes for this condition.
For a definitive answer on the safety of OPCAB for left main coronary artery stenosis, one would need to consult clinical studies or guidelines that specifically address the outcomes of OPCAB in this patient population. |
Instruction: Does prior psychotherapy experience affect the course of cognitive-behavioural group therapy for social anxiety disorder?
Abstracts:
abstract_id: PUBMED:18801212
Does prior psychotherapy experience affect the course of cognitive-behavioural group therapy for social anxiety disorder? Objective: To examine whether and how different patterns of psychotherapy history (no prior therapy, successful therapy experience, and unsuccessful therapy experience) affect the outcome of future treatment among patients undergoing cognitive-behavioural group therapy for social anxiety disorder.
Method: Fifty-seven patients with varying histories of psychotherapy participating in cognitive-behavioural group treatment for social anxiety disorder were included in the study. Symptom severity (including anxiety, depression, self-efficacy, and global symptom severity) was assessed at pre- and posttreatment. A therapist-rated measure of patient therapy engagement was included as a process variable.
Results: First-time therapy patients showed more favourable pretreatment variables and achieved greater benefit from group therapy. Among patients with unsuccessful therapy experience, substantial gains were attained by those who were able to actively engage in the therapy process. Patients rating previous therapies as successful could benefit the least and tended to stagnate. Possible explanations for group differences and clinical implications are discussed.
Conclusions: Prior psychotherapy experience affects the course of cognitive-behavioural group therapy in patients with social phobias. While patients with negative therapy experience may need extensive support in being and remaining actively engaged, those rating previous therapies as successful should be assessed very carefully and may benefit from a major focus on relational aspects.
abstract_id: PUBMED:34791669
Bouldering psychotherapy is not inferior to cognitive behavioural therapy in the group treatment of depression: A randomized controlled trial. Objectives: Bouldering has shown promising results in the treatment of various health problems. In previous research, bouldering psychotherapy (BPT) was shown to be superior to a waitlist control group and to physical exercise with regard to reducing symptoms of depression. The primary aim of this study was to compare group BPT with group cognitive behavioural psychotherapy (CBT) to test the hypothesis that BPT would be equally as effective as CBT.
Design: We conducted a randomized, controlled, assessor-blinded non-inferiority trial in which 156 outpatients meeting the criteria of a depressive episode according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) were randomly assigned to one of the two intervention groups (CBT: N = 77, BPT: N = 79).
Methods: Intervention groups were manualized and treated for 10 weeks with a maximum of 11 participants and two therapists. The primary outcome was depressive symptom severity assessed with the Montgomery-Åsberg Depression Rating Scale (MADRS) and the Patient Health questionnaire (PHQ-9) at the beginning and end of the treatment phase as well as one year after the end of treatment.
Results: In both groups, depressive symptoms improved significantly by an average of one severity level, moving from moderate to mild depressive symptoms after therapy (MADRS difference scores: BPT -8.06, 95% CI [-10.85, -5.27], p < .001; CBT -5.99, 95% CI [-8.55, -3.44], p < .001). The non-inferiority of BPT in comparison with CBT was established on the basis of the lower bound of the 95% confidence interval falling above all of the predefined margins. BPT was found to be effective in both the short (d = 0.89) and long term (d = 1.15).
Conclusion: Group BPT was found to be equally as effective as group CBT. Positive effects were maintained until at least 12 months after the end of therapy. Thus, BPT is a promising approach for broadening the therapeutic field of therapies for depression.
Practitioner Points: Physical activity is effective in the treatment of depression, and current guidelines explicitly recommend it as a complementary method for the treatment of depression. Nevertheless, body-related interventions are still underrepresented in current treatments for depression. Bouldering psychotherapy (BPT) combines physical activity with psychotherapeutic content. Its concept relies on proven effective factors from CBT, such as exposure training, problem solving and practicing new functional behaviours, and it is thus an enrichment and implementation of CBT methods on the bouldering wall. The positive effect of group bouldering psychotherapy (BPT) in reducing depressive symptoms in outpatients with depression is not inferior to the effect of group cognitive behavioural therapy (CBT). Additionally, the 10-week BPT programme significantly improved symptoms of anxiety and interpersonal sensitivity as well as health-related quality of life, coping, body image, self-efficacy, and global self-esteem.
abstract_id: PUBMED:32378494
The acceptability and feasibility of group cognitive behavioural therapy for older adults with generalised anxiety disorder. Background: Group psychotherapy for older adults with generalised anxiety disorder is an under-researched area.
Aim: This report describes a mixed method evaluation of the acceptability and feasibility of an Overcoming Worry Group.
Method: The Overcoming Worry Group was a novel adaptation of a cognitive behavioural therapy protocol targeting intolerance-of-uncertainty for generalised anxiety disorder, tailored for delivery to older adults in a group setting (n = 13).
Results: The adapted protocol was found to be acceptable and feasible, and treatment outcomes observed were encouraging.
Conclusions: This proof-of-concept study provides evidence for an Overcoming Worry Group as an acceptable and feasible group treatment for older adults with generalised anxiety disorder.
abstract_id: PUBMED:27774709
The effect of Korean-group cognitive behavioural therapy among patients with panic disorder in clinic settings. WHAT IS KNOWN ON THE SUBJECT?: Panic disorder patients display various panic-related physical symptoms and catastrophic misinterpretation of bodily sensations, which lower their quality of life by interfering with daily activities. Cognitive behavioural therapy (CBT) is a useful strategy for panic disorder patients to manage symptoms associated with inaccurate cognitive interpretation of situations resulting from the patient's cognitive vulnerability. In South Korea, however, despite the increasing prevalence of panic disorder, CBT is not a common element of nursing care plans for panic disorder patients. Moreover, few Korean researchers have attempted to assess the effects of CBT on such patients. WHAT THIS PAPER ADDS TO EXISTING KNOWLEDGE?: In a strategy combining CBT and routine treatments, patients with panic disorder can experience greater positive effects in the acute treatment phase than those they experience when receiving only routine treatment. WHAT ARE THE IMPLICATIONS FOR PRACTICE?: Mental health professionals, especially psychiatric nurses in local clinics who operate most special mental health programmes for panic disorder patients, should apply a panic disorder management programme that integrates CBT and routine treatments. The integrated approach is more effective for reducing the number of panic attacks and cognitive misinterpretation in patients than providing routine treatment alone. For patients with panic disorder, the objective of CBT is to understand the relationship between psychological panic disorder sensations, emotions, thoughts and behaviours. Therefore, nurses can help patients address and improve biological, social and psychological aspects of physical health problems as well as help them improve their coping skills in general.
Abstract: Introduction In panic disorder, sensitivity to bodily sensations increases due to the patient's cognitive vulnerability. Cognitive behavioural therapy (CBT) can help to decrease sensitivity to bodily sensations by correcting these cognitive distortions and controlling negative thoughts and panic attacks. Aims This study verified whether group CBT is more effective than treatment as usual (TAU) in South Korean patients with panic disorder. Methods The study participants consisted of 76 panic disorder patients. Patients in the therapy condition attended sessions once a week for a total of 12 sessions in addition to drug treatment. Results In the therapy condition, there were significant decreases in panic-related bodily sensations and in ranking and belief scores for catastrophic misinterpretation of external events. Discussion Group CBT, in comparison to TAU, decreases panic and agoraphobia symptom severity in South Korean patients with panic disorder. Our study provides evidence for the effectiveness of a panic disorder management programme that integrates group CBT and traditional pharmacotherapeutic treatment for patients with panic disorder. Implications for Practice The cognitive behavioural approach is needed, more than activity therapies, medications and supportive counselling by doctors and nurses, to reduce panic and agoraphobia symptoms in hospitalized patients with panic disorder.
abstract_id: PUBMED:27432441
Cognitive-Behavioural Therapy for Inflammatory Bowel Disease: 24-Month Data from a Randomised Controlled Trial. Purpose: There is ongoing controversy on the effectiveness of psychotherapy in inflammatory bowel disease (IBD). In the few small studies, cognitive-behavioural therapy (CBT) has been shown to alleviate symptoms of anxiety or depression. However, there is little research on the impact of CBT on physical outcomes in IBD and no studies on long-term effectiveness of CBT.
Methods: The present two-arm pragmatic randomised controlled trial aimed to establish the impact of CBT on disease course after 24 months of observation. The study compared standard care plus CBT (+CBT) with standard care alone (SC). CBT was delivered over 10 weeks, face-to-face (F2F) or online (cCBT). The data were analysed using linear mixed-effects models.
Results: CBT did not significantly influence disease activity as measured by disease activity indices at 24 months (Crohn's Disease Activity Index (CDAI), p = 0.92; Simple Clinical Colitis Activity Index (SCCAI), p = 0.88) or blood parameters (C-reactive protein (CRP), p < 0.62; haemoglobin (Hb), p = 0.77; platelet, p = 0.64; white cell count (WCC), p = 0.59) nor did CBT significantly affect mental health, coping or quality of life (all p > 0.05).
Conclusions: Therefore, we conclude that CBT does not influence the course of IBD over 24 months. Given the high rate of attrition, particularly in the CBT group, future trials should consider a personalised approach to psychotherapy, perhaps combining online and one-to-one therapist time.
abstract_id: PUBMED:31284123
Changes in post-event processing during cognitive behavioural therapy for social anxiety disorder: A longitudinal analysis using post-session measurement and experience sampling methodology. Purpose: Post-event processing (PEP) is posited to be an important factor in the maintenance of social anxiety symptoms. Previous research has demonstrated that general PEP tendencies are sensitive to treatment. However, it remains unclear how momentary PEP following social interactions changes over the course of treatment for social anxiety disorder. The purpose of the present study was to examine how both momentary and general PEP change over the course of treatment, and how such changes predict treatment outcome.
Method: Participants (N = 60) with social anxiety disorder were enrolled in group cognitive behavioural therapy. All participants completed measures of PEP and social anxiety symptom severity at five time points over treatment. A subset (N = 33) also completed repeated experience sampling measurements of PEP following social interactions across the course of treatment.
Results: Both general and momentary PEP decreased over the course of treatment. Decreases in both types of PEP predicted lower social anxiety symptom severity following treatment.
Conclusion: The results of the study demonstrate that momentary experiences of PEP can be influenced by treatment, and can in turn impact treatment outcome. The findings have significant clinical and theoretical implications.
abstract_id: PUBMED:33715642
An evaluation of a group-based cognitive behavioural therapy intervention for low self-esteem. Background: Self-esteem is a common factor in many mental health problems, including anxiety and depression. A cognitive behavioural therapy (CBT)-based protocol called 'Overcoming Low Self-Esteem' is available; the use of this protocol in a group format has been associated with improvements in self-esteem. However, it is unclear whether improvements persist after the end of a group-based version of this programme.
Aims: We aimed to assess whether changes in self-esteem, anxiety and depression persist 3 months after the end of a group version of the Overcoming Low Self-Esteem programme.
Method: Using data from the National Health Service in Fife, Scotland, we analysed whether there were improvements in self-report measures of self-esteem, anxiety and depression from the beginning of the group to the end of the group and at a follow-up session 3 months later.
Results: Significant improvements in self-esteem, anxiety and depression were maintained at the 3-month follow-up.
Conclusions: The Overcoming Low Self-Esteem group seems to be associated with improved self-esteem, anxiety and depression. However, further research from randomised controlled trials is needed to establish a causal link between the programme and improved psychological outcomes.
abstract_id: PUBMED:36303332
Experience of internet-delivered cognitive behavioural therapy among patients with non-cardiac chest pain. Aims And Objective: To explore the experiences of patients with non-cardiac chest pain and cardiac anxiety regarding participation in an internet-delivered cognitive behavioural therapy program.
Background: Non-cardiac chest pain is common and leads to cardiac anxiety. Internet-delivered cognitive behavioural therapy may be a possible option to decrease cardiac anxiety in these patients. We have recently evaluated the effect of an internet-delivered cognitive behavioural therapy program on cardiac anxiety.
Design: An inductive qualitative study using content analysis and the COREQ checklist.
Methods: Semi-structured interviews with 16 Swedish patients, who had participated in the internet-delivered cognitive behavioural therapy program.
Results: Three categories were found. The first, 'Driving factors for participation in the internet-delivered cognitive behavioural therapy program', described the impact of pain on participants' lives and the struggle that led them to take part in the program. The second, 'The program as a catalyst', described the program as helpful, trustworthy and useful, and the last category, 'Learning to live with chest pain', described the program as a tool for gaining the strength and skills to live a normal life despite chest pain.
Conclusions: The program was experienced as an opportunity to return to a normal life. The program was perceived as helpful, trustworthy and useful, which helped the participants challenge their fear of chest pain and death, and gain strength and new insights into their ability to live a normal life.
Relevance To Clinical Practice: A tailored internet-delivered cognitive behavioural therapy program delivered by a nurse therapist with clinical experience of the patient group is important to improve cardiac anxiety.
Patient Or Public Contribution: Patients or the general public were not involved in the design, analysis or interpretation of the data of this study, but two patients with experience of non-cardiac chest pain were involved in the development of the pilot study.
Trial Registration: ClinicalTrials.gov NCT03336112; https://www.clinicaltrials.gov/ct2/show/NCT03336112.
abstract_id: PUBMED:32303260
Pilot trial of a group cognitive behavioural therapy program for comorbid depression and obesity. Background: Depression and obesity are significant global health concerns that commonly occur together. An integrated group cognitive behavioural therapy program was therefore developed to simultaneously address comorbid depression and obesity.
Methods: Twenty-four participants (63% women, mean age 46 years) who screened positively for depression with a body mass index ≥25 were recruited from a self-referred general population sample. The group therapy program (10 two-hour weekly sessions) was examined in a single-arm, before-after pilot trial, conducted in a behavioural health clinic in Adelaide, Australia. Primary outcomes included survey and assessment-based analyses of depression, anxiety, body image, self-esteem, and weight (kg), assessed at four time-points: baseline, post-intervention, three-months and 12-months post program. Eighteen participants (75%) completed the program and all assessments.
Results: Significant improvements in depression, anxiety, self-esteem and body shape concern scores, several quality of life domains, eating behaviours and total physical activity (among others) - but not weight - were observed over the course of the trial.
Conclusions: Results from this pilot trial suggest that combining interventions for depression and obesity may be useful. Further development of the program, particularly regarding the potential for physical health benefits, and a randomised controlled trial, are warranted.
Trial Registration: ANZCTR, ACTRN12617001079336, 13 July 2017. Retrospectively registered after the date of the first consent (6 July 2017), but before the date of the first intervention session (20 July 2017).
abstract_id: PUBMED:29871705
A Preliminary Study of Work-Focused Cognitive Behavioural Group Therapy for Japanese Workers. Background: In Japan, cognitive behavioural therapy (CBT) has been introduced in the 'Rework Programme', but its impact on return to work (RTW) has not been fully clarified.
Aims: This pilot study investigated the initial efficacy of a work-focused cognitive behavioural group therapy (WF-CBGT) for Japanese workers on sick leave due to depression.
Method: Twenty-three patients on leave due to depression were recruited from a mental health clinic. WF-CBGT including behavioural activation therapy, cognitive therapy, and problem-solving therapy techniques was conducted for eight weekly 150-minute sessions. Participants completed questionnaires on depression and anxiety (Kessler-6), social adaptation (Social Adaptation Self-Evaluation Scale), and difficulty in RTW (Difficulty in Returning to Work Inventory) at pre- and post-intervention time points. Rates of re-instatement after the intervention were examined.
Results: One participant dropped out, but 22 participants successfully completed the intervention. All scale scores significantly improved after intervention and, except for difficulty in RTW related to physical fitness, all effect sizes were above the moderate classification. All participants who completed the intervention succeeded in RTW.
Conclusions: Results suggested the possibility that WF-CBGT may be a feasible and promising intervention for Japanese workers on leave due to depression regardless of cross-cultural differences, but that additional research examining effectiveness using controlled designs and other samples is needed. Future research should examine the efficacy of this programme more systematically to provide relevant data to aid in the continued development of an evidence-based intervention.
Answer: Yes, prior psychotherapy experience does affect the course of cognitive-behavioural group therapy for social anxiety disorder. According to a study examining the impact of different patterns of psychotherapy history on the outcome of future treatment, first-time therapy patients showed more favourable pretreatment variables and achieved greater benefit from group therapy. In contrast, patients with unsuccessful therapy experience still attained substantial gains if they were able to actively engage in the therapy process. However, patients who rated previous therapies as successful benefited the least and tended to stagnate. This suggests that patients with prior psychotherapy experience, particularly experience they perceived as unsuccessful, may require additional support to become and remain actively engaged in the therapy process. Conversely, those who viewed their previous therapy experiences as successful should be assessed carefully, as they may benefit from a focus on relational aspects in their treatment (PUBMED:18801212). |
Instruction: Infliximab and pediatric stricturing Crohn's disease: a possible alternative to surgery?
Abstracts:
abstract_id: PUBMED:21694847
Infliximab therapy in pediatric Crohn's disease: a review. Anti-tumor necrosis factor alpha (TNF-α) therapy has re-defined our treatment paradigms in managing patients with Crohn's disease (CD) and ulcerative colitis. Although the ACCENT studies demonstrated efficacy in the induction and maintenance of disease remission in adult patients with moderate to severe CD, the pediatric experience was instrumental in bringing forth the notion of "top-down" therapy to improve overall clinical response while reducing the risk of complications resulting from long-standing active disease. Infliximab has proven efficacy in the induction and maintenance of disease remission in children and adolescents with CD. In an open-label study of 112 pediatric patients with moderate to severe CD, 58% achieved clinical remission on induction of infliximab (5 mg/kg) therapy. Among those patients who achieved disease remission, 56% maintained disease remission on maintenance (5 mg/kg every 8 weeks) therapy. Longitudinal follow-up studies have also shown that responsiveness to infliximab therapy correlates well with reduced rates of hospitalization and of surgery for complications of long-standing active disease, including stricture and fistula formation. Moreover, these children have also been shown to improve overall growth while maintaining an effective disease remission. The pediatric experience has been instructive in suggesting that the early introduction of anti-TNF-α therapy may alter the natural history of CD in children, an observation that has stimulated a great deal of interest among gastroenterologists who care for adult patients with CD.
abstract_id: PUBMED:33108856
Pediatric Crohn's disease with severe morbidity manifested by gastric outlet obstruction: two case reports and review of the literature. Gastric outlet obstruction is a rare but serious clinical presentation of Crohn's disease (CD) that causes severe morbidity. However, there have been few case reports concerning this disorder in East Asian children and adolescents. The current case report describes 2 pediatric patients with CD who had gastric outlet obstruction as an initial symptom of CD. Both patients developed postprandial vomiting, bloating, and unintentional weight loss. Upper endoscopy showed pyloric obstruction with mucosal edema, inflammation and ulcers. The serologic test and colonoscopy results suggested CD. These patients were treated with infliximab and endoscopic balloon dilation, without surgery, and showed remarkable improvement in obstructive symptoms while maintaining clinical and biochemical remission. This case report elucidates the benefits of early intervention using infliximab and endoscopic balloon dilation to improve gastric outlet obstruction and achieve baseline recovery in patients with the upper gastrointestinal B2 phenotype of CD.
abstract_id: PUBMED:26944181
Evaluating the impact of infliximab use on surgical outcomes in pediatric Crohn's disease. Background: The impact of infliximab (IFX) on surgical outcomes is poorly defined in pediatric Crohn's disease (CD). We evaluated our institution's experience with IFX on postoperative complications and surgical recurrence.
Methods: A retrospective review of children who underwent intestinal resection with primary anastomosis for CD from 1/2002 to 10/2014 was performed. Data collected included IFX use and surgical outcomes. Preoperative IFX use was within 3months of surgery.
Results: Seventy-three patients were included, with a median age of 15 years (range: 9-18). The most frequent indications for operation were obstruction (n=26) and fistulae (n=19). Nine patients (13%) had a surgical recurrence at a median of 2.3 years (IQR 0.7-3.5). Twenty-two patients received preoperative IFX at a median of 26 days (IQR 14-46). There were 7 postoperative complications: 2 bowel obstructions and 5 superficial wound infections. Outcomes of patients stratified by IFX use were not different. When stratified by indication, refractory disease was associated with higher preoperative IFX use (IFX use 55% vs. no IFX use 28%, p=0.027). No specific indication was associated with increased reoperation rates.
Conclusion: Pediatric CD patients treated with preoperative IFX undergo intestinal resection with primary anastomosis with acceptable morbidity. The heterogeneous approach to medical management underscores the need for guidelines to direct treatment.
abstract_id: PUBMED:22567750
Infliximab and pediatric stricturing Crohn's disease: a possible alternative to surgery? Experience of seven cases. Introduction: Infliximab (IFX) is one of the treatments of choice for the different phenotypes of pediatric Crohn's disease (CD). Although it was initially feared that anti-TNFα treatment might cause bowel stenosis, recent studies have validated the efficacy of IFX as an anti-stricturing agent.
Aim: To assess the efficacy of IFX treatment for pediatric stricturing CD.
Patients And Methods: Data were obtained on pediatric patients treated at our tertiary level Pediatrics Department (years 2000-2010). Indications for IFX therapy included persistent disease activity (PCDAI > 20) unresponsive to corticosteroids and thiopurines. All patients treated with IFX underwent upper and lower intestinal endoscopy, abdominal ultrasound and magnetic resonance enterography.
Case Series: Among 44 pediatric CD patients, 21 were treated with IFX. Seven of these cases had luminal strictures and in 6 patients the inflammatory strictures disappeared after treatment with IFX. One child with ileal fibrotic stenosis (MR) required a surgical resection.
Conclusion: Our data support the efficacy of IFX in pediatric CD, including the stricturing phenotype.
abstract_id: PUBMED:28138950
What is the optimal surgical strategy for complex perianal fistulous disease in pediatric Crohn's disease? A systematic review. Purpose: Perianal fistulous disease is present in 10-15% of children with Crohn's disease (CD) and is frequently complex and refractory to treatment, with one-third of patients having recurrent lesions. We conducted a systematic review of the literature to examine the best surgical strategy or strategies for pediatric complex perianal fistulous disease (CPFD) in CD.
Methods: We searched CENTRAL, MEDLINE, EMBASE, and CINAHL for studies discussing at least one surgical strategy for the treatment of pediatric CPFD in CD. Reference lists of included studies were hand-searched. Two researchers screened all studies for inclusion, quality assessed each relevant study, and extracted data.
Results: One non-randomized prospective and two retrospective studies met our inclusion criteria. Combined use of setons and infliximab therapy shows promise as a first-line treatment. A specific form of fistulectomy, "cone-like resection," also shows promise when combined with biologics. Endoscopic ultrasound to guide medical and surgical management is feasible in the pediatric population, though it is unclear if it improves outcomes.
Conclusion: There is a paucity of evidence regarding the treatment of CPFD in the pediatric population, and further research is required before recommendations can be made as to what, if any, surgical management is optimal.
abstract_id: PUBMED:31574236
The pharmacotherapeutic management of pediatric Crohn's disease. Introduction: Crohn's disease (CD) is a chronic inflammatory condition that can occur throughout the gastrointestinal tract. The aims of treatment of children with CD are to induce and maintain clinical remission of disease, optimize nutrition and growth, minimize adverse effects of therapies, and if possible, achieve mucosal healing. Areas Covered: This review summarizes evidence for the various therapeutic options in the treatment of children with CD. Exclusive enteral nutrition, corticosteroids, and biologics may be used for induction of remission. Immunomodulators (thiopurines, methotrexate) and biologics (infliximab, adalimumab) may be employed for maintenance of remission to prevent flares of disease and avoid chronic steroid use. In cases of fibrotic disease, intestinal perforations, or medically refractory disease, surgery may be the best therapeutic option. Expert Opinion: Exclusive enteral nutrition, corticosteroids, and biologics (including anti-TNF inhibitors) may be used for induction of remission in patients with an active flare of their disease. Immunomodulators and TNF inhibitors may be used for maintenance of remission. Early use of anti-TNF inhibitors in patients with moderate to severe CD may improve efficacy and prevent penetrating complications of disease. While pediatric data are limited, newer biologics, such as vedolizumab and ustekinumab, are used off-label in anti-TNF refractory disease.
abstract_id: PUBMED:27367297
The safety of treatment options for pediatric Crohn's disease. Introduction: A severe clinical phenotype along with concern for ensuring normal growth and development has a major impact on treatment choices for children newly diagnosed with Crohn's disease (CD).
Areas Covered: We review the increasingly outdated concept of 'conventional' therapy of pediatric CD based on aminosalicylates, corticosteroids, and immunomodulators for patients at high risk of complicated disease. Key safety concerns with each treatment are reviewed.
Expert Opinion: There are minimal data supporting the use of aminosalicylates in the treatment of pediatric CD. Corticosteroids are effective short-term for improving signs and symptoms of disease but are ineffective for maintenance therapy. Thiopurines decrease corticosteroid dependence but may not alter progression to complicated disease requiring surgery. Concerns for lymphoma as well as hemophagocytic lymphohistiocytosis with thiopurines are valid. Further data are required on the efficacy and safety of methotrexate as an alternative immunomodulator. Though generally well tolerated and efficacious in most patients, anti-TNF-α therapy can be associated with both mild as well as more serious complications. Current data do not support an increased risk for malignancy associated with anti-TNF therapy alone in children. Anti-adhesion therapy appears to have a favorable safety profile but the experience in children is extremely limited.
abstract_id: PUBMED:19433182
The incidence of inflammatory bowel disease in the pediatric population of Southwestern Ontario. Purpose: Despite a rising worldwide incidence of inflammatory bowel disease (IBD), few data exist on Canadian children. We reviewed the incidence of IBD in all children 17 years or younger in Southwestern Ontario.
Materials And Methods: A chart review from 1997 to 2006 revealed 123 children with IBD. Patients were divided into 2 groups according to year of diagnosis: group 1 = 1997 to 2001 and group 2 = 2002 to 2006. Our catchment population was determined from census data.
Results: Sex (group 1 = 52% females; group 2 = 45% females, P = .42) and age (group 1 = 12.4 +/- 3.6 years; group 2 = 12.9 +/- 3.5 years; P = .43) were similar between groups. Although the overall incidence of IBD decreased (group 1 = 14.3 cases/100,000; group 2 = 12.4 cases/100,000), the incidence of Crohn's disease nearly doubled (group 1 = 3.5 cases/100,000; group 2 = 6.01 cases/100,000) while the incidence of ulcerative colitis decreased substantially (group 1 = 10.6 cases/100,000; group 2 = 6.01 cases/100,000). The incidence of indeterminate colitis was 0.2 cases/100,000 for group 1 and 0.4 cases/100,000 for group 2. The rate of surgical intervention decreased over time, with 43% of patients requiring surgery in group 1 and 31% in group 2 (P = .17).
Conclusion: Despite a slight decrease in pediatric IBD incidence in Southwestern Ontario, the incidence of Crohn's disease has nearly doubled over the last decade. Reasons for this remain unclear, although given the relatively short time interval, environmental factors, rather than genetic changes, seem more likely.
abstract_id: PUBMED:18607264
Safety and efficacy of adalimumab in pediatric patients with Crohn disease. Objectives: Adalimumab has recently become available for adult patients with Crohn disease (CD) as a viable alternative tumor necrosis factor-alpha inhibitor to infliximab. To our knowledge, there have been no studies reviewing the use of adalimumab in pediatric patients with CD. Our aim was to examine the safety and efficacy of adalimumab therapy in pediatric patients with CD.
Patients And Methods: We performed a retrospective chart review of 15 pediatric patients with CD who received adalimumab at a single institution between January 2003 and March 2007. All of the patients had a history of an attenuated response or anaphylaxis to infliximab. Each patient's chart was reviewed for age at diagnosis, sex, extent of disease, age at start of adalimumab therapy, course of therapy, side effects noted during therapy, concurrent medications, and response to adalimumab. Clinical response to adalimumab was classified as complete, partial, or no response based on the patients' ability to be weaned from steroids, increased or decreased need for steroids, or need for surgery during the course of treatment. This study was approved by the Cleveland Clinic Institutional Review Board.
Results: Fifteen pediatric patients with CD received adalimumab for a 33-month period. Of those, 14 patients had adequate follow-up, and 1 patient was lost to follow-up. The mean age at initiation of therapy was 16.6 years (median 17.9 years, range 10.3-21.8 years, SD 3.1 years). The majority of patients received an 80-mg loading dose administered subcutaneously and 40-mg doses subsequently every 2 weeks. The median duration of therapy was 6.5 months (range 1-31 months). A total of 272 injections were given. Of the 14 patients with sufficient data for follow-up, 7 (50%) had a complete response, 2 (14%) had a partial response, and 5 (36%) had no response to adalimumab. Complete response was achieved after a median of 5 injections (range 3-11). Of the 14 patients with adequate follow-up, 5 had fistulizing disease; 3 of these maintained fistula closure, 1 had temporary closure, and 1 required surgery to assist with closure. Twenty-six adverse events occurred during therapy. Eight (57%) patients had at least 1 adverse effect. The most common events were abdominal pain and nausea. No serious adverse events were reported, no serious infections occurred, and no adverse events required discontinuation of adalimumab.
Conclusions: Adalimumab was well tolerated in pediatric patients with CD. Complete or partial response was observed in 64% of patients. No serious adverse events occurred during therapy. Additional studies are needed to evaluate the efficacy and to determine optimal dosing of adalimumab in the pediatric population with CD.
abstract_id: PUBMED:27478130
Treatment with infliximab for pediatric Crohn's disease: Nationwide survey of Japan. Background And Aim: Childhood-onset inflammatory bowel disease (IBD) is characterized by extensive intestinal involvement and rapid early progression. Infliximab (IFX), cyclosporin (CYA), and tacrolimus (FK506) are increasingly used to treat pediatric IBD; however, their long-term effects and adverse events have not been properly investigated in pediatric patients. The aim of this study was to characterize the effects of these biologics and immunomodulators on pediatric IBD patients in Japan. Additionally, we assessed IFX use in pediatric patients with Crohn's disease (CD).
Methods: A national survey of IFX, adalimumab, CYA, and FK506 use in pediatric IBD patients (< 17 years of age) was sent to 683 facilities in Japan from December 2012 to March 2013. Secondary questionnaires were sent to pediatric and adult practitioners with the aim of assessing the effectiveness and safety of IFX for pediatric CD patients.
Results: The response rate for the primary survey was 61.2% (N = 418). Among 871 pediatric CD patients, 284 (31.5%), 24, 4, and 15 received IFX, adalimumab, CYA, and FK506, respectively, from 2000 to 2012. According to the secondary survey, extensive colitis (L3, Paris classification) was diagnosed in 69.4% of pediatric CD patients who received IFX. Regarding the effectiveness of IFX in this population, 54.7% (99/181) of patients were in remission, and 42.0% (76/181) were on maintenance therapy. However, 32.0% (58/181) of patients experienced adverse events, and one patient died of septic shock.
Conclusions: Infliximab is reasonably safe and effective in pediatric CD patients and should therefore be administered in refractory cases.
Answer: Infliximab, an anti-tumor necrosis factor alpha (TNF-α) therapy, has been considered as a treatment option for pediatric stricturing Crohn's disease (CD) and may serve as an alternative to surgery in some cases. A study assessing the efficacy of infliximab treatment for pediatric stricturing CD found that among 21 pediatric CD patients treated with infliximab, seven had luminal strictures. In six of these patients, the inflammatory strictures disappeared after treatment with infliximab. However, one child with ileal fibrotic stenosis required surgical resection, indicating that while infliximab can be effective for inflammatory strictures, it may not be sufficient for fibrotic stenoses (PUBMED:22567750).
The use of infliximab in pediatric CD has been associated with reduced rates of hospitalization and surgery for complications of long-standing active disease, including stricture and fistulae formation. It has also been shown to improve overall growth while maintaining effective disease remission (PUBMED:21694847). Furthermore, the early introduction of anti-TNF-α therapy, such as infliximab, may alter the natural history of CD in children, potentially reducing the need for surgical interventions (PUBMED:21694847).
However, it is important to note that while infliximab can be a valuable treatment option, it may not be suitable for all cases of stricturing CD, particularly those with fibrotic strictures. The decision to use infliximab as an alternative to surgery should be made on a case-by-case basis, considering the specific characteristics of the disease and the patient's response to medical therapy (PUBMED:22567750). |
Instruction: Does enrollment status in community-based insurance lead to poorer quality of care?
Abstracts:
abstract_id: PUBMED:23680066
Does enrollment status in community-based insurance lead to poorer quality of care? Evidence from Burkina Faso. Introduction: In 2004, a community-based health insurance (CBI) scheme was introduced in Nouna health district, Burkina Faso, with the objective of improving financial access to high quality health services. We investigate the role of CBI enrollment in the quality of care provided at primary-care facilities in Nouna district, and measure differences in objective and perceived quality of care and patient satisfaction between enrolled and non-enrolled populations who visit the facilities.
Methods: We interviewed a systematic random sample of 398 patients after their visit to one of the thirteen primary-care facilities contracted with the scheme; 34% (n = 135) of the patients were currently enrolled in the CBI scheme. We assessed objective quality of care as consultation, diagnostic and counselling tasks performed by providers during outpatient visits, perceived quality of care as patient evaluations of the structures and processes of service delivery, and overall patient satisfaction. Two-sample t-tests were performed for group comparison and ordinal logistic regression (OLR) analysis was used to estimate the association between CBI enrollment and overall patient satisfaction.
Results: Objective quality of care evaluations show that CBI enrollees received substantially less comprehensive care for outpatient services than non-enrollees. In contrast, CBI enrollment was positively associated with overall patient satisfaction (aOR = 1.51, p = 0.014), controlling for potential confounders such as patient socio-economic status, illness symptoms, history of illness and characteristics of care received.
Conclusions: CBI patients perceived better quality of care, while objectively receiving worse quality of care, compared to patients who were not enrolled in CBI. Systematic differences in quality of care expectations between CBI enrollees and non-enrollees may explain this finding. One factor influencing quality of care may be the type of provider payment used by the CBI scheme, which has been identified as a leading factor in reducing provider motivation to deliver high quality care to CBI enrollees in previous studies. Based on this study, it is unlikely that perceived quality of care and patient satisfaction explain the low CBI enrollment rates in this community.
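To make the analytic step concrete, the sketch below illustrates the kind of ordinal logistic regression described in the Methods of the abstract above, with CBI enrollment as the exposure and overall satisfaction as an ordered outcome. It is a minimal sketch under stated assumptions: the variable names, the simulated data and the confounder set are hypothetical placeholders, not the authors' data or code; exponentiated coefficients are read as adjusted odds ratios (e.g. the reported aOR = 1.51 for enrollment).

```python
# Illustrative sketch only: simulated data and hypothetical variable names,
# not the study's actual dataset or analysis script.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "cbi_enrolled": rng.integers(0, 2, n),    # exposure: CBI enrollment (0/1)
    "ses_index": rng.integers(1, 4, n),       # confounder: socio-economic status
    "symptom_count": rng.integers(0, 6, n),   # confounder: illness symptoms
})
# Simulate an ordered 1-5 satisfaction score that rises with enrollment.
latent = 0.4 * df["cbi_enrolled"] - 0.1 * df["symptom_count"] + rng.logistic(size=n)
df["satisfaction"] = pd.cut(
    latent, bins=[-np.inf, -1.0, 0.0, 1.0, 2.0, np.inf], labels=[1, 2, 3, 4, 5]
)

# Proportional-odds (ordinal logistic) model of satisfaction on enrollment plus
# confounders; exponentiated coefficients are adjusted odds ratios (aOR).
model = OrderedModel(
    df["satisfaction"],
    df[["cbi_enrolled", "ses_index", "symptom_count"]],
    distr="logit",
)
result = model.fit(method="bfgs", disp=False)
print(np.exp(result.params[["cbi_enrolled", "ses_index", "symptom_count"]]))
```

An aOR above 1 for the enrollment term indicates higher odds of reporting a better satisfaction category among enrollees, after adjustment for the listed confounders; it says nothing about the objectively assessed comprehensiveness of care, which is why the two findings in the abstract can point in opposite directions.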
abstract_id: PUBMED:37529065
Determinants of enrollment in community based health insurance program among households in East Wollega Zone, west Ethiopia: Unmatched case-control study. Background: Ethiopia launched a community-based health insurance (CBHI) scheme in 2011, an innovative financing mechanism to enhance domestic resource mobilization and sustainable health financing. This study assessed determinants of CBHI enrollment among households (HHs) of East Wollega, Ethiopia, in 2022.
Method And Materials: A community-based unmatched 1:2 case-control study was conducted between January 7 and February 5, 2022, among 428 HHs (144 cases and 284 controls). Cases were selected from HHs who had registered for and were currently using CBHI. Controls were selected from those who had not registered for CBHI membership. Data were collected using a semi-structured, interviewer-administered questionnaire. Multivariable logistic regression in SPSS version 25 was employed for analysis, and associations were declared statistically significant at p < 0.05 (95% CI).
Result: Data from 428 HHs (144 cases and 284 controls) were collected, a response rate of 98.8%. Statistically lower odds of CBHI enrollment were observed among HHs with poor knowledge [AOR = 0.48 (95% CI: 0.27, 0.85)], those who perceived care as not respectful [AOR = 0.44 (95% CI: 0.24, 0.81)], those reporting unavailability of laboratory services [AOR = 0.37 (95% CI: 0.21, 0.66)], those reporting an inappropriate time of premium payment [AOR = 0.31 (95% CI: 0.18, 0.52)], and those in the medium wealth status category [AOR = 0.11 (95% CI: 0.03, 0.45)]. Higher odds of CBHI enrollment were observed among those with formal education [AOR = 2.39 (95% CI: 1.28, 4.48)].
Conclusion And Recommendation: Educational level, knowledge, time of membership payment, laboratory test availability, perception of respectful care and wealth status were significant determinants of CBHI enrollment status. Hence, the responsible bodies should discuss and decide with the community on an appropriate time for premium collection, and enhance community education on the CBHI benefit package.
abstract_id: PUBMED:38218770
Enrollment and clients' satisfaction with a community-based health insurance scheme: a community-based survey in Northwest Ethiopia. Background: Although the Ethiopian government has implemented a community-based health insurance (CBHI) program, community enrollment and clients' satisfaction have not been well investigated in Gondar Zuria district, Northwest Ethiopia. This study assessed CBHI scheme enrollment, clients' satisfaction, and associated factors among households in the district.
Methods: A community-based cross-sectional survey assessed CBHI scheme enrollment and clients' satisfaction among households in Gondar Zuria district, Northwest Ethiopia, from May to June 2022. A systematic random sampling method was used to select the study participants from eligible households. A home-to-home interview using a structured questionnaire was conducted. Data were analysed using the Statistical Package for the Social Sciences (SPSS) version 26. Logistic regression was used to identify variables associated with enrollment and clients' satisfaction. A p-value < 0.05 was considered statistically significant.
Results: Out of 410 participants, around two-thirds (64.9%) of the participants were enrolled in the CBHI scheme. Residency status (AOR = 1.38, 95% CI: 1.02-5.32; p = 0.038), time taken to reach a health facility (AOR = 1.01, 95% CI: 1.00-1.02; p = 0.001), and household size (AOR = 0.77, 95% CI: 0.67-0.88; p < 0.001) were significantly associated with CBHI scheme enrollment. Two-thirds (66.5%) of enrolled households were dissatisfied with the overall services provided; in particular, higher proportions were dissatisfied with the availability of medication and laboratory tests (88.7%). Household size (AOR = 1.31, 95% CI: 1.01-2.24; p = 0.043) and waiting time to get healthcare services (AOR = 3.14, 95% CI: 1.01-9.97; p = 0.047) were predictors of clients' satisfaction with the CBHI scheme services.
Conclusion: Although a promisingly high proportion of households were enrolled in the CBHI scheme, most of them were dissatisfied with the service. Improving waiting times to get health services, improving the availability of medications and laboratory tests, and other factors should be encouraged.
abstract_id: PUBMED:29781157
Adverse selection and supply-side factors in the enrollment in community-based health insurance in Northwest Ethiopia: A mixed methodology. Background: Since 2010, the Ethiopian government has introduced different measures to implement community-based health insurance (CBHI) schemes to improve access to health services and reduce the catastrophic effect of health care costs.
Objectives: The aim of this study was to examine the determinants of enrollment in CBHI in Northwest Ethiopia.
Methods: In this study, we utilized a mix of quantitative (multivariate logistic regression applied to population survey linked with health facility survey) and qualitative (focus group discussion and in-depth interview) methods to better understand the factors that affect CBHI enrollment.
Results: The study revealed important factors at the household, informal-association, and health-facility levels that act as barriers to CBHI enrollment. Age, educational status, self-rated health status, perceived quality of health services, and knowledge and information (awareness) about CBHI were among the characteristics of the individual household head affecting enrollment. Household size and participation in an informal association, such as a local credit association, were also positively associated with CBHI enrollment. Additionally, health facility factors such as the unavailability of laboratory tests were the main barriers hindering CBHI enrollment.
Conclusions: This study showed a possibility of adverse selection in CBHI enrollment. Additionally, perceived quality of health services, knowledge, and information (awareness) are positively associated with CBHI enrollment. Therefore, policy interventions to mitigate adverse selection as well as provision of social marketing activities are crucial to increase enrollment in CBHI. Furthermore, policy interventions that enhance the capacity of health facilities and schemes to provide the promised services are necessary.
abstract_id: PUBMED:33218111
The Effect of Ethiopia's Community-Based Health Insurance Scheme on Revenues and Quality of Care. Ethiopia's Community-Based Health Insurance (CBHI) scheme was established with the objectives of enhancing access to health care, reducing out-of-pocket expenditure (OOP), mobilizing financial resources and enhancing the quality of health care. Previous analyses have shown that the scheme has enhanced health care access and led to reductions in OOP. This paper examines the impact of the scheme on health facility revenues and quality of care, relying on a difference-in-differences approach applied to both panel and cross-section data. We find that CBHI-affiliated facilities experience a 111% increase in annual outpatient visits and a 47% increase in annual revenues. Increased revenues are used to ameliorate drug shortages. These increases have translated into enhanced patient satisfaction, which increased by 11 percentage points. Despite the increase in patient volume, there is no discernible increase in waiting time to see medical professionals. These results and the relatively high levels of CBHI enrollment suggest that the Ethiopian CBHI has been able to successfully negotiate the main stumbling block, namely the poor quality of care, that has plagued similar CBHI schemes in Sub-Saharan Africa.
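As a concrete illustration of the difference-in-differences logic used in this paper, the sketch below fits a two-period model on simulated facility-level data. All column names, the simulated numbers and the clustering choice are assumptions for illustration, not the paper's actual data or estimation code; the coefficient on the interaction term is the DiD estimate of the affiliation effect on log outpatient visits.

```python
# Illustrative sketch only: simulated facility panel with hypothetical names,
# not the paper's dataset or estimation code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_fac = 60
df = pd.DataFrame({
    "facility": np.repeat(np.arange(n_fac), 2),        # two periods per facility
    "post": np.tile([0, 1], n_fac),                    # before/after scheme rollout
    "cbhi": np.repeat(rng.integers(0, 2, n_fac), 2),   # CBHI-affiliated facility (0/1)
})
# Simulate log outpatient visits with a positive effect of affiliation after rollout.
df["log_visits"] = (
    8.0 + 0.3 * df["cbhi"] + 0.1 * df["post"]
    + 0.75 * df["cbhi"] * df["post"]
    + rng.normal(scale=0.2, size=len(df))
)

# Difference-in-differences: the cbhi:post coefficient is the treatment effect
# on log visits; exp(coef) - 1 converts it to an approximate percentage change.
did = smf.ols("log_visits ~ cbhi * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["facility"]}
)
effect = did.params["cbhi:post"]
print(f"DiD estimate on log visits: {effect:.3f} (~{100 * (np.exp(effect) - 1):.0f}% increase)")
```

The simulation only shows the mechanics of reading the interaction coefficient; the paper applies the same idea to its panel and cross-sectional facility data, which is how an estimate such as the reported 111% increase in outpatient visits would be obtained.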
abstract_id: PUBMED:18799773
Insurance status and quality of diabetes care in community health centers. Objectives: We sought to compare quality of diabetes care by insurance type in federally funded community health centers. Methods: We categorized 2,018 diabetes patients, randomly selected from 27 community health centers in 17 states in 2002, into 6 mutually exclusive insurance groups. We used multivariate logistic regression analyses to compare quality of diabetes care according to 6 National Committee for Quality Assurance Health Plan Employer Data and Information Set diabetes processes of care and outcome measures.
Results: Thirty-three percent of patients had no health insurance, 24% had Medicare only, 15% had Medicaid only, 7% had both Medicare and Medicaid, 14% had private insurance, and 7% had another insurance type. Those without insurance were the least likely to meet the quality-of-care measures; those with Medicaid had a quality of care similar to those with no insurance.
Conclusions: Research is needed to identify the major mediators of differences in quality of care by insurance status among safety-net providers such as community health centers. Such research is needed for policy interventions at Medicaid benefit design and as an incentive to improve quality of care.
abstract_id: PUBMED:37064679
Enrollment of reproductive age women in community-based health insurance: An evidence from 2019 Mini Ethiopian Demographic and Health Survey. Background: Universal health coverage (UHC) is aimed at ensuring that everyone has access to high-quality healthcare without the risk of financial ruin. Community-based health insurance (CBHI) is one of the essential means to achieve the sustainable development goals (SDGs) global health priority of UHC. Thus, this study assessed health insurance enrollment and associated factors among reproductive age women in Ethiopia.
Methods: We computed the health insurance enrollment of reproductive-age women using secondary data from the recent Ethiopian Mini Demographic and Health Surveys (EMDHS) 2019. The EMDHS was a community-based cross-sectional study carried out in Ethiopia from March 21 to June 28, 2019. Cluster sampling with two stages was employed for the survey. The study comprised 8885 (weighted) reproductive-age women. STATA 14 was used for data processing and analysis. Bivariate and multivariable logistic regression analyses were conducted. Adjusted odds ratio (AOR) with 95% confidence interval (CI) was reported and statistical significance was set at a value of p < 0.05.
Results: Of the 8,885 study participants, 3,835 women (43.2%; 95% CI: 42.1-44.2%) had health insurance. Women aged 20-24 years, 25-29 years, and 30-34 years were less likely to enroll in health insurance compared with their younger counterparts (15-19 years). Living in a rural area, having a family size greater than five, living with a female household head, and having more than five living children were negatively associated with enrollment in health insurance. In addition, health insurance enrollment among reproductive-age women was significantly affected by regional and religious variation.
Conclusion: The overall CBHI enrolment among reproductive-age women in Ethiopia was low. To achieve the SDG targets of reducing the maternal mortality ratio and neonatal mortality, improving reproductive-age women's access to health insurance is essential. National, regional, and local officials, policymakers, NGOs, program planners, and other supporting organizations working on improving health insurance enrollment of reproductive-age women need to create awareness and support them based on these significant factors.
abstract_id: PUBMED:31906995
Determinants of enrollment decision in the community-based health insurance, North West Ethiopia: a case-control study. Objective: To identify the determinants for enrollment decision in the community-based health insurance program among informal economic sector-engaged societies, North West Ethiopia.
Method: An unmatched case-control study was conducted on 148 cases (members of the insurance program) and 148 controls (non-members) from September 1 to October 30, 2016. To select the villages and households, stratified sampling followed by simple random sampling was employed. The data were entered into Epi Info version 7 and exported to SPSS version 20 for analysis. Descriptive statistics and bivariable and multivariable logistic regression analyses were computed to describe the study objectives and identify the determinants of the enrollment decision for the insurance program. Odds ratios with 95% CIs were used to describe the associations between the independent and outcome variables.
Results: A total of 296 respondents (148 cases and 148 controls) were included. The mean ages of cases and controls were 42 ± 11.73 and 40 ± 11.37 years, respectively. The majority of respondents were male (87.2% of cases and 79% of controls). Family size between 4 and 6 (AOR = 2.26; 95% CI: 1.04, 4.89), a history of illness in the household (AOR = 3.24; 95% CI: 1.68, 6.24), perceiving the membership contribution as medium (AOR = 2.3; 95% CI: 1.23, 4.26), being married (AOR = 6; 95% CI: 1.43, 10.18) and trust in the program (AOR = 4.79; 95% CI: 2.40, 9.55) were independent determinants of an increased likelihood of enrollment in the community-based health insurance, while being a merchant (AOR = 0.07; 95% CI: 0.09, 0.6) decreased it.
Conclusion: The decision to enroll in the community-based health insurance program was determined by demographic, social, economic and political factors. Households with large family sizes and farmers in the informal sector should be given particular attention in order to increase enrollment in the insurance program.
abstract_id: PUBMED:36215271
The effects of individual and community-level factors on community-based health insurance enrollment of households in Ethiopia. Introduction: Community-based health insurance (CBHI) is a type of voluntary health insurance, adopted all over the world, in which members of a community pool funds to protect themselves from the high costs of seeking medical care and treatment. In Ethiopia, healthcare services are underutilized due to a lack of resources in the healthcare system. The study aims to identify the individual- and community-level factors associated with community-based health insurance enrollment of households in Ethiopia.
Methods: Data from the Ethiopian mini demographic and health survey 2019 were used to identify factors associated with community-based health insurance enrollment of households in Ethiopia. Multilevel logistic regression analysis was used on a nationally representative sample of 8,663 households nested within 305 communities, considering the data's layered structure. We used a p-value<0.05 with a 95% confidence interval for the results.
Result: The prevalence of community-based health insurance enrollment in Ethiopia was 20.2%. The enrollment rate of households in the scheme was high in both the Amhara (57.9%) and Tigray (57.9%) regions and low (3.0%) in the Afar region. At the individual level, the age of the household head, the number of children aged 5 and under, the number of household members, owning agricultural land, owning a mobile telephone, receiving cash or food from the Safety Net Program, owning livestock and herds of farm animals, and the wealth index had significant associations with community-based health insurance enrollment; at the community level, region had a significant association with enrollment.
Conclusion: Both individual- and community-level characteristics were significant predictors of community-based health insurance enrollment of households. Furthermore, the Ministry of Health, health bureaus, and other concerned bodies should prioritize clusters with low health insurance coverage to strengthen health system financing and address factors that negatively affect the CBHI enrollment of households.
abstract_id: PUBMED:31375108
Determinants of community-based health insurance implementation in west Gojjam zone, Northwest Ethiopia: a community based cross sectional study design. Background: In most developing countries, healthcare costs are mainly paid out-of-pocket at the time of sickness at the point of service delivery, which can inhibit access. The total economic cost of illness for households is also estimated to be frequently above 10% of household income, which is categorized as catastrophic. The purpose of this study was to assess factors that determine decisions to join the community-based health insurance scheme in West Gojjam zone.
Methods: A community based cross sectional survey was conducted to collect data from 690 household heads using a multistage sampling technique. A binary logistic regression was used to identify the determinants of household decisions for CBHI enrollment.
Results: Out of the participants, 58% were CBHI members. Besides, family size (AOR = 1.17; CI = 1.02-1.35), average health status (AOR = .380; CI = .179-.805), chronic disease (AOR = 3.42; CI = 1.89-6.19); scheme benefit package adequacy (AOR = 2.17; CI = 1.20-3.93), perceived health service quality (AOR = 3.69; CI = 1.77-7.69), CBHI awareness (AOR = 4.90; CI = 1.65-14.4); community solidarity (AOR = 3.77; CI = 2.05-6.92) and wealth (AOR = 3.62; CI = 1.67-7.83) were significant determinant factors for enrolment in the community based health insurance scheme.
Conclusion: CBHI awareness, family health status, community solidarity, quality of service of health institutions, and wealth were the major factors determining household decisions to enroll in the scheme. Therefore, in-depth and sustainable awareness-creation programs on the scheme, premiums stratified by the economic status of households, and incorporation of social capital factors, particularly building community solidarity into scheme implementation, are vital to enhance sustainable enrollment. As perceived family health status and the existence of chronic disease were also found to be significant determinants of enrollment, the Government might have to look for options to make the scheme mandatory.
Answer: The evidence from the provided abstracts suggests that enrollment status in community-based insurance (CBI) can lead to a perception of poorer quality of care in some cases. Specifically, a study in Burkina Faso found that while CBI enrollees perceived better quality of care, they objectively received worse quality of care compared to patients who were not enrolled in CBI (PUBMED:23680066). This discrepancy between perceived and actual quality of care might be due to systematic differences in quality of care expectations between CBI enrollees and non-enrollees. The type of provider payment used by the CBI scheme could be a factor influencing the quality of care, as it has been identified as a leading factor in reducing provider motivation to deliver high-quality care to CBI enrollees in previous studies.
However, it is important to note that the impact of CBI enrollment on the quality of care can vary by context. For instance, in Ethiopia, the CBHI scheme was found to increase health facility revenues and enhance patient satisfaction without increasing waiting times to see medical professionals, suggesting an improvement in the quality of care (PUBMED:33218111). On the other hand, dissatisfaction among enrolled households was reported in another study, particularly with the availability of medication and laboratory tests (PUBMED:38218770).
In summary, while there is evidence that CBI enrollment can be associated with a perception of poorer quality of care in some settings, the actual impact on quality of care may vary depending on various factors, including the design and implementation of the CBI scheme, the local context, and the expectations of the enrollees. |
Instruction: Do nodal metastases from cutaneous melanoma of the head and neck follow a clinically predictable pattern?
Abstracts:
abstract_id: PUBMED:11505490
Do nodal metastases from cutaneous melanoma of the head and neck follow a clinically predictable pattern? Background: Potential lymphatic drainage patterns from cutaneous melanomas of the head and neck are said to be variable and frequently unpredictable. The aim of this article is to correlate the anatomic distribution of pathologically involved lymph nodes with primary melanoma sites and to compare these findings with clinically predicted patterns of metastatic spread.
Methods: A prospectively documented series of 169 patients with pathologically proven metastatic melanoma was reviewed by analyzing the clinical, operative, and pathologic records. Clinically, it was predicted that melanomas of the anterior scalp, forehead, and face could metastasize to the parotid and neck levels I-III; the coronal scalp, ear, and neck to the parotid and levels I-V; the posterior scalp to occipital nodes and levels II-V; and the lower neck to levels III-V. Minimum follow up was 2 years.
Results: There were 141 therapeutic (97 comprehensive, 44 selective) and 28 elective lymphadenectomies (4 comprehensive dissections, 21 selective neck dissections, and 3 cases in which parotidectomy alone was performed). Overall, there were 112 parotidectomies, 44 of which were therapeutic and 68 elective. Pathologically positive nodes involved clinically predicted nodal groups in 156 of 169 cases (92.3%). The incidence of postauricular node involvement was only 1.5% (3 cases). No patient was initially seen with contralateral metastatic disease; however, 5 patients (2.9%) failed in the contralateral neck after therapeutic dissection. In 68% of patients, metastatic disease involved the nearest nodal group, and in 59% only a single node was involved.
Conclusions: Cutaneous malignant melanomas of the head and neck metastasized to clinically predicted nodal groups in 92% of patients in this series. Postauricular and contralateral metastatic node involvement was uncommon.
abstract_id: PUBMED:33783056
Prognostic value and therapeutic implications of nodal involvement in head and neck mucosal melanoma. Background: The prognostic significance of nodal involvement is not well established in head and neck mucosal melanoma (HNMM).
Methods: A retrospective, monocentric study was performed on 96 patients with HNMM treated between 2000 and 2017.
Results: At diagnosis, seventeen patients (17.8%) were cN1, with a higher risk for HNMM arising from the oral cavity (p = 0.01). cN status had no prognostic value in patients with nonmetastatic resectable HNMM. No occult nodal metastasis was observed in the cN0 patients after a nodal dissection (ND). The nodal recurrence rate was similar in the cN1 and the cN0 patients. No isolated nodal recurrences were noted. Among the patients who underwent a ND, no benefit of this procedure was noted.
Conclusions: cN1 status is not a prognostic factor in patients with resectable HNMM. Elective ND should not be systematically performed in cN0 HNMM.
abstract_id: PUBMED:32083247
Predictors of occult lymph node metastasis in cutaneous head and neck melanoma. Objective: To use the Surveillance, Epidemiology, and End Results (SEER) database to verify the findings of a recent National Cancer Database (NCDB) study that identified factors predicting occult nodal involvement in cutaneous head and neck melanoma (CHNM) while identifying additional predictors of occult nodal metastasis and comparing two distinct cancer databases.
Methods: Cases of CHNM in the SEER database diagnosed between 2004 and 2014 were identified. Demographic information and oncologic data were obtained. Univariate and multivariate analysis were performed to identify factors associated with pathologic nodal positivity.
Results: There were 34,002 patients with CHNM identified. Within this population, 16,232 were clinically node-negative, 1,090 of whom were found to be pathologically node-positive. On multivariate analysis, factors associated with an increased risk of occult nodal metastasis included increasing depth of invasion (stepwise increase in adjusted odds ratio [OR]), nodular histology (aOR: 1.47 [95% CI: 1.21-1.80]), ulceration (aOR: 1.74 [95% CI: 1.48-2.05]), and mitoses (aOR: 1.86 [95% CI: 1.36-2.54]). Factors associated with a decreased risk of occult nodal metastasis included female sex (aOR: 0.80 [0.67-0.94]) and desmoplastic histology (aOR: 0.37 [95% CI: 0.24-0.59]).
Conclusion: Regarding clinically node-negative CHNM, the SEER database and the NCDB have similarities in demographic information but differences in baseline population sizes and tumor characteristics that should be considered when comparing findings between the two databases.
Level Of Evidence: 4.
abstract_id: PUBMED:2770308
Management of nodal metastases from head and neck melanoma. Ninety-three patients with nodal metastases from melanoma (stage II) located in the head and neck underwent surgery at the National Cancer Institute of Milan. Different surgical techniques were employed, ranging from radical to conservative treatment. Analysis of the data shows no significant difference from an oncological standpoint between radical and conservative surgery when a radical dissection is performed. Elective nodal dissections for malignant melanoma of the head and neck region, like those at other sites of lymphatic drainage such as the groin and axilla, did not prove beneficial. We do recommend parotidectomy in cases where the primary tumor arises in the superior area of the head. The number of nodes involved and the type of disease spread constitute the major prognostic factors, as in the case of melanomas located in other sites. Our data further indicate that the incidence of distant and local recurrence is not influenced by the type of dissection performed.
abstract_id: PUBMED:2307240
The role of parotidectomy in the treatment of nodal metastases from cutaneous melanoma of the head and neck. Forty-six patients affected by head and neck melanoma were submitted to elective or therapeutic parotidectomy associated with laterocervical dissection from 1980 to 1983 at the National Cancer Institute of Milan. The study showed that parotidectomy is indicated in the presence of clinically palpable nodes or where primaries originate in the temporo-zygomatic area. It also demonstrated that survival is not affected by type of dissection performed and that cervical lymphadenectomy must always be associated with parotidectomy because of the high incidence of occult metastases in other nodal groups in these cases.
abstract_id: PUBMED:15024316
Correlation between preoperative lymphoscintigraphy and metastatic nodal disease sites in 362 patients with cutaneous melanomas of the head and neck. Objective: Lymphoscintigraphy for head and neck melanomas demonstrates a wide variation in lymphatic drainage pathways, and sentinel nodes (SNs) are reported in sites that are not clinically predicted (discordant). To assess the clinical relevance of these discordant node fields, the lymphoscintigrams of patients with head and neck melanomas were analyzed and correlated with the sites of metastatic nodal disease.
Methods: In 362 patients with head and neck melanomas who underwent lymphoscintigraphy, the locations of the SNs were compared with the locations of the primary tumors. The SNs were removed and examined in 136 patients and an elective or therapeutic regional lymph node dissection was performed in 40 patients.
Results: Lymphoscintigraphy identified a total of 918 SNs (mean 2.5 per patient). One or more SNs was located in a discordant site in 114 patients (31.5%). Lymph node metastases developed in 16 patients with nonoperated SNs, all underneath the tattoo spots on the skin used to mark the position of the SNs. In 14 patients SN biopsy revealed metastatic melanoma. After a negative SN biopsy procedure 11 patients developed regional lymph node metastases during follow-up. Elective and therapeutic neck dissections demonstrated 10 patients with nodal metastases, all located in predicted node fields. Of the 51 patients with involved lymph nodes, 7 had positive nodes in discordant sites (13.7%).
Conclusions: Metastases from head and neck melanomas can occur in any SN demonstrated by lymphoscintigraphy. SNs in discordant as well as predicted node fields should be removed and examined to optimize the accuracy of staging.
abstract_id: PUBMED:35394247
Predictors of Nodal Metastasis in Cutaneous Head and Neck Cancers. Purpose Of Review: The complex and varied drainage patterns in the head and neck present a challenge in the regional control of cutaneous neoplasms. Lymph node involvement significantly diminishes survival, often warranting more aggressive treatment. Here, we review the risk factors associated with lymphatic metastasis, in the context of the evolving role of sentinel lymph node biopsy.
Recent Findings: In cutaneous head and neck melanomas, tumor thickness, age, size, mitosis, ulceration, and specific histology have been associated with lymph node metastasis (LNM). In head and neck cutaneous squamous cell carcinomas, tumor thickness, size, perineural invasion, and immunosuppression are all risk factors for nodal metastasis. The risk factors for lymph node involvement in Merkel cell carcinoma are not yet fully defined, but emerging evidence indicates that tumor thickness and size may be associated with regional metastasis. The specific factors that predict a greater risk of LNM for cutaneous head and neck cancers generally include depth of invasion, tumor size, mitotic rate, ulceration, immunosuppression, and other histopathological factors.
abstract_id: PUBMED:25441725
Recurrence and survival after neck dissections in cutaneous head and neck melanoma. Introduction: An important prognostic factor in head and neck melanoma is the status of the regional lymph nodes since the presence of metastatic disease in the nodes greatly aggravates the prognosis. There is no consensus on the surgical treatment algorithm for this group. Our aim was to study if there is a difference in nodal recurrence and survival after radical, modified or selective neck dissection.
Methods: A total of 57 patients treated for regional metastases of head and neck melanoma were analysed retrospectively with respect to type of neck dissection, use of sentinel node biopsy, nodal recurrence and survival.
Results: After a median 127-month (range: 22-290) follow-up period, we showed that there was no significant difference in nodal recurrence between three different dissection groups (11% for radical node dissection, 24% for modified radical node dissection and 23% for selective node dissection, p > 0.05). No significant difference in five-year survival was observed between the dissection types (56% for radical node dissection, 61% for modified radical node dissection and 48% for selective node dissection, p = 0.613). Multivariate and univariate analysis revealed that patients with metastatic deposits in sentinel nodes had a better survival than patients with clinically palpable nodes (five-year survival rate: 70% versus 36%, p = 0.008).
Conclusion: The extent of neck dissection does not significantly influence the rate of recurrence or survival. This study indicates that there is a survival benefit for patients who undergo completion lymph node dissection following a positive sentinel node biopsy.
Funding: not relevant.
Trial Registration: not relevant.
abstract_id: PUBMED:23956077
No benefit in staging fluorodeoxyglucose-positron emission tomography in clinically node-negative head and neck cutaneous melanoma. Background: Fluorodeoxyglucose-positron emission tomography (FDG-PET) has a high sensitivity for detecting metastasis from melanoma, but its application in early-stage melanomas is questionable. The purpose of this study was to determine if positron emission tomography (PET) is beneficial in staging of clinically node negative (cN0) head and neck melanoma.
Methods: After institutional review board approval, patients with head and neck melanoma treated at 2 cancer centers (between 2000 and 2010) were identified using International Classification of Disease (ICD)-9 codes. A retrospective medical chart review of cN0 patients was performed for the treatment course and outcomes.
Results: A total of 165 patients were treated; of these, 106 were node negative. FDG-PET was included in initial staging of 47 cN0 patients. None had true distant metastasis detected on PET. The imaging also failed to detect nodal metastasis in 2 patients who had disease on lymphatic sampling.
Conclusion: FDG-PET did not alter the initial staging or treatment in patients with cN0 head and neck melanoma. Therefore, its routine use for staging is not warranted.
abstract_id: PUBMED:18717284
Neck nodal metastases from unknown primary: case series. Background: Neck nodal metastases from occult primary constitute about 5%-10% of all hosts harboring carcinoma of unknown primary site. Metastases in the upper and middle neck (levels I-II-III-IV-V) are generally attributed to head and neck cancers, whereas the lower neck (level IV) involvement is often associated with primaries below the clavicles. Diagnostic procedures include a careful clinical evaluation and a fiberoptic endoscopic examination of the head and neck mucosa, biopsies from all suspicious sites or blindly from the sites of possible origin of the primary, computerized tomography (CT) scan, and magnetic resonance imaging (MRI). The most frequent histological finding is squamous cell carcinoma, particularly when the upper neck is involved.
Settings: We report three cases of patients presented with nodal metastases of the neck from unknown primary site and we also describe the diagnostic and therapeutic approach employed in each one.
Results: One patient harbored a neuroendocrine metastatic deposit, the second patient a poorly differentiated carcinoma and the third one a malignant melanoma.
Conclusions: Diagnostic procedures should be aimed at clarifying the histology of the nodal metastases and detecting the primary tumor site.
Answer: Yes, nodal metastases from cutaneous melanoma of the head and neck generally follow a clinically predictable pattern. A study reviewed a series of 169 patients with pathologically proven metastatic melanoma and found that pathologically positive nodes involved clinically predicted nodal groups in 156 of 169 cases (92.3%). The study predicted that melanomas of the anterior scalp, forehead, and face could metastasize to the parotid and neck levels I-III; the coronal scalp, ear, and neck to the parotid and levels I-V; the posterior scalp to occipital nodes and levels II-V; and the lower neck to levels III-V. Postauricular and contralateral metastatic node involvement was uncommon (PUBMED:11505490).
Additionally, lymphoscintigraphy in patients with head and neck melanomas showed that metastases can occur in any sentinel node demonstrated by the imaging, including both discordant and predicted node fields. This suggests that sentinel nodes in both discordant and predicted fields should be removed and examined to optimize the accuracy of staging (PUBMED:15024316).
However, it is important to note that while the pattern is generally predictable, there are exceptions, and metastatic disease can involve the nearest nodal group in 68% of patients, with 59% involving only a single node (PUBMED:11505490). Therefore, while a clinically predictable pattern exists, there is still a degree of variability that must be considered in the management of cutaneous melanoma of the head and neck. |
Instruction: Do we need histology for a normal-looking gallbladder?
Abstracts:
abstract_id: PUBMED:18040621
Do we need histology for a normal-looking gallbladder? Background/purpose: Gallbladder cancer (GBC) is a rare malignancy with poor overall prognosis. Simple cholecystectomy is curative if the cancer is limited to mucosa. We aimed here to investigate the need for routine histological examination of gallbladder.
Methods: We carried out a retrospective review of 2890 final pathology reports of processed gallbladder specimens following cholecystectomy due to gallstones disease. The review covered the 10-year period from 1994 to 2004. The notes of all cases of gallbladder cancer were scrutinized, with particular emphasis on presentation, preoperative diagnostic tools using abdominal ultrasound and computed tomography scan, operative findings, and the histology results.
Results: Gallbladder cancer (GBC) was detected in five specimens (0.17%), dysplasia in six (0.2%), and secondaries to gallbladder in three (0.1%). Histological findings confirmed gallstone disease in 97% and rare benign pathology in 3%. The median age of patients with GBC was 61 years (range, 59-84 years). In all five patients, cancer was isolated from thickened fibrotic wall on macroscopic appearance and spread through all layers of the gallbladder wall. The percentage of thickened-wall gallbladder in this study was 38.02% and the cancer incidence in the thickened wall was 0.45%.
Conclusions: A selective policy rather than routine histological examination of nonfibrotic or thickened-wall gallbladder has to be considered. This will reduce the burden on pathology departments, with significant cost savings.
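As a quick arithmetic check, the incidences reported above are internally consistent: $0.3802 \times 2890 \approx 1099$ thickened-wall specimens, giving $5/1099 \approx 0.45\%$ cancer incidence in thickened-wall gallbladders, against an overall incidence of $5/2890 \approx 0.17\%$.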
abstract_id: PUBMED:25058481
Routine histological analysis of a macroscopically normal gallbladder--a review of the literature. Background: 70,000 cholecystectomies were performed in the United Kingdom in 2011-2012. Currently it is standard practice to submit all gallbladder specimens for routine histology to exclude malignancy. The aim of this systematic review was to establish whether a normal macroscopic appearance to the gallbladder at the time of cholecystectomy is sufficient to rule out malignancy and therefore negate the need for routine histology.
Methods: Relevant articles that were published between 1966 and January 2013 were identified through electronic databases.
Results: 21 studies reported on 34,499 histologically analysed specimens. 172/187 (92%) of gallbladder cancers demonstrated intra-operative macroscopic abnormality. Studies that opened the specimens intra-operatively identified all cancers, whereas gross macroscopic visualization resulted in 15 potentially missed cancers (p = 0.10). In patients of European ethnicity, gallbladder cancer in a macroscopically normal looking gallbladder was identified in only one study; however all of these patients were above the age of 60. The incidence of gallbladder cancer was significantly raised in ethnic groups from high risk areas (p = 0.0001).
Conclusions: A macroscopically normal gallbladder in patients of European ethnicity under the age of 60 may not require formal histopathology. The best method for intra-operative examination may involve opening the specimen to allow inspection of the mucosa and wall, however this needs further investigation. In the context of the volume of gallbladder surgery being performed there is the potential for significant cost and time savings.
abstract_id: PUBMED:33403040
Hypoxia Inducible Factor-1alpha (HIF-1A) plays different roles in Gallbladder Cancer and Normal Gallbladder Tissues. Purpose: Hypoxia-inducible factor-1alpha (HIF-1A) is a transcription factor that acts as an "angiogenic switch", especially under the hypoxic microenvironment of solid tumors. However, the functions and clinical significance of HIF-1A in gallbladder cancer (GBC) remain controversial, and it has not been studied in normal gallbladder tissues. In this study, we sought to clarify the role of subcellular localization of HIF-1A expression in GBC and normal gallbladder tissues.
Methods: The expression of HIF-1A and CD34 in 127 GBC and 47 normal gallbladder tissues was evaluated by immunohistochemistry. Cox proportional hazards model analysis and the Kaplan-Meier method were used to assess the correlations between these factors and clinicopathological features and prognosis.
Results: HIF-1A was expressed in both the cytoplasm and the nucleus of GBC and normal control tissues, and was significantly correlated with microvessel density (MVD). GBC tissues with positive nuclear HIF-1A expression had higher MVD compared to those with positive cytoplasmic HIF-1A expression; however, in normal gallbladder tissues, samples with positive cytoplasmic HIF-1A had higher MVD compared to those with positive nuclear HIF-1A expression. Moreover, GBC with nuclear HIF-1A expression tended to be more poorly differentiated and had larger tumor size compared to GBC with cytoplasmic HIF-1A expression. Furthermore, GBC patients positive for nuclear HIF-1A had significantly worse overall survival (OS) compared with those positive for cytoplasmic HIF-1A. Multivariate Cox regression analysis identified lymph node metastasis and nuclear HIF-1A expression as independent prognostic parameters in GBC.
Conclusions: Our findings provide evidence for the first time that HIF-1A is expressed in normal gallbladder tissues. Nuclear HIF-1A and cytoplasmic HIF-1A play different roles in GBC and normal gallbladder tissues.
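The abstract names Kaplan-Meier and Cox proportional-hazards analyses without further detail. A minimal sketch of such an analysis, assuming the Python lifelines library and a randomly generated stand-in data frame (illustrative only, not the study's 127 GBC cases; column names are hypothetical):

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical stand-in data: follow-up time, death indicator, and two
# binary covariates loosely mirroring the factors named in the abstract.
rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "os_months": (rng.exponential(scale=36, size=n) + 1).round(1),
    "death": rng.integers(0, 2, size=n),          # 1 = died, 0 = censored
    "nuclear_hif1a": rng.integers(0, 2, size=n),  # nuclear vs. cytoplasmic HIF-1A
    "ln_metastasis": rng.integers(0, 2, size=n),  # lymph node metastasis
})

# Kaplan-Meier estimate of overall survival
kmf = KaplanMeierFitter()
kmf.fit(df["os_months"], event_observed=df["death"])

# Multivariate Cox proportional-hazards model; remaining columns are covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()   # hazard ratios for nuclear HIF-1A and nodal metastasis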
abstract_id: PUBMED:33773526
Selective or Routine Histology of Cholecystectomy Specimens for Diagnosing Incidental Carcinoma of Gallbladder and Correlation with Careful Intraoperative Macroscopic Examination? A Systematic Review. Background: Selective or Routine histology of cholecystectomy specimens for benign gallbladder disease has always been a matter of debate because of the low prevalence and bad prognosis associated with gall bladder carcinoma. The objective of this study is to ascertain whether selective histology can be preferred over Routine histology without any harm.
Methods: This systematic review is conducted according to PRISMA's checklist; relevant articles were searched in the database until September 1 2020 in PubMed, Scopus, Science Direct, and Web of Science databases, manually, with search queries and without date restrictions. Studies included in this systematic review involved patients who underwent cholecystectomy for benign gallbladder disease and were diagnosed with gallbladder carcinoma incidentally either after selective or routine histology of the gallbladder.
Results: A total of 24 studies recommending routine or selective histology were selected for the systematic review after applying the inclusion and exclusion criteria. These studies comprised 77,213 patients and 486 malignancies. The studies correlated the number of IGBC diagnosed histologically with the number of IGBCs suspected intraoperatively by the surgeons on macroscopic examination. Studies recommending routine histology showed that a significant number of histologically diagnosed IGBC were missed by surgeons, whereas studies recommending selective histology showed that most histologically diagnosed IGBC had already been suspected intraoperatively by the surgeons. When comparing the macroscopic details of the IGBCs between routine and selective studies, we found significant overlap. Most of the findings missed as suspicious by the surgeons in the routine studies were suspected by the surgeons involved in the studies recommending selective histology, thereby favouring selective histology and emphasizing the need for careful intraoperative macroscopy for suspecting IGBC.
Conclusion: Selective histological examination of cholecystectomy specimens can be preferred if a careful intraoperative macroscopic examination is done and patient risk factors are taken into consideration.
abstract_id: PUBMED:25743827
Is it necessary to submit grossly normal looking gall bladder specimens for histopathological examination? Background: The objectives of the study were to: 1) determine the frequency of incidental malignancy in unsuspected/grossly normal looking gall bladders; 2) determine the frequency of malignancy in suspected/grossly abnormal looking gall bladders.
Materials And Methods: This prospective, cross-sectional study was carried out at a tertiary care hospital in Pakistan during a four-year period (Jan 2009-Dec 2012). All the cholecystectomy cases performed for gallstone disease were examined initially by a surgeon and later by a pathologist for macroscopic abnormalities and accordingly assigned to one of three categories, i.e., grossly normal, suspicious, or abnormal/malignant. The frequency of incidental carcinoma in these categories was recorded after receipt of the final histopathology report.
Results: A total of 426 patients underwent cholecystectomy for cholelithiasis, with a male:female ratio of 1:4. Mean age of the patients was 45 years with a range of 17-80 years. The frequency of incidental gallbladder carcinoma was found to be 0.70% (n = 3). All the cases of gallbladder carcinoma were associated with some macroscopic abnormality. Not a single case of incidental gallbladder carcinoma was diagnosed in the 383 'macroscopically normal looking' gallbladders.
Conclusions: Incidental finding of gall bladder cancer was not observed in any of macroscopically normal looking gall bladders and all the cases reported as carcinoma gallbladder had some gross abnormality that made them suspicious. We suggest histopathologic examination of only those gall bladders with some gross abnormality.
abstract_id: PUBMED:28221193
Normal Gallbladder Ejection Fraction Occurring Unexpectedly Obviates Need for Sincalide Stimulation. A 25-year-old man with chronic right upper quadrant abdominal pain was referred for hepatobiliary scintigraphy to evaluate gallbladder (GB) function. An unexpected GB contraction with an ejection fraction (EF) of 90% was observed during the first hour of baseline imaging. Subsequent stimulation with sincalide produced a GB EF of 99%. A similar case previously reported also showed a normal unexpected GB EF that predicted a similar post-sincalide GB EF. These examples support what should be evident: a normal unexpected GB EF is sufficient evidence of normal GB function and should obviate the need for sincalide stimulation.
abstract_id: PUBMED:2644853
The gross anatomy and histology of the gallbladder, extrahepatic bile ducts, Vaterian system, and minor papilla. The gross anatomy, histology, and immunohistochemistry of the normal gallbladder, extrahepatic bile ducts, Vaterian system, and minor papilla are reviewed. The variability in the gross and microscopic morphology of the extrahepatic biliary system is emphasized.
abstract_id: PUBMED:22251522
TLR4 expression in normal gallbladder, chronic cholecystitis and gallbladder carcinoma. Background/aims: Chronic inflammation is a risk factor for gallbladder carcinoma. The molecular mechanisms linking inflammation and gallbladder carcinogenesis are incompletely understood. Toll-like receptors are involved in inflammatory response and play an important role in the innate immune system by initiating and directing immune response to pathogens. We tested the hypothesis that TLR4 participated in the development of gallbladder carcinoma through investigating the expression of TLR4 in chronic cholecystitis, gallbladder carcinoma and normal gallbladder.
Methodology: The expression of TLR4 in 30 specimens of chronic calculous cholecystitis, 13 specimens of gallbladder adenocarcinoma and 10 specimens of normal gallbladder tissue was determined by immunohistochemistry, western blotting analysis and quantitative RT-PCR.
Results: We showed that TLR4 was mostly localized to the glandular and luminal epithelium of gallbladder. TLR4 expression was lower in gallbladder carcinoma tissue than in chronic cholecystitis and normal gallbladder tissue, whereas the difference between chronic cholecystitis tissue and normal gallbladder tissue was not statistically significant.
Conclusions: The expression of TLR4 may be closely associated with the course of gallbladder carcinoma.
abstract_id: PUBMED:23145247
Layers of looking. The films that I make are part documentary, part film essay, part visual poem. They are created out of a series of up to 30 drawings that are animated. The process from research to completion takes about 2 years, and at the end of it there is a film and a number of drawings that can be exhibited. This paper emerges out of the realisation that in the different stages of making a drawing and a film-from planning a drawing and seeking an image as a starting point through to the finished film being projected onto a screen in a gallery-I am looking at the image in different ways. There is a close-up kind of looking when I am drawing and a reflective kind of looking when I step back to take stock. There is a way of looking at darks and lights as they are built up that is different from the way of looking at the lines crossing the edges of the grid being used to transfer an image to the drawing paper. Seeing the drawings on the wall of my studio is different from seeing the same drawings in a museum setting.
abstract_id: PUBMED:8853227
Gallbladder motility in systemic sclerosis. In 20 patients with systemic sclerosis (SSc) and 24 healthy controls, gallbladder motility was evaluated by abdominal ultrasonography after stimulation by a standard liquid meal. Results from patients with normal and disturbed oesophageal function were analysed separately in order to investigate the significance of gallbladder motility as a parameter for gastrointestinal involvement in SSc. All patients showed a marked decrease in gallbladder size after stimulation (patients 61 +/- 13%; controls 48 +/- 12%). Patients with oesophageal dysfunction (n = 12) had a slightly lower gallbladder contraction (maximal decrease = 58 +/- 13%) when compared to patients with normal oesophageal function (n = 8; 66 +/- 13%); however, this difference was not statistically significant. Gallbladder motility in patients with SSc was not reduced when compared with healthy controls. SSc-induced oesophageal dysfunction was not associated with impaired gallbladder motility. Thus, measurement of gallbladder emptying is not a helpful tool when looking for gastrointestinal involvement in SSc.
Answer: The question of whether histology is necessary for a normal-looking gallbladder has been addressed in several studies. A retrospective review of 2890 gallbladder specimens found that gallbladder cancer (GBC) was detected in only 0.17% of cases, suggesting that a selective policy for histological examination might be considered to reduce the burden on pathology departments and save costs (PUBMED:18040621). Another systematic review indicated that in patients of European ethnicity under the age of 60, a macroscopically normal gallbladder may not require formal histopathology, as the incidence of GBC was significantly lower in this group (PUBMED:25058481).
Furthermore, a systematic review comparing routine and selective histology of cholecystectomy specimens found that careful intraoperative macroscopic examination could potentially allow for selective histology without missing incidental gallbladder carcinoma (IGBC), emphasizing the importance of careful intraoperative inspection (PUBMED:33773526). A prospective cross-sectional study supported this by showing that incidental GBC was not found in any macroscopically normal-looking gallbladders, suggesting that histopathologic examination might be reserved for gallbladders with gross abnormalities (PUBMED:25743827).
However, it is important to note that histological examination can reveal other aspects of gallbladder pathology, such as the expression of certain markers like Hypoxia Inducible Factor-1alpha (HIF-1A) in GBC and normal gallbladder tissues, which may have different roles and prognostic implications in GBC (PUBMED:33403040). Additionally, the expression of Toll-like receptor 4 (TLR4) has been studied in normal gallbladder, chronic cholecystitis, and gallbladder carcinoma, indicating its potential association with the development of gallbladder carcinoma (PUBMED:22251522).
In conclusion, while there is evidence to suggest that routine histology may not be necessary for all macroscopically normal-looking gallbladders, especially in certain populations, careful intraoperative examination is crucial. Selective histology could be considered when the gallbladder appears normal upon careful inspection, and patient risk factors are taken into account. Nonetheless, histology can provide valuable information on other pathological processes and should be considered in the context of the overall clinical picture. |
Instruction: Does Perfusion MRI After Closed Reduction of Developmental Dysplasia of the Hip Reduce the Incidence of Avascular Necrosis?
Abstracts:
abstract_id: PUBMED:26092677
Does Perfusion MRI After Closed Reduction of Developmental Dysplasia of the Hip Reduce the Incidence of Avascular Necrosis? Background: Gadolinium-enhanced perfusion MRI (pMRI) after closed reduction/spica casting for developmental dysplasia of the hip (DDH) has been suggested as a potential means to identify and avoid avascular necrosis (AVN). To date, however, no study has evaluated the effectiveness of pMRI in clinical practice or compared it with other approaches (such as postreduction CT scan) to show a difference in the proportion of AVN.
Questions/purposes: (1) Can a pMRI-based protocol be used immediately post closed reduction to minimize the risk that AVN would develop? (2) What are the overall hip-related outcomes after closed reduction/spica casting using this protocol? (3) Do any patient-specific factors at the time of closed reduction predict future AVN?
Methods: This was a retrospective cohort study at a large tertiary care children's hospital. Between 2009 and 2013 we treated 43 patients with closed reduction/spica casting for DDH, of whom 33 (77%) received a postreduction pMRI. All patients were indicated for pMRI per treating surgeon preference. A convenience sample totaling 25 hips in 22 patients treated with pMRI was then established using the following exclusion criteria: DDH of neuromuscular/syndromic origin, failed initial closed reduction, less than 1 year of clinical and radiographic followup, and subsequent open reduction. Next, the 40 patients treated with closed reduction between 2004 and 2009 were screened until the chronologically most recent 25 hips (after applying the previously mentioned exclusion criteria) were identified in 21 of the first 34 patients (62%) screened. Although termed the CT group, specific postreduction imaging was not a defined inclusion criterion in this group with the majority (21 of 25 [84%]) receiving postreduction CT and the remainder (four of 25 [16%]) receiving only postreduction radiographs. All hips with globally decreased femoral head perfusion on postreduction pMRI were treated with immediate cast removal followed by repeat closed reduction or open reduction, as per surgeon preference, with two of 33 (6%) requiring such further interventions. Salter criteria were then used to determine the proportion of AVN on radiographs at 1-year and final followup. Secondary outcomes including residual dysplasia and the need for further corrective surgery were ascertained through radiographic and retrospective chart review.
Results: At 1-year followup there was no difference in the proportion of AVN in the historical CT group as compared with the pMRI group (six of 25 [24%] versus one of 25 [4%]; odds ratio [OR], 7.6; 95% confidence interval [CI], 0.8-363; p = 0.098). However, by final followup there was a statistically higher proportion of AVN in the CT group (seven of 25 [28%] versus one of 25 [4%]; OR, 9.3; 95% CI, 1.0-438; p = 0.049). No patient with normal perfusion on postreduction pMRI went on to develop AVN. In those pMRI patients in whom a successful reduction was initially obtained, two of 25 (8%) went on to require further corrective surgery and one of 25 (4%) had a redislocation event. With the numbers available, no patient-specific factors at the time of closed reduction were predictive of future AVN, including the patient's age/weight, the presence of an ossific nucleus, history of previous bracing treatment, or the abduction angle in spica cast.
Conclusions: A pMRI-based protocol immediately after closed reduction/spica casting may decrease the risk of AVN by helping the surgeon to evaluate femoral head vascularity. Although preliminary in nature, this study could serve to guide further investigation into the potential role of pMRI for the treatment of patients who require closed reduction/spica casting for DDH.
Level Of Evidence: Level III, therapeutic study.
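The odds ratios reported in this abstract follow directly from the 2 x 2 counts, which may help when checking the effect sizes: $\mathrm{OR}_{1\text{-year}} = \frac{6/19}{1/24} = \frac{6 \times 24}{19 \times 1} \approx 7.6$ and $\mathrm{OR}_{\text{final}} = \frac{7/18}{1/24} = \frac{7 \times 24}{18 \times 1} \approx 9.3$, with the very wide confidence intervals reflecting the single AVN event in the pMRI group.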
abstract_id: PUBMED:31695810
Effect of abduction on avascular necrosis of the femoral epiphysis in patients with late-detected developmental dysplasia of the hip treated by closed reduction: a MRI study of 59 hips. Purpose: The purpose of this study was to explore whether increasing the hip abduction angle would increase the incidence of avascular necrosis (AVN) in patients with late- detected developmental dysplasia of the hip (DDH) treated by closed reduction (CR) and spica cast immobilization.
Methods: A total of 55 patients (59 hips) with late-detected DDH underwent MRI after CR. Hip abduction angle and hip joint distance were measured on postoperative MRI transverse sections. The acetabular index and centre-edge angle were measured on plain radiographs at the last follow-up. The presence of AVN according to Kalamchi and McEwen's classification was assessed. We retrospectively analyzed the associations among abduction angles, hip joint distances, radiographic parameters, AVN and final outcomes, exploring the relationship between hip joint abduction angle and AVN rate.
Results: The mean age at the time of CR was 14.4 months SD 5.5 (6 to 28), and the mean follow-up was 26.2 months SD 8.1 (12.4 to 41.7). The mean hip abduction angle was 70.2° SD 7.2° (53° to 85°) on the dislocated side and 63.7° SD 8.8° (40° to 82°) on the normal side; the mean hip joint distance was 5.1 mm SD 1.9 (1.3 to 9.1) on the dislocated side and 2.2 mm SD 0.6 on the normal side (1.3 to 3.3). Eight of 59 hips (13.6%) developed AVN. Neither the amount of abduction nor hip joint distance increased the AVN rate (p = 0.97 and p = 0.65, respectively) or the dislocation rate (p = 0.38 and p = 0.14, respectively).
Conclusion: Abduction angle up to 70.2° following CR did not increase the AVN rate in children aged six to 28 months with late-detected DDH treated by CR.
Level Of Evidence: III.
abstract_id: PUBMED:34476030
Does the vascular development of the femoral head correlate with the incidence of avascular necrosis of the proximal femoral epiphysis in children with developmental dysplasia of the hip treated by closed reduction? Purpose: The purpose of this study was to identify the correlation between the vascular development of the femoral head and avascular necrosis (AVN) in patients with developmental dysplasia of the hip (DDH) treated by closed reduction (CR).
Methods: We retrospectively reviewed 78 patients with DDH treated by CR (83 hips). The vascular maturity, number of vessels and perfusion changes of the femoral head were assessed on perfusion MRI (pMRI) before and after CR.
Results: The number of vessels (mean 4.2 sd 1.4) of the femoral head and the ratio (36.1%) of mature vessels (type III) on the dislocated side were significantly lower than those on the contralateral side (mean 6.0 sd 1.2; 82.2%) (p < 0.001). Of the included 83 hips, 39 hips (61.5%) showed decreased perfusion of the femoral head on the dislocated side, including partially decreased (Class B, 47.0%) and globally decreased (Class C, 14.5%) perfusion, which was significantly more than on the contralateral side (0.0%) (p < 0.001). In total, 32 out of 83 hips (38.5%) developed AVN. The rate of AVN with Class A perfusion (18.8%), in which perfusion of the femoral head was normal (unchanged or enhanced), was significantly lower than with Class C (66.7%) (p = 0.006).
Conclusion: The vascular development and perfusion changes of the femoral head on the dislocated side are significantly worse than those on the contralateral side. Immature vascularity of the femoral head before CR and poor perfusion of the femoral head after CR may be risk factors for AVN in patients with DDH.
Level Of Evidence: III.
abstract_id: PUBMED:30087565
Prereduction traction for the prevention of avascular necrosis before closed reduction for developmental dysplasia of the hip: a meta-analysis. Background And Purpose: Avascular necrosis (AVN) is one of the common complications after closed reduction and hip spica cast for developmental dysplasia of the hip (DDH). Prereduction traction has been used to reduce a dislocated hip or decrease the risk of AVN, but there are conflicting results in prevention effects on AVN. The purpose of this study was to systematically review the current literature and evaluate the effect of prereduction traction in preventing AVN in children with DDH treated by closed reduction through a meta-analysis.
Materials And Methods: A systematic review of the literature was performed using PubMed and EMBASE with variations of three major terms: 1) hip dislocation; 2) closed reduction; and 3) avascular necrosis. Seven studies that could compare the incidence of AVN between the traction and no-traction group were included. Methodological quality was assessed, a heterogeneity test was done (p=0.008), and the pooled risk ratios were estimated.
Results: The association between traction and AVN was assessed, using data on 683 hips treated by closed reduction. The incidence of AVN in the traction and no-traction groups ranged from 5% to 47.7% and from 0% to 72.7%, respectively. A meta-analysis with a random effects model indicated no significant difference in the incidence of AVN between traction and no-traction groups (p=0.536).
Conclusion: There was insufficient evidence to decide the efficacy of prereduction traction before closed reduction in reducing the risk of AVN in patients with DDH in this meta-analysis. To recommend prereduction traction for the prevention of AVN, long-term follow-up studies considering age, severity of dislocation, and appropriate traction method are needed.
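The pooled risk ratio described above is typically computed with DerSimonian-Laird random-effects weighting; a minimal Python/NumPy sketch of that calculation follows, using hypothetical per-study counts since the abstract does not report study-level data:

import numpy as np

# (AVN events, n) for the traction and no-traction arms of each study -- hypothetical
studies = [
    (5, 40, 7, 38),
    (12, 60, 9, 55),
    (3, 25, 4, 30),
]

y, v = [], []
for a, n1, c, n2 in studies:
    # a 0.5 continuity correction would be added if any cell were zero
    rr = (a / n1) / (c / n2)              # per-study risk ratio
    y.append(np.log(rr))                  # log risk ratio
    v.append(1/a - 1/n1 + 1/c - 1/n2)     # variance of the log risk ratio
y, v = np.array(y), np.array(v)

w = 1 / v                                 # fixed-effect (inverse-variance) weights
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)           # Cochran's Q heterogeneity statistic
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (v + tau2)                     # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print("pooled RR = %.2f (95%% CI %.2f-%.2f)"
      % (np.exp(y_re), np.exp(y_re - 1.96 * se), np.exp(y_re + 1.96 * se)))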
abstract_id: PUBMED:32582384
Does the size of the femoral head correlate with the incidence of avascular necrosis of the proximal femoral epiphysis in children with developmental dysplasia of the hip treated by closed reduction? Purpose: The purpose of this study was to identify if any correlation between size of the proximal femoral epiphysis and avascular necrosis (AVN) exists.
Methods: We retrospectively reviewed 111 patients with developmental dysplasia of the hip treated by closed reduction (124 hips). The diameter and height of both femoral head and ossific nucleus were assessed on preoperative MRI.
Results: The diameter and the height of the femoral head as well as of the ossific nucleus of the contralateral side were significantly greater than the dislocated side. AVN occurred in 21 (16.9%) out of 124 hips. The rate of AVN gradually decreased with age: 30.0% at six to 12 months, 18.2% at 12 to 18 months and 3.7% at 18 to 24 months. Spearman correlation analysis showed that age is negatively correlated with the incidence of AVN (r = -0.274; p = 0.002) and the diameter of the femoral head has a significantly negative association with the incidence of AVN (r = -0.287; p = 0.001). No significant association was observed between the incidence of AVN and height of the femoral head or size of the ossific nucleus. Hips with AVN were significantly smaller than hips without AVN.
Conclusions: The size of both the femoral head and the ossific nucleus increase with age although the dislocated femoral head is smaller compared with the contralateral side. The diameter of the femoral head and not the size of the ossific nucleus negatively correlate with the risk of AVN, with a bigger femoral head showing lower risk of AVN.
Level Of Evidence: III.
abstract_id: PUBMED:30585878
Traction does not decrease failure of reduction and femoral head avascular necrosis in patients aged 6-24 months with developmental dysplasia of the hip treated by closed reduction: a review of 385 patients and meta-analysis. This study aimed to investigate the effects of preliminary traction on the rate of failure of reduction and the incidence of femoral head avascular necrosis (AVN) in patients with late-detected developmental dysplasia of the hip treated by closed reduction. A total of 385 patients (440 hips) treated by closed reduction satisfied the inclusion criteria. Patients were divided in two groups according to treatment modality: a traction group (276 patients) and a no-traction group (109 patients). Tönnis grade, rate of failure reduction, AVN rate, acetabular index, center-edge angle of Wiberg, and Severin's radiographic grade were assessed on plain radiographs, and the results were compared between the two groups of patients. In addition, a meta-analysis was performed based on the existing comparative studies to further evaluate the effect of traction on the incidence of AVN. Tönnis grade in the traction group was significantly higher than in the no-traction group (P = 0.021). The overall rate of failure reduction was 8.2%; no significant difference was found between the traction (9.2%) and no-traction groups (5.6%) (P = 0.203). The rates of failure reduction were similar in all Tönnis grades, regardless of treatment modality (P > 0.05). The rate of AVN in the traction group (14%) was similar to that of the no-traction group (14.5%; P = 0.881). Moreover, the rates of AVN were similar in all Tönnis grades, regardless of treatment modality (P > 0.05). The meta-analysis did not identify any significant difference in the AVN rate whether preliminary traction was used or not (odds ratio = 0.76, P = 0.32). At the last follow-up visit, the two groups of patients had comparable acetabular indices, center-edge angles, and Severin's radiographic grades (P > 0.05). In conclusion, preliminary traction does not decrease the failure of reduction and the incidence of AVN in developmental dysplasia of the hip treated by closed reduction between 6 and 24 months of age.
abstract_id: PUBMED:30154921
Closed reduction in late-detected developmental dysplasia of the hip: indications, results and complications. Purpose: The aim of the study was a review of the literature in order to evaluate the results and complications of closed reduction in late-detected developmental dysplasia of the hip (DDH).
Methods: This study consisted of an analysis of the literature relative to late-detected DDH treatment options considering hip congruency, rates of re-dislocation and of avascular necrosis.
Results: Gradual closed reduction (Petit-Morel method) appears to be an effective method for restoring joint congruency. Dislocation relapse and avascular necrosis are more efficiently prevented with closed versus open reduction. The tendency for spontaneous correction of acetabular dysplasia decreases if closed reduction is performed after 18 months of age. Patient age at the beginning of traction should be considered for the prognosis, with a lower rate of satisfactory results after the age of 3 years.
Conclusion: In our opinion, the Petit-Morel method is a suitable treatment option for children aged between six months and three years with idiopathic DDH.
abstract_id: PUBMED:19098638
Post-closed reduction perfusion magnetic resonance imaging as a predictor of avascular necrosis in developmental hip dysplasia: a preliminary report. Introduction: Avascular necrosis (AVN) of the femoral head remains a major complication in the treatment of developmental dysplasia of the hip (DDH) in infants. We performed a retrospective analysis to look at the predictive ability of postclosed reduction contrast-enhanced magnetic resonance imaging (MRI) for AVN after closed reduction in DDH.
Methods: Twenty-eight hips in 27 infants (aged 1-11 months) with idiopathic hip dislocations who had failed brace treatment underwent closed reduction +/- adductor tenotomy and spica cast application under general anesthesia. Magnetic resonance imaging of the hips after intravenous gadolinium contrast injection for evaluation of epiphyseal perfusion was obtained immediately after cast application. Patients were followed with serial radiographs for a minimum of 1 year after closed reduction. Presence of AVN was determined by the presence of any one of the 5 Salter criteria by 2 readers. Magnetic resonance imaging was graded as normal, asymmetric enhancement, focal decreased enhancement, or global decreased enhancement by 2 radiologists.
Results: Six (21%) of 28 hips showed evidence of clinically significant AVN on follow-up radiographs. Fifty percent of the hips with AVN, but only 2 of 22 hips without AVN, showed a global decreased MRI enhancement (P < 0.05, Fisher exact test). Multivariate logistic regression indicated that a global decreased enhancement was associated with a significantly higher risk of developing AVN (P < 0.01), independently of age at reduction (P = 0.02) and abduction angle.
Conclusions: In addition to accurate anatomical assessment of a closed reduction in DDH, gadolinium-enhanced MRI provides information about femoral head perfusion that may be predictive for future AVN. At present, it is premature to use the perfusion information for routine clinical use. However, it opens the door to studies looking at repositioning or alternative reduction methods that may reduce the risk of AVN in this higher risk group.
abstract_id: PUBMED:27177477
Risk factors for avascular necrosis after closed reduction for developmental dysplasia of the hip. Purpose: The purpose of this study was to identify and evaluate risk factors of avascular necrosis (AVN) after closed treatment for developmental dysplasia of the hip (DDH).
Methods: A retrospective review of children diagnosed with DDH at a tertiary-care children's hospital between 1986 and 2009 was performed. The presence of AVN was assessed according to Salter's classification system.
Results: Eighty-two affected hips in 70 children with an average age of 10 months at closed reduction (range 1-31 months) and 5 years (range 2-19 years) of follow-up met the inclusion criteria. Twenty-nine (of 82, 35 %) affected hips developed AVN. The use of pre-reduction traction (p = 0.019) increased the risk of AVN, while preoperative Pavlik harness or brace trial (p = 0.28), presence of ossific nucleus at the time of closed reduction (p = 0.16), and adductor tenotomy (p = 0.37) were not significant factors. Laterality (right vs. left) was also not a significant risk factor (p = 0.75), but patients who underwent closed reduction for bilateral DDH were less likely to develop AVN (p = 0.027). Overall, the degree of abduction did not affect the rate of AVN (p = 0.87). However, in patients treated with closed reduction younger than 6 months of age, the rate of AVN was increased with abduction ≥50° (9/15, 60 %) compared to abduction <50° (0/8, 0 %) (p = 0.007). Patients who developed AVN were more likely to require subsequent surgery (p = 0.034) and more likely to report a fair/poor clinical outcome (p = 0.049).
Conclusions: The risk of AVN (35 %) following closed reduction and spica casting for DDH is high. The degree of abduction in spica casts appears to be a risk factor in patients ≤6 months old. The authors recommend that abduction in spica casts should be limited to <50° in children younger than 6 months of age.
Level Of Evidence: IV.
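The abstract reports p = 0.007 for the abduction comparison in infants treated before 6 months of age without naming the test; the value is consistent with a two-sided Fisher exact test on the reported 2 x 2 table (the choice of test here is an assumption), as this minimal SciPy sketch shows:

from scipy.stats import fisher_exact

#                 AVN  no AVN
table = [[9, 6],   # abduction >= 50 degrees (9/15 with AVN)
         [0, 8]]   # abduction <  50 degrees (0/8 with AVN)

odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(round(p, 3))   # ~0.007, matching the reported value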
abstract_id: PUBMED:36138634
The Effect of Traction before Closed Reduction in Patients with Developmental Dysplasia of the Hip. Developmental dysplasia of the hip (DDH) with a dislocated hip can be treated with traction before closed reduction (CR). Currently, there is insufficient evidence supporting the use of preoperative traction treatment for a successful CR. The objective of this study was to determine the effect of preoperative traction on the success rate of primary CR in DDH patients with dislocated hips. A retrospective pair-matched study was performed in DDH patients with dislocated hips. Patients with preoperative traction treatment prior to primary CR were matched (based on age and the severity of DDH on the radiograph) to patients without preoperative traction treatment. The primary outcome was the presence or absence of maintained reduction after three weeks. A match was found for 37 hips, which resulted in the inclusion of 74 hips. No significant difference was found in the number of successful reductions after three weeks between the traction group and the control group (31 vs. 33 hips, p = 0.496). Traction treatment did not significantly improve the short-term or mid-term outcomes for closed reduction. Based on these results, we suggest that traction treatment should not be used as standard care for dislocated hips in DDH.
Answer: The use of perfusion MRI (pMRI) after closed reduction and spica casting for developmental dysplasia of the hip (DDH) has been suggested as a means to identify and potentially avoid avascular necrosis (AVN). A retrospective cohort study at a large tertiary care children's hospital indicated that a pMRI-based protocol immediately after closed reduction/spica casting may decrease the risk of AVN by allowing surgeons to evaluate femoral head vascularity. In this study, no patient with normal perfusion on postreduction pMRI developed AVN, and by final follow-up, there was a statistically higher proportion of AVN in the historical CT group compared to the pMRI group (28% vs. 4%, respectively) (PUBMED:26092677).
Another study found that the vascular development and perfusion changes of the femoral head on the dislocated side are significantly worse than those on the contralateral side. Immature vascularity of the femoral head before CR and poor perfusion of the femoral head after CR may be risk factors for AVN in patients with DDH (PUBMED:34476030).
However, it is important to note that while these studies suggest a potential benefit of pMRI in reducing the incidence of AVN, the evidence is not yet conclusive. For instance, a preliminary report on post-closed reduction perfusion MRI as a predictor of AVN in DDH indicated that while gadolinium-enhanced MRI provides information about femoral head perfusion that may be predictive for future AVN, it is premature to use this information for routine clinical use (PUBMED:19098638).
In summary, the available evidence suggests that pMRI may be a useful tool in reducing the risk of AVN after closed reduction for DDH, but further investigation is needed to confirm its role and establish routine clinical use. |
Instruction: Targeted delayed scanning at CT urography: a worthwhile use of radiation?
Abstracts:
abstract_id: PUBMED:22855323
Targeted delayed scanning at CT urography: a worthwhile use of radiation? Purpose: To determine whether ureteral segments not filled with contrast material at computed tomographic (CT) urography ever contain tumor detectable only by filling these segments with contrast material.
Materials And Methods: In this institutional review board-approved, HIPAA-compliant retrospective study, with waiver of informed consent, databases were searched for all patients who underwent heminephroureterectomy or ureteroscopy between January 1, 2001, and December 31, 2009, with available CT urography findings in the 12 months prior to surgery or biopsy and patients who had undergone at least two CT urography procedures with a minimum 5-year follow-up between studies. One of two radiologists blinded to results of pathologic examination recorded location of unfilled segments, time of scan, subsequent filling, and pathologic or 5-year follow-up CT urography results. Tumors were considered missed in an unfilled segment if tumor was found at pathologic examination or follow-up CT urography in the same one-third of the ureter and there were no secondary signs of a mass with other index CT urography sequences. Estimated radiation dose for additional delayed sequences was calculated with a 32-cm phantom.
Results: In 59 male and 33 female patients (mean age, 66 years) undergoing heminephroureterectomy, 27 tumors were present in 41 partially nonopacified ureters in 20 patients. Six tumors were present in nonopacified segments (one multifocal, none bilateral); all were identifiable by means of secondary signs present with earlier sequences. Among 182 lesions biopsied at ureteroscopy in 124 male and 53 female patients (mean age, 69 years), 28 tumors were present in nonopacified segments in 25 patients (four multifocal, none bilateral), all with secondary imaging signs detectable without delayed scanning. In 64 male and 29 female patients (mean age, 69 years) who underwent 5-year follow-up CT urography, three new tumors were revealed in three patients; none occurred in the unfilled ureter at index CT urography. Estimated radiation dose from additional sequences was 4.3 mSv per patient.
Conclusion: Targeted delayed scanning at CT urography yielded no additional ureteral tumors and resulted in additional radiation exposure.
abstract_id: PUBMED:36359488
Dedicated CCTA Followed by High-Pitch Scanning versus TRO-CT for Contrast Media and Radiation Dose Reduction: A Retrospective Study. We aimed to compare dedicated coronary computed tomography angiography (CCTA) followed by high-pitch scanning and triple-rule-out computed tomography angiography (TRO-CTA) in terms of radiation dose, contrast media (CM) use, and image quality. Patients with acute chest pain were retrospectively enrolled and assigned to group A (n = 55; scanned with dedicated CCTA followed by high-pitch scanning) or group B (n = 45; with TRO-CTA). Patient characteristics, radiation dose, CM use, and quantitative parameters (CT value, image noise, signal-to-noise ratio, contrast-to-noise ratio, and image quality score) of pulmonary arteries (PAs), thoracic aortae (TAs), and coronary arteries (CAs) were compared. The total effective dose was significantly lower in group A (6.25 ± 2.94 mSv) than B (8.93 ± 4.08 mSv; p < 0.001). CM volume was significantly lower in group A (75.7 ± 8.9 mL) than B (95.0 ± 0 mL; p < 0.001). PA and TA image quality were significantly better in group B, whereas that of CA was significantly better in group A. Qualitative image scores of PA and TA scans rated by radiologists were similar, whereas that of CA scans was significantly higher in group A than B (p < 0.001). Dedicated CCTA followed by high-pitch scanning demonstrated lower radiation doses and CM volume without debasing qualities of PA, TA, and CA scans than did TRO-CTA.
abstract_id: PUBMED:33177787
An Investigation of Slot-scanning for Mammography and Breast CT. Mammography and breast CT are important tools for breast cancer screening and diagnosis. Current implementations are limited by scattered radiation and/or spatial resolution. In this work, we propose and develop a slot scan-based system to be used in both mammography and CT mode that can limit scatter and collect sparse CT data for improved image quality at low radiation exposures. Monte Carlo simulations of an anthropomorphic breast phantom show a factor of 10 reduction in scattering amplitude with our slot scan-based system compared to that of a full-field detector mammography system (area mode). Similarly, slot scanning improved the MTF (particularly the low-frequency response) compared to an area detector. Investigation of sparse CT sampling showed that doubly sparse acquisition, of which our slot-scanning system is capable, returns better-quality reconstructions than angle-only sparse projection. Thus, a system combining slot-scanning mammography and slot-scanning breast CT has the potential to deliver improved, dose-efficient imaging performance and to become a viable breast cancer screening and diagnostic tool.
abstract_id: PUBMED:38319452
The image quality and feasibility of solitary delayed [68 Ga]Ga-PSMA-11 PET/CT using long field-of-view scanning in patients with prostate cancer. Background: Previous studies have demonstrated that delayed [68 Ga]Ga-PSMA PET/CT imaging improves lesion detection compared to early [68 Ga]Ga-PSMA PET/CT in patients with prostate cancer. However, the sole use of delayed [68 Ga]Ga-PSMA PET/CT has been limited due to the insufficient number of photons obtained with standard PET/CT scanners. The combination of early and delayed [68 Ga]Ga-PSMA standard PET/CT may be considered, but it is challenging to incorporate into a high-demand clinical setting. Long field-of-view (LFOV) PET/CT scanners have higher sensitivity compared to standard PET/CT. However, it remains unknown whether the image quality of solitary delayed [68 Ga]Ga-PSMA LFOV PET/CT imaging is adequate to satisfy clinical diagnostic requirements. Therefore, the purpose of this study was to evaluate the image quality of delayed [68 Ga]Ga-PSMA LFOV PET/CT and examine the feasibility of utilizing delayed [68 Ga]Ga-PSMA LFOV PET/CT imaging alone in patients with prostate cancer.
Methods: The study sample consisted of 56 prostate cancer patients who underwent [68 Ga]Ga-PSMA-11 LFOV PET/CT scanning between December 2020 and July 2021. All patients were subjected to early LFOV PET/CT imaging at 1-h post-injection as well as delayed LFOV PET/CT imaging at 3-h post-injection using [68 Ga]Ga-PSMA-11. The image quality and diagnostic efficiency of solitary delayed [68 Ga]Ga-PSMA-11 LFOV PET/CT imaging was analyzed.
Results: The results showed that delayed [68 Ga]Ga-PSMA-11 LFOV PET/CT yielded satisfactory image quality that fulfilled clinical diagnostic benchmarks. Compared to early imaging, delayed [68 Ga]Ga-PSMA-11 LFOV PET/CT demonstrated heightened lesion SUVmax values (11.0 [2.3-193.6] vs. 7.0 [2.0-124.3], P < 0.001) and superior tumor-to-background ratios (3.3 [0.5-62.2] vs. 1.7 [0.3-30.7], P < 0.001). Additionally, delayed [68 Ga]Ga-PSMA-11 LFOV PET/CT detected supplementary lesions in 14 patients (25%) compared to early imaging, resulting in modifications to disease staging and management plans.
Conclusions: In summary, the findings indicate that the image quality of delayed [68 Ga]Ga-PSMA-11 LFOV PET/CT is satisfactory for meeting clinical diagnostic prerequisites. The use of solitary delayed [68 Ga]Ga-PSMA-11 LFOV PET/CT imaging in prostate cancer simplifies the examination protocol and improves patient compliance, compared to [68 Ga]Ga-PSMA-11 standard PET/CT which necessitates both early and delayed imaging.
abstract_id: PUBMED:32063009
Use of Multiphase CT Protocols in 18 Countries: Appropriateness and Radiation Doses. Purpose: To assess the frequency, appropriateness, and radiation doses associated with multiphase computed tomography (CT) protocols for routine chest and abdomen-pelvis examinations in 18 countries.
Materials And Methods: In collaboration with the International Atomic Energy Agency, multi-institutional data on clinical indications, number of scan phases, scan parameters, and radiation dose descriptors (CT dose-index volume; dose-length product [DLP]) were collected for routine chest (n = 1706 patients) and abdomen-pelvis (n = 426 patients) CT from 18 institutions in Asia, Africa, and Europe. Two radiologists scored the need for each phase based on clinical indications (1 = not indicated, 2 = probably indicated, 3 = indicated). We surveyed 11 institutions for their practice regarding single-phase and multiphase CT examinations. Data were analyzed with the Student t test.
Results: Most institutions use multiphase protocols for routine chest (10/18 institutions) and routine abdomen-pelvis (10/11 institutions that supplied data for abdomen-pelvis) CT examinations. Most institutions (10/11) do not modify scan parameters between different scan phases. Total DLP for 1-, 2-, and 3-phase routine chest CT was 272, 518, and 820 mGy·cm, respectively. Corresponding values for 1- to 5-phase routine abdomen-pelvis CT were 400, 726, 1218, 1214, and 1458 mGy·cm, respectively. For multiphase CT protocols, there were no differences in scan parameters and radiation doses between different phases for either chest or abdomen-pelvis CT (P = 0.40-0.99). Multiphase CT examinations were unnecessary in 100% of routine chest CT and in 63% of routine abdomen-pelvis CT examinations.
Conclusions: Multiphase scan protocols for the routine chest and abdomen-pelvis CT examinations are unnecessary, and their use increases radiation dose.
abstract_id: PUBMED:28143506
Utilization of CT scanning associated with complex spine surgery. Background: Due to the risk associated with exposure to ionizing radiation, there is an urgent need to identify areas of CT scanning overutilization. While increased use of diagnostic spinal imaging has been documented, no previous research has estimated the magnitude of follow-up imaging used to evaluate the postoperative spine.
Methods: This retrospective cohort study quantifies the association between spinal surgery and CT utilization. An insurance database (Humana, Inc.) with ≈ 19 million enrollees was employed, representing 8 consecutive years (2007-2014). Surgical and imaging procedures were captured by anatomic-specific CPT codes. Complex surgeries included all cervical, thoracic and lumbar instrumented spine fusions. Simple surgeries included discectomy and laminectomy. Imaging was restricted to CT and MRI. Postoperative imaging frequency extended to 5 years post-surgery.
Results: There were 140,660 complex spinal procedures, 39,943 discectomies, and 49,889 laminectomies. MRI was the predominant preoperative imaging modality for all surgical procedures (median: 80%; range: 73-82%). Postoperatively, CT prevalence following complex procedures increased more than two-fold from 6 months (18%) to 5 years (≥40%), and patients having a postoperative CT averaged two scans. For simple procedures, the prevalence of postoperative CT scanning never exceeded 30%.
Conclusions: CT scanning is used frequently for follow-up imaging evaluation following complex spine surgery. There is emerging evidence of an increased cancer risk due to ionizing radiation exposure with CT. In the setting of complex spine surgery, actions to mitigate this risk should be considered and include reducing nonessential scans, using the lowest possible radiation dose protocols, exerting greater selectivity in monitoring the developing fusion construct, and adopting non-ferromagnetic implant biomaterials that facilitate MRI postoperatively.
abstract_id: PUBMED:29925750
Differentiation between Hepatic Focal Lesions and Heterogeneous Physiological Accumulations by Early Delayed Scanning in 18F-FDG PET/CT Examination. Purpose: We examined whether early delayed scanning is useful for differentiating liver lesions from heterogeneous physiological accumulation in positron emission tomography (PET) examination.
Methods: The subjects of the study were 33 patients with colorectal cancer who underwent PET examination in which early delayed scanning was added to conventional early imaging to distinguish between liver lesions and heterogeneous physiological accumulation. We placed the same regions of interest (ROIs) in the tumor and hepatic parenchyma on the early delayed and conventional early images. Then, we measured the SUVmax of the ROIs and calculated the tumor-to-liver parenchyma uptake ratio (TLR). In addition, change rates between the early and early delayed images were calculated for SUVmax and TLR.
Results: In the receiver operating characteristic (ROC) analysis, the SUVmax change rate performed best among the SUVmax-based indices, and TLR on early delayed scanning performed best among the TLR-based indices. The SUVmax of the lesions did not change between the early scan and early delayed scanning (p=0.98), but it decreased significantly in the normal group (p<0.001). TLR in the lesion group increased significantly (p<0.001) on early delayed images compared with the early scan, and TLR decreased significantly in the normal group (p<0.001). The SUVmax change rate showed the highest AUC of the ROC curves (0.99).
Conclusion: Early delayed scanning could distinguish between liver lesions and heterogeneous physiological accumulation in colon cancer patients.
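For readers unfamiliar with these indices, the quantities named in this abstract can be written out explicitly. The formulas below are a minimal sketch using the standard definitions; the abstract itself does not state them, so the exact expressions used by the authors are assumed:
\[ \mathrm{TLR} = \frac{\mathrm{SUV}_{\max}^{\mathrm{tumor}}}{\mathrm{SUV}_{\max}^{\mathrm{liver}}}, \qquad \text{change rate}\,(\%) = 100 \times \frac{X_{\mathrm{early\ delayed}} - X_{\mathrm{early}}}{X_{\mathrm{early}}}, \quad X \in \{\mathrm{SUV}_{\max}, \mathrm{TLR}\}. \]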
abstract_id: PUBMED:26466180
CT Radiation: Key Concepts for Gentle and Wise Use. Use of computed tomography (CT) in medicine comes with the responsibility of its appropriate (wise) and safe (gentle) application to obtain required diagnostic information with the lowest possible dose of radiation. CT provides useful information that may not be available with other imaging modalities in many clinical situations in children and adults. Inappropriate or excessive use of CT should be avoided, especially if required information can be obtained in an accurate and time-efficient manner with other modalities that require a lower radiation dose, or non-radiation-based imaging modalities such as ultrasonography and magnetic resonance imaging. In addition to appropriate use of CT, the radiology community also must monitor scanning practices and protocols. When appropriate, high-contrast regions and lesions should be scanned with reduced dose, but overly zealous dose reduction should be avoided for assessment of low-contrast lesions. Patients' cross-sectional body size should be taken into account to deliver lower radiation dose to smaller patients and children. Wise use of CT scanning with gentle application of radiation dose can help maximize the diagnostic value of CT, as well as address concerns about potential risks of radiation. In this article, key concepts in CT radiation dose are reviewed, including CT dose descriptors; radiation doses from CT procedures; and factors and technologies that affect radiation dose and image quality, including their use in creating dose-saving protocols. Also discussed are the contributions of radiation awareness campaigns such as the Image Gently and Image Wisely campaigns and the American College of Radiology Dose Index Registry initiatives.
abstract_id: PUBMED:37960636
Radiation Protection of a 3D Computer Tomography Scanning Workplace for Logs - A Case Study. Despite its undeniable advantages, the operation of a CT scanner also carries risks to human health. The CT scanner is a source of ionizing radiation, which also affects people in its surroundings. The aim of this paper is to quantify the radiation exposure of workers at a 3D CT wood scanning workplace and to determine a monitoring program based on measurements of ionizing radiation levels during the operation of a CT log scanner. The workplace is located in the Biotechnology Park of the National Forestry Centre. The ionizing radiation source, a MICROTEC 3D CT machine with an X-ray lamp as its X-ray source, is located in a protective cabin. The CT scanner is part of the 3D CT scanning line, and its function is continuous quality scanning or detection of internal defects of the examined wood. The measurement of leakage radiation during scanning is performed with a metrologically verified meter. The measured quantity is the ambient dose equivalent rate Ḣ*(10). The results of the measurements at the selected measurement sites have shown that, after installation of additional safety barriers, the CT scanner for logs complies with the strictest criteria in terms of radiation protection. Workers present at the workplace during the operation of the CT scanner are not exposed to radiation higher than the background radiation level.
abstract_id: PUBMED:28598130
Radiation Dose of Body Surface at Sensitive Organs and Its Protective Precaution in Head CT Scanning: Initial Experience. Objectives: To investigate the radiation dose at the body surface over sensitive organs using different protective precautions in head CT scanning.
Methods: A standard phantom received routine head sequence scanning with dual-source computed tomography. The phantoms were scanned with three different protective precautions: group A (without protective clothing), group B (wearing a lead collar and cover lead clothes), and group C (wearing a lead collar and clothes without cuffs). Thermoluminescence dosimeters (TLDs) were placed on the body surface corresponding to sensitive organs to measure the radiation dose.
Results: The volume CT dose index (CTDIvol) and dose-length product (DLP) were the same in groups A, B, and C: 44.06 mGy and 634 mGy·cm, respectively. There was no statistically significant difference between groups A, B, and C in the radiation dose in the non-protected region (P=0.712). The radiation dose in the neck region under the cover of the lead collar in groups A, B, and C was (2.57 ± 0.65) mGy, (2.30 ± 1.10) mGy, and (2.48 ± 0.90) mGy, respectively, without a statistically significant difference between groups (P=0.816). However, the radiation dose in the abdominal region of group A was (0.66 ± 0.37) mGy, which was significantly higher than that of group B [(0.18 ± 0.18) mGy] and group C [(0.18 ± 0.16) mGy]. The radiation dose in the dorsal region of group A was (0.55 ± 0.43) mGy, which was significantly higher than that of group B [(0.28 ± 0.22) mGy], while that of group C [(0.14 ± 0.12) mGy] was significantly lower than group B.
Conclusions: Compared with traditional lead coverage, a lead collar and lead clothing can decrease the radiation dose to the body surface and sensitive organs in head scanning with dual-source CT.
Answer: Based on the findings from the study reported in abstract PUBMED:22855323, targeted delayed scanning at CT urography does not appear to be a worthwhile use of radiation. The study aimed to determine if ureteral segments not filled with contrast material at initial CT urography could contain tumors detectable only by filling these segments with contrast material on delayed scans. The retrospective study included patients who underwent heminephroureterectomy or ureteroscopy, as well as those who had at least two CT urography procedures with a minimum 5-year follow-up.
The results showed that in patients undergoing heminephroureterectomy, tumors present in partially nonopacified ureters were identifiable by secondary signs present with earlier sequences, without the need for delayed scanning. Similarly, in patients who underwent ureteroscopy, tumors in nonopacified segments were also detectable by secondary imaging signs without the need for additional delayed scanning. Furthermore, in patients who underwent 5-year follow-up CT urography, no new tumors were found in the unfilled ureter at index CT urography.
The study concluded that targeted delayed scanning at CT urography yielded no additional ureteral tumors and resulted in additional radiation exposure, estimated at 4.3 mSv per patient. Therefore, the practice of performing targeted delayed scans in CT urography for the purpose of detecting tumors in nonopacified ureteral segments does not seem to justify the additional radiation dose to the patient. |
Instruction: Can anemia in the first trimester predict obstetrical complications later in pregnancy?
Abstracts:
abstract_id: PUBMED:22708721
Can anemia in the first trimester predict obstetrical complications later in pregnancy? Objective: The present study examines whether there is an association between anemia during the first trimester and the risk of developing preterm delivery (PTD), intrauterine growth restriction, and other obstetrical complications.
Methods: The study population included all registered births between 2000 and 2010. Anemia was defined as hemoglobin <10 g/dl. A comparison of obstetrical characteristics and perinatal outcomes was performed between women with and without anemia. Multiple logistic regression models were used to control for confounders.
Results: The study population included 33,888 deliveries, of which 5.1% (1718) involved anemia during the first trimester. Women with anemia were significantly older, delivered earlier, and were more likely to be grand multiparous. There were significantly higher rates of PTD and low birth weight (LBW; <2500 g) among patients with anemia (12.3% vs. 9.3%; p < 0.001 and 11.7% vs. 9.0%; p < 0.001, respectively). In contrast, no significant differences between the groups were noted regarding the rate of intrauterine growth restriction. Using a multivariable analysis, the significant association between anemia and PTD persisted (OR = 1.35; 95% CI 1.2-1.6, p < 0.01).
Conclusions: Anemia during the first trimester is significantly and independently associated with an increased risk for subsequent PTD.
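As a point of reference for the multivariable result above, the adjusted odds ratio and its confidence interval come from the fitted logistic-regression coefficient via the standard relationships (these are textbook formulas, not stated in the abstract): \( \mathrm{OR} = e^{\beta} \) and \( 95\%\ \mathrm{CI} = e^{\beta \pm 1.96\,\mathrm{SE}(\beta)} \), so the reported OR of 1.35 corresponds to a coefficient of roughly \( \beta = \ln 1.35 \approx 0.30 \) on the log-odds scale.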
abstract_id: PUBMED:21642766
Effects of maternal subclinical hypothyroidism on obstetrical outcomes during early pregnancy. Background: Maternal hypothyroidism [overt hypothyroidism and subclinical hypothyroidism (SCH)] during early pregnancy is suspected to be associated with adverse obstetrical outcomes.
Aim: The aim of the present study was to investigate whether maternal SCH during the early stage of pregnancy increases obstetrical complications and whether treatment results in an improvement in these outcomes.
Subjects And Methods: A total of 756 women in the 1st trimester (≤12 weeks) of pregnancy were enrolled through 10 hospitals in Shenyang from 2007 to 2009. All participants underwent thyroid function testing in early pregnancy and their obstetrical outcomes were studied following delivery.
Results: The incidence of spontaneous abortions in the SCH group was higher than that in the normal TSH group (15.48% vs 8.86%, p=0.03). No significant association was observed between SCH and other obstetrical complications, including gestational hypertension, premature delivery, anemia, post-partum hemorrhage, low neonatal Apgar scores and low birth weight. Although levo-T4 (L-T4) treatment decreased the incidence of spontaneous abortions in women with SCH, the decrease was not statistically significant compared with women in the SCH group who did not receive treatment. None of the 28 women who received L-T4 treatment had premature delivery, low birth weight, hemorrhage, or a low Apgar score.
Conclusions: The incidence of spontaneous abortion in pregnant women with SCH increases in early pregnancy.
abstract_id: PUBMED:20864377
Leiomyoma during pregnancy: which complications? Objective: We have observed the association between uterine leiomyomas and complications during pregnancy, delivery and post-partum among our patients over the last 10 years.
Patients And Methods: We performed a retrospective case-control study comparing pregnancy and delivery outcomes in women with and without leiomyomas. In order to strengthen our observations, we conducted both univariate and multivariate analyses, and carefully adhered to three matching criteria between the two groups: age, parity and date of delivery.
Results: Over a ten-year period, 117 (0.38%) women with at least one leiomyoma gave birth among the 30,805 births registered in our unit. By multivariate analysis, the presence of leiomyomas was significantly associated with maternal age over 35 (adjusted odds ratio [AOR] 2.48, 95% confidence interval (CI) [1.31-4.67]), smoking (AOR = 4.3, [1.82-10.13]), cystitis (AOR = 6.55, [2.12-20.16]), hydramnios (AOR = 5.12, [1.57-16.65]), threatened preterm labor (AOR = 3.99, [1.66-9.56]), first trimester bleeding (AOR = 3.92, [1.62-13.26]), anaemia during pregnancy (AOR = 2.97, [1.30-6.78]), labor dystocia (AOR = 11.79, [2.80-49.56]), retained placenta (AOR = 4.25, [1.49-12.11]) and neonatal pediatric intensive care (AOR = 4.44, [1.19-16.60]). Regarding cesarean delivery, the multivariate analysis found that women with several leiomyomas underwent 8.48 times more cesarean sections than women with a single leiomyoma (p = 0.001).
Discussion And Conclusion: Our study shows that specific features must be kept in mind regarding obstetric outcomes for women with leiomyomas. These results emphasise the need for good perinatal care and, with the development of non-invasive procedures, raise the question of treating these leiomyomas before pregnancy.
abstract_id: PUBMED:15625136
Acute pyelonephritis in pregnancy. Objective: To examine the incidence of pyelonephritis and the incidence of risk factors, microbial pathogens, and obstetric complications in women with acute antepartum pyelonephritis.
Methods: For 2 years, information on pregnant women with acute pyelonephritis was collected in a longitudinal study. All women were admitted to the hospital and treated with intravenous antimicrobial agents. We compared the pregnancy outcomes of these women with those of the general obstetric population received at our hospital during the same time period.
Results: Four hundred forty cases of acute antepartum pyelonephritis were identified during the study period (incidence 1.4%). Although there were no significant differences in ethnicity, pyelonephritis was associated with nulliparity (44% versus 37%, P = .003) and young age (P = .003). Thirteen percent of the women had a known risk factor for pyelonephritis. Acute pyelonephritis occurred more often in the second trimester (53%), and the predominant uropathogens were Escherichia coli (70%) and gram-positive organisms, including group B beta Streptococcus (10%). Complications included anemia (23%), septicemia (17%), transient renal dysfunction (2%), and pulmonary insufficiency (7%).
Conclusion: The incidence of pyelonephritis has remained low in the era of routine prenatal screening for asymptomatic bacteriuria. First-trimester pyelonephritis accounts for over 1 in 5 antepartum cases. Gram-positive uropathogens are found more commonly as pregnancy progresses. Maternal complications continue, but poor obstetrical outcomes are rare.
abstract_id: PUBMED:21078248
Risk factors in early pregnancy complications. Objective: To determine the underlying risk factors in early pregnancy complications and outcome.
Study Design: Case series.
Place And Duration Of Study: This study was conducted at the Department of Obstetrics and Gynaecology Unit-IV, Liaquat University of Medical and Health Sciences, Jamshoro, from July 2007 to June 2008.
Methodology: All women in the first trimester of pregnancy with different complications were included in this study, while women with an uneventful first trimester were excluded. The enrolled women were registered on a pre-designed proforma. Studied variables included demographic details, gestational period, type of complications, risk factors, treatment and outcome. The data were expressed as means and percentages with a 95% confidence interval. Analysis was done in SPSS version 14.
Results: Out of 204 total admissions, 115 (56.37%) patients had different early pregnancy complications. Their mean age was 29.4 ± 6.8 years. The commonest complication found was abortion, in 88 (76.52%) cases. The underlying risk factors found in abortion were antiphospholipid syndrome in 5 (5.68%) cases, diabetes mellitus in 8 (9.09%) cases, hypertension in 16 (18.18%) cases, and polycystic ovarian syndrome and infection in 11 (12.5%) cases each. Most of the cases, 69 (60%), were treated by minor surgical procedures, and 22 (19.13%) cases responded to conservative medical therapy. Outcomes were anaemia in 92 (79.3%) cases, psychological upset in 72 (62.1%), infection in 55 (44%) cases and coagulopathy in 9 (7.8%) cases.
Conclusion: Abortion was the most frequent early pregnancy complication, and the most frequent underlying risk factor was hypertension. Outcomes included anaemia, psychological upset and infection.
abstract_id: PUBMED:37004384
Neonatal and pregnancy complications following maternal depression or antidepressant exposure: A population-based, retrospective birth cohort study. Objectives: Depression is common during pregnancy, and antidepressants are often prescribed for treatment. However, depression and antidepressant use both increase the risk of neonatal and pregnancy complications. This study aimed to separately evaluate the effects of antidepressant use and of the underlying depression on pregnancy and neonatal complications, using a robust statistical method to control for confounding by indication.
Methods: All study data were obtained from Taiwan's National Health Insurance Research Database. Pregnant women were divided into three groups: those with no depression and no antidepressant exposure (n = 1,619,198), depression and no antidepressant exposure (n = 2006), and depression and antidepressant exposure (n = 7857). Antidepressant exposure was further divided into that before pregnancy and during each trimester.
Results: Mothers with depression but no antidepressant exposure exhibited increased risks of intrauterine growth restriction and preterm delivery, compared with mothers without depression. In mothers with depression, antidepressant exposure before pregnancy or during the first trimester conferred increased risks of gestational diabetes mellitus, malpresentation, preterm delivery and cardiovascular anomalies, compared with no antidepressant exposure. Moreover, antidepressant exposure during the second or third trimester conferred increased risks of anemia, a low Apgar score, preterm delivery and genitourinary defects. However, antidepressants administered before pregnancy and during all trimesters did not increase the risk of stillbirth.
Conclusion: Depression and antidepressant treatment for depression during pregnancy may individually increase the risks of some neonatal and pregnancy complications. Physicians should thoroughly consider the risks and benefits for both the mother and fetus when treating depression during pregnancy by using antidepressants.
abstract_id: PUBMED:12520995
Nutritional anemia of pregnancy in the Zulia state: 30 years later. Thirty years ago, we reported a high frequency of nutritional anemia among pregnant women of low socioeconomic class from Zulia State, Venezuela. Today, preliminary results show an increase in anemia from 20% in 1972 to 44% at the end of the first trimester of gestation, and from 53% to 63% at the end of the third trimester. These results could reflect the deterioration of economic conditions in the country and the failure of prenatal care services to prevent or treat anemia in pregnancy.
abstract_id: PUBMED:24118742
Perinatal complications of monochorionic diamniotic twin gestations with discordant crown-rump length determined at mid-first trimester. Aim: The aim of this study was to investigate the value of discordance of crown-rump length (DCRL) at mid-first trimester to predict adverse outcomes in monochorionic diamniotic twin gestations (MD).
Material And Methods: This was a retrospective cohort study of the perinatal outcome in MD pregnancies managed from the first trimester onward. DCRL was evaluated between 8 and 10 weeks of gestation. The association between DCRL and perinatal complications, including fetal death, twin-twin transfusion syndrome, severe discordant birthweight (DB), and twin anemia-polycythemia sequence, was assessed.
Results: Among 126 cases, a single fetal demise occurred in two (2%) and demise of both fetuses occurred in eight (6%). Five pregnancies (4%) were complicated with twin-twin transfusion syndrome; one case (1%) was twin anemia-polycythemia sequence and 13 (10%) were DB. Neonatal death occurred in one pair. At 28 days of age, in 115 cases (91%) both twins were alive. In 117 cases (93%), at least one twin survived until 28 days of age. DCRL >12.0% was not related to any perinatal complications but DB (P < 0.01; relative risk: 1.40; 95% confidence interval: 1.06-1.84).
Conclusions: DCRL in MD during the mid-first trimester might be useful for predicting DB.
abstract_id: PUBMED:22458961
Restless legs syndrome during and after pregnancy and its relation to snoring. Objective: To study development of restless legs syndrome (RLS) during and after pregnancy, and whether RLS is related to snoring or other pregnancy-related symptoms.
Design: Prospective study.
Setting: Antenatal care clinics in the catchment area of Linköping university hospital, Sweden.
Population: Five hundred consecutively recruited pregnant women.
Methods: Sleep disturbances, including symptoms of RLS and snoring, were assessed with questionnaires in each trimester. A complementary questionnaire was sent three years after delivery to women experiencing symptoms of RLS during pregnancy.
Main Outcome Measures: Symptoms of RLS in relation to snoring in each trimester.
Results: Symptoms of RLS were reported by 17.0% of the women in the first trimester, by 27.1% in the second trimester and by 29.6% in the third trimester. Snoring in the first trimester was correlated to increased prevalence of RLS in all three trimesters (p= 0.003, 0.017 and 0.044 in the first, second and third trimester, respectively). No correlation was found between RLS and anemia, parity or body mass index. Among the women who experienced RLS, 31% still had symptoms three years after delivery. Fifty-eight per cent of those whose symptoms had disappeared stated that this happened within one month after delivery.
Conclusions: Symptoms of RLS progressed most between the first and second trimester. Women who snored in the first or second trimester of pregnancy had a higher prevalence of RLS in the third trimester, which indicates that snoring in early pregnancy might predict RLS later. Symptoms of RLS disappear quite soon after delivery, but about one-third of women with RLS during pregnancy may still have symptoms three years after childbirth.
abstract_id: PUBMED:38074267
Anemia burden in pregnancy and birth outcomes among women receiving antenatal care services from a secondary level hospital in South India: A record review. Introduction: Anaemia in pregnant women is a major public health problem and is associated with adverse outcomes both in pregnant mothers and new-borns. According to NFHS-5, 45.7% of women in urban India were affected by anaemia during their pregnancy. The objectives of this study were to estimate the proportion of pregnant women who were anaemic and its effect on maternal and birth outcomes, and additionally, to assess the various socio-economic factors contributing to anaemia during pregnancy.
Methodology: Data was collected by reviewing records between December 2018 and December 2021 of 302 pregnant women who had received antenatal care at a secondary level health facility in Krishnagiri, Tamil Nadu. The data included details of socio-economic parameters, parity, haemoglobin levels, mode of delivery, and pregnancy-related complications during the antenatal period and during delivery, including the need for blood transfusions in mothers. Birth-related outcomes like miscarriages, intra-uterine death, low birth weight (LBW), need for NICU admission and congenital defects were also analysed. Variables in the study are described as frequencies and proportions. Appropriate tests were used to check for association, and a P-value of less than 0.05 was considered statistically significant.
Results: The mean age of the women was 24 ± 3.72 years, and the mean haemoglobin level during the first trimester was 10.5 ± 1.12 g/dl. The proportion of women with anaemia in the first trimester was 174 (57.6%). Among the anaemic women, 15 (8.6%) required treatment in the form of injectable iron and 29 (16.7%) required blood transfusion. Gestational diabetes mellitus (GDM) (31.6%, P < 0.05) and preeclampsia (38.9%, P < 0.02) were less frequent in the anaemic group. Hyperemesis (66.7%, P = 0.58), risk of miscarriage (57.8%, P = 0.94) and postpartum haemorrhage (PPH) (80.0%, P = 0.15) were higher in the anaemic group. Adverse neonatal outcomes such as low birth weight babies (52.0%, P = 0.54), meconium aspiration (72.7%, P = 0.25) and need for NICU admission (75.0%, P = 0.25) were higher in babies born to anaemic women.
Conclusions: The proportion of anaemic women was 57.6%. There was no significant difference in maternal demographic characteristics or obstetric factors between anaemic and non-anaemic pregnant women. GDM and pre-eclampsia were less prevalent among anaemic women. Hyperemesis, risk of abortion, postpartum haemorrhage and low birth weight (LBW) infants were more frequent in anaemic pregnancies. Education regarding iron supplementation and early detection of anaemia during pregnancy should be emphasised to avoid complications.
Answer: Yes, anemia in the first trimester can predict obstetrical complications later in pregnancy. A study that included 33,888 deliveries found that anemia during the first trimester is significantly and independently associated with an increased risk for subsequent preterm delivery (PTD). Women with anemia were more likely to deliver earlier and had higher rates of PTD and low birth weight (LBW) compared to those without anemia. However, no significant differences were noted regarding the rate of intrauterine growth restriction between the groups (PUBMED:22708721).
Additionally, another study examined the anemia burden in pregnancy in relation to maternal and birth outcomes. The proportion of women with anemia in the first trimester was 57.6%, and these women had a higher incidence of complications such as hyperemesis, risk of miscarriage, postpartum hemorrhage (PPH), and adverse neonatal outcomes like low birth weight (LBW) infants, meconium aspiration, and the need for NICU admission, although these differences did not reach statistical significance (PUBMED:38074267).
These findings suggest that anemia in the first trimester is a predictor of certain obstetrical complications later in pregnancy, emphasizing the importance of early detection and management of anemia to prevent such complications. |
Instruction: Does Salter innominate osteotomy predispose the patient to acetabular retroversion in adulthood?
Abstracts:
abstract_id: PUBMED:25391418
Does Salter innominate osteotomy predispose the patient to acetabular retroversion in adulthood? Background: Salter innominate osteotomy has been identified as an effective additional surgery for the dysplastic hip. However, because in this procedure, the distal segment of the pelvis is displaced laterally and anteriorly, it may predispose the patient to acetabular retroversion. The degree to which this may be the case, however, remains incompletely characterized.
Questions/purposes: We asked, in a group of pediatric patients with acetabular dysplasia who underwent Salter osteotomy, whether the operated hip developed (1) acetabular retroversion compared with contralateral unaffected hips; (2) radiographic evidence of osteoarthritis; or (3) worse functional scores. (4) In addition, we asked whether femoral head deformity resulting from aseptic necrosis was a risk factor for acetabular retroversion.
Methods: Between 1971 and 2001, we performed 213 Salter innominate osteotomies for unilateral pediatric dysplasia, of which 99 hips (47%) in 99 patients were available for review at a mean of 16 years after surgery (range, 12-25 years). Average patient age at surgery was 4 years (range, 2-9 years) and the average age at the most recent followup was 21 years (range, 18-29 years). Acetabular retroversion was diagnosed based on the presence of a positive crossover sign and prominence of the ischial spine sign at the final visit. The center-edge angle, acetabular angle of Sharp, and acetabular index were measured at preoperative and final visits. Contralateral unaffected hips were used as controls, and statistical comparison was made in each patient. Clinical findings, including Harris hip score (HHS) and the anterior impingement sign, were recorded at the final visit.
Results: Patients were no more likely to have a positive crossover sign in the surgically treated hips (20 of 99 hips [20%]) than in the contralateral control hips (17 of 99 hips [17%]; p = 0.584). In addition, the percentage of positive prominence of the ischial spine sign was not different between treated hips (22 of 99 hips [22%]) and contralateral hips (18 of 99 hips [18%]; p = 0.256). Hips that had a positive crossover or prominence of the ischial spine sign in the operated hips were likely also to have a positive crossover sign or prominence of the ischial spine sign in the unaffected hips (16 of 20 hips [80%] crossover sign, 17 of 22 hips [77%] prominence of the ischial spine sign). At the final visit, five hips (5%) showed osteoarthritic change; one of the five hips (20%) showed positive crossover and prominence of the ischial spine signs, and the remaining four hips showed negative crossover and prominence of the ischial spine signs. There was no significant difference in HHS between the crossover-positive and crossover-negative patient groups nor in the prominence of the ischial spine-positive and prominence of the ischial spine-negative patient groups (crossover sign, p = 0.68; prominence of the ischial spine sign, p = 0.54). Hips with femoral head deformity (25 of 99 hips [25%]) were more likely to have acetabular retroversion compared with hips without femoral head deformity (crossover sign, p = 0.029; prominence of the ischial spine sign, p = 0.013).
Conclusions: Our results suggest that Salter innominate osteotomy does not consistently cause acetabular retroversion in adulthood. We propose that retroversion of the acetabulum is a result of intrinsic development of the pelvis in each patient. A longer-term followup study is needed to determine whether a retroverted acetabulum after Salter innominate osteotomy is a true risk factor for early osteoarthritis. Femoral head deformity is a risk factor for subsequent acetabular retroversion.
Level Of Evidence: Level III, therapeutic study.
abstract_id: PUBMED:36660480
Salter Innominate Osteotomy for The Management of Developmental Dysplasia of The Hip in Children: Radiographic Analysis. Background: A variety of osteotomies have been described to address acetabular dysplasia in children with developmental dysplasia of the hip (DDH). This study analyzes the radiographic outcomes of cases diagnosed with DDH and treated with a Salter innominate osteotomy.
Methods: A retrospective review of all patients who underwent Salter innominate osteotomy between January 2017 and January 2019 at our institution was performed. Forty-eight procedures (44 patients) were evaluated for acetabular index (AI) and center edge angle (CEA) based on the preoperative, immediate postoperative, and most recent pelvic x-rays.
Results: Forty-eight procedures (44 patients) were radiologically evaluated. The AI improved from 34° preoperatively to 19.9° on the final follow-up radiograph, and the CEA improved from -2.4° preoperatively to 24.6° on the final follow-up radiograph.
Conclusions: In our hands, use of Salter innominate osteotomy for acetabular dysplasia in patients with DDH was associated with good radiological outcomes. The Salter innominate osteotomy was able to improve lateral acetabular coverage of the hip to almost near-normal radiographic values.
Type Of Study/level Of Evidence: Therapeutic IV.
abstract_id: PUBMED:33136791
The Salter innominate osteotomy does not lead to acetabular retroversion. In children with developmental dysplasia of the hip (DDH), Salter's innominate osteotomy aims to surgically manipulate the acetabulum to increase anterior coverage and aid joint support. Consequently, this procedure may retrovert the acetabulum, predisposing patients to pain, osteoarthritis, impingement, or further surgical intervention. In this study, we aim to address whether the innominate osteotomy leads to acetabular retroversion postoperatively or at follow-up. Ninety-two patients who underwent a unilateral innominate osteotomy for DDH, performed by expert surgeons in a leading paediatric hospital, were identified from our institution's DDH database between 2009 and 2016. A novel technique was utilized to measure acetabular version on postoperative computed tomography (CT) scans, where acetabular version was compared between the pathological and contralateral control hips. Measurement of acetabular version in postoperative and control hips demonstrated no incidence of acetabular retroversion. A significant difference was observed when comparing the acetabular version of control versus post-operative hips (P < 0.001), with hips after innominate osteotomy showing a larger degree of acetabular anteversion compared with the control hip. Furthermore, on follow-up radiographic imaging, there was no evidence of acetabular retroversion when using previously defined markers. This study confirms that the Salter innominate osteotomy does not lead to acetabular retroversion both immediately post-operatively and throughout follow-up. In fact, it demonstrates that the acetabula are more anteverted than the contralateral control hip, which has not been previously documented. Additionally, this study demonstrates a novel method of measuring acetabular retroversion using CT technology that adjusts for pelvic tilt, which is repeatable among individuals.
abstract_id: PUBMED:19455495
Assessment of acetabular retroversion following long term review of Salter's osteotomy. Salter's innominate osteotomy may predispose to anterior over-coverage of the acetabulum. Over-coverage or retroversion has been demonstrated to be a cause of hip pain, impingement and subsequent osteoarthritis. We reviewed the long-term follow-up of seventeen skeletally mature hips in sixteen patients who had previously undergone a Salter's osteotomy in childhood. The Salter pelvic osteotomy was performed at a mean age of 5 years, and follow-up took place at a mean age of 20 years. Patients were assessed by clinical examination for signs of impingement, Harris Hip Score and pelvic radiograph. Acetabular version was evaluated by the relationship between the anterior and posterior walls of the acetabulum using templates applied to the pelvic radiograph as described by Hefti. The median acetabular cover was 17 degrees of anteversion, with 2 patients (12%) demonstrating retroversion, neither of whom had signs of impingement on examination. The mean Harris Hip Score was 85, indicating a good outcome at long-term follow-up. We believe acetabular remodelling may occur with age after Salter's innominate osteotomy and have found good results in patients after skeletal maturation. Fears of long-term anterior over-coverage and retroversion with this operation may be unfounded.
abstract_id: PUBMED:36272927
A cross-sectional study evaluating patients' preferences for Salter innominate osteotomy. Background: Residual acetabular dysplasia in children after reduction of hip dislocation is often treated using Salter innominate osteotomy to prevent future osteoarthritis. Preventive surgery for asymptomatic patients, which could result in overtreatment, should be carefully applied with consideration of patients' opinions. In this study, we aimed to describe opinions on Salter innominate osteotomy as preventive surgery for children among adult patients who had undergone periacetabular osteotomy for hip pain due to hip dysplasia.
Methods: A mail-in questionnaire survey was conducted with 77 patients who underwent periacetabular osteotomy. Participants responded whether they would recommend Salter innominate osteotomy as preventive surgery for children and the reason for their opinion. We also performed a patient-based evaluation using the Japanese Orthopaedic Association Hip-Disease Evaluation Questionnaire and assessed clinical outcome measures with the Japanese Orthopedic Association score. Their recommendations and reasons were evaluated, and associations between their opinions and demographic and clinical characteristics were analyzed.
Results: Forty-three patients (56%) responded to the questionnaire. Of these, 10 (23%) patients recommended undergoing Salter innominate osteotomy, 28 (65%) patients did not, and 5 (12%) patients responded they were undecided. No significant association was observed between their opinions and demographic/clinical characteristics evaluated in the survey. The most frequent reason for why they do not recommend Salter innominate osteotomy was related to uncertainty about future hip pain.
Conclusions: In total, 65% of the study participants did not recommend Salter innominate osteotomy for children with risk of dysplasia in the future. Participants' preferences regarding preventive surgery were not influenced by demographic and clinical characteristics.
abstract_id: PUBMED:25610203
A Biomechanical Comparison between Salter Innominate Osteotomy and Pemberton Pericapsular Osteotomy. Objective: This study aims to compare the pelvic biomechanics of patients who underwent Salter innominate osteotomy (SIO) for one hip and Pemberton pericapsular osteotomy (PPO) for the other hip.
Materials And Methods: Fifty-seven of 126 patients who received a one-stage procedure involving SIO for one hip and PPO for the other hip were included in this series. Preoperative x-rays, archived reports and patient recall were obtained and retrospectively analyzed for these 57 patients. Pelvic biomechanics of the two osteotomy techniques were compared on x-rays and computerized tomography imaging.
Results: Based on x-rays, three hips with SIO and 1 hip with PPO had changes that could reflect unstable pelvic biomechanics. SIO caused an average lower limb discrepancy of 0.47 cm in all patients. Positive results were found in 5 patients at their most recent clinical examination.
Conclusion: PPO affects the biomechanics of the pelvis much less than SIO. PPO demonstrated ideal biomechanical results compared with SIO, with fewer changes to the pelvic ring and the hip joints.
abstract_id: PUBMED:30194642
Modified Salter innominate osteotomy in adults. Objective: Three-dimensional pivoting of the dysplastic acetabulum outwards and forwards.
Indications: Symptomatic residual hip dysplasias and hip subluxations in skeletally mature patients up to the age of 50 years. Sharp's acetabular angle up to 60°; above 60° only in exceptional cases.
Contraindications: Acetabular retroversion. Radiographic joint space at the lateral acetabular edge that is less than half the normal thickness for the patient's age. Relative contraindication: Elongated leg on the affected side.
Surgical Technique: Ilioinguinal approach in a supine position. Division of the innominate bone. Pivoting the distal osteotomy fragment outwards and forwards with the aid of the Salter maneuver. Fixing the fragments with a guide wire. Final correction of the osteotomy fragments. Force fitting of a dovetail grooved, wedge-shaped bone graft. Insertion of a cannulated compression screw and two further threaded rods. Wound closure.
Postoperative Management: Unloaded 3‑point walking for 4 weeks. Increasing weight bearing from week 4. Full weight bearing from week 10-12.
Results: A total of 45 consecutive patients (7 men, 38 women, 49 hips) underwent surgery. Average age at surgery was 27.6 years. The Sharp acetabular angle improved from 45.7° ± 4.2° preoperatively by 13.8° to 32.0° ± 6.4°; the Wiberg (LCE) angle increased from 15.4° ± 9.3° by 19.5° to 34.9° ± 10° postoperatively. The anterior center edge (ACE) angle increased from 28.9° ± 10.4° by 8.6° ± 2.3° to 37.5° ± 8.1°. Complications requiring surgical intervention occurred in 7 patients.
abstract_id: PUBMED:34583014
Periacetabular osteotomy for acetabular retroversion: A systematic review and meta-analysis. Introduction: The evidence for periacetabular osteotomy (PAO) when used in the management of acetabular retroversion remains limited. The review aims to answer the following questions: (1) What are the indications for an anteverting PAO for acetabular retroversion? (2) When are other concomitant procedures required when performing anteverting PAO for acetabular retroversion? (3) To what extent is an anteverting PAO able to correct acetabular retroversion? (4) What are the clinical outcomes for an anteverting PAO when used in acetabular retroversion? (5) What is the estimated survival for anteverting PAO when used in the treatment of acetabular retroversion, before other procedures need to be performed? (6) What are the complications and the complication rates when an anteverting PAO is performed? (7) How do the outcomes of an anteverting PAO compare to other surgical procedures used in the management of acetabular retroversion?
Material And Methods: The systematic review was conducted using the PRISMA guidelines. The search was conducted using PubMed Medical Literature Analysis and Retrieval System Online (MEDLINE) and Cumulative Index to Nursing and Allied Health Literature (CINAHL) from inception through 1 May 2020. The keywords used were "periacetabular osteotomy". All studies that reported the outcomes of periacetabular osteotomy for acetabular retroversion were included. Each study's data was then retrieved individually. The study design, surgical technique, indications, outcomes and complications of each study were analysed.
Results: Seven studies with 225 hips were included. The pooled odds ratios (ORs) for a positive crossover sign and posterior wall sign preoperatively as compared with postoperatively were 456.31 (95% CI: 99.57 to 2091.28) and 53.45 (95% CI: 23.05 to 123.93), respectively. The pooled weighted mean differences (WMDs) for studies with their mean preoperative LCEA and AI in the dysplastic range were 12.61 (95% CI: 6.54 to 18.68) and -15.0 (95% CI: -19.40 to -11.80), respectively, while the pooled WMDs for studies with their mean preoperative LCEA and AI in the normal range were 3.43 (95% CI: 1.08 to 5.77) and -3.56 (95% CI: -5.29 to -1.83), respectively. Other indicators of acetabular retroversion correction, hip dysplasia correction, functional outcomes and range of motion were also significantly improved, and the improvements were sustained up to 11 years postoperatively. Only 7.1% of the hips required subsequent surgical procedures for impingement symptoms or progression of osteoarthritis, and the mean estimate of survival time across the studies was 123.90 months (95% CI: 119.94 to 127.86). The rate of low-grade complications was 31.6%, while the rate of high-grade complications was 12.0%.
Discussion: Anteverting PAO is indicated for symptomatic acetabular retroversion and, when performed, leads to good deformity correction for both acetabular retroversion and hip dysplasia, improvement in clinical outcomes sustained up to 11 years postoperatively, and a mean estimated survival time of more than 10 years.
Level Of Evidence: IV; Systematic review and meta-analysis.
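For orientation, pooled estimates such as those quoted above are typically obtained by inverse-variance weighting; the following is a generic sketch of that calculation (standard meta-analytic formulas, not taken from the paper, which does not describe its exact model):
\[ \hat{\theta}_{\text{pooled}} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\mathrm{SE}(\hat{\theta}_i)^2}, \qquad \mathrm{SE}(\hat{\theta}_{\text{pooled}}) = \frac{1}{\sqrt{\sum_i w_i}}, \]
where each \( \hat{\theta}_i \) is a study-level effect (a log odds ratio or a mean difference), the 95% CI is \( \hat{\theta}_{\text{pooled}} \pm 1.96\,\mathrm{SE}(\hat{\theta}_{\text{pooled}}) \), and log-scale effects are back-transformed with \( e^{x} \).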
abstract_id: PUBMED:31877100
Spinopelvic Characteristics in Acetabular Retroversion: Does Pelvic Tilt Change After Periacetabular Osteotomy? Background: Acetabular retroversion may lead to impingement and pain, which can be treated with an anteverting periacetabular osteotomy (aPAO). Pelvic tilt influences acetabular orientation; as pelvic tilt angle reduces, acetabular version reduces. Thus, acetabular retroversion may be a deformity secondary to abnormal pelvic tilt (functional retroversion) or an anatomic deformity of the acetabulum and the innominate bone (pelvic ring).
Purpose: To (1) measure the spinopelvic morphology in patients with acetabular retroversion and (2) assess whether pelvic tilt changes after successful anteverting PAO (aPAO), thus testing whether preoperative pelvic tilt was compensatory.
Study Design: Case series; Level of evidence, 4.
Methods: A consecutive cohort of 48 hips (42 patients; 30 ± 7 years [mean ± SD]) with acetabular retroversion that underwent successful aPAO was studied. Spinopelvic morphology (pelvic tilt, pelvic incidence, anterior pelvic plane, and sacral slope) was measured from computed tomography scans including the sacral end plate in the 21 patients with adequate images. In addition, the change in pelvic tilt with aPAO was measured via the sacrofemoral-pubic angle with supine pelvic radiographs at an interval of 2.5 ± 2 years.
Results: The spinopelvic characteristics included a pelvic tilt of 4° ± 4°, a sacral slope of 39° ± 9°, an anterior pelvic plane angle of 11° ± 5°, and a pelvic incidence of 42° ± 10°. Preoperative pelvic tilt was 4° ± 4° and did not change postoperatively (4° ± 4°) (P = .676).
Conclusion: Pelvic tilt in acetabular retroversion was within normal parameters, illustrating "normal" sagittal pelvic balance and values similar to those reported in the literature in healthy subjects. In addition, it did not change after aPAO. Thus, acetabular retroversion was not secondary to a maladaptive pelvic tilt (functional retroversion). Further work is required to assess whether retroversion is a reflection of a pelvic morphological abnormality rather than an isolated acetabular abnormality. Treatment of acetabular retroversion should focus on correcting the deformity rather than attempting to change the functional pelvic position.
abstract_id: PUBMED:17483622
Salter osteotomy: an overview. Pelvic osteotomies are an integral part of treatment in developmental dysplasia of the hip after the age of 18 months. This article focuses on the innominate osteotomy, which was introduced by Richard Salter in 1961. Salter innominate osteotomy is a complete pelvic osteotomy that hinges on the symphysis pubis and results in anterolateral displacement of the acetabulum. The derotated acetabulum is held in place with a bone graft and fixed with Kirschner wires. Salter osteotomy has been performed for over four decades, and excellent short- and long-term results have been reported from different centers all over the world. Its success is closely related to appropriate patient selection and meticulous surgical technique with strict adherence to prerequisites.
Answer: Based on the available abstracts, Salter innominate osteotomy does not consistently cause acetabular retroversion in adulthood. A study with a mean follow-up of 16 years after surgery found no significant difference in the presence of acetabular retroversion between operated hips and contralateral control hips (PUBMED:25391418). Another study confirmed that Salter innominate osteotomy does not lead to acetabular retroversion both immediately post-operatively and throughout follow-up, with postoperative hips demonstrating a larger degree of acetabular anteversion compared to the control hip (PUBMED:33136791). Additionally, a long-term review of Salter's osteotomy found that fears of long-term anterior over-coverage and retroversion with this operation may be unfounded, as acetabular remodeling may occur with age after the osteotomy (PUBMED:19455495). Therefore, the evidence suggests that Salter innominate osteotomy does not predispose patients to acetabular retroversion in adulthood. |
Instruction: NARES: a risk factor for obstructive sleep apnea?
Abstracts:
abstract_id: PUBMED:32274161
Effects of continuous positive airway pressure on exhaled transforming growth factor-β and vascular endothelial growth factor in patients with obstructive sleep apnea. Background: Both transforming growth factor β (TGF-β) and vascular endothelial growth factor (VEGF) are master regulators of airway remodeling; however, their pathological roles in obstructive sleep apnea (OSA) remain unclear. The aim of the present study was to evaluate the expression of TGF-β and VEGF protein in the serum and exhaled breath condensate (EBC) before and after continuous positive airway pressure (CPAP) treatment in OSA patients.
Methods: Forty patients with moderate to severe OSA requiring CPAP and 20 healthy subjects were prospectively recruited. The concentrations of TGF-β and VEGF protein in the serum and EBC were evaluated by enzyme-linked immunosorbent assay. All OSA patients underwent a sleep study that was repeated 3 months after receiving CPAP therapy.
Results: Protein concentrations of TGF-β and VEGF in the serum did not differ between healthy controls and OSA patients before CPAP treatment. There was also no difference in the serum protein concentrations of TGF-β and VEGF of the OSA patients before and after CPAP treatment. However, both the TGF-β and VEGF protein concentrations in the EBC were higher in the OSA patients than those in control subjects, and recovered to normal levels after CPAP.
Conclusions: Successful treatment of OSA by CPAP can restore the TGF-β and VEGF protein concentrations in the EBC.
abstract_id: PUBMED:24680337
Relationship between plasma vascular endothelial growth factor and tumor necrosis factor-alpha and obstructive sleep apnea hypopnea syndrome in children. Objective: To investigate the relation of plasma vascular endothelial growth factor (VEGF) and tumor necrosis factor-α (TNF-α) with obstructive sleep apnea-hypopnea syndrome (OSAHS) in children.
Methods: Eighty children were recruited from October 2008 to March 2009, including 60 children with snoring and 20 healthy children without snoring as controls. Plasma VEGF and TNF-α concentrations were measured by enzyme-linked immunosorbent assay (ELISA). The 60 children with snoring underwent an overnight polysomnography test; their PSG data, including whole-night mean saturation (MSaO2), lowest oxygen saturation (LSaO2), desaturation cumulate time/total sleep time (DCT/TST), oxygen desaturation index 3 (ODI3), apnea-hypopnea index (AHI) and obstructive apnea index (OAI), were collected and analysed. SPSS 13.0 software was used to analyze the data.
Results: The levels of plasma VEGF and TNF-α in children with OSAHS (540.45 pg/ml and 311.94 pg/ml) were higher than those in children with snoring alone (234.45 pg/ml and 97.55 pg/ml) or those in healthy children (259.80 pg/ml and 120.70 pg/ml), with statistically significant differences (HC value: 14.176 and 15.571, P < 0.05, respectively), but with no statistical difference between children with snoring alone and healthy children (P > 0.05). The differences in plasma VEGF or TNF-α levels between children with moderate and severe hypoxemia and children with mild hypoxemia were not statistically significant (P > 0.05). Spearman rank correlation analysis showed no significant correlation between plasma level of VEGF or TNF-α and LSaO2, MSaO2, ODI3, DCT/TST, OAI, AHI or BMI (r values were <0.5, P > 0.05).
Conclusion: Plasma levels of VEGF and TNF-α increase in children with OSAHS.
abstract_id: PUBMED:36876337
Effect of surgical intervention on serum insulin-like growth factor 1 in patients with obstructive sleep apnoea. Objective: To evaluate the effect of surgical intervention on serum insulin-like growth factor 1 levels in patients with obstructive sleep apnoea.
Methods: A prospective study of adult patients with obstructive sleep apnoea for whom continuous positive airway pressure therapy had failed or was refused was conducted in a tertiary care hospital. All patients underwent polysomnography and serum insulin-like growth factor 1 evaluation pre-operatively and at three months post-operatively. The site of surgery was determined using Müller's manoeuvre and ApneaGraph AG 200.
Results: Fifteen patients were included with a mean age of 38 years: 11 males and 4 females. The mean pre-operative Apnoea-Hypopnoea Index using polysomnography was 53.7 events per hour, and the mean post-operative Apnoea-Hypopnoea Index at three months was 15.3 events per hour (p = 0.0001). The mean pre-operative serum insulin-like growth factor 1 was 160.2 μg/l, while the mean post-operative value was 236.98 μg/l (p = 0.005).
Conclusion: In adult patients with obstructive sleep apnoea for whom continuous positive airway pressure therapy fails, site-specific surgical intervention to treat the obstruction leads to an increase in serum insulin-like growth factor 1 levels.
abstract_id: PUBMED:27006717
Serum Levels of Vascular Endothelial Growth Factor and Insulin-like Growth Factor Binding Protein-3 in Obstructive Sleep Apnea Patients: Effect of Continuous Positive Airway Pressure Treatment. Background And Aim: Hypoxia, a major feature of obstructive sleep apnea (OSA), modifies Vascular Endothelial Growth Factor (VEGF) and Insulin-like Growth Factor Binding Protein-3 (IGFBP-3) levels, which contribute to atherogenesis and occurrence of cardiovascular (CV) events. We assessed and compared serum levels of VEGF and IGFBP-3 in newly diagnosed OSA patients and controls, to explore associations with anthropometric and sleep parameters and to study the effect of continuous positive airway pressure (CPAP) treatment on these levels.
Materials And Methods: Serum levels of VEGF and IGFBP-3 were measured in 65 OSA patients and 31 age- and body mass index- matched controls. In OSA patients, measurements were repeated after 6 months of CPAP therapy. All participants were non-smokers, without any comorbidities or systemic medication use.
Results: At baseline, serum VEGF levels in OSA patients were higher compared with controls (p<0.001), while IGFBP-3 levels were lower (1.41±0.56 vs. 1.61±0.38 μg/ml, p=0.039). VEGF levels correlated with apnea-hypopnea index (r=0.336, p=0.001) and oxygen desaturation index (r=0.282, p=0.007). After 6 months on CPAP treatment, VEGF levels decreased in OSA patients (p<0.001), while IGFBP-3 levels increased (p<0.001).
Conclusion: In newly diagnosed OSA patients, serum levels of VEGF are elevated, while IGFBP-3 levels are low. After 6 months of CPAP treatment these levels change. These results may reflect an increased CV risk in untreated OSA patients, which is ameliorated after CPAP therapy.
abstract_id: PUBMED:18208737
Post-tonsillectomy haemorrhage treatment with activated factor VII. Tonsillectomy is a common procedure in otorhinolaryngology and postoperative haemorrhage is a well-known complication. In this case, a 34-year-old male with obstructive sleep apnoea and no coagulation defects was operated on; 30 minutes after the operation he presented with primary postoperative haemorrhage and surgical haemostasis was not possible. After one hour of secondary surgery, tranexamic acid 1 g and phytomenadione 10 mg were given, but the diffuse bleeding continued until recombinant factor VIIa 96 μg/kg was administered.
abstract_id: PUBMED:24293770
A pro-inflammatory role for nuclear factor kappa B in childhood obstructive sleep apnea syndrome. Study Objectives: Childhood obstructive sleep apnea syndrome (OSAS) is associated with an elevation of inflammatory markers such as C-reactive protein (CRP) that correlates with specific morbidities and subsides following intervention. In adults, OSAS is associated with activation of the transcription factor nuclear factor kappa B (NF-kB). We explored the mechanisms underlying NF-kB activation, based on the hypothesis that specific NF-kB signaling is activated in children with OSAS.
Design: Adenoid and tonsillar tissues from children with OSAS and matched controls were immunostained against NF-kB classical (p65 and p50) and alternative (RelB and p52) pathway subunits, and NF-kB-dependent cytokines (interleukin (IL)-1α, IL-1β, tumor necrosis factor-α, and IL-8). Serum CRP levels were measured in all subjects. NF-kB induction was evaluated by a luciferase-NF-kB reporter assay in L428 cells constitutively expressing NF-kB and in Jurkat cells with inducible NF-kB expression. p65 translocation to the nucleus, reflecting NF-kB activation, was measured in cells expressing fluorescent NF-kB-p65-GFP (green fluorescent protein).
Setting: Sleep research laboratory.
Patients Or Participants: Twenty-five children with OSAS and 24 without OSAS.
Interventions: N/A.
Measurements And Results: Higher expression of IL-1α and classical NF-kB subunits p65 and p50 was observed in adenoids and tonsils of children with OSAS. Patient serum induced NF-kB activity, as measured by a luciferase-NF-kB reporter assay and by induction of p65 nuclear translocation in cells permanently transfected with GFP-p65 plasmid. IL-1β showed increased epithelial expression in OSAS tissues.
Conclusions: Nuclear factor kappa B is locally and systemically activated in children with obstructive sleep apnea syndrome. This observation may motivate the search for new anti-inflammatory strategies for controlling nuclear factor kappa B activation in obstructive sleep apnea syndrome.
abstract_id: PUBMED:17013605
Evidence for activation of nuclear factor kappaB in obstructive sleep apnea. Obstructive sleep apnea (OSA) is a risk factor for atherosclerosis, and atherosclerosis evolves from activation of the inflammatory cascade. We propose that activation of nuclear factor kappaB (NF-kappaB), a key transcription factor in the inflammatory cascade, occurs in OSA. Nine age-matched, nonsmoking, and non-hypertensive men with OSA symptoms and seven similar healthy subjects were recruited for standard polysomnography followed by the collection of blood samples for monocyte nuclear p65 concentrations (OSA and healthy groups). In the OSA group, p65 concentrations and monocyte production of tumor necrosis factor alpha (TNF-alpha) were measured at the same time and after the next night of continuous positive airway pressure (CPAP). p65 concentrations in the OSA group were significantly higher than in the control group [median, 0.037 ng/μl (interquartile range, 0.034 to 0.051) vs 0.019 ng/μl (interquartile range, 0.013 to 0.032); p = 0.008], and in the OSA group were significantly correlated with the apnea-hypopnea index and time spent below an oxygen saturation of 90% (r = 0.77 and 0.88, respectively) after adjustment for age and BMI. One night of CPAP resulted in a reduction in p65 [to 0.020 ng/μl (interquartile range, 0.010 to 0.036), p = 0.04] and in TNF-alpha production in cultured monocytes [16.26 (interquartile range, 7.75 to 24.85) to 7.59 ng/ml (interquartile range, 5.19 to 12.95), p = 0.01]. NF-kappaB activation occurs with sleep-disordered breathing. Such activation of NF-kappaB may contribute to the pathogenesis of atherosclerosis in OSA patients.
abstract_id: PUBMED:23805411
Elevation of plasma basic fibroblast growth factor after nocturnal hypoxic events in patients with obstructive sleep apnea syndrome. Obstructive sleep apnea syndrome (OSAS) is associated with recurrent nocturnal hypoxia during sleep; this hypoxia has been implicated in the pathogenesis of cardiovascular complications. However, a useful soluble factor that is sensitively correlated with OSAS severity for the diagnosis remains unidentified. We hypothesized that systemic levels of basic fibroblast growth factor (bFGF), a hypoxia-induced cytokine, were affected by nocturnal hypoxemia in OSAS patients, and we assessed whether the degree of change in plasma bFGF concentrations before and after nocturnal hypoxia is correlated with the severity of OSAS. Thirty subjects who had suspected OSAS and had been investigated by nocturnal polysomnography (PSG) were enrolled. Plasma bFGF and vascular endothelial growth factor (VEGF) concentrations the night before PSG and the next morning were measured by sandwich enzyme-linked immunosorbent assay. Correlations between the changes in these factors and hypoxia-associated parameters for OSAS severity were analyzed. Patients with OSAS had significantly elevated levels of plasma bFGF, but not of VEGF or hemoglobin, after rising. The degree of change in bFGF concentrations after nocturnal apnea episodes was significantly correlated with diagnostic parameters for OSAS severity. The change in plasma bFGF levels is associated with the degree of hypoxic state in OSAS patients, implying that bFGF might be a useful soluble factor for evaluating OSAS severity.
abstract_id: PUBMED:29432463
Beyond factor analysis: Multidimensionality and the Parkinson's Disease Sleep Scale-Revised. Many studies have sought to describe the relationship between sleep disturbance and cognition in Parkinson's disease (PD). The Parkinson's Disease Sleep Scale (PDSS) and its variants (the Parkinson's disease Sleep Scale-Revised; PDSS-R, and the Parkinson's Disease Sleep Scale-2; PDSS-2) quantify a range of symptoms impacting sleep in only 15 items. However, data from these scales may be problematic as included items have considerable conceptual breadth, and there may be overlap in the constructs assessed. Multidimensional measurement models, accounting for the tendency for items to measure multiple constructs, may be useful more accurately to model variance than traditional confirmatory factor analysis. In the present study, we tested the hypothesis that a multidimensional model (a bifactor model) is more appropriate than traditional factor analysis for data generated by these types of scales, using data collected using the PDSS-R as an exemplar. 166 participants diagnosed with idiopathic PD participated in this study. Using PDSS-R data, we compared three models: a unidimensional model; a 3-factor model consisting of sub-factors measuring insomnia, motor symptoms and obstructive sleep apnoea (OSA) and REM sleep behaviour disorder (RBD) symptoms; and, a confirmatory bifactor model with both a general factor and the same three sub-factors. Only the confirmatory bifactor model achieved satisfactory model fit, suggesting that PDSS-R data are multidimensional. There were differential associations between factor scores and patient characteristics, suggesting that some PDSS-R items, but not others, are influenced by mood and personality in addition to sleep symptoms. Multidimensional measurement models may also be a helpful tool in the PDSS and the PDSS-2 scales and may improve the sensitivity of these instruments.
abstract_id: PUBMED:24961120
Effect of multimodality therapies on plasma soluble tumor necrosis factor receptor I in OSAHS patients. Objective: The soluble TNF receptor-I (sTNF-R I) is a subtype of membrane receptor for tumor necrosis factor alpha, and not much is known about its link with obstructive sleep apnea hypopnea syndrome (OSAHS). We hypothesized that the TNF receptor might play an important role in the inflammation seen in patients with OSAHS; this study was therefore undertaken to investigate the effects of multimodality therapies on its peripheral blood level.
Method: Seventy-seven adults with habitual snoring and a mean age of 34.9 ± 11 years consented to participate in the study. All participants were studied with overnight polysomnography, physical examination and a blood draw at baseline. According to the severity of OSAHS, they were categorized into three OSAHS groups and one control group. The moderate and severe OSAHS groups returned for repeat polysomnography and a blood draw 3 months after ENT surgery or continuous positive airway pressure (CPAP) treatment. Serum sTNF-R I levels were measured using an immunoluminometric assay kit.
Result: (1) Compared with the non-OSAHS control group, serum sTNF-R I levels prior to treatment in the OSAHS groups were significantly greater, with mean serum levels of 742 ± 258 vs. 340 ± 102 pg/ml, respectively (P < 0.05). (2) Plasma soluble tumor necrosis factor receptor-I responded sensitively to the comprehensive therapies when pre-treatment levels were compared with post-treatment levels. (3) After adjusting for age, gender, BMI and weight, a positive association was found between the apnea-hypopnea index (AHI) and sTNF-R I (r = 0.646, P < 0.01), and a negative association was found between the lowest oxygen saturation (LSaO2) and sTNF-R I (r = 0.522, P < 0.01).
Conclusion: In summary, independent of age, gender, BMI and weight, our data suggest a relationship between the severity of OSAHS and the peripheral blood level of soluble TNF receptor-I. Comprehensive therapy is effective in changing sTNF-R I levels. sTNF-R I may be recommended as an inflammatory marker of OSAHS.
Answer: The provided abstracts do not directly address whether nasal allergies, rhinitis, or nasal airway resistance (NARES) are risk factors for obstructive sleep apnea (OSA). The abstracts focus on various biomarkers, inflammatory factors, and the effects of treatments such as continuous positive airway pressure (CPAP) and surgical interventions on patients with OSA. They discuss the roles of transforming growth factor β (TGF-β), vascular endothelial growth factor (VEGF), tumor necrosis factor-α (TNF-α), insulin-like growth factor 1 (IGF-1), basic fibroblast growth factor (bFGF), and nuclear factor kappa B (NF-kB) in the context of OSA (PUBMED:32274161, PUBMED:24680337, PUBMED:36876337, PUBMED:27006717, PUBMED:18208737, PUBMED:24293770, PUBMED:17013605, PUBMED:23805411, PUBMED:29432463, PUBMED:24961120).
However, none of the abstracts mention NARES or nasal airway resistance as a factor in the development or progression of OSA. To answer the question about NARES being a risk factor for OSA, one would need to look at studies specifically investigating the relationship between nasal airway obstruction, such as that caused by allergies or rhinitis, and the incidence or severity of OSA. Such studies would likely explore how nasal congestion and the resulting difficulty in nasal breathing during sleep could contribute to the collapse of the upper airway, thus predisposing individuals to OSA. Unfortunately, the provided abstracts do not contain this information. |
Instruction: Does the Success of Work-related Interventions in the Rehabilitation of Neurological Diseases Depend on the Return-to-Work Prognosis?
Abstracts:
abstract_id: PUBMED:26317841
Does the Success of Work-related Interventions in the Rehabilitation of Neurological Diseases Depend on the Return-to-Work Prognosis? A Re-analysis of 2 Randomised Controlled Trials. Objective: The paper examines whether patients with neurological diseases and a poor return to work (RTW) prognosis gain more from work-related medical rehabilitation (WMR).
Methods: Re-analysis of matched samples of 2 randomised controlled trials (N=442; questionnaire at admission to rehabilitation and at 15-month follow-up). Linear regression models were used to calculate the effect of WMR depending on the RTW prognosis. Primary outcomes were time on sick leave during follow-up and physical and mental health measured by the SF-36. As secondary outcomes, coping strategies and work-related attitudes were defined.
Results: Only for patients with a high non-RTW risk could positive effects of WMR be demonstrated on mental health, coping skills and the scale "work as a resource". In the 15-month follow-up, there were no differences in effects on duration of sick leave and physical health.
Conclusions: The results of this analysis indicate that patients with neurological diseases derive benefit from WMR only if their empirical RTW prognosis is poor. However, this applies only to mental health in the medium term. Our study confirms previous findings suggesting different effectiveness of WMR for patients with different RTW risk.
abstract_id: PUBMED:31451849
Work-related medical rehabilitation in neurology: Effective on the basis of individualized rehabilitant identification. Background: Evidence for the effectiveness of work-related medical rehabilitation (WMR) for a successful return to work (RTW) is lacking for neurological diseases. The aim of this study was therefore to correlate the cross-indication screening instrument for identifying the need for work-related medical rehabilitation (SIMBO-C) with the individualized clinical anamnestic determination of severe restrictions of work ability (SRWA) as a required access criterion for admittance to neurological WMR. A further aim was to compare the rate of successful RTW in rehabilitants with and without WMR measures 6 months after inpatient rehabilitation.
Methods: On admission SRWA were routinely screened by an individualized clinical anamnestic determination with subsequent assignment to WMR or conventional rehabilitation. At the beginning of rehabilitation the SIMBO-C was applied and 6 months after the rehabilitation the RTW status was surveyed.
Results: Of the 80 rehabilitants 44 (55%) received WMR. On admission they showed a higher SIMBO-C score (41.3 ± 15.7 vs. 26.2 ± 18.6 points, p = 0.002), on discharge more often locomotor and psychomental disorders (55% vs. 36%, p = 0.10 and 46% vs. 22%, p = 0.03, respectively) and longer incapacitation times after rehabilitation of > 4 weeks (66% vs. 33%, p = 0.02) compared to those without WMR. At 6 months follow-up after discharge the 2 groups did not significantly differ with respect to successful RTW (61% vs. 66%, p = 0.69). The SIMBO-C (cut-off ≥ 30 points) showed a medium correlation with the individualized clinical anamnestic determination of SRWA (r = 0.33, p = 0.01).
Conclusion: The applied neurological WMR concept accomplished a comparable RTW rate between rehabilitants with SRWA by a WMR and those without SRWA and conventional rehabilitation. The SIMBO-C should only be used in combination with the individualized anamnesis to identify SRWA.
abstract_id: PUBMED:27728938
Formative Evaluation of the "MBO® Kompakt-Neurowoche" - An Intensified Work-Related Rehabilitation Program for Neurological Patients. Objectives: The MBO® Kompakt-Neurowoche is a 7-day work-related medical rehabilitation measure (allocated by a physician) offered after a regular neurological rehabilitation program. Program access, process, and outcomes were examined in a formative evaluation. Method: Pre-post questionnaire data from 5 data points were used: start of regular rehabilitation (T0); start of work-related rehabilitation (T1); end of work-related rehabilitation (T2); 6-month follow-up (T3); 12-month follow-up (T4). Results: N=252 patients (75% male, 48±10 years) were included. Participants reported higher work-related treatment motivation and a more positive subjective return-to-work prognosis compared with nonparticipants (N=215). At T4, 76% were (very) satisfied with the program. Patients rated therapy elements focusing on the assessment and improvement of work-related capacity and memory as especially useful. Assistance in developing job-related alternatives should be optimized. Conclusions: Patients participating in the work-related program report both vocational problems and a high motivation to deal with these problems during rehabilitation. The program is rated as useful with regard to return to work and the management of workplace issues.
abstract_id: PUBMED:29189108
Return to work after young stroke: A systematic review. Background: The incidence of stroke in young adults is increasing. While many young survivors are able to achieve a good physical recovery, subtle dysfunction in other domains, such as cognition, often persists, and could affect return to work. However, reported estimates of return to work and factors affecting vocational outcome post-stroke vary greatly. Aims: The aims of this systematic review were to determine the frequency of return to work at different time points after stroke and identify predictors of return to work. Summary of review: Two electronic databases (Medline and Embase) were systematically searched for articles according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A total of 6473 records were screened, 68 were assessed for eligibility, and 29 met all inclusion criteria (working-age adults with stroke, return to work evaluated as an outcome, follow-up duration reported, and publication within the past 20 years). Return to work increased with time, with median frequency increasing from 41% between 0 and 6 months, 53% at 1 year, 56% at 1.5 years to 66% between 2 and 4 years post-stroke. Greater independence in activities of daily living, fewer neurological deficits, and better cognitive ability were the most common predictors of return to work. Conclusion: This review highlights the need to examine return to work in relation to time from stroke and assess cognition in working age and young stroke survivors. The full range of factors affecting return to work has not yet been explored and further evaluations of return to work interventions are warranted.
abstract_id: PUBMED:33414754
Critical Issues and Imminent Challenges in the Use of sEMG in Return-To-Work Rehabilitation of Patients Affected by Neurological Disorders in the Epoch of Human-Robot Collaborative Technologies. Working-age patients affected by neurological pathologies with motor disorders have to cope with problems related to employability, difficulties in working, and premature work interruption. It has been demonstrated that suitable job accommodation plans play a beneficial role in the overall quality of life of pathological subjects. A well-designed return-to-work program should consider several recent innovations in the clinical and ergonomic fields. One of the instrument-based methods used to monitor the effectiveness of ergonomic interventions is surface electromyography (sEMG), a multi-channel, non-invasive, wireless, wearable tool, which allows in-depth analysis of motor coordination mechanisms. Although the scientific literature in this field is extensive, its use remains significantly underexploited and the state-of-the-art technology lags expectations. This is mainly attributable to technical and methodological limitations (electrode-skin impedance, noise, electrode location, size, configuration and distance, presence of crosstalk signals, comfort issues, selection of appropriate sensor setup, sEMG amplitude normalization, definition of correct sEMG-related outcomes and normative data) and cultural limitations. The technical and methodological problems are being resolved or minimized, also thanks to the availability of reference books and tutorials. Cultural limitations are identified in the traditional use of qualitative approaches at the expense of quantitative measurement-based monitoring methods to design and assess ergonomic interventions and train operators. To bridge the gap between return-to-work rehabilitation and other disciplines, teaching courses, accompanied by further electrode and instrumentation development, should be designed at Bachelor, Master and PhD levels to enhance the best skills available among physiotherapists, occupational health and safety technicians and ergonomists.
abstract_id: PUBMED:23531585
Important factors influencing the return to work after stroke. Background: As the field of rehabilitation shifts its focus towards improving functional capacity instead of managing disability, return to work (RTW) and return to the community emerge as key goals in a person's recovery from major disabling illness such as stroke.
Objective: To compile important factors believed to influence RTW after a stroke.
Methods: Based on a comprehensive literature review, we clustered similar factors and organized these factors based on the International Classification of Function, Disability and Health (ICF) framework: body functions or structure, activity participation, environmental factors and personal and psychosocial factors.
Results: Overall, stroke severity, as assessed by the degree of residual disability such as weakness, neurological deficit or impairments (speech, cognition, apraxia, agnosia), has been shown to be the most consistent negative predictor of RTW. Many factors such as the number of working years remaining until retirement, depression, medical history, and occupation need to be taken into consideration for stroke survivors, as they can influence RTW decision making. Stroke survivors who are flexible and realistic in their vocational goal and emotionally accept their disability appear more likely to return to work.
Conclusions: There are many barriers to employment for stroke survivors ranging from physical and cognitive impairments to psychosocial and environmental factors.
abstract_id: PUBMED:29125015
Factors associated with return to work in patients with long-term disabilities due to neurological and neuropsychiatric disorders. The current study explores factors predicting return to work in a sample of patients with neurological and neuropsychiatric disorders who have attended a prevocational readiness and social skills training programme many years after trauma. Participants were community-dwelling adults with long-term disabilities (N = 67). Results of univariate analyses followed by multivariate logistic regression analysis revealed that both pre-injury (prior) and post-injury (current) factors influenced the likelihood of employment in our sample: prior employment, current employment readiness, current cognitive competence (particularly memory and executive functioning) and emotional adjustment. Our findings demonstrate that both pre-trauma and current factors interact in predicting return to work not only for individuals with traumatic brain injury (TBI), but also for a broader group of patients with long-term disabilities due to a variety of neurological and neuropsychiatric conditions. Thus, our findings provide preliminary support for ongoing long-term management of individuals with long-term disabilities and warrant close attention of future investigators to potential benefits of cognitive remediation, psychotherapy and vocational rehabilitation in terms of maintenance of initial gains and increased probability of return to work many years after trauma.
abstract_id: PUBMED:34408555
Predictors of return to work among stroke survivors in south-west Nigeria. Introduction: Stroke is acknowledged globally and among Nigerian rehabilitation researchers as a public health problem that leaves half of its survivors with significant neurological deficits and an inability to re-establish pre-existing roles. Owing to the dearth of country-specific data on return to work and its determinants for stroke survivors in Nigeria, this study investigated the predictors of return to work among stroke survivors in south-west Nigeria.
Method: Two hundred and ten stroke survivors from five tertiary health facilities in Osun state, Nigeria responded to a validated three-section questionnaire assessing return-to-work rates and their determinants after stroke. Collected data were analysed using descriptive statistics and the inferential statistics of chi-square, t-test and multiple logistic regression.
Result: The mean age of the respondents was 52.90 ± 7.92 years. Over 60% of the respondents returned to work, with about half of them in full-time employment (32.9%). The majority of the respondents noted that travel to and from work (43.8%) and access at work (43.3%) had an impact on their ability to work. The symptoms of stroke (odds ratio (OR) = 0.87), the environment (OR = 0.83), body function impairments (OR = 0.86) as well as activity and participation problems (OR = 0.80) were the significant predictors of return to work. Hemiplegia or paresis of the non-dominant side of the body was associated with a higher chance of return to work (OR = 7.64).
Conclusion: Body function impairments, activity and participation problems were independent predictors of return to work after stroke. Similarly, side of hemiplegia plays a prominent role in resumption of the worker role of stroke survivors in south-west Nigeria.
abstract_id: PUBMED:31555452
BDNF, COMT, and DRD2 polymorphisms and ability to return to work in adult patients with low- and high-grade glioma. Background: Cognitive and language dysfunction is common among patients with glioma and has a significant impact on survival and health-related quality of life (HRQOL). Little is known about the factors that make individual patients more or less susceptible to the cognitive sequelae of the disease. A better understanding of the individual and population characteristics related to cognitive function in glioma patients is required to appropriately stratify patients, prognosticate, and develop more efficacious treatment regimens. There is evidence that allelic variation among genes involved in neurotransmission and synaptic plasticity are related to neurocognitive performance in states of health and neurologic disease.
Methods: We studied the association of single-nucleotide polymorphism variations in brain-derived neurotrophic factor (BDNF, rs6265), dopamine receptor 2 (DRD2, rs1076560), and catechol-O-methyltransferase (COMT, rs4680) with neurocognitive function and ability to return to work in glioma patients at diagnosis and at 3 months. We developed a functional score based on the number of high-performance alleles that correlates with the capacity for patients to return to work.
Results: Patients with higher-performing alleles have better scores on neurocognitive testing with the Repeatable Battery for the Assessment of Neuropsychological Status and Stroop test, but not the Trail Making Test.
Conclusions: A better understanding of the genetic contributors to neurocognitive performance in glioma patients and capacity for functional recovery is necessary to develop improved treatment strategies based on patient-specific factors.
abstract_id: PUBMED:29046925
Clinical anamnestic characteristics in neurological work-related medical rehabilitation: Necessity for a qualitative identification of severe restrictions of work ability. Background: Severe restrictions of work ability (SRWA) as a condition for participation in neurological work-related medical rehabilitation (WMR) have not been adequately described up to now. Similarly, the question of the applicability of the screening instrument SIMBO-C for evaluating SRWA in neurological rehabilitation has not yet been answered conclusively.
Objective: Determination of clinical and anamnestic characteristics of neurological SRWA and assessment of the applicability of the screening instrument SIMBO-C in neurological WMR.
Material And Methods: For the identification of SRWA clinical and anamnestic characteristics of 344 rehabilitants were routinely collected. The clinically and anamnestically determined SRWA was described quantitatively and content-analytically and correlated with SIMBO-C.
Results: Of the rehabilitants, 66% exhibited SRWA. Apart from the established characteristics of SRWA, further person- and disease-specific factors were found. The SIMBO-C score was significantly higher in the group with SRWA compared to the group without SRWA (45.6 ± 18.9 vs. 31.5 ± 12.5, p < 0.001); however, 31% of the group with SRWA and 50% of the group without SRWA demonstrated a SIMBO-C score ≤ 36 points, indicating a large overlap. The profile of the clinical and anamnestic characteristics in the group with SRWA was homogeneous, regardless of the SIMBO-C score.
Conclusion: The characteristics of neurological SRWA are mainly qualitatively shaped and may only partly be identified by SIMBO-C. A combined quantitative and qualitative approach is necessary in neurological WMR.
Answer: The success of work-related interventions in the rehabilitation of neurological diseases does seem to depend on the return-to-work (RTW) prognosis. A re-analysis of two randomized controlled trials indicated that only patients with a high non-RTW risk demonstrated positive effects of work-related medical rehabilitation (WMR) on mental health, coping skills, and the scale "work as a resource." However, there were no differences in effects on the duration of sick leave and physical health in the 15-month follow-up (PUBMED:26317841). This suggests that patients with a poor RTW prognosis may benefit more from WMR, particularly in terms of mental health in the medium term.
Additionally, a study correlating the cross-indication screening instrument SIMBO-C with the individualized clinical anamnestic determination of severe restrictions of work ability (SRWA) found that the applied neurological WMR concept accomplished a comparable RTW rate between rehabilitants with SRWA by a WMR and those without SRWA and conventional rehabilitation (PUBMED:31451849). This implies that WMR can be effective when tailored to individual needs based on a thorough assessment of work ability restrictions.
Moreover, participants in an intensified work-related rehabilitation program, the MBO® Kompakt-Neurowoche, reported a higher work-related treatment motivation and a more positive subjective RTW prognosis compared to nonparticipants, suggesting that motivation and positive outlook may influence the success of RTW interventions (PUBMED:27728938).
In summary, the success of work-related interventions in the rehabilitation of neurological diseases does appear to be influenced by the RTW prognosis, with tailored interventions based on individual assessments of work ability restrictions showing effectiveness in supporting RTW outcomes. |
Instruction: Do postoperative platelet-rich plasma injections accelerate early tendon healing and functional recovery after arthroscopic supraspinatus repair?
Abstracts:
abstract_id: PUBMED:26284178
Platelet-Rich Fibrin Promotes an Accelerated Healing of Achilles Tendon When Compared to Platelet-Rich Plasma in Rat. Background: Autologous platelet concentrate has been used to improve the function and regeneration of injured tissues. Tendinopathies are common in clinical practice and often require long-term treatment. On the basis of lead time, we compared the effect of using platelet-rich plasma (PRP) and platelet-rich fibrin (PRF) in repairing the rat Achilles tendon.
Methods: The effectiveness of using PRP and PRF was evaluated after 14 and 28 postoperative days by histological analysis. The quantification of collagen types I and III was performed by Sirius red staining. Qualitatively, the data were verified with hematoxylin-eosin (H&E) staining.
Results: In Sirius red staining, no significant differences between treatment groups were found. A statistical difference was observed only between PRP (37.2% collagen) and the control group (16.2%) 14 days after treatment. Within-group comparisons between the two time points showed a difference for collagen I (27.8% and 47.7%) and collagen III (66.9% and 46.0%) in the PRF group. The control group showed differences only in collagen I (14.2% and 40.9%), and no other finding was observed in the PRP group. In H&E staining, PRF showed better cellular organization compared with the other groups at 28 days.
Conclusion: Our study suggests that PRF promotes accelerated regeneration of the Achilles tendon in rats, offering promising prospects for future clinical use.
abstract_id: PUBMED:33285699
The efficacy of platelet-rich plasma in arthroscopic rotator cuff repair: A protocol of randomized controlled trial. Background: Platelet-rich plasma (PRP), an autologous platelet concentrate containing a large number of growth factors, has been widely investigated for healing and rebuilding bone and tendon tissue. The objective of this prospective randomized study is to compare the long-term effectiveness of arthroscopic rotator cuff repair with and without platelet-rich plasma. It is hypothesized that there is no difference in clinical results between patients who receive PRP enhancement during arthroscopic rotator cuff repair and those who do not.
Methods: This is a prospective, single-center, randomized controlled trial. The study was reviewed and approved by the institutional review board of our hospital. All patients will provide written informed consent before taking part in the trial. Patients will be recruited from those undergoing arthroscopic rotator cuff repair. Patients who meet the following conditions will be included: age 18 to 55 years; complete rotator cuff tear confirmed during the operation; agreement to wear an abduction brace for 4 weeks after the operation; preoperative platelet count >150,000. All patients will be evaluated at baseline and follow-up with the Constant-Murley (CM) and American Shoulder and Elbow Surgeons (ASES) scores, the numerical rating scale (NRS), and the retear rate. Analyses will be performed with SPSS 16.0 (SPSS Inc., Chicago, IL), with the significance level set at P < .05.
Conclusions: The results of this study will provide useful new information on whether PRP is effective in the arthroscopic rotator cuff repair patients.
Trial Registration: This study protocol was registered in Research Registry (researchregistry6108).
abstract_id: PUBMED:23758981
PARot--assessing platelet-rich plasma plus arthroscopic subacromial decompression in the treatment of rotator cuff tendinopathy: study protocol for a randomized controlled trial. Background: Platelet-rich plasma (PRP) is an autologous platelet concentrate. It is prepared by separating the platelet fraction of whole blood from patients and mixing it with an agent to activate the platelets. In a clinical setting, PRP may be reapplied to the patient to improve and hasten the healing of tissue. The therapeutic effect is based on the presence of growth factors stored in the platelets. Current evidence in orthopedics shows that PRP applications can be used to accelerate bone and soft tissue regeneration following tendon injuries and arthroplasty. Outcomes include decreased inflammation, reduced blood loss and post-treatment pain relief. Recent shoulder research indicates there is poor vascularization present in the area around tendinopathies and this possibly prevents full healing capacity post surgery (Am J Sports Med 36(6):1171-1178, 2008). Although it is becoming popular in other areas of orthopedics there is little evidence regarding the use of PRP for shoulder pathologies. The application of PRP may help to revascularize the area and consequently promote tendon healing. Such evidence highlights an opportunity to explore the efficacy of PRP use during arthroscopic shoulder surgery for rotator cuff pathologies.
Methods/design: PARot is a single center, blinded superiority-type randomized controlled trial assessing the clinical outcomes of PRP applications in patients who undergo shoulder surgery for rotator cuff disease. Patients will be randomized to one of the following treatment groups: arthroscopic subacromial decompression surgery or arthroscopic subacromial decompression surgery with application of PRP.
Trial Registration: Current Controlled Trials: ISRCTN10464365.
abstract_id: PUBMED:21562414
A systematic review of the use of platelet-rich plasma in sports medicine as a new treatment for tendon and ligament injuries. Objective: To evaluate, through a systematic review of the current literature, the evidence-based outcomes of the use of platelet-rich plasma (PRP) for the treatment of tendon and ligament injuries.
Data Sources: A search of English-language articles was performed in PubMed and EMBASE using keywords "PRP," "platelet plasma," and "platelet concentrate" combined with "tendon" and then "ligament" independently. The search was conducted through September 2010.
Study Selection: The search was limited to in vivo studies. Nonhuman studies were excluded. Tissue engineering strategies, which included a combination of PRP with additional cell types (bone marrow), were also excluded. Articles with all levels of evidence were included. Thirteen of 32 retrieved articles met the inclusion criteria.
Data Extraction: The authors reviewed and tabulated data according to the year of study and journal, study type and level of evidence, patient demographics, method of PRP preparation, site of application, and outcomes.
Data Synthesis: The selected studies focused on the application of PRP in the treatment of patellar and elbow tendinosis, Achilles tendon injuries, rotator cuff repair, and anterior cruciate ligament (ACL) reconstruction. Seven studies demonstrated favorable outcomes in tendinopathies in terms of improved pain and functional scores. In 3 studies on the use of PRP in ACL reconstruction, no statistically significant differences were seen with regard to clinical outcomes, tunnel widening, and graft integration. One study examined the systemic effects after the local PRP application for patellar and elbow tendinosis.
Conclusions: Presently, PRP use in tendon and ligament injuries has several potential advantages, including faster recovery and, possibly, a reduction in recurrence, with no adverse reactions described. However, only 3 randomized clinical trials have been conducted.
abstract_id: PUBMED:20953759
Platelet concentrate vs. saline in a rat patellar tendon healing model. Purpose: To evaluate single-centrifuge platelet concentrate as an additive for improved tendon healing. Platelet-rich plasma has been reported to improve tendon healing. Single-centrifuge platelet concentration may increase platelet concentration enough to positively affect tendon healing. The hypothesis was that a single centrifuge process would lead to a blood product with increased platelet concentrations which, when added to a surgically created tendon injury, would improve tendon healing when compared with a saline control.
Methods: Lewis rats had a surgical transection of the patellar tendon that was subsequently stabilized with a cerclage suture. Prior to skin closure, the tendon was saturated with either a concentrated platelet solution or saline. At 14 days, all animals were killed, and the extensor mechanism was isolated for testing. Biomechanical testing outputs included ultimate tensile load, stiffness, and energy absorbed.
Results: Comparisons between the control group and the concentrated platelet group revealed no differences. A subgroup of the concentrated platelet group consisting of specimens in whom the concentration process was most successful showed significantly higher ultimate tensile load (P < 0.05) and energy absorbed to failure (P < 0.05) when compared to the control group.
Conclusion: When successful, single centrifuge platelet concentration yields a solution that improves tendon healing when compared with a saline control. Single-spin platelet concentration may yield a biologically active additive that may improve tendon healing, but more studies must be undertaken to ensure that adequate platelet concentration is possible.
abstract_id: PUBMED:17068715
How can one platelet injection after tendon injury lead to a stronger tendon after 4 weeks? Interplay between early regeneration and mechanical stimulation. Background: Mechanical stimulation improves the repair of ruptured tendons. Injection of a platelet concentrate (platelet-rich plasma, PRP) can also improve repair in several animal models. In a rat Achilles tendon transection model, 1 postoperative injection resulted in increased strength after 4 weeks. Considering the short half-lives of factors released by platelets, this very late effect calls for an explanation.
Methods: We studied the effects of platelets on Achilles tendon regenerates in rats 3, 5 and 14 days after transection. The tendons were either unloaded by Botulinum toxin A (Botox) injections into the calf muscles, or mechanically stimulated in activity cages. No Botox injections and ordinary cages, respectively, served as controls. Repair was evaluated by tensile testing.
Results: At 14 days, unloading (with Botox) abolished any effect of the platelets and reduced the mechanical properties of the repair tissue to less than half of normal. Thus, some mechanical stimulation is a prerequisite for the effect of platelets at 14 days. Without Botox, both activity and platelets increased repair independently of each other. However, at 3 and 5 days, platelets improved the mechanical properties in Botox-treated rats.
Interpretation: Platelets influence only the early phases of regeneration, but this allows mechanical stimulation to start driving neo-tendon development at an earlier time point, which kept it constantly ahead of the controls.
abstract_id: PUBMED:27979486
Platelet-rich plasma: a biomimetic approach to enhancement of surgical wound healing. Platelets are small anucleate cytoplasmic cell bodies released by megakaryocytes in response to various physiologic triggers. Traditionally thought to be solely involved in the mechanisms of hemostasis, platelets have gained much attention due to their involvement in wound healing, immunomodulation, and antiseptic properties. As the field of surgery continues to evolve, so does the need for therapies to aid in treating the increasingly complex patients seen. With over 14 million obstetric, musculoskeletal, urological and gastrointestinal surgeries performed annually, the healing of surgical wounds continues to be of utmost importance to the surgeon and patient. Platelet-rich plasma, or platelet concentrate, has emerged as a possible adjuvant therapy to aid in the healing of surgical wounds and injuries. In this review, we will discuss the wound healing properties of platelet-rich plasma and various surgical applications.
abstract_id: PUBMED:33431160
Effect of platelet-rich plasma on fracture healing. Bone has the ability to completely regenerate under normal healing conditions. Although fractures generally heal uneventfully, healing problems such as delayed union or nonunion still occur in approximately 10% of patients. Optimal healing potential involves an interplay of biomechanical and biological factors. Orthopedic implants are commonly used for providing the necessary biomechanical support. In situations where the biological factors that are needed for fracture healing are deemed inadequate, additional biological enhancement is needed. Because platelets are packed with granules that contain growth factors and other proteins with osteoinductive capacity, local application of platelet concentrates, also called platelet-rich plasma (PRP), seems an attractive biological means of enhancing fracture healing. This review provides an overview of the use of PRP and its effect in enhancing fracture healing. PRP is extracted from the patient's own blood, so its use is considered safe. Although PRP proved effective in some studies, other studies showed controversial results. Conflicts in the literature may be explained by the absence of consensus about the preparation of PRP, differences in platelet counts, low numbers of patients, and the absence of a standard application technique. More studies addressing these issues are needed in order to determine the true effect of PRP on fracture healing.
abstract_id: PUBMED:23867186
Use of platelet-rich plasma in the care of sports injuries: our experience with ultrasound-guided injection. Background: Platelet-rich plasma is being used more frequently to promote healing of muscle injuries. The growth factors contained in platelet-rich plasma accelerate physiological healing processes and the use of these factors is simple and minimally invasive. The aim of this study was to demonstrate the efficacy of ultrasound-guided injection of platelet-rich plasma in muscle strains and the absence of side effects.
Materials And Methods: Fifty-three recreational athletes were enrolled in the study. The patients were recruited from the Emergency Room in the University Hospital at Parma according to a pre-defined protocol. Every patient was assessed by ultrasound imaging to evaluate the extent and degree of muscle injuries. Only grade II lesions were treated with three ultrasound-guided injections of autologous platelet-rich plasma every 7 days. Platelet concentrate was produced according to standard methods, with a 10% variability in platelet count. The platelet gel for clinical use was obtained by adding thrombin to the concentrates under standardised conditions. Outcomes assessed were: pain reduction, muscle function recovery and return to sports activity, ultrasound-imaging tissue healing, relapses, local infections, and any side effect during the treatment.
Results: In all cases muscle lesions healed fully on ultrasound-imaging, the pain disappeared, and muscle function recovery was documented with a return to sports activity. A single patient had a relapse 1 year after treatment.
Discussion: Platelet-rich plasma injected into the injury site is one of the most important factors rendering the treatment effective. To maximise its efficacy, the preliminary ultrasound must be performed accurately to localise the lesion and guide the needle into it. According to the current results, which document full muscle recovery and no relapse except in one case, ultrasound-guided injection of platelet-rich plasma represents a valid minimally invasive treatment for muscle injuries.
abstract_id: PUBMED:17089514
Platelet-rich plasma combined with skin substitute for chronic wound healing: a case report. Contemporary management of chronic wounds focuses on improving natural healing and individualization of treatment. Incorporating multiple therapies has become increasingly common. Of interest are autologous growth factors, which are especially important in chronic wound healing and may contribute to tissue formation and epithelialization. Autologous platelet concentrate or platelet-rich plasma (PRP) is a concentration of at least five autologous growth factors and has been shown to accelerate wound healing and may have infection-fighting properties. Chronic wound healing is complicated by both decreased growth factor availability and infection, making PRP use valuable in these types of wounds. In this report, the use of PRP therapy alone and in combination with a bioengineered skin substitute as a platelet-rich tissue graft in a chronic, non-healing wound is detailed. Over 27 weeks, the patient received multiple therapies in attempts to heal a severe decubitus ulcer of the sacrum. The introduction of PRP therapy at Week 14 led to a 26% reduction in wound depth over 4 weeks. At Week 19, PRP therapy was combined with a powdered skin substitute to create a platelet-rich tissue graft. The combination brought dramatic results, eliminating wound tunneling and reducing the wound dimensions from 6.2 cm long x 6.7 cm wide x 2.7 cm deep to 5.0 cm long x 6.0 cm wide x 1.4 cm deep. The promising observations from this case report indicate that further study on the combining of PRP therapy and skin substitutes is necessary.
Answer: The abstracts provided do not directly answer the question regarding the specific effect of postoperative platelet-rich plasma (PRP) injections on early tendon healing and functional recovery after arthroscopic supraspinatus repair. However, they do provide insights into the general efficacy of PRP in tendon healing and its use in orthopedic procedures, including rotator cuff repair, which may be relevant to the supraspinatus tendon.
Abstract PUBMED:26284178 suggests that platelet-rich fibrin (PRF), a second-generation platelet concentrate, promotes accelerated regeneration of the Achilles tendon in rats, which could imply potential benefits for tendon healing in general.
Abstract PUBMED:33285699 describes a protocol for a randomized controlled trial to study the long-term effectiveness of arthroscopic rotator cuff repair with and without PRP enhancement. The results of this study could provide information on the efficacy of PRP in rotator cuff repair, which includes the supraspinatus tendon.
Abstract PUBMED:23758981 outlines a study protocol for a randomized controlled trial assessing the clinical outcomes of PRP applications in patients undergoing shoulder surgery for rotator cuff disease, which may include the supraspinatus tendon.
Abstract PUBMED:21562414 is a systematic review that evaluates the evidence-based outcomes of PRP use for the treatment of tendon and ligament injuries. It reports that PRP has potential advantages, including faster recovery, which could be relevant to early tendon healing and functional recovery.
Abstract PUBMED:20953759 discusses a rat model study evaluating the effect of a single centrifuge platelet concentrate on tendon healing, which could provide insights into the potential benefits of PRP in tendon repair.
Abstract PUBMED:17068715 explores the interplay between early regeneration and mechanical stimulation in tendon repair with PRP, suggesting that PRP may influence early phases of tendon regeneration.
Abstract PUBMED:27979486 reviews the wound healing properties of PRP and its surgical applications, which could include tendon repair surgeries.
Abstract PUBMED:33431160 provides an overview of PRP's effect on enhancing fracture healing, which, while not directly related to tendon healing, indicates PRP's role in tissue regeneration.
Abstract PUBMED:23867186 discusses the efficacy of ultrasound-guided injection of PRP in muscle strains, which could be extrapolated to tendon injuries.
Abstract PUBMED:17089514 details a case report where PRP combined with a skin substitute was used for chronic wound healing, demonstrating dramatic results in wound healing. Although this is not directly related to tendon healing, it suggests that PRP may have beneficial effects on tissue repair and could potentially be applied to tendon healing processes.
In summary, while the abstracts provided do not give a definitive answer to the question of whether postoperative PRP injections specifically accelerate early tendon healing and functional recovery after arthroscopic supraspinatus repair, they do suggest that PRP has been found to be beneficial in various forms of tissue healing, including tendon repair. The studies mentioned indicate that PRP may enhance healing and functional recovery in tendon injuries, which could be relevant to the healing of the supraspinatus tendon after arthroscopic repair. However, the results of the specific studies on rotator cuff repair with PRP (PUBMED:33285699 and PUBMED:23758981) would be necessary to draw a more precise conclusion regarding the supraspinatus tendon. |
Instruction: Can the addition of interpretative comments to laboratory reports influence outcome?
Abstracts:
abstract_id: PUBMED:34955674
Interpretative comments - need for harmonization? Results of the Croatian survey by the Working Group for Post-analytics. Introduction: Interpretation of laboratory test results is a complex post-analytical activity that requires understanding not only of the clinical significance of laboratory results but also of the analytical phase of laboratory work. The aims of this study were: 1) to determine the general opinion of Croatian medical biochemistry laboratories (MBLs) about the importance of interpretative comments on laboratory test reports, and 2) to find out whether harmonization of interpretative comments is needed.
Materials And Methods: This retrospective study was designed as a survey by the Working Group for Post-analytics as part of national External Quality Assessment (EQA) program. All 195 MBLs participating in the national EQA scheme, were invited to participate in the survey. Results are reported as percentages of the total number of survey participants.
Results: Out of 195 MBLs, 162 participated in the survey (83%). Among them, 59% of MBLs had implemented test result comments in routine practice according to national recommendations. The majority of laboratories (92%) stated that interpretative comments added value to laboratory reports, and a substantial proportion (72%) did not receive feedback from physicians on their significance. Although physicians and patients ask for expert opinion, participants stated that the lack of interest of physicians (64%) as well as the inability to access the patient's medical record (62%) affects the quality of expert opinion.
Conclusion: Although most participants state that they use interpretative comments and provide expert opinions regarding test results, results of the present study indicate that harmonization for interpretative comments is needed.
abstract_id: PUBMED:36447805
Importance of Interpretative Comments in Clinical Biochemistry - a Practitioner's Report. Providing an interpretative comment (IC) is a professional obligation of the clinical biochemist. Most Nepalese clinical laboratories use only predefined comments on the report, while a few laboratories do not provide comments at all. Apart from doctors, other healthcare professionals and sometimes patients themselves seek laboratory expert opinion in the interpretation of obtained results. The non-availability of patients' medical records, limited communication with physicians and insufficient professional knowledge impact the quality of interpretative comments in Nepal. This report is intended to emphasize that the task of providing ICs is becoming more important in the context of Nepal. It also offers guidance to those who provide interpretative comments.
abstract_id: PUBMED:32369399
Do reflex comments on laboratory reports alter patient management? Introduction: Laboratory comments appended to clinical biochemistry reports are common in the UK. Although these comments are popular with clinicians and the public, there is little evidence that they influence the clinical management of patients.
Methods: We provided reflex automated laboratory comments on all primary care lipid results including, if appropriate, a recommendation of direct referral to the West Midlands Familial Hypercholesterolaemia service (WMFHS). Over a two-year period, the number of GP referrals to the WMFHS from the Wolverhampton City Clinical Commissioning Group (CCG) was compared with that from four comparator CCGs of similar population size, which were not provided with reflex laboratory comments.
Results: Over the study period, the WMFHS received more referrals from Wolverhampton GPs (241) than from any single comparator CCG (range 8-65), and more than the combined referrals (172) from all four comparator CCGs.
Conclusion: Targeted reflex laboratory comments may influence the clinical management of patients and may have a role in the identification of individuals with familial hypercholesterolaemia.
abstract_id: PUBMED:15117437
Can the addition of interpretative comments to laboratory reports influence outcome? An example involving patients taking thyroxine. Introduction: There is little evidence that the addition of interpretative comments to biochemistry reports can influence outcome for patients. Interpretative comments on thyroid function test (TFT) requests were introduced in Hull in August 1999, providing the opportunity to determine whether feedback on hypothyroid patients taking thyroxine could lead to a reduction in the proportion whose thyroxine was inadequately replaced.
Patients And Methods: The study comprised 15 584 TFT requests, made from 1 August 1999 to 30 August 2002 by general practitioners (GPs), for 8281 patients taking thyroxine. Under-replacement of thyroxine, defined as a TSH concentration above the upper reference limit (i.e. 4.7 mU/L), was usually commented on in the biochemical report.
Results: In the first, second and third years following the introduction of interpretative comments, the proportions of samples with a TSH concentration of >4.7 mU/L were 21.3%, 17.6% and 16.6%, respectively (χ² for trend = 43.1, P < 0.0001). The proportion with a TSH concentration of <0.1 mU/L showed a more modest change, from 12.5% in year 1 to 14.0% and 14.8% in years 2 and 3, respectively (χ² for trend = 22.3, P < 0.0001).
Conclusion: This study shows that in the three years following the introduction of interpretative comments there was a 22% reduction in the number of GPs' samples indicating thyroxine under-replacement. It seems likely that these data provide evidence that comments can indeed influence the biochemical outcome of patients.
abstract_id: PUBMED:30367781
Adding clinical utility to the laboratory reports: automation of interpretative comments. In laboratory medicine, consultation by adding interpretative comments to reports has long been recognized as one of the activities that help to improve patient treatment outcomes and strengthen the position of our profession. Interpretation and understanding of laboratory test results might in some cases be considerably enhanced by adding tests when considered appropriate by the laboratory specialist - an activity known as reflective testing. With patient material available at this stage, this might considerably improve diagnostic efficiency. The need and value of these forms of consultation have been proven by a diversity of studies. Both general practitioners and medical specialists have been shown to value interpretative comments. Other forms of consultation are emerging: in this time of patient empowerment and shared decision making, reporting of laboratory results to patients will become common. Patients generally have little understanding of these results, and consultation of patients could add a new dimension to the service of the laboratory. These developments have been recognized by the European Federation of Clinical Chemistry and Laboratory Medicine, which has established the Working Group on Patient Focused Laboratory Medicine to work on the matter. Providing proper interpretative comments is, however, labor intensive because harmonization is necessary to maintain quality between individual specialists. In present-day high-volume laboratories, there are few options for generating high-quality, patient-specific comments for all the relevant results without overwhelming the laboratory specialists. Automation and application of expert systems could be a solution, and systems have been developed that could ease this task.
abstract_id: PUBMED:27641826
Assuring the quality of interpretative comments in clinical chemistry. The provision of interpretative advice on laboratory results is a post-analytic activity and an integral part of clinical laboratory services. It is valued by healthcare workers and has the potential to prevent or reduce errors and improve patient outcomes. It is important to ensure that interpretative comments provided by laboratory personnel are of high quality: comments should be patient-focused and answer the implicit or explicit question raised by the requesting clinician. Comment providers need to be adequately trained and qualified and be able to demonstrate their proficiency to provide advice on laboratory reports. External quality assessment (EQA) schemes can play a part in assessing and demonstrating the competence of such laboratory staff and have an important role in their education and continuing professional development. A standard structure is proposed for EQA schemes for interpretative comments in clinical chemistry, which addresses the scope and method of assessment including nomenclature and marking scales. There is a need for evidence that participation in an EQA program for interpretative commenting facilitates improved quality of comments. It is proposed that standardizing goals and methods of assessment as well as nomenclature and marking scales may help accumulate evidence to demonstrate the impact of participation in EQA for interpretative commenting on patient outcome.
abstract_id: PUBMED:21670094
A national survey of interpretative reporting in the UK. Aims: There is still debate as to whether the addition of interpretative comments to laboratory reports can influence the management of patients. Little is known about the extent of this activity in individual laboratories throughout the UK and so this national survey aimed to establish the prevalence.
Methods: An electronic questionnaire was sent to 196 NHS laboratories in the UK asking whether 17 commonly requested groups of tests were reported with interpretative comments and, if so, how laboratory computers and/or humans were involved in the process. Enquiries were also made of the grades of staff performing the process and of any 'vignette' examples where interpretative reporting had improved the clinical outcome for the patient.
Results: A total of 138 of the 196 laboratories (70%) responded. Only two laboratories did not have staff adding interpretative comments to any of the 17 tests. Consultant laboratory staff reporting predominated in all tests with a significant minority also being added by biomedical scientists. High-volume requests usually had staff adding comments to results selected by computer rules whereas more of the specialist endocrine tests tended to be considered for comment. Only six of 71 vignettes referred specifically to 'routine' biochemistry.
Conclusions: The addition of interpretative comments onto clinical biochemistry reports is widespread throughout the UK. This service is largely consultant led. There is anecdotal evidence that the process can influence the clinical management of patients.
abstract_id: PUBMED:34286057
Interpretative commenting in clinical chemistry with worked examples for thyroid function test reports. Correct interpretation of pathology results is a requirement for accurate diagnosis and appropriate patient management. Clinical Pathologists and Scientists are increasingly focusing on providing quality interpretative comments on their reports and these comments are appreciated by clinicians who receive them. Interpretative comments may improve patient outcomes by helping reduce errors in application of the results in patient management. Thyroid function test (TFT) results are one of the areas in clinical chemistry where interpretative commenting is practised by clinical laboratories. We have provided a series of TFT reports together with possible interpretative comments and a brief explanation of the comments. It is felt that this would be of help in setting up an interpretative service for TFTs and also assist in training and continuing education in their provision.
abstract_id: PUBMED:29389663
Harmonization of interpretative comments in laboratory hematology reporting: the recommendations of the Working Group on Diagnostic Hematology of the Italian Society of Clinical Chemistry and Clinical Molecular Biology (WGDH-SIBioC). The goal of harmonizing laboratory testing is to contribute to improving the quality of patient care and ultimately to ameliorate patient outcomes. The complete blood and leukocyte differential counts are among the most frequently requested clinical laboratory tests. The morphological assessment of peripheral blood cells (PB) through microscopic examination of properly stained blood smears is still considered a hallmark of laboratory hematology. Nevertheless, variable inter-observer experience and the different terminology used for characterizing cellular abnormalities both contribute to the current lack of harmonization in blood smear revision. In 2014, the Working Group on Diagnostic Hematology of the Italian Society of Clinical Chemistry and Clinical Molecular Biology (WGDH-SIBioC) conducted a national survey, collecting responses from 78 different Italian laboratories. The results of this survey highlighted a lack of harmonization of interpretative comments in hematology, which prompted the WGDH-SIBioC to develop a project on "Harmonization of interpretative comments in the laboratory hematology report", aimed at identifying appropriate comments and proposing a standardized reporting system. The comments were then revised and updated according to the 2016 revision of the World Health Organization classification of hematologic malignancies. In 2016, the WGDH-SIBioC published its first consensus-based recommendations for interpretative comments in laboratory hematology reporting, with the aims of (a) reducing their overall number, (b) standardizing the language, (c) providing information that can be easily comprehended by clinicians and patients, (d) increasing the quality of the clinical information, and (e) suggesting additional diagnostic tests when necessary. This paper represents a review of the recommendations of that document.
abstract_id: PUBMED:12849907
Interpretative comments and reference ranges in EQA programs as a tool for improving laboratory appropriateness and effectiveness. Introduction: Laboratory information is generated when a meaning is given to certain data. This is usually achieved by comparing a laboratory test result with the reference range/decisional limit (RL), and by providing consultation for the interpretation of data, advice, and follow-up testing.
Aim: In this paper, we investigate factors affecting the conversion of data into useful information with regard to biochemical markers of myocardial damage (CK-MB mass, myoglobin, and troponins), in view of their importance in detecting myocardial necrosis. Our aim was to report results obtained in order to verify the consensus between laboratories with reference to interpretative comments and the reference ranges/decisional limits added to clinical reports.
Methods: A questionnaire and simulated medical reports on three different patients were distributed to participants (94 laboratories) in the 2001 cycle of the External Quality Assessment (EQA). Moreover, we analysed 113 medical reports sent by laboratories during the most recent EQA cycle (2002), and checked the number of different RLs used, both independently of and within the diagnostic system used. We also compared each laboratory result of a control sample, obtained in the 2002 cycle, with the declared RL in order to verify the clinical significance of results ("normal" or "pathological") for troponin I and CK-MB.
Results: Our findings show that few laboratories regularly add interpretative comments to medical reports. They do, however, cooperate with clinicians who require consultation, advice, and information for the appropriate use of biochemical markers. There is a general consensus among participants regarding probable syndromes suggested by the interpretation of the same result, and most laboratories also agree on further investigations to be carried out for several diseases. Concerning RLs, the data demonstrate that numerous different RLs are used to report the results of the biochemical markers evaluated, both independently of the diagnostic system used and within it.
Discussion And Conclusions: The biochemist does not have the opportunity to verify the efficacy of the interpretation that he/she provided. An audit of this activity is therefore required to allow the laboratory to monitor its own performance and to assure good practice. The evaluation of interpretative comments, through specific surveys, should be a prime objective of EQA organisers. Well-designed EQA programs can, moreover, support laboratories in establishing appropriate RL and in verifying the clinical significance of their results with respect to that of other laboratories. Our survey on interpretative comments and the analysis of the RLs further demonstrate how laboratory medicine can contribute to the objective evaluation of the patients' health status.
Answer: Yes, the addition of interpretative comments to laboratory reports can influence outcome. Studies have shown that interpretative comments can lead to a reduction in the proportion of patients whose treatment is inadequately managed. For example, a study in Hull found that after the introduction of interpretative comments on thyroid function test requests, there was a 22% reduction in the number of samples indicating thyroxine under-replacement over three years (PUBMED:15117437). Similarly, targeted reflex laboratory comments were found to influence the clinical management of patients, as evidenced by the increased number of GP referrals to the West Midlands Familial Hypercholesterolaemia service from Wolverhampton compared to other regions not provided with reflex comments (PUBMED:32369399).
Furthermore, interpretative comments are valued by healthcare workers and have the potential to prevent or reduce errors and improve patient outcomes (PUBMED:27641826). Anecdotal evidence suggests that interpretative reporting can influence the clinical management of patients (PUBMED:21670094). Additionally, the provision of interpretative comments is a professional obligation and is becoming increasingly important, as it may improve patient outcomes by helping reduce errors in the application of results in patient management (PUBMED:36447805, PUBMED:34286057).
However, the quality of interpretative comments is dependent on several factors, including the availability of patient medical records, communication with physicians, and the professional knowledge of the laboratory staff providing the comments (PUBMED:36447805). Harmonization of interpretative comments is necessary to maintain quality and consistency across different specialists and laboratories (PUBMED:30367781, PUBMED:29389663). External quality assessment schemes can help in assessing and demonstrating the competence of laboratory staff in providing interpretative comments (PUBMED:27641826).
In conclusion, the evidence suggests that interpretative comments on laboratory reports can positively influence patient management and outcomes when they are of high quality and appropriately harmonized across the healthcare system. |
Instruction: Does a kaolin-impregnated hemostatic dressing reduce intraoperative blood loss and blood transfusions in pediatric spinal deformity surgery?
Abstracts:
abstract_id: PUBMED:24921838
Does a kaolin-impregnated hemostatic dressing reduce intraoperative blood loss and blood transfusions in pediatric spinal deformity surgery? Study Design: Retrospective case-control study.
Objective: To evaluate the hemostatic benefits of using a kaolin-impregnated dressing during pediatric spinal deformity correction surgery.
Summary Of Background Data: Minimizing blood loss and transfusions are clear benefits for patient safety. A technique common in both severe trauma and combat medicine that has not been reported in the spine literature is wound packing with a kaolin-impregnated hemostatic dressing.
Methods: Estimated blood loss and transfusion amounts were analyzed in a total of 117 retrospectively identified cases. The control group included 65 patients (46 females, 19 males, 12.7±4.5 yr, 10.2±4.8 levels fused) who received standard operative care with gauze packing between June 2007 and March 2010. The treatment group included 52 patients (33 females, 19 males, 13.9±3.2 yr, 10.4±4.3 levels fused) who underwent intraoperative packing with QuikClot Trauma Pads (QCTP, Z-Medica Corporation) for all surgical procedures from July 2010 to August 2011. No other major changes in the use of antifibrinolytics or perioperative, surgical, or anesthesia technique were noted. Statistical differences were analyzed using analysis of covariance in R, with a P value of less than 0.05 considered significant. The statistical model included sex, age, weight, scoliosis type, the number of vertebral levels fused, and surgery duration as covariates.
Results: The treatment group had 40% less intraoperative estimated blood loss than the control group (974 mL vs. 1620 mL) (P<0.001). Patients who received the QCTP treatment also had 42% less total perioperative transfusion volume (499 mL vs. 862 mL) (P<0.01).
Conclusion: The use of a kaolin-impregnated intraoperative trauma pad seems to be an effective and inexpensive method to reduce intraoperative blood loss and transfusion volume in pediatric spinal deformity surgery.
Level Of Evidence: 3.
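As an illustrative aside (not part of the cited study), an adjusted treatment-versus-control comparison of the kind described above can be sketched as an ANCOVA-style regression. The minimal Python example below uses statsmodels on synthetic data; every variable name and value is hypothetical and only mirrors the covariates listed in the abstract.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic example data; all variable names and values are hypothetical.
rng = np.random.default_rng(1)
n = 117
df = pd.DataFrame({
    "ebl_ml": rng.normal(1300, 400, n),          # estimated blood loss (mL)
    "group": rng.choice(["gauze", "qctp"], n),   # control packing vs. kaolin pad
    "sex": rng.choice(["female", "male"], n),
    "age": rng.normal(13, 4, n),
    "weight_kg": rng.normal(45, 12, n),
    "levels_fused": rng.integers(5, 16, n),
    "duration_min": rng.normal(350, 80, n),
})

# ANCOVA-style model: group effect on blood loss adjusted for covariates.
model = smf.ols(
    "ebl_ml ~ C(group) + C(sex) + age + weight_kg + levels_fused + duration_min",
    data=df,
).fit()

# The coefficient for C(group)[T.qctp] estimates the covariate-adjusted
# between-group difference in blood loss; its p-value tests the group effect.
print(model.summary())
```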
abstract_id: PUBMED:28696876
Experience Using Kaolin-Impregnated Sponge to Minimize Perioperative Bleeding in Norwood Operation. Purpose: A kaolin-impregnated hemostatic sponge (QuikClot) is reported to reduce intraoperative blood loss in trauma and noncardiac surgery. The purpose of this study was to assess if this sponge was effective for hemostasis during Norwood operation.
Description: We conducted a retrospective review of patients undergoing Norwood operation in infancy between 2011 and 2016 at our institution.
Evaluation: Of 31 identified Norwood operations, a kaolin-impregnated sponge was used intraoperatively in 15 (48%) patients. The preoperative profiles and cardiopulmonary bypass status were similar between the operations with and without the kaolin-impregnated sponge. Comparison of operative outcomes between operations with and without the kaolin-impregnated sponge showed that intraoperative platelet, cryoprecipitate, and factor VII dosages were significantly lower in the operations with the kaolin-impregnated sponge (55 mL, 10 mL, 0 µg/kg vs 72 mL, 15 mL, 45 µg/kg; P = .03, .021, .019), as was the incidence of perioperative bleeding complications (second cardiopulmonary bypass for hemostasis or postoperative mediastinal exploration, 0% vs 31%, P = .043). A logistic regression model showed that nonuse of the kaolin-impregnated sponge and longer aortic cross-clamp time were associated with perioperative bleeding complications in the univariable model (P = .02 and .005).
Conclusions: Use of kaolin-impregnated hemostatic sponge was associated with reduced blood product use and perioperative bleeding complications in Norwood operation at a single institution.
abstract_id: PUBMED:30391604
Cost-Effectiveness of a Radio Frequency Hemostatic Sealer (RFHS) in Adult Spinal Deformity Surgery. Background: Patients undergoing posterior spinal fusion surgery can lose a substantial amount of blood. This can prolong operative time and require transfusion of allogeneic blood components, which increases the risk of infection and can be the harbinger of serious complications. Does a saline-irrigated bipolar radiofrequency hemostatic sealer (RFHS) help reduce transfusion requirements?
Methods: In an observational cohort study, we compared transfusion requirements in 30 patients undergoing surgery for adult spinal deformity using the RFHS with those of a historical control group of 30 patients in whom traditional hemostasis was obtained with bipolar electrocautery, matched for blood loss-related variables. Total expense to the hospital for the RFHS, laboratory expenses, and blood transfusions was used for cost calculations. The incremental cost-effectiveness ratio was calculated using the number of blood transfusions avoided as the effectiveness payoff.
Results: Using a multivariable linear regression model, we found that only estimated blood loss (EBL) was an independent significant predictor of transfusion requirement in both groups. We evaluated the variables of age, EBL, time duration of surgery, preoperative hemoglobin, hemoglobin nadir during surgery, body mass index, length of stay, and number of levels operated on. Mean EBL was greater in the control group (2201 vs. 1416 mL, P = 0.0099). The number of transfusions also was greater in the control group (14.5 vs. 6.5, P = 0.0008). In the cost-effectiveness analysis, we found that the RFHS cost $108 more (compared with not using the RFHS) to avoid 1 unit of blood transfusion.
Conclusions: The cost-effectiveness analysis revealed that if we are willing to pay $108 to avoid 1 unit of blood transfusion, the use of the RFHS is a reasonable choice to use in open surgery for adult spinal deformity.
abstract_id: PUBMED:31938967
Advances in surgical hemostasis: a comprehensive review and meta-analysis on topical tranexamic acid in spinal deformity surgery. Tranexamic acid (TXA) is an effective and commonly used hemostatic agent for perioperative blood loss in various surgical specialties. It is being increasingly used in spinal deformity surgery. We aimed to evaluate the safety and efficacy of topical TXA (tTXA) compared to placebo and/or intravenous (IV) TXA in patients undergoing spinal deformity surgery. We conducted a systematic review of the electronic databases using different MeSH terms from January 1970 to August 2019. Pooled and subgroup analyses were performed using fixed- and random-effects models based upon the heterogeneity (I²). A total of 609 patients (tTXA: n = 258, 42.4%) from 8 studies were included. We found statistically significant differences in terms of (i) postoperative blood loss [mean difference (MD) -147.1, 95% CI -189.5 to -104.8, p < 0.00001], (ii) postoperative hemoglobin level (MD 1.09, 95% CI 0.45 to 1.72, p = 0.0008), (iii) operative time (MD 7.47, 95% CI 2.94 to 12.00, p < 0.00001), (iv) postoperative transfusion rate [odds ratio (OR) 0.39, 95% CI 0.20 to 0.78, p = 0.007], (v) postoperative drain output (MD -184.0, 95% CI -222.03 to -146.04, p < 0.00001), and (vi) duration of hospital stay (MD -1.14, 95% CI -1.44 to -0.85, p < 0.00001) in patients treated with tTXA compared to the control group. However, there was no significant difference in terms of intraoperative blood loss (p = 0.13) or complications (p = 0.23) between the two groups. Furthermore, low-dose (250-500 mg) tTXA (p < 0.00001) reduced postoperative blood loss more effectively than high-dose tTXA (1-3 g) (p = 0.001). Our meta-analysis corroborates the effectiveness and safety of tTXA in spinal deformity surgery.
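As an illustrative aside (not drawn from the cited meta-analysis), fixed-effect inverse-variance pooling of mean differences, together with the I² statistic that guides the choice between fixed- and random-effects models, can be sketched in a few lines of Python. The per-study values below are made up purely for demonstration.
```python
import numpy as np

# Hypothetical per-study mean differences (MD) in postoperative blood loss (mL)
# and their standard errors; these values are illustrative only.
md = np.array([-160.0, -120.0, -155.0])
se = np.array([35.0, 50.0, 40.0])

# Fixed-effect (inverse-variance) pooling
w = 1.0 / se**2                                  # study weights
pooled_md = np.sum(w * md) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci_low = pooled_md - 1.96 * pooled_se
ci_high = pooled_md + 1.96 * pooled_se

# Cochran's Q and I^2 to quantify heterogeneity across studies
q = np.sum(w * (md - pooled_md) ** 2)
i2 = max(0.0, (q - (len(md) - 1)) / q) * 100.0

print(f"pooled MD = {pooled_md:.1f} mL, 95% CI [{ci_low:.1f}, {ci_high:.1f}], I^2 = {i2:.0f}%")
```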
abstract_id: PUBMED:11453426
Management of complex pediatric and adolescent spinal deformity. Object: The authors sought to analyze prospectively the outcome of surgery for complex spinal deformity in the pediatric and young adult populations.
Methods: The authors evaluated all pediatric and adolescent patients undergoing operative correction of complex spinal deformity from December 1997 through July 1999. No patient was lost to follow-up review (average 21.1 months). There were 27 consecutive pediatric and adolescent patients (3-20 years of age) who underwent 32 operations. Diagnoses included scoliosis (18 idiopathic, five nonidiopathic) and severe kyphoscoliosis (four cases). Operative correction and arthrodesis were achieved via 21 posterior approaches (Cotrel-Dubousset-Horizon), seven anterior approaches (Isola or Kaneda Scoliosis System), and two combined approaches. Operative time averaged 358 minutes (range 115-620 minutes). Blood loss averaged 807 ml (range 100-2,000 ml). Levels treated averaged 9.1 (range 3-16 levels). There was a 54% average Cobb angle correction (range 6-82%). No case was complicated by neurological deterioration, loss of somatosensory evoked potential monitoring, cardiopulmonary disease, donor-site complications, or wound breakdown. There was one case of hook failure and one progression of deformity beyond the site of surgical instrumentation that required reoperation. There were 10 minor complications that did not significantly affect patient outcome. No patient received undirected banked blood products. There was a significant improvement in cosmesis, and no patient experienced continued pain postoperatively. All patients have been able to return to their preoperative activities.
Conclusions: Compared with other major neurosurgical operations, segmental instrumentation for pediatric and adolescent spinal deformity is a safe procedure with minimal morbidity and there is a low risk of needing to use allogeneic blood products.
abstract_id: PUBMED:28061495
Effectiveness and Safety of Tranexamic Acid in Spinal Deformity Surgery. Objective: Spinal deformity surgery has the potential risk of massive blood loss. To reduce surgical bleeding, the use of tranexamic acid (TXA) became popular in spinal surgery, recently. The purpose of this study was to determine the effectiveness of intra-operative TXA use to reduce surgical bleeding and transfusion requirements in spinal deformity surgery.
Methods: A total of 132 consecutive patients undergoing multi-level posterior spinal segmental instrumented fusion (≥5 levels) were analyzed retrospectively. Primary outcome measures included intraoperative estimated blood loss (EBL), transfusion amount and rate of transfusion. Secondary outcome measures included postoperative transfusion amount, rate of transfusion, and complications associated with TXA or allogeneic blood transfusions.
Results: There were 89 patients in the TXA group and 43 in the non-TXA group. There were no significant differences in demographic or surgical traits between the groups except for hypertension. The EBL was significantly lower in the TXA group than in the non-TXA group (841 vs. 1336 mL, p=0.002). The TXA group also showed lower intraoperative and postoperative transfusion requirements (544 vs. 812 mL, p=0.012; 193 vs. 359 mL, p=0.034). Based on multiple regression analysis, TXA use could reduce surgical bleeding by 371 mL (37% of mean EBL). The complication rate did not differ between the groups.
Conclusion: TXA use can effectively reduce the amount of intraoperative bleeding and transfusion requirements in spinal deformity surgery. A future randomized controlled study could confirm the routine use of TXA in major spinal surgery.
abstract_id: PUBMED:30926553
The Efficacy and Safety of Epsilon-Aminocaproic Acid for Blood Loss and Transfusions in Spinal Deformity Surgery: A Meta-Analysis. Objective: To assess the efficacy and safety of epsilon-aminocaproic acid (EACA) in reducing the blood loss and transfusion volume during open spinal deformity surgery.
Methods: A systematic search was conducted for all studies written in English published on or before October 2018 in PubMed, EMBASE, and the Cochrane Library that compared antifibrinolytic agents with placebos for open spinal deformity surgeries. The primary outcomes included the total blood loss, intraoperative, and postoperative blood loss, transfusions volume and complication rate.
Results: Seven studies including 525 patients diagnosed with spinal deformity were included. Compared with placebo, the patients who received EACA showed a reduction in postoperative blood loss (mean difference [MD] -249.80; 95% confidence interval [CI] -375.65 to -123.95; P = 0.0001) and total blood loss (MD -670.30; 95% CI -1142.63 to -197.98; P = 0.005). Furthermore, the patients treated with EACA received approximately 1.67 fewer units of blood (MD -1.67; 95% CI -3.10 to -0.24; P = 0.02). However, in this cohort, no statistically significant differences were observed in intraoperative blood loss (MD -452.19; 95% CI -1082.21 to 177.83; P = 0.16) or complication rate (odds ratio 0.73; 95% CI 0.16-3.24; P = 0.68).
Conclusions: This meta-analysis demonstrated that EACA could be safe and potentially efficacious for reducing blood loss and transfusions volume in patients with spinal deformity surgeries when compared with placebo. In light of the significant heterogeneity, the findings of this meta-analysis should be confirmed in methodologically rigorous and adequately powered clinical trials.
abstract_id: PUBMED:27267013
Hemostasis in Pediatric Surgery. Hemostasis is an important concept in pediatric otolaryngologic surgery. This article details the considerations the otolaryngologist should take into account with respect to clinical evaluation and surgical technique. It begins with the preoperative evaluation and progresses to the use of different mechanical and chemical methods of operative hemostasis. We detail the use of different hemostatic techniques in common pediatric procedures, and finally, we discuss indications for intraoperative and postoperative blood transfusion in pediatric patients if the surgeon encounters significant intraoperative hemorrhage. This paper gives a comprehensive look into the hemostatic considerations for the pediatric patient from the preoperative to the postoperative period.
abstract_id: PUBMED:25868100
Antifibrinolytics reduce blood loss in adult spinal deformity surgery: a prospective, randomized controlled trial. Study Design: This is a prospective, randomized, double-blinded comparison of tranexamic acid (TXA), epsilon aminocaproic acid (EACA), and placebo used intraoperatively in patients with adult spinal deformity.
Objective: The purpose of this study was to provide high-quality evidence regarding the comparative efficacies of TXA, EACA, and placebo in reducing blood loss and transfusion requirements in patients undergoing posterior spinal fusion surgery.
Summary Of Background Data: Spine deformity surgery usually involves substantial blood loss. The antifibrinolytics TXA and EACA have been shown to improve hemostasis in large blood loss surgical procedures.
Methods: Fifty-one patients undergoing posterior spinal fusion of at least 5 levels for correction of adult spinal deformity were randomized to 1 of 3 treatment groups. Primary outcome measures included intraoperative estimated blood loss, total loss, (estimated blood loss + postoperative blood loss), and transfusion rates.
Results: Patients received TXA (n = 19), EACA (n = 19), or placebo (n = 13) in the operating room (mean ages: 60, 47, and 43 yr, respectively); TXA patients were significantly older and had larger estimated blood volumes than both other groups. Total losses were significantly reduced for EACA versus control, and there was a demonstrable but nonsignificant trend toward reduced intraoperative blood loss in both antifibrinolytic arms versus control. EACA had significant reductions in postoperative blood transfusions versus TXA.
Conclusion: The findings in this study support the use of antifibrinolytics to reduce blood loss in posterior adult spinal deformity surgery.
Level Of Evidence: 1.
abstract_id: PUBMED:36866794
Spinal deformity surgery in patients for whom blood transfusion is not an option: a single-center experience. Objective: Spinal deformity surgery is associated with significant blood loss, often requiring the transfusion of blood and/or blood products. For patients declining blood or blood products, even in the face of life-threatening blood loss, spinal deformity surgery has been associated with high rates of morbidity and mortality. For these reasons, patients for whom blood transfusion is not an option have historically been denied spinal deformity surgery.
Methods: The authors retrospectively reviewed a prospectively collected data set. All patients declining blood transfusion who underwent spinal deformity surgery at a single institution between January 2002 and September 2021 were identified. Demographics collected included age, sex, diagnosis, details of any prior surgery, and medical comorbidities. Perioperative variables included levels decompressed and instrumented, estimated blood loss, blood conservation techniques used, length of surgery, length of hospital stay, and complications from surgery. Radiographic measurements included, where appropriate, sagittal vertical axis correction, Cobb angle correction, and regional angular correction.
Results: Spinal deformity surgery was performed in 31 patients (18 male, 13 female) over 37 admissions. The median age at surgery was 41.2 years (range 10.9-70.1 years), and 64.5% had significant medical comorbidities. A median of 9 levels (range 5-16 levels) were instrumented per surgery, and the median estimated blood loss was 800 mL (range 200-3000 mL). Posterior column osteotomies were performed in all surgeries, and pedicle subtraction osteotomies in 6 cases. Multiple blood conservation techniques were used in all patients. Preoperative erythropoietin was administered prior to 23 surgeries, intraoperative cell salvage was used in all, acute normovolemic hemodilution was performed in 20, and perioperative administration of antifibrinolytic agents was performed in 28 surgeries. No allogenic blood transfusions were administered. Surgery was staged intentionally in 5 cases, and there was 1 unintended staging due to intraoperative blood loss from a vascular injury. There was 1 readmission for a pulmonary embolus. There were 2 minor postoperative complications. The median length of stay was 6 days (range 3-28 days). Deformity correction and the goals of surgery were achieved in all patients. Two patients underwent revision surgery during the follow-up period: one for pseudarthrosis and the other for proximal junctional kyphosis.
Conclusions: With proper preoperative planning and judicious use of blood conservation techniques, spinal deformity surgery may be performed safely in patients for whom blood transfusion is not an option. The same techniques can be applied widely to the general population in order to minimize blood loss and the need for allogeneic blood transfusion.
Answer: Yes, a kaolin-impregnated hemostatic dressing does reduce intraoperative blood loss and blood transfusions in pediatric spinal deformity surgery. A retrospective case-control study evaluated the hemostatic benefits of using a kaolin-impregnated dressing during pediatric spinal deformity correction surgery. The study found that the treatment group, which underwent intraoperative packing with QuikClot Trauma Pads (QCTP), had 40% less intraoperative estimated blood loss and 42% less total perioperative transfusion volume compared to the control group that received standard operative care with gauze packing. This suggests that the use of a kaolin-impregnated intraoperative trauma pad is an effective and inexpensive method to reduce intraoperative blood loss and transfusion volume in pediatric spinal deformity surgery (PUBMED:24921838). |
Instruction: Oxidative stress in diabetic macrovascular disease: does homocysteine play a role?
Abstracts:
abstract_id: PUBMED:32628815
Protective role of NRF2 in macrovascular complications of diabetes. Macrovascular complications develop in over half of diabetic individuals, resulting in high morbidity and mortality. This poses a severe threat to public health and places a heavy burden on society and the economy. It is therefore important to develop effective approaches to prevent or slow down the pathogenesis and progression of macrovascular complications of diabetes (MCD). Oxidative stress is a major contributor to MCD. Nuclear factor (erythroid-derived 2)-like 2 (NRF2) governs the cellular antioxidant defence system by activating the transcription of various antioxidant genes, combating diabetes-induced oxidative stress. Accumulating experimental evidence has demonstrated that NRF2 activation protects against MCD. Structural inhibition of Kelch-like ECH-associated protein 1 (KEAP1) is a canonical way to activate NRF2. More recently, novel approaches, such as activation of Nfe2l2 gene transcription, decreasing KEAP1 protein level by microRNA-induced degradation of Keap1 mRNA, prevention of proteasomal degradation of NRF2 protein, and modulation of other upstream regulators of NRF2, have emerged for the prevention of MCD. This review provides a brief introduction to the pathophysiology of MCD and the role of oxidative stress in the pathogenesis of MCD. By reviewing previous work on the activation of NRF2 in MCD, we summarize strategies to activate NRF2, providing clues for future intervention in MCD. Controversies over NRF2 activation and future perspectives are also provided in this review.
abstract_id: PUBMED:24655140
Oxidative stress: meeting multiple targets in pathogenesis of diabetic nephropathy. Excessive production of reactive oxygen species is an important mechanism underlying the pathogenesis of diabetes-associated macrovascular and microvascular complications, including diabetic nephropathy. Diabetic nephropathy is characterized by glomerular enlargement, early albuminuria and progressive glomerulosclerosis. The pathogenesis of diabetic nephropathy is multi-factorial and the precise mechanisms are unclear. Hyperglycemia-mediated dysregulation of various pathways either enhances the intensity of oxidative stress or is itself affected by oxidative stress. Thus, oxidative stress has been considered a central mediator in the progression of nephropathy in patients with diabetes. In this review, we have focused on current perspectives in oxidative stress signaling to determine common biological processes whereby diabetes-induced oxidative stress plays a central role in the progression of diabetic nephropathy.
abstract_id: PUBMED:27903991
Role of Oxidative Stress and Inflammatory Factors in Diabetic Kidney Disease. Diabetic nephropathy (DN) is a serious complication of diabetes mellitus, and its prevalence has been increasing in developed countries. Diabetic nephropathy has become the most common single cause of end-stage renal disease worldwide. Oxidative stress and inflammatory factors are hypothesized to play a role in the development of late diabetes complications. Chronic hyperglycemia increases oxidative stress, significantly modifies the structure and function of proteins and lipids, and induces glycoxidation and peroxidation. Hyperglycemia causes auto-oxidation of glucose, glycation of proteins, and activation of the polyol pathway. Overproduction of intracellular reactive oxygen species contributes to several microvascular and macrovascular complications of DN. In turn, reactive oxygen species modulate the signaling cascades of immune factors. An increase in reactive oxygen species can increase the production of inflammatory cytokines, and likewise, an increase in inflammatory cytokines can stimulate the production of free radicals. Some studies have shown that kidney inflammation plays a serious role in promoting the development and progression of DN. Inflammatory factors, which are activated by metabolic, biochemical, and hemodynamic derangements, are known to exist in the diabetic kidney. This review discusses the evidence for oxidative stress and inflammatory factors in DN, encompassing the role of immune and inflammatory cells, inflammatory cytokines, and oxidative stress factors.
abstract_id: PUBMED:20022399
Molecular pathology of oxidative stress in diabetic angiopathy: role of mitochondrial and cellular pathways. Diabetes mellitus is characterized by chronic hyperglycaemia and a significant risk of developing micro- and macrovascular complications. Growing evidence suggests that increased oxidative stress, induced by several hyperglycaemia-activated pathways, is a key factor in the pathogenesis of endothelial dysfunction and vascular disease. Reactive oxidant molecules, which are produced at a high rate in the diabetic milieu, can cause oxidative damage of many cellular components and activate several pathways linked with inflammation and apoptosis. Among the mechanisms involved in oxidative stress generation, mitochondria and uncoupling proteins are of particular interest and there is growing evidence suggesting their pivotal role in the pathogenesis of diabetic complications. Other important cellular sources of oxidants include nicotinamide adenine dinucleotide phosphate oxidases and uncoupled endothelial nitric oxide synthase. In addition, diabetes is associated with reduced antioxidant defences, which normally counteract the deleterious effects of oxidant species. This concept underlines a potential beneficial role of antioxidant therapy for the prevention and treatment of diabetic vascular disease. However, large-scale trials with classical antioxidants have failed to show a significant effect on major cardiovascular events, thus underlining the need for further investigations in order to develop therapies to prevent and/or delay the development of micro- and macrovascular complications.
abstract_id: PUBMED:9305300
Oxidative stress in diabetic macrovascular disease: does homocysteine play a role? Background: Non-insulin-dependent diabetes mellitus (NIDDM) and hyperhomocysteinemia are both associated with increased lipid peroxidation (oxidative stress). This may contribute to the accelerated vascular disease associated with these conditions. It is not known whether the coexistence of elevated homocysteine levels will stimulate oxidative stress further than that caused by diabetes alone.
Methods: Plasma concentrations of thiobarbituric acid reactive substances (TBARS), an index of lipid peroxidation, were measured in patients with NIDDM who had previously had a methionine load test; some of the patients had hyperhomocysteinemia.
Results: Plasma TBARS concentrations were elevated in diabetics with vascular disease. The additional presence of hyperhomocysteinemia was not associated with a further increase in plasma TBARS concentrations.
Conclusions: Lipid peroxidation is increased in patients with diabetes mellitus and macrovascular disease and is not further elevated by the coexistence of elevated homocysteine levels. It is possible that diabetes maximally stimulates oxidative stress and any further acceleration of vascular disease in patients who have coexistent hyperhomocysteinemia is mediated through mechanisms other than lipid peroxidation.
abstract_id: PUBMED:8742574
Oxidative stress and diabetic vascular complications. Long-term vascular complications still represent the main cause of morbidity and mortality in diabetic patients. Although prospective randomized long-term clinical studies comparing the effects of conventional and intensive therapy have demonstrated a clear link between diabetic hyperglycemia and the development of secondary complications of diabetes, they have not defined the mechanism through which excess glucose results in tissue damage. Evidence has accumulated indicating that the generation of reactive oxygen species (oxidative stress) may play an important role in the etiology of diabetic complications. This hypothesis is supported by evidence that many biochemical pathways strictly associated with hyperglycemia (glucose autoxidation, polyol pathway, prostanoid synthesis, protein glycation) can increase the production of free radicals. Furthermore, exposure of endothelial cells to high glucose leads to augmented production of superoxide anion, which may quench nitric oxide, a potent endothelium-derived vasodilator that participates in the general homeostasis of the vasculature. In further support of the consequential injurious role of oxidative stress, many of the adverse effects of high glucose on endothelial functions, such as reduced endothelial-dependent relaxation and delayed cell replication, are reversed by antioxidants. A rational extension of this proposed role for oxidative stress is the suggestion that the different susceptibility of diabetic patients to microvascular and macrovascular complications may be a function of the endogenous antioxidant status.
abstract_id: PUBMED:21838680
The role of oxidative stress in the pathogenesis of type 2 diabetes mellitus micro- and macrovascular complications: avenues for a mechanistic-based therapeutic approach. A growing body of evidence suggests that oxidative stress plays a key role in the pathogenesis of micro- and macrovascular diabetic complications. The increased oxidative stress in subjects with type 2 diabetes is a consequence of several abnormalities, including hyperglycemia, insulin resistance, hyperinsulinemia, and dyslipidemia, each of which contributes to mitochondrial superoxide overproduction in endothelial cells of large and small vessels as well as the myocardium. The unifying pathophysiological mechanism that underlies diabetic complications could be explained by increased production of reactive oxygen species (ROS) via: (1) the polyol pathway flux, (2) increased formation of advanced glycation end products (AGEs), (3) increased expression of the receptor for AGEs, (4) activation of protein kinase C isoforms, and (5) overactivity of the hexosamine pathway. Furthermore, the effects of oxidative stress in individuals with type 2 diabetes are compounded by the inactivation of two critical anti-atherosclerotic enzymes: endothelial nitric oxide synthase and prostacyclin synthase. Of interest, the results of clinical trials in patients with type 2 diabetes in whom intensive management of all the components of the metabolic syndrome (hyperglycemia, hypercholesterolemia, and essential hypertension) was attempted (with agents that exert a beneficial effect on serum glucose, serum lipid concentrations, and blood pressure, respectively) showed a decrease in adverse cardiovascular end points. The purpose of this review is (1) to examine the mechanisms that link oxidative stress to micro- and macrovascular complications in subjects with type 2 diabetes and (2) to consider the therapeutic opportunities that are presented by currently used therapeutic agents which possess antioxidant properties as well as new potential antioxidant substances.
abstract_id: PUBMED:27917671
The role of oxidative stress in the development of diabetic neuropathy. Diabetic neuropathy may be one of the most common and severe complications of diabetes mellitus. Oxidative stress plays a pivotal role in the development of microvascular complications of diabetes. Most of the related pathways, such as the polyol and hexosamine pathways, advanced glycation end-product formation, poly-ADP-ribose polymerase, and protein kinase C, originate from initial oxidative stress. In this review, the authors present the current oxidative stress hypothesis in diabetes mellitus and summarize the pathophysiological mechanisms of diabetic neuropathy associated with increased oxidative stress. The development of modern medicines to treat diabetic neuropathy will need intensive long-term comparative trials in the future. Orv. Hetil., 2016, 157(49), 1939-1946.
abstract_id: PUBMED:29204450
The Role of Oxidative Stress, Mitochondrial Function, and Autophagy in Diabetic Polyneuropathy. Diabetic polyneuropathy (DPN) is the most frequent and prevalent chronic complication of diabetes mellitus (DM). The state of persistent hyperglycemia leads to an increase in the production of cytosolic and mitochondrial reactive oxygen species (ROS) and favors deregulation of the antioxidant defenses that are capable of activating diverse metabolic pathways which trigger the presence of nitro-oxidative stress (NOS) and endoplasmic reticulum stress. Hyperglycemia provokes the appearance of micro- and macrovascular complications and favors oxidative damage to the macromolecules (lipids, carbohydrates, and proteins) with an increase in products that damage the DNA. Hyperglycemia produces mitochondrial dysfunction with deregulation between mitochondrial fission/fusion and regulatory factors. Mitochondrial fission appears early in diabetic neuropathy with the ability to facilitate mitochondrial fragmentation. Autophagy is a catabolic process induced by oxidative stress that involves the formation of vesicles by the lysosomes. Autophagy protects cells from diverse stress factors and routine deterioration. Clarification of the mechanisms involved in the appearance of complications in DM will facilitate the selection of specific therapeutic options based on the mechanisms involved in the metabolic pathways affected. Nowadays, the antioxidant agents consumed exogenously form an adjuvant therapeutic alternative in chronic degenerative metabolic diseases, such as DM.
abstract_id: PUBMED:33959621
The Role of Non-coding RNAs in Diabetic Nephropathy-Related Oxidative Stress. Diabetic nephropathy (DN) is one of the main complications of diabetes and the main cause of diabetic end-stage renal disease, which is often fatal. DN is usually characterized by progressive renal interstitial fibrosis, which is closely related to the excessive accumulation of extracellular matrix and oxidative stress. Non-coding RNAs (ncRNAs) are RNA molecules expressed in eukaryotic cells that are not translated into proteins. They are widely involved in the regulation of biological processes, such as, chromatin remodeling, transcription, post-transcriptional modification, and signal transduction. Recent studies have shown that ncRNAs play an important role in the occurrence and development of DN and participate in the regulation of oxidative stress in DN. This review clarifies the functions and mechanisms of ncRNAs in DN-related oxidative stress, providing valuable insights into the prevention, early diagnosis, and molecular therapeutic targets of DN.
Answer: The role of homocysteine in oxidative stress related to diabetic macrovascular disease has been investigated, but the evidence suggests that while both non-insulin-dependent diabetes mellitus (NIDDM) and hyperhomocysteinemia are associated with increased lipid peroxidation (oxidative stress), the presence of elevated homocysteine levels does not further increase oxidative stress beyond that caused by diabetes alone. A study measuring plasma concentrations of thiobarbituric acid reactive substances (TBARS), an index of lipid peroxidation, found that plasma TBARS concentrations were elevated in diabetics with vascular disease. However, the additional presence of hyperhomocysteinemia was not associated with a further increase in plasma TBARS concentrations. This suggests that diabetes may already maximally stimulate oxidative stress, and any further acceleration of vascular disease in patients with coexistent hyperhomocysteinemia is likely mediated through mechanisms other than lipid peroxidation (PUBMED:9305300).
Therefore, while oxidative stress is a significant factor in the pathogenesis of diabetic macrovascular complications, the specific role of homocysteine in this process may not be as impactful as the oxidative stress induced by hyperglycemia and other diabetes-related metabolic disturbances. |
Instruction: Does health-related quality of life predict injury event?
Abstracts:
abstract_id: PUBMED:21483187
Does health-related quality of life predict injury event? Background: Unintentional injury is a leading threat to children's health. Some human factors have been identified as predictors of unintentional injury. The association between Health-Related Quality of Life (HRQOL), as a human factor, and unintentional injuries is unclear. The objective of this study was to examine the association between HRQOL and unintentional injuries among primary school children.
Methods: This was a cross-sectional study conducted in Ahwaz, a city in Iran. Overall, 3375 children aged 6-10 years were randomly selected from primary schools. HRQOL was measured by 56 items taken from the seven domains of the Netherlands Organization for Applied Scientific Research Academic Medical Center (TNO AZL) child quality of life (TACQOL) parent form. Parents were interviewed to collect information about the incidence, cause, and a brief description of any injury within the 12 months prior to the study.
Results: The response rate was 3375 of 3792 (89%). There was a significant trend for increasing occurrence of injury with decreasing HRQOL score (p < 0.001). The adjusted OR for injury was significantly higher in the very low (2.38, 95% CI: 1.45-3.86), low (2.18, 95% CI: 1.34-3.56), and medium (1.73, 95% CI: 1.06-2.83) HRQOL groups compared to the reference group (very high HRQOL). The median total HRQOL score (P < 0.001) and all its domains (P = 0.017), except autonomous functioning, were lower in the injured group than in the uninjured group.
Conclusions: This study found an association between HRQOL and unintentional injury among primary school children. This is a preliminary finding and further investigations with a well-defined analytical design are needed.
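As an illustrative aside (not part of the cited study), adjusted odds ratios of the kind reported above are typically estimated with a logistic regression of the outcome on exposure categories plus covariates. The sketch below uses statsmodels on synthetic data; the variable names, categories, and values are hypothetical.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic example data; variable names, categories, and values are hypothetical.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "injury": rng.integers(0, 2, n),  # 1 = injured within the past 12 months
    "hrqol_group": rng.choice(
        ["very_high", "high", "medium", "low", "very_low"], n
    ),
    "age": rng.integers(6, 11, n),
    "sex": rng.choice(["male", "female"], n),
})

# Logistic regression with 'very_high' HRQOL as the reference category,
# adjusting for age and sex.
model = smf.logit(
    "injury ~ C(hrqol_group, Treatment(reference='very_high')) + age + C(sex)",
    data=df,
).fit(disp=False)

# Exponentiated coefficients give adjusted odds ratios with 95% CIs.
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```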
abstract_id: PUBMED:31565200
Health-related quality of life and related characteristics of persons with spinal cord injury in Nigeria. Background: Spinal cord injury (SCI) is an impairment of the spinal cord resulting in numerous health problems that considerably affect the quality of life (QOL) of the patients. Moreover, a number of sociodemographic and clinical characteristics may influence the persons' health-related quality of life (HRQOL). However, there is limited information on the HRQOL and related characteristics among affected persons living in Nigeria. This study explores the HRQOL and related characteristics of persons with SCI in Kano, Northwestern Nigeria. Methods: A prospective cross-sectional survey of 41 subjects with SCI and 40 age- and gender-matched healthy subjects was conducted from January to December 2016. Subjects' sociodemographic and clinical characteristics and HRQOL (using the SF-36 questionnaire) were collected and analyzed. Results: The majority of the subjects were men in both the SCI (85.4%) and healthy (82.5%) groups. The mean injury duration was 28.4 ± 20.2 months. Road traffic accidents (46.3%) were the leading cause of injury, with paraplegia (70.7%) being the most frequent level of injury. The largest proportion of the subjects (43.9%) had a complete impairment. Subjects with SCI had significantly lower HRQOL in the domains of general health, physical functioning, bodily pain, social functioning, role-emotional, and mental health compared to healthy controls. Gender, level of injury, and severity of injury were commonly found to be related to lower HRQOL scores. Conclusion: Persons with SCI from Kano, Northwestern Nigeria have lower HRQOL across various domains compared to healthy controls. Common factors related to lower HRQOL scores were gender, level of injury, and severity of injury. There is a need for optimal rehabilitation for persons with SCI in Kano, Northwestern Nigeria.
abstract_id: PUBMED:29338955
Declining Health-Related Quality of Life in the U.S. Introduction: Despite recent declining mortality of the U.S. population from most leading causes, uncertainty exists over trends in health-related quality of life.
Methods: The 2001-2002 and 2012-2013 National Epidemiologic Surveys on Alcohol and Related Conditions U.S. representative household surveys were analyzed for trends in health-related quality of life (n=79,402). Health-related quality of life was measured with the Short Form-6 Dimension scale derived from the Short Form-12. Changes in mean Short Form-6 Dimension ratings were attributed to changes in economic, social, substance abuse, mental, and medical risk factors.
Results: Mean Short Form-6 Dimension ratings decreased from 0.820 (2001-2002) to 0.790 (2012-2013; p<0.0001). In regressions adjusted for age, sex, race/ethnicity, and education, variable proportions of this decline were attributable to medical (21.9%; obesity, cardiac disease, hypertension, arthritis, medical injury), economic (15.6%; financial crisis, job loss), substance use (15.3%; substance use disorder or marijuana use), mental health (13.1%; depression and anxiety disorders), and social (11.2%; partner, neighbor, or coworker problems) risks. In corresponding adjusted models, a larger percentage of the decline in Short Form-6 Dimension ratings of older adults (aged ≥55 years) was attributable to medical (35.3%) than substance use (7.4%) risk factors, whereas the reverse occurred for younger adults (aged 18-24 years; 5.7% and 19.7%) and adults aged 25-44 years (12.7% and 16.3%).
Conclusions: Between 2001-2002 and 2012-2013, there was a significant decline in average quality of life ratings of U.S. adults. The decline was partially attributed to increases in several modifiable risk factors, with medical disorders having a larger role than substance use disorders for older adults but the reverse for younger and middle-aged adults.
abstract_id: PUBMED:35186604
Clinical and Demographic Predictors of Health-Related Quality of Life After Orthopedic Surgery With Implant Placement. Background: Orthopedic surgeries can rehabilitate injuries and at the same time improve the patients' quality of life. The study aimed to assess patients' health-related quality of life (HRQOL) six months after an orthopedic surgery with implant placement.
Materials And Methods: A cross-sectional study with the use of a structured questionnaire among 103 patients was conducted. The 36-Item Short Form Survey (SF-36) questionnaire was used to evaluate patients' quality of life.
Results: According to the findings of the multivariate linear regression analysis, younger age, marital status (married compared with unmarried/divorced/widowed), lower pain intensity, and lower educational attainment were associated with better quality of life. Furthermore, patients who were living with another person and patients who underwent surgery on a part of the body other than the hip reported better quality of life. The multivariate analysis explained 33%-67% of the variance in SF-36 HRQOL.
Conclusion: Measuring quality of life is a valuable tool that helps to identify frail patient groups, so that health professionals can prioritize their care and the state, in turn, can design primary care services to meet their needs after discharge from the hospital.
abstract_id: PUBMED:34331197
Development of prognostic models for Health-Related Quality of Life following traumatic brain injury. Background: Traumatic brain injury (TBI) is a leading cause of impairments affecting Health-Related Quality of Life (HRQoL). We aimed to identify predictors of and develop prognostic models for HRQoL following TBI.
Methods: We used data from the Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) Core study, including patients with a clinical diagnosis of TBI and an indication for computed tomography presenting within 24 h of injury. The primary outcome measures were the SF-36v2 physical (PCS) and mental (MCS) health component summary scores and the Quality of Life after Traumatic Brain Injury (QOLIBRI) total score 6 months post injury. We considered 16 patient and injury characteristics in linear regression analyses. Model performance was expressed as proportion of variance explained (R2) and corrected for optimism with bootstrap procedures.
Results: 2666 adult patients completed the HRQoL questionnaires. Most were mild TBI patients (74%). The strongest predictors for PCS were Glasgow Coma Scale, major extracranial injury, and pre-injury health status, while MCS and QOLIBRI were mainly related to pre-injury mental health problems, level of education, and type of employment. R2 of the full models was 19% for PCS, 9% for MCS, and 13% for the QOLIBRI. In a subset of patients following predominantly mild TBI (N = 436), including the 2-week HRQoL assessment improved model performance substantially (R2 for PCS from 15% to 37%, MCS from 12% to 36%, and QOLIBRI from 10% to 48%).
Conclusion: Medical and injury-related characteristics are of greatest importance for the prediction of PCS, whereas patient-related characteristics are more important for the prediction of MCS and the QOLIBRI following TBI.
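As an aside for readers unfamiliar with the optimism correction mentioned in the Methods above, the short Python sketch below illustrates the general idea of bootstrap optimism correction for the R2 of a linear model. It is a minimal sketch with assumed inputs (a data frame `df`, predictor columns `X_cols`, and an outcome column name), not the CENTER-TBI analysis code.

```python
# Minimal sketch of bootstrap optimism correction for R^2 (illustrative only).
# `df`, `X_cols`, and `y_col` are assumed inputs, not the study's real variables.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def apparent_r2(data, X_cols, y_col):
    model = LinearRegression().fit(data[X_cols], data[y_col])
    return model, r2_score(data[y_col], model.predict(data[X_cols]))

def optimism_corrected_r2(df, X_cols, y_col, n_boot=200):
    _, r2_app = apparent_r2(df, X_cols, y_col)          # apparent performance
    optimism = []
    for _ in range(n_boot):
        boot = df.sample(len(df), replace=True)         # bootstrap resample
        model_b, r2_boot = apparent_r2(boot, X_cols, y_col)
        # test the bootstrap model on the original sample
        r2_test = r2_score(df[y_col], model_b.predict(df[X_cols]))
        optimism.append(r2_boot - r2_test)
    return r2_app - float(np.mean(optimism))            # optimism-corrected R^2
```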
abstract_id: PUBMED:37304095
Health-related quality of life in older women with injuries: a nationwide study. Objectives: This study aims to describe the health-related quality of life (HRQoL) and influencing factors of older women who experienced injuries.
Methods: This study is a secondary analysis of data from 4,217 women aged 65 years or older sampled from the Korea National Health and Nutrition Examination Survey (KNHANES) (2016-2020) database. Two-way analysis of variance was used to analyze the data.
Results: The mean HRQoL scores of older women with and without injuries were 0.81 ± 0.19 (n = 328) and 0.85 ± 0.17 (n = 3,889), respectively, which were significantly different (p < 0.001). The results of multiple regression analysis revealed that working, physical activity, BMI, osteoarthritis, stress, and subjective health status significantly affected the HRQoL of older women with injuries, and the explanatory power of the model was 29%.
Conclusion: The results of this study on factors affecting HRQoL can contribute to the understanding of the experience of older women with injuries and can be used as a reference to develop health promotion programs.
abstract_id: PUBMED:33726689
Hemodialysis patients perceived exercise benefits and barriers: the association with health-related quality of life. Background: Patients on hemodialysis have less exercise capacity and lower health-related quality of life than healthy individuals without chronic kidney disease (CKD). One of the factors that may influence exercise behavior among these patients is their perception of exercise benefits and barriers. The present study aimed to assess the perception of hemodialysis patients about exercise benefits and barriers and its association with patients' health-related quality of life.
Methods: In this cross-sectional study, 227 patients undergoing hemodialysis were randomly selected from two dialysis centers. Data collection was carried out using dialysis patient-perceived exercise benefits and barriers scale (DPEBBS) and kidney disease quality of life short form (KDQOL-SF). Data were analyzed using SPSS software ver. 21.
Results: The mean score of DPEBBS was 68.2 ± 7.4 (range: 24 to 96) and the mean KDQOL score was 48.9 ± 23.3 (range: 0 to 100). Data analysis by Pearson correlation coefficient showed a positive and significant relationship between the mean scores of DPEBBS and the total score of KDQOL (r = 0.55, p < 0.001). Moreover, there was a positive relationship between the mean scores of DPEBBS and the mean score of all domains of KDQOL.
Conclusion: Although most of the patients undergoing hemodialysis had a positive perception of exercise, the majority of them did not engage in exercise; this could be attributed to barriers such as tiredness, muscle fatigue, and fear of arteriovenous fistula injury. Providing exercise facilities, encouragement from health care providers to engage in exercise programs, and the incorporation of exercise professionals into hemodialysis centers could help patients engage in regular exercise.
abstract_id: PUBMED:34646711
Impact of Oral Diseases and Conditions on Oral Health-Related Quality of Life: A Narrative Review of Studies Conducted in the Kingdom of Saudi Arabia. Oral health-related quality of life (OHRQoL) is a novel concept that has evolved over the past two decades. The World Health Organization (WHO) has also recognized it as a significant part of the Global Oral Health Program (2003). Information on OHRQoL gives a better understanding of feelings and perceptions at the individual level. It also helps us to understand the impact of oral health on the lives of patients and their families. It is now well documented that oral diseases and conditions affect people's lives. Some oral diseases and conditions, such as caries, dental fluorosis, tooth loss, periodontal disease, dental injuries, oral cancer, dental anomalies, and craniofacial disorders, have a negative impact on QoL. This paper reviews the literature published on the impact of oral diseases and conditions on OHRQoL in the population of Saudi Arabia. Although numerous studies can be found in other countries, data on the Saudi Arabian population are limited, underscoring the need for more research in this area.
abstract_id: PUBMED:28077006
Longitudinal Assessment of Health-Related Quality of Life following Adolescent Sports-Related Concussion. To examine initial and longitudinal health-related quality of life (HRQOL) in adolescent sports-related concussion (SRC) patients, a prospective observational case-series study was conducted among adolescent SRC patients who were evaluated at a multi-disciplinary pediatric concussion program. Health-related quality of life was measured using the child self-report Pediatric Quality of Life Inventory (PedsQL) generic score scale (age 13-18 version) and the PedsQL Cognitive Functioning scale. Initial and longitudinal HRQOL outcomes were compared between patients who did and did not develop post-concussion syndrome (PCS). A total of 63 patients met the inclusion criteria during the study period. The mean age of the cohort was 14.57 years (standard deviation, 1.17) and 61.9% were male. The median time from injury to initial consultation was 6.5 days (interquartile range, 5, 11). At initial consultation, impairments in physical and cognitive HRQOL but not social or emotional HRQOL were observed. Initial symptom burden and length of recovery were associated with greater impairment in physical and cognitive HRQOL. Patients who went on to develop PCS had significantly worse physical and cognitive HRQOL at initial consultation and demonstrated a slower rate of recovery in these domains, compared with those who recovered in less than 30 days. Adolescent SRC was associated with HRQOL impairments that correlated with clinical outcomes. No persistent impairments in HRQOL were detected among patients who achieved physician-documented clinical recovery. Future studies are needed to evaluate the clinical utility of HRQOL measurement in the longitudinal management of adolescent SRC and PCS patients.
abstract_id: PUBMED:20735496
Illness representations predict health-related quality of life 6 months after hospital discharge in individuals with injury: a predictive survey. Aim: This article is a report of a study investigating health-related quality of life and its predictors 6 months after hospital discharge in individuals who had experienced moderate to severe injury.
Background: Understanding the relationship between individuals' illness representations and health-related quality of life may provide the foundation for strategies to promote recovery after injury.
Methods: A longitudinal predictive survey was undertaken. Questionnaires were administered while patients were in hospital, and telephone interviews were conducted 3 and 6 months after hospital discharge. Data were collected from August 2006 to February 2007. Demographic and clinical data were collected, and the Chinese Illness Perception Questionnaire Revised (Trauma) and the Medical Outcomes Study Short Form 36 were used. Simultaneous multiple regressions were used to identify predictors of health-related quality of life.
Results: A total of 114 participants completed the 6-month survey. Demographic and clinical characteristics did not predict health-related quality of life. Predictors of the 6-month Physical Summary Score were the 3-month Physical Summary Score and two illness representation scales: Identity and Timeline (acute/chronic). Predictors of the 6-month Mental Summary Score were the 3-month Mental Summary Score and three illness representation scales: Identity, Emotional representations, and Timeline (acute/chronic). The independent variables accounted for 75.1% and 72.4% of the variance in the Physical and Mental Summary Scores, respectively.
Conclusion: Assessment of patients' illness representations may help clinicians to understand better how people who have sustained traumatic injury view their health and health-related quality of life and to help patients in their recovery process.
Answer: Yes, health-related quality of life (HRQOL) has been found to predict injury events. A study conducted among primary school children in Ahwaz, Iran, found a significant trend for increasing occurrence of injury with decreasing HRQOL scores. The study concluded that there was an association between HRQOL and unintentional injury among primary school children, indicating that lower HRQOL scores were associated with a higher risk of injury (PUBMED:21483187). This finding suggests that HRQOL can be a predictor of injury events, although the study also noted that further investigations with a well-defined analytical design are needed to confirm this preliminary finding. |
Instruction: Surgical treatment of displaced intra-articular calcaneal fractures: is bone grafting necessary?
Abstracts:
abstract_id: PUBMED:23670493
Surgical treatment of displaced intra-articular calcaneal fractures: is bone grafting necessary? Background: The aim of this retrospective study was to determine the need for bone grafting in the surgical treatment of displaced intra-articular calcaneal fractures. We reviewed 390 cases of displaced intra-articular calcaneal fractures treated with plate osteosynthesis with or without autologous iliac bone grafting, and compared outcomes and complications related to fracture stabilization.
Materials And Methods: Three hundred ninety patients with displaced intra-articular calcaneal fractures that were treated with plate osteosynthesis from December 2002 to December 2010 were reviewed. Two hundred two patients (group A) were treated by osteosynthesis with autologous bone grafting, and 188 patients (group B) were treated by osteosynthesis without bone grafting. One hundred eighty-one patients with an AO type 73-C1 fracture (Sanders type II), 182 patients with an AO type 73-C2 fracture (Sanders type III), and 27 patients with an AO type 73-C3 fracture (Sanders type IV) were included in this study. Bohler's angle, the crucial angle of Gissane, and calcaneal height in the immediate postoperative period and at the 2-year follow-up were compared. Any change in the subtalar joint status was documented and analyzed. The final outcomes of all patients were evaluated by the AOFAS Ankle-Hindfoot Scale and compared in both groups.
Results: The mean full weight-bearing time in group A (with bone grafting) was significantly lower (median 6.2 months, range 2.8-9.2 months) than that in group B (without bone grafting; median 9.8 months, range 6.8-12.2 months). The immediate-postoperative Bohler's angle and that at the 2-year follow-up were significantly higher in group A. The loss of Bohler's angle after 2 years was significantly lower in group A (mean 3.5°; 95 % CI 0.8°-6.2°) than in group B (mean 6.2°; 95 % CI 1.0°-11.2°). The average change in the crucial angle and the average change in calcaneal height were not statistically significant for either group. The infection rate in the bone grafting group was higher, though statistically insignificantly so, than in the nongrafting group (8.3 vs. 6.3 %). No significant difference was found between the groups in terms of the rates of good reduction, postoperative osteoarthritis, and subtalar fusion. Regarding the efficacy outcomes, the mean AOFAS score was lower (mean 76.4 points; 95 % CI 65.8-82.9 points) in group A than in group B (mean, 81.6 points; 95 % CI, 72.3-88.8 points), but this difference was not significant (p > 0.05).
Conclusions: Bohler's angle showed improved restoration and the patients returned to full weight-bearing earlier when bone grafting was used in the treatment of intra-articular calcaneal fracture. However, the functional outcomes and complication rates of both groups were similar.
abstract_id: PUBMED:32498951
Minimally Invasive Treatment of Displaced Intra-Articular Calcaneal Fractures. Minimally invasive surgical techniques are increasingly used for definitive treatment of displaced intra-articular calcaneal fractures. These approaches have been shown to minimize soft tissue injury, preserve blood supply, and decrease operative time. These methods can be applied to all calcaneal fractures and have particular advantages in patients with higher than usual risks to the soft tissues. The literature suggests that results of limited soft tissue dissection approaches provide equivalent outcomes to those obtained with the extensile lateral approach. We predict that as imaging and other techniques continue to improve, more calcaneal fractures will be treated by these appealing safer techniques.
abstract_id: PUBMED:31320206
Percutaneous arthroscopic calcaneal osteosynthesis for displaced intra-articular calcaneal fractures: Systematic review and surgical technique. Background: The aim of this study was to systematically evaluate the available literature on technique and outcomes of percutaneous arthroscopic calcaneal osteosynthesis for displaced intra-articular calcaneal fractures.
Methods: A systematic review of the literature available in MEDLINE, EMBASE, and the Cochrane Library database was performed, including studies from January 1985 to August 2018. The literature search, data extraction, and quality assessment were conducted by 2 independent reviewers. The surgical technique and perioperative management, clinical outcome scores, radiographic outcomes, and complication rate were evaluated.
Results: Of 66 reviewed articles, 8 studies met the inclusion criteria. The included studies reported on the results of 152 patients. At last follow-up, the mean American Orthopaedic Foot & Ankle Society ankle-hindfoot score ranged from 72.1 to 94.1. The complication rate was low, including only one superficial infection.
Conclusions: The included studies were of too low a level of evidence to allow data pooling or meta-analysis. However, percutaneous arthroscopic calcaneal osteosynthesis seems to be a good option for displaced intra-articular calcaneal fractures, with a low complication rate. Appropriately powered randomized controlled trials with long-term follow-up are needed to confirm the efficacy of this technique.
Level Of Evidence: Level III, systematic review of Level III studies.
abstract_id: PUBMED:36534878
Displaced Intra-articular Calcaneus Fractures: Extensile Lateral and Less Invasive Approaches. Treatment of displaced intra-articular calcaneal fractures is controversial and must be individualized by patient and fracture type. With an extensile lateral approach, all components of the deformity in displaced intra-articular calcaneal fractures can be addressed. The extensile lateral approach is indicated in more complex fracture patterns and when delay of surgery is necessary because of severe soft-tissue injury beyond 2 to 3 weeks. Careful patient selection, proper surgical timing, incision placement, and soft-tissue handling minimize the high rate of wound healing complications associated with the extensile lateral approach. The goals of surgical treatment of displaced intra-articular calcaneal fractures may also be achieved using less invasive approaches, such as the sinus tarsi approach and closed reduction with percutaneous fixation, decreasing the risk of wound complications. Multiple factors influence determination of the specific approach.
abstract_id: PUBMED:36310463
Effectiveness comparison of two surgical methods in treatment of intra-articular displaced calcaneal fractures in older children Objective: To compare the effectiveness of open reduction and internal fixation with plate and closed reduction and internal fixation with Kirschner wire (K-wire) in the treatment of intra-articular displaced calcaneal fractures in older children.
Methods: The clinical data of 35 older children (37 feet) with intra-articular displaced calcaneal fractures who were admitted between November 2014 and November 2020 and met the selection criteria were retrospectively analyzed. Among them, 19 cases (20 feet) underwent open reduction and internal fixation with a plate (plate group), and 16 cases (17 feet) underwent closed reduction and internal fixation with K-wires (K-wire group). There was no significant difference between the two groups in gender, age, cause of injury, side and type of fracture, time from injury to admission, or preoperative calcaneal Gissane angle and Böhler angle (P>0.05). The postoperative calcaneal Gissane angle, Böhler angle, complications, and fracture healing were compared between the two groups. Ankle function was evaluated based on the American Orthopedic Foot and Ankle Society (AOFAS) ankle-hindfoot scoring system.
Results: Incision necrosis occurred in 1 foot in the plate group after operation, which healed after symptomatic treatment; the other incisions in the two groups healed by first intention. All children were followed up for 12-39 months (mean, 19 months). X-ray films showed that the fractures in both groups healed; the healing time was (2.65±0.71) months in the plate group and (2.24±1.38) months in the K-wire group, with no significant difference (t=1.161, P=0.253). At last follow-up, the calcaneal Gissane angle and Böhler angle had returned to normal; the difference between pre- and post-operative values was significant in both groups (P<0.05), but there was no significant difference between the two groups in the pre- to post-operative change (P>0.05). In the plate group, the plate was removed at 11-22 months after operation (mean, 16.8 months). At last follow-up, the AOFAS ankle-hindfoot score in the plate group was 91.2±5.1, which was significantly higher than that in the K-wire group (86.9±6.1) (t=2.316, P=0.027). Ankle function was rated as excellent in 15 feet, good in 4 feet, and fair in 1 foot in the plate group, and excellent in 14 feet and good in 3 feet in the K-wire group; the difference between the two groups was not significant (Z=1.712, P=0.092).
Conclusion: For intra-articular displaced calcaneal fracture in older children, the open reduction and internal fixation with plate and closed reduction and internal fixation with K-wire can achieve good effectiveness, but the former has better recovery of ankle function.
abstract_id: PUBMED:21254671
Surgical treatment of displaced intra-articular fractures of the calcaneus in elderly patients Objective: To study the clinical effects of surgical treatment of displaced intra-articular fractures of the calcaneus in elderly patients, and to discuss the operative indications.
Methods: From January 2000 to December 2007, 24 elderly patients with 26 fractures underwent open reduction and internal fixation for a displaced intra-articular fracture of the calcaneus; these included 18 feet in 18 males and 8 feet in 6 females, with an average age of 67 years (range, 60 to 75 years). According to the Sanders classification based on CT scanning, 13 fractures were rated as type II, 12 as type III, and 1 as type IV. The Böhler angle and Gissane angle were measured preoperatively and postoperatively, and foot function was assessed with the Maryland foot score system.
Results: Twenty-four cases with 26 feet were followed up for an average of 18.4 months (range, 12 to 26 months). The mean Böhler angle was (10.4 +/- 8.2) degrees preoperatively and (27.8 +/- 7.4) degrees postoperatively, and the mean Gissane angle was (136.5 +/- 10.3) degrees preoperatively and (124.3 +/- 4.2) degrees postoperatively. The differences between preoperative and postoperative values were statistically significant (P < 0.05). The results were excellent in 5 feet, good in 16 feet, fair in 4 feet, and poor in 1 foot. Complications included 3 cases of wound necrosis, 2 cases of wound infection, 1 case of sural nerve injury, and 6 cases of posttraumatic subtalar arthritis.
Conclusion: Good clinical result could be obtained with surgical treatment in elderly patients with displaced intra-articular fractures of the calcaneus. Open reduction appears to be an acceptable method of treatment for displaced calcaneal fractures in elderly patients if they have good general conditions.
abstract_id: PUBMED:25989769
Operative treatment of displaced intra-articular fractures of Calcaneum: Is it worthwhile? Objective: To compare the results of operative treatment for displaced intra-articular fractures of calcaneum with conservative treatment.
Methods: The retrospective non-randomised comparative study, using purposive non-probability convenience sampling, was conducted at the Combined Military Hospital, Rawalpindi, and comprised treatment records from March 2010 to October 2013 of patients who had been treated either by Plaster of Paris casting (Group A) or managed by open reduction internal fixation (Group B). Functional outcome was assessed using the Foot and Ankle Disability Index.
Results: Of the 42 records in the study, 20(47.6%) related to Group A and 22(52.4%) to Group B. The mean age was 41±7.82 years (range: 28-55 years) in Group A, and 31±6.35 years (range: 21-43) in Group B. Male-to-female ratio was 10:1 in Group A; 9:1 in Group B. Union was achieved in all (100%) cases. Bone substitute was used in 16(72.7%) in Group B to fill void during reconstruction of collapsed calcaneum. Wound complications were noted in 2(9.1%) Group B patients. There was loss of reduction in 1(4.5%). Mean Foot and Ankle Disability Index score in Group A was 45±10.68.4 compared to 67.9±10.04 in Group B (p=1.99).
Conclusions: For displaced intra-articular fractures, operative treatment is associated with better functional outcome in terms of absolute functional scores and should be the treatment of choice although factors such as age, soft tissue injury and surgical expertise may influence the decision.
abstract_id: PUBMED:33893034
Osteosynthesis or primary arthrodesis for displaced intra-articular calcaneus fractures Sanders type IV - A systematic review. Background: Displaced intra-articular calcaneus fractures (DIACF) Sanders type IV represent a challenge in its management and questions remain about the best treatment option available. This study aimed to compare the outcomes of primary subtalar arthrodesis (PSTA) and osteosynthesis in these fractures.
Methods: Studies concerning DIACF Sanders type IV, from 2005 to 2020 were systematically reviewed. Only studies evaluating functional outcomes with American Orthopaedic Foot & Ankle Society ankle-hindfoot (AOFAS) score were admitted allowing for results comparison.
Results: In total, 9 studies met the inclusion criteria. These reported on the results of 142 patients, of whom 41 underwent PSTA and 101 underwent osteosynthesis, with an average follow-up period of over 2 years. We found a significant moderate negative correlation between the reported AOFAS score and the Coleman Methodology Score obtained. Late subtalar arthrodesis was required in 13.63% of the osteosynthesis procedures performed.
Conclusions: Clinical outcomes after PSTA and osteosynthesis, for the treatment of Sanders type IV fractures, do not seem very different, yet careful data interpretation is crucial. Additional powered randomized controlled trials are necessary to assess which surgical strategy is better.
abstract_id: PUBMED:30205938
Should the Extended Lateral Approach Remain Part of Standard Treatment in Displaced Intra-articular Calcaneal Fractures? The aim of this study was to evaluate the results of open reduction and internal fixation through the extended lateral approach (ELA) in displaced intra-articular calcaneal fractures and to determine whether this approach should remain part of standard therapy. This retrospective cohort study included 60 patients with 64 displaced intra-articular calcaneal fractures who underwent surgical treatment through the ELA. Outcome measures were the visual analog scale foot and ankle (VAS FA), the American Orthopedic Foot and Ankle Society (AOFAS) score, surgical site infections (SSIs), and reoperations. We determined the AOFAS score for 40 patients with 42 fractures, and 42 patients with 44 fractures completed the VAS FA questionnaire. The mean VAS FA score was 61.0 ± 23.4 and the median AOFAS score was 83 (range 33 to 100), with 55% good to excellent scores. Superficial SSIs occurred in 10.9% of patients and were successfully treated with antibiotics. In 4.7% of patients a deep SSI was diagnosed, for which premature implant removal was necessary. Patients with an SSI did not have significantly lower VAS FA or AOFAS scores than did patients without an SSI (p = .318 and p = .766, respectively). Implant removal in the absence of an SSI was necessary in 17 patients because of pain, and 3 patients needed secondary arthrodesis because of persistent pain. We concluded that the ELA proved to be a safe procedure, and moreover the most common complications did not influence the long-term outcomes of patients. However, recent literature demonstrates that less invasive techniques seem to outperform the ELA with respect to wound complications.
abstract_id: PUBMED:29116324
Displaced Intra-articular Calcaneal Fractures: Classification and Treatment. The calcaneus is the most commonly fractured tarsal bone. Displaced intraarticular calcaneal fractures are usually caused by a fall from height with one or both heels directly hitting the ground. Displaced intra-articular calcaneal fractures are complex and highly disabling injuries. There is ongoing debate regarding the optimal treatment for each type of displaced intra-articular calcaneal fracture. This review aims to summarize the classification of, various treatment options for, prevention of perioperative complications in, and management algorithms for displaced intra-articular calcaneal fractures. [Orthopedics. 2017; 40(6):e921-e929.].
Answer: The necessity of bone grafting in the surgical treatment of displaced intra-articular calcaneal fractures has been a subject of investigation. A retrospective study reviewed 390 cases of such fractures treated with plate osteosynthesis, with 202 patients receiving autologous bone grafting and 188 not receiving grafting (PUBMED:23670493). The study found that the use of bone grafting resulted in a significantly higher immediate postoperative Bohler's angle and a lower loss of Bohler's angle after 2 years, as well as a significantly shorter time to full weight-bearing. However, the functional outcomes, as measured by the AOFAS Ankle-Hindfoot Scale, and complication rates were similar between the two groups. The infection rate was slightly higher in the bone grafting group, though not statistically significant. These findings suggest that while bone grafting may improve certain radiological outcomes and allow for earlier weight-bearing, it does not significantly alter the functional outcomes or complication rates (PUBMED:23670493).
Other studies have explored alternative surgical approaches that minimize soft tissue injury and preserve blood supply, such as minimally invasive techniques (PUBMED:32498951) and percutaneous arthroscopic osteosynthesis (PUBMED:31320206). These methods have been reported to have low complication rates and provide equivalent outcomes to more invasive approaches, such as the extensile lateral approach (PUBMED:36534878).
In summary, while bone grafting in the treatment of displaced intra-articular calcaneal fractures may offer some benefits in terms of radiological outcomes and earlier weight-bearing, it does not appear to be necessary for achieving similar functional outcomes and complication rates compared to non-grafting approaches (PUBMED:23670493). Alternative surgical methods, such as minimally invasive and percutaneous arthroscopic techniques, also present viable options with potentially lower risks of soft tissue complications (PUBMED:32498951, PUBMED:31320206, PUBMED:36534878). |
Instruction: Does prolonged treatment course adversely affect local control of carcinoma of the larynx?
Abstracts:
abstract_id: PUBMED:8040011
Does prolonged treatment course adversely affect local control of carcinoma of the larynx? Purpose: The purpose of this paper is to present local control rates of carcinoma of the larynx in relation to the total treatment course after radical radiation therapy.
Methods And Materials: A total of 1350 patients with laryngeal carcinoma treated at the Massachusetts General Hospital over the past three decades were available for analysis. Treatment courses were divided into two groups: ≤ 45 days and > 45 days. The local control rates were compared and evaluated for statistical differences.
Results: The data indicated that prolonged treatment course adversely affects local tumor control of both advanced glottic and supraglottic lesions, but to a lesser degree for the early tumors.
Conclusion: The study indicated that for optimal local control, radiation treatment should be completed as soon as possible, preferably within 6.5 weeks, either by once- or twice-daily accelerated programs. The local control of early T1 glottic cancer has been exceedingly satisfactory by conventional once-daily radiation therapy. Further improvement by shortening of treatment time for such early lesions will be difficult to assess without a prospective randomized trial.
abstract_id: PUBMED:8184125
Dose, fractionation and overall treatment time in radiation therapy--the effects on local control for cancer of the larynx. The effect of total tumor dose, split course treatment and overall treatment time on local control was analysed in a retrospective series of 997 patients with carcinoma of the larynx, treated with megavoltage radiotherapy only. Primary tumors were classified by site (glottis and supraglottis) and T-stage. Continuous course (CC, n = 594) treatment was given primarily to small tumors. Split course radiation (SC, n = 403) was generally given to patients with larger field sizes. Total doses of irradiation ranged from 50 to 79 Gy, with a mean of 64 Gy in CC and 66 Gy in SC. Most of the treatments were given with fraction sizes between 2.0 and 2.1 Gy (91%). Overall treatment times ranged between 25 and 60 days in the CC group (mean, 45 days) and between 45 and 120 in the SC group (mean, 76 days). A local recurrence was observed in 256 patients. T-stage was the only tumor characteristic strongly related to local failure. Corrected for T-stage, no difference in local relapse rate was observed between glottic and supraglottic tumors, or between node-negative (n = 886) and node-positive patients (n = 111). After correction for T-stage the local failure rate of SC-treated tumors was 2.1 (95% confidence limits: 1.4-3.1) times higher than of CC-treated tumors. However, this effect could not be explained as an effect of the overall treatment time (OTT) itself, as no effect of OTT was found within the SC and the CC group, even though the variation in OTT's was considerable in the SC group. A higher tumor dose was associated with a lower local failure rate in the CC group (p = 0.005), but not in the SC group (p = 0.56).
abstract_id: PUBMED:9457816
Similar decreases in local tumor control are calculated for treatment protraction and for interruptions in the radiotherapy of carcinoma of the larynx in four centers. Purpose: Data on patients with cancer of the larynx are analyzed using statistical models to estimate the effect of gaps in the treatment time on the local control of the tumor.
Methods And Materials: Patients from four centers, Edinburgh, Glasgow, Manchester, and Toronto, with carcinoma of the larynx and treated by radiotherapy were followed up and the disease-free period recorded. In all centers the end point was control of the primary tumor after irradiation alone. The local control rates at > or = 2 years, Pc, were analyzed by log linear models, and Cox proportional hazard models were used to model the disease-free period.
Results: T stage, nodal involvement, and site of the tumor were important determinants of the disease-free interval, as was the radiation schedule used. Elongation of the treatment time by 1 day, or a gap of 1 day, was associated with a decrease in Pc of 0.68% per day for Pc = 0.80, with a 95% confidence interval of (0.28, 1.08)%. An increase of 5 days was associated with a 3.5% reduction in Pc from 0.80 to 0.77. At Pc = 0.60 an increase of 5 days was associated with an 7.9% decrease in Pc. The time factor in the Linear Quadratic model, gamma/alpha, was estimated as 0.89 Gy/day, 95% confidence interval (0.35, 1.43) Gy/day.
Conclusions: Any gaps (public holidays are the majority) in the treatment schedule have the same deleterious effect on the disease free period as an increase in the prescribed treatment time. For a schedule, where dose and fraction number are specified, any gap in treatment is potentially damaging.
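To make the quoted figures concrete, the arithmetic below converts the reported time factor and per-day loss of control into an approximate equivalent dose loss and an expected control rate after a delay. It uses only the point estimates given in the abstract and is a crude linear illustration, not part of the original analysis.

```python
# Illustrative arithmetic using the point estimates quoted above.
gamma_over_alpha = 0.89    # Gy of dose "lost" per day of treatment prolongation
loss_pc_per_day = 0.0068   # absolute drop in local control per day, around Pc = 0.80

def equivalent_dose_loss(extra_days):
    """Approximate biologically wasted dose (Gy) from prolonging treatment."""
    return gamma_over_alpha * extra_days

def local_control_after_delay(pc_baseline, extra_days):
    """Crude linear extrapolation of the per-day decrement near Pc = 0.80."""
    return pc_baseline - loss_pc_per_day * extra_days

print(equivalent_dose_loss(5))              # ~4.5 Gy for a 5-day prolongation
print(local_control_after_delay(0.80, 5))   # ~0.77, consistent with the abstract
```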
abstract_id: PUBMED:9788401
The impact of treatment time and smoking on local control and complications in T1 glottic cancer. Purpose: To define the optimal treatment regimen, patients with T1N0M0 glottic larynx carcinoma were treated with six different radiotherapy (RT) schedules. To assess the influence of patient characteristics, complication rates, and to evaluate the overall larynx preservation.
Methods And Materials: Out of a consecutive series of 383 patients treated for T1N0M0 glottic larynx carcinoma between 1965 and 1992, 352 evaluable patients were treated with six different "standard" fractionation schedules: 65 Gy (20 x 3.25 Gy), 62 Gy (20 x 3.1 Gy), 61.6 Gy (22 x 2.8 Gy), 60 Gy (25 x 2.4 Gy), 66 Gy (33 x 2 Gy) and 60 Gy (30 x 2 Gy). The median follow-up of all patients was 89 months. Patient factors analyzed included: age, sex, concurrent illness, smoking habits, tumor localization and extension, tumor differentiation, the effect of tumor biopsy or stripping of the vocal cord, and the presence of visible tumor at the start of radiotherapy. Treatment parameters evaluated were: year of treatment, beam energy, treatment planning, field size, fractionation schedule, fraction size, number of fractions, total dose, treatment time and treatment gap, the use of wedges, and neck diameter.
Results: The overall 5-year actuarial locoregional control was 89%, varying between 83 and 93% for the different schedules. Univariately, local control decreased with increasing treatment time. This could not be explained by the confounding variables sex, tumor extension, and field length (p = 0.0065). Adjusted for these variables, 5-year local control percentage decreased from 95% (SE 2%) for 22-29 days to 79% (SE 6%) for treatment time > or = 40 days. The overall complication rate (grade I-IV) at 5 years was 15.3%, and varied between the different schedules, from 7 to 17%. No relation was found between complications and treatment factors. Patients who continued smoking had a higher complication rate than those who never smoked or stopped smoking, univariately as well as adjusted for tumor extension, macroscopic tumor, and neck diameter (p = 0.0038). Twenty-eight percent (SE 6%) of the patients who continued smoking had complications at 10 years, compared to about 13% (SE 4%) of those who stopped before or after RT. No evidence was found for any other relation between complications and patient or tumor factors. Severe edema and necrosis (grade III and IV) were not observed in the 2 Gy fraction schedules. A laryngectomy was performed in 36 patients: 30 for recurrence, 3 for complications (at 40, 161, and 272 months), and 3 for a second primary. The overall larynx preservation was 90% at 10 years, and for the different schedules it was 20 x 3.25 Gy: 97%; 20 x 3.1 Gy: 96%; 22 x 2.8 Gy: 92%; 25 x 2.4 Gy: 89%; 33 x 2 Gy: 78%; and 30 x 2 Gy: 80%.
Conclusion: Overall treatment time is the most significant factor for locoregional control of T1 glottic cancer. A schedule of 25 x 2.4 Gy appeared to be the optimal treatment schedule considering both tumor control and long term toxicity. The complication rate was increased in patients who continued smoking.
abstract_id: PUBMED:3291901
Split-course versus continuous radiotherapy. Analysis of a randomized trial from 1964 to 1967. A randomized clinical trial was performed from 1964 to 1967 to compare the therapeutic results of split-course external beam radiotherapy with those of continuously fractionated treatment. Altogether 439 consecutive patients with carcinoma of larynx, nasopharynx, hypopharynx, oropharynx, oral cavity, oesophagus and urinary bladder were included in the series. 227 patients received split-course treatment and 212 were treated by the continuous-course method. In the split-course treatment there was a 2-3 weeks' interruption after 25-30 Gy. This break was compensated by a 10% increase in the total dose. For each tumour site local control and failure rates for the 2 treatment techniques were similar. No significant differences in 5- and 10-year survival were noted. Acute side effects were milder in all patients treated with split-course. The occurrence of late reactions was similar in both treatment groups. However, severe late reactions in the urinary bladder were somewhat more frequent in patients treated with split-course technique; the difference was not statistically significant. We conclude that there were no significant differences in local control, long-term survival and late normal tissue reactions between the treatment groups. The acute normal tissue reactions were milder in the split-course treated groups. We still regard split-course as a useful treatment modality provided the interruption is compensated with about 10% increase in total dose. However, more studies are needed to show which tumours proliferate during prolonged radiotherapy.
abstract_id: PUBMED:8783399
Prognostic factors of local control after radiotherapy in T1 glottic and supraglottic carcinoma of the larynx. This study presents a retrospective analysis of a consecutive series of 161 patients treated with curatively intended radiotherapy for T1 supraglottic or glottic carcinoma from 1972 to 1990 at the Department of Oncology, Aalborg County Hospital, Denmark. All patients received radiotherapy given with 4-MV X-rays on lateral opposed fields. Intended dose was 60 Gy in 30 fractions. Multivariate analysis of recorded clinical parameters was applied to identify possible prognostic factors of local control. Tumor size, differentiation grade and sex were identified as significant independent prognostic parameters of local control. Five-year local control was 58% and 78% for supraglottic and glottic tumors, respectively. Applying salvage surgery the ultimate control rates were 82% and 97% for supraglottic and glottic tumors, respectively. Evaluation of treatment response 3-6 weeks following accomplishment of radiotherapy demonstrated that remaining tumor at the time of evaluation was an indicator of failure in local control.
abstract_id: PUBMED:30521676
Factors of local recurrence and organ preservation with transoral laser microsurgery in laryngeal carcinomas; CHAID decision-tree analysis. Background: Indications of transoral laser microsurgery (TLM) are conditioned by the risk of local relapse.
Objective: To evaluate prognostic factors of local relapse and local control with TLM (LC-TLM).
Methods: Local relapse and LC-TLM were evaluated in 1119 patients. Logistic regression and CHAID decision tree analysis were performed.
Results: Local relapse correlated to previous radiotherapy failure (8.45, CI 95%: 2.64-27.03; P < .001), paraglottic involvement (2.42, CI: 1.41-4.15; P = .001), anterior commissure involvement (2.12, CI: 1.43-3.14; P < .001), grade of differentiation (1.74, CI: 1.18-2.57; P = .005), and alcohol consumption (1.4, CI: 0.99-1.98; P = .057). Local relapse tended to inversely correlate with experience (0.73, CI: 0.51-1.03; P = .078). The most important factors for local relapse were previous radiotherapy failure and anterior commissure involvement. LC-TLM inversely correlated with previous radiotherapy failure (0.09, CI: 0.03-0.28; P < .001), paraglottic involvement (0.25, CI: 0.14-0.43; P < .001), anterior commissure involvement (0.49, CI: 0.32-0.77; P = .007), margins (0.56, CI: 0.30-1.04; P = .068), and differentiation (0.68, CI: 0.44-1.05; P = .087). LC-TLM correlated with experience (1.71, CI: 1.13-2.55; P = .010). The most important factors for LC-TLM were previous radiotherapy failure and paraglottic involvement.
Conclusion: Previous radiotherapy failure is the most important factor for local relapse and LC-TLM. In primary treatments, anterior commissure involvement and paraglottic involvement are the most important factors for local relapse and LC-TLM, respectively.
abstract_id: PUBMED:9783888
Effect of gap length and position on results of treatment of cancer of the larynx in Scotland by radiotherapy: a linear quadratic analysis. Purpose: This paper reports on the analysis of the effect of the length and position of unplanned gaps in radiotherapy treatment schedules.
Materials And Methods: Data from an audit of the treatment of carcinoma of the larynx are used. They represent all newly diagnosed cases of glottic node-negative carcinoma of the larynx between 1986 and 1990, inclusive, in Scotland that were referred to one of the five Scottish Oncology Centres for primary radical radiotherapy treatment. The end-points are local control of cancer of the larynx in 5 years and the length of the disease-free period. The local control rates at > or =5 years, Pc were analyzed by log linear models and Cox proportional hazard models were used to model the disease-free period.
Results: Unplanned gaps in treatment are associated with poorer local control rates and an increased hazard of a local recurrence through their effect on extending the treatment time. A gap of 1 day is potentially damaging but the greatest effect is at treatment extensions of 3 or more days, where the hazard of a failure of local control is increased by a factor of 1.75 (95% confidence interval 1.20-2.55) compared to no gap. The time factor for the actual time was imprecisely estimated at 2.7 Gy/day with a standard error of 13.2 Gy/day. Among those cases who had exactly one gap resulting in a treatment extension of 1 day, there is no evidence that gap position influences local control (P = 0.17). The treatment extension as a result of the gap is more important than the position of the gap in the schedule.
Conclusions: Gaps in the treatment schedule have a detrimental effect on the disease-free period. A gap has a slightly greater effect than an increase in the prescribed treatment time. Any gap in treatment is potentially damaging. The position of the gap in the schedule was shown to be not important.
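For illustration of the modelling approach named in the Methods above, the sketch below fits a Cox proportional-hazards model with the treatment extension (gap) in days entering as a covariate, using the lifelines package. The toy data frame and column names are assumptions for the example, not the Scottish audit data.

```python
# Illustrative Cox proportional-hazards fit with a treatment-gap covariate.
# The data frame and column names are made up for this sketch.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "dfs_months": [60, 14, 60, 9, 37, 60, 22, 48, 30, 60],  # disease-free period
    "recurred":   [0, 1, 0, 1, 1, 0, 1, 0, 1, 0],           # 1 = local recurrence
    "gap_days":   [0, 4, 1, 6, 0, 2, 5, 3, 1, 0],           # treatment extension (days)
    "t_stage":    [1, 3, 1, 3, 2, 2, 3, 2, 1, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="dfs_months", event_col="recurred")
cph.print_summary()   # the hazard ratio for gap_days reflects the per-day effect
```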
abstract_id: PUBMED:7790246
Outcome following radiotherapy in verrucous carcinoma of the larynx. Purpose: To evaluate the outcome of patients with verrucous carcinoma of the larynx treated at the Princess Margaret Hospital with respect to control rates with radiotherapy, the salvage of local failure, the risk of regional lymph node metastasis following radiation therapy, and the risk of anaplastic transformation following radiotherapy.
Methods And Materials: Forty-eight patients underwent primary treatment for verrucous carcinoma of the larynx in the period between January 1961 and December 1990. This represented 1.1% of cases of laryngeal cancer seen in this time period. Forty-three received radiotherapy and 5 had surgery as the primary treatment. Several radiation dose-fractionation schedules were used, the most frequent being 50 Gy in 20 fractions in 4 weeks (31 cases), while eight patients were treated with 55 Gy over 5 weeks.
Results: The 5-year rate of local control was 59% for the 43 patients treated with radiotherapy. Surgical salvage was universally successful in all cases where it was attempted. The five cases treated with surgery alone did not experience relapse. Only one patient died of verrucous carcinoma. He had been medically unfit for surgical intervention at the time of initial treatment and at the time of relapse. He underwent a truncated course of radiotherapy (24 Gy in 3 fractions over three weeks in 1975). There was no evidence of increased neck relapse compared to other forms of laryngeal carcinoma following radiation treatment. No evidence to support anaplastic transformation of tumors treated with radiotherapy was evident in this series.
Conclusions: Local control using radiation treatment is less successful than with ordinary invasive and in situ squamous carcinomas of the larynx. Nevertheless, the treatment is effective and provides an appropriate option for laryngeal conservation, especially in advanced lesions where total laryngectomy may be the only treatment alternative. Surgical salvage of radiation failures contributes to very high rates of cure for verrucous carcinoma of the larynx. Anaplastic transformation of cases treated with radiotherapy was not observed in any case in this series.
abstract_id: PUBMED:9169807
Can pretreatment computed tomography predict local control in T3 squamous cell carcinoma of the glottic larynx treated with definitive radiotherapy? Purpose: To determine if pretreatment computed tomography (CT) can predict local control in T3 squamous cell carcinoma of the glottic larynx treated with definitive radiotherapy (RT).
Methods And Materials: Forty-two patients with previously untreated T3 squamous cell carcinoma of the glottic larynx were treated for cure with RT alone; all had a minimum 2-year follow-up. Tumor volumes and extent were determined by consensus of two head and neck radiologists on pretreatment CT studies. A tumor score was calculated and assigned to each primary lesion depending on the extent of laryngeal spread. Sclerosis of any laryngeal cartilage was recorded. The specific CT parameters assessed were correlated with local control.
Results: Tumor volume was a significant predictor of local control. For tumors measuring < 3.5 cm3, local control was achieved in 22 of 26 patients (85%), whereas for tumors > or = 3.5 cm3, local control was achieved in 4 of 16 patients (25%) (p = 0.0002). Sensitivity and specificity using this cutpoint were 85% and 75%, respectively. Tumor score as a measure of anatomic extent was also found to be a significant predictor of local control. The local control rate for tumors assigned a low tumor score (< or = 5) was 78% (21 of 27) compared to 33% (5 of 15) for tumors assigned a high tumor score (6, 7, or 8) (p = 0.008). A significant decrease in the local control rate was observed for cancers involving the paraglottic space at the false vocal cord level (14 of 16 [88%] vs. 12/26 [46%]) (p = 0.010), cancers involving the face of the arytenoid (15 of 18 [83%] vs. 11 of 24 [46%]) (p = 0.024), and tumors involving the interarytenoid region (25 of 36 [69%] vs. 1 of 6 [17%]; p = 0.020). There were 12 patients with sclerosis of both the ipsilateral arytenoid and the adjacent cricoid cartilage. These patients showed a significant decrease in local control (4 of 12 [33%]).
Conclusion: Pretreatment CT can stratify patients with T3 glottic carcinoma into groups more or less likely to be locally controlled with definitive RT. The local control rate for these tumors can be improved using a CT-based tumor profile; the ideal CT profile for a radiocurable T3 glottic larynx carcinoma is volume < 3.5 cm3 and no or single laryngeal cartilage sclerosis.
Answer: Yes, a prolonged treatment course does adversely affect local control of carcinoma of the larynx. Studies have shown that a longer overall treatment time is associated with decreased local control rates for laryngeal cancer. Specifically, one study indicated that for optimal local control, radiation treatment should be completed as soon as possible, preferably within 6.5 weeks, either by once- or twice-daily accelerated programs, as prolonged treatment course negatively impacts local tumor control of both advanced glottic and supraglottic lesions (PUBMED:8040011). Another study found that split-course treated tumors had a higher local failure rate than continuously treated tumors, and a higher tumor dose was associated with a lower local failure rate in the continuously treated group (PUBMED:8184125). Additionally, treatment protraction and interruptions in radiotherapy were found to have a similar detrimental effect on local control, with each day of elongation or gap decreasing the probability of local control (PUBMED:9457816). Furthermore, overall treatment time was identified as the most significant factor for locoregional control of T1 glottic cancer, with local control decreasing with increasing treatment time (PUBMED:9788401). Unplanned gaps in treatment schedules were also associated with poorer local control rates and an increased hazard of local recurrence (PUBMED:9783888). These findings collectively suggest that minimizing treatment duration and avoiding interruptions are important for achieving better local control in the treatment of laryngeal carcinoma. |
Instruction: Assessment of under nutrition of Bangladeshi adults using anthropometry: can body mass index be replaced by mid-upper-arm-circumference?
Abstracts:
abstract_id: PUBMED:36357916
Mid-upper arm circumference as a substitute for body mass index in the assessment of nutritional status among adults in eastern Sudan. Background: Body mass index (BMI) remains the most used indicator of nutritional status despite the presence of a potentially credible alternative. Mid-upper arm circumference (MUAC) is an anthropometric measure that requires simple equipment and minimal training. The aim of this study was to compare MUAC with BMI and propose a MUAC cut-off point corresponding to a BMI of < 18.5 kg/m2 (underweight) and ≥ 30.0 kg/m2 (obesity) among Sudanese adults.
Methods: A cross-sectional study using multistage cluster sampling was conducted in New-Halfa, eastern Sudan. Participants' age and sex were recorded and their MUAC, weight and height were measured using standard procedures. The MUAC (cm) cut-offs corresponding to BMI < 18.5 kg/m2 and ≥ 30.0 kg/m2 were calculated and determined using receiver operating characteristic (ROC) curve analysis.
Results: Five hundred and fifty-two adults were enrolled in the study. The median (interquartile range, IQR) age of the participants was 31.0 (24.0–40.0) years and 331 (60.0%) of them were females. The medians (IQR) of BMI and MUAC were 22.4 (19.1–26.3) kg/m2 and 25.0 (23.0–28.0) cm, respectively. There was a significant positive correlation between MUAC and BMI (r = 0.673, p < 0.001). Of the 552 enrolled participants, 104 (18.8%), 282 (51.1%), 89 (16.1%) and 77 (13.9%) were normal weight, underweight, overweight and obese, respectively. The best statistically derived MUAC cut-off corresponding to a BMI < 18.5 kg/m2 (underweight) was ≤ 25.5 cm in both males and females (Youden's Index, YI = 0.51; sensitivity = 96.0%; specificity = 54.0%), with a good predictive value (AUROCC = 0.82). The best statistically derived MUAC cut-off corresponding to a BMI ≥ 30.0 kg/m2 (obesity) was ≥ 29.5 cm in both males and females (YI = 0.62, sensitivity = 70.3%, specificity = 92.0%), with a good predictive value (AUROCC = 0.86, 95.0% CI = 0.76 - 0.95).
Conclusion: The results suggest that the cut-offs based on MUAC can be used for community-based screening of underweight and obesity.
abstract_id: PUBMED:37739713
Validating a linear regression equation using mid-upper arm circumference to predict body mass index. Background: Estimating body mass index (BMI) in hospitalised patients for nutritional assessment is challenging when measurement of weight and height is not feasible. The study aimed to validate a previously published regression equation to predict BMI using mid-upper arm circumference (MUAC). We also evaluated the proposed global MUAC cut-off of ≤24 cm to detect undernutrition.
Methods: We measured standing height, weight, and MUAC prospectively in a sample of stable patients. Agreement between calculated and predicted BMI was evaluated using Bland-Altman analysis.
Results: We studied 201 patients; 102 (51%) were male. Median (IQR) age was 42 (29-50) years. The 95% limits of agreement between predicted and calculated BMI were +0.6767 to +1.712 and the bias was +1.076. MUAC ≤24 cm was 97% sensitive and 83% specific to detect undernutrition.
Conclusion: BMI derived from MUAC had poor calibration for estimating actual BMI. However, low MUAC has good discriminative accuracy to detect undernutrition.
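The agreement analysis referred to above can be reproduced in a few lines; the sketch below computes the Bland-Altman bias and 95% limits of agreement between MUAC-predicted and measured BMI. The measurement values and the linear prediction equation are placeholders for illustration, not the published equation or the study data.

```python
# Bland-Altman agreement between MUAC-predicted and calculated BMI (illustrative).
# Measurements and the prediction equation below are placeholders.
import numpy as np

muac_cm   = np.array([22.0, 25.5, 27.0, 30.0, 24.0, 28.5])
weight_kg = np.array([48.0, 60.0, 66.0, 82.0, 55.0, 74.0])
height_m  = np.array([1.60, 1.68, 1.70, 1.75, 1.65, 1.72])

bmi_calculated = weight_kg / height_m**2
bmi_predicted  = 0.87 * muac_cm + 2.0          # placeholder regression, not the published one

diff = bmi_predicted - bmi_calculated          # prediction error per patient
bias = diff.mean()
sd   = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias={bias:.2f}, 95% limits of agreement=({loa_low:.2f}, {loa_high:.2f})")
```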
abstract_id: PUBMED:36619554
Large mid-upper arm circumference is associated with reduced insulin resistance independent of BMI and waist circumference: A cross-sectional study in the Chinese population. Background: Body mass index (BMI) is a common indicator in clinical practice, but it is not sufficient to predict insulin resistance (IR). Other anthropometric methods supplement BMI in the assessment of body composition, which can be predicted more accurately. This cross-sectional study aimed to evaluate the association between mid-upper arm circumference (MUAC), triceps skinfold (TSF) thickness, mid-arm muscle circumference (MAMC) and IR in Chinese adults.
Methods: This cross-sectional study analyzed data from the 2009 China Health and Nutrition Survey database. The study population was divided into four groups according to the MUAC quartiles, and the homeostasis mode assessment was used to evaluate the degree of IR. Logistic regression analysis was performed to calculate odds ratios (ORs) with 95% confidence intervals (CIs), with adjustments for multiple covariates. Subgroup analyses stratified by age, sex, BMI, waist circumference (WC), smoking status, and alcohol consumption were performed.
Results: In total, 8,070 participants were included in the analysis. As MUAC increased, BMI, TSF thickness, MAMC, and the proportion of IR tended to increase. However, in the logistic regression analysis we found significant negative associations of both MUAC and MAMC with IR, independent of BMI and WC; the ORs for the highest quartiles compared with the lowest quartiles were 0.662 (95%CI: 0.540-0.811) and 0.723 (95%CI: 0.609-0.860), respectively. No significant association was observed between TSF thickness and IR (OR=1.035 [95%CI: 0.870-1.231]). The inverse associations were more pronounced among participants with lower BMI and WC. No significant age-specific differences were observed (P-heterogeneity > 0.05).
Conclusions: After adjusting for BMI and WC, MUAC was negatively associated with IR in Chinese adults, and the association between MUAC and IR was derived from arm muscle instead of subcutaneous fat. MUAC could be an additional predictor of IR besides BMI and WC in clinical practice.
abstract_id: PUBMED:25875397
Assessment of under nutrition of Bangladeshi adults using anthropometry: can body mass index be replaced by mid-upper-arm-circumference? Background And Objective: Body-mass-index (BMI) is widely accepted as an indicator of nutritional status in adults. Mid-upper-arm-circumference (MUAC) is another anthropometric measure used primarily among children. The present study attempted to evaluate the use of MUAC as a simpler alternative to the BMI cut-off of <18.5 for detecting adult undernutrition, and thus to suggest a suitable cut-off value.
Methods: A cross-sectional study in 650 adult attendants of patients at Dhaka Hospital of the International Centre for Diarrheal Disease Research, Bangladesh (icddr,b) was conducted during 2012. Height, weight and MUAC of 260 males and 390 females aged 19-60 years were measured. Curve estimation was done to assess the linearity and correlation of BMI and MUAC. Sensitivity and specificity of MUAC against BMI <18.5 were determined. Separate receiver-operating-characteristic (ROC) analyses were performed for males and females. The area under the ROC curve and Youden's index were generated to aid selection of the most suitable MUAC cut-off value for undernutrition. The value with the highest Youden's index was chosen as the cut-off.
Results: Our data show a strong, significant positive linear correlation between MUAC and BMI: for males r = 0.81 (p<0.001) and for females r = 0.828 (p<0.001). MUAC cut-offs of <25.1 cm in males (AUC 0.930) and <23.9 cm in females (AUC 0.930) were chosen separately based on the highest corresponding Youden's index. These values best correspond with the BMI cut-off for undernutrition (BMI <18.5) in either gender.
Conclusion: MUAC correlates closely with BMI. For simplicity and ease of recall, MUAC <25 cm for males and <24 cm for females may be considered as a simpler alternative to the BMI cut-off of <18.5 for detecting adult undernutrition.
abstract_id: PUBMED:29323425
Evaluating Mid-Upper Arm Circumference Z-Score as a Determinant of Nutrition Status. Background: Mid-upper arm circumference (MUAC) z-score has recently been listed as an independent indicator for pediatric malnutrition. This investigation examined the relationship between MUAC z-score and the z-scores for conventional indicators (ie, weight-for-length and body mass index) to expand the available evidence for nutrition classification z-score threshold ranges in U.S. practice settings.
Methods: This was a single-center study of children through 18 years of age seen between October 2015 and September 2016. Height and weight were obtained on intake. MUAC was measured at midpoint of the humerus, between the acromion and olecranon. Age-specific and gender-specific z-score values were calculated using published λ, μ, and σ values derived from Centers for Disease Control and Prevention reference data. Nutrition status was determined from biochemical data; prior history; anthropometrics; weight gain velocity; weight loss, if present; and nutrient intake.
Results: 5,004 children (7.5 ± 5.7 years, 53% boys) were evaluated. As expected, MUAC z-scores were significantly correlated with body mass index (r = 0.789, P < .01) and weight-for-length (r = 0.638, P < .01) z-scores. There was a large degree of overlap in z-scores for all indicators between nutrition status groups; however, MUAC z-scores spanned a narrower range of values such that mean MUAC z-scores are lower in children classified as overweight/obese and higher in children who were severely malnourished than the corresponding body mass index or weight-for-length z-scores.
Conclusion: These data are the first to suggest that the z-score ranges used to define various stages of malnutrition may not be the same for all indicators.
abstract_id: PUBMED:33102290
Association between mid-upper arm circumference and body mass index in pregnant women to assess their nutritional status. Background: Underweight/undernourished is a state in which the body mass index (BMI) falls below 18.5 kg/m2, and as per the National Family and Health Survey-4, 22.9% of women in the reproductive age group fall into this category. Despite being considered an important anthropometric marker, BMI is not measured in most healthcare facilities across India due to a lack of basic amenities and resources. In such instances, it needs to be determined how helpful other indicators such as mid-upper arm circumference (MUAC) can be in measuring the undernourished status of pregnant women.
Objectives: To estimate the prevalence of undernutrition in pregnant women (PW) based on baseline BMI and MUAC and to determine the association between them.
Materials And Methods: A cross-sectional study was conducted in Tangi Block of Odisha among 440 PW (in the first trimester) from July 2018 to November 2018 using a pre-tested, validated questionnaire and anthropometric instruments.
Results: The proportion of PW with a BMI <18.5 kg/m2 was 16.6%, and the proportion with MUAC <23.5 cm was 19.5%. A significant association was found between BMI and MUAC [aOR 7.91 (4.27-14.65)]. Also, a moderate correlation was established between the two indicators (r = 0.57).
Conclusion: MUAC can be used instead of BMI as it is easier to measure, cheaper, does not require any training or calculations, and, unlike BMI, is insensitive to changes during the period of gestation. This can be beneficial to healthcare workers at the primary level who work in resource-limited settings.
abstract_id: PUBMED:33623193
Mid-Upper-Arm-Circumference as a Growth Parameter and its Correlation with Body Mass Index and Heights in Ashram School Students in Nashik District in Maharashtra, India. Background: Undernutrition is a major problem among Indian schoolchildren. Yet, routine height and weight measurements in schools are not used for growth monitoring. This study attempts to evaluate mid-upper-arm-circumference (MUAC) as a quick assessment tool against body mass index (BMI) in schoolchildren.
Objective: The objective of the study was to evaluate MUAC against BMI, height, and average skin fold thickness (ASFT) parameters and to estimate MUAC values across age, sex, and social categories.
Subjects And Methods: The study was conducted in 2017-2018 in four randomly selected Ashram schools and an urban school in Nashik district. Girls (1187) and boys (1083) aged 6-18 were included, and height, weight, skinfold thickness, and MUAC were measured. MUAC was measured on the left arm with Shakir's tape and a tailor's tape (for MUAC >25 cm). Epi Info 7.1 and Excel were used for the data analysis.
Results: MUAC had a consistently high correlation with BMI at all ages for boys (r = 0.8786, P < 0.0001) and girls (r = 0.8586, P < 0.0001). ASFT too was strongly correlated with MUAC (r = 0.5945, P < 0.0001). MUAC had a strong but nonlinear correlation with heights in girls (r = 0.7751, P < 0.0001) and boys (r = 0.8267, P < 0.0001). MUAC was higher for girls than boys at all ages. MUAC values for scheduled tribe (ST) children were highly significantly lower than those for non-ST students.
Conclusion: MUAC is a good and quick proxy tool for BMI and can serve as a sensitive nutritional indicator for school ages across socioeconomic categories. However, it is necessary to construct age-wise cutoff points and bandwidths using multicentric studies across income quintiles.
abstract_id: PUBMED:31775700
Nutritional status among young adolescents attending primary school in Tanzania: contributions of mid-upper arm circumference (MUAC) for adolescent assessment. Background: Adolescence is a critical time of development and nutritional status in adolescence influences both current and future adult health outcomes. However, data on adolescent nutritional status is limited in low-resource settings. Mid-upper arm circumference (MUAC) has the potential to offer a simple, low-resource alternative or supplement to body mass index (BMI) in assessing nutrition in adolescent populations.
Methods: This is a secondary data analysis of a cross-sectional pilot study, analysing anthropometric data from a sample of young adolescents attending their last year of primary school in Pwani Region and Dar es Salaam Region, Tanzania (n = 154; 92 girls & 62 boys; mean age 13.2 years).
Results: The majority of adolescents (75%) were of normal nutritional status defined by BMI. Significantly more males were stunted than females, while significantly more females were overweight than males. Among those identified as outside the normal nutrition ranges, there was inconsistency between MUAC and BMI cut-offs. Bivariate analyses indicate that BMI and MUAC show a positive correlation for both female and male participants, and the relationship between BMI and MUAC was more strongly correlated among adolescent females.
Conclusions: Further studies are needed with more nutritionally and demographically diverse populations to better understand the nutritional status of adolescents and the practical contribution of MUAC cut-offs to measure adolescent nutrition.
abstract_id: PUBMED:23810718
Mid-upper-arm circumference and arm-to-height ratio in evaluation of overweight and obesity in Han children. Background: The purposes of this study were: (1) to analyze whether mid-upper-arm circumference (MUAC) could be used to identify overweight and obese children and to propose the optimal cutoffs of MUAC in Han children aged 7-12 years; and (2) to evaluate the feasibility and accuracy of the arm-to-height ratio (AHtR) and propose the optimal cutoffs of AHtR for identifying overweight and obesity.
Materials And Methods: In 2011, anthropometric measurements were assessed in a cross-sectional, population-based study of 2847 Han children aged 7-12 years. Overweight and obesity were defined according to the 2004 Group of China Obesity Task Force definition. The AHtR was calculated as arm circumference/height. Receiver operating characteristic curve analyses were performed to assess the accuracy of MUAC and AHtR as diagnostic tests for elevated body mass index (BMI; defined as BMI ≥ 85(th) percentiles).
Results: The accuracy levels of MUAC for identifying elevated BMI [as assessed by area under the curve (AUC)] were over 0.85 (AUC: approximately 0.934-0.975) in both genders and across all age groups. The MUAC cutoff values for elevated BMI were calculated to be approximately 18.9-23.4 cm in boys and girls. The accuracy levels of AHtR for identifying elevated BMI (as assessed by AUC) were also over 0.85 (AUC: 0.956 in boys and 0.935 in girls). The AHtR cutoff values for elevated BMI were calculated to be 0.15 in boys and girls.
Conclusion: This study demonstrates that MUAC and AHtR are simple, inexpensive, and accurate measurements that may be used to identify overweight and obese Han children. Compared with MUAC, AHtR is a nonage-dependent index with higher applicability to screen for overweight and obese children.
abstract_id: PUBMED:29195733
Changes in body mass index and mid-upper arm circumference in relation to all-cause mortality in older adults. Background & Aims: The assessment of weight loss as an indicator of poor nutritional status in older persons is currently widely applied to establish risk of mortality. Little is known about the relationship between changes in mid-upper arm circumference (MUAC) and mortality in older individuals. The aim of the present study was to examine the association between 3-year change in MUAC and 20-year mortality in community-dwelling older adults and compare this to the association between body mass index (BMI) change and mortality.
Methods: Data on changes in MUAC (cm) and BMI (kg/m2), covariates, and mortality were available for 1307 Dutch older adults (49.7% men) aged 65 years and older in 1995/96 (mean 75.6 years, SD 6.5) from the Longitudinal Aging Study Amsterdam (LASA). Anthropometric measurements were performed in 1992/93 with repeated measurements in 1995/96 (baseline), and a mortality follow-up until July 2015. BMI and MUAC change were divided into quintiles, with the quintile including zero defined as the reference category. Cox regression analyses were performed to examine the associations of 3-year changes in MUAC and BMI with subsequent 20-year all-cause mortality, adjusted for demographic and health factors. Age, sex and initial measurement of BMI and MUAC (1992/93) were tested for effect modification (P < 0.10).
Results: Mean baseline BMI was 26.7 kg/m2 (SD 4.2) with a 3-year change of -0.2 (SD 1.5). Mean baseline MUAC was 30.5 cm (SD 3.5) with a 3-year change of -0.8 (SD 1.6). Age, sex, and BMI and MUAC 3 years prior were effect modifiers in the associations between change in anthropometric measurement and mortality. Decrease in MUAC was not associated with mortality in persons with a higher initial MUAC (≥31 cm), while for persons with a lower initial MUAC, a decrease in MUAC of ≤-2.15 was associated with increased mortality risk (HR 1.54; 95% CI: 1.14-2.09), also when further stratified on median age and sex. In stratified analysis of BMI change for median initial BMI (26.5) and additionally stratified for median age and sex, the associations between a BMI decrease of ≤-1.19 and mortality fluctuated and were mostly not statistically significant. No associations were found for gain in MUAC or BMI.
Conclusions: Given that MUAC loss is more strongly and consistently associated with an increased mortality risk in older individuals with a low initial MUAC compared to BMI loss, this may be a more recommendable measure to use in clinical practice for assessing poor nutritional status, instead of weight loss.
Answer: The assessment of undernutrition in Bangladeshi adults using anthropometry has been explored in several studies, with a focus on whether mid-upper arm circumference (MUAC) can serve as a simpler alternative to body mass index (BMI) for detecting undernutrition.
One study conducted among adult attendants of patients at the International Centre for Diarrheal Disease Research, Bangladesh (icddr,b), found a strong positive correlation between MUAC and BMI, suggesting that MUAC could be a simpler alternative to BMI for detecting adult undernutrition. The study proposed MUAC cut-off points of <25.1 cm for males and <23.9 cm for females, which corresponded well with the BMI cut-off for undernutrition (BMI <18.5) in either gender. The study concluded that for simplicity and ease of use, MUAC <25 cm for males and <24 cm for females could be considered as alternatives to the BMI cut-off <18.5 for detecting adult undernutrition (PUBMED:25875397).
Another study that evaluated MUAC as a growth parameter and its correlation with BMI and heights in Ashram school students in Nashik District, Maharashtra, India, found that MUAC had a consistently high correlation with BMI at all ages for both boys and girls. This study concluded that MUAC is a good and quick proxy tool for BMI and can serve as a sensitive nutritional indicator for school ages across socioeconomic categories (PUBMED:33623193).
These findings suggest that MUAC can indeed be used as a substitute for BMI in the assessment of nutritional status among adults, including those in Bangladesh. MUAC is advantageous as it is easier to measure, does not require any training or calculations, and is insensitive to changes during periods such as gestation, making it particularly useful in resource-limited settings (PUBMED:33102290). However, it is important to note that while MUAC can be a useful tool for screening and assessing undernutrition, it may not fully replace BMI in all contexts, and further research may be needed to establish standardized cut-off points for different populations. |
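As a brief methodological note on how such cut-offs are derived (the formula below is the standard definition of Youden's index and is not quoted from any single abstract): candidate MUAC thresholds are compared along the ROC curve and the cut-off that maximises
J = sensitivity + specificity - 1
is chosen. For example, the obesity cut-off reported in one of the abstracts above (sensitivity 70.3%, specificity 92.0%) gives J = 0.703 + 0.920 - 1 ≈ 0.62, which matches the Youden's index reported for that cut-off.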
Instruction: Is the clinical efficacy of epidural diamorphine concentration-dependent when used as analgesia for labour?
Abstracts:
abstract_id: PUBMED:15556964
Is the clinical efficacy of epidural diamorphine concentration-dependent when used as analgesia for labour? Background: The physicochemical properties of diamorphine (3,6-diacetylmorphine) enhance its bioavailability compared with more lipid-soluble opioids when administered into the epidural space. However, the influence of concentration, volume or mass on the clinical efficacy of diamorphine is not known.
Method: In this double-blind, randomized, prospective study, 62 women in active labour and ≤5 cm cervical dilatation were recruited to determine whether the mode of action of diamorphine in the epidural space is concentration-dependent. After insertion of a lumbar epidural catheter, patients received epidural diamorphine 3 mg either as a high-volume, low-concentration solution (group A) or a low-volume, high-concentration solution (group B). The concentration of diamorphine was determined by the response of the previous patient in the same group using up-down sequential allocation. Pain corresponding to the previous contraction was assessed using a 100-mm visual analogue score and effective analgesia was defined as ≤10 mm within 30 min of epidural injection.
Results: There was no significant difference in EC50 for diamorphine between the groups: the difference was 15.0 microg ml(-1) (95% CI -40.3 to 10.3). The EC50 for group A was 237.5 microg ml(-1) (95% CI 221.2 to 253.8) and the EC50 for group B was 252.5 microg ml(-1) (95% CI 232.2 to 272.8). The EC50 ratio was 0.95 (95% CI 0.87 to 1.06). The groups exhibited parallelism (P=0.98). The overall EC50 for all data was 244.2 microg ml(-1) (95% CI 230.8 to 257.2).
Conclusion: We conclude that diamorphine provides analgesia in labour by a concentration-dependent effect.
abstract_id: PUBMED:31230993
Remifentanil patient-controlled intravenous analgesia during labour: a retrospective observational study of 10 years' experience. Background: Intravenous remifentanil patient-controlled analgesia (PCA) has been routinely available for labouring women in our unit since 2004, the regimen using a 40 µg bolus available on demand every two minutes, with continuous pulse oximetry and mandatory one-to-one care. We examined remifentanil use and compared, with the other analgesic options available in our unit, outcomes such as mode of delivery, Apgar scores, neonatal resuscitation and admission to the neonatal intensive care unit.
Methods: We retrospectively identified women who delivered in our unit between 2005 and 2014 and received remifentanil, diamorphine or epidural analgesia during labour. Data were drawn from the Northern Ireland Maternity System electronic database, which records birth details from all obstetric units in Northern Ireland. Additional data were identified from paper survey forms, completed by the midwife post delivery for all women who received remifentanil in our unit. Outcomes were compared between women who received remifentanil, diamorphine or an epidural technique for labour analgesia.
Results: Over the 10-year period, remifentanil was the most popular form of analgesia, being selected by 31.9% (8170/25617) women. Compared with women selecting diamorphine or epidural analgesia, those having remifentanil had similar rates of instrumental and operative delivery. Neonatal Apgar scores were also similar. Neonatal resuscitation or neonatal unit admission were not more likely in women choosing remifentanil PCA.
Conclusion: We found remifentanil PCA to be neither less safe nor associated with poorer outcomes than other analgesic options offered in our unit, when used within our guidelines for more than a 10-year period.
abstract_id: PUBMED:26182601
Administering 'top-ups' of epidural analgesia. Repeated annual audit cycles revealed an unacceptable failure rate of 38%-46% in epidural analgesia for surgical patients in our organisation. Reasons for failure included unilateral block, missed segments and catheter migration. In spite of interventions to remedy the situation, the success rate could not be improved. The aim of the initiative outlined in this article was to improve the efficacy of epidural analgesia and to reduce the failure rate. We found that following the appropriate training and assessment, nurse-administered diamorphine top-ups are a safe and effective way to improve the efficacy of epidural analgesia and can be integrated into acute pain team practice.
abstract_id: PUBMED:17303622
An isobolographic analysis of diamorphine and levobupivacaine for epidural analgesia in early labour. Background: Few data describe the pharmacological interactions between local anaesthetics and opioids. The aim of this study was to measure the median effective concentration (MEC) of diamorphine and levobupivacaine when given separately and as mixtures for epidural analgesia, and determine whether the combination is additive or synergistic.
Methods: One hundred and twenty patients were enrolled in this prospective randomized, two-phase, double-blind study. In the first phase, 60 women were randomized to receive a fixed 20 ml volume of either levobupivacaine or diamorphine epidurally. Dosing was determined using up-down sequential allocation with testing intervals, respectively, of 0.01%w/v and 12.5 microg ml(-1). After estimations of the MEC of levobupivacaine and diamorphine, a further 60 patients were randomized in the second phase to one of the three mixtures: (a) diamorphine 70 microg ml(-1) (fixed) and levobupivacaine (testing interval 0.004%w/v, starting at 0.044%w/v); (b) levobupivacaine 0.044%w/v (fixed) and diamorphine (testing interval 7 microg ml(-1), starting at 70 microg ml(-1)); and (c) bivariate diamorphine and levobupivacaine (testing intervals of 7 microg ml(-1) and 0.004%w/v starting at 70 microg ml(-1) and 0.044% w/v respectively).
Results: The MEC estimates from the first phase were 143.8 microg ml(-1) (95% CI 122.2-165.3) for diamorphine and 0.083%w/v (95% CI 0.071-0.095) for levobupivacaine. In the second phase, the MEC and interaction index (gamma) of the three combinations were: diamorphine 65.5 microg ml(-1) (56.8-74.2), gamma = 0.99; levobupivacaine 0.041%w/v (0.037-0.049), gamma = 0.98; and for the fixed combination diamorphine 69.5 microg ml(-1) (60.5-78.5) and levobupivacaine 0.044%w/v (0.039-0.049), gamma = 1.02.
Conclusion: The combination of diamorphine and levobupivacaine is additive and not synergistic when used for epidural analgesia in the first stage of labour.
abstract_id: PUBMED:8820230
Epidural analgesia in Scotland: a survey of extradural opioid practice. A postal survey of all Consultant Anaesthetists working within Scotland was conducted to establish the current state of epidural analgesia and the level of post-operative care for patients after epidural opioid administration. Of those consultants using epidurals for post-operative analgesia, 89% use extradural opioids, and the lipid soluble opioids diamorphine and fentanyl by an infusion technique were the most popular. For analgesia in labour the use of extradural opioids drops to 41% with fentanyl by bolus the commonest drug and method of administration. Monitoring requirements after extradural opioids are variable with more than half of consultants satisfied with intermittent observational measurements. Sixty-nine per cent of consultants frequently send their patients to a high dependency unit following epidural opioid administration. Additional administration of opioids on an ordinary ward setting is considered inappropriate by over half of the consultants replying. There is considerable variability amongst anaesthetists as to how patients receiving epidural opioids should be monitored and National Recommendations are required to stop the present confusion.
abstract_id: PUBMED:11573634
Choice of opioid for initiation of combined spinal epidural analgesia in labour--fentanyl or diamorphine. Sixty-two women requesting regional analgesia in labour were allocated to receive a 1.5 ml intrathecal injection as part of a combined spinal-epidural (CSE) analgesic technique. This contained either bupivacaine 2.5 mg plus fentanyl 25 microg (group F) or bupivacaine 2.5 mg plus diamorphine 250 microg (group D). Times of analgesic onset and offset were recorded, motor and proprioceptive assessments made and side-effects noted. Analgesic onset was not significantly different between the groups (group F, 8.0 min; group D, 9.5 min; P = 0.3) but time to first top-up request was significantly longer in the diamorphine group (group F, 73 min; group D, 101 min; P = 0.003). Motor loss, assessed by the modified Bromage score, was statistically but not clinically greater in the fentanyl group (P = 0.01). Maternal hypotension, pruritus, proprioceptive loss, nausea and fetal bradycardia were rare and not severe, and their incidences did not differ between groups. No respiratory depression was observed after CSE. This use of diamorphine was not associated with increased side-effects compared with fentanyl/bupivacaine, and it has a longer duration of action.
abstract_id: PUBMED:2014889
Epidural infusion of diamorphine with bupivacaine in labour. A comparison with fentanyl and bupivacaine. We have compared the analgesic effects of three epidural infusions in a randomised, double-blind study of 61 mothers in labour. An initial dose of bupivacaine 0.5% 8 ml was followed by either bupivacaine 0.125%, bupivacaine 0.125% with diamorphine 0.0025% or bupivacaine 0.125% with fentanyl 0.0002%. All infusions were run at a rate of 7.5 ml/hour. Analgesia was significantly better in both the groups receiving opioids. Diamorphine was shown to be the more effective supplement to bupivacaine. The 5% incidence of pruritus in the opioid groups was less than that reported by earlier authors.
abstract_id: PUBMED:8285333
A comparison of epidural diamorphine with intravenous patient-controlled analgesia using the Baxter infusor following caesarean section. In a randomised study of analgesia following Caesarean section, we compared the efficacy and side effects of on-demand epidural diamorphine 2.5 mg with intravenous patient-controlled analgesia using diamorphine from the Baxter infusor system. Pain scores fell more rapidly in the epidural group, but by the fourth hour, and thereafter, both techniques had a similar analgesic effect. The patient-controlled analgesia group used significantly more diamorphine (p < 0.001), median 62 mg (range 18-120 mg) compared to the epidural group, median 10 mg (range 2.5-20 mg), over a significantly longer time period (p < 0.001), median 54.25 h (range 38-68 h) compared to the epidural group, median 40.75 h (range 6-70 h). The frequency and severity of nausea, vomiting and pruritus were similar in the two groups, however, the patient-controlled analgesia group were more sedated during the first postoperative day. This reached statistical significance (p < 0.05) between 9-24 h. Overall satisfaction scores (0-100) were high, but the patient-controlled analgesia group scored significantly higher: mean 85.5 (SD 12.2) compared to mean 77.0 (SD 11.7) in the epidural group.
abstract_id: PUBMED:20806149
Patient-controlled intravenous analgesia with remifentanil as an alternative to epidural analgesia during labor: case series and discussion of medicolegal aspects. Background: Epidural analgesia is considered the standard method for labor analgesia because it has minimal negative impact on labor while providing effective analgesia. Labor analgesia in the absence of epidural analgesia is difficult to achieve with the commonly used analgesic interventions. If epidural analgesia is not feasible due to coagulation disorders, anticoagulation, inability to insert an epidural catheter or due to the mother's refusal to accept neuraxial analgesia, there is a need for interventions to cope with labor pain. So far, pethidine, diamorphine, meptazinol and spasmolytics remain the most widely used substances for IM and IV use. Unfortunately, in addition to not being very effective, these interventions may be associated with undesirable side effects for the parturient and the newborn. For a decade before it was introduced for labor analgesia, anaesthesiologists had experience with the unique properties of remifentanil in the settings of surgical anaesthesia and conscious sedation. Unfortunately, remifentanil is not licensed for administration to the pregnant patient, and it is unlikely that the manufacturers would consider the cost justified.
Methods: Therefore, relevant concerns, legal issues and precautions are discussed based on the presentation of case series and a protocol is presented on how the use of remifentanil can be safely implemented for labor analgesia in selected situations.
Results And Conclusions: Proper informed consent, appropriate monitoring for the mother and the newborn, one-to-one nursing or midwifery care as well as the availability of an attending physician experienced in neonatal resuscitation and an anaesthesiologist with experience regarding the use of remifentanil are important to ensure that this method retains its good reputation for obstetric analgesia.
abstract_id: PUBMED:2662803
Epidural diamorphine and bupivacaine in labour. A double-blind randomised study was performed to assess the analgesic effect of epidural diamorphine, administered with bupivacaine, on primigravid women in labour. Fifty patients received 0.25% bupivacaine 10 ml via the epidural catheter as their initial dose; patients in Group 1 received diamorphine 5 mg with the bupivacaine. A 0.1% bupivacaine infusion was started at 10 minutes and bolus doses of bupivacaine were given if required. There was a significant reduction in rate of bupivacaine administration, pain scores at 20 and 30 minutes, number of supplements required, and degree of motor blockade in the diamorphine group.
Answer: Yes, the clinical efficacy of epidural diamorphine is concentration-dependent when used as analgesia for labour. This conclusion is drawn from a study where 62 women in active labour were recruited to determine whether the mode of action of diamorphine in the epidural space is concentration-dependent. The study found no significant difference in the median effective concentration (EC50) for diamorphine between the two groups tested, which received either a high-volume, low-concentration solution or a low-volume, high-concentration solution. The overall EC50 for all data was 244.2 microg ml(-1), indicating that the analgesic effect of diamorphine is indeed concentration-dependent (PUBMED:15556964). |
Instruction: Does routine pin re-torquing of patients wearing a halo-thoracic orthosis reduce the need for pin replacement?
Abstracts:
abstract_id: PUBMED:24812117
Does routine pin re-torquing of patients wearing a halo-thoracic orthosis reduce the need for pin replacement? Background: The halo orthosis is a treatment option currently used in Australia for cervical spine immobilisation following trauma, fracture and post surgical stabilisation. In a previous study, the authors reported halo pin replacement to be a common complication. The aim of this study was to investigate the potential correlation between routine halo pin re-torquing and the incidence of pin replacement.
Case Description And Methods: A retrospective case series study was undertaken. A total of 258 charts were reviewed, with 170 patients included in the study. Patients were fitted with a Bremer HALO System with the initial application torque maintained by routine re-torquing throughout the duration of wear.
Findings And Outcomes: A total of 680 pins (4 per patient) were inserted during the initial application of the halo orthoses, with only six pins replaced (0.88%) throughout the duration of the study.
Conclusion: The findings from this study demonstrate a potential correlation between routinely re-torquing halo pins and decreasing the incidence of pin replacement.
Clinical Relevance: This case series study has identified a potential improvement in clinical management of patients wearing a halo-thoracic orthosis.
abstract_id: PUBMED:32915397
Halo pin positioning in the temporal bone; parameters for safe halo gravity traction. Introduction: Halo gravity traction (HGT) is increasingly used pre-operatively in the treatment of children with complex spinal deformities. However, the design of the current halo crowns is not optimal for that purpose. To prevent pin loosening and to avoid visible scars, fixation to the temporal area would be preferable. This study aims to determine whether this area could be safe for positioning HGT pins.
Methods: A custom-made traction setup plus three human cadaver skulls were used to determine the optimal pin location, the resistance to migration and the load to failure on the temporal bone. A custom-made spring-loaded pin with an adjustable axial force was used. For the migration experiment, this pin was positioned at 10 predefined anatomical areas in the temporal region of adult cadaver skulls, with different predefined axial forces. Subsequently, the traction force was applied and increased until migration occurred. For the load-to-failure experiment, the pin was positioned on the most applicable temporal location on both sides of the skull.
Results: The most optimal position was identified as just antero-cranial to the auricle. The resistance to migration was clearly related to the axial tightening force. With an axial force of only 100 N, which corresponds to a torque of 0.06 Nm (0.5 in-lb), a vertical traction force of at least 200 N was needed for pin migration. A tightening force of 200 N (torque 0.2 Nm or 2 in-lb) was sufficient to resist migration at the maximal applied force of 360 N for all but one of the pins. The load-to-failure experiment showed a failure range of 780-1270 N axial force, which was not obviously related to skull thickness.
Conclusion: The temporal bone area of adult skulls allows axial tightening forces that are well above those needed for HGT in children. The generally applied torque of 0.5 Nm (4 in-lb) which corresponds to about 350 N axial force, appeared well below the failure load of these skulls and much higher than needed for firm fixation.
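As a unit-conversion aside (standard conversion, not taken from the abstract): 1 inch-pound is approximately 0.113 N·m, so the quoted 0.5, 2 and 4 in-lb correspond to roughly 0.06, 0.23 and 0.45 N·m, consistent with the rounded torque values of 0.06, 0.2 and 0.5 Nm given above.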
abstract_id: PUBMED:9383858
Pin-site complications of the halo thoracic brace with routine pin re-tightening. Study Design: Retrospective analysis with historic controls.
Objectives: To analyze pin-site complications in a large series of halo thoracic braces in which regular re-tightening of the pins was performed.
Summary Of Background Data: Perry and Nickel first described the use of the halo thoracic brace in 1959 for cervical immobilization. Its use has been extended successfully to cervical fracture management.
Methods: A total of 266 commercially available halo thoracic braces were fitted using a standard technique. All pins were tightened routinely at 24 hours and at 1 week after application. Two data sources, prosthetic department records and patients' medical records, were analyzed.
Results: Six percent of patients had a pin-site infection; 3.7% had loose pins, and 1.1% reported pin-site pain. No subdural, intradural, or extradural abscess or cerebrospinal fluid leaks occurred. A total of 2.6% of halo rings slipped off, and 2.3% of patients experienced severe headaches.
Conclusions: Low rates of pin-site infection, loosening, and pain were achieved through routine re-tightening of the pins. Pin re-tightening, at 24 hours and at 1 week after application, is a safe and effective method to decrease pin-site complications.
abstract_id: PUBMED:23741545
Reduction of halo pin site morbidity with a new pin care regimen. Study Design: A retrospective analysis of halo device associated morbidity over a 4-year period.
Purpose: To assess the impact of a new pin care regimen on halo pin site related morbidity.
Overview Of Literature: Halo orthosis treatment still has a role in cervical spine pathology, despite increasing possibilities of open surgical treatment. Published figures for pin site infection range from 12% to 22% with pin loosening from 7% to 50%.
Methods: We assessed the outcome of a new pin care regimen on morbidity associated with halo spinal orthoses, using a retrospective cohort study from 2001 to 2004. In the last two years, our pin care regimen was changed. This involved pin site care using chlorhexidine and regular torque checking as part of a standard protocol. Previously, povidone iodine was used as skin preparation in theatre, followed by regular sterile saline cleansing when pin sites became encrusted with blood.
Results: There were 37 patients in the series, the median age was 49 (range, 22-83) and 20 patients were male. The overall infection rate prior to the new pin care protocol was 30% (n=6) and after the introduction, it dropped to 5.9% (n=1). This difference was statistically significant (p<0.05). Pin loosening occurred in one patient in the group prior to the formal pin care protocol (3%) and none thereafter.
Conclusions: Reduced morbidity from halo use can be achieved with a modified pin cleansing and tightening regimen.
abstract_id: PUBMED:35141604
Septic cavernous sinus thrombosis secondary to halo vest pin site infection. Background: Pin site infection is one of the frequent complications of halo crown application, which can be easily handled if addressed early. However, if this issue is neglected, serious infectious events may quickly transpire. Among the previously described scenarios in the medical literature, we did not encounter a case involving infectious cavernous sinus thrombosis.
Case Description: The authors present a middle-aged man who arrived at our clinic with acute left peri-orbital swelling, proptosis, and ophthalmoplegia, which had occurred subsequent to an untreated halo pin site infection. With a diagnosis of septic cavernous sinus thrombosis (CST), appropriate antibiotic and anticoagulant therapies were administered.
Outcome: With the continuation of this conservative treatment regimen, he was successfully managed with no residual neurological consequences.
Conclusion: Halo vest orthosis is an appropriately tolerated upper cervical spinal stabilizing device that is commonly used worldwide. Septic CST that is secondary to a halo vest pin site infection has not been previously described in the medical literature. In the case of a neglected pin site infection, with demonstration of ipsilateral eyelid edema and proptosis, septic CST should be immediately considered and treated vigorously with antibiotics and anticoagulant therapies.
abstract_id: PUBMED:9796687
Pin force measurement in a halo-vest orthosis, in vivo. The halo-vest is an orthosis commonly used to immobilize and protect the cervical spine. The primary complications associated with the halo-vest have been attributed to cranial pin loosening. However, the pin force history during day-to-day halo-vest wear has not previously been reported. This paper presents a new technique developed to monitor cranial pin forces in a halo-vest orthosis, in vivo. A strain gaged, open-ring halo was used to measure the compressive and shear forces produced at the posterior pin tips. The strain gages measured the bending moments produced by these forces without compromising the structural integrity of the halo-vest system. The prototype halo measured the compressive and shear force components with a resolution of +/- 15 and +/- 10 N, respectively. To test the feasibility and durability of the device, it was applied to one patient requiring treatment with a halo-vest orthosis. At the time of halo-vest application, the mean compressive force in the two posterior pins was 368 N. Over the 3 month treatment period, the compressive forces decreased by a mean of 88%. The shear forces were relatively insignificant. Using this technology future work will be aimed at determining the causes of pin loosening, optimizing vest and pin designs, and investigating the safety of more rapid rehabilitation.
abstract_id: PUBMED:10828912
Pin loosening in a halo-vest orthosis: a biomechanical study. Study Design: The cranial pin force history of a halo-vest orthosis was measured using an instrumented halo in a clinical study with three patients. Pin force values at the time of halo-vest application and at subsequent clinical visits during the halo-vest wear period were compared.
Objectives: To document the pin force reduction in the cranial pins of a halo-vest orthosis in vivo.
Summary Of Background Data: The halo-vest is an orthosis commonly used to immobilize and protect the cervical spine. An important problem with halo-vest use is pin loosening. There have been no previous reports of pin force history in vivo.
Methods: A custom-built strain-gauged, open-ring halo was used to measure the compressive force and superiorly-inferiorly directed shear forces produced at the tips of the two posterior pins. The instrumented halo was applied to three patients with cervical spine fractures. Pin force measurements were recorded at the time of halo application and at subsequent follow-up visits during the entire treatment period.
Results: A mean compressive force of 343 +/- 64.6 N was produced at the pin tips during halo application with the patient in a supine position. On average, the compressive forces decreased by 83% (P = 0.002) during the typical halo-vest wear period. The compressive forces were substantially greater than the shear forces, which averaged only -11+/-30.2 N at the time of halo application and which did not change significantly with time.
Conclusions: The study confirmed the hypothesized decrease in the compressive pin forces with time. All patients had developed at least some clinical symptoms of pin loosening at the time of halo-vest removal.
abstract_id: PUBMED:15968303
Pin-site myiasis: a rare complication of halo orthosis. Study Design: Case report.
Objective: To report a rare complication following halo placement for cervical fracture.
Setting: United States University Teaching Hospital.
Case Report: A 39-year-old woman who sustained a spinal cord injury from a C6-7 fracture underwent halo placement. She subsequently developed an infection adjacent to the right posterior pin, which then became infected with Diptera larvae (maggots), necessitating removal of the pin and debridement of the wound site.
Conclusion: Halo orthosis continues to be an effective means of immobilizing the cervical spine. Incidence of complications ranges from 6.4 to 36.0% of cases. Commonly reported complications include pin-site infection, pin penetration, pin loosening, pressure sores, nerve injury, bleeding, and head ring migration. Pin-site myiasis is rare, with no known reports found in the literature. Poor pin-site care by the patient and her failure to keep follow-up appointments after development of the initial infection likely contributed to the development of this complication.
abstract_id: PUBMED:2781390
The effect of angled insertion on halo pin fixation. This study evaluated the effect of angled insertion of halo pins on the biomechanical characteristics of the halo pin-calvarium complex. Halo pins were inserted into isolated calvarium sections at 90 degrees, 75 degrees, and 60 degrees to the surface of the bone at an insertional torque of 0.68 N-m (6 inch-pounds). Initial rigidity, load at failure, and deformation at failure of the pin-bone complex were assessed during transverse shear loading. The structural properties of the pin-bone complex were maximized at loads approaching failure when pins were inserted perpendicular (90 degrees) to the bony surface and significantly decreased at more acute angles of insertion. Perpendicular insertion of halo pins maximizes the structural properties of the complex formed by the halo pin and the calvarium. This improved structural behavior may minimize the incidence of pin loosening clinically, and may reduce the frequency of other complications currently associated with the use of the halo orthosis.
abstract_id: PUBMED:2308671
The effect of pin location on the rigidity of the halo pin-bone interface. Optimal insertion of halo fixation pins to maximize the rigidity of the interface between the halo pins and the outer table of the skull is important in reducing the incidence of pin loosening. An in vitro biomechanical study was performed using cadaver skulls to investigate the effects of pin location on the rigidity of this pin-bone interface. Halo pins were inserted at nine positions within a recommended "safe zone" for pin insertion. It was found that the rigidity of the pin-bone interface progressively decreased as pins were inserted more superiorly on the calvaria. The rigidity of the interface did not change significantly when the location of the pins was varied in the horizontal axis. This reduction in interface rigidity associated with inserting pins more superiorly on the skull may be related to an increase in the angle of insertion of the pins with respect to the surface of the calvaria. Based on this study, a change in the technique of halo pin insertion is recommended. Pins should be placed as inferiorly as possible, close to the supraorbital ridge, to achieve the most perpendicular angle of insertion and thus the most rigid fixation. The improved rigidity obtained with perpendicular pin insertion may minimize the rate of pin loosening and other complications associated with use of the halo orthosis.
Answer: Yes, routine pin re-torquing of patients wearing a halo-thoracic orthosis appears to reduce the need for pin replacement. A study demonstrated that routine re-torquing of halo pins decreased the incidence of pin replacement, with only six pins replaced out of 680 (0.88%) throughout the duration of the study, suggesting a potential correlation between routinely re-torquing halo pins and a decrease in the incidence of pin replacement (PUBMED:24812117). Another study reported low rates of pin-site infection, loosening, and pain through routine re-tightening of the pins at 24 hours and at 1 week after application, indicating that this method is safe and effective in decreasing pin-site complications (PUBMED:9383858). Additionally, a retrospective analysis of halo device-associated morbidity over a 4-year period found that a new pin care regimen, which included regular torque checking, significantly reduced the rate of pin site infection from 30% to 5.9% and eliminated pin loosening after its introduction (PUBMED:23741545). These findings collectively suggest that routine pin re-torquing can be an effective measure to reduce the need for pin replacement and other complications associated with halo-thoracic orthosis use. |
Instruction: Long-term diuretic treatment in heart failure: are there differences between furosemide and torasemide?
Abstracts:
abstract_id: PUBMED:12360682
Long-term diuretic treatment in heart failure: are there differences between furosemide and torasemide? Background: Treatment for congestive heart failure (CHF) is an important factor in rising health care costs, especially in patients requiring repeated hospitalisations. Diuretics remain the most frequently utilized drugs in symptomatic patients. In this study, the long-term outcomes under furosemide and torasemide, two loop diuretics with different pharmacokinetic properties, were evaluated over one year in an ambulatory care setting.
Aims: Comparison of hospitalization rates and estimated costs under long-term treatment with furosemide and torasemide in patients with CHF.
Methods: Retrospective analysis of disease course and resource utilization in 222 ambulatory patients receiving long-term treatment with furosemide (n = 111) or torasemide (n = 111). Data were also compared to those of a similar study including 1000 patients in Germany.
Results: Patients receiving long-term treatment with torasemide had a lower hospitalisation rate (3.6%) compared to patients on furosemide (5.4%). Corresponding hospitalization rates in the German study were 1.4% under torasemide and 2% under furosemide. The higher hospitalisation rates in Swiss patients could be explained by a higher average age (75 years vs. 69 years) and a longer duration of symptomatic heart failure (4.1 yrs vs. 0.7 yrs). Cost estimates based on the average number of hospital days (0.54 under torasemide compared to 1.05 under furosemide) indicated that the financial burden could be halved by a long-term torasemide treatment.
Conclusion: Torasemide with its more complete and less variable bioavailability offers potential clinical and economic advantages over furosemide in the long-term treatment in patients with CHF.
abstract_id: PUBMED:38188263
Diuretic resistance and the role of albumin in congestive heart failure. Diuresis with loop diuretics is the mainstay treatment for volume optimization in patients with congestive heart failure, in which perfusion and volume expansion play a crucial role. There are robust guidelines with extensive evidence for the management of heart failure; however, clear guidance is needed for patients who do not respond to standard diuretic treatment. Diuretic resistance (DR) can be defined as an insufficient quantity of natriuresis with proper diuretic therapy. A combination of diuretic regimens is used to overcome DR and, more recently, SGLT2 inhibitors have been shown to improve diuresis. Despite DR being relatively common, it is challenging to treat and there remains a notable lack of substantial data guiding its management. Moreover, DR has been linked with poor prognosis. This review aims to present the multiple approaches to treating patients with DR and the importance of intravascular volume expansion in the response to therapy.
abstract_id: PUBMED:30649675
Comparative Analysis of Long-Term Outcomes of Torasemide and Furosemide in Heart Failure Patients in Heart Failure Registries of the European Society of Cardiology. Purpose: Current clinical recommendations do not emphasise the superiority of any particular diuretic, but available reports are very encouraging and suggest beneficial effects of torasemide. This study aimed to compare the effect of torasemide and furosemide on long-term outcomes and New York Heart Association (NYHA) class change in patients with chronic heart failure (HF).
Methods: Of 2019 patients enrolled in Polish parts of the heart failure registries of the European Society of Cardiology (Pilot and Long-Term), 1440 patients treated with a loop diuretic were included in the analysis. The main analysis was performed on matched cohorts of HF patients treated with furosemide and torasemide using propensity score matching.
Results: Torasemide was associated with a similar occurrence of the primary endpoint (all-cause death; 9.8% vs. 14.1%; p = 0.13) and a 23.8% risk reduction of the secondary endpoint (a composite of all-cause death or hospitalisation for worsening HF; 26.4% vs. 34.7%; p = 0.04). Treatment with both torasemide and furosemide in combination was associated with the most frequent occurrence of the primary (23.8%) and secondary (59.2%) endpoints, a statistically significant difference. In the matched cohort after 12 months, NYHA class was higher in the furosemide group (p = 0.04), while furosemide use was associated with a higher risk (20.0% vs. 12.9%; p = 0.03) of worsening by ≥ 1 NYHA class. Torasemide use had a positive impact on primary endpoint occurrence, especially in younger patients (aged < 65 years) and those with dilated cardiomyopathy.
Conclusions: Our findings contribute to the body of research on the optimal diuretic choice. Torasemide may have advantageous influence on NYHA class and long-term outcomes of HF patients, especially younger patients or those with dilated cardiomyopathy, but it needs further investigations in prospective randomised trials.
abstract_id: PUBMED:21623723
Torasemide is the effective loop diuretic for long-term therapy of arterial hypertension. Torasemide is a loop diuretic and has been used for the treatment of both acute and chronic congestive heart failure (CHF) and arterial hypertension (AH). Torasemide is similar to other loop diuretics in terms of its mechanism of diuretic action. It has higher bioavailability (>80%) and a longer elimination half-life (3 to 4 hours) than furosemide. In the treatment of CHF, torasemide (5 to 20 mg/day) has been shown to be an effective diuretic. Non-diuretic dosages (2.5 to 5 mg/day) of torasemide have been used to treat essential AH, both as monotherapy and in combination with other antihypertensive agents. When used in these dosages, torasemide lowers diastolic blood pressure to below 90 mm Hg in 70 to 80% of patients. The antihypertensive efficacy of torasemide is similar to that of thiazides and related compounds. Thus low-dose torasemide constitutes an alternative to thiazides in the treatment of essential AH.
abstract_id: PUBMED:35620081
Renin-Angiotensin-Aldosterone System Activation and Diuretic Response in Ambulatory Patients With Heart Failure. Rationale & Objective: Heart failure treatment relies on loop diuretics to induce natriuresis and decongestion, but the therapy is often limited by diuretic resistance. We explored the association of renin-angiotensin-aldosterone system (RAAS) activation with diuretic response.
Study Design: Observational cohort.
Setting & Population: Euvolemic ambulatory adults with chronic heart failure were administered torsemide in a monitored environment.
Predictors: Plasma total renin, active renin, angiotensinogen, and aldosterone levels. Urine total renin and angiotensinogen levels.
Outcomes: Sodium output per doubling of diuretic dose and fractional excretion of sodium per doubling of diuretic dose.
Analytical Approach: Robust linear regression models estimated the associations of each RAAS intermediate with outcomes.
Results: The analysis included 56 participants, and the median age was 65 years; 50% were women, and 41% were Black. The median home diuretic dose was 80-mg furosemide equivalents. In unadjusted and multivariable-adjusted models, higher levels of RAAS measures were generally associated with lower diuretic efficiency. Higher plasma total renin remained significantly associated with lower sodium output per doubling of diuretic dose (β = -0.41 [-0.76, -0.059] per SD change) with adjustment; higher plasma total and active renin were significantly associated with lower fractional excretion of sodium per doubling of diuretic dose (β = -0.48 [-0.83, -0.14] and β = -0.51 [-0.95, -0.08], respectively) in adjusted models. Stratification by RAAS inhibitor use did not substantially alter these associations.
Limitations: Small sample size; highly selected participants; associations may not be causal.
Conclusions: Among multiple measures of RAAS activation, higher plasma total and active renin levels were consistently associated with lower diuretic response. These findings highlight the potential drivers of diuretic resistance and underscore the need for high-quality trials of decongestive therapy enhanced by RAAS blockade.
abstract_id: PUBMED:24585267
Diuretic response in acute heart failure: clinical characteristics and prognostic significance. Aim: Diminished diuretic response is common in patients with acute heart failure, although a clinically useful definition is lacking. Our aim was to investigate a practical, workable metric for diuretic response, examine associated patient characteristics and relationships with outcome.
Methods And Results: We examined diuretic response (defined as Δ weight kg/40 mg furosemide) in 1745 hospitalized acute heart failure patients from the PROTECT trial. Day 4 response was used to allow maximum differentiation in responsiveness and tailoring of diuretic doses to clinical response, following sensitivity analyses. We investigated predictors of diuretic response and relationships with outcome. The median diuretic response was -0.38 (-0.80 to -0.13) kg/40 mg furosemide. Poor diuretic response was independently associated with low systolic blood pressure, high blood urea nitrogen, diabetes, and atherosclerotic disease (all P < 0.05). Worse diuretic response independently predicted 180-day mortality (HR: 1.42; 95% CI: 1.11-1.81, P = 0.005), 60-day death or renal or cardiovascular rehospitalization (HR: 1.34; 95% CI: 1.14-1.59, P < 0.001) and 60-day HF rehospitalization (HR: 1.57; 95% CI: 1.24-2.01, P < 0.001) in multivariable models. The proposed metric (weight loss indexed to diuretic dose) better captures a dose-response relationship. Model diagnostics showed diuretic response provided essentially the same or slightly better prognostic information compared with its individual components (weight loss and diuretic dose) in this population, while providing a less biased, more easily interpreted signal.
Conclusions: Worse diuretic response was associated with more advanced heart failure, renal impairment, diabetes, atherosclerotic disease and in-hospital worsening heart failure, and predicted mortality and heart failure rehospitalization in this post hoc, hypothesis-generating study.
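The abstract above (PUBMED:24585267) defines diuretic response as weight change indexed to loop-diuretic dose (Δ weight in kg per 40 mg of furosemide) and reports a median of -0.38 kg/40 mg. The sketch below is a minimal, hypothetical illustration of how that metric could be computed and compared with the reported median; the patient values and the helper function are illustrative assumptions, not data or code from the trial.

```python
# Minimal, hypothetical sketch of the diuretic-response metric from PUBMED:24585267
# (weight change in kg per 40 mg of furosemide). All patient values are invented.

def diuretic_response(weight_change_kg: float, furosemide_dose_mg: float) -> float:
    """Weight change (kg) indexed to each 40 mg of furosemide administered."""
    if furosemide_dose_mg <= 0:
        raise ValueError("furosemide dose must be positive")
    return weight_change_kg / (furosemide_dose_mg / 40.0)

# Hypothetical patient: 1.5 kg weight loss by day 4 after 160 mg of furosemide.
response = diuretic_response(weight_change_kg=-1.5, furosemide_dose_mg=160.0)
print(f"Diuretic response: {response:.2f} kg per 40 mg furosemide")  # -0.38 after rounding

# The reported PROTECT median was -0.38 kg/40 mg; a less negative value suggests a poorer response.
STUDY_MEDIAN = -0.38
print("poorer than the study median" if response > STUDY_MEDIAN else "at or better than the study median")
```

Indexing weight change to the administered dose in this way distinguishes a small response to a large dose from a small response to a small dose, which is the dose-response idea the abstract describes.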
abstract_id: PUBMED:34674536
Diuretic Changes, Health Care Resource Utilization, and Clinical Outcomes for Heart Failure With Reduced Ejection Fraction: From the Change the Management of Patients With Heart Failure Registry. Background: Diuretics are a mainstay therapy for the symptomatic treatment of heart failure. However, in contemporary US outpatient practice, the degree to which diuretic dosing changes over time and the associations with clinical outcomes and health care resource utilization are unknown.
Methods: Among 3426 US outpatients with chronic heart failure with reduced ejection fraction in the Change the Management of Patients with Heart Failure registry who had complete medication data and were prescribed a loop diuretic, diuretic dose increase was defined as (1) a change to a total daily dose higher than the previous total daily dose, (2) the addition of metolazone to the regimen, or (3) a change from furosemide to either bumetanide or torsemide, with the change persisting for at least 7 days. Adjusted hazard ratios or rate ratios along with 95% CIs were reported for clinical outcomes among patients with an increase in oral diuretic dose versus no increase in diuretic dose.
Results: Overall, 796 (23%) had a diuretic dose increase (18 episodes per 100 patient-years). The proportions of patients with dyspnea at rest (38% versus 26%), dyspnea at exertion (79% versus 67%), orthopnea (32% versus 21%), edema (60% versus 43%), and weight gain (40% versus 23%) were significantly (all P <0.001) higher in the diuretic increase group. Baseline angiotensin-converting enzyme inhibitor/angiotensin receptor blocker use (hazard ratio, 0.75 [95% CI, 0.65-0.87]) was associated with a lower likelihood of diuretic dose increase over time. Patients with a diuretic dose increase had a significantly higher number of heart failure hospitalizations (rate ratio, 2.53 [95% CI, 2.10-3.05]), emergency department visits (rate ratio, 1.84 [95% CI, 1.56-2.17]), and home health visits (rate ratio, 1.88 [95% CI, 1.39-2.54]), but not all-cause mortality (hazard ratio, 1.10 [95% CI, 0.89-1.36]). Similarly, greater furosemide dose equivalent increases were associated with greater resource utilization but not with mortality, compared with smaller increases.
Conclusions: In this contemporary US registry, 1 in 4 patients with heart failure with reduced ejection fraction had outpatient escalation of diuretic therapy over longitudinal follow-up, and these patients were more likely to have signs/symptoms of congestion. Outpatient diuretic dose escalation of any magnitude was associated with heart failure hospitalizations and resource utilization, but not all-cause mortality.
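As a rough illustration of the registry definition quoted above (a dose increase is a higher total daily dose of the same agent, the addition of metolazone, or a switch from furosemide to bumetanide or torsemide, persisting for at least 7 days), the sketch below encodes that rule as a simple boolean check. The data structure, field names, and example regimens are assumptions made for this example and are not taken from the CHAMP-HF registry.

```python
# Hypothetical encoding of the registry-style "diuretic dose increase" rule.
# Field names and the example regimens are invented for illustration only.
from dataclasses import dataclass

@dataclass
class DiureticRegimen:
    loop_agent: str              # "furosemide", "bumetanide", or "torsemide"
    total_daily_dose_mg: float   # total daily dose of the loop agent
    metolazone: bool             # is metolazone part of the regimen?
    days_persisted: int          # days the regimen was maintained

def is_dose_increase(before: DiureticRegimen, after: DiureticRegimen) -> bool:
    """True if any of the three criteria is met and the change persisted >= 7 days."""
    if after.days_persisted < 7:
        return False
    higher_dose = (after.loop_agent == before.loop_agent
                   and after.total_daily_dose_mg > before.total_daily_dose_mg)
    metolazone_added = after.metolazone and not before.metolazone
    switched = (before.loop_agent == "furosemide"
                and after.loop_agent in ("bumetanide", "torsemide"))
    return higher_dose or metolazone_added or switched

baseline = DiureticRegimen("furosemide", 80, False, 30)
follow_up = DiureticRegimen("torsemide", 40, False, 14)
print(is_dose_increase(baseline, follow_up))  # True: switch from furosemide to torsemide
```

The rule deliberately compares doses only within the same agent, so no furosemide-equivalence conversion factors need to be assumed for this sketch.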
abstract_id: PUBMED:33714745
Pragmatic Design of Randomized Clinical Trials for Heart Failure: Rationale and Design of the TRANSFORM-HF Trial. Randomized clinical trials are the foundation of evidence-based medicine and central to practice guidelines and patient care decisions. Nonetheless, randomized trials in heart failure (HF) populations have become increasingly difficult to conduct and are frequently associated with slow patient enrollment, highly selected populations, extensive data collection, and high costs. The traditional model for HF trials has become particularly difficult to execute in the United States, where challenges to site-based research have frequently led to modest U.S. representation in global trials. In this context, the TRANSFORM-HF (Torsemide Comparison with Furosemide for Management of Heart Failure) trial aims to overcome traditional trial challenges and compare the effects of torsemide versus furosemide among patients with HF in the United States. Loop diuretic agents are regularly used by most patients with HF and practice guidelines recommend optimal use of diuretic agents as key to a successful treatment strategy. Long-time clinical experience has contributed to dominant use of furosemide for loop diuretic therapy, although preclinical and small clinical studies suggest potential advantages of torsemide. However, due to the lack of appropriately powered clinical outcome studies, there is insufficient evidence to conclude that torsemide should be routinely recommended over furosemide. Given this gap in knowledge and the fundamental role of loop diuretic agents in HF care, the TRANSFORM-HF trial was designed as a prospective, randomized, event-driven, pragmatic, comparative-effectiveness study to definitively compare the effect of a treatment strategy of torsemide versus furosemide on long-term mortality, hospitalization, and patient-reported outcomes among patients with HF. (TRANSFORM-HF: ToRsemide compArisoN With furoSemide FOR Management of Heart Failure [TRANSFORM-HF]; NCT03296813).
abstract_id: PUBMED:8435376
Central hemodynamic effects of diuretic therapy in chronic heart failure. In chronic heart failure diuretic drugs improve central hemodynamic variables and cardiac pumping secondary to altered plasma and extracellular volumes; humoral markers of these changes include increased plasma renin and aldosterone levels. The latter increases are maximal over the first week but decline with chronic therapy. The plasma alpha-ANP levels show a reciprocal effect; these data are compatible with a rapid contraction of the plasma volume which is sustained during chronic therapy. The acute hemodynamic actions of diuretic agents reflect both immediate and direct vascular actions and also effects secondary to diuresis (volume redistribution). At rest substantial reductions in pulmonary "wedge" pressure (-29%), with a consequent fall in cardiac output (-10%), are described. Total systemic vascular resistance initially increases but "reverse autoregulation" over subsequent weeks returns this elevation gradually towards control values. Tolerance to these initial hemodynamic effects does not occur with maintained therapy; moreover, echocardiographic markers of contractility and exercise capacity may increase. The early venodilator effects of diuretic drugs can be attributed to prostaglandin release and the initial pressor actions to activation of the renin angiotensin system; these vascular actions may have limited relevance to long-term beneficial effects on hemodynamics. Direct pulmonary vasodilation and improved pulmonary compliance remain an interesting finding. Although most patients are both symptomatically and hemodynamically improved at rest, the actions during exercise are more varied. Some individuals with severely impaired left ventricular function show little hemodynamic improvement, whereas those with milder dysfunction usually benefit; in the main this is probably related to the latter being on a steeper cardiac function curve.(ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:29387462
Dilemmas in the Dosing of Heart Failure Drugs: Titrating Diuretics in Chronic Heart Failure. Despite advances in medical therapy over the past few decades, the incidence of heart failure hospitalisation continues to rise. Diuretics are the most common therapy used to treat heart failure as they relieve congestion. However, there is a lack of guidance on how to best use these medications. Guidelines support the use of diuretics at the lowest clinically effective dose but do not specify a diuretic strategy beyond that. Here we review the diuretics available for treatment, potential mechanisms of diuretic resistance and ways to address this in the ambulatory setting, and review tools that have been developed to help guide diuretic use in the treatment of chronic heart failure.
Answer: Long-term diuretic treatment in heart failure shows differences between furosemide and torasemide. Studies have indicated that patients receiving long-term treatment with torasemide had a lower hospitalization rate compared to those on furosemide. For instance, one study reported hospitalization rates of 3.6% for torasemide versus 5.4% for furosemide, suggesting potential clinical and economic advantages of torasemide over furosemide in the long-term treatment of patients with congestive heart failure (CHF) (PUBMED:12360682).
Another study comparing the effect of torasemide and furosemide on long-term outcomes and New York Heart Association (NYHA) class change in patients with chronic heart failure found that torasemide was associated with a similar primary endpoint (all-cause death) occurrence and a risk reduction of the secondary endpoint (a composite of all-cause death or hospitalization for worsening HF). Additionally, treatment with torasemide had a positive impact on NYHA class and long-term outcomes, particularly in younger patients and those with dilated cardiomyopathy (PUBMED:30649675).
Torasemide has also been shown to be an effective loop diuretic for long-term therapy of arterial hypertension (AH), with higher bioavailability and a longer elimination half-life than furosemide, and it has been effective in treating both acute and chronic CHF (PUBMED:21623723).
In terms of diuretic resistance, which can be a challenge in the management of heart failure, a combination of diuretic regimens and SGLT2 inhibitors has been shown to improve diuresis. However, there is a notable lack of substantial data guiding the management of diuretic resistance (PUBMED:38188263).
Overall, the evidence suggests that torasemide may offer clinical benefits over furosemide in the long-term management of heart failure, particularly in reducing hospitalization rates and improving NYHA class. However, further investigations in prospective randomized trials are needed to confirm these findings (PUBMED:30649675). |
Instruction: The use of fresh frozen plasma after major hepatic resection for colorectal metastasis: is there a standard for transfusion?
Abstracts:
abstract_id: PUBMED:12648692
The use of fresh frozen plasma after major hepatic resection for colorectal metastasis: is there a standard for transfusion? Background: Major hepatic resection is indicated for selected patients with colorectal metastasis to the liver. Transfusion of fresh frozen plasma (FFP) might be required after major hepatectomy because of blood loss or coagulopathy, but there are no standard criteria for the use of FFP in this setting.
Methods: We identified 260 patients from our prospective database who underwent major (≥3 Couinaud segments) hepatectomy between May 1997 and February 2001 for colorectal metastasis. FFP use was determined and tested for its relationship to clinical and pathologic factors. A survey on FFP use was sent to 12 other hepatobiliary centers worldwide.
Results: There were 142 (55%) men, 118 (45%) women, and the median age was 63 years. The most common hepatic resections performed were right lobectomy (37%) and extended right lobectomy (33%). There were 83 (32%) patients who received FFP. In these patients, a total of 405 units of FFP were administered with a median of 4 units. The majority of patients who received FFP were transfused within the first two postoperative days, while there were only five (2%) patients who initially received FFP beyond that time. FFP was administered for a median prothrombin time of 16.9 seconds. Only one (0.4%) patient required reoperation for bleeding. Right lobectomy and extended right lobectomy were found to predict FFP use on multivariate analysis. Postoperative complications did not correlate with FFP use.
Conclusions: There is no universal standard for FFP use following major hepatic resection for colorectal metastasis. Our criterion of a prothrombin time of 16-18 seconds is conservative but only rarely results in reoperation for bleeding. Prospective evaluation of a higher threshold for FFP administration, such as an International Normalized Ratio of 2.0, should be performed to better define the guidelines for FFP use in patients undergoing major hepatectomy who have normal underlying hepatic parenchyma.
abstract_id: PUBMED:23749932
Negative impact of fresh-frozen plasma transfusion on prognosis after hepatic resection for liver metastases from colorectal cancer. Background: In perioperative management of hepatic resection for colorectal cancer liver metastasis (CRLM), excessive blood loss and blood transfusion greatly influence postoperative complications and prognosis of the patients. We evaluated the influence of the use of blood products on prognosis of patients with CRLM.
Patients And Methods: The subjects of this study were 65 patients who underwent elective hepatic resection between January 2001 and April 2011 for CRLM without distant metastasis or other malignancy. We retrospectively investigated the influence of the use of blood products, including red cell concentrate (RC) and fresh frozen plasma (FFP), and clinical variables on overall survival.
Results: In univariate analysis, bilobar distribution (p=0.0332), more than four lymph node metastases of the primary cancer (p=0.0155), perioperative RC use (p=0.0205), and perioperative FFP use (p=0.0065) were positively associated with poor overall survival rate. In multivariate analysis, bilobar distribution (p=0.0012), more than four lymph node metastases of the primary cancer (p=0.0171), and perioperative FFP use (p=0.0091), were independent risk factors for poor overall survival rate.
Conclusion: The use of FFP is associated with worse overall survival after elective hepatic resection for patients with CRLM.
abstract_id: PUBMED:24023348
Negative impact of fresh-frozen plasma transfusion on prognosis of pancreatic ductal adenocarcinoma after pancreatic resection. Background: Excessive blood loss and blood transfusion may influence postoperative complications and prognosis of patients after pancreatic resection. We evaluated the influence of blood products use on postoperative recurrence and outcome of patients with pancreatic ductal adenocarcinoma.
Patients And Methods: The study included 82 patients who underwent elective pancreatic resections for pancreatic ductal adenocarcinoma without distant metastasis or other malignancies between January 2001 and December 2010. We retrospectively investigated the influence of the use of perioperative blood products including red cell concentrate, fresh-frozen plasma (FFP), and albumin preparation, and clinical variables regarding disease-free and overall survival.
Results: In disease-free survival, serum carcinoembryonic antigen more than 10 ng/ml (p=0.015), serum carbohydrate antigen 19-9 (CA19-9) more than 200 U/ml (p=0.0032), R1 resection (p=0.005), and FFP transfusion were independent risk factors for cancer recurrence in the Cox proportional regression model; pancreaticoduodenectomy (p=0.057) and advanced tumor stage (p=0.083) tended to be associated with poor disease-free survival but were not statistically significant. In overall survival, male gender (p=0.012), advanced tumor stage (p=0.005), serum CA19-9 more than 200 U/ml (p<0.001), and FFP transfusion (p=0.003) were positively associated with poor overall survival in the Cox proportional regression model.
Conclusion: FFP transfusion is associated with poor therapeutic outcome after elective pancreatic resection for pancreatic ductal adenocarcinoma.
abstract_id: PUBMED:28319941
Short-Term Outcomes after Simultaneous Colorectal and Major Hepatic Resection for Synchronous Colorectal Liver Metastases. Background/aims: Resection of the liver is the standard therapeutic approach for patients with hepatic metastasis and is the only therapy with curative potential. The optimal timing of surgical resection for synchronous metastases has remained controversial.
Methods: From January 1993 to December 2008, our strategy has been to use simultaneous resection for resectable synchronous colorectal and liver metastases. During this period, 115 patients underwent simultaneous colorectal and hepatic resection. We evaluated the short-term outcomes of these patients by reviewing operative and perioperative clinical data.
Results: In patients with simultaneous resection, there was no evidence of colorectal complications associated with major hepatectomy or of hepatic complications related to rectal resection. However, increased hepatic complications were apparent with major hepatectomy compared with minor hepatectomy (44% vs. 7.2%, p < 0.001), and patients with rectal resection had increased colorectal complications (23% in the rectal resection group vs. 5.3% in the colectomy group, p = 0.034).
Conclusions: Simultaneous major hepatectomy and rectal resection can increase the hepatic or colorectal morbidity, respectively. These patients may be considered for staged resections.
abstract_id: PUBMED:10026750
Reduction of transfusion requirements during major hepatic resection for metastatic disease. Background: Our purpose was to determine whether the combination of total liver vascular inflow occlusion (Pringle maneuver) and rapid hepatic transection with a clamp-crush technique results in significant reduction of blood loss and transfusion requirements during major hepatic resections.
Methods: A series of 49 adult patients underwent major hepatic resections for metastatic disease between April 1, 1992, and March 31, 1998. Group 1 patients (n = 15) had standard hilar dissection and finger-fracture hepatic transection without total liver inflow occlusion. Group 2 patients (n = 34) had total liver inflow occlusion and clamp-crush parenchymal transection.
Results: Median blood loss was 1600 mL for group 1 and 500 mL for group 2 (P = .001). Eleven (73%) patients in group 1 required intraoperative blood transfusion (median 2 units) compared with 7 (21%) in group 2 with a median of 0 units (P = .001 and P < .001, respectively). Of the 7 patients in group 2 who required transfusion, 3 had a preoperative hemoglobin below 10 g/dL, 1 required splenectomy for operative injury, and 1 underwent a concomitant complicated small bowel resection.
Conclusions: Major hepatic resections can be performed without transfusion of blood products when preoperative hemoglobin is above 10 g/dL and concomitant major surgical procedures are not required.
abstract_id: PUBMED:9035419
Surgical treatment of synchronous hepatic metastases of colorectal cancers. Simultaneous or delayed resection? The surgical strategy for resectable synchronous hepatic metastases of colorectal cancer remains controversial. The retrospective analysis of our series of resectable synchronous hepatic metastases focused on the percentage of simultaneous resections, the circumstances, the indications and the results of the one-step procedure compared to the two-step strategy. From January 1, 1982 to December 31, 1995, 129 patients were operated on for resection of hepatic metastases of colorectal cancer. Forty one patients (32%) presented with synchronous hepatic metastases, 20 of whom (49%) underwent simultaneous resection of the primary tumor and the hepatic metastases (simultaneous resection group: SR). For the other 21 patients (51%), the hepatic resection was delayed for a mean interval of 5.2 +/- 4.2 months (delayed resection group: DR). The mean age of the 2 groups was not significantly different (54 years versus 58 years). When the primary tumor was located on the ascending colon, the hepatic excision was performed simultaneously in 9 out of ten cases. The need for blood transfusion and the volume required were not significantly different between the two groups. The length of each surgical operation was comparable between the two groups (331 +/- 76 minutes SR vs 330 +/- 88 minutes DR). Postoperative complications were observed in 20% of patients in the SR group and 10% of patients in the DR group (no significative difference). There was no postoperative mortality in either group. Survival was 83%, 44% and 37% at 1, 2 and 3 years respectively in the SR group and 79%, 59% and 49% in the DR group, with no significant difference between the groups. These results show that simultaneous resection of the primary tumor and the hepatic metastases did not increase either morbidity or mortality in our study, and that it should be proposed especially to patients presenting with a primary tumor of the ascending colon with metastases resectable by means of a minor hepatectomy.
abstract_id: PUBMED:28169483
Prognostic significance of combined albumin-bilirubin and tumor-node-metastasis staging system in patients who underwent hepatic resection for hepatocellular carcinoma. Background: In recent years, the establishment of new staging systems for hepatocellular carcinoma (HCC) has been reported worldwide. The system combining albumin-bilirubin (ALBI) with tumor-node-metastasis stage, developed by the Liver Cancer Study Group of Japan, was called the ALBI-T score.
Methods: Patient data were retrospectively collected for 357 consecutive patients who had undergone hepatic resection for HCC with curative intent between January 2004 and December 2015. The overall survival and recurrence-free survival were compared by the Kaplan-Meier method, using different staging systems: the Japan integrated staging (JIS), modified JIS, and ALBI-T.
Results: Multivariate analysis identified five poor prognostic factors (higher age, poor differentiation, the presence of microvascular invasion, the presence of intrahepatic metastasis, and blood transfusion) that influenced overall survival, and four poor prognostic factors (the presence of intrahepatic metastasis, serum α-fetoprotein level, blood transfusion, and each staging system (JIS, modified JIS, and ALBI-T score)) that influenced recurrence-free survival. Each of these three staging systems was significantly prognostic for recurrence-free survival, but not for overall survival. The modified JIS score showed the lowest Akaike information criterion (AIC) value, indicating it had the best ability to predict overall survival compared with the other staging systems.
Conclusions: This retrospective analysis showed that, in post-hepatectomy patients with HCC, the ALBI-T score is predictive of worse recurrence-free survival, even when adjustments are made for other known predictors. However, modified JIS is better than ALBI-T in predicting overall survival.
abstract_id: PUBMED:9529482
An 8-year experience of hepatic resection: indications and outcome. Background: Most reports highlighting decreasing operative morbidity and mortality rates following hepatic resection have focused on the management of metastatic disease. Information on the full range of hepatic disease is lacking.
Methods: The indications for hepatic resection in a specialist hepatobiliary unit have been reviewed and the operative morbidity and mortality rates assessed.
Results: Among 129 patients undergoing 133 hepatic resections between October 1988 and September 1996, the principal indication for resection was hepatic malignancy (102 resections), metastatic in 66 cases. Other indications included contiguous tumour (n = 20), primary tumour (n = 16) and benign disease (n = 31). Some 116 procedures were classical anatomical resections. Blood transfusion was required in 40 per cent of cases but major morbidity occurred in 20 per cent. There were six deaths following surgery, five of which were due to hepatic failure and followed resection for malignancy or trauma. The 3-year survival rate in patients resected for colorectal metastases was 65 per cent.
Conclusion: This experience has demonstrated an increasing role for hepatic resection in a wide variety of hepatobiliary pathologies. Despite the low postoperative mortality rate, the significant risk of complications in the postoperative period serves to emphasize the need for careful selection of patients for such surgery, which should be undertaken in specialist centres.
abstract_id: PUBMED:17466484
Major resection of hepatic colorectal liver metastases in elderly patients - an aggressive approach is justified. Aims: With a progressively ageing population, increasing numbers of elderly patients will present with colorectal metastases and be referred for surgical resection. The aim of this study was to assess the safety of hepatic resection in patients over 70 years of age by comparing outcomes with those of a younger cohort of patients.
Methods: Forty-nine patients over 70 years of age who underwent hepatic resection of colorectal liver metastases were compared to 142 patients less than 70 years of age in terms of pre-, peri- and post-operative results, as well as long-term survival.
Results: Major resections were performed in 61% of the elderly group and 68% of the younger group. The two groups were comparable in terms of operative duration, transfusion rate, length of HDU stay and post-operative hospital stay. The elderly group had a non-significant increase in post-operative morbidity. The 30-day and 60-day/inpatient mortality rates were similar between the two groups (elderly 0% and 4%; younger 2% and 3%). Long-term disease-free survival was similar between elderly and younger patients.
Conclusion: This study confirms that an aggressive surgical policy towards colorectal metastases in elderly patients is associated with low peri-operative morbidity and mortality, as well as good long-term outcomes.
abstract_id: PUBMED:15168872
Anaesthesia management of major hepatic resections without application of allogeneic blood. Background: In a prospective study we used acute normovolemic haemodilution (ANH) together with low CVP anaesthesia in order to avoid allogeneic blood transfusion during major liver surgery.
Material And Methods: With institutional approval, 20 consecutive patients, ASA classification I-III, undergoing hepatic lobe resection (18 for metastatic cancer, 2 for hepatocellular cancer) in 2002 were enrolled into the prospective study. An epidural catheter was inserted in the thoracic region before surgery. General anaesthesia was induced and maintained with sevoflurane and fentanyl. Invasive haemodynamic monitoring was instituted. Blood was removed for ANH and circulating volume was replaced by infusion of colloids and crystalloid. The CVP was maintained at less than 5 mm Hg during hepatic parenchymal transection. After the procedure, the ventilated patients were transported to the ICU.
Results: The preoperative haematocrit value (Hct) allowed performance of ANH with blood removal of 1025 ± 357 g in all patients. The estimated blood loss was 825 ± 515 ml. None of the subjects received allogeneic blood during surgery. One patient received a transfusion of 2 units of allogeneic blood on the fifth day after right hepatectomy, during a second look for a perihepatic hematoma. The CVP was 3.8 ± 0.4 mmHg during liver resection. Postoperative Hct was 0.31 ± 0.02 and haemoglobin 108 ± 11 g/L. All patients were extubated on the first day after surgery. There was no hospital mortality.
Conclusions: We have shown that the routine use of new surgical techniques and low CVP anaesthesia in combination with ANH enabled avoidance of allogeneic blood transfusion in all patients during surgery.
Answer: The use of fresh frozen plasma (FFP) after major hepatic resection for colorectal metastasis does not have a universal standard for transfusion. A study identified 260 patients who underwent major hepatectomy for colorectal metastasis and found that FFP use was variable and not correlated with postoperative complications. The criterion used for FFP administration at the study center was a prothrombin time of 16-18 seconds, which rarely resulted in reoperation for bleeding. However, the study suggested that a prospective evaluation of a higher threshold for FFP administration, such as an International Normalized Ratio (INR) of 2.0, should be performed to better define guidelines for FFP use in patients undergoing major hepatectomy with normal underlying hepatic parenchyma (PUBMED:12648692).
Another study indicated that the use of FFP is associated with worse overall survival after elective hepatic resection for patients with colorectal cancer liver metastasis (CRLM). The study, which included 65 patients, found that perioperative FFP use was an independent risk factor for poor overall survival rate in multivariate analysis (PUBMED:23749932).
These findings suggest that while FFP may be used to manage blood loss or coagulopathy after major hepatic resection, its use is not standardized and may be associated with negative outcomes in terms of overall survival. Therefore, the decision to use FFP should be carefully considered, and further research is needed to establish clear guidelines for its administration in this clinical setting. |
Instruction: Do patients with right ventricular outflow tract ventricular arrhythmias have a normal right ventricular wall motion?
Abstracts:
abstract_id: PUBMED:15942177
Do patients with right ventricular outflow tract ventricular arrhythmias have a normal right ventricular wall motion? A quantitative analysis compared to normal subjects. Background/aim: Patients with ventricular ectopy from the right ventricular (RV) outflow tract (RVOT) are often referred for RV angiography to exclude disorders such as arrhythmogenic RV cardiomyopathy/dysplasia (ARVC/D). This is usually based on a qualitative assessment of the wall motion. We present a method to quantify the wall motion and to apply this method to compare patients with RVOT ectopy to normal subjects.
Methods: RV angiograms were analyzed from 19 normal subjects and 11 subjects with RVOT ventricular arrhythmias (RVOT arrhythmia subjects) who had no other clinical or other evidence for ARVC/D. By a newly developed computer-based method, RV contours were first traced from multiple frames spanning the entire cardiac cycle. The fractional change in area between contours was then calculated as a serial function of time and location to determine both total contour area change and timing of contour movement. Contour area strain, defined as the differential change in area between nearby regions, was also computed.
Results: The contour area change was greatest in the tricuspid valve region and least in the RVOT and midanterior regions. The onset of contraction was earliest in the RVOT region and latest in the apical, inferior, inferoapical, and subtricuspid valve regions. The contour strain was largest in superior tricuspid valve and inferior wall and near zero within the lateral tricuspid valve region. There were significant pairwise differences in contraction area, timing, and strain in the various regions. There were no significant differences between normal subjects and RVOT arrhythmia subjects.
Conclusions: The RV wall motion is nonuniform in contour area change, strain, and timing of motion. Patients with RVOT ventricular ectopy demonstrate wall motion parameters similar to those of normal subjects. This technique should be applicable in analyzing RV wall motion in patients suspected of having ARVC/D.
abstract_id: PUBMED:27957137
Right Ventricular Outflow Tract Arrhythmias: Benign Or Early Stage Arrhythmogenic Right Ventricular Cardiomyopathy/Dysplasia? Ventricular arrhythmias (VAs) arising from the right ventricular outflow tract (RVOT) are a common and heterogeneous entity. Idiopathic right ventricular arrhythmias (IdioVAs) are generally benign, with excellent ablation outcomes and long-term arrhythmia-free survival, and must be distinguished from other conditions associated with VAs arising from the right ventricle: the differential diagnosis with arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D) is therefore crucial because VAs are one of the most important causes of sudden cardiac death (SCD) in young individuals, even in the early stages of the disease. Radiofrequency catheter ablation (RFCA) is a current option for the treatment of VAs, but important differences must be considered in terms of indication, purposes and procedural strategies in the treatment of the two conditions. In this review, we comprehensively discuss clinical and electrophysiological features, and diagnostic and therapeutic techniques, in a comparative analysis of these two entities.
abstract_id: PUBMED:37511554
Electrical and Structural Insights into Right Ventricular Outflow Tract Arrhythmogenesis. The right ventricular outflow tract (RVOT) is the major origin of ventricular arrhythmias, including premature ventricular contractions, idiopathic ventricular arrhythmias, Brugada syndrome, torsade de pointes, long QT syndrome, and arrhythmogenic right ventricular cardiomyopathy. The RVOT has distinct developmental origins and cellular characteristics and a complex myocardial architecture with high shear wall stress, which may lead to its high vulnerability to arrhythmogenesis. RVOT myocytes are vulnerable to intracellular sodium and calcium overload due to calcium handling protein modulation, enhanced CaMKII activity, ryanodine receptor phosphorylation, and a higher cAMP level activated by predisposing factors or pathological conditions. A reduction in Cx43 and Scn5a expression may lead to electrical uncoupling in RVOT. The purpose of this review is to update the current understanding of the cellular and molecular mechanisms of RVOT arrhythmogenesis.
abstract_id: PUBMED:27633251
Ventricular fibrillation treated by cryotherapy to the right ventricular outflow tract: a case report. Background: Arrhythmias originating from the right ventricular outflow tract are generally considered benign but cases of cardiac arrest have been described, usually associated with polymorphic ventricular tachycardia or extrasystoles with short coupling intervals.
Case Presentation: We report the case of a 54-year-old Caucasian woman with symptomatic right ventricular outflow tract arrhythmias without structural heart disease who suffered a ventricular fibrillation arrest without prior malignant clinical features. Cryoablation was performed and an implantable cardioverter defibrillator was implanted. She has since been free of arrhythmia for 7 years and has asked that the implantable cardioverter defibrillator not be replaced when the battery becomes depleted.
Conclusions: Although usually benign, right ventricular outflow tract tachycardia can be life-threatening. Even the most malignant cases can be cured by ablation.
abstract_id: PUBMED:27957079
Treatment Or Cure Of Right Ventricular Outflow Tract Tachycardia. Right ventricular outflow tract (RVOT) ventricular tachycardias (VT) occur in the absence of structural heart disease and are called idiopathic ventricular arrhythmias. These arrhythmias are thought to be produced by adenosine-sensitive, cyclic AMP mediated, triggered activity and are commonly observed in adolescents and young adults. In the ECG, they appear with a wide QRS complex, a left bundle branch block morphology and, usually, an inferior QRS axis. In the last few years, there has been an increasing number of reports suggesting the possibility of a curative treatment of RVOT VT by means of catheter ablation. This paper reviews the rate of cure of such arrhythmias by discussing the effects of catheter ablation on symptoms, arrhythmia detection, possibility of induction, and short- and long-term follow-up studies.
abstract_id: PUBMED:21835319
Electrocardiographic comparison of ventricular arrhythmias in patients with arrhythmogenic right ventricular cardiomyopathy and right ventricular outflow tract tachycardia. Objectives: The purpose of this study was to evaluate whether electrocardiographic characteristics of ventricular arrhythmias distinguish patients with arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C) from those with right ventricular outflow tract tachycardia (RVOT-VT).
Background: Ventricular arrhythmias in RVOT-VT and ARVD/C-VT patients can share a left bundle branch block/inferior axis morphology.
Methods: We compared the electrocardiographic morphology of ventricular tachycardia or premature ventricular contractions with left bundle branch block/inferior axis pattern in 16 ARVD/C patients with that in 42 RVOT-VT patients.
Results: ARVD/C patients had a significantly longer mean QRS duration in lead I (150 ± 31 ms vs. 123 ± 34 ms, p = 0.006), more often exhibited a precordial transition in lead V(6) (3 of 17 [18%] vs. 0 of 42 [0%] with RVOT-VT, p = 0.005), and more often had at least 1 lead with notching (11 of 17 [65%] vs. 9 of 42 [21%], p = 0.001). The most sensitive characteristic for the detection of ARVD/C was a QRS duration in lead I of ≥120 ms (88% sensitivity, 91% negative predictive value). QRS transition at V(6) was most specific at 100% (100% positive predictive value, 77% negative predictive value). The presence of notching on any QRS complex had 79% sensitivity and 65% specificity (55% positive predictive value, 85% negative predictive value). In multivariate analysis, QRS duration in lead I of ≥120 ms (odds ratio [OR]: 20.4, p = 0.034), earliest onset QRS in lead V(1) (OR: 17.0, p = 0.022), QRS notching (OR: 7.7, p = 0.018), and a transition of V(5) or later (OR: 7.0, p = 0.030) each predicted the presence of ARVD/C.
Conclusions: Several electrocardiographic criteria can help distinguish right ventricular outflow tract arrhythmias originating from ARVD/C compared with RVOT-VT patients.
abstract_id: PUBMED:25542998
Impact of earliest activation site location in the septal right ventricular outflow tract for identification of left vs right outflow tract origin of idiopathic ventricular arrhythmias. Background: The earliest activation site (EAS) location in the septal right ventricular outflow tract (RVOT) could be an additional mapping data predictor of left ventricular outflow tract (LVOT) vs RVOT origin of idiopathic ventricular arrhythmias (VAs).
Objective: The purpose of this study was to assess the impact of EAS location in predicting LVOT vs RVOT origin.
Methods: Macroscopic and histologic study was performed in 12 postmortem hearts. Electroanatomic maps (EAMs) from 37 patients with outflow tract (OT) VA with the EAS in the septal RVOT were analyzed. Pulmonary valve (PV) was defined by voltage scanning after validation of voltage thresholds by image integration. EAM measurements were correlated with those of macroscopic/histologic study.
Results: A cutoff value of 1.9 mV discriminated between subvalvular and supravalvular positions (90% sensitivity, 96% specificity). An EAS ≥1 cm below the PV excluded an RVOT site of origin (SOO). According to anatomic findings (distance PV-left coronary cusp = 5 ± 3 mm vs PV-right coronary cusp = 11 ± 5 mm), the EAS-PV distance was significantly shorter in VAs arising from the left coronary cusp than from the other LVOT locations (4.2 ± 5.4 mm vs 9.2 ± 7 mm; P = .034). The 10-ms isochronal longitudinal/perpendicular diameter ratio was higher in the RVOT vs the LVOT SOO group (1.97 ± 1.2 vs 0.79 ± 0.49; P = .001). An algorithm based on EAS-PV distance and the 10-ms isochronal longitudinal/perpendicular diameter ratio predicted LVOT SOO with 91% sensitivity and 100% specificity.
Conclusion: An algorithm based on the EAS-PV distance and the 10-ms isochronal longitudinal/perpendicular diameter ratio accurately predicts LVOT vs RVOT SOO in outflow tract VAs with EAS in the septal RVOT.
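The conclusion above describes a two-parameter mapping algorithm (EAS-to-PV distance plus the 10-ms isochronal longitudinal/perpendicular diameter ratio) without reporting its exact cutoffs, apart from the observation that an EAS ≥1 cm below the PV excluded an RVOT origin. The sketch below shows one way such a two-step rule could be structured; the ratio cutoff is a placeholder assumption, not the study's published threshold, and the function is illustrative only.

```python
# Illustrative sketch of a two-step outflow-tract localisation rule.
# Only the ">= 1 cm below the pulmonary valve excludes an RVOT origin" criterion
# comes from the abstract (PUBMED:25542998); RATIO_CUTOFF is a made-up placeholder.

RATIO_CUTOFF = 1.0  # hypothetical cutoff for the 10-ms isochronal long/perp diameter ratio

def predict_site_of_origin(eas_distance_below_pv_cm: float, isochronal_ratio: float) -> str:
    """Return 'LVOT' or 'RVOT' from two mapping measurements (illustration only)."""
    if eas_distance_below_pv_cm >= 1.0:
        # An EAS >= 1 cm below the pulmonary valve excluded an RVOT site of origin.
        return "LVOT"
    # Higher longitudinal/perpendicular ratios were reported for RVOT origins (1.97 vs 0.79).
    return "RVOT" if isochronal_ratio > RATIO_CUTOFF else "LVOT"

print(predict_site_of_origin(eas_distance_below_pv_cm=1.4, isochronal_ratio=0.6))  # LVOT
print(predict_site_of_origin(eas_distance_below_pv_cm=0.3, isochronal_ratio=2.1))  # RVOT
```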
abstract_id: PUBMED:36969511
Electrical isolation of the right ventricular outflow tract in idiopathic ventricular tachycardia: a case report. Background: Ventricular tachycardia (VT) originating in the right ventricular outflow tract (RVOT) is the most common form of idiopathic VT. Catheter ablation of right ventricular outflow tract tachycardia (RVOT-VT) is associated with high success rates. However, non-inducibility of VT on electrophysiological (EP) study can severely impact ablation outcome. We describe a novel catheter ablation strategy which proved feasible and safe in a case of highly symptomatic, non-inducible RVOT-VT.
Case Summary: A 51-year-old male with a history of non-sustained VT (NSVT) was referred to our hospital after two syncopal episodes resulting in collapse. Upon admission, a cluster of monomorphic NSVT (250-270 b.p.m.) resulted in haemodynamic instability and required transfer to the intensive care unit. On twelve-lead electrocardiogram, NSVT showed inferior axis and left bundle branch block, suggestive of RVOT-VT. Diagnostic workup including echocardiography, coronary angiography, and late enhancement computed tomography (CT) revealed no evidence of structural heart disease. On two EP studies, non-inducibility of clinical VT despite repeated ventricular pacing and isoproterenol infusion rendered precise mapping of triggered activity unfeasible. Therefore, a bailout ablation strategy was developed by performing a circumferential electrical RVOT isolation using a 3.5 mm irrigated-tip ablation catheter under the guidance of high-density electroanatomic mapping (CARTO® 3) and CT reconstruction of cardiac anatomy. No procedural complications occurred, and the patient remained arrhythmia-free during a 6-month follow-up period.
Discussion: Catheter ablation is a first-line therapy for symptomatic and drug-refractory idiopathic RVOT-VT. Non-inducibility of RVOT-VT represents a relevant limitation for successful ablation which might be overcome by electrical RVOT isolation as a bailout ablation strategy.
abstract_id: PUBMED:9609899
Right ventricular arrhythmia in the absence of arrhythmogenic dysplasia: MR imaging of myocardial abnormalities. Purpose: To evaluate right ventricular abnormalities with magnetic resonance (MR) imaging in patients with arrhythmia but without arrhythmogenic dysplasia.
Materials And Methods: In 53 patients being evaluated for right ventricular arrhythmia and 15 control subjects, MR imaging was performed to evaluate fixed thinning, fatty replacement, or reduced systolic wall thickening or motion. A diagnosis of idiopathic right ventricular outflow tract tachycardia or indeterminate was assigned for each patient, and the severity of arrhythmia was categorized.
Results: Right ventricular abnormalities were revealed in 32 (60%) of the 53 patients: fixed thinning in 27 (84%), fatty replacement in eight (25%), and reduced wall thickening or motion in 31 (97%). Right ventricular abnormalities were found in 35 (76%) of 46 patients with idiopathic right ventricular outflow tract tachycardia and in seven (39%) of 18 patients with indeterminate diagnoses (P = .022).
Conclusion: Mild right ventricular abnormalities are likely sources for arrhythmias, even in the absence of arrhythmogenic right ventricular dysplasia.
abstract_id: PUBMED:35971685
Validation of an electrocardiographic marker of low voltage areas in the right ventricular outflow tract in patients with idiopathic ventricular arrhythmias. Background: Previous studies have reported the presence of subtle abnormalities in the right ventricular outflow tract (RVOT) in patients with apparently normal hearts and ventricular arrhythmias (VAs) from the RVOT, including the presence of low voltage areas (LVAs). This LVAs seem to be associated with the presence of ST-segment elevation in V1 or V2 leads at the level of the 2nd intercostal space (ICS).
Objective: Our aim was to validate an electrocardiographic marker of LVAs in the RVOT in patients with idiopathic outflow tract VAs.
Methods: A total of 120 patients were studied, 84 patients referred for ablation of idiopathic VAs with an inferior axis by the same operator, and a control group of 36 patients without VAs. Structural heart disease including arrhythmogenic right ventricular cardiomyopathy was ruled out in all patients. An electrocardiogram was performed with V1-V2 at the 2nd ICS, and ST-segment elevation ≥1 mm and T-wave inversion beyond V1 were assessed. A bipolar voltage map of the RVOT was performed in sinus rhythm (0.5-1.5 mV color display). Areas with electrograms <1.5 mV were considered LVAs, and their presence was assessed. We compared three groups: VAs from the RVOT (n = 66), VAs from the LVOT (n = 18), and the control group (n = 36). ST-elevation, T-wave inversion, and left versus right side of the VAs were tested as predictors of LVAs; the respective odds ratios (ORs) (95% confidence interval [CI]) and p values were calculated with univariate logistic regression. Variables with a p < .005 were included in the multivariate analysis.
Results: ST-segment elevation, T-wave inversion and LVAs were present in the RVOT group, LVOT group and Control group as follows: (62%, 17%, and 6%, p < .0001), (33%, 29%, and 0%, p = .001) and (62%, 25%, and 14%, p < .0001). ST-segment elevation, T-wave inversion, and right-sided VAs were all predictors of LVAs; the respective unadjusted ORs (95% CI) and p values were 32.31 (11.33-92.13), p < .0001; 4.137 (1.615-10.60), p = .003; and 8.200 (3.309-20.32), p < .0001. After adjustment, the only independent predictor of LVAs was the ST-segment elevation, with an adjusted OR (95% CI) of 20.94 (6.787-64.61), p < .0001.
Conclusion: LVAs were frequently present in patients with idiopathic VAs. ST-segment elevation was the only independent predictor of their presence.
Answer: Patients with right ventricular outflow tract (RVOT) ventricular arrhythmias demonstrate wall motion parameters similar to those of normal subjects, indicating that they have normal right ventricular (RV) wall motion. This was determined through a quantitative analysis that compared patients with RVOT ectopy to normal subjects. The study involved analyzing RV angiograms from both groups using a computer-based method to trace RV contours and calculate the fractional change in area between contours throughout the cardiac cycle. The results showed no significant differences in contraction area, timing, and strain in various regions of the RV between normal subjects and RVOT arrhythmia subjects (PUBMED:15942177). |
Instruction: Does hepatic pedicle clamping affect disease-free survival following liver resection for colorectal metastases?
Abstracts:
abstract_id: PUBMED:21107113
Does hepatic pedicle clamping affect disease-free survival following liver resection for colorectal metastases? Objective: To evaluate the impact of liver ischemia from hepatic pedicle clamping (HPC) on long-term outcome after hepatectomy for colorectal liver metastases (CRLM).
Background: Liver resection offers the only chance of cure for patients with CRLM. Several clinical and pathologic factors have been reported as determinants of poor outcome after hepatectomy for CRLM. A controversial issue is that hepatic ischemia/reperfusion injury from HPC may adversely affect long-term outcome by accelerating the outgrowth of residual hepatic micrometastases.
Methods: Patients undergoing liver resection for CRLM in 2 tertiary referral centers, between 1992 and 2008, were included. Disease-free survival and specific liver-free survival were analyzed according to the use, type, and duration of HPC.
Results: Five hundred forty-three patients had primary hepatectomy for CRLM. Hepatic pedicle clamping was performed in 355 patients (65.4%), and intermittently applied in 254 patients (71.5%). Postoperative mortality and morbidity rates were 1.3% and 18.5%, respectively. Hepatic pedicle clamping had a highly significant impact in reducing the risk of blood transfusions and was not correlated with significantly higher postoperative morbidity. The liver recurrence rate was not significantly different according to the use, type, or duration of HPC, including in patients resected after preoperative chemotherapy. On univariate analysis, HPC did not significantly affect overall and disease-free survival. These results were confirmed on multivariate analysis, in which blood transfusions, primary tumor nodal involvement, and a CRLM size of more than 5 cm prevailed as determinants of poor outcome.
Conclusions: This study confirms the safety and effectiveness of HPC and demonstrates that in the human situation, there is no evidence that HPC may adversely affect long-term outcome after hepatectomy for CRLM.
abstract_id: PUBMED:33383844
Prognostic Impact of Pedicle Clamping during Liver Resection for Colorectal Metastases. Pedicle clamping (PC) during liver resection for colorectal metastases (CRLM) is used to reduce blood loss and allogeneic blood transfusion (ABT). The effect on long-term oncologic outcomes is still under debate. A retrospective analysis of the impact of PC on ABT demand and on overall survival (OS) and recurrence-free survival (RFS) was carried out in 336 patients undergoing curative resection for CRLM. Survival analysis was performed by both univariate and multivariate methods and propensity-score (PS) matching. PC was employed in 75 patients (22%). No increase in postoperative morbidity was noted. While the overall ABT-rate was comparable (35% vs. 37%, p = 0.786), a reduced demand for more than two ABT-units was observed (p = 0.046). PC-patients had better median OS (78 vs. 47 months, p = 0.005) and RFS (36 vs. 23 months, p = 0.006). Multivariate analysis revealed PC as an independent prognostic factor for OS (HR = 0.60; p = 0.009) and RFS (HR = 0.67; p = 0.017). For PC-patients, 1:2 PS-matching (N = 174) showed no differences in the overall ABT-rate compared to no-PC-patients (35% vs. 40%, p = 0.619), but a trend towards reduced transfusion requirement (>2 ABT-units: 9% vs. 21%, p = 0.052; >4 ABT-units: 2% vs. 11%, p = 0.037) and better survival (OS: 78 vs. 44 months, p = 0.088; RFS: 36 vs. 24 months; p = 0.029). Favorable long-term outcomes and lower rates of increased transfusion demand were observed in patients with PC undergoing resection for CRLM. Further prospective evaluation of potential oncologic benefits of PC in these patients may be meaningful.
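The abstract above reports 1:2 propensity-score (PS) matching of PC patients to no-PC patients. The following sketch shows, in generic form, how 1:2 greedy nearest-neighbour PS matching can be set up; it is not the authors' implementation, and the covariates, caliper, and simulated data are assumptions made purely for illustration.

```python
# Generic sketch of 1:2 greedy nearest-neighbour propensity-score matching.
# Covariates, caliper, and data are simulated placeholders, not study data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "pedicle_clamping": rng.integers(0, 2, n),      # treatment indicator (hypothetical)
    "age": rng.normal(65, 10, n),
    "tumor_size_cm": rng.gamma(2.0, 2.0, n),
    "major_resection": rng.integers(0, 2, n),
})

covariates = ["age", "tumor_size_cm", "major_resection"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["pedicle_clamping"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["pedicle_clamping"] == 1]
controls = df[df["pedicle_clamping"] == 0].copy()
caliper = 0.05  # hypothetical caliper on the propensity-score scale

matches = []
for idx, row in treated.sort_values("ps").iterrows():
    # choose up to two unmatched controls closest in propensity score, within the caliper
    controls["dist"] = (controls["ps"] - row["ps"]).abs()
    candidates = controls[controls["dist"] <= caliper].nsmallest(2, "dist")
    matches.extend((idx, c_idx) for c_idx in candidates.index)
    controls = controls.drop(candidates.index)      # match without replacement

print(f"{len(matches)} treated-control pairings formed")
```

Outcome comparisons (for example Kaplan-Meier estimates of OS and RFS) would then be run on the matched sample only, after checking covariate balance.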
abstract_id: PUBMED:31853827
ACE Inhibitor Therapy Does Not Influence the Survival Outcomes of Patients with Colorectal Liver Metastases Following Liver Resection. Background: Angiotensin-converting enzyme (ACE) inhibitors have been shown to possibly influence the survival outcomes in certain cancers. The aim of this study was to evaluate the impact of ACE inhibitors on the outcomes of patients undergoing liver resection for colorectal liver metastases (CRLM). The secondary aim was to determine whether ACE inhibitors influenced histopathological changes in CRLM.
Methods: Patients treated with liver resection for CRLM over a 13-year period were identified from a prospectively maintained database. Data including demographics, primary tumour treatment, surgical data, histopathology analysis and clinical outcome were collated and analysed.
Results: A total of 586 patients underwent primary hepatic resections for CRLM during this period including 100 patients on ACE inhibitors. The median follow-up period was 23 (range: 12-96) months, in which 267 patients developed recurrent disease and 131 patients died. Independent predictors of disease-free survival on multivariate analysis included synchronous presentation, neoadjuvant chemotherapy, major liver resection, tumour size and number, extent of hepatic steatosis, R0 resection and presence of perineural invasion. Poorer overall survival was associated with neoadjuvant treatment, major liver resection, presence of multiple metastases, perineural invasion and positive resection margins on multivariate analysis. ACE inhibitors did not influence the survival outcome or histological presentation in CRLM.
Conclusion: The use of ACE inhibitors did not affect the survival outcome or tumour biology in patients with CRLM following liver resection.
abstract_id: PUBMED:7571046
Prognostic factors for survival and disease-free survival in hepatic metastases from colorectal cancer treated by resection. The prognostic factors of 219 patients submitted to R0 hepatic resection for colorectal metastases have been statistically analyzed. The overall 5-year actuarial survival rate was 24% and the 5-year disease-free survival rate was 18%. On univariate analysis, four variables were significant: 1) The stage of primary colorectal cancer: if the mesenteric lymph nodes were metastatic (Dukes C) or uninvolved (Dukes B) 5-year survival was respectively 16 and 38% (p < 0.001). 2) The percentage of hepatic replacement: the 5-year survival rate of patients with H1 (< 25%), H2 (25-50%) and H3 (> 50%) was 27, 16 and 8% respectively (p < 0.001). 3) The number of metastases: the 5-year survival of patients with 1, 2-3, > 3 hepatic nodules was 29, 21 and 17% respectively (p < 0.05). 4) The extent of surgical resection: 5-year survival after minor and major resection was 28 and 18% respectively (p < 0.05). On multivariate analysis, only stage of primary and percentage of hepatic replacement retained statistical significance. In 60% of 154 patients with recurrent disease the liver was again involved.
abstract_id: PUBMED:18646038
Surgical technique and systemic inflammation influences long-term disease-free survival following hepatic resection for colorectal metastasis. Background: To date, there is limited data available on prognostic factors that influence long-term disease-free survival following hepatic resection for colorectal liver metastasis (CRLM). The aim of the study was to identify prognostic factors that were associated with long-term disease-free survival (>5 years) following resection for CRLM.
Methods: Patients undergoing resection for CRLM from January 1993 to March 2007 were identified from the hepatobiliary database. Data analyzed included demographics, laboratory results, operative findings and histopathological data.
Results: Seven hundred five curative primary hepatic resections were performed; 434 patients developed disease recurrence within 5 years and 67 patients were disease-free for more than 5 years. There was a significant association of systemic inflammatory response (raised neutrophil-to-lymphocyte ratio and/or C-reactive protein), blood transfusion, >2 tumors, bilobar disease, and resection margin involvement with the development of recurrence during the follow-up period. On multivariate analysis, three independent predictors for recurrent disease within the 5-year follow-up were identified: pre-operative inflammatory response; blood transfusion requirement; and status of resection margin.
Conclusion: Absence of a systemic inflammatory response and surgical technique to minimize transfusion requirements and obtain a R0 resection margin, are associated with long-term disease-free survival.
abstract_id: PUBMED:19816614
Influence of resection margin on survival in hepatic resections for colorectal liver metastases. Background: Traditionally a 1-cm margin has been accepted as the gold standard for resection of colorectal liver metastases. Evidence is emerging that a lesser margin may provide equally acceptable outcomes, but a critical margin, below which recurrence is higher and survival poorer, has not been universally agreed. In a recent publication, we reported peri-operative morbidity and clear margin as the two independent prognostic factors. The aim of the current study was to further analyse the effect of the width of the surgical margin on patient survival to determine whether a margin of 1 mm is adequate.
Methods: Two hundred and sixty-one consecutive primary liver resections for colorectal metastases were analysed from 1992 to 2007. The resection margins were assessed by microscopic examination of paraffin sections. The initial analysis was performed on five groups according to the resection margins: involved margin, 0-1 mm, >1-<4 mm, 4-<10 mm and ≥10 mm. Subsequent analysis was based on two groups: margin <1 mm and >1 mm.
Results: With a median follow-up of 4.7 years, the overall 5-year patient and disease-free survival were 38% and 22%, respectively. There was no significant difference in patient- or disease-free survival between the three groups with resection margins >1 mm. When a comparison was made between patients with resection margins ≤1 mm and patients with resection margins >1 mm, there was a significant 5-year patient survival difference of 25% versus 43% (P < 0.04). However, the disease-free survival difference did not reach statistical significance (P = 0.14).
Conclusions: In this cohort of patients, we have demonstrated that a resection margin of greater than 1 mm is associated with significantly improved 5-year overall survival, compared with involved margins or margins less than or equal to 1 mm. The possible beneficial effect of greater margins beyond 1 mm could not be demonstrated.
abstract_id: PUBMED:7695977
Hepatic resection for metastases from colorectal carcinoma--a survival analysis. Between 1 January 1984 and 31 December 1992, 66 patients with hepatic metastases from colorectal carcinomas underwent liver resection. 40 of these patients had synchronous hepatic metastases, and liver resection was carried out simultaneously with radical resection of the primary tumour; in 26 cases metachronous metastases in the liver were surgically removed. 25 patients had an anatomical resection and the remainder underwent atypical resections. The postoperative mortality rate was 4.5% and the major complication rate was 19.7%. Univariate and subsequently multivariate analyses were used to predict the influence of various clinical, histopathological and surgical variables. The observed 5-year survival rate was 29.6% and the 5-year disease-free survival rate 13.9%. Furthermore, the observed median survival time was 24.7 months and the mean disease-free survival time was 16.7 months. Multivariate analysis showed that stage of primary (pTN) (P = 0.043), tumour grading (P = 0.013) and site of primary (P = 0.007) were factors which independently influenced 5-year disease-free survival whereas stage of primary (pTN) (P = 0.008), tumour grading (P = 0.004) and type of resection (P = 0.035) were identified as having independent influence on 5-year observed survival. We consider liver resection to be an effective form of treatment for patients with resectable liver metastases from colorectal carcinoma, although the overall chances for cure are generally not very promising. It appears that the biological behaviour of the primary tumour, in terms of tumour stage and grading, has the greatest influence on survival.
abstract_id: PUBMED:10370668
Survival after repeat hepatic resection for recurrent colorectal metastases. Background/aims: This is a retrospective study examining survival of patients undergoing repeat hepatic resection for recurrent colorectal metastases.
Methodology: The records of 41 patients undergoing hepatic resection for metastatic colorectal cancer were reviewed. Curative resections (negative resection margin and no extrahepatic disease) were attempted in all patients. Recurrence developed in 26 (63%) patients, with disease being confined to the liver in 16 (39%) patients. Ten of them (24%) underwent hepatic resection and made up the study population.
Results: Ten patients (4 women, 6 men; mean age: 62 years, range: 50-82 years) developed recurrence confined to the liver at the median interval of 16 months (range: 5-34 months) after the first hepatectomy. In 6 patients the recurrent cancer(s) involved both the area near the resection line and remote sites from the site of the first hepatic resection. In 3 patients recurrent cancer(s) was located at sites remote from the first liver resection. In 1 patient the recurrent cancer was located in the same area as the original hepatic resection. Three formal hepatectomies and seven non-anatomical (wedge) resections were performed. The mean blood loss was 900 cc (range: 100-2700 cc); the mean hospital stay was 19 days (range: 8-34 days). There was no perioperative mortality. Morbidity was 20%. Four patients died of recurrent disease, with a mean disease-free survival of 13 months (range: 5-21 months). Two patients had a second recurrence resected at 10 and 24 months, respectively, after the second hepatic resection. One of these 2 patients had a fourth hepatic resection for hepatic recurrence and is still alive with no evidence of disease. Six patients are alive, 4 of them without evidence of disease, with a median follow-up time of 30 months (range: 22-64 months). Actuarial 4-year specific survival was 44%. Actuarial disease-free survival at 4 years was 18%.
Conclusions: In appropriately selected patients, repeat hepatic resection for colorectal metastases is a worthwhile treatment. Mortality, morbidity, and survival are similar to those following the initial resection.
abstract_id: PUBMED:28612017
Treatment Options in Colorectal Liver Metastases: Hepatic Arterial Infusion. Background: The liver is the most common site for metastases from colorectal cancer (CRC) with the majority of these patients having unresectable disease.
Methods: This is a retrospective review of studies using hepatic arterial infusion (HAI) therapy to treat liver metastasis from CRC. A PubMed search of randomized controlled trials and retrospective studies from 2006 to present was conducted using the search terms 'hepatic arterial infusion (HAI) therapy', 'colorectal cancer', and 'treatment of liver metastases'.
Results: The first randomized studies comparing HAI to systemic therapy with 5-fluorouracil/leucovorin produced significantly higher response rates of 41% versus 14%. Systemic therapy has improved with the addition of irinotecan and oxaliplatin; however, the responses with HAI and these modern agents have also increased, with responses as high as 80%. For patients with wild-type KRAS, HAI and systemic therapy produced a median survival of 68 months. In patients with refractory disease, response rates are in the 30% range with a median survival of 20 months. Adjuvant HAI after liver resection has shown an increase in hepatic disease-free survival and overall disease-free survival when compared to systemic therapy alone in three of four randomized trials. A recent update of the adjuvant trials after liver resection at Memorial Sloan Kettering Cancer Center has shown a 5-year survival of 78%.
Conclusion: HAI therapy has a role in treating hepatic metastases from CRC in both the resectable and unresectable setting.
abstract_id: PUBMED:27011523
Comparison of Hepatic Resection and Radiofrequency Ablation for the Treatment of Colorectal Liver Metastasis. The liver is the major site of metastasis of primary colorectal cancer. Hepatic resection (HR) is considered the standard treatment for colorectal liver metastasis. In high-risk cases, radiofrequency ablation (RFA) can be attempted as an alternative treatment. This study compared the clinical profiles and overall and disease-free survival rates of patients with colorectal liver metastasis undergoing HR and RFA. From 1995 to 2009, we retrospectively analyzed clinical experiences of 43 and 17 patients who had undergone HR and RFA for primary colorectal cancer, respectively. To compare outcomes, we investigated the 3-year overall and disease-free survival rates. The 3-year overall survival rates of patients treated with HR and RFA were 53.5 and 47.1 %, respectively (p = 0.285); the disease-free survival rates were 35.0 and 26.9 %, respectively (p = 0.211). In the HR and RFA groups, 30 (60.2 %) and 13 (76.5 %) patients developed recurrence, respectively (p = 0.604). In the HR group, 1 patient died from postoperative liver failure, and 9 (20.9 %) developed postoperative complications, including wound infection, biliary leakage, intra-abdominal abscess, and pneumonia. In the RFA group, 1 patient (5.9 %) required prolonged inpatient care because of a procedure-related liver abscess. Although HR should be considered the first option for colorectal liver metastasis, RFA can be regarded as a primary treatment modality depending on the patient's characteristics, especially when a patient refuses surgery or has comorbidities.
Answer: The impact of hepatic pedicle clamping (HPC) on disease-free survival following liver resection for colorectal metastases (CRLM) has been evaluated in several studies, with varying results. According to the study with PUBMED:21107113, HPC does not significantly affect overall and disease-free survival after hepatectomy for CRLM. This study included patients from two tertiary referral centers and found that factors such as blood transfusions, primary tumor nodal involvement, and the size of CRLM greater than 5 cm were more significant determinants of poor outcome.
In contrast, another study with PUBMED:33383844 reported that patients who underwent pedicle clamping (PC) during liver resection had better median overall survival (OS) and recurrence-free survival (RFS) compared to those who did not. This retrospective analysis suggested that PC could be an independent prognostic factor for OS and RFS, indicating a potential oncologic benefit of PC in patients undergoing resection for CRLM.
However, it is important to note that these studies have different methodologies and patient populations, which could account for the differences in their findings. While one study (PUBMED:21107113) did not find a significant impact of HPC on long-term outcomes, the other (PUBMED:33383844) suggested a favorable impact on survival. Further prospective evaluation may be necessary to fully understand the potential benefits of PC in the context of liver resection for CRLM. |
Instruction: Does pay-for-performance improve surgical outcomes?
Abstracts:
abstract_id: PUBMED:26667492
Impact of pay for performance on behavior of primary care physicians and patient outcomes. Background And Objectives: Pay-for-performance is a financial incentive which links physicians' income to the quality of their services. Although pay-for-performance is suggested to be an effective payment method in many pilot countries (i.e., the UK) and enjoys wide application in primary health care, research on it has yet to reach agreement. Thus, a systematic review was conducted on the evidence of the impact of pay-for-performance on the behavior of primary care physicians and patient outcomes, aiming to provide a comprehensive and objective evaluation of pay-for-performance for decision-makers.
Methods: Studies were identified by searching PubMed, EMbase, and The Cochrane Library. The electronic search was conducted in the fourth week of January 2013. As the included studies had significant clinical heterogeneity, a descriptive analysis was conducted. The Quality Index was adopted for quality assessment of the evidence.
Results: Database searches yielded 651 candidate articles, of which 44 studies fulfilled the inclusion criteria. An overall positive effect was found on the management of disease, which varied in accordance with baseline medical quality and practice size. Meanwhile, pay-for-performance could bring about new problems regarding inequity, patient dissatisfaction, and increasing medical costs.
Conclusions: Decision-makers should consider the baseline conditions of medical quality and the practice size before new medical policies are enacted. Furthermore, although most studies are retrospective and observational with a high level of heterogeneity, the descriptive analysis is still of significance.
abstract_id: PUBMED:33304576
Do penalty-based pay-for-performance programs improve surgical care more effectively than other payment strategies? A systematic review. Background: The aim of this systematic review is to assess if penalty-based pay-for-performance (P4P) programs are more effective in improving quality and cost outcomes compared to two other payment strategies (i.e., rewards and a combination of rewards and penalties) for surgical care in the United States. Penalty-based programs have gained in popularity because of their potential to motivate behavioral change more effectively than reward-based programs to improve quality of care. However, little is known about whether penalties are more effective than other strategies.
Materials And Methods: A systematic literature review was conducted according to the PRISMA guideline to identify studies that evaluated the effects of P4P programs on quality and cost outcomes for surgical care. Five databases were used to search studies published from 2003 to March 1, 2020. Studies were selected based on the PRISMA guidelines. Methodological quality of individual studies was assessed with ROBINS-I and the GRADE approach.
Results: This review included 22 studies. Fifteen cross-sectional, 1 prospective cohort, 4 retrospective cohort, and 2 case-control studies were found. We identified 11 unique P4P programs: 5 used rewards, 3 used penalties, and 3 used a combination of rewards and penalties as a payment strategy. Five out of 10 studies reported positive effects of penalty-based programs, whereas evidence from studies evaluating P4P programs with a reward design or a combination of rewards and penalties was limited or null.
Conclusions: This review highlights that P4P programs with a penalty design could be more effective than programs using rewards or a combination of rewards and penalties to improve quality of surgical care.
abstract_id: PUBMED:30213407
Quality Measurement and Pay for Performance. Recent debate has focused on which quality measures are appropriate for surgical oncology and how they should be implemented and incentivized. Current quality measures focus primarily on process measures (use of adjuvant therapy, pathology reporting) and patient-centered outcomes (health-related quality of life). Pay for performance programs impacting surgical oncology patients focus primarily on preventing postoperative complications, but are not specific to cancer surgery. Future pay for performance programs in surgical oncology will likely focus on incentivizing high-quality, low-cost cancer care by evaluating process measures, patient-centered measures, and costs of care specific to cancer surgery.
abstract_id: PUBMED:35873746
Pay for performance system in Turkey and the world; a global overview. Objectives: This study aimed to compare the pay for performance system applied nationally in Turkey with those in other countries around the world and to reveal the effects of the system applied in our country on general surgery.
Material And Methods: Current literature and countries' programs on the implementation of the pay for performance system were recorded. The results of the Turkish Surgical Association's performance and Healthcare Implementation Communique (HIC) commission studies were evaluated in light of the literature.
Results: Many countries have implemented performance systems on a limited scale to improve quality and to speed up the diagnosis, treatment, and control of certain diseases, and they have generally applied them as financial incentives with the support of health insurance companies and nongovernmental organizations. Surgeons in our country feel that they are being wronged by the injustice in the current system, because the nature of their work is not appreciated and they are not rewarded for the work they do. This is also the reason for the reluctance of medical school graduates to choose general surgery.
Conclusion: Authorities should pay attention to the opinions of associations and experts in the related field when creating lists of interventional procedures related to surgery. Equal pay should be given to equal work nationally, and surgeons should be encouraged by incentives to perform detailed, qualified surgeries. There is a possibility that the staff positions opened for general surgery, as well as all surgical branches, will remain empty in the near future.
abstract_id: PUBMED:31790980
Is it feasible to pay specialty substance use disorder treatment programs based on patient outcomes? Background: Some US payers are starting to vary payment to providers depending on patient outcomes, but this approach is rarely used in substance use disorder (SUD) treatment.
Purpose: We examine the feasibility of applying a pay-for-outcomes approach to SUD treatment.
Methods: We reviewed several relevant literatures: (1) economic theory papers that describe the conditions under which pay-for-outcomes is feasible in principle; (2) description of the key outcomes expected from SUD treatment, and the measures of these outcomes that are available in administrative data systems; and (3) reports on actual experiences of paying SUD treatment providers based on patient outcomes.
Results: The economics literature notes that when patient outcomes are strongly influenced by factors beyond provider control and when risk adjustment performs poorly, pay-for-outcomes will increase provider financial risk. This is relevant to SUD treatment. The literature on SUD outcome measurement shows disagreement on whether to include broader outcomes beyond abstinence from substance use. Good measures are available for some of these broader constructs, but the need for risk adjustment still brings many challenges. Results from two past payment experiments in SUD treatment reinforce some of the concerns raised in the more conceptual literature.
Conclusion: There are special challenges in applying pay-for-outcomes to SUD treatment, not all of which could be overcome by developing better measures. For SUD treatment it may be necessary to define outcomes more broadly than for general medical care, and to continue conditioning a sizeable portion of payment on process measures.
abstract_id: PUBMED:31256249
Evidence that surgical performance predicts clinical outcomes. Purpose: Assessment of surgeon performance in the operating room has been identified as a direct method of measuring surgical quality. Studies published in urology and other surgical disciplines have investigated this link directly by measuring surgeon and team performance using methodology supported by validity evidence. This article highlights the key findings of these studies and associated underlying concepts.
Methods: Seminal literature from urology and related areas of research was used to inform this review of the performance-outcome relationship in surgery. Current efforts to further our understanding of this concept are discussed, including relevant quality improvement and educational interventions that utilize this relationship.
Results: Evidence from multiple surgical specialties and procedures has established the association between surgeon skill and clinically significant patient outcomes. Novel methods of measuring performance utilize surgeon kinematics and artificial intelligence techniques to more reliably and objectively quantify surgical performance.
Conclusions: Future directions include the use of this data to create interventions for quality improvement, as well as innovate the credentialing and recertification process for practicing surgeons.
abstract_id: PUBMED:23582762
Surgical care improvement project and surgical site infections: can integration in the surgical safety checklist improve quality performance and clinical outcomes? Introduction: The World Health Organization Surgical Safety Checklist (SSC) has been shown to decrease surgical site infections (SSI). The Surgical Care Improvement Project (SCIP) SSI reduction bundle (SCIP Inf) contains elements to improve SSI rates. We wanted to determine if integration of SCIP measures within our SSC would improve SCIP performance and patient outcomes for SSI.
Methods: An integrated SSC that included perioperative SCIP Inf measures (antibiotic selection, antibiotic timing, and temperature management) was implemented. We compared SCIP Inf compliance and patient outcomes for 1 year before and 1 year after SSC implementation. Outcomes included the number of patients with an initial post-anesthesia care unit temperature <98.6°F and SSI rates according to our National Surgical Quality Improvement Program data.
Results: Implementation of a SCIP integrated SSC resulted in a significant improvement in antibiotic infusion timing (92.7% [670/723] versus 95.4% [557/584]; P < 0.05), antibiotic selection (96.2% [707/735] versus 98.7% [584/592]; P < 0.01), and temperature management (93.8% [723/771] versus 97.7% [693/709]; P < 0.001). Furthermore, we found a significant reduction in number of patients with initial post-anesthesia care unit temperature <98.6°F from 9.7% (982/10,126) to 6.9% (671/9676) (P < 0.001). Institutional SSI rates decreased from 3.13% (104/3319) to 2.96% (107/3616), but was not significant (P = 0.72). SSI rates according to specialty service were similar for all groups except colorectal surgery (24.1% [19/79] versus 11.5% [12/104]; P < 0.05).
Conclusion: Implementation of an integrated SSC can improve compliance of SSI reduction strategies such as SCIP Inf performance and maintenance of normothermia. This did not, however, correlate with an improvement in overall SSI at our institution. Further investigation is required to determine other factors that may influence SSI at an institutional level.
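The before/after comparisons above, such as the institutional SSI rate of 3.13% versus 2.96%, are two-proportion comparisons. A hedged sketch using SciPy is shown below; the counts come from the abstract, but the exact p-value depends on the test and continuity correction chosen, so it only approximates the reported non-significant result (p = 0.72).

```python
# Sketch: chi-square comparison of a before/after event rate on a 2x2 table.
# Counts are taken from the abstract; the choice of test is an assumption.
from scipy.stats import chi2_contingency

def compare_rates(events_pre, n_pre, events_post, n_post):
    table = [[events_pre, n_pre - events_pre],
             [events_post, n_post - events_post]]
    _, p, _, _ = chi2_contingency(table)
    return events_pre / n_pre, events_post / n_post, p

# Institutional SSI rates before vs. after checklist implementation
rate_pre, rate_post, p = compare_rates(104, 3319, 107, 3616)
print(f"SSI: {rate_pre:.2%} vs {rate_post:.2%}, p = {p:.2f}")  # non-significant
```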
abstract_id: PUBMED:36743591
A cognitive evaluation and equity-based perspective of pay for performance on job performance: A meta-analysis and path model. Pay for performance, as one of the most important means of motivating employees, has attracted the attention of many scholars and managers. However, controversy has continued regarding whether it promotes or undermines job performance. Drawing on a meta-analysis of 108 independent samples (N = 71,438) from 100 articles, we found that pay for performance was positively related to job performance and had a more substantial positive effect on task performance than on contextual performance in workplace settings. From the cognitive evaluation perspective, we found that pay for performance enhanced employees' task performance and contextual performance by enhancing intrinsic motivation and weakened task performance and contextual performance by increasing employee pressure. From the equity perspective, our results indicated that the relationship between pay for performance and task performance was partially mediated by employee perceptions of distributive justice and procedural justice, with distributive justice having a more substantial mediating effect than procedural justice. However, the relationship between pay for performance and contextual performance was only partially mediated by procedural justice. Further tests of moderating effects indicated that the varying impacts of pay for performance are contingent on measures of pay for performance and national culture. The findings contributed to understanding the complex mechanisms and boundary conditions of pay-for-performance's effects on job performance, which provided insights for organizations to maximize its positive effects.
abstract_id: PUBMED:32843941
Pay-for-performance challenges in family physician program. Objective: This study was conducted to investigate the challenges faced in the implementation of the pay-for-performance system in Iran's family physician program.
Study Design: Qualitative.
Place And Duration Of Study: The study was conducted with 32 key informants at the family physician program at the Tabriz University of Medical Sciences between May 2018 and June 2018. Method: This is a qualitative study. A purposeful sampling method was used with only one inclusion criterion for participants: five years of experience in the family physician program. The researchers conducted 17 individual and group non-structured interviews and examined participants' perspectives on the challenges faced in the implementation of the pay-for-performance system in the family physician program. Content analysis was conducted on the obtained data.
Results: This study identified 7 themes, 14 sub-themes, and 46 items related to the challenges in the implementation of pay-for-performance systems in Iran's family physician program. The main themes are: workload, training, program cultivation, payment, assessment and monitoring, information management, and level of authority. Other sub-challenges were also identified.
Conclusion: The study results demonstrate some notable challenges faced in the implementation of the pay-for-performance system. This information can be helpful to managers and policymakers.
abstract_id: PUBMED:37842642
Impact of pay-for-performance for stroke unit access on mortality in Queensland, Australia: an interrupted time series analysis. Background: Stroke unit care provides substantial benefits for all subgroups of patients with stroke, but consistent access has been difficult to achieve in many healthcare systems. Pay-for-performance incentives have been introduced widely in an attempt to improve quality and efficiency in healthcare, but there is limited evidence of positive impact when they are targeted at hospitals. In 2012, a pay-for-performance program targeting stroke unit access was co-designed and implemented within a clinical quality improvement network across public hospitals in Queensland, Australia. We assessed the impact on access to specialist care and mortality following stroke.
Methods: We used interrupted time series analysis on linked hospital and death registry data to compare changes in level (absolute proportions) and trends in outcomes (stroke/coronary care unit admission, 6-month mortality) for stroke, and a control condition of myocardial infarction (MI) without pay-for-performance incentive, from 2009 before, to 2017 after introduction of the pay-for-performance scheme in 2012.
Findings: We included 23,572 patients with stroke and 39,511 with MI. Following pay-for-performance introduction, stroke unit access increased by an absolute 35% (95% CI 29, 41) more than historical trend prediction, with greater impact for regional/rural residents (41% vs major city 24%) where baseline access was lowest (18% vs major city residents 53%). Historical upward 6-month mortality trends following stroke (+0.11%/month) reversed to a downward slope (-0.05%/month) with pay-for-performance; difference -0.16%/month (95% CI -0.29, -0.03). In contrast, access to coronary care and mortality trends for MI controls were unchanged, difference-in-difference for mortality -0.18% (95% CI -0.34, -0.02).
Interpretation: This clinician led pay-for-performance incentive stimulated significant improvements in stroke unit access, reduced regional disparities; and resulted in a sustained decline in 6-month mortality. As our findings contrast with lack of effect in most hospital directed pay-for-performance programs, differences in design and context provide insights for optimal program design.
Funding: Queensland Advancing Clinical Research Fellowship, National Health and Medical Research Council Senior Research Fellowship.
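The interrupted time series analysis above is commonly implemented as a segmented regression with a pre-intervention trend, an immediate level change, and a post-intervention change in trend. The sketch below uses statsmodels on synthetic monthly data; the variable names, intervention month, and effect sizes are assumptions for illustration, not the study's actual specification.

```python
# Minimal sketch (synthetic data): segmented regression for an interrupted
# time series -- baseline trend, level change, and trend change at the
# intervention point.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
months = np.arange(96)                    # e.g., a 2009-2017 monthly series
intervention_month = 36                   # assumed introduction of the incentive
post = (months >= intervention_month).astype(int)
t_since = np.where(post == 1, months - intervention_month, 0)

# Synthetic outcome: rising baseline, then a level drop and a reversed slope
y = 20 + 0.11 * months - 1.0 * post - 0.16 * t_since + rng.normal(0, 0.5, size=96)

df = pd.DataFrame({"y": y, "t": months, "post": post, "t_since": t_since})
fit = smf.ols("y ~ t + post + t_since", data=df).fit()
# 't' = pre-intervention trend, 'post' = immediate level change,
# 't_since' = change in trend after the intervention
print(fit.params)
print(fit.conf_int())
```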
Answer: The evidence suggests that pay for performance (P4P) can have a positive impact on surgical outcomes, although the results are not uniform across all studies and contexts. A systematic review found that penalty-based P4P programs might be more effective than reward-based programs or a combination of both in improving the quality of surgical care (PUBMED:33304576). This indicates that the design of the P4P program, particularly the use of penalties, could be a significant factor in its effectiveness.
Another study highlighted that the implementation of an integrated Surgical Safety Checklist, which included perioperative measures from the Surgical Care Improvement Project, resulted in improved compliance with SSI reduction strategies and maintenance of normothermia, although it did not significantly correlate with an overall improvement in surgical site infection rates at the institution (PUBMED:23582762). This suggests that while P4P programs can lead to better adherence to quality measures, the translation into clinical outcomes may vary.
Additionally, a study on the impact of P4P for stroke unit access demonstrated that the introduction of a P4P scheme led to a significant increase in stroke unit access and a sustained decline in 6-month mortality, particularly benefiting regional and rural residents where baseline access was lowest (PUBMED:37842642). This provides evidence that P4P can improve clinical outcomes when targeted effectively.
However, it is important to note that P4P programs can also bring about new problems such as inequity, patient dissatisfaction, and increased medical costs (PUBMED:26667492). Moreover, the feasibility of P4P in specialty areas like substance use disorder treatment is challenged by the difficulty in defining and measuring outcomes and the influence of factors beyond provider control (PUBMED:31790980).
In summary, while there is evidence to suggest that P4P can improve surgical outcomes, the effectiveness of such programs is dependent on their design and implementation, as well as the context in which they are applied. Decision-makers should consider these factors carefully when enacting P4P policies to ensure they are tailored to achieve the desired improvements in patient care and outcomes. |
Instruction: Does presence of anterior greater tuberosity cysts change the function in patients with chronic rotator cuff tear?
Abstracts:
abstract_id: PUBMED:25413458
Does presence of anterior greater tuberosity cysts change the function in patients with chronic rotator cuff tear? Objectives: This study aims to compare rotator cuff muscle atrophy with fatty degeneration, tear size, range of motion, shoulder muscle strength, pain and upper extremity function in patients with chronic rotator cuff tear with or without an anterior greater tuberosity cyst.
Patients And Methods: A total of 101 patients (32 males, 69 females; mean age 51 ± 12.9 years; range 17 to 76 years) were evaluated in this study. Fifty-eight patients were excluded due to traumatic or acute rotator cuff tears and neck pain. Forty-three patients with chronic rotator cuff tear were divided into two groups: patients with (n=15) and without (n=28) an anterior greater tuberosity cyst. Patients were evaluated for range of motion, shoulder muscle strength, pain, and upper extremity function, and were assessed radiologically. Statistical differences were investigated between the two groups.
Results: The number of patients with tears larger than 1 cm and the number of patients who had muscle atrophy were higher in the group of patients with a cyst. Also, upper extremity function was reduced in the group of patients with a cyst (Western Ontario Rotator Cuff Index, p=0.03, Nine-Hole Peg Test, p=0.02).
Conclusion: Our findings demonstrated that decreased function, larger cuff tears and muscle atrophy can be observed in patients with anterior greater tuberosity cysts. Anterior greater tuberosity cysts can be detected on plain X-rays. The presence of these cysts should alert the physician to the possibility of decreased shoulder function, muscle atrophy, and a larger cuff tear before magnetic resonance imaging is ordered.
abstract_id: PUBMED:30798766
Association between the location of tuberosity cysts and rotator cuff tears: A comparative study using radiograph and MRI. Background: The association between tuberosity cysts and rotator cuff tears (RCTs) and the nature of the major contributing factors to tuberosity cyst formation continue to be controversial. The purpose of our study was to evaluate the strength of associations of RCT and various factors involved in the chronicity of RCT with tuberosity cysts, using magnetic resonance imaging (MRI) and radiographs.
Methods: We reviewed consecutive patients with various disease entities between August 2004 and July 2013. After excluding unsuitable patients, this study involved 1007 shoulders of 906 consecutive patients. Each tuberosity cyst was categorized as an anterior greater tuberosity (GT), posterior GT, lesser tuberosity, and bare-area cyst. The odds ratios (ORs) and 95% confidence intervals (CIs) between the tuberosity cysts and various factors were evaluated by logistic regression analyses; p-value was set below 0.05.
Results: Anterior GT cysts and posterior GT cysts on MRI, and anterior GT cysts on radiographs, were significantly associated with supraspinatus tendon (SST) tears (p ≤ 0.019) and infraspinatus tendon (IST) tears (p ≤ 0.004). Among the shoulder pathologies, only RCTs were significantly associated with cyst formation (OR 4.23, 95% CI 3.17-5.65; p < 0.001). The retraction grade of Patte was significantly associated with anterior GT cysts (OR 3.65, 95% CI 2.42-5.48; p < 0.001).
Conclusion: Detection of an anterior GT cyst on a radiograph, despite its low prevalence, in a patient with a symptomatic shoulder indicates a need to consider RCT, especially of the SST and IST, and a high possibility of a retracted tear.
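The odds ratios with 95% confidence intervals reported above are typically obtained by exponentiating logistic-regression coefficients. A minimal sketch follows using statsmodels on synthetic data; the variable names and the size of the effect are assumptions, not the study's data.

```python
# Minimal sketch (synthetic data): odds ratio and 95% CI from logistic
# regression relating presence of a cyst to presence of a rotator cuff tear.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1007
rct = rng.binomial(1, 0.5, size=n)                   # rotator cuff tear present
p_cyst = 1 / (1 + np.exp(-(-2.5 + 1.4 * rct)))       # cyst more likely with a tear
cyst = rng.binomial(1, p_cyst)

df = pd.DataFrame({"cyst": cyst, "rct": rct})
fit = smf.logit("cyst ~ rct", data=df).fit(disp=False)

odds_ratio = np.exp(fit.params["rct"])
ci_low, ci_high = np.exp(fit.conf_int().loc["rct"])
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```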
abstract_id: PUBMED:17681214
Arthroscopic grafting of greater tuberosity cyst and rotator cuff repair. Cysts of the greater tuberosity can be a normal finding independent of age and concurrent rotator cuff tear. The presence of a large greater tuberosity cyst can present a challenge at the time of rotator cuff repair. We present a 1-step arthroscopic technique to address these defects at the time of rotator cuff repair using a synthetic graft (OsteoBiologics, San Antonio, TX) originally designed to address osteoarticular defects. With the viewing portal established laterally, a portal allowing perpendicular access to the cyst is established. The cyst is thoroughly debrided, and a drill sleeve is then introduced perpendicular to the surrounding bone, serving as a guide for the matching drill to create a circular socket. A correspondingly sized TruFit BGS cylindrical implant (OsteoBiologics) is then implanted by use of the included instrumentation. The scaffold is placed flush with the surrounding bone. Because our arthroscopic rotator cuff protocol uses a tension-band technique with placement of suture anchors distal and lateral to the rotator cuff footprint, we are subsequently able to proceed with routine rotator cuff repair.
abstract_id: PUBMED:30093781
MR Geyser Sign in chronic rotator cuff tears. Acromio-clavicular (AC) joint cysts are rare presentation of chronic shoulder pathology. These cysts may be observed secondary to either degenerative changes in the AC joint with an intact rotator cuff (type 1 cyst) or following a chronic rotator cuff tear (type 2 cyst). The latter phenomenon is known as Geyser Sign and is described by ultrasound, conventional arthrogram and magnetic resonance imaging (MRI). We present a case of chronic rotator cuff tear presenting with a large type 2 cyst and Geyser Sign on MRI.
abstract_id: PUBMED:30798723
Evaluation of arthroscopic rotator cuff repair results in patients with anterior greater tubercle cysts. Purpose: The purpose of this study was to investigate the clinical results of arthroscopic rotator cuff repair in patients with an anterior greater tubercle cyst on magnetic resonance imaging (MRI).
Methods: The cyst-present group comprised 38 patients with an anterior greater tubercle cyst on MRI, and 30 age- and sex-matched patients without a cyst in the humeral head were included in the control group. The cystic group was divided into two subgroups, smaller than 5 mm (21 patients) and larger than 5 mm (17 patients), according to cyst size, so that a total of three groups were compared. In the evaluation of clinical outcomes, the modified University of California at Los Angeles (UCLA) score and the Western Ontario Rotator Cuff Index (WORC) were used. The visual analog scale (VAS) was used to assess pain. One-way analysis of variance was used to compare VAS, UCLA, and WORC scores among the groups.
Results: There was a statistically significant difference in the clinical results of VAS, UCLA, and WORC among the cystic and noncystic groups in the anterior greater tubercle (p < 0.05). There was also a statistically significant difference in the clinical results of UCLA, WORC, and VAS scores according to the cyst sizes in the anterior greater tubercle cyst group (p < 0.05).
Conclusion: Anterior greater tubercle cysts have negative effects on rotator cuff repair results. If the anterior greater tubercle cyst size is greater than 5 mm, the negative effects of rotator cuff repair results are more pronounced. An understanding of anterior greater tubercle cysts has a critical importance for rotator cuff surgery planning.
abstract_id: PUBMED:35366077
Transosseous repair with a cortical implant for greater tubercle cyst-related rotator cuff tear results in good clinical outcomes, but significant implant migration. Purpose: To evaluate whether an arthroscopic transosseous technique (ATO) with cortical implants is effective for rotator cuff tear (RCT) repair in patients with cysts of the greater tuberosity (GTC).
Methods: Patients treated with the ATO technique between January 2013 and October 2017 were evaluated. Inclusion criteria were patients treated for both cyst-related and non-cyst-related RCTs and patients with a moderate-sized tear (1-3 cm) according to the DeOrio and Cofield classification. A total of 39 patients were separated into two groups: Group 1 (n = 16) patients with cyst-associated RCT, and Group 2 (n = 23) patients with no cyst. Implant pull-out and migration were evaluated radiologically on standard antero-posterior shoulder radiographs and rotator cuff re-tear was assessed on magnetic resonance images at the final follow-up examination. Group 1 patients were separated into two subgroups according to cyst size (cyst < 5 mm and cyst ≥ 5 mm) and subgroup analysis was performed. Clinical assessment was performed using a visual analog scale, the Constant score and Oxford shoulder score.
Results: The mean follow-up time was 33.7 ± 11.7 months. The mean cyst size was 5.4 ± 1.5 mm. There was no significant difference in re-tear rates between the cystic and non-cystic groups. The mean implant migration distance was 3.0 ± 2.2 mm in patients with a RCT -related cyst and 0.7 ± 0.8 mm in those without a cyst. A statistically significant difference was found between the groups (p = 0.002). There was no statistically significant difference between the groups in respect of clinical scores. No implant failure was observed.
Conclusion: The ATO method performed with a cortical implant in RCTs resulted in satisfactory recovery and clinical outcomes in the short to medium term with low failure rates. While no implant failures were observed, implant migration was associated with cyst presence. Therefore, judicious use is advocated in the choice of transosseous fixation for cyst-related RCTs and patients should be informed of the possibility of implant migration.
Level Of Evidence: III.
abstract_id: PUBMED:31019562
The accuracy of plain radiographs in diagnosing degenerate rotator cuff disease. Background: A number of radiographic signs have been previously demonstrated to be associated with degenerative rotator cuff tears. An ability to predict the presence of a tear by radiography would permit the early commencement of appropriate treatment and the avoidance of unnecessary invasive investigations. The aim of the present study was to determine the accuracy of using radiographic signs to predict the presence of a cuff tear on arthroscopy.
Methods: Fifty consecutive patients who had undergone shoulder arthroscopy and had pre-operative plain radiographs were included. Pre-operative radiographs were reviewed by a consultant shoulder surgeon, a consultant radiologist and a senior clinical fellow for the following signs: acromial spur; subjective reduction of subacromial space; sourcil sign; acromial acetabularization; os acromiale; greater tuberosity cortical irregularity; greater tuberosity sclerosis; humeral head rounding; cyst; and reduction in acromiohumeral head distance.
Results: The presence of tuberosity sclerosis (p < 0.0001), tuberosity irregularities (p < 0.0001), tuberosity cyst (p = 0.004) and sourcil sign (p = 0.019) was associated with the presence of a rotator cuff tear. The combined sensitivity of prediction of tear by the observers following radiographic review was 91.7%, with a combined negative predictive value of 80%.
Conclusions: The assessment of radiographs by senior clinicians is a useful tool for confirming the absence of a rotator cuff tear.
abstract_id: PUBMED:33276984
Acromioclavicular cyst with geyser sign - An uncommon presentation of massive rotator cuff tear. Rotator cuff muscle tear is a common finding among adults and acromioclavicular cyst is a rare secondary manifestation. This case report describes the clinical presentation and workup diagnosis of a patient with acromioclavicular cyst in context of massive rotator cuff tear. Woman, 83-year-old developed a tumefaction over the left acromioclavicular joint. She had pain, limitation on active range of motion and function limitation of the left shoulder. The X-ray revealed superior humeral head displacement and signs of arthropathy. The MRI revealed "geyser sign" and identified an acromioclavicular cyst secondary to cuff tear arthropathy. Aspiration was not performed due to high recurrence rate and surgical removal was decided. Clinicians should be aware of this rare complication of rotator cuff tear, demanding exclusion of other possible causes of acromioclavicular cyst and offer suitable treatment options.
abstract_id: PUBMED:16352733
US of the shoulder: rotator cuff and non-rotator cuff disorders. Ultrasonography (US) has been shown to be an effective imaging modality in the evaluation of both rotator cuff and non-rotator cuff disorders, usually serving in a complementary role to magnetic resonance imaging of the shoulder. US technique for shoulder examination depends on patient positioning, scanning protocol for every tendon and anatomic part, and dynamic imaging. The primary US signs for rotator cuff supraspinatus tendon tears are tendon nonvisualization for complete tears, focal tendon defect for full-thickness tears, a hypoechoic defect of the articular side of the tendon for an articular-side partial-thickness tear, and flattening of the bursal surface of the tendon for a bursal-side partial-thickness tear. Secondary US signs such as cortical irregularity of the greater tuberosity and joint and subacromial-subdeltoid bursal fluid are helpful when correlated with the primary signs. Tendon degeneration, tendinosis, and intrasubstance tear are demonstrated as internal heterogeneity. Long-head biceps tendon abnormalities include instability, acute or chronic tear, and tendinosis. The acromioclavicular joint is assessed for dislocation, fluid collection, cysts, and bone erosions. Other non-rotator cuff disorders include synovial disorders such as adhesive capsulitis and synovial osteochondromatosis; degenerative disorders such as osteoarthritis, amyloid arthropathy, hemarthrosis, and chondrocalcinosis; infectious disorders such as septic arthritis and bursitis; and space-occupying lesions.
abstract_id: PUBMED:23748696
Humeral head cysts: association with rotator cuff tears and age. The objective of this study was to retrospectively evaluate the prevalence of the cystic changes at rotator cuff footprint on proximal humeral tuberosities and investigate their relationship with rotator cuff tears and patient age. Magnetic resonance (MR) images of 657 patients who underwent shoulder arthroscopy for treatment of rotator cuff disorders were reviewed to localize the cystic changes at anterior (supraspinatus insertion) and posterior (infraspinatus insertion) aspects of greater tuberosity (GT) and lesser tuberosity (subscapularis insertion). Preoperative MR reports as well as cyst size and locations on MR images were correlated with arthroscopic records of rotator cuff pathology (tear type, size, location and tendon involvement) and patient age. The prevalence of cystic changes was 9.1% (60 patients) in the study population. Anterior GT cysts were found in 56% of patients and were strongly associated with full-thickness (p<.001) and articular-sided partial-thickness rotator cuff tears (p=.02). Posterior GT and lesser tuberosity cysts were found in 27 and 17% of patients, respectively, and were not significantly related to rotator cuff tears, although there was an increased trend of posterior cysts in patients with infraspinatus tears (p=.09). A significant relation was found between patient age and the cyst size (p=.01), while none of the cyst localizations were statistically related to age. Anterior GT cysts were more common in this patient group and demonstrated a strong association with rotator cuff disorders regardless of age. Posterior GT and lesser tuberosity cysts were less common and showed no association with rotator cuff pathology or patient age.
Answer: Yes, the presence of anterior greater tuberosity cysts does change the function in patients with chronic rotator cuff tear. A study comparing patients with chronic rotator cuff tear with and without anterior greater tuberosity cysts found that those with a cyst had larger cuff tears, more muscle atrophy, and reduced upper extremity function as measured by the Western Ontario Rotator Cuff Index and the Nine-Hole Peg Test (PUBMED:25413458). Additionally, anterior greater tuberosity cysts detected on radiographs were significantly associated with supraspinatus tendon and infraspinatus tendon tears, and a high possibility of a retracted tear (PUBMED:30798766). Furthermore, another study showed that anterior greater tubercle cysts negatively affect the results of arthroscopic rotator cuff repair, with larger cysts (>5 mm) having a more pronounced negative effect (PUBMED:30798723). These findings suggest that the presence of anterior greater tuberosity cysts should be considered when evaluating shoulder function and planning rotator cuff surgery. |
Instruction: Do low preoperative vitamin D levels reduce the accuracy of quick parathyroid hormone in predicting postthyroidectomy hypocalcemia?
Abstracts:
abstract_id: PUBMED:22968355
Do low preoperative vitamin D levels reduce the accuracy of quick parathyroid hormone in predicting postthyroidectomy hypocalcemia? Background: Although some studies have suggested that low preoperative 25-hydroxyvitamin D (25-OHD) levels may increase the risk of hypocalcemia and decrease the accuracy of single quick parathyroid hormone in predicting hypocalcemia after total thyroidectomy, the literature remains scarce and inconsistent. Our study aimed to address these issues.
Methods: Of the 281 consecutive patients who underwent a total/completion total thyroidectomy, 244 (86.8%) did not require any oral calcium and/or calcitriol supplements (group 1), while 37 (13.2%) did (group 2) at hospital discharge. 25-OHD level was checked 1 day before surgery, and postoperative quick parathyroid hormone (PTH) was checked at skin closure (PTH-SC). Postoperative serum calcium was checked regularly. Hypocalcemia was defined by the presence of symptoms or adjusted calcium of <1.90 mmol/L. Significant factors for hypocalcemia were determined by univariate and multivariate analyses. The accuracy of PTH-SC in predicting hypocalcemia was measured by area under a receiver operating characteristic curve (AUC), and the AUC of PTH-SC was compared between patients with preoperative 25-OHD <15 and ≥15 ng/mL via bootstrapping.
Results: Preoperative 25-OHD level was not significantly different between groups 1 and 2 (13.1 vs. 12.5 ng/mL, p = 0.175). After adjusting for other significant factors, PTH-SC (odds ratio 2.49, 95% confidence interval 1.52-4.07, p < 0.001) and parathyroid autotransplantation (odds ratio 3.23, 95% confidence interval 1.22-8.60, p = 0.019) were the two independent factors for hypocalcemia. The AUC of PTH-SC was similar between those with 25-OHD <15 and ≥15 ng/mL (0.880 vs. 0.850, p = 0.61).
Conclusions: Low 25-OHD was not a significant factor for hypocalcemia and did not lower the accuracy of quick PTH in predicting postthyroidectomy hypocalcemia.
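The abstract above compares the AUC of the skin-closure PTH between 25-OHD subgroups via bootstrapping. A minimal sketch of that kind of comparison is given below; it is not the authors' code, the data are synthetic, and the use of NumPy and scikit-learn is an assumption.

```python
# Minimal sketch (synthetic data): bootstrap comparison of predictor AUCs
# between two patient subgroups.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

def simulate_group(n, separation):
    y = rng.binomial(1, 0.13, size=n)                 # ~13% hypocalcemia, as above
    # Lower post-op PTH in hypocalcemic patients; negate so higher score = higher risk
    pth = rng.normal(np.where(y == 1, 1.0, 1.0 + separation), 0.6, size=n)
    return y, -pth

y_low, s_low = simulate_group(140, 1.4)               # 25-OHD < 15 ng/mL (assumed size)
y_high, s_high = simulate_group(141, 1.3)             # 25-OHD >= 15 ng/mL (assumed size)

def bootstrap_auc_diff(y1, s1, y2, s2, n_boot=2000):
    diffs = []
    for _ in range(n_boot):
        i1 = rng.integers(0, len(y1), len(y1))
        i2 = rng.integers(0, len(y2), len(y2))
        if len(set(y1[i1])) < 2 or len(set(y2[i2])) < 2:
            continue                                   # resample must contain both classes
        diffs.append(roc_auc_score(y1[i1], s1[i1]) - roc_auc_score(y2[i2], s2[i2]))
    return np.percentile(diffs, [2.5, 97.5])

print("AUCs:", roc_auc_score(y_low, s_low), roc_auc_score(y_high, s_high))
print("95% bootstrap CI for the AUC difference:", bootstrap_auc_diff(y_low, s_low, y_high, s_high))
```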
abstract_id: PUBMED:29167863
Association of Parathyroid Hormone Level With Postthyroidectomy Hypocalcemia: A Systematic Review. Importance: There has been an increased interest in measuring parathyroid hormone (PTH) levels as an early predictive marker for the development of hypocalcemia after total thyroidectomy. However, significant variation exists in the timing, type of assay, and thresholds of PTH in the literature.
Objective: We performed a systematic review to examine the utility of PTH levels in predicting temporary postthyroidectomy hypocalcemia.
Evidence Review: A systematic literature review of studies published prior to May 25, 2016 was performed within PubMed, EMBASE, SCOPUS, and Cochrane databases using the following terms and keywords: "thyroidectomy," "parathyroid hormone," and "hypocalcaemia," "calcium," or "calcitriol." Each candidate full-text publication was reviewed by 2 independent reviewers and selected for data extraction if the study examined the prognostic significance of PTH obtained within 24 hours after thyroidectomy to predict hypocalcaemia. Studies were excluded if calcium supplementation was used routinely or based on a PTH level. Study characteristics, PTH parameters used to predict hypocalcemia, and their respective accuracies were summarized.
Findings: The initial search yielded 2417 abstracts. Sixty-nine studies, comprising 9163 patients, were included. Overall, for an absolute PTH threshold, the median accuracy, sensitivity, and specificity were 86%, 85%, and 86%, respectively. For a percentage change over time the median accuracy, sensitivity, and specificity were 89%, 88%, and 90%, respectively.
Conclusions And Relevance: The existing literature regarding PTH levels to predict postthyroidectomy hypocalcemia is extremely heterogeneous. A single PTH threshold is not a reliable measure of hypocalcemia. Additional prospective studies controlled for timing of laboratory draws and a priori defined PTH thresholds need to be performed to ascertain the true prognostic significance of PTH in predicting postthyroidectomy hypocalcaemia.
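The review above contrasts two families of PTH rules: an absolute post-operative threshold and a percentage change over time. The sketch below shows how sensitivity, specificity, and accuracy would be computed for each rule; the data, cutoffs, and effect sizes are synthetic assumptions, not pooled estimates from the review.

```python
# Minimal sketch (synthetic data): evaluating an absolute PTH threshold versus
# a percentage-decline rule against observed hypocalcemia.
import numpy as np

rng = np.random.default_rng(5)
n = 500
hypocalcemia = rng.binomial(1, 0.25, size=n).astype(bool)
pth_pre = np.clip(rng.normal(5.0, 1.5, size=n), 1.0, None)
# Post-op PTH falls further in patients who become hypocalcemic (synthetic effect)
drop = np.where(hypocalcemia, rng.uniform(0.6, 0.95, n), rng.uniform(0.0, 0.6, n))
pth_post = pth_pre * (1 - drop)

def performance(predicted, actual):
    tp = np.sum(predicted & actual); tn = np.sum(~predicted & ~actual)
    fp = np.sum(predicted & ~actual); fn = np.sum(~predicted & actual)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / len(actual)}

absolute_rule = pth_post < 1.5                         # e.g., post-op PTH < 1.5 pmol/L
decline_rule = (pth_pre - pth_post) / pth_pre > 0.65   # e.g., > 65% decline from baseline

print("absolute threshold:", performance(absolute_rule, hypocalcemia))
print("percentage decline:", performance(decline_rule, hypocalcemia))
```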
abstract_id: PUBMED:32555278
Preoperative Vitamin D Levels as a Predictor of Transient Hypocalcemia and Hypoparathyroidism After Parathyroidectomy. Hypocalcemia is a common problem after parathyroidectomy and/or thyroidectomy. The complication may be transient or permanent. Most cases occur as a result of removal of the parathyroid glands or damage to the glands during neck surgery. The purpose of this study was to evaluate the effect of preoperative vitamin D deficiency in predicting transient hypocalcemia and hypoparathyroidism after parathyroidectomy.Retrospective evaluation was made of 180 patients with primary hyperparathyroidism in respect of serum 25(OH)D, calcium and parathyroid hormone before and after parathyroidectomy. Transient hypocalcemia was defined as corrected calcium ≤ 8.4 mg/dL, and these cases were then evaluated for preoperative 25(OH)D values. Transient hypoparathyroidism has been described as low PTH level immediately after surgery before beginning any supplementation. Permanent hypoparathyroidism is accepted as the need for medical treatment is necessary over 12 months.Both transient hypocalcemia and hypoparathyroidism developed at statistically significantly higher rates in patients with preoperative vitamin D deficiency and vitamin D insufficiency.Vitamin D deficiency is an independent contributor to transient hypocalcemia and hypoparathyroidism following parathyroidectomy.
abstract_id: PUBMED:28670529
A Prospective Study on Role of Supplemental Oral Calcium and Vitamin D in Prevention of Postthyroidectomy Hypocalcemia. Background: Postoperative transient hypocalcemia is sequelae of total thyroidectomy (TT), which is observed in up to 50% of patients. Routine oral calcium and Vitamin D supplementation have been proposed to prevent symptomatic hypocalcemia preventing morbidity and facilitating early discharge.
Patients And Methods: A total of 208 patients with nontoxic benign thyroid disorders undergoing TT were serially randomized into four groups: Group A (no supplements given), Group B (oral calcium, 2 g/day), Group C (calcium and calcitriol, 1 mcg/day), and Group D (calcium, calcitriol, and cholecalciferol, 60,000 IU/day). Patients were monitored for clinical and biochemical hypocalcemia (serum calcium [Sr. Ca] <8 mg/dL), along with serum intact parathormone (Sr. PTH) and magnesium 6 h after surgery, and Sr. Ca every 24 h. Intravenous (IV) calcium infusion was started if any patient in the four groups exhibited frank hypocalcemia. Patients were followed up with Sr. Ca and Sr. PTH at 3 and 6 months.
Results: All groups were age and sex matched. Hypocalcemia was observed in 72/208 (34.61%) cases. Incidence of hypocalcemia was higher in Group A (57.69%) and Group B (50%) compared to Group C (15.38%) and Group D (15.38%). Hypocalcemia necessitating IV calcium occurred in 31/208 (14.90%) patients. The requirement for IV calcium was higher in Group A (26.92%) and Group B (23.07%) compared to Group C (5.76%) and Group D (3.84%). There was no statistical difference among the groups in baseline levels of serum vitamin D, calcium, magnesium, or intact PTH, or in values 6 h after surgery. Permanent hypoparathyroidism developed in five patients on follow-up.
Conclusion: Routine postoperative supplementation of oral calcium and Vitamin D will help in the prevention of postthyroidectomy transient hypocalcemia significantly. Preoperative Vitamin D levels do not predict postoperative hypocalcemia.
abstract_id: PUBMED:25475499
Severe vitamin D deficiency: a significant predictor of early hypocalcemia after total thyroidectomy. Objective: To assess the role of preoperative serum 25 hydroxyvitamin D as predictor of hypocalcemia after total thyroidectomy.
Study Design: Retrospective cohort study.
Setting: University teaching hospital.
Subjects And Methods: All consecutively performed total and completion thyroidectomies from February 2007 to December 2013 were reviewed through a hospital database and patient charts. The relationship between postthyroidectomy laboratory hypocalcemia (serum calcium ≤2 mmol/L), clinical hypocalcemia, and preoperative serum 25 hydroxyvitamin D level was evaluated.
Results: Two hundred thirteen patients were analyzed. The incidence of postoperative laboratory and clinical hypocalcemia was 19.7% and 17.8%, respectively. The incidence of laboratory and clinical hypocalcemia among severely deficient (<25 nmol/L), deficient (<50 nmol/L), insufficient (<75 nmol/L), and sufficient (≥75 nmol/L) serum 25 hydroxyvitamin D levels was 54% versus 33.9%, 10% versus 18%, 2.9% versus 11.6%, and 3.1% versus 0%, respectively. Multiple logistic regression analysis revealed preoperative severe vitamin D deficiency as a significant independent predictor of postoperative hypocalcemia (odds ratio [OR], 7.3; 95% confidence interval [CI], 2.3-22.9; P=.001). Parathyroid hormone level was also found to be an independent predictor of postoperative hypocalcemia (OR, 0.6; 95% CI, 0.5-0.8; P=.002).
Conclusion: Postoperative clinical and laboratory hypocalcemia is significantly associated with low levels of serum 25 hydroxyvitamin D. Our findings identify severe vitamin D deficiency (<25 nmol/L) as an independent predictor of postoperative laboratory hypocalcemia. Early identification and management of patients at risk may reduce morbidity and costs.
abstract_id: PUBMED:23700585
Vitamin D deficiency and the risk of hypocalcemia following total thyroidectomy. Objective: To determine whether patients with vitamin D deficiency (VDD) are at increased risk for hypocalcemia following total thyroidectomy.
Methods: A retrospective study of 246 consecutive patients undergoing thyroidectomy at a McGill University teaching hospital was conducted. Patients who had subtotal thyroidectomy or concomitant parathyroidectomy or whose laboratory tests were incomplete for analysis were excluded, as were pediatric patients. The remaining 139 patients had preoperative 25-hydroxyvitamin D [25(OH)D], corrected calcium, and parathyroid hormone (PTH) measured. Postoperatively, PTH and serum calcium were measured to assess for hypocalcemia. Low vitamin D (LVD) was defined as 25(OH)D ≤ 70 nmol/L (≤ 28 ng/mL), which includes vitamin D insufficiency, 25(OH)D > 35 nmol/L (> 14 ng/mL) but ≤ 70 nmol/L (≤ 28 ng/mL), and VDD, 25(OH)D ≤ 35 nmol/L (≤ 14 ng/mL). Adequate vitamin D (AVD) corresponded to levels > 35 nmol/L (> 14 ng/mL), whereas optimal vitamin D (OVD) levels corresponded to levels > 70 nmol/L (> 28 ng/mL).
Results: The rate of postthyroidectomy hypocalcemia in OVD patients was 10.4% (8 of 77) compared to 3.2% (2 of 62) in LVD patients (odds ratio = 0.29, p = .10). There was no hypocalcemia in the 9 VDD patients, meaning that all hypocalcemic episodes occurred in patients with AVD (7.7%; 10 of 130). The mean preoperative PTH level for LVD patients was 4.65 pmol/L (43 ng/L) compared to 4.18 pmol/L (38.9 ng/L) for OVD patients (p = .073).
Conclusions: In this series, preoperative LVD did not predict early postthyroidectomy hypocalcemia. On the contrary, it showed a trend toward a protective effect. Adaptive changes in the parathyroid glands, such as hypertrophy, hyperplasia, or the ability to secrete more hormone secondary to prolonged VDD, may contribute to this phenomenon. A large prospective study is needed to better understand the relationship between preoperative vitamin D levels and postoperative hypocalcemia.
abstract_id: PUBMED:20642997
Accuracy of postthyroidectomy parathyroid hormone and corrected calcium levels as early predictors of clinical hypocalcemia. Objective: To evaluate the accuracy of measurement of different parathyroid hormone (PTH) and corrected calcium (cCa) levels at different times as early predictors of postthyroidectomy hypocalcemia.
Design: A retrospective cohort study.
Setting: King Fahad Medical City, Riyadh, Saudi Arabia, between January 2006 and March 2009.
Methods: Patients who underwent total or completion thyroidectomy were followed until hospital discharge. Patients were observed clinically for hypocalcemia; at the same time, the postoperative PTH and cCa levels after 6, 12, and 20 hours and then twice daily were recorded.
Main Outcome Measures: Postthyroidectomy hypocalcemia.
Results: Seventy-nine of 116 patients were enrolled in our study; 26.60% of them had hypocalcemia. PTH measurement at 6 hours postoperatively was an excellent predictor of hypocalcemia (area under the curve = 0.95, 95% CI 0.88-0.99). The mean PTH at 6 hours for hypocalcemic patients was 0.93 (± 0.60). A PTH cutoff level of 1.7 pmol/L at 6 hours had 95.2% sensitivity, 89.7% specificity, 76.9% positive predictive value (PPV), and 98.1% negative predictive value (NPV). By comparison, a cCa cutoff level of 2.1 mmol/L had 81.0% sensitivity, 81.6% specificity, 65.3% PPV, and 90.9% NPV in predicting hypocalcemic patients.
Conclusions: PTH measurement 6 hours after surgery with a cutoff level of 1.7 pmol/L is more accurate than serial calcium level measurement for early prediction of patients at risk of hypocalcemia. Thus, a single PTH measurement postoperatively will help in discharging patients safely within the first 24 hours, improving bed use and cost-effective care.
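The predictive values quoted above follow directly from the reported sensitivity, specificity, and hypocalcemia prevalence. The short sketch below recomputes PPV and NPV from those figures and then shows, as a hypothetical, how the same cutoff would perform at a lower prevalence; only the 26.6% prevalence scenario uses the study's numbers.

```python
# Sketch: PPV and NPV derived from sensitivity, specificity, and prevalence.
def predictive_values(sensitivity, specificity, prevalence):
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# Reported values: sensitivity 95.2%, specificity 89.7%, prevalence 26.6%
ppv, npv = predictive_values(0.952, 0.897, 0.266)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")          # close to the reported 76.9% and 98.1%

# Hypothetical lower-prevalence population: PPV drops, NPV rises
ppv_low, npv_low = predictive_values(0.952, 0.897, 0.10)
print(f"At 10% prevalence: PPV = {ppv_low:.1%}, NPV = {npv_low:.1%}")
```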
abstract_id: PUBMED:38013484
Management of Postthyroidectomy Hypoparathyroidism and Its Effect on Hypocalcemia-Related Complications: A Meta-Analysis. Objective: The aim of this Meta-analysis is to evaluate the impact of different treatment strategies for early postoperative hypoparathyroidism on hypocalcemia-related complications and long-term hypoparathyroidism.
Data Sources: Embase.com, MEDLINE, Web of Science Core Collection, Cochrane Central Register of Controlled Trials, and the top 100 references of Google Scholar were searched to September 20, 2022.
Review Methods: Articles reporting on adult patients who underwent total thyroidectomy which specified a treatment strategy for postthyroidectomy hypoparathyroidism were included. Random effect models were applied to obtain pooled proportions and 95% confidence intervals. Primary outcome was the occurrence of major hypocalcemia-related complications. Secondary outcome was long-term hypoparathyroidism.
Results: Sixty-six studies comprising 67 treatment protocols and 51,096 patients were included in this Meta-analysis. In 8 protocols (3806 patients), routine calcium and/or active vitamin D medication was given to all patients directly after thyroidectomy. In 49 protocols (44,012 patients), calcium and/or active vitamin D medication was only given to patients with biochemically proven postthyroidectomy hypoparathyroidism. In 10 protocols (3278 patients), calcium and/or active vitamin D supplementation was only initiated in case of clinical symptoms of hypocalcemia. No patient had a major complication due to postoperative hypocalcemia. The pooled proportion of long-term hypoparathyroidism was 2.4% (95% confidence interval, 1.9-3.0). There was no significant difference in the incidence of long-term hypoparathyroidism between the 3 supplementation groups.
Conclusions: All treatment strategies for postoperative hypocalcemia prevent major complications of hypocalcemia. The early postoperative treatment protocol for postthyroidectomy hypoparathyroidism does not seem to influence recovery of parathyroid function in the long term.
abstract_id: PUBMED:7940161
Risk factors for postthyroidectomy hypocalcemia. Background: Hypocalcemia is a common sequela of thyroidectomy; however, its causative factors have not been completely delineated.
Methods: A prospective study of 60 patients who underwent unilateral (n = 15) or bilateral (n = 45) thyroidectomy between 1990 and 1993 was completed to determine the incidence and risk factors for hypocalcemia. Free thyroxine, thyrotropin, and alkaline phosphatase levels were obtained before operation in all patients, together with preoperative and postoperative ionized calcium, parathyroid hormone (PTH), calcitonin, and 1,25-dihydroxyvitamin D3 levels. All patients were examined for age, gender, extent of thyroidectomy, initial versus reoperative neck surgery, weight and pathologic characteristics of resected thyroid tissue, substernal thyroid extension, and parathyroid resection and autotransplantation.
Results: Hypocalcemia, defined by an ionized calcium level less than 4.5 mg/dl, occurred in 28 patients (47%), including nine (15%) symptomatic patients who required vitamin D and/or calcium for 2 to 6 weeks. In no patient did permanent hypoparathyroidism develop. With a multivariate logistic regression analysis, factors that were predictive of postoperative hypocalcemia included an elevated free thyroxine level (p = 0.003), cancer (p = 0.010), and substernal extension (p = 0.046).
Conclusions: Postoperative decline in parathyroid hormone was not an independent risk factor for hypocalcemia, indicating that other factors besides parathyroid injury, ischemia, or removal are involved in the pathogenesis of postthyroidectomy hypocalcemia. An elevated free thyroxine level, substernal thyroid disease, and carcinoma are risk factors for postthyroidectomy hypocalcemia, and their presence should warrant routine postoperative calcium measurement. In the absence of these risk factors, routine postoperative measurement of serum calcium is unnecessary.
abstract_id: PUBMED:31149103
IS THERE A RELATIONSHIP BETWEEN VITAMIN D STATUS AND HYPOCALCEMIA AFTER TOTAL THYROIDECTOMY? Context: Vitamin D plays a crucial role in calcium metabolism through parathormone-dependent process. Deficiency of this important nutrient may be associated with hypocalcemia after thyroidectomy.
Objective: To evaluate the role of vitamin D in predicting hypocalcemia following total thyroidectomy.
Subjects And Methods: One hundred and fifty patients who underwent total thyroidectomy for benign or malignant thyroid disease were included in this prospective study. The association between preoperative vitamin D status and the development of hypocalcemia was investigated.
Results: Biochemical and symptomatic hypocalcemia were found in 28 (18.7%) and 22 (14.7%) patients, respectively. The preoperative vit D level was significantly lower in patients with biochemical (p = 0.006) and clinical (p < 0.001) hypocalcemia than in normocalcemic patients. Patients with a vit D level <10 ng/mL (severe deficiency) developed biochemical and clinical hypocalcemia significantly more often than patients with a serum vit D level higher than 10 ng/mL (p = 0.030 and p < 0.001, respectively).
Conclusions: Although postthyroidectomy hypocalcemia is multifactorial, vit D deficiency, particularly its severe form, is significantly associated with the development of biochemical and clinical hypocalcemia. Vit D supplementation can prevent this unwanted complication in such patients.
Answer: No, low preoperative vitamin D levels do not reduce the accuracy of quick parathyroid hormone (PTH) in predicting postthyroidectomy hypocalcemia. A study specifically aimed to address this issue found that preoperative 25-hydroxyvitamin D (25-OHD) levels were not significantly different between patients who did and did not require oral calcium and/or calcitriol supplements at hospital discharge after total/completion total thyroidectomy. Furthermore, after adjusting for other significant factors, quick PTH at skin closure (PTH-SC) and parathyroid autotransplantation were the two independent factors for hypocalcemia. The accuracy of PTH-SC in predicting hypocalcemia, measured by the area under a receiver operating characteristic curve (AUC), was similar between patients with preoperative 25-OHD levels below 15 ng/mL and those with levels at or above 15 ng/mL (PUBMED:22968355).
This finding is consistent with the broader literature, which suggests that while PTH levels are used as an early predictive marker for the development of hypocalcemia after total thyroidectomy, there is significant variation in the timing, type of assay, and thresholds of PTH used. A systematic review found that a single PTH threshold is not a reliable measure of hypocalcemia, indicating that additional controlled prospective studies are needed to ascertain the true prognostic significance of PTH in predicting postthyroidectomy hypocalcemia (PUBMED:29167863).
However, it is important to note that other studies have found an association between preoperative vitamin D deficiency and the development of transient hypocalcemia and hypoparathyroidism after parathyroidectomy (PUBMED:32555278), and severe vitamin D deficiency has been identified as a significant predictor of early hypocalcemia after total thyroidectomy (PUBMED:25475499). Despite these associations, the specific question of whether low preoperative vitamin D levels affect the accuracy of quick PTH in predicting hypocalcemia appears to be answered negatively based on the study with the direct aim of investigating this relationship (PUBMED:22968355). |
Instruction: Does lemon candy decrease salivary gland damage after radioiodine therapy for thyroid cancer?
Abstracts:
abstract_id: PUBMED:15695785
Does lemon candy decrease salivary gland damage after radioiodine therapy for thyroid cancer? Unlabelled: Salivary gland dysfunction is one of the common side effects of high-dose radioiodine therapy for thyroid cancer. The purpose of this study was to determine whether an early start of sucking lemon candy decreases salivary gland injury after radioiodine therapy.
Methods: The incidence of the side effects of radioiodine therapy on the salivary glands was prospectively and longitudinally investigated in 2 groups of patients with postsurgical differentiated thyroid cancer with varying regimens for sucking lemon candy. From August 1999 to October 2000, 116 consecutive patients were asked to suck 1 or 2 lemon candies every 2-3 h in the daytime of the first 5 d after radioiodine therapy (group A). Lemon candy sucking was started within 1 h after radioiodine ingestion. From November 2000 to June 2002, 139 consecutive patients (group B) were asked to suck lemon candies in a manner similar to that of group A. In the group B, lemon candies were withheld until 24 h after the ingestion of radioiodine. Patients with salivary gland disorders, diabetes, collagen tissue diseases, or a previous history of radioiodine therapy or external irradiation to the neck were excluded. Thus, 105 patients in group A and 125 patients in group B were available for analysis. There were no statistical differences in the mean age (55.2 y vs. 58.5 y), average levels of serum free thyroxine (l-3,5,3',5'-tetraiodothyronine) (0.40 ng/dL vs. 0.47 ng/dL), and the mean dose of (131)I administered (3.96 GBq vs. 3.87 GBq) between the 2 groups. The onset of salivary side effects was monitored during hospital admission and regular follow-up on the basis of interviews with patients, a visual analog scale, and salivary gland scintigraphy using (99m)Tc-pertechnetate. When a patient showed a persistent (>4 mo) dry mouth associated with a nonfunctioning pattern on salivary gland scintigraphy, a diagnosis of xerostomia was established.
Results: The incidences of sialadenitis, hypogeusia or taste loss, and dry mouth with or without repeated sialadenitis in group A versus group B were 63.8% versus 36.8% (P < 0.001), 39.0% versus 25.6% (P < 0.01), and 23.8% versus 11.2% (P < 0.005), respectively. Permanent xerostomia occurred in 15 patients in group A (14.3%) and 7 patients in group B (5.6%) (P < 0.05). In both groups, bilateral involvement of the parotid gland was seen most frequently, followed by bilateral involvement of the submandibular gland.
Conclusion: An early start of sucking lemon candy may induce a significant increase in salivary gland damage. Lemon candy should not be given until 24 h after radioiodine therapy.
abstract_id: PUBMED:24256334
A randomized controlled trial for the use of thymus honey in decreasing salivary gland damage following radioiodine therapy for thyroid cancer: research protocol. Aim: To test the effectiveness of thymus honey as a complementary intervention for decreasing the salivary gland damage due to Radioiodine ((131) I) therapy.
Background: Radioiodine is the treatment of choice in people diagnosed with thyroid cancer following total thyroidectomy. Although its value has been acknowledged in eradicating remnant thyroid tissue and treating residual disease in patients with visible, inoperable, iodine-avid metastases, it has been associated with various salivary gland side effects.
Design: This is a randomized controlled trial with a 2 × 3 mixed between-within subjects design.
Methods: In total, 120 participants with postsurgical differentiated thyroid cancer, who will be referred to this centre for (131) I therapy to ablate the remnant thyroid tissue or to treat metastatic tumour, will be prospectively studied under varying regimens of lemon candy (standard treatment) and thymus honey mouthwashes (experimental intervention). Patients will be randomized into four equally sized groups based on the assumptions and hypothesis of the study. The recruiting process will be informed by predefined inclusion and exclusion criteria. Mixed statistical modelling will be adopted, taking into consideration between- and within-subjects effects and repeated measures.
Discussion: The recommended intervention protocol is expected to improve the comprehensive management of salivary gland-related side effects induced by radioiodine treatment in people diagnosed with thyroid cancer. Through the chosen methodological approach, the trial will test the ideal intervention protocol in terms of when to initiate the intervention and how frequently to apply it in order to minimize salivary gland damage.
abstract_id: PUBMED:26136724
Salivary Function after Radioiodine Therapy: Poor Correlation between Symptoms and Salivary Scintigraphy. Objective: Symptoms of salivary gland dysfunction frequently develop after radioactive iodine (RAI) therapy, but have generally not been correlated with assessment of salivary gland functioning. The aim of this study was to determine whether there was a correlation between salivary symptoms and salivary functioning as assessed by salivary scan parameters.
Methods: This was a non-randomized observational study. Fifteen patients receiving RAI therapy for differentiated thyroid cancer completed a questionnaire assessing their salivary and nasal symptoms prior to their therapy and 3 and 12 months after their therapy. Salivary gland scanning using technetium-99m pertechnetate was performed at the same time points. In addition, protective measures used at the time of radioiodine administration, such as use of fluids and sour candy, were also documented. Measures of salivary gland accumulation and secretion were correlated with scores of salivary and nasal symptomatology and any effects of protective measures were assessed.
Results: The mean number of salivary, nasal, and total symptoms at 3 months increased significantly over the number of symptoms at baseline by 3.7, 2.7, and 6.3 symptoms, respectively (p values 0.001, 0.0046, and <0.001, respectively). The mean increases in the number of salivary, nasal, and total symptoms at 12 months were non-significant at 1.3, 1.3, and 2.5 symptoms, respectively. The mean right parotid gland accumulation and secretion of radioisotope declined significantly at 3 months, compared with baseline. The changes in left parotid and right and left submandibular function were non-significant. There was no association between the increase in salivary, nasal, or total symptoms and the change in scintigraphy measures. However, the increases in nasal and total symptoms were significantly greater in those with co-existent Hashimoto's disease, compared with those without this condition (p values 0.01 and 0.04, respectively). Nasal symptoms decreased (p value 0.04) and total symptoms trended to decrease (p value 0.08) in those who used sour candies, compared with those who did not. Increasing body mass index was significantly associated with increasing nasal symptoms (p value 0.05). Greater decline in salivary parameters at 3 months compared with baseline was generally associated with heavier body weight, decreased thyroid cancer stage, absence of Hashimoto's thyroiditis, and pre-menopausal status.
Conclusion: Salivary and nasal symptoms increased and salivary scintigraphy parameters decreased after radioiodine therapy. However, the increased symptoms did not correlate with decrements in salivary gland accumulation or secretion. Moreover, the variables associated with symptoms and changes in salivary scan parameters differed. Therefore, a better understanding of the relationship between salivary gland symptoms and functioning is needed. Factors affecting susceptibility to salivary and nasal damage after radioiodine therapy need to be better elucidated, so that modifiable factors can be identified.
abstract_id: PUBMED:34040292
Early Quantification of Salivary Gland Function after Radioiodine Therapy. Purpose Of The Study: Radioiodine (I-131) is used as an effective noninvasive treatment for thyroid malignancies. Salivary gland is one of the most affected nontarget organs. The present study aims to perform early quantification of salivary gland function after I-131 therapy (RIT) for thyroid cancer considering I-131 down-scatter in the Tc-99m window.
Materials And Methods: A total of 20 patients (6 males and 14 females) with differentiated thyroid carcinoma were enrolled in the study. Baseline dynamic salivary scintigraphy was performed in all patients using 185-370 MBq (5-10 mCi) Tc-99m pertechnetate. Posttherapy salivary scintigraphy was performed 10-25 days after RIT, with administered activities in the range of 1.85-7.4 GBq (50-200 mCi). Time-activity curves obtained from the pre- and posttherapy dynamic salivary scintigraphy were used for semi-quantitative analysis. Uptake ratio (UR), ejection fraction (EF%), and maximum accumulation (MA%) were calculated by drawing regions of interest of individual parotid and submandibular glands over a composite image, after correcting for down-scatter from I-131 in the Tc-99m window. A paired t-test was used for comparison of the parameters obtained.
Results: Significant changes were observed in UR and EF% of both parotid and submandibular glands (P < 0.05). No significant changes were found in the value of MA% of left parotid gland and both submandibular glands in the posttherapy scans in comparison to pretherapy scans (P > 0.05). However, significant difference was observed in the MA% of the right parotid gland (P = 0.025).
Conclusion: Salivary gland function was found to deteriorate after RIT, with the parotid glands affected more than the submandibular glands.
abstract_id: PUBMED:25177377
Significance of Salivary Gland Radioiodine Retention on Post-ablation (131)I Scintigraphy as a Predictor of Salivary Gland Dysfunction in Patients with Differentiated Thyroid Carcinoma. Purpose: We investigated whether (131)I whole-body scintigraphy could predict functional changes in salivary glands after radioiodine therapy.
Methods: We evaluated 90 patients who received initial high-dose (≥3.7 GBq) radioiodine therapy after total thyroidectomy. All patients underwent diagnostic (DWS) and post-ablation (TWS) (131)I whole-body scintigraphy. Visual assessment of salivary radioiodine retention on DWS and TWS was used to divide the patients into two types of groups: a DWS+ or DWS- group and a TWS+ or TWS- group. Salivary gland scintigraphy was also performed before DWS and at the first follow-up visit. Peak uptake and %washout were calculated in ROIs of each gland. Functional changes (Δuptake or Δwashout) of salivary glands after radioiodine therapy were compared between the two groups.
Results: Both peak uptake and %washout of the parotid glands were significantly lower after radioiodine therapy (all p values <0.001), whereas only the %washout was significantly reduced in the submandibular glands (all p values <0.05). For the parotid glands, the TWS+ group showed larger Δuptake and Δwashout after radioiodine therapy than did the TWS- group (all p values <0.01). In contrast, the Δuptake and Δwashout of the submandibular glands did not significantly differ between the TWS+ and TWS- groups (all p values >0.05). Likewise, no differences in Δuptake or Δwashout were apparent between the DWS+ and DWS- groups in either the parotid or submandibular glands (all p values >0.05).
Conclusion: Salivary gland radioiodine retention on post-ablation (131)I scintigraphy is a good predictor of functional impairment of the parotid glands after high-dose radioiodine therapy.
abstract_id: PUBMED:27446226
Clinical Studies of Nonpharmacological Methods to Minimize Salivary Gland Damage after Radioiodine Therapy of Differentiated Thyroid Carcinoma: Systematic Review. Purpose. To systematically review clinical studies examining the effectiveness of nonpharmacological methods to prevent/minimize salivary gland damage due to radioiodine treatment of differentiated thyroid carcinoma (DTC). Methods. Reports on relevant trials were identified by searching the PubMed, CINAHL, Cochrane, and Scopus electronic databases covering the period 01/2000-10/2015. Inclusion/exclusion criteria were prespecified. The search yielded eight studies that were reviewed by four of the present authors. Results. Nonpharmacological methods used in trials may reduce salivary gland damage induced by radioiodine. Sialogogues such as lemon candy, vitamin E, lemon juice, and lemon slice reduced such damage significantly (p < 0.0001, p < 0.05, p < 0.10, and p < 0.05, resp.). Parotid gland massage also reduced the salivary damage significantly (p < 0.001). Additionally, vitamin C had some limited effect (p = 0.37), whereas no effect was present in the case of chewing gum (p = 0.99). Conclusion. The review showed that, among nonpharmacological interventions, sialogogues and parotid gland massage had the greatest impact on reducing salivary damage induced by radioiodine therapy of DTC. However, the studies retrieved were limited in number, sample size, strength of evidence, and generalizability. More randomized controlled trials of these methods with multicenter scope and larger sample sizes will provide more systematic and reliable results, allowing more definitive conclusions.
abstract_id: PUBMED:31765488
Does Salivary Function Decrease in Proportion to Radioiodine Dose? Objectives: This study was conducted to investigate the dose-response characteristics of radioiodine on salivary glands and to investigate the mechanism responsible for radioiodine-induced salivary glands toxicity.
Methods: Twenty-four mice were divided into six groups: 0, 0.05, 0.10, 0.20, 0.40, and 0.80 mCi/20 g mouse, administered orally. Mortalities were noted 12 months after radioiodine administration. Body weights, gland weights, salivary lag times, flow rates, and changes in 99m Tc pertechnetate were recorded. Histopathological changes and mRNA expressions were also evaluated, and immunohistochemical analysis and apoptotic assays were performed.
Results: Survival rates, body weights, gland weights, and flow rates decreased, and lag times increased on increasing radioiodine dose. Animals administered radioiodine showed acinar atrophy, striated duct dilations, and lymphocytic infiltration in glands and irregular destruction of epithelial surfaces of tongue. The uptake and excretion of 99m Tc pertechnetate were impaired by radioiodine. Immunohistochemical analysis showed that numbers of salivary epithelial, myoepithelial, and endothelial cells decreased and that numbers of ductal cells increased with radioiodine dose. Oxidative stress biomarker levels increased; reactive oxygen species scavenger levels decreased; and numbers of apoptotic cells increased in animals exposed to higher radioiodine doses.
Conclusion: These dose-related, long-term effects on salivary gland should be taken into account when determining radioiodine doses.
Level Of Evidence: NA Laryngoscope, 130:2173-2178, 2020.
abstract_id: PUBMED:34147311
Progressive changes in the major salivary gland after radioiodine therapy for differentiated thyroid cancer: a single-center retrospective ultrasound cohort study. This study aimed to determine the prevalence of radioiodine-induced salivary gland damage by evaluating progressive changes in salivary glands using ultrasound. Four hundred forty-six patients with differentiated thyroid carcinoma who underwent total or near-total thyroidectomy and postoperative radioiodine therapy were retrospectively reviewed. From the first to the fifth follow-up visits, the positive rate of major salivary gland changes on ultrasound gradually increased from 2.0% to 33.0% (P<0.001) and possibly stabilized at the fifth visit (approximately 36 months). The first positive result was detected at an average of 20.78±8.72 months. Only 21 of the 161 positive cases eventually achieved negative ultrasound results (Fisher's test, P<0.001), and these 21 cases showed only a coarse echotexture. In conclusion, ultrasound changes appeared late, and most of these changes were not reversed.
abstract_id: PUBMED:27339871
Effects of Radioiodine Treatment on Salivary Gland Function in Patients with Differentiated Thyroid Carcinoma: A Prospective Study. Complaints of a dry mouth (xerostomia) and sialoadenitis are frequent side effects of radioiodine treatment in differentiated thyroid cancer (DTC) patients. However, detailed prospective data on alterations in salivary gland functioning after radioiodine treatment (131I) are scarce. Therefore, the primary aim of this study was to prospectively assess the effect of high-activity radioiodine treatment on stimulated whole saliva flow rate. Secondary aims were to study unstimulated whole and stimulated glandular (i.e., parotid and submandibular) saliva flow rate and composition alterations, development of xerostomia, characteristics of patients at risk for salivary gland dysfunction, and whether radioiodine uptake in salivary glands on diagnostic scans correlates to flow rate alterations.
Methods: In a multicenter prospective study, whole and glandular saliva were collected both before and 5 mo after radioiodine treatment. Furthermore, patients completed the validated xerostomia inventory. Alterations in salivary flow rate, composition, and xerostomia inventory score were analyzed. Salivary gland radioiodine uptake on diagnostic scans was correlated with saliva flow rate changes after radioiodine treatment.
Results: Sixty-seven patients (mean age ± SD, 48 ± 17 y; 63% women, 84% underwent ablation therapy) completed both study visits. Stimulated whole saliva flow rate decreased after ablation therapy (from 0.92 [interquartile range, 0.74-1.25] to 0.80 [interquartile range, 0.58-1.18] mL/min, P = 0.003), as well as unstimulated whole- and stimulated glandular flow rates (P < 0.05). The concentration of salivary electrolytes was similar at both study visits, whereas the output of proteins, especially amylase (P < 0.05), was decreased. The subjective feeling of dry mouth increased (P = 0.001). Alterations in saliva flow rate were not associated with semiquantitatively assessed radioiodine uptake in salivary glands on diagnostic scans. For the small cohort of patients undergoing repeated radioiodine therapy, we could not demonstrate alterations in salivary parameters.
Conclusion: We prospectively showed that salivary gland function is affected after high-activity radioiodine ablation therapy in patients with DTC. Therefore, more emphasis should be placed on salivary gland dysfunction during follow-up for DTC patients receiving high-activity radioiodine treatment.
abstract_id: PUBMED:16710103
Quantitative evaluation of salivary gland dysfunction after radioiodine therapy using salivary gland scintigraphy. Aim: The most frequent non-thyroidal complication of high-dose (131)I therapy for thyroid carcinoma is salivary gland dysfunction, which may be transient or permanent. In this study, we assessed radioiodine-induced permanent salivary gland dysfunction using quantitative salivary gland scintigraphy.
Methods: Salivary scintigraphy was performed with (99m)Tc-pertechnetate on 50 thyroid carcinoma patients who had been given radioiodine for thyroid ablation; 20 normal subjects were imaged as the control population. Dynamic scintigraphy was performed and time-activity curves for four major salivary glands were generated. The glandular functional parameters maximum secretion, time at maximum count and uptake ratio of the parotid and submandibular glands were calculated. Correlation of the administered dose and subjective symptoms with findings of salivary gland scintigraphy was evaluated.
Results: The maximum secretion and uptake ratio were decreased in 46% and 42% of patients who received radioiodine therapy, respectively. Salivary gland dysfunction correlated well with the administered dose. The parotid glands were more affected than the submandibular glands. Fifty-two per cent of patients were symptomatic, 69.23% of whom showed salivary gland dysfunction.
Conclusion: Parenchymal damage to the salivary glands induced by radioactive iodine treatment can be evaluated by salivary gland scintigraphy. The impairment was worse in parotid glands and increased with the total dose. The maximum secretion and uptake ratio were found to be sufficiently sensitive to distinguish the severity of the damage.
Answer: The use of lemon candy after radioiodine therapy for thyroid cancer does not appear to decrease salivary gland damage. In fact, one study found that an early start of sucking lemon candy may induce a significant increase in salivary gland damage. The study compared two groups of patients with differentiated thyroid cancer who underwent radioiodine therapy. Group A started sucking lemon candy within 1 hour after radioiodine ingestion, while Group B was asked to withhold lemon candy until 24 hours after ingestion. The results showed higher incidences of sialoadenitis, hypogeusia or taste loss, and dry mouth with or without repeated sialadenitis in Group A compared to Group B. Permanent xerostomia also occurred more frequently in Group A. The conclusion was that lemon candy should not be given until 24 hours after radioiodine therapy (PUBMED:15695785).
Other studies have explored different nonpharmacological methods to minimize salivary gland damage after radioiodine therapy. For instance, a systematic review of clinical studies found that sialogogues such as lemon candy, vitamin E, lemon juice, and lemon slice, as well as parotid gland massage, significantly reduced salivary gland damage. However, the review also noted that the studies were limited in number, sample size, strength of evidence, and generalizability (PUBMED:27446226). Another study suggested that salivary gland radioiodine retention on post-ablation scintigraphy is a good predictor of functional impairment of the parotid glands after high-dose radioiodine therapy (PUBMED:25177377).
In summary, the evidence suggests that lemon candy, when started early after radioiodine therapy, may actually increase salivary gland damage rather than decrease it. Therefore, it should not be used immediately after radioiodine ingestion for thyroid cancer treatment. Other methods to reduce salivary gland damage are being studied, but more research is needed to establish effective interventions. |
Instruction: Clinical need-directed blood tests: a step in saving the NHS?
Abstracts:
abstract_id: PUBMED:31755796
Great clinical variability of Nance Horan syndrome due to deleterious NHS mutations in two unrelated Spanish families. Background: Nance-Horan syndrome (NHS) is a rare X-linked congenital disorder caused by mutations in the NHS gene. Clinical manifestations include congenital cataracts, facial and dental dysmorphism and, in some cases, intellectual disability. The aim of the present work was to identify the genetic cause of this disease in two unrelated Spanish NHS families and to determine the relative involvement of this gene in the pathogenesis. Materials and methods: Four members of a two-generation family, three males and one female (Family 1), and seven members of a three-generation family, two males and five females (Family 2), were recruited, and their index cases were screened for mutations in the NHS gene and 26 genes related to ocular congenital anomalies by NGS (Next Generation Sequencing). Results: Two pathogenic variants were found in the NHS gene: a nonsense mutation (p.Arg373X) and a frameshift mutation (p.His669ProfsX5). These mutations were found in the two unrelated NHS families, which showed different clinical manifestations. Conclusions: In the present study, we identified two truncation mutations (one of them novel) in the NHS gene, associated with NHS. Given the wide clinical variability of this syndrome, NHS may be difficult to detect in individuals with subtle clinical manifestations or when congenital cataracts are the primary clinical manifestation, which suggests that it may be underdiagnosed. A combination of genetic studies and clinical examinations is essential for optimizing the clinical diagnosis.
abstract_id: PUBMED:37221585
Nance-Horan Syndrome: characterization of dental, clinical and molecular features in three new families. Background: Nance-Horan syndrome (NHS; MIM 302,350) is an extremely rare X-linked dominant disease characterized by ocular and dental anomalies, intellectual disability, and facial dysmorphic features.
Case Presentation: We report on five affected males and three carrier females from three unrelated NHS families. In Family 1, the index patient (P1), who showed bilateral cataracts, iris heterochromia, microcornea, mild intellectual disability, and dental findings including Hutchinson incisors, supernumerary teeth, and bud-shaped molars, received a clinical diagnosis of NHS, and targeted NHS gene sequencing revealed a novel pathogenic variant, c.2416 C > T; p.(Gln806*). In Family 2, the index patient (P2), presenting with global developmental delay, microphthalmia, cataracts, and a ventricular septal defect, underwent SNP array testing, and a novel deletion encompassing 22 genes including the NHS gene was detected. In Family 3, two half-brothers (P3 and P4) and their maternal uncle (P5) had congenital cataracts and mild to moderate intellectual deficiency. P3 also had autistic and psychobehavioral features. Dental findings included notched incisors, bud-shaped permanent molars, and supernumerary molars. Duo-WES analysis of the half-brothers showed a novel hemizygous deletion, c.1867delC; p.(Gln623ArgfsTer26).
Conclusions: Dental professionals can be the first-line specialists involved in the diagnosis of NHS due to its distinct dental findings. Our findings broaden the spectrum of genetic etiopathogenesis associated with NHS and aim to raise awareness among dental professionals.
abstract_id: PUBMED:34729747
Identification of a novel variant of NHS gene underlying Nance-Horan syndrome Objective: To explore the genetic basis for a pedigree affected with Nance-Horan syndrome.
Methods: The clinical manifestations of the patients were analyzed. Genomic DNA was extracted from peripheral blood samples of the pedigree members and 100 unrelated healthy controls. A panel of genes for congenital cataract was subjected to next-generation sequencing (NGS), and the candidate variant was verified by Sanger sequencing and bioinformatic analysis based on the guidelines of the American College of Medical Genetics and Genomics (ACMG). mRNA expression was determined by reverse transcriptase-PCR (RT-PCR). Linkage analysis based on short tandem repeats was carried out to confirm the consanguinity.
Results: A small insertional variant, c.766dupC (p.Leu256Profs*21), of the NHS gene was identified in the proband and his affected mother, but not in unaffected members or the 100 healthy controls. The variant was not reported in the Human Gene Mutation Database (HGMD) or other databases. Based on the ACMG guidelines, the variant is predicted to be pathogenic (PVS1+PM2+PM6+PP4).
Conclusion: The novel variant c.766dupC of the NHS gene probably underlies the X-linked dominant Nance-Horan syndrome in this pedigree.
abstract_id: PUBMED:22229851
Phenotype-genotype correlation in potential female carriers of X-linked developmental cataract (Nance-Horan syndrome). Purpose: To correlate clinical examination with underlying genotype in asymptomatic females who are potential carriers of X-linked developmental cataract (Nance-Horan syndrome).
Methods: An ophthalmologist blind to the pedigree performed comprehensive ophthalmic examination for 16 available family members (two affected and six asymptomatic females, five affected and three asymptomatic males). Facial features were also noted. Venous blood was collected for sequencing of the gene NHS.
Results: All seven affected family members had congenital or infantile cataract and facial dysmorphism (long face, bulbous nose, abnormal dentition). The six asymptomatic females ranged in age from 4-35 years old. Four had posterior Y-suture centered lens opacities; these four also exhibited the facial dysmorphism of the seven affected family members. The fifth asymptomatic girl had scattered fine punctate lens opacities (not centered on the Y-suture) while the sixth had clear lenses, and neither exhibited the facial dysmorphism. A novel NHS mutation (p.Lys744AsnfsX15 [c.2232delG]) was found in the seven patients with congenital or infantile cataract. This mutation was also present in the four asymptomatic girls with Y-centered lens opacities but not in the other two asymptomatic girls or in the three asymptomatic males (who had clear lenses).
Conclusions: Lens opacities centered around the posterior Y-suture in the context of certain facial features were sensitive and specific clinical signs of carrier status for NHS mutation in asymptomatic females. Lens opacities that did not have this characteristic morphology in a suspected female carrier were not a carrier sign, even in the context of her affected family members.
abstract_id: PUBMED:12504852
Cloning of mouse Cited4, a member of the CITED family p300/CBP-binding transcriptional coactivators: induced expression in mammary epithelial cells. The CITED family proteins bind to CBP/p300 transcriptional integrators through their conserved C-terminal acidic domain and function as coactivators. The 21-kDa mouse Cited4 protein, a novel member of the CITED family, interacted with CBP/p300 as well as isoforms of the TFAP2 transcription factor, coactivating TFAP2-dependent transcription. The cited4 gene consisted of only a single exon located on chromosome 4 at 56.5-56.8 cM flanked by marker genes kcnq4 and scml1. Expression of Cited4 protein was strong and selective in embryonic hematopoietic tissues and endothelial cells. In adult animals, Cited4 showed strong milk cycle-dependent induction in pregnant and lactating mammary epithelial cells. Strong induction of Cited4 expression was also observed in SCp2 mouse mammary epithelial cells during their prolactin-dependent in vitro differentiation. These results implied possible roles for Cited4 in regulation of gene expression during development and differentiation of blood cells, endothelial cells, and mammary epithelial cells.
abstract_id: PUBMED:32303606
Low grade mosaicism in hereditary haemorrhagic telangiectasia identified by bidirectional whole genome sequencing reads through the 100,000 Genomes Project clinical diagnostic pipeline. N/A
abstract_id: PUBMED:17451191
Prenatal detection of congenital bilateral cataract leading to the diagnosis of Nance-Horan syndrome in the extended family. Objectives: To describe a family in which it was possible to perform prenatal diagnosis of Nance-Horan Syndrome (NHS).
Methods: The fetus was evaluated by 2nd trimester ultrasound. The family underwent genetic counseling and ophthalmologic evaluation. The NHS gene was sequenced.
Results: Ultrasound demonstrated fetal bilateral congenital cataract. Clinical evaluation revealed other family members with cataract, leading to the diagnosis of NHS in the family. Sequencing confirmed a frameshift mutation (3908del11bp) in the NHS gene.
Conclusion: Evaluation of prenatally diagnosed congenital cataract should include a multidisciplinary approach, combining experience and input from sonographer, clinical geneticist, ophthalmologist, and molecular geneticist.
abstract_id: PUBMED:25091991
Identification of a novel mutation in a Chinese family with Nance-Horan syndrome by whole exome sequencing. Objective: Nance-Horan syndrome (NHS) is a rare X-linked disorder characterized by congenital nuclear cataracts, dental anomalies, and craniofacial dysmorphisms. Mental retardation was present in about 30% of the reported cases. The purpose of this study was to investigate the genetic and clinical features of NHS in a Chinese family.
Methods: Whole exome sequencing analysis was performed on DNA from an affected male to scan for candidate mutations on the X-chromosome. Sanger sequencing was used to verify these candidate mutations in the whole family. Clinical and ophthalmological examinations were performed on all members of the family.
Results: A combination of exome sequencing and Sanger sequencing revealed a nonsense mutation c.322G>T (E108X) in exon 1 of NHS gene, co-segregating with the disease in the family. The nonsense mutation led to the conversion of glutamic acid to a stop codon (E108X), resulting in truncation of the NHS protein. Multiple sequence alignments showed that codon 108, where the mutation (c.322G>T) occurred, was located within a phylogenetically conserved region. The clinical features in all affected males and female carriers are described in detail.
Conclusions: We report a nonsense mutation c.322G>T (E108X) in a Chinese family with NHS. Our findings broaden the spectrum of NHS mutations and provide molecular insight into future NHS clinical genetic diagnosis.
abstract_id: PUBMED:29402928
A novel small deletion in the NHS gene associated with Nance-Horan syndrome. Nance-Horan syndrome is a rare X-linked recessive inherited disease with clinical features including severe bilateral congenital cataracts, characteristic facial and dental abnormalities. Data from Chinese Nance-Horan syndrome patients are limited. We assessed the clinical manifestations of a Chinese Nance-Horan syndrome pedigree and identified the genetic defect. Genetic analysis showed that 3 affected males carried a novel small deletion in NHS gene, c.263_266delCGTC (p.Ala89TrpfsTer106), and 2 female carriers were heterozygous for the same variant. All 3 affected males presented with typical Nance-Horan syndrome features. One female carrier displayed lens opacities centered on the posterior Y-suture in both eyes, as well as mild dental abnormalities. We recorded the clinical features of a Chinese Nance-Horan syndrome family and broadened the spectrum of mutations in the NHS gene.
abstract_id: PUBMED:17417607
Identification of three novel NHS mutations in families with Nance-Horan syndrome. Purpose: Nance-Horan Syndrome (NHS) is an infrequent and often overlooked X-linked disorder characterized by dense congenital cataracts, microphthalmia, and dental abnormalities. The syndrome is caused by mutations in the NHS gene, whose function is not known. The purpose of this study was to identify the frequency and distribution of NHS gene mutations and compare genotype with Nance-Horan phenotype in five North American NHS families.
Methods: Genomic DNA was isolated from white blood cells from NHS patients and family members. The NHS gene coding region and its splice site donor and acceptor regions were amplified from genomic DNA by PCR, and the amplicons were sequenced directly.
Results: We identified three unique NHS coding region mutations in these NHS families.
Conclusions: This report extends the number of unique identified NHS mutations to 14.
Answer: The provided abstracts do not contain information relevant to the question about "Clinical need directed blood tests: a step in saving the NHS?" None of the abstracts discuss blood tests, their clinical need, or their impact on the National Health Service (NHS) in terms of cost savings or efficiency improvements. Instead, the abstracts focus on the genetic and clinical characterization of Nance-Horan Syndrome (NHS), which is a rare X-linked disorder characterized by congenital cataracts, dental anomalies, and sometimes intellectual disability (PUBMED:31755796, PUBMED:37221585, PUBMED:34729747, PUBMED:22229851, PUBMED:12504852, PUBMED:32303606, PUBMED:17451191, PUBMED:25091991, PUBMED:29402928, PUBMED:17417607).
The question seems to be about healthcare policy or management, specifically about the potential benefits of implementing clinical need-directed blood tests within the healthcare system of the United Kingdom's National Health Service. However, the provided abstracts are focused on genetic research and clinical case studies related to a specific genetic disorder, and thus do not provide information that would be useful in answering the question about blood tests and the NHS. |
Instruction: Can radiation exposure to the surgeon be reduced with freehand pedicle screw fixation technique in pediatric spinal deformity correction?
Abstracts:
abstract_id: PUBMED:24365904
Can radiation exposure to the surgeon be reduced with freehand pedicle screw fixation technique in pediatric spinal deformity correction? A prospective multicenter study. Study Design: Prospective multicenter study of patients who underwent pediatric spinal deformity correction with posterior spinal fusion and instrumentation.
Objective: To quantify radiation exposure to the surgeon during pedicle screw fixation using the freehand technique in pediatric spinal deformity surgery.
Summary Of Background Data: Pedicle screw placement in the thoracic and lumbar spine for spinal deformity is technically demanding and involves radiation exposure. Many experienced spinal surgeons use the freehand technique for pedicle screw fixation in spinal deformity surgery. There are no studies analyzing radiation exposure to the surgeon with the freehand pedicle screw fixation technique.
Methods: A prospective multicenter study was designed to evaluate radiation exposure to the operating spinal surgeon using the freehand pedicle screw fixation technique in pediatric spinal deformity correction. All of the operating surgeons placed a gamma radiation dosimeter on their chest outside the lead apron during surgery. Surgeons placed pedicle screws in the pediatric spinal deformity using the freehand technique. We included patients who had undergone correction with posterior spinal fusion and instrumentation using all-pedicle-screw constructs.
Results: We analyzed 125 patients with pediatric spinal deformity who were operated on between 2008 and 2012. The average fluoroscopic time was 40.5 ± 21 seconds. The overall average fluoroscopic time for placement of a single pedicle screw and per fixation level were 2.6 ± 1.7 seconds and 3.9 ± 2.5 seconds, respectively. In each surgery, the recorded radiation exposure to the surgeon was less than the minimum reportable dose (<0.010 mSv) with an average of 0.0005 ± 0.00013 mSv per surgery.
Conclusion: The use of the freehand technique for pedicle screw fixation in spinal deformity correction requires minimal fluoroscopy, thereby decreasing radiation exposure to the surgeon and patient.
Level Of Evidence: 4.
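To put the per-case surgeon dose of roughly 0.0005 mSv in perspective, a rough extrapolation to an annual caseload can be set against the commonly cited occupational whole-body limit of about 20 mSv per year. The caseload below is a hypothetical assumption, not a figure from the study, so the result is illustrative only.

```python
# Rough, illustrative extrapolation: the annual caseload is a hypothetical
# assumption, not a figure from the study.
dose_per_case_msv = 0.0005      # average surgeon dose per surgery (from the abstract)
cases_per_year = 100            # hypothetical deformity caseload
annual_limit_msv = 20.0         # commonly cited occupational whole-body limit

annual_dose = dose_per_case_msv * cases_per_year
print(f"Estimated annual dose: {annual_dose:.3f} mSv "
      f"({annual_dose / annual_limit_msv:.3%} of a 20 mSv/yr limit)")
```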
abstract_id: PUBMED:30069524
Utilization of the 3D-printed spine model for freehand pedicle screw placement in complex spinal deformity correction. Background: We aim to demonstrate the safety and efficacy of utilizing 3D-printed spine models to facilitate freehand pedicle screw placement in complex spinal deformity correction. Currently there are no data on using 3D-printed models for freehand pedicle screw placement in spinal deformity correction.
Methods: All patients undergoing spinal deformity correction over a 16-month period (September 2015 - December 2016) at the Spine Hospital of Columbia University Medical Center by the senior surgeon were reviewed. 3D-printed spine models were used to facilitate intraoperative freehand pedicle screw placement in patients with severe spinal deformities. Intraoperative O-arm imaging was obtained after pedicle screw placement in all patients. Screws were graded as intrapedicular, <2 mm breach, 2-4 mm breach, and >4 mm breach; anterior breaches >4 mm were also recorded. Screw accuracy was compared to a historical cohort (not using 3D-printed models) using SPSS 23.0 (Chicago, IL, USA).
Results: A total of 513 freehand pedicle screws were placed from T1 to S1 in 23 patients. Overall, 494 screws (96.3%) were placed in acceptable positions according to the pre-operative plan, with no statistically significant difference (P=0.99) compared to a historical cohort with less severe deformities. In total, 84.2% of screws were intrapedicular or had a breach of <2 mm; among the 81 screws (15.8%) with a >2 mm breach, 67 were lateral breaches (most were intended juxtapedicular placements), whereas 14 were medial breaches. There were 11 screws (2.1%) that required repositioning due to pedicle violation, and eight screws (1.6%) had a >4 mm anterior breach and required shortening. There were no neuromonitoring changes or any other complications directly or indirectly related to freehand pedicle screw placement.
Conclusions: The 3D-printed spinal model can make freehand pedicle screw placement safer in severe spinal deformity cases, with acceptable accuracy and no neurological or vascular complications.
abstract_id: PUBMED:36208012
SAP Principle Guided Free Hand Technique: A Secret for T1 to S1 Pedicle Screw Placement. Objective: Existing freehand techniques of screw placement have mainly emphasized various entry points and complex trajectory references. The aim of this study is to illustrate a standardized and reliable freehand technique of pedicle screw insertion for open pedicle screw fixation, with a universal entry point and a stereoscopic trajectory reference system, and to report the results from a single surgeon's clinical experience with the technique.
Method: In this study, the author retrospectively reviewed a total of 200 consecutive patients who had undergone open freehand pedicle screw fixation with the Superior Articular Process (SAP) technique from January 2019 to May 2020. To assess accuracy and safety, all 200 cases underwent postoperative X-ray, while 33 cases, including spinal deformity, infection, and tumor cases, received an additional CT scan. Screw accuracy was analyzed via a CT-based classification system with Student's t test.
Results: A total of 1126 screws were placed from T1 to S1 with the SAP-guided freehand technique, and the majority were confirmed safe on X-ray without the need for a CT scan. A total of 316 screws in deformity, infection, or tumor cases underwent additional CT scanning, with 95.5% (189 of 198 screws) accuracy in the thoracic group and 94.9% (112 of 118 screws) in the lumbar group. Accuracy was 90.5% (114 of 126 screws) in the deformity group and 95.8% (182 of 190 screws) in the non-deformity group. All perforation cases were rated Grade B (<2 mm), without significant difference between the medial and the lateral (p < 0.05). No cases with significant neurological deficits were detected. The mean number of intraoperative X-ray shots was 0.73 per screw.
Conclusion: SAP guidance is a reliable freehand technique for thoracic and lumbar pedicle screw instrumentation. It allows accurate and safe screw insertion in both non-deformity and deformity cases with less radiation exposure.
abstract_id: PUBMED:32447530
Intraoperative radiation exposure to patients in idiopathic scoliosis surgery with freehand insertion technique of pedicle screws and comparison to navigation techniques. Purpose: In surgical correction of scoliosis with pedicle screw dual-rod systems, the frequently used freehand technique of screw positioning is challenging due to the 3D deformity. Screw malposition can be associated with serious complications. Image-guided technologies are already available to improve the accuracy of screw positioning and decrease radiation exposure to the surgeon. This study was conducted to measure intraoperative radiation exposure to patients with the freehand technique, evaluate screw-related complications, and compare radiation values with published studies using navigation techniques.
Methods: Retrospective analysis of prospectively collected data of 73 patients with idiopathic scoliosis who underwent surgical correction with a pedicle screw dual-rod system. Evaluated parameters were age, effective radiation dose (ED), fluoroscopy time, number of fused segments, correction, and complications. Parameters were compared with regard to single thoracic curves (SC) versus double thoracic and lumbar curves (DC), adolescent (10-18 years) versus adult (>18 years) idiopathic scoliosis, and length of instrumentation. ED was compared with values for navigation from an online database.
Results: Average age was 21.0 ± 9.7 years, ED was 0.17 ± 0.1 mSv, fluoroscopy time was 24.1 ± 18.6 s, and there were 9.5 ± 1.9 fused segments. Average correction for SC was 75.7%, and for DC it was 69.9% (thoracic) and 76.2% (lumbar). There were no screw-related complications. ED was significantly lower for SC versus DC (p < 0.01) and for short versus long fusions (p < 0.01), with no significant difference by age (p = 0.1).
Conclusion: Compared to navigation procedures, freehand positioning of pedicle screws in experienced hands is a safe and effective method for surgical correction of idiopathic scoliosis, with a significant decrease in radiation exposure to patients.
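The comparison in the Conclusion can be made concrete with simple arithmetic: scaling the measured mean effective dose of 0.17 mSv by the reported 6.5- to 8.8-times factor gives the approximate patient exposure implied for navigated procedures. The derived values are back-of-the-envelope estimates, not measurements from the study.

```python
# Back-of-the-envelope illustration of the reported comparison; the navigated
# doses are derived by scaling, not measured values from this study.
freehand_ed_msv = 0.17
low_factor, high_factor = 6.5, 8.8

nav_low = freehand_ed_msv * low_factor    # ~1.1 mSv
nav_high = freehand_ed_msv * high_factor  # ~1.5 mSv
print(f"Implied navigated patient dose: {nav_low:.2f}-{nav_high:.2f} mSv "
      f"vs. {freehand_ed_msv:.2f} mSv freehand")
```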
abstract_id: PUBMED:31258575
Freehand pedicle screw fixation: A safe recipe for dorsal, lumbar and sacral spine. Objective: To determine outcome of freehand pedicle screw fixation for dorsal, lumbar and sacral fractures at a tertiary care centre in the developing world.
Methods: A retrospective review was performed of 150 consecutive patients who underwent pedicle screw fixation from January 1, 2012, to December 31, 2017. A total of 751 pedicle screws were placed. The incidence and extent of cortical breach by misplaced pedicle screws were determined by review of intra-operative and post-operative radiographs and/or computed tomography.
Results: Among the 751 freehand-placed pedicle screws, four screws (0.53%) were repositioned due to a misdirected trajectory towards the disc space. Six screws (0.79%) were identified as having caused a moderate breach, while four screws (0.53%) caused a severe breach. There was no occurrence of iatrogenic nerve root damage or violation of the spinal canal.
Conclusion: Freehand pedicle screw placement based on external landmarks showed remarkable safety and accuracy in our center. The authors conclude that assiduous adherence to technique and preoperative planning is vital to success.
abstract_id: PUBMED:33558972
3D-printed drill guide template, a promising tool to improve pedicle screw placement accuracy in spinal deformity surgery: A systematic review and meta-analysis. Purpose: This study aimed to compare the pedicle screw placement accuracy and surgical outcomes between 3D-printed (3DP) drill guide template technique and freehand technique in spinal deformity surgery.
Methods: A comprehensive systematic literature search of databases (PubMed, Embase, Cochrane Library, and Web of Science) was conducted. The meta-analysis compared the pedicle screw placement accuracy and other important surgical outcomes between the two techniques.
Results: A total of seven studies were included in the meta-analysis, comprising 87 patients with 1384 pedicle screws placed by 3DP drill guide templates and 88 patients with 1392 pedicle screws placed by the freehand technique. The meta-analysis revealed that the 3DP template technique was significantly more accurate than the freehand technique for placing pedicle screws, with a higher rate of excellently placed screws (OR 2.22, P < 0.001) and qualifiedly placed screws (OR 3.66, P < 0.001), and a lower rate of poorly placed screws (OR 0.23, P < 0.001). The mean placement time per screw (WMD -1.99, P < 0.05), total screw placement time (WMD -27.86, P < 0.001), and blood loss (WMD -104.58, P < 0.05) were significantly reduced in the 3DP template group compared with the freehand group. Moreover, there was no statistically significant difference between the two techniques in terms of operation time and the correction rate of the main bend curve.
Conclusions: This study demonstrated that the 3DP drill guide template is a promising tool for assisting pedicle screw placement in spinal deformity surgery and deserves further promotion.
abstract_id: PUBMED:31493597
Freehand C2 Pedicle Screw Placement: Surgical Anatomy and Operative Technique. We present a surgical video demonstrating the anatomy and technique of freehand C2 pedicle screw placement using a cadaveric specimen and 3-dimensional simulation software. C2 pedicle screws have been shown to augment cervical constructs and provide increased biomechanical stability compared with pars screws due to the increased length and bony purchase of pedicle screws within the pedicle and vertebral body.1 The presence of vertebral artery variations within the transverse foramen may preclude pedicle screw placement, and these should be identified on preoperative imaging. The C2 pedicle can be directly palpated at the time of screw placement, which aids screw placement in cases of deformity or trauma. A freehand technique without the use of computed tomography scan guidance or intraoperative fluoroscopy decreases radiation exposure for the operator and patient and has been shown to be safe for patient-related outcomes.2-5 Complete exposure of the C2 posterior elements is key to identifying the pedicle. The trajectory is based on direct visualization of the medial and superior pedicle borders to avoid lateral or inferior breaches into the transverse foramen. A curved probe is used for access into the vertebral body, respecting the outer cortical walls of the pedicle. The intraosseous position is confirmed with a ball-tipped probe. Fluoroscopy should be performed after screw placement to confirm proper position. By accomplishing proper exposure and understanding the anatomy of the C2 pedicle, the placement of C2 pedicle screws using a freehand technique is a safe and efficient technique for high cervical fixation.
abstract_id: PUBMED:31001014
Is Freehand Technique of Pedicle Screw Insertion in Thoracolumbar Spine Safe and Accurate? Assessment of 250 Screws. Background: Pedicle screw fixation is one of the most widely used procedures for instrumentation and stabilization of the thoracic and lumbar spine. It has the advantage of stabilizing all three columns through a single approach. Various assistive techniques are available to place pedicle screws more accurately, but at the expense of increased radiation exposure, prolonged surgical duration, and cost.
Objective: The objective of this study is to determine the accuracy and safety of pedicle screw fixation in the thoracolumbar spine using freehand surgical technique.
Materials And Methods: We evaluated all patients who underwent pedicle screw fixation of the thoracolumbar spine for various ailments at our institute from January 2016 to December 2017 with postoperative computed tomography scan for placement accuracy. We used Gertzbein classification to grade pedicle breaches. Screw penetration more than 4 mm was taken as critical and those less than that were classified as noncritical.
Results: A total of 256 screws inserted in T1-L5 vertebrae from 40 consecutive patients were assessed; six screws were excluded according to the selection criteria. The mean age was 39 years. Trauma (36 patients) was the most common indication for pedicle screw fixation, followed by degenerative disease (2 patients) and tumour (2 patients). A total of ten pedicle screw breaches (4%) were identified in eight patients. Among these, three critical breaches (1.2%) occurred in two patients and required revision. The remaining seven breaches were noncritical and were kept under close observation and follow-up.
Conclusion: Pedicle screws have become the workhorse of posterior stabilization of the spine. Based on external anatomy and landmarks alone, the freehand technique for pedicle screw fixation can be performed with acceptable safety and accuracy, avoiding cumulative radiation exposure and prolonged operative time.
abstract_id: PUBMED:35719818
Screw Insertion Time, Fluoroscopy Time, and Operation Time for Robotic-Assisted Lumbar Pedicle Screw Placement Compared With Freehand Technique. Introduction: The purpose of this study was to clarify the superiority of robotic-assisted lumbar pedicle screw placement in terms of screw insertion time, fluoroscopy time, and operation time. Methods: The subjects were 46 patients who underwent posterior lumbar interbody fusion with an open procedure for lumbar degenerative disease from April 2021 to February 2022. The robot group comprised 29 cases of screw insertion using a spine robotic system (Mazor X Stealth Edition, Medtronic Inc., Dublin, Ireland). The freehand group comprised 17 cases of screw insertion with the freehand technique under conventional C-arm image guidance. Screw insertion time, fluoroscopy time, and operation time were compared between the robot and freehand groups. Results: The screw insertion time did not differ significantly between the two groups (robot group: 179.0 ± 65.2 sec; freehand group: 164.2 ± 83.4 sec; p = 0.507). The fluoroscopy time was significantly shorter in the robot group (robot group: 28.3 ± 25.8 sec; freehand group: 67.5 ± 72.8 sec; p = 0.011). The fluoroscopy time per segment was also significantly shorter in the robot group (robot group: 17.8 ± 23.0 sec; freehand group: 60.2 ± 74.8 sec; p = 0.007). The operation time was significantly longer in the robot group (robot group: 249.6 ± 72.5 min; freehand group: 195.8 ± 60.1 min; p = 0.013), but the operation time per segment did not differ significantly between the two groups (robot group: 144.1 ± 39.0 min; freehand group: 159.7 ± 34.4 min; p = 0.477). Conclusions: The screw insertion time and operation time per segment were similar with the spine robotic system compared to the freehand technique; however, the fluoroscopy time was shorter. The fluoroscopy time per segment in the robot group was 29.6% of that in the freehand group using the C-arm. The surgeon's radiation exposure is thought to be decreased since the spine robotic system shortens the fluoroscopy time.
abstract_id: PUBMED:27293283
A comparative study of pedicle screw fixation in dorsolumbar spine by freehand versus image-assisted technique: A cadaveric study. Background: New and expensive technology such as three-dimensional computer assisted surgery is being used for pedicle screw fixation in dorsolumbar spine. Their availability, expenses and amount of radiation exposure are issues in a developing country. On the contrary, freehand technique of pedicle screw placement utilizes anatomic landmarks and tactile palpation without fluoroscopy or navigation to place pedicle screws. The purpose of this study was to analyze and compare the accuracy of freehand and image-assisted technique to place pedicle screws in the dorsolumbar spine of cadavers by an experienced surgeon and a resident. Evaluation was done using dissection of pedicle and computed tomography (CT) imaging.
Materials And Methods: Ten cadaveric dorsolumbar spines were exposed through a posterior approach. Titanium pedicle screws were inserted from D5 to L5 vertebrae by the freehand and image-assisted techniques on either side by an experienced surgeon and a resident. CT was obtained, and a blinded radiologist reviewed the imaging. The spines were then dissected for macroscopic examination. Screws with evidence of cortical perforation of more than 2 mm on CT were considered a significant breach.
Results: A total of 260 pedicle screws were placed. The surgeon and the resident placed 130 screws each, of which 65 were placed by the freehand technique and 65 by the image-assisted technique. On CT evaluation, the resident had rates of 7.69% significant medial and 10.76% significant lateral breach with the freehand technique, versus 3.07% significant medial and 9.23% significant lateral breach with the image-assisted technique. The expert surgeon had rates of 6.15% significant medial and 1.53% significant lateral breach with the freehand technique, versus 3.07% significant medial and 6.15% significant lateral breach with the image-assisted technique.
Conclusion: The freehand technique is as good as the image-assisted technique. Under appropriate supervision, residents can safely learn to place freehand pedicle screws with an acceptable violation rate.
Answer: Yes, radiation exposure to the surgeon can be reduced with the freehand pedicle screw fixation technique in pediatric spinal deformity correction. A prospective multicenter study found that the use of the freehand technique for pedicle screw fixation in spinal deformity correction required minimal fluoroscopic use, thereby decreasing radiation exposure to both the surgeon and the patient; the recorded radiation exposure to the surgeon was less than the minimum reportable dose, averaging 0.0005 ± 0.00013 mSv per surgery (PUBMED:24365904). Additionally, another study demonstrated that the freehand technique for pedicle screw fixation can be performed with acceptable safety and accuracy while avoiding cumulative radiation exposure and prolonged operative time (PUBMED:31001014). Furthermore, a study comparing robotic-assisted and conventional C-arm-guided lumbar pedicle screw placement found that fluoroscopy time, which correlates with radiation exposure, was significantly longer with C-arm-guided placement, underscoring that limiting reliance on intraoperative fluoroscopy, as landmark-based freehand techniques aim to do, is central to reducing radiation exposure (PUBMED:35719818).
Instruction: Does Down syndrome affect the long-term results of complete atrioventricular septal defect when the defect is repaired during the first year of life?
Abstracts:
abstract_id: PUBMED:15740947
Does Down syndrome affect the long-term results of complete atrioventricular septal defect when the defect is repaired during the first year of life? Background: Down syndrome is known to affect the natural history of complete atrioventricular septal defect. We analyzed whether Down syndrome affects the long-term results of complete atrioventricular septal defect when the defect is repaired during the first year of life.
Methods: Repairs of complete atrioventricular septal defect were performed in 64 infants. Thirty-four infants had Down syndrome, while the other 30 were non-Down patients.
Results: Complete follow-up rate was 95% with mean follow-up period of 99+/-47 months (maximum 169 months) in Down patients and 80+/-64 months (maximum 213 months) in non-Down patients. There was one operative death in each group (mortality rate of 2.9% in Down patients and 3.3% in non-Down patients), and three patients died at the late phase (one in Down patients and two in non-Down patients). Five patients underwent re-operation due to postoperative left atrioventricular valve regurgitation (one in Down patients and four in non-Down patients). Freedom from re-operation for left atrioventricular valve regurgitation and actuarial survival rate at 13 years were 96+/-4 and 94+/-4% in Down patients and 85+/-7 and 90+/-5% in non-Down patients (not significantly different).
Conclusions: Down syndrome does not affect the long-term results of complete atrioventricular septal defect when the defect is repaired during the first year of life.
abstract_id: PUBMED:30590597
Long-term results after surgical repair of atrioventricular septal defect. Objectives: We analysed our 29-year experience of surgical repair of atrioventricular septal defect (AVSD) to define risk factors for mortality and reoperation.
Methods: Between 1988 and 2017, 508 patients received AVSD repair in our institution; 359 patients underwent surgery for complete AVSD, 76 for intermediate AVSD and 73 for partial AVSD. The median age of the patients was 6.1 months (interquartile range 10.3 months), and the median weight was 5.6 kg (interquartile range 3.2 kg). The standard AVSD repair was performed using 2-patch technique (n = 347) and complete cleft closure (n = 496). The results were divided into 2 surgical eras (early era 1986-2004 and late era 2004-2017). Risk factors were analysed to determine the impact of patient age, weight, the presence of trisomy 21 and complex AVSD on mortality and reoperation rate.
Results: In-hospital mortality decreased from 10.2% (n = 26) in the early surgical era to 1.6% (n = 4) in the late surgical era (P < 0.001). Seventy-seven patients required reoperation. Freedom from reoperation was 84.4% after 25 years. The main indication for reoperation was left atrioventricular valve regurgitation (13.8%). The multivariable Cox regression analysis revealed reoperation of the left AV valve, early surgical era, patient age <3.0 months and complex AVSD to be independent risk factors for mortality. Age <3.0 months, complex AVSD and moderate/severe left AV valve regurgitation at discharge predicted reoperation.
Conclusions: AVSD repair can be performed with low mortality and reoperation rate. Age <3 months, complex AVSD and moderate/severe regurgitation of the left AV valve at discharge were predictors for reoperation. Reoperation of the left AV valve was the strongest risk factor for mortality.
abstract_id: PUBMED:34953470
Long-Term Outcome Up To 40 Years after Single Patch Repair of Complete Atrioventricular Septal Defect in Infancy or Childhood. Objectives: Patients with repaired complete atrioventricular septal defect (CAVSD) represent an increasing proportion of grown-ups with congenital heart disease. For repair of CAVSD, the single-patch technique was the first to be employed. This technique requires division of the bridging leaflets; thus, among other issues, long-term function of the atrioventricular valves is of particular concern.
Methods: Between 1978 and 2001, 100 consecutive patients with isolated CAVSD underwent single-patch repair in our institution. Hospital mortality was 11%. Primary endpoints were clinical status, atrioventricular valve function, and freedom from reoperation in the long term. Follow-up was obtained by contacting the patient and/or caregiver, and the referring cardiologist.
Results: Eighty-three patients were eligible for long-term follow-up (mean ± standard deviation 21.0 ± 8.7 years; median 21.5 years; range 2.1-40.0 years after surgical repair). Actual long-term mortality was 3.4%. Quality of life (QoL; self- or caregiver-reported in patients with Down syndrome) was excellent or good in 81%; congestive heart failure, as estimated by New York Heart Association classification, was mild in 16% and moderate in 3.6%. Echocardiography revealed normal systolic left ventricular function in all cases. Regurgitation of the right atrioventricular valve was mild in 48%, mild to moderate in 3.6%, and moderate in 1.2%. The left atrioventricular valve was mildly stenotic in 15% and mild to moderately stenotic in 2%; regurgitation was mild in 54%, mild to moderate in 13%, and moderate in 15% of patients. Freedom from left atrioventricular valve-related reoperation was 95.3%, 92.7%, and 89.3% after 5, 10, and 30 years, respectively. Permanent pacemaker therapy, either as an immediate result of CAVSD repair (n = 7) or as a result of late-onset sick sinus syndrome (n = 5), required up to six reoperations in individual patients. Freedom from pacemaker-related reoperation was 91.4%, 84.4%, and 51.5% after 5, 10, and 30 years, respectively.
Conclusion: Up to 40 years after single-patch repair of CAVSD, clinical status and functional results are promising, particularly in terms of atrioventricular valve function. Permanent pacemaker therapy results in a lifelong need for surgical reinterventions.
abstract_id: PUBMED:2299871
Two-patch repair of complete atrioventricular septal defect in the first year of life. Results and sequential assessment of atrioventricular valve function. Before January 1987, 62 infants underwent two-patch repair of complete (51) or intermediate (11) atrioventricular septal defect at the Royal Children's Hospital, Melbourne. Median age at repair was 4.3 months and median weight was 4.4 kg. Early deaths (3%) were confined to two infants with preoperative respiratory tract infections; a further two patients died during follow-up (late mortality rate 3%). Reoperation for severe postoperative mitral regurgitation was necessary in 10 infants (16%), two of whom subsequently required mitral valve replacement with a prosthesis. Preoperative atrioventricular valve regurgitation was assessed retrospectively in 49 patients from angiography or Doppler echocardiography and was found to be absent or mild in 33 (68%), moderate in 9 (18%), and severe in 7 (14%). At the time of latest review (at a mean of 2.4 years after repair), judged from a combination of clinical and echocardiographic criteria, mitral regurgitation was absent or mild in 49 (84%) of the 58 survivors; none of them had symptomatic regurgitation or were requiring continuing medical treatment. Analysis of sequential atrioventricular valve function in 46 of the 49 patients in whom objective preoperative data were available showed no relationship between the degree of preoperative and postoperative atrioventricular valve regurgitation. Infants without Down's syndrome, however, had a significantly higher reoperation rate for severe postoperative mitral valve regurgitation (50%) than those with Down's syndrome (10%) (p = 0.007). Complete atrioventricular septal defect can be repaired in early infancy with a low mortality rate and good intermediate term results.
abstract_id: PUBMED:32919770
Long-term outcome after early repair of complete atrioventricular septal defect in young infants. Objective: The long-term outcome after repair of complete atrioventricular septal defect in young infants is still not fully understood. The objective of this study was to evaluate data after repair for complete atrioventricular septal defect over a 25-year period to assess survival and identify risk factors for left atrioventricular valve-related reoperations.
Methods: A total of 304 consecutive patients underwent surgical correction for complete atrioventricular septal defect between April 1993 and October 2018. The results for young infants (aged <3 months; n = 55; mean age 1.6 ± 0.6 months) were compared with older infants (aged >3 months; n = 249; mean age, 5.1 ± 5.2 months). Mean follow-up was 13.2 ± 7.8 years (median, 14.0 years; interquartile range, 7.0-20.0). The Kaplan-Meier method was used to assess overall survival and freedom from left atrioventricular valve-related reoperation.
Results: Overall, 30-day mortality was 1.0% (3/304) with no difference between young and older infants (P = 1.0). Overall survival in the total population at 20-year follow-up was 95.1% (±1.3%). Independent risk factors for poor survival were the presence of an additional ventricular septal defect (P = .042), previous coarctation of the aorta (P < .001), persistent left superior vena cava (P = .026), and genetic syndromes other than Trisomy 21 (P = .017). Freedom from left atrioventricular valve-related reoperation was 92.6% (±1.7%) at 20 years. There was no significant difference in left atrioventricular valve-related reoperation in young infants compared with older infants (P = .084).
Conclusions: Our data demonstrated that excellent long-term survival could be achieved with early repair for complete atrioventricular septal defect, and the need for reoperations due to left atrioventricular valve regurgitation was low. Primary correction in patients aged less than 3 months is, when clinically necessary, well tolerated. Palliative procedures can be avoided in the majority of patients.
abstract_id: PUBMED:36102879
Timing of surgical repair and resource utilisation in infants with complete atrioventricular septal defect. Introduction: Variation exists in the timing of surgery for balanced complete atrioventricular septal defect repair. We sought to explore associations between timing of repair and resource utilisation and clinical outcomes in the first year of life.
Methods: In this retrospective single-centre cohort study, we included patients who underwent complete atrioventricular septal defect repair between 2005 and 2019. Patients with left or right ventricular outflow tract obstruction and major non-cardiac comorbidities (except trisomy 21) were excluded. The primary outcome was days alive and out of the hospital in the first year of life.
Results: Included were 79 infants, divided into tertiles based on age at surgery (1st = 46 to 137 days, 2nd = 140 - 176 days, 3rd = 178 - 316 days). There were no significant differences among age tertiles for days alive and out of the hospital in the first year of life by univariable analysis (tertile 1, median 351 days; tertile 2, 348 days; tertile 3, 354 days; p = 0.22). No patients died. Fewer post-operative ICU days were used in the oldest tertile relative to the youngest, but days of mechanical ventilation and hospitalisation were similar. Clinical outcomes after repair and resource utilisation in the first year of life were similar for unplanned cardiac reinterventions, outpatient cardiology clinic visits, and weight-for-age z-score at 1 year.
Conclusions: Age at complete atrioventricular septal defect repair is not associated with important differences in clinical outcomes or resource utilisation in the first year of life.
abstract_id: PUBMED:31006033
Propensity-matched comparison of the long-term outcome of the Nunn and two-patch techniques for the repair of complete atrioventricular septal defects. Objectives: To compare the long-term performance of the Nunn and 2-patch techniques for the repair of complete atrioventricular septal defects.
Methods: Between January 1995 and December 2015, a total of 188 patients (Nunn n = 41; 2-patch n = 147) were identified from hospital databases. Univariable Cox regression was performed to calculate the risk of reintervention in each group. Propensity score matching was used to balance the Nunn group and the 2-patch group.
Results: Baseline characteristics including age at surgery, weight, trisomy 21, other cardiac anomalies, previous operations and preoperative atrioventricular valve regurgitation did not differ between the 2 groups. Overall, there was no difference in mortality between the 2 groups (P = 0.43). Duration of cardiopulmonary bypass (CPB) and myocardial ischaemia time were 29 min (P < 0.001) and 28 min (P < 0.001) longer, respectively, in the 2-patch group. Median follow-up was 10.8 years (2-21 years). Unadjusted Cox regression did not reveal a significant difference in the risk of reoperation for either group 9 years after initial surgery [hazard ratio (HR) (Nunn) 0.512, 95% confidence interval 0.176-1.49; Nunn 89%; 2-patch 82%]. This finding was reiterated from Cox regression performed on the propensity-matched sample (31 pairs). The probability of freedom from moderate or worse left atrioventricular valve regurgitation or left ventricular outflow obstruction was similar in the 2 groups.
Conclusions: The Nunn and 2-patch techniques are comparable in terms of the long-term mortality and probability of freedom from reoperation, moderate or severe left atrioventricular valve regurgitation and left ventricular outflow obstruction. However, the duration of CPB and myocardial ischaemia is longer in the 2-patch group.
abstract_id: PUBMED:25099029
Repair of complete atrioventricular septal defect in infants with Down syndrome: outcomes and long-term results. In clinical practice, congenital heart disease (CHD) is combined with malformations of other organs in about 10% of cases, including chromosomal disease with heart defects, which is observed mainly with certain syndromes. In the Bakoulev SCCS (Moscow, Russian Federation), from 01.2005 to 01.2011, complete atrioventricular septal defect (CAVSD) repair was performed on 163 patients (aged 5.6 ± 3.0 months) with Down syndrome (DS) using the single-patch (n = 40) and two-patch (n = 123) methods. The control group consisted of 214 infants aged 6.49 ± 3.03 months with CAVSD and normal karyotype. A retrospective cohort study was conducted, together with a comparative analysis of the immediate (up to 30 days) and long-term (12-75 months, mean 56 ± 15 months) results of CAVSD repair in infants with DS and normal karyotype/chromosome set (NK). During the hospital treatment period, we registered the following complications: pulmonary hypertensive crises in 6% (n = 9) of patients with DS and in 10% (n = 21) of infants with NK, and infectious complications in 21% (n = 34) of patients with DS and in 8% (n = 17) of infants with NK. The pattern of sequelae differed between the groups. The doses and duration of cardiotonic support in the NK patients were significantly higher than in the DS patients (7.5 ± 2.1 days vs 3.4 ± 1.15 days, p < 0.05). Respiratory infections against a background of immunodeficiency were found more often in the DS group (21% in DS vs 8% in NK, p < 0.05), requiring longer postoperative mechanical ventilation in DS patients than in normal infants (DS 5.1 ± 2.8 days vs NK 1.7 ± 0.8 days, p < 0.05). In DS infants, abnormalities of the left AV valve (doubling of the mitral valve, single papillary muscle, closely spaced groups of papillary muscles, leaflet or chordal dysplasia, hypoplastic valve ring) occurred significantly less often than in children with the same defect but without Down syndrome (8% DS vs 12% NK; p < 0.05). Concerning the long-term results, there was no significant difference (Gehan-Wilcoxon test) in actuarial freedom from reoperation after repair of CAVSD between the DS and NK groups (p < 0.13). However, the presence of Down syndrome significantly increases the risk of severe comorbidities that have a substantial impact on the recovery period, as well as on life expectancy, even after successful CHD correction.
abstract_id: PUBMED:22883625
Successful replacement of common atrioventricular valve with a single mechanical prosthetic valve in an infant with repaired complete atrioventricular septal defect and methicillin-resistant Staphylococcus aureus endocarditis. A 4-month-old infant with trisomy 21 and repaired complete atrioventricular septal defect developed endocarditis with methicillin-resistant Staphylococcus aureus in the early postoperative period. We report the successful replacement of the common atrioventricular valve with a single St. Jude mechanical prosthetic valve, along with an intraluminal pulmonary artery banding to restrict pulmonary flow.
abstract_id: PUBMED:29948031
Bradyarrhythmias in Repaired Atrioventricular Septal Defects: Single-Center Experience Based on 34 Years of Follow-Up of 522 Patients. Atrioventricular Septal Defect (AVSD) is a rare congenital heart defect (CHD) often associated with genetic syndromes, most commonly Down syndrome (DS). Over the last four decades, surgical repair has increased survival and improved quality of life in these patients. The prevalence of bradyarrhythmias, namely atrioventricular block (AVB) and sinus node dysfunction (SND), in AVSD is only partially known. A total of 522 cases with partial or complete AVSD (38.7% with DS), undergoing intracardiac repair from 1982 to 2016 at our institution, were reviewed from our institutional database. Thirty-eight patients (7.3%) received permanent pacemaker (PM) implantation for AVB (early or late) or SND. AVB requiring a PM was found in 26 (4.98%) and was further subdivided into early-onset AVB in 14 (2.6%) and late-onset AVB in 12 (2.2%) (median 4 [IQR 1-7] years). In addition, 12 patients (2.3%) experienced late SND requiring a PM (median 11 [IQR 3.5-15.2] years). Early and late AVB were independent of the type of AVSD (partial or complete), whereas late SND was observed markedly more often in complete AVSD than in partial AVSD (p = 0.017). We classified the cohort into two main categories: DS (202, 38.7%) and non-DS (320, 61.3%). On Kaplan-Meier survival analysis, DS was significantly associated with late-onset bradyarrhythmias (p = 0.024). On Cox regression analysis, we identified DS as an independent predictor of PM implantation (HR 2.17). In conclusion, about 7% of repaired AVSD patients need PM implantation during follow-up. There are no differences in early and late AVB occurrence according to the type of AVSD. There is a higher incidence of late SND in repaired complete AVSD, with a later onset in patients with associated DS. Moreover, DS appears to be an independent predictor of PM implantation.
Answer: Down syndrome does not negatively affect the long-term results of complete atrioventricular septal defect (CAVSD) when the defect is repaired during the first year of life. A study with a complete follow-up rate of 95% and a mean follow-up period of 99±47 months for Down syndrome patients and 80±64 months for non-Down syndrome patients found that the freedom from re-operation for left atrioventricular valve regurgitation and the actuarial survival rate at 13 years were not significantly different between the two groups. Specifically, the rates were 96±4% and 94±4% for Down syndrome patients and 85±7% and 90±5% for non-Down syndrome patients, respectively (PUBMED:15740947).
Additionally, other studies have shown that the presence of Down syndrome does not significantly increase the risk of severe comorbidities that impact recovery or life expectancy after successful CAVSD correction (PUBMED:25099029). Furthermore, the long-term outcome after early repair of CAVSD in young infants, including those with Down syndrome, has demonstrated excellent survival rates and a low need for reoperations due to left atrioventricular valve regurgitation (PUBMED:32919770).
In summary, the presence of Down syndrome does not appear to adversely affect the long-term outcomes of CAVSD repair when performed during the first year of life. |
Instruction: Is hypogastric artery embolization during endovascular aortoiliac aneurysm repair (EVAR) innocuous and useful?
Abstracts:
abstract_id: PUBMED:18276173
Is hypogastric artery embolization during endovascular aortoiliac aneurysm repair (EVAR) innocuous and useful? Introduction: We hypothesized that coverage of the hypogastric artery with a stent-graft causes occlusion of the artery in its proximal segment, allowing collateral network formation in the distal segments of the artery. In contrast, hypogastric embolization may cause the formation of microthrombi that tend to disseminate, leading to embolic occlusion of secondary branches and collaterals. This phenomenon worsens pelvic ischemia. To answer this question, we compared two groups of patients with aortoiliac aneurysms treated with or without coil embolization to assess 1) the occurrence and evolution of buttock ischemia and 2) the effect on endoleak.
Materials/methods: Between October 1995 and January 2007, 147 out of 598 EVAR patients (24.6%) required occlusion of one or both hypogastric arteries. 101 were available for over one year of follow-up. Group A included 76 patients (75%) who underwent coil embolization before EVAR and group B 25 patients (25%) who had their hypogastric artery covered by the sole limb of the stent. Patient demographics, aneurysm characteristics, operative details, immediate and long term clinical outcomes, and CT-scan evaluation were stored prospectively in a specific data base and analyzed retrospectively.
Results: There were 96 males (95%). Mean age was 72.1 ± 9.5 years. One month postoperatively, 51 patients (50.0%) suffered from buttock claudication. After six months, 34 patients (34%) were still disabled: 32 in Group A (42%) and 2 in Group B (8%) (p=0.001). Postoperative sexual dysfunction occurred in 19 (19.6%), without statistical difference between the two groups. Type 2 endoleaks occurred in 12 patients (16.0%) in group A and 4 patients (16.0%) in group B (p=1). Endoleak from the hypogastric artery occurred in one patient in each group. Univariate analysis showed that predictive factors of long-term (over six months) buttock claudication were embolization (p<0.001), younger age (p<0.03), coronary disease (p=0.06) and left ventricular dysfunction (p<0.01). Logistic regression analysis showed that buttock claudication was independently associated with embolization (OR = 9.1 [95% CI 1.9-44]) and left ventricular dysfunction (OR = 4.1 [95% CI 1.3-12.7]).
Conclusions: Coil embolization of hypogastric artery during EVAR is not an innocuous procedure and may not reduce the rate of type II endoleak.
abstract_id: PUBMED:38328452
Mid-term outcomes of hypogastric artery embolization in endovascular aneurysm repair: a case series. Hypogastric artery embolization is performed during endovascular aneurysm repair (EVAR) involving the common iliac artery. Within this case series, we observed elevated rates of sac expansion subsequent to this intervention. Between April 2009 and March 2021, 22 patients underwent EVAR with hypogastric artery embolization. We evaluated the mid-term outcomes for these patients. The mean follow-up period was 57 months. We achieved a 100% technical success rate without open conversion and with no in-hospital deaths. The rates of freedom from aneurysm expansion at 1, 3, and 5 years were 90.5%, 59.1%, and 37.5%, respectively. The proportion of patients with sac expansion exceeding 5 mm was 54.5% (12/22). Combined endovascular aortic aneurysm repair and embolization of the hypogastric artery might be associated with a high rate of late sac expansion. Larger trials are needed to verify the risks and benefits.
abstract_id: PUBMED:33038602
The Effect of Hypogastric Artery Revascularization on Ischemic Colitis in Endovascular Aneurysm Repair. Background: The objective of the study was to examine the effect of hypogastric revascularization maneuvers on the rate of postoperative ischemic colitis among patients undergoing endovascular aortoiliac aneurysm repair.
Methods: Using the 2011-2018 Endovascular Aneurysm Repair Procedure-Targeted American College of Surgeons National Surgical Quality Improvement Program Participant Use Files, we analyzed patients undergoing elective endovascular infrarenal aortoiliac aneurysm repairs. Using multivariable modeling techniques, a cohort of patients at high risk for postoperative ischemic colitis was identified. The outcomes of this group were then compared using Pearson's chi-square testing in accordance with whether or not they underwent hypogastric revascularization.
Results: Of 4753 patients undergoing endovascular aortoiliac aneurysm repair in the National Surgical Quality Improvement Program cohort, 1161 had concomitant hypogastric revascularization procedures. High-risk predictors of ischemic colitis included chronic obstructive pulmonary disease and concurrent renal artery or external iliac artery stenting. There was not a significant association between pelvic revascularization and postoperative ischemic colitis [1.0% with versus 0.5% without pelvic revascularization; adjusted odds ratio of ischemic colitis with revascularization 2.07 (0.96, 4.46); P = 0.06] after adjustment for patient- and procedure-related factors. In a subgroup analysis of patients with a distal aneurysm extent beyond the common iliac artery, the incidence of ischemic colitis was significantly lower in patients without pelvic revascularization (0.1% versus 1.6%, P = 0.004).
Conclusions: Our analysis of patients undergoing elective endovascular repair of infrarenal aortoiliac aneurysmal disease did not find a reduced incidence of postoperative ischemic colitis in patients who received a concomitant pelvic revascularization procedure, suggesting instead that such procedural adjuncts may actually increase risk for this complication.
abstract_id: PUBMED:35532782
Prospective clinical study for claudication after endovascular aneurysm repair involving hypogastric artery embolization. Purpose: This prospective study aimed to assess the prognosis of claudication after endovascular aneurysm repair (EVAR) involving hypogastric artery (HGA) embolization.
Methods: Patients who were scheduled to undergo EVAR involving bilateral or unilateral HGA embolization (BHE or UHE, respectively) between May 2017 and January 2019 were included in this study. Patients underwent the walk test preoperatively, one week postoperatively, and monthly thereafter for six months. The presence of claudication and the maximum walking distance (MWD) were recorded. A near-infrared spectroscopy monitor was placed on the buttocks, and the recovery time (RT) was determined. A walking impairment questionnaire (WIQ) was completed to determine subjective symptoms.
Results: Of the 13 patients who completed the protocol, 12 experienced claudication in the 6-min walk test. The MWD was significantly lower at one week postoperatively than preoperatively. The claudication prevalence was significantly higher at five and six months postoperatively after BHE than after UHE. BHE was associated with longer RTs and lower WIQ scores than UHE.
Conclusions: We noted a trend in adverse effects on the gluteal circulation and subjective symptoms ameliorating within six months postoperatively, with more effects being associated with BHE than with UHE. These findings should be used to make decisions concerning management strategies for HGA reconstruction.
abstract_id: PUBMED:25414170
Clinical impact of hypogastric artery occlusion in endovascular aneurysm repair. Purpose: To report the long-term results for patients treated with endovascular aneurysm repair and additional embolization and coverage of the hypogastric artery compared with patients treated with simple endovascular aneurysm repair.
Methods: A database of our endovascular aneurysm repair patient cohort was reviewed to find patients with iliac artery aneurysms. The baseline characteristics, the procedural data and the results for patients treated with endovascular aneurysm repair and concomitant hypogastric artery embolization were compared with those for patients treated with simple endovascular aneurysm repair. The results were analyzed for significant differences.
Results: Of 106 endovascular aneurysm repair patients treated at our vascular unit from 2001 to 2010, 24 had undergone additional hypogastric artery embolization. The complication rate was significantly increased in this group (12.5% vs. 2.4%; p = 0.041), and the long-term results were significantly poorer. Additional hypogastric artery embolization resulted in late rupture (1.2% vs. 12.5%; p = 0.036), buttock claudication (8.6% vs. 43.8%; p = 0.001) and new onset erectile dysfunction (17.3% vs. 42.9%; p = 0.043).
Conclusion: Endovascular aneurysm repair with extension of the stent graft to the external iliac artery and embolization of the hypogastric artery was associated with more complications and worse long-term results compared with simple endovascular aneurysm repair.
abstract_id: PUBMED:32442614
The association between perioperative embolization of hypogastric arteries and type II endoleaks after endovascular aortic aneurysm repair. Objective: Type II endoleaks (T2ELs) are the most common type of endoleak after endovascular aneurysm repair (EVAR). The iliolumbar artery arising from the hypogastric artery is often a major source of T2ELs, and transarterial embolization of the iliolumbar artery through the hypogastric artery is sometimes performed to interrupt sac expansion during follow-up. Considering the equivocal results of an association between hypogastric embolization and T2ELs in previous studies, this topic has re-emerged after the advent of iliac branch devices. This study reviewed our series to clarify whether hypogastric embolization is associated with T2ELs at 12 months after EVAR.
Methods: Patients who underwent elective EVAR between June 2007 and May 2017 at our institution were retrospectively reviewed. Patients with postoperative computed tomography angiography (CTA) at 12 months were included. Patients in whom CTA revealed type I or type III endoleaks during follow-up, who required reinterventions before 12 months, and who had solitary iliac aneurysms were excluded. The primary outcome was the incidence of T2ELs at 12 months after EVAR. The associations of patients' characteristics, anatomic factors, hypogastric embolization, and type of endograft with the primary outcome were analyzed.
Results: In total, 375 patients were enrolled. During the median follow-up of 59.5 months (interquartile range, 19-126 months), 40 patients died, and 50 reinterventions were performed. In 108 patients (28.8%), either hypogastric artery was embolized to extend distal landings to the external iliac artery. Bilateral and unilateral embolization was performed in nine and 99 patients, respectively. In total, 153 patients (40.8%) had T2ELs found by CTA at 12 months. In the univariate analysis, the status of hypogastric artery occlusion or embolization was not significantly different between patients with and without T2ELs. However, there were not enough patients to detect a 10% difference in T2ELs with >80% statistical power. In the multivariate analysis, significant associations with T2EL were observed for female sex (P = .049), patent inferior mesenteric artery (P = .006), and presence of five or more patent lumbar arteries (P < .001) but not for hypogastric embolization. In addition, compared with the Zenith (Cook Medical, Bloomington, Ind) endograft, the Excluder (W. L. Gore & Associates, Flagstaff, Ariz) endograft was significantly related to T2EL (P = .001).
Conclusions: No significant association between hypogastric embolization and T2EL was demonstrated in this retrospective study, which lacked adequate statistical power.
abstract_id: PUBMED:31471238
Preservation of pelvic perfusion with iliac branch devices does not decrease ischemic colitis compared with hypogastric embolization in endovascular abdominal aortic aneurysm repair. Objective: Ischemic colitis is a rare but devastating complication of endovascular repair of infrarenal abdominal aortic aneurysms. Although it is rare (0.9%) in standard endovascular aneurysm repair (EVAR), the incidence increases to 2% to 3% in EVAR with hypogastric artery embolization (HAE). This study investigated whether preservation of pelvic perfusion with iliac branch devices (IBDs) decreases the incidence of ischemic colitis.
Methods: We used the targeted EVAR module in the American College of Surgeons National Surgical Quality Improvement Program database to identify patients undergoing EVAR of infrarenal abdominal aortic aneurysm from 2012 to 2017. The cohort was further stratified into average-risk and high-risk groups. Average-risk patients were those who underwent elective repair on the basis of aneurysm size, whereas high-risk patients were repaired emergently for indications other than an asymptomatic aneurysm. Within these groups, we examined the 30-day outcomes of standard EVAR, EVAR with HAE, and EVAR with IBDs. The primary outcome was the incidence of ischemic colitis. Secondary outcomes included mortality, major organ dysfunction, thromboembolism, length of stay, and return to the operating room. The χ2 test, Fisher exact test, Kruskal-Wallis test, and multivariate regression models were used for data analysis.
Results: There were 11,137 patients who had infrarenal EVAR identified. We designated this the all-risk cohort, which included 9263 EVAR, 531 EVAR-HAE, and 1343 EVAR-IBD procedures. These were further stratified into 9016 cases with average-risk patients and 2121 cases with high-risk patients. In the average-risk group, 7482 had EVAR, 411 had EVAR-HAE, and 1123 had EVAR-IBD. In the high-risk group, 1781 had EVAR, 120 had EVAR-HAE, and 220 had EVAR-IBD. There was no significant difference in 30-day outcomes (including ischemic colitis) between EVAR, EVAR-HAE, and EVAR-IBD in the all-risk and high-risk groups. In the average-risk cohort, EVAR-HAE was associated with a higher mortality rate than EVAR (2.2% vs 1.0%; adjusted odds ratio, 2.58; P = .01). Although EVAR-IBD was not superior to EVAR-HAE in 30-day mortality, major organ dysfunction, or ischemic colitis in this average-risk cohort, EVAR-IBD exhibited a trend toward lower mortality compared with EVAR-HAE in this cohort, but it was not statistically significant (1.0% vs 2.2%; adjusted odds ratio, 0.42; P = .07).
Conclusions: Ischemic colitis is a rare complication of EVAR. HAE does not appear to increase the risk of ischemic colitis, and preservation of pelvic perfusion with IBDs does not decrease its incidence. Although HAE is associated with significantly higher mortality than standard EVAR in average-risk patients, the preservation of pelvic perfusion with IBDs does not appear to improve mortality over HAE.
abstract_id: PUBMED:19765531
Hypogastric artery preservation during endovascular aortic aneurysm repair: is it important? Endovascular repair of aortoiliac artery aneurysm is a safe and effective treatment strategy. Selective hypogastric artery embolization with coils may be necessary to allow the endograft to anchor in the aneurysm-free external iliac artery, thereby eliminating hypogastric endoleak into the aortoiliac aneurysm. Considerable controversy exists regarding the safety of intentional occlusion of the hypogastric artery. Proximal occlusion of a hypogastric artery with embolic coils typically produces little or no clinical symptoms due to well-collateralized pelvic arterial networks. On the other hand, significant complications, such as colonic ischemia, spinal cord paralysis, buttock claudication, or erectile dysfunction are well-recognized adverse events after hypogastric artery embolization. This article examines the natural history of hypogastric artery embolization as well as clinical data regarding the safety and complications following this procedure. Clinical studies regarding risk factors that might contribute to ischemic complication following hypogastric artery embolization are presented. Lastly, treatment strategies to preserve the hypogastric artery thereby obviating the need for hypogastric artery embolization are discussed.
abstract_id: PUBMED:28528715
Embolization for persistent type IA endoleaks after chimney endovascular aneurysm repair with Onyx®. Purpose: The purpose of this study was to determine retrospectively the safety and technical success rate of embolization using ethylene vinyl alcohol copolymer (Onyx®) for persistent type 1A endoleaks after chimney endovascular aneurysm repair (EVAR) for complex aortic aneurysms.
Material And Methods: Nine consecutive patients (6 men, 3 women) with a mean age of 78.6 years (range: 62-87 years) presenting with persistent type IA endoleaks after chimney EVAR and an increase of aneurysm size were treated using transarterial embolization with Onyx®.
Results: Technical success was obtained in all patients (100%) and no complications were observed. Mean follow-up was 16 months (range: 3-35 months). Primary clinical efficacy was obtained for 8/9 patients (89%) and primary technical efficacy for 6/9 patients (67%). Secondary clinical efficacy was 100%, and secondary technical efficacy was 78%.
Conclusion: Our results suggest that arterial embolization using Onyx® appears as a feasible and safe endovascular procedure of type IA endoleaks after chimney EVAR, although further validation is now required.
abstract_id: PUBMED:28216366
Implications of concomitant hypogastric artery embolization with endovascular repair of infrarenal abdominal aortic aneurysms. Objective: Hypogastric artery embolization (HAE) is associated with significant risk of ischemic complications. We assessed the impact of HAE on 30-day outcomes of endovascular aneurysm repair (EVAR) of infrarenal abdominal aortic aneurysms.
Methods: We queried the American College of Surgeons National Surgical Quality Improvement Program database from 2011 to 2014 to identify and to compare clinical features, operative details, and 30-day outcomes of EVAR with those of concomitant HAE with EVAR (HAE + EVAR). Multivariate analysis was performed to determine preoperative and intraoperative factors associated with development of significant complications observed in patients with HAE + EVAR.
Results: In a cohort of 5881 patients, 387 (6.6%) underwent HAE + EVAR. Compared with EVAR, a higher incidence of ischemic colitis (2.6% vs 0.9%; P = .002), renal failure requiring dialysis (2.8% vs 1%; P = .001), pneumonia (2.6% vs 1.3%; P = .039), and perioperative blood transfusion (17% vs 13%; P = .024) was noted after HAE + EVAR. Thirty-day thromboembolic events, strokes, myocardial infarction, lower extremity ischemia, reoperation, and readmission rates were not significantly different (P > .05). Mortality at 30 days in HAE + EVAR patients was 4.1% compared with 2.5% with EVAR (P = .044). HAE was independently associated with increased risk of colonic ischemia (adjusted odds ratio, 2.98; 95% confidence interval, 1.44-6.14; P = .003) and renal failure requiring dialysis (adjusted odds ratio, 2.22; 95% confidence interval, 1.09-4.53; P = .029). However, HAE was not an independent predictor of mortality. Average length of hospital stay was 4 ± 8.5 days after HAE + EVAR vs 3.3 ± 5.9 days after EVAR (P = .001).
Conclusions: Concomitant HAE with EVAR is associated with longer and more complicated hospital stays. Ischemic colitis is a rare complication of EVAR. HAE increases the risk of ischemic colitis and renal failure requiring dialysis. This study highlights the importance of hypogastric artery preservation during EVAR.
Answer: Hypogastric artery embolization during endovascular aortoiliac aneurysm repair (EVAR) is not an innocuous procedure and may not be as useful as previously thought. Studies have shown that coil embolization of the hypogastric artery during EVAR can lead to significant complications such as buttock claudication and sexual dysfunction, and does not necessarily reduce the rate of type II endoleak (PUBMED:18276173). Additionally, hypogastric artery embolization has been associated with elevated rates of sac expansion after the intervention (PUBMED:38328452), and preservation of pelvic perfusion with iliac branch devices does not appear to decrease the incidence of ischemic colitis relative to embolization (PUBMED:31471238).
Furthermore, hypogastric artery preservation during EVAR is important as intentional occlusion of the hypogastric artery can lead to complications such as colonic ischemia, spinal cord paralysis, buttock claudication, or erectile dysfunction (PUBMED:19765531). The embolization has also been associated with increased risk of ischemic colitis and renal failure requiring dialysis (PUBMED:28216366). A study also found that hypogastric artery embolization was associated with more complications and worse long-term results compared with simple EVAR (PUBMED:25414170).
In contrast, a retrospective study did not demonstrate a significant association between hypogastric embolization and type II endoleaks, although it acknowledged a lack of adequate statistical power (PUBMED:32442614). Additionally, a prospective study indicated that the adverse effects on gluteal circulation and subjective symptoms tend to ameliorate within six months postoperatively, with bilateral hypogastric artery embolization associated with more effects than unilateral embolization (PUBMED:35532782).
In conclusion, hypogastric artery embolization during EVAR carries risks of complications and may not provide the anticipated benefits such as reducing type II endoleaks or preventing ischemic colitis. Preservation of the hypogastric artery, when possible, is recommended to avoid these complications. |
Instruction: Is suicidality distinguishable from depression?
Abstracts:
abstract_id: PUBMED:30808118
Self-Forgiveness Moderates the Effects of Depression on Suicidality. Objective: Not all depressed individuals are suicidal. A growing body of studies has examined forgiveness, especially self-forgiveness, as a protective factor against suicide, based on the observation that suicide is often accompanied by negative self-perceptions. However, less is known about how different subtypes of forgiveness (i.e., forgiveness-of-self, forgiveness-of-others and forgiveness-of-situations) could alleviate the effects of depression on suicide. Hence, this study examined forgiveness as a moderator of the relationship between depression and suicidality.
Methods: 305 participants, consisting of 87 males and 218 females, were included in the study. The mean age was 41.05 years (SD: 14.48; range: 19-80). Depression, anxiety, and forgiveness were measured through self-report questionnaires, and suicidal risk was measured through a structured interview. Moderation effects were examined through hierarchical regression analyses.
Results: Depression positively correlated with suicidality. Hierarchical regression analysis indicated forgiveness as a moderator of the effect of depression on suicidality. Further analysis indicated only forgiveness-of-self as a significant moderator; the effects of forgiveness-of-others and forgiveness-of-situations were not significant.
Conclusion: These findings suggest that forgiveness-of-self is essential in reducing the effects of depression on suicidality. It is suggested that self-acceptance and the promotion of self-forgiveness should be considered important factors when developing suicide prevention strategies.
abstract_id: PUBMED:36457302
Depression and Suicidality: The Roles of Social Support and Positive Mental Health. Objective: Despite being preventable, suicide remains a leading cause of death globally, with depression being one of the more prominent risk factors. This study examines the roles of social support and positive mental health in the depression-suicidality pathway.
Methods: We utilized data from the Singapore Mental Health Study 2016. Social support and positive mental health were examined as mediators in the relationship between 12-month depression and 12-month suicidality using survey-weighted generalized structural equation modeling.
Results: Overall positive mental health was found to partially mediate the relationship between depression and suicide. Of the discrete positive mental health domains, the depression-suicidality relationship was partially mediated by general coping and fully mediated by personal growth and autonomy.
Conclusion: While findings regarding social support were inconclusive, positive mental health may play a significant role in alleviating the effects of depression on suicidality. This highlights the multifaceted nature of suicidality and reveals positive mental health as a new area in assessing and treating at-risk people, to improve clinical outcomes. Highlights: The effect of depression on suicidality was partially mediated by overall positive mental health. General coping partially mediated the relationship between depression and suicidality. Personal growth and autonomy fully mediated the relationship between depression and suicidality.
abstract_id: PUBMED:32818041
Depression and suicidality among adolescents living with human immunodeficiency virus in Lagos, Nigeria. Background: Nigeria is considered to have the second highest number of people living with human immunodeficiency virus (HIV) worldwide, with a national HIV infection prevalence of 5.2% in children and adolescents. Adolescents with HIV infection have been reported to be more prone to developing comorbid emotional difficulties, including depression and suicidality, compared with those without HIV infection. This study aimed to determine the prevalence and correlates of depression and suicidality in adolescents living with HIV infection.
Methods: Through a consecutive sampling method, two hundred and one adolescents attending HIV outpatient clinics in two tertiary hospitals (Lagos State University Teaching Hospital and the Nigerian Institute of Medical Research) were recruited. Confidentiality was assured and maintained. Suicidality and depression were assessed by the researcher using the corresponding modules of the Mini International Neuropsychiatric Interview for Children and Adolescents (MINI-Kid), while the independent variables were assessed using self-administered questionnaires. Data were analyzed with the Statistical Package for Social Sciences version 20.
Result: The prevalences of current major depressive episode, lifetime major depressive episode, and suicidality were 16.9%, 44.8% and 35.3%, respectively. Female gender, decreased cluster of differentiation 4 (CD4) count and high adverse childhood experience (ACE) scores were significantly associated with a current depressive episode, while poor social support, high ACE, physical abuse, contracting HIV infection after birth and disclosure of status were associated with a lifetime major depressive episode. Factors associated with suicidality were a high ACE score, physical abuse, and emotional abuse. After logistic regression analysis, gender, high ACE and CD4 level were independently associated with current major depression, while only poor social support and contracting HIV infection after birth were independently associated with lifetime major depression. There was a positive correlation between suicidality and depression.
Conclusion: The high rates of depression and suicidality among adolescents living with HIV infection in the current study clearly show the need for regular psychological assessment in this group of adolescents and provide a strong indication for multidisciplinary management.
abstract_id: PUBMED:38206536
Do Self-Processes and Parenting Mediate the Effects of Anxious Parents' Psychopathology on Youth Depression and Suicidality? To understand how anxious parents' global psychopathology increases children's risks for depression and suicidality, we tested mediational pathways through which parent global psychopathology was associated with youth depression and suicidality over a six-year period. Parents (n = 136) who had an anxiety disorder at baseline reported global psychopathology and youth internalizing problems. Youth did not have any psychiatric disorder at baseline and they reported self-esteem, perceived control, and perceived parental warmth and rejection at baseline and 1-year follow-up. At 6-year follow-up, youth depression and suicidality were assessed via multiple reporters including the self, parent, and/or an independent evaluator. Results showed that parental psychopathology had an indirect but not direct effect on youth depression and suicidality via perceived control. No associations were found for the other hypothesized mediators. Perceived control might be a transdiagnostic intervention target in depression and suicide prevention programs for youth exposed to parental anxiety.
abstract_id: PUBMED:21139987
Suicidality, depression, major and minor negative life events: a mediator model. Background: Major negative life events are associated with higher suicidality. In this association, two mediating paths were hypothesized: (a) via minor negative life events and (b) via depression.
Methods: Ninety-six adolescent primary care patients were recruited in clinics, a physician's office, and school nurses' offices.
Results: (1) Minor negative life events were associated with depressive symptoms and suicidality. (2) Depressive symptoms were associated with suicidality. (3) Depressive symptoms mediated the association of minor negative life events with suicidality.
Conclusions: Findings suggest that minor negative life events may be associated with suicidal ideation among adolescent primary care patients, and that depressive symptoms may mediate the association of minor negative life events with suicidality.
abstract_id: PUBMED:37701338
Disability Weights and Years Lived with Disability of Depression with and Without Suicidality. Background: Globally, depression is a silent epidemic, and more than 350 million people suffer from depression. For a long time, the belief prevailed that children and young people cannot suffer from depressive disorders, yet depression is slowly becoming one of the leading health problems among the young population.
Objective: This research aims to determine the burden of mental health disorders attributable to depression, anxiousness, and fear, with and without suicidal ideation, among youth in Bosnia and Herzegovina.
Methods: A prospective cross-sectional study was performed as a screening for depression with the standardized Hamilton screening instrument from May 3, 2018, to April 4, 2019, among young people: students in secondary schools and at the Faculty of Pharmacy and the Medical Faculty of the University of Tuzla, in the most populous Tuzla Canton in the Federation of Bosnia and Herzegovina. To achieve the research goals, we expressed the burden attributed to depression with and without suicidality, anxiousness, and fear as Disability Weight (DW) and Years Lived with Disability (YLD). At the population level, YLD was calculated by multiplying DW by the prevalence rate of depression, anxiousness, and fear per thousand of the population (YLD = DW x prevalence/1000), and DW was adjusted for suicidality.
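The YLD formula quoted above is simple enough to illustrate directly. The sketch below applies the stated relation (YLD = DW x prevalence/1000) to purely hypothetical numbers; the disability weight and prevalence used are illustrative assumptions, not values reported by the study.

```python
def years_lived_with_disability(dw: float, prevalence_per_1000: float) -> float:
    # Implements the formula stated in the abstract: YLD = DW x prevalence/1000.
    return dw * prevalence_per_1000 / 1000

# Hypothetical example (not study data): a disability weight of 0.145 for
# depression without suicidality and a prevalence of 120 per 1,000 population.
print(years_lived_with_disability(dw=0.145, prevalence_per_1000=120))  # -> 0.0174
```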
Results: The participants' ages ranged from 16 to 24 years, with a mean of 20.6 ± 1.9 years. The mean body mass index (BMI) of 21.9 ± 2.7 was within the recommended reference range of 18.5-24.9 kg/m2. The depression score of all participants ranged from 0 to 32, with a mean of 7.4 ± 6.3, which at the sample level implies entry into the zone of depressive symptoms for our population of respondents. Descriptive statistics and gender differences were reported for sociodemographic variables (age, education status, and secure monthly existence) and for modifying factors related to satisfaction of needs (life satisfaction, hope for the future, and support from a person of influence). Most participants belonged to the 19-21-year age group (71.44%, n=180), with 14.28% (n=36) in each of the other age groups (16-18 and 22-24 years); sixty-two percent of participants were university students, and twenty percent were university failures.
Conclusion: Based on our findings, the very high burden of depression in Bosnia and Herzegovina is a largely unrecognized and unresolved problem among the young population aged 16-24 years. Recognizing and screening for depression in young people is the first step toward prevention.
abstract_id: PUBMED:32343191
Minority status, depression and suicidality among counseling center clients. Objective: This study examined race/ethnicity, gender, sexual orientation, and financial stress and their association with depression and suicidality among university counseling center clients. Methods: The sample included 3,189 participants who received services at a university counseling center. Results: Asian American college students reported more depressive symptoms than European American and Hispanic students and were more likely to have a depression diagnosis than European American and African American students. Female and lesbian/gay/bisexual/questioning (LGBQ) individuals had higher depressive symptom scores and were more likely to have a depression diagnosis and a history of suicidal ideation and attempts than male and heterosexual individuals, respectively. Students with high financial stress reported higher depression scores and were more likely to have experienced past and current suicidality. More minority statuses were associated with higher risk for depression and suicidality. Conclusions: Counseling center clients who identified with one or more minority groups had higher risk for depression and suicidality.
abstract_id: PUBMED:34501475
Dissociation and Suicidality in Eating Disorders: The Mediating Function of Body Image Disturbances, and the Moderating Role of Depression and Anxiety. In patients with eating disorders (EDs), elevated dissociation may increase the risk of suicide. Bodily related disturbances, depression, and anxiety may intervene in the association between dissociation and suicidality. In this study we aimed to examine the influence of bodily related disturbances, depression, anxiety, severity of ED symptoms, body mass index (BMI), and type and duration of the ED on the relationship between elevated dissociation and elevated suicidality. The study included 172 inpatients: 65 with anorexia nervosa restricting type, 60 with anorexia nervosa binge/purge type, and 37 with bulimia nervosa. Participants were assessed using self-rating questionnaires for dissociation, suicidality, bodily related parameters, and severity of ED symptomatology, depression, and anxiety. We found that dissociation and suicidality were directly associated. In addition, depression and anxiety moderated the mediating role of body image parameters in the association between increased dissociation and increased suicidality. Thus, only in inpatients with high depression and anxiety, i.e., above the median range, body image disturbances were found to mediate the association between dissociation and suicidality. ED-related parameters did not moderate these relationships. Our study demonstrates that in inpatients with EDs, increased dissociation may be significantly associated with increased suicidality, both directly and via the intervening influence of body image, depression, and anxiety.
abstract_id: PUBMED:34157662
Does depression moderate the relationship between pain and suicidality in adolescence? A moderated network analysis. Background: Whilst growing research suggests that pain is associated with suicidality in adolescence, it remains unclear whether this relationship is moderated by co-morbid depressive symptoms. The present study aimed to investigate whether the pain-suicidality association is moderated by depressive symptoms.
Methods: We performed secondary analyses on cross-sectional, pre-intervention data from the 'My Resilience in Adolescence' [MYRIAD] trial (ISRCTN ref: 86619085; N=8072, 11-15 years). Using odds ratio tests and (moderated) network analyses, we investigated the relationship between pain and suicidality, after controlling for depression, anxiety, inhibitory control deficits and peer problems. We investigated whether depression moderates this relationship and explored gender differences.
Results: Overall, 20% of adolescents reported suicidality and 22% reported pain, whilst nine percent of adolescents reported both. The experience of pain was associated with a four-fold increased risk of suicidality and vice versa (OR=4.00, 95%-CI=[3.54;4.51]), with no gender differences. This cross-sectional association remained significant after accounting for depression, anxiety, inhibitory control deficits and peer problems (aOR=1.39). Depression did not moderate the pain-suicidality association.
Limitations: The item-based, cross-sectional assessment of pain and suicidality precludes any conclusions about the direction of the effects and which aspects of suicidality and pain may drive this association.
Conclusions: Our findings underscore the need to consider pain as an independent risk correlate of suicidality in adolescents. Longitudinal research should examine how this relationship develops during adolescence. Clinically, our findings emphasise the need to assess and address suicidality in adolescents with pain, even in the absence of depressive symptoms.
abstract_id: PUBMED:19221909
Is suicidality distinguishable from depression? Evidence from a community-based sample. Objective: Suicidal behaviour is often considered in the context of depression. Taking an empirical, dimensional and population-based approach, this investigation examines two issues: first, whether it is justifiable to regard suicidality as a symptom of depression or an independent construct. Second, although suicidal behaviour has been shown to decline with increasing age and differ across gender, little is known about the stability of the underlying construct across the lifespan and between genders.
Method: Data were derived from a community-based sample consisting of 7485 people aged 20-24, 40-44 or 60-64 years. Confirmatory factor analysis was used to assess the adequacy of model fit and measurement invariance across gender and then age groups.
Results: The data fitted a two-factor model of depression and suicidality better than a single-factor model. Multi-group analysis indicated strict measurement invariance for both gender and age groups, indicating no significant differences between groups in measurement model fit.
Conclusions: These analyses establish that suicidality is distinguishable from depression, although the factors are substantially correlated. Results suggest that the underlying factorial relationship between suicidality and depression appears not to vary considerably between genders or across cohorts aged 20-24, 40-44 and 60-64 years. Finally, the analyses also highlight the necessity of inventories with scales identifying symptoms of suicidality independent of depression symptoms.
Answer: Yes, suicidality is distinguishable from depression. The investigation using a community-based sample consisting of 7485 people aged 20-24, 40-44, or 60-64 years found that the data fitted a two-factor model of depression and suicidality better than a single-factor model. This indicates that suicidality and depression are separate constructs, although they are substantially correlated. The study also confirmed that the underlying factorial relationship between suicidality and depression does not vary significantly between genders or across the different age cohorts studied (PUBMED:19221909). |
Instruction: Diffuse jejuno-ileitis of Crohn's disease: a separate form of the disease?
Abstracts:
abstract_id: PUBMED:10384331
Diffuse jejuno-ileitis of Crohn's disease: a separate form of the disease? Background And Aims: Diffuse jejuno-ileitis of Crohn's disease may be a homogeneous clinical subgroup. The aim of this work was to compare the demographic and clinical data at diagnosis and the initial treatments of patients with diffuse jejuno-ileitis of Crohn's disease with those of patients without this localization.
Patients And Methods: For demographic and clinical studies, 48 (32M/16F) incident cases of diffuse jejuno-ileitis of Crohn's disease diagnosed between 1988 and 1994 in the EPIMAD register were compared with 96 (48M/48F) controls diagnosed the same year. For therapeutic management, the 48 incident cases were compared with 48 controls.
Results: Diffuse jejuno-ileitis constituted 3.3% of the total incident cases. Median age at diagnosis was significantly lower (20 vs 23 years, P = 0.01) and an upper digestive involvement was more frequent (56% vs 34%, P = 0.03) in patients with diffuse jejuno-ileitis. These patients were more often treated by total parenteral nutrition (43.8% vs 19.6%, P = 0.01) or azathioprine (50% vs 20.8%, P = 0.005). Azathioprine was also introduced earlier (20.7 vs 40.3 months, P = 0.009). Surgery for resection was less often required in diffuse jejuno-ileitis than in controls (65.2% vs 99.8%, P = 0.02) while more stricturoplasties were performed (52.9% vs 10%, P = 0.003); overall surgical rates did not significantly differ in the 2 groups.
Conclusion: Our series suggests that diffuse jejuno-ileitis of Crohn's disease defines a subgroup of patients characterized by a young age at diagnosis and a more frequent and earlier requirement for azathioprine.
abstract_id: PUBMED:9625427
An unusual case associating ileal Crohn's disease and diffuse large B-cell lymphoma of an adjacent mesenteric lymph node. Intestinal non-Hodgkin's lymphomas are a rare complication of long-standing Crohn's disease and generally arise in sites of active inflammatory disease. To our knowledge, we report the first case of an unusual association between ileal Crohn's disease and a diffuse large B-cell non-Hodgkin's lymphoma involving an adjacent mesenteric lymph node but not the intestinal tract. A 22-year-old man was seen for intermittent abdominal pain, vomiting, and severe weight loss that were suggestive of intestinal obstruction. A segmental ileocolonic resection was performed. Gross examination revealed a terminal ileal inflammatory stenosis and enlarged mesenteric lymph nodes. Histologically, terminal ileal Crohn's disease was associated with a diffuse large cell lymphoma localized within one mesenteric lymph node without intestinal involvement. Immunophenotyping performed on deparaffinized sections demonstrated the B phenotype of this lymphoma.
abstract_id: PUBMED:11150039
Long-term outcome of surgical management for diffuse jejunoileal Crohn's disease. Background: In diffuse jejunoileal Crohn's disease, resectional surgery may lead to short-bowel syndrome. Since 1980 strictureplasty has been used for jejunoileal strictures. This study reviews the long-term outcome of surgical treatment for diffuse jejunoileal Crohn's disease.
Methods: The cases of 46 patients who required surgery for diffuse jejunoileal Crohn's disease between 1980 and 1997 were reviewed.
Results: Strictureplasty was used for short strictures without perforating disease (perforation, abscess, fistula). Long strictures (>20 cm) or perforating disease were treated with resection. During an initial operation, strictureplasty was used on 63 strictures in 18 patients (39%). After a median follow-up of 15 years, there were 3 deaths: 1 from postoperative sepsis, 1 from small-bowel carcinoma, and 1 from bronchogenic carcinoma. Thirty-nine patients required 113 reoperations for jejunoileal recurrence. During 75 of the 113 reoperations (66%), strictureplasty was used on 315 strictures. Only 2 patients experienced the development of short-bowel syndrome and required home parenteral nutrition. At present, 4 patients are symptomatic and require medical treatment. All other patients are asymptomatic and require neither medical treatment nor nutritional support.
Conclusions: Most patients with diffuse jejunoileal Crohn's disease can be restored to good health with minimal symptoms by surgical treatment that includes strictureplasty.
abstract_id: PUBMED:8935747
Diffuse jejunoileitis with vasculitic abnormalities in the mesenteric arteries: a rare manifestation of Crohn's disease. The case history of a 31-year-old woman with abdominal complaints of long duration is presented. After 4 years and several hospital admissions the patient underwent diagnostic laparotomy and finally a diagnosis of diffuse jejunoileitis as a manifestation of Crohn's disease was made. The diagnostic procedures as well as the therapeutic strategy are described. A review of the literature is given.
abstract_id: PUBMED:9236620
Primary intestinal Hodgkin's disease complicating ileal Crohn's disease. An unusual primary intestinal lymphoma that occurred as a complication of ileal Crohn's disease is presented. Immunohistochemistry confirmed the light microscopic diagnosis of Hodgkin's disease (nodular sclerosing), and characterized a distinct mucosal nodule as a large-cell anaplastic non-Hodgkin's lymphoma. This unusual lymphoma developed while the patient was being treated with immunosuppressant medication. The present report is a reminder to clinicians of the possibility of occult lymphoma in ileal Crohn's disease.
abstract_id: PUBMED:24337
Diagnosis and treatment of diffuse ileojejunitis. Thirteen patients with diffuse ileojejunitis have been diagnosed and treated by us over the past ten years. The disease bears close resemblance to Crohn's disease and may represent a variant of it. No clearcut relationship to celiac sprue was observed in this group of patients. Therapeutic success was obtained in the majority of patients treated with the anti-inflammatory drugs, sulfasalazine and steroids, with four patients requiring resectional surgery, all others manageable by nonsurgical means. There was no mortality in this series of patients.
abstract_id: PUBMED:885023
Crohn's disease with several intestinal foci ("skip lesions"). A 35-year-old man underwent a resection of part of the ileum due to six "skip lesions" in Crohn's disease. At reoperation four years later, in addition to a typical terminal ileitis, two stenosing lesions were found at the ileal-jejunal junction. In connection with this rare observation, a report is made on multiple lesions and the diffuse type of Crohn's disease, both of which must be included in the therapeutic concept. "Adequate" resections including all segmental lesions are required, separate resections for additional distant lesions, and conservative medical treatment for the widespread diffuse type of the disease.
abstract_id: PUBMED:14797848
Symptomatic complexes of segmental and diffuse enteritis N/A
abstract_id: PUBMED:11023103
Diffuse duodenitis associated with ulcerative colitis. Backwash ileitis and postcolectomy pouchitis are well-recognized complications of ulcerative colitis (UC), whereas inflammation of the proximal small intestine is not. In contrast, small intestinal disease at any level is common in Crohn's disease (CD). Despite this well-established and accepted dogma, rare cases of histologically proven diffuse duodenitis (DD) associated with UC appear in the literature. In this study, we report our experience with similar cases exhibiting this unusual inflammatory phenomenon. Routine histologic sections from four cases of DD associated with well-documented UC were reviewed and the findings correlated with all available medical records. Multiple endoscopic biopsies showing histologic features of UC and colectomy specimens confirming severe ulcerative pancolitis were available for all cases. Varying degrees of active chronic inflammation and architectural mucosal distortion identical to UC were observed in pre- and postcolectomy duodenal biopsies of one of four and four of four cases, respectively. Similar inflammatory patterns were present postoperatively in the ileum in three of four cases and in the jejunum in one case. Endorectal pull-through (ERPT) procedures were performed in three of four patients and an end-to-end ileorectal anastomosis was done in one patient. Despite extensive upper gastrointestinal tract involvement, none of the patients developed postsurgical Crohn's-like complications during a follow-up period of 12 to 54 months. This suggests that patients with pancolitis and DD do not necessarily have CD, but rather may have UC and, most importantly, that successful ERPT procedures may be performed in these patients.
abstract_id: PUBMED:3996935
Crohn's disease in the elderly. The natural history of Crohn's disease in 47 patients, 60 years of age or older at the time of diagnosis, has been defined, and their clinical management and long-term prognosis reviewed. Distal colonic involvement is common in this group, while extensive colonic and diffuse small bowel disease is rare. Distal colonic involvement usually carries a good prognosis except for those few patients who present with perforation, which accounts for most of the disease-related mortality. The pattern of distal ileal disease is similar to that observed in younger patients except for the acute nature of symptoms at first presentation and the low recurrence rates after initial surgical resection.
Answer: Diffuse jejuno-ileitis of Crohn's disease may indeed represent a separate clinical subgroup within Crohn's disease. This form of the disease is characterized by a young age at diagnosis, with patients being diagnosed at a median age significantly lower than those without this localization (20 vs 23 years, P = 0.01). Patients with diffuse jejuno-ileitis also have more frequent upper digestive involvement (56% vs 34%, P = 0.03) and are more often treated with total parenteral nutrition (43.8% vs 19.6%, P = 0.01) or azathioprine (50% vs 20.8%, P = 0.005). Additionally, azathioprine is introduced earlier in the treatment of these patients (20.7 vs 40.3 months, P = 0.009). Surgery for resection is less often required in diffuse jejuno-ileitis than in controls (65.2% vs 99.8%, P = 0.02), while more stricturoplasties are performed (52.9% vs 10%, P = 0.003), although overall surgical rates do not significantly differ between the two groups (PUBMED:10384331).
The long-term outcome of surgical management for diffuse jejunoileal Crohn's disease indicates that most patients can be restored to good health with minimal symptoms through surgical treatment that includes strictureplasty. Strictureplasty is used for short strictures without perforating disease, while long strictures or perforating disease is treated with resection. The majority of patients do not require medical treatment or nutritional support after surgery (PUBMED:11150039).
However, it is important to note that diffuse jejunoileitis can present with complications such as vasculitic abnormalities in the mesenteric arteries (PUBMED:8935747), and there is a potential association with lymphomas, as seen in cases where ileal Crohn's disease is complicated by adjacent lymph node involvement by diffuse large B-cell lymphoma (PUBMED:9625427) or primary intestinal Hodgkin's disease (PUBMED:9236620).
Overall, the evidence suggests that diffuse jejuno-ileitis of Crohn's disease has distinct clinical features and treatment approaches that may warrant its consideration as a separate form of the disease. |
Instruction: Is malaria illness among young children a cause or a consequence of low socioeconomic status?
Abstracts:
abstract_id: PUBMED:22571516
Is malaria illness among young children a cause or a consequence of low socioeconomic status? Evidence from the United Republic of Tanzania. Background: Malaria is commonly considered a disease of the poor, but there is very little evidence of a possible two-way causality in the association between malaria and poverty. Until now, examination of that dual relationship has been limited by the availability of representative data on confirmed malaria cases, the choice of a good proxy for poverty, and the need to account for endogeneity in regression models.
Methods: A simultaneous equation model was estimated with nationally representative data for Tanzania that included malaria parasite testing with RDTs for young children (six-59 months), and accounted for environmental variables assembled with the aid of GIS. A wealth index based on assets, access to utilities/infrastructure, and housing characteristics was used as a proxy for socioeconomic status. Model estimation was done with instrumental variables regression.
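Instrumental variables estimation of the kind described above is commonly implemented as two-stage least squares (2SLS). The sketch below is a generic, hand-rolled illustration on simulated data; the variable names (including 'rainfall' as the instrument) and the coefficients are assumptions for demonstration only, not the study's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated data with an endogenous regressor: 'malaria' and 'wealth' share an
# unobserved confounder, and 'rainfall' serves as a hypothetical instrument.
confounder = rng.normal(size=n)
rainfall = rng.normal(size=n)
malaria = 0.5 * rainfall + confounder + rng.normal(size=n)
wealth = -1.9 * malaria - confounder + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress the endogenous regressor on the instrument (plus constant).
Z = np.column_stack([np.ones(n), rainfall])
malaria_hat = Z @ ols(Z, malaria)

# Stage 2: regress the outcome on the stage-1 fitted values.
X2 = np.column_stack([np.ones(n), malaria_hat])
coef = ols(X2, wealth)
print(f"2SLS estimate of the effect of malaria on wealth: {coef[1]:.2f}")
```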
Results: Results show that households with a child who tested positive for malaria at the time of the survey had a wealth index that was, on average, 1.9 units lower (p-value < 0.001), and that an increase in the wealth index did not reveal significant effects on malaria.
Conclusion: If malaria is indeed a cause of poverty, as the findings of this study suggest, then malaria control activities, and particularly the current efforts to eliminate/eradicate malaria, are much more than just a public health policy, but also a poverty alleviation strategy. However, if poverty has no causal effect on malaria, then poverty alleviation policies should not be advertised as having the potential additional effect of reducing the prevalence of malaria.
abstract_id: PUBMED:23790353
Socioeconomic development as an intervention against malaria: a systematic review and meta-analysis. Background: Future progress in tackling malaria mortality will probably be hampered by the development of resistance to drugs and insecticides and by the contraction of aid budgets. Historically, control was often achieved without malaria-specific interventions. Our aim was to assess whether socioeconomic development can contribute to malaria control.
Methods: We did a systematic review and meta-analysis to assess whether the risk of malaria in children aged 0-15 years is associated with socioeconomic status. We searched Medline, Web of Science, Embase, the Cochrane Database of Systematic Reviews, the Campbell Library, the Centre for Reviews and Dissemination, Health Systems Evidence, and the Evidence for Policy and Practice Information and Co-ordinating Centre evidence library for studies published in English between Jan 1, 1980, and July 12, 2011, that measured socioeconomic status and parasitologically confirmed malaria or clinical malaria in children. Unadjusted and adjusted effect estimates were combined in fixed-effects and random-effects meta-analyses, with a subgroup analysis for different measures of socioeconomic status. We used funnel plots and Egger's linear regression to test for publication bias.
Findings: Of 4696 studies reviewed, 20 met the criteria for inclusion in the qualitative analysis, and 15 of these reported the necessary data for inclusion in the meta-analysis. The odds of malaria infection were higher in the poorest children than in the least poor children (unadjusted odds ratio [OR] 1·66, 95% CI 1·35-2·05, p<0·001, I(2)=68%; adjusted OR 2·06, 1·42-2·97, p<0·001, I(2)=63%), an effect that was consistent across subgroups.
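As background on how such pooled estimates are typically obtained, the sketch below combines study-level odds ratios under a fixed-effects inverse-variance model. The individual study values are hypothetical, and the published meta-analysis may have used different software and settings; this only illustrates the pooling arithmetic.

```python
import math

# Hypothetical study-level odds ratios with 95% confidence intervals.
studies = [
    {"or": 1.8, "ci_low": 1.2, "ci_high": 2.7},
    {"or": 1.4, "ci_low": 0.9, "ci_high": 2.2},
    {"or": 2.1, "ci_low": 1.3, "ci_high": 3.4},
]

weights, log_ors = [], []
for s in studies:
    log_or = math.log(s["or"])
    # Standard error recovered from the 95% CI on the log-odds scale.
    se = (math.log(s["ci_high"]) - math.log(s["ci_low"])) / (2 * 1.96)
    weights.append(1 / se ** 2)
    log_ors.append(log_or)

pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"Pooled OR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * pooled_se):.2f}-"
      f"{math.exp(pooled + 1.96 * pooled_se):.2f})")
```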
Interpretation: Although we would not recommend discontinuation of existing malaria control efforts, we believe that increased investment in interventions to support socioeconomic development is warranted, since such interventions could prove highly effective and sustainable against malaria in the long term.
Funding: UK Department for International Development.
abstract_id: PUBMED:8066388
Treatment of malaria fever episodes among children in Malawi: results of a KAP survey. Caretakers of children (< 10 years of age) were questioned about management of pediatric malarial fever episodes in a nation-wide knowledge, attitudes, and practices survey conducted in Malawi. A total of 1,531 households in 30 randomly selected clusters of 51 households each were sampled and interviewed. Overall, 557 caretakers reported a fever in their child in the previous 2 weeks; 43% judged the illness as severe. Fifty-two percent of caretakers brought their febrile children to clinic. Clinic attendance was positively correlated with young age of the child (< 4 years), severe illness, and higher socioeconomic status. Seventy-four percent of clinic attenders gave their child an antimalarial; in contrast, only 42% of those not attending clinic gave an antimalarial. Optimal therapy (administration of an antimalarial promptly and at the proper dosage) was received by only 7% of febrile children. Children taken to clinic were twice as likely to receive optimal therapy as were non-attenders. Identification of critical points in the optimal therapy algorithm and characteristics of caretakers linked with sub-optimal therapy may help malaria control programs target specific groups and health education messages to improve treatment of malaria fever episodes.
abstract_id: PUBMED:16222012
Malaria and nutritional status among pre-school children: results from cross-sectional surveys in western Kenya. Protein-energy malnutrition (PEM) affects millions of children in the developing world. The relationship between malaria and PEM is controversial. The goal of this study was to evaluate whether undernutrition is associated with increased or decreased malaria attributable morbidity. Three cross-sectional surveys were conducted using insecticide-treated bed nets (ITNs) among children aged 0-36 months living in an area with intense malaria transmission. Data were collected on nutritional status, recent history of clinical illness, socioeconomic status, current malaria infection status, and hemoglobin. In multivariate models, stunted children had more malaria parasitemia (odds ratio [OR] 1.98, P < 0.0001), high-density parasitemia (OR 1.84; P < 0.0001), clinical malaria (OR 1.77; P < 0.06), and severe malarial anemia (OR 2.65; P < 0.0001) than nonstunted children. The association was evident in children with mild-to-moderate (-3 < height-for-age Z-score [HAZ] < -2) and severe stunting (HAZ < -3). The cross-sectional nature of the study limits the interpretation of causality, but the data provide further observational support that the presence of undernutrition, in particular chronic undernutrition, places children at higher, not lower risk of malaria-related morbidity.
abstract_id: PUBMED:9076404
Weakness of biochemical markers of nutritional and inflammatory status as prognostic indices for growth retardation and morbidity of young children in central Africa. Objective: To determine to what extent biochemical markers of the nutritional and inflammatory status of young children are related to subsequent growth retardation and morbidity.
Design: Population-based follow-up study of a cohort of children from admission to final survey round six months later.
Setting: Health area in Northern Kivu, Zaire.
Subjects: 842 children under two years of age of whom about one-third gave informed consent to capillary blood collection.
Main Outcome Measures: Concentration of albumin, transferrin, transthyretin, alpha 1-acid glycoprotein, C-reactive protein, and complement component C3 at baseline, and three and six months later. Incremental growth per 1 month, 3 months and 6 months of follow-up. Cumulative incidence of disease per 1 month and 3 months interval.
Results: A high proportion of children had low concentrations of transport proteins and high concentrations of acute-phase reactants. Weight growth and arm circumference growth did not vary significantly with respect to initial concentrations of biomarkers, but subsequent height growth was lower in children with high values of transferrin, alpha 1-acid glycoprotein, and complement component C3 at baseline. Cumulative incidence of malaria, respiratory illness, and diarrhoea was not significantly affected by the concentration of the biomarkers at baseline.
Conclusions: In this part of central Africa performing biochemical measurements should not be encouraged as a means for risk scoring in non-hospitalized children.
abstract_id: PUBMED:23749652
Hospitalized for fever? Understanding hospitalization for common illnesses among insured women in a low-income setting. Background: Health microinsurance is a financial tool that increases utilization of health care services among low-income persons. There is limited understanding of the illnesses for which insured persons are hospitalized. Analysis of health claims at VimoSEWA, an Indian microinsurance scheme, shows that a significant proportion of hospitalization among insured adult women is for common illnesses—fever, diarrhoea and malaria—that are amenable to outpatient treatment. This study aims to understand the factors that result in hospitalization for common illnesses.
Methods: The article uses a mixed methods approach. Quantitative data were collected from a household survey of 816 urban low-income households in Gujarat, India. The qualitative data are based on 10 in-depth case studies of insured women hospitalized for common illnesses and interviews with five providers. Quantitative and qualitative data were supplemented with data from the insurance scheme’s administrative records.
Results: Socioeconomic characteristics and morbidity patterns among insured and uninsured women were similar with fever the most commonly reported illness. While fever was the leading cause for hospitalization among insured women, no uninsured women were hospitalized for fever. Qualitative investigation indicates that 9 of 10 hospitalized women first sought outpatient treatment. Precipitating factors for hospitalization were either the persistence or worsening of symptoms. Factors that facilitated hospitalization included having insurance and the perceptions of doctors regarding the need for hospitalization.
Conclusion: In the absence of quality primary care, health insurance can lead to hospitalization for non-serious illnesses. Deterrents to hospitalization point away from member moral hazard; provider moral hazard cannot be ruled out. This study underscores the need for quality primary health care and its better integration with health microinsurance schemes.
abstract_id: PUBMED:32882032
High Frequency of Antibiotic Prescription in Children With Undifferentiated Febrile Illness in Kenya. Background: In low-resource, malaria-endemic settings, accurate diagnosis of febrile illness in children is challenging. The World Health Organization (WHO) currently recommends laboratory-confirmed diagnosis of malaria prior to starting treatment in stable children. Factors guiding management of children with undifferentiated febrile illness outside of malaria are not well understood.
Methods: This study examined clinical presentation and management of a cohort of febrile Kenyan children at 5 hospital/clinic sites from January 2014 to December 2017. Chi-squared and multivariate regression analyses were used to compare frequencies and correlate demographic, environmental, and clinical factors with patient diagnosis and prescription of antibiotics.
Results: Of 5735 total participants, 68% were prescribed antibiotic treatment (n = 3902), despite only 28% given a diagnosis of bacterial illness (n = 1589). Factors associated with prescription of antibiotic therapy included: negative malaria testing, reporting head, ears, eyes, nose and throat (HEENT) symptoms (ie, cough, runny nose), HEENT findings on exam (ie, nasal discharge, red throat), and having a flush toilet in the home (likely a surrogate for higher socioeconomic status).
Conclusion: In a cohort of acutely ill Kenyan children, prescription of antimalarial therapy and malaria test results were well correlated, whereas antibiotic treatment was prescribed empirically to most of those who tested malaria negative. Clinical management of febrile children in these settings is difficult, given the lack of diagnostic testing. Providers may benefit from improved clinical education and implementation of enhanced guidelines in this era of malaria testing, as their management strategies must rely primarily on critical thinking and decision-making skills.
abstract_id: PUBMED:10206266
How useful are anthropometric, clinical and dietary measurements of nutritional status as predictors of morbidity of young children in central Africa? Objective: To identify useful predictors of morbidity of young children in central Africa.
Method: Population-based follow-up study in Northern Kivu, Congo, of 842 children under two years of age who completed weekly follow-up interviews and health examinations during three months. Main outcome measures were crude and adjusted effects of summary measures of nutritional status on one-month cumulative incidence of malaria, respiratory illness, and diarrhoea.
Results: Anthropometric indicators appeared to perform badly in predicting morbidity. In contrast, non-anthropometric variables such as growth as judged by the caretaker, child's diet at the time of examination, and occurrence of disease in the month preceding the interval of observation were useful.
Conclusions: In the context of the 'Sick Child Initiative', simple tests and diagnostic tools to improve quality of both prevention and cure in first-level facilities need to be identified. Focusing on non-anthropometric indicators should be encouraged to offer a comprehensive appraisal of health status to all children.
abstract_id: PUBMED:27400781
Non-malaria fevers in a high malaria endemic area of Ghana. Background: The importance of fevers not due to malaria [non-malaria fevers, NMFs] in children in sub-Saharan Africa is increasingly being recognised. We have investigated the influence of exposure-related factors and placental malaria on the risk of non-malaria fevers among children in Kintampo, an area of Ghana with high malaria transmission.
Methods: Between 2008 and 2011, a cohort of 1855 newborns was enrolled and followed for at least 12 months. Episodes of illness were detected by passive case detection. The primary analysis covered the period from birth up to 12 months of age, with an exploratory analysis of a sub-group of children followed for up to 24 months.
Results: The incidence of all episodes of NMF in the first year of life (first and subsequent) was 1.60 per child-year (95 % CI 1.54, 1.66). The incidence of NMF was higher among infants with low birth weight [adjusted hazard ratio (aHR) 1.22 (95 % CI 1.04-1.42) p = 0.012], infants from households of poor socio-economic status [aHR 1.22 (95 % CI 1.02-1.46) p = 0.027] and infants living furthest from a health facility [aHR 1.20 (95 % CI 1.01-1.43) p = 0.037]. The incidence of all episodes of NMF was similar among infants born to mothers with or without placental malaria [aHR 0.97 (0.87, 1.08; p = 0.584)].
Conclusion: The incidence of NMF in infancy is high in the study area. The incidence of NMF is associated with low birth weight and poor socioeconomic status but not with placental malaria.
abstract_id: PUBMED:29661245
Socioeconomic health inequality in malaria indicators in rural western Kenya: evidence from a household malaria survey on burden and care-seeking behaviour. Background: Health inequality is a recognized barrier to achieving health-related development goals. Health-equality data are essential for evidence-based planning and assessing the effectiveness of initiatives to promote equity. Such data have been captured but have not always been analysed or used to manage programming. Health data were examined for microeconomic differences in malaria indices and associated malaria control initiatives in western Kenya.
Methods: Data was analysed from a malaria cross-sectional survey conducted in July 2012 among 2719 people in 1063 households in Siaya County, Kenya. Demographic factors, history of fever, malaria parasitaemia, malaria medication usage, insecticide-treated net (ITN) use and expenditure on malaria medications were collected. A composite socioeconomic status score was created using multiple correspondence analyses (MCA) of household assets; households were classified into wealth quintiles and dichotomized into poorest (lowest 3 quintiles; 60%) or less-poor (highest 2 quintiles; 40%). Prevalence rates were calculated using generalized linear modelling.
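For the wealth-classification step, a minimal sketch is shown below. It assumes the MCA-derived composite score has already been computed (random placeholder values stand in for it here) and only illustrates splitting households into quintiles and dichotomizing them into 'poorest' (lowest three quintiles) versus 'less-poor' (highest two), as described above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Placeholder for an MCA-derived household wealth score (hypothetical values).
households = pd.DataFrame({"wealth_score": rng.normal(size=1063)})

# Five wealth quintiles, then the study's dichotomy:
# quintiles 1-3 -> 'poorest' (60%), quintiles 4-5 -> 'less-poor' (40%).
households["quintile"] = pd.qcut(households["wealth_score"], 5, labels=[1, 2, 3, 4, 5])
households["group"] = np.where(households["quintile"].astype(int) <= 3, "poorest", "less-poor")
print(households["group"].value_counts(normalize=True))
```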
Results: Overall prevalence of malaria infection was 34.1%, with significantly higher prevalence in the poorest compared to less-poor households (37.5% versus 29.2%, adjusted prevalence ratio [aPR] 1.23; 95% CI = 1.08-1.41, p = 0.002). Care seeking (aPR = 0.95; 95% CI 0.87-1.04, p = 0.229), medication use (aPR = 0.94; 95% CI 0.87-1.00, p = 0.087) and ITN use (aPR = 0.96; 95% CI = 0.87-1.05, p = 0.397) were similar between households. Among all persons surveyed, 36.4% reported taking malaria medicines in the prior 2 weeks; 92% took artemether-lumefantrine, the recommended first-line malaria medication. In the poorest households, 4.9% used non-recommended medicines compared to 3.5% in less-poor (p = 0.332). Mean and standard deviation [SD] for expenditure on all malaria medications per person was US$0.38 [US$0.50]; the mean was US$0.35 [US$0.52] amongst the poorest households and US$0.40 [US$0.55] in less-poor households (p = 0.076). Expenditure on non-recommended malaria medicine was significantly higher in the poorest (mean US$1.36 [US$0.91]) compared to less-poor households (mean US$0.98 [US$0.80]; p = 0.039).
Conclusions: Inequalities in malaria infection and expenditures on potentially ineffective malaria medication between the poorest and less-poor households were evident in rural western Kenya. Findings highlight the benefits of using MCA to assess and monitor the health-equity impact of malaria prevention and control efforts at the microeconomic level.
Answer: The relationship between malaria illness among young children and low socioeconomic status appears to be complex, with evidence suggesting that malaria can be both a cause and a consequence of poverty.
One study conducted in Tanzania using nationally representative data found that households with a child who tested positive for malaria had a significantly lower wealth index, suggesting that malaria illness among young children is a cause of low socioeconomic status (PUBMED:22571516). This study used a simultaneous equation model and instrumental variables regression to account for environmental variables and endogeneity, indicating a potential causal relationship where malaria contributes to poverty.
Conversely, a systematic review and meta-analysis found that the risk of malaria in children aged 0-15 years is associated with socioeconomic status, with the poorest children having higher odds of malaria infection (PUBMED:23790353). This suggests that low socioeconomic status can also be a contributing factor to the prevalence of malaria, indicating a bidirectional relationship.
Additional studies have explored various aspects of this relationship. For example, a study in Malawi found that clinic attendance for pediatric malarial fever episodes was positively correlated with higher socioeconomic status (PUBMED:8066388). Another study in western Kenya found that undernutrition, which is often associated with poverty, places children at higher risk of malaria-related morbidity (PUBMED:16222012). However, a study in central Africa found that biochemical markers of nutritional and inflammatory status were not significant predictors of growth retardation and morbidity from diseases like malaria (PUBMED:9076404).
In summary, the evidence suggests that malaria illness among young children can be both a cause and a consequence of low socioeconomic status. Malaria can contribute to poverty by affecting the health and economic productivity of households, while poverty can increase the risk of malaria due to factors such as poor living conditions and limited access to preventive measures and healthcare services. |
Instruction: Pediatric fingertip injuries: do prophylactic antibiotics alter infection rates?
Abstracts:
abstract_id: PUBMED:18347491
Pediatric fingertip injuries: do prophylactic antibiotics alter infection rates? Study Objective: Fingertip injuries are common in the pediatric population. Considerable controversy exists as to whether prophylactic antibiotics are necessary after repair of these injuries. Our goal was to compare the rate of bacterial infections among subgroups treated with and without prophylactic antibiotics. The study hypothesis was that infection rates were similar in the 2 groups.
Methods: This was a prospective randomized control trial of pediatric patients presenting to an urban children's hospital with trauma to the distal fingertip, requiring repair. Patients were randomized to 2 groups: group 1 received no antibiotics, and group 2 received antibiotics (cephalexin). Repairs were performed in a standardized fashion, and all patients were reevaluated in the same emergency department in 48 hours and again by phone 7 days after repair. The primary outcome measure was the rate of infection at 7 days after repair.
Results: One hundred forty-six patients were initially enrolled in the study, 11 patients were withdrawn before study completion, 69 subjects were randomized to the no-antibiotic group, and 66 subjects were randomized to the antibiotic group. There was 1 infection in each group at 7 days after repair. The infection rate was 1.45% (95% confidence interval, 0.04%-7.81%) for the no-antibiotic group and was 1.52% (95% confidence interval, 0.04%-8.16%) for the antibiotic group, not statistically significant (P = 1.00).
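The wide confidence intervals quoted above follow from observing a single infection in each arm. The reported bounds are consistent with an exact (Clopper-Pearson) binomial interval; treating that as the authors' method is an assumption, but the sketch below reproduces the calculation.

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) binomial confidence interval for k events in n trials."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

for label, k, n in [("no-antibiotic group", 1, 69), ("antibiotic group", 1, 66)]:
    lo, hi = clopper_pearson(k, n)
    print(f"{label}: rate {k / n:.2%}, 95% CI {lo:.2%}-{hi:.2%}")
```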
Conclusions: This study suggests that routine prophylactic antibiotics do not reduce the rate of infection after repair of distal fingertip injuries.
abstract_id: PUBMED:33711805
Contemporary management of pediatric open skull fractures: a multicenter pediatric trauma center study. Objective: The authors sought to evaluate the contemporary management of pediatric open skull fractures and assess the impact of variations in antibiotic and operative management on the incidence of infectious complications.
Methods: The records of children who presented from 2009 to 2017 to 6 pediatric trauma centers with an open calvarial skull fracture were reviewed. Data collected included mechanism and anatomical site of injury; presence and depth of fracture depression; antibiotic choice, route, and duration; operative management; and infectious complications.
Results: Of the fractures among the 138 patients included in the study, 48.6% were frontal and 80.4% were depressed; 58.7% of patients underwent fragment elevation. The average duration of intravenous antibiotics was 4.6 (range 0-21) days. Only 53 patients (38.4%) received a single intravenous antibiotic for fewer than 4 days, and 56 (40.6%) received oral antibiotics for an average of 7.3 (range 1-20) days. Wounds were managed exclusively in the emergency department in 28.3% of patients. Two children had infectious complications, including a late-presenting hardware infection and a superficial wound infection. There were no cases of meningitis or intracranial abscess. Neither antibiotic spectrum, duration, nor bedside irrigation was associated with the development of infection.
Conclusions: The incidence of infectious complications in this population of children with open skull fractures was low and was not associated with the antibiotic strategy or site of wound care. Most minimally contaminated open skull fractures are probably best managed with a short duration of a single antibiotic, and emergency department closure is appropriate unless there is significant contamination or fragment elevation is necessary.
abstract_id: PUBMED:28344521
Timing of Debridement and Infection Rates in Open Fractures of the Hand: A Systematic Review. Background: Literature on open fracture infections has focused primarily on long bones, with limited guidelines available for open hand fractures. In this study, we systematically review the available hand surgery literature to determine infection rates and the effect of debridement timing and antibiotic administration. Methods: Searches of the MEDLINE, EMBASE, and Cochrane computerized literature databases and manual bibliography searches were performed. Descriptive/quantitative data were extracted, and a meta-analysis of different patient cohorts and treatment modalities was performed to compare infection rates. Results: The initial search yielded 61 references. Twelve articles (4 prospective, 8 retrospective) on open hand fractures were included (1669 open fractures). There were 77 total infections (4.6%): 61 infections (4.4%) among the 1391 patients who received preoperative antibiotics and 16 (9.4%) among the 171 patients who did not receive antibiotics. In 7 studies (1106 open fractures), superficial infections (requiring oral antibiotics only) accounted for 86%, whereas deep infections (requiring operative debridement) accounted for 14%. Debridement within 6 hours of injury (2 studies, 188 fractures) resulted in a 4.2% infection rate, whereas debridement within 12 hours of injury (1 study, 193 fractures) resulted in a 3.6% infection rate. Two studies found no correlation of infection and timing to debridement. Conclusions: Overall, the infection rate after open hand fracture remains relatively low. Correlation does exist between the administration of antibiotics and infection, but the majority of infections can be treated with antibiotics alone. Timing of debridement has not been shown to alter infection rates.
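As a rough illustration of the antibiotic-infection association summarized above, the pooled counts can be turned into a crude odds ratio. This back-of-the-envelope calculation simply aggregates across heterogeneous studies and is not the review's meta-analytic estimate.

```python
# Crude pooled odds ratio for infection without vs. with preoperative
# antibiotics, using the aggregate counts quoted in the abstract.
inf_with, n_with = 61, 1391        # infections / patients given antibiotics
inf_without, n_without = 16, 171   # infections / patients not given antibiotics

odds_with = inf_with / (n_with - inf_with)
odds_without = inf_without / (n_without - inf_without)
print(f"Crude OR (no antibiotics vs antibiotics): {odds_without / odds_with:.2f}")
```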
abstract_id: PUBMED:36522213
Comparison of nonoperative versus operative management in pediatric gustilo-anderson type I open tibia fractures. Background: Recent studies suggest pediatric Gustilo-Anderson type I fractures, especially of the upper extremity, may be adequately treated without formal operative debridement, though few tibial fractures have been included in these studies. The purpose of this study is to provide initial data suggesting whether Gustilo-Anderson type I tibia fractures may be safely treated nonoperatively.
Methods: Institutional retrospective review was performed for children with type I tibial fractures managed with and without operative debridement from 1999 through 2020. Incomplete follow-up, polytrauma, and delayed diagnosis of greater than 12 h since the time of injury were criteria for exclusion. Data including age, sex, mechanism of injury, management, time-to-antibiotic administration, and complications were recorded.
Results: Thirty-three patients met inclusion criteria and were followed to union. Average age was 9.9 ± 3.7 years. All patients were evaluated in the emergency department and received intravenous antibiotics within 8 h of presentation. Median time-to-antibiotics was 2 h. All patients received cefazolin except one who received clindamycin at an outside hospital and subsequent cephalexin. Three patients (8.8%) received augmentation with gentamicin. Twenty-one patients (63.6%) underwent operative irrigation and debridement (I&D), and of those, sixteen underwent surgical fixation of their fracture. Twelve (36.4%) patients had bedside I&D with saline under conscious sedation, with one requiring subsequent operative I&D and intramedullary nailing. Three infections (14.3%) occurred in the operative group and none in the nonoperative group. Complications among the nonoperative patients include delayed union (8.3%), angulation (8.3%), and refracture (8.3%). Complications among the operative patients include delayed union (9.5%), angulation (14.3%), and one patient experienced both (4.8%). Other operative group complications include leg-length discrepancy (4.8%), heterotopic ossification (4.8%), and symptomatic hardware (4.8%).
Conclusion: No infections were observed in a small group of children with type I tibia fractures treated with bedside debridement and antibiotics, and similar non-infectious complication rates were observed relative to operative debridement. This study provides initial data that suggests nonoperative management of type I tibial fractures may be safe and supports the development of larger studies.
abstract_id: PUBMED:36974281
Suture Fixation of Subacute Pediatric Seymour Fractures. Seymour fractures are common injuries in the pediatric population. High rates of deep infection have been reported due to delayed presentation and subsequent treatment. This report describes the case of a 13-year-old male wrestler who presented 1 month after a finger injury that was later diagnosed as a subacute Seymour fracture with osteomyelitis. The patient underwent irrigation and debridement and fracture reduction stabilized with nonabsorbable suture fixation. After 6 weeks of intravenous antibiotics, the patient was recovering well, with radiographic evidence of fracture healing and clearance of infection. This case highlights the use of a single suture as a treatment option for fixation of unstable Seymour fractures with delayed presentation. The management of acute open distal phalangeal physeal fractures is well described in the literature; however, further investigations are warranted into the optimal management of chronically infected digits with unstable Seymour fractures.
abstract_id: PUBMED:29387302
Systemic Preoperative Antibiotics with Mandible Fractures: Are They Indicated at the Time of Injury? Mandible fractures are the most common result of facial trauma. The proximity of oral flora to the site of both the injury and resulting surgical instrumentation makes managing infection a unique challenge. The benefit of antibiotic prophylaxis at the time of surgical treatment of mandible fractures is well established. However, the routine use of antibiotics between the time of injury and surgery is of unclear benefit. We aim to define the role of antibiotics in the preoperative period: from the time of injury to surgical intervention. Demographic and clinical data were collected retrospectively on all patients who were treated for mandible fracture by the Division of Plastic and Reconstructive Surgery at our institution between 2003 and 2013. The use of both preoperative (between injury and surgery) and perioperative (at the time of surgery) systemic antibiotics was recorded along with the incidence of postoperative infections and other complications. Complete data were available for 269 patients. Of the 216 patients who received preoperative antibiotics, 22 (10%) developed an infection postoperatively. Of the 53 patients who did not receive preoperative antibiotics, 2 (4%) developed infection ( p = 0.184). Likewise, preoperative antibiotics were not significantly associated with hardware complication rates. In our retrospective review, the use of antibiotics between injury and surgical repair had no impact on postoperative infection rates. These data suggest that preoperative antibiotic use may actually be associated with an increased incidence of postoperative infection. Our results do not support the routine use of antibiotics between injury and surgical repair in patients with mandible fractures.
abstract_id: PUBMED:30214163
A cohort study to evaluate infection prevention protocol in pediatric trauma patients with blunt splenic injury in a Dutch level 1 trauma center. Purpose: Asplenic patients are at increased risk for the development of overwhelming postsplenectomy infection (OPSI) syndrome. It is believed that adequate immunization, antimicrobial prophylaxis, as well as appropriate education concerning risks on severe infection lead to the decreased incidence of OPSI. The aim of this study was to analyze the methods used to prevent OPSI in trauma patients splenectomized before the age of 18.
Patients And Methods: A retrospective, single-center study of all pediatric patients sustaining blunt splenic injury (BSI) managed at our level 1 trauma center from January 1979 to March 2012 was performed. A questionnaire was sent to all the included patients to determine the level of knowledge concerning infection risks, the use of antibiotics, and compliance to vaccination recommendations. Furthermore, we investigated whether the implementation of guidelines in 2003 and 2011 resulted in higher vaccination rates.
Results: We included 116 children with BSI. A total of 93 completed interviews were eligible for analysis, resulting in a total response rate of 80% and 1,116 patient years. Twenty-seven patients were splenectomized, and 66 patients were treated by a spleen preserving therapy (including embolization). Only two out of 27 splenectomized patients were adequately vaccinated, five patients without a spleen used prophylactic antibiotics, and about half of the asplenic patients had adequate knowledge of the risk that asplenia entails. A total of 22/27 splenectomized patients were neither adequately vaccinated nor received prophylactic antibiotics. There was no OPSI seen in our study population during the 1,116 follow-up years.
Conclusion: The vaccination status, the level of knowledge concerning prevention of OPSI, and the use of prophylactic antibiotics are suboptimal in pediatric patients treated for BSI. Therefore, we created a new follow-up treatment guideline to ensure that these patients receive preventive coverage adequate to current standards.
abstract_id: PUBMED:36534104
Management of Dog Bite Injuries: Procedural Sedation, Infection, and Operative Indications at a Single-Institution Level I Pediatric Trauma Hospital. Background: Dog bite injuries are common within the pediatric population. Currently, there are inconclusive data on best sedation practice, antibiotic regimen, and need for plastic surgery referrals for treatment of dog bite injuries in the emergency department (ED) versus operating room (OR). This study set out to determine sedation practice, infection management, and necessity for plastic surgery referral at a level I pediatric trauma center.
Methods: A retrospective review of all pediatric (0-18 years old) dog bites documented in electronic medical records from January 1, 2010, to December 31, 2019, was performed. "Bitten by dog" encounters were identified by International Classification of Diseases, Ninth Revision and Tenth Revision codes E906.0 and W54.0, W54.0XXA, and W54, respectively. Data gathered included age, gender, month of injury, circumstance of injury, injury characteristics, location of repair, person performing repair, sedation (if used, then length of sedation), inpatient admission, antibiotics prescribed, dog characteristics (breed, size, sex, age, relationship to patient), and complications. Summary statistics were calculated as mean ± SD. Comparisons for nominal variables were performed using the χ2 test. All analyses were performed using Stata v.16.1.
Results: A total of 1438 pediatric patients were included in this study over a 10-year period. Of injuries requiring repair (n = 846), most repairs were performed in the ED (97.1% [822/846]), whereas 24 (2.8%) required repair in the OR. Of the bites that required repair (n = 846), 81.1% (686/846) were performed by an emergency medicine physician and 147 (17.4%) by plastic surgeons. Procedural sedation in the ED was performed in 146 repairs (17.3%). Documented sedation time ranged from 10 to 96 minutes. Most patients received a prescription for antibiotics (80.5%), usually amoxicillin/clavulanate (90.8%). Infection was the most common sequela (9.5%). There was no significant difference in infection rates between repairs performed in the ED versus those in the OR.
Conclusion: Our study indicates that pediatric dog bite injuries can be successfully managed in the ED. Procedural sedation demonstrated no increased safety risk compared with the OR. Rates of infection were also not significantly higher in repairs done in the ED versus those taken to the operating theater.
abstract_id: PUBMED:26604528
Pediatric open globe injury: A review of the literature. Open globe injury (OGI) is a severe form of eye trauma estimated at 2-3.8/100,000 in the United States. Most pediatric cases occur at home and are the result of sharp object penetration. The aim of this article is to review the epidemiology, diagnosis, management, and prognosis of this condition by conducting a systematic literature search with inclusion of all case series on pediatric OGI published between 1996 and 2015. Diagnosis of OGI is based on patient history and clinical examination supplemented with imaging, especially computed tomography when indicated. Few prospective studies exist for the management of OGI in pediatric patients, but adult recommendations are often followed with success. The main goals of surgical management are to repair the open globe and remove intraocular foreign bodies. Systemic antibiotics are recommended as medical prophylaxis against globe infection, or endophthalmitis. Other complications are similar to those seen in adults, with the added focus of amblyopia therapy in children. Severe vision decline is most likely due to traumatic cataracts. The ocular trauma score, a system devised to predict final visual acuity (VA) in adults, has proven to be of prognostic value in pediatric OGI as well. Factors indicating poor visual prognosis are young age, poor initial VA, posterior eye involvement, long wound length, globe rupture, lens involvement, vitreous hemorrhage, retinal detachment, and endophthalmitis. A thorough understanding of OGI and the key differences in epidemiology, diagnosis, management, and prognosis between adults and children is critical to timely prevention of posttraumatic vision loss early in life.
abstract_id: PUBMED:35883959
Management of Pediatric Facial Burns with Zinc-Hyaluronan Gel. Zinc-hyaluronan-containing burn dressings have been associated with enhanced reepithelialization and low infection rates, although their effectiveness has not yet been investigated in pediatric facial thermal injuries. This single-arm, retrospective cohort study assessed the characteristics of 23 children (aged ≤17 years) with facial superficial partial-thickness burns and the wound closure capabilities of the applied zinc-hyaluronan gel. Patients were admitted consecutively to the Pediatric Surgery Division in Pécs, Hungary, between 1 January 2016 and 15 October 2021. The mean age of the children was 6.2 years; 30.4% of them were younger than 1 year. An average of 3% of the total body surface area was injured in the facial region, and 47.8% of the patients had other areas damaged as well, most frequently the left upper limb (30.4%). The mean time until complete reepithelialization was 7.9 days, and the children spent 2 days in the hospital. Wound cultures revealed normal bacterial growth in all cases, and follow-up examinations found no hypertrophic scarring. In conclusion, pediatric facial superficial partial-thickness burns are prevalent during infancy and coincide with left upper limb injuries. Rapid wound closure and low complication rates account for the moderate length of hospitalization. These benefits, along with the gel's ease of applicability and spontaneous separation, are linked to child-friendly burn care.
Answer: The study "Pediatric fingertip injuries: do prophylactic antibiotics alter infection rates?" (PUBMED:18347491) aimed to determine whether the use of prophylactic antibiotics would reduce the rate of infection after repair of distal fingertip injuries in pediatric patients. The study was a prospective randomized control trial that included pediatric patients presenting with trauma to the distal fingertip that required repair. Patients were randomized into two groups: one group did not receive antibiotics, and the other group received cephalexin as a prophylactic antibiotic. Both groups were reevaluated 48 hours after repair and again by phone 7 days after repair. The primary outcome measure was the rate of infection at 7 days post-repair.
The results showed that there was one infection in each group at 7 days after repair. The infection rate was 1.45% for the no-antibiotic group and 1.52% for the antibiotic group, with no statistically significant difference between the two groups (P = 1.00). Based on these findings, the study concluded that routine prophylactic antibiotics do not reduce the rate of infection after repair of distal fingertip injuries in the pediatric population. |
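As a rough illustration of how such a comparison of two small infection proportions can be tested, the sketch below runs Fisher's exact test on counts back-calculated from the reported percentages (1/69 ≈ 1.45% and 1/66 ≈ 1.52%). The denominators are inferred assumptions, since the answer does not report group sizes.

    # Illustrative two-group comparison of infection rates (hypothetical counts
    # inferred from the reported percentages: 1/69 = 1.45%, 1/66 = 1.52%).
    from scipy.stats import fisher_exact

    table = [[1, 68],   # no-antibiotic group: infected, not infected
             [1, 65]]   # antibiotic group:    infected, not infected
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")  # p is ~1.0, consistent with P = 1.00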
Instruction: Estimating the Cost-Effectiveness of Implementation: Is Sufficient Evidence Available?
Abstracts:
abstract_id: PUBMED:15566221
Evidence-based medicine and cost-effectiveness analysis in ophthalmology. Goal: To familiarize the reader with the term evidence-based medicine (EBM), to explain the principle of cost-effectiveness analysis (cost versus benefit), and to show its usefulness in comparing the effectiveness of different medical procedures.
Method: Using a few examples, this article explains the relevance and calculation of the key parameters of cost-effectiveness (CE) analysis, such as the utility value (UV) and quality-adjusted life years (QALY). In addition, the calculation of UV and QALY for cataract surgery, including its complications, is provided.
Results: According to this method, highly cost-effective procedures include laser photocoagulation and cryocoagulation for the early stages of retinopathy of prematurity, treatment of amblyopia, cataract surgery of one or both eyes, and, among vitreoretinal procedures, early vitrectomy for hemophthalmus in proliferative diabetic retinopathy and grid laser photocoagulation for diabetic macular edema or for worsening of visual acuity due to branch retinal vein occlusion. Procedures with low cost-effectiveness include treatment of central retinal artery occlusion with anterior chamber paracentesis or CO2 inhalation, and photodynamic therapy for choroidal neovascularization in age-related macular degeneration when the visual acuity of the better eye is 20/200.
Conclusion: Cost-effectiveness analysis is a promising method for evaluating the success of a medical procedure by comparing its final effect with its financial costs. In evaluating the effectiveness of individual procedures, three main aspects are considered: the patient's subjective sense of how the disease affects their life, the objective results of clinical examination, and the financial costs of the procedure. By this measure, cataract surgery, as well as procedures in pediatric ophthalmology, is among the most cost-effective surgical methods.
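The utility-based arithmetic behind UV, QALY, and cost-effectiveness ratios can be made concrete with a short worked sketch. The numbers below are hypothetical assumptions for illustration only, not figures taken from the abstract.

    # Minimal sketch of the standard QALY / ICER arithmetic described above.
    # All numbers are hypothetical and for illustration only.
    utility_before = 0.65      # utility value (UV) with visually significant cataract
    utility_after = 0.85       # UV after successful cataract surgery
    years_of_benefit = 12      # assumed remaining years over which the benefit persists

    qalys_gained = (utility_after - utility_before) * years_of_benefit   # 2.4 QALYs

    incremental_cost = 2500.0  # assumed extra cost of surgery vs. no surgery (EUR)
    icer = incremental_cost / qalys_gained
    print(f"QALYs gained = {qalys_gained:.2f}, ICER = {icer:.0f} EUR per QALY")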
abstract_id: PUBMED:34711358
Do Centers for Medicare and Medicaid Services Quality Measures Reflect Cost-Effectiveness Evidence? Objectives: Despite the importance of the quality measures used by the Centers for Medicare and Medicaid Services, the underlying cost-effectiveness evidence has not been examined. This study aimed to analyze the cost-effectiveness evidence associated with the Centers for Medicare and Medicaid Services quality measures.
Methods: After classifying 23 quality measures with Donabedian's structure-process-outcome quality of care model, we identified cost-effectiveness analyses (CEAs) relevant to these measures from the Tufts Medical Center CEA Registry based on the PICOTS (population, intervention, comparator, outcome, time horizon, and setting) framework. We then summarized available incremental cost-effectiveness ratios (ICERs) to determine the cost-effectiveness of the quality measures.
Results: The 23 quality measures were categorized into 14 process, 7 outcome, and 2 structure measures. Cost-effectiveness evidence was available for only 8 of the 14 process measures. Two measures (Tobacco Screening and Hemoglobin A1c Control) were cost-saving and improved quality-adjusted life-years (QALYs), and 5 (Depression Screening, Influenza Immunization, Colon Cancer Screening, Breast Cancer Screening, and Statin Therapy) were highly cost-effective (median ICER ≤ $50 000/QALY). The remaining measure (Fall Screening) had a median ICER of $120 000/QALY. No CEAs were available for 15 measures: 10 defined by subjective patient ratings and 5 that employed outcome measures without specifying an intervention or process.
Conclusions: When relevant CEAs were available, cost-effectiveness evidence was consistent with quality measures (measures were cost-effective). Nevertheless, most quality measures were based on subjective ratings or outcome measures, posing a challenge in identifying supporting economic evidence. Refining and aligning quality measures with cost-effectiveness evidence can help further improve healthcare efficiency by demonstrating that they are good indicators of both quality and cost-effectiveness of care.
abstract_id: PUBMED:37597696
GRADE guidance 23: considering cost-effectiveness evidence in moving from evidence to health-related recommendations. Background: This is the 23rd in a series of articles describing the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to grading the certainty of evidence and strength of recommendations for systematic reviews, health technology assessments, and clinical guideline development.
Objectives: We outline how resource utilization and cost-effectiveness analyses are integrated into health-related recommendations, using the GRADE Evidence to Decision (EtD) frameworks.
Study Design And Setting: Through iterative discussions and refinement, in-person and online meetings, and e-mail communication, we developed draft guidance to incorporate economic evidence into the formulation of health-related recommendations. We developed scenarios to operationalize the guidance. We presented a summary of the results to members of the GRADE Economic Evaluation Project Group.
Results: We describe how to estimate the cost of preventing (or achieving) an event to inform assessments of cost-effectiveness of alternative treatments, when there are no published economic evaluations. Evidence profiles and Summary of Findings tables based on systematic reviews of cost-effectiveness analyses can be created to provide top-level summaries of results and quality of multiple published economic evaluations. We also describe how this information could be integrated in GRADE's EtD frameworks to inform health-related recommendations. Three scenarios representing various levels of available cost-effectiveness evidence were used to illustrate the integration process.
Conclusion: This GRADE guidance provides practical information for presenting cost-effectiveness data and its integration in the development of health-related recommendations, using the EtD frameworks.
abstract_id: PUBMED:27599770
Evidence for and cost-effectiveness of physiotherapy in haemophilia: a Dutch perspective. Introduction: Musculoskeletal impact of haemophilia justifies physiotherapy throughout life. Recently the Dutch Health Care Institute constrained their 'list of chronic conditions', and withdrew financial coverage of physiotherapy for elderly persons with haemophilia (PWH). This decision was based on lack of scientific evidence and not being in accordance with 'state of science and practice'.
Methods: In general, evidence regarding physiotherapy is limited, and especially in rare diseases like haemophilia. 'Evidence based medicine' classifies and recommends evidence based on meta-analyses, systematic reviews and randomized controlled trials, but also means integrating evidence with individual clinical expertise. For the evaluation of physiotherapy - usually individualized treatment - case studies, observational studies and Case Based Reasoning may be more beneficial.
Results: Overall annual treatment costs for haemophilia care in the Netherlands are estimated at over 100 million Euros, of which 95% is accounted for by clotting factor concentrates. The cost of physiotherapy assessments in all seven Dutch HTCs (seven centres for adult PWH and seven centres for children) is limited to approximately 500 000 Euros annually. The costs of the actual physiotherapy sessions, carried out in the Dutch first-line care system, will also not exceed 500 000 Euros. Thus, implementing physiotherapy in haemophilia care in the Netherlands in an optimal way would cost less than 1% of the total budget.
Aim: The present paper describes the role of physiotherapy in haemophilia care including available evidence and providing suggestions regarding generation of evidence. Establishing the effectiveness and cost-effectiveness of physiotherapy in haemophilia care is a major topic for the next decennium.
abstract_id: PUBMED:19826516
Users' guide to the orthopaedic literature: what is a cost-effectiveness analysis? As the costs of healthcare continue to rise, orthopaedic surgeons are being pressured to practice cost-effective healthcare. Consequently, economic evaluations of treatment options are being reported more commonly in the medical and surgical literature. As new orthopaedic procedures and treatments may improve patient outcome and function over traditional treatment options, the effect of the potentially higher costs of new treatments should be formally evaluated. Unfortunately, the resources available for healthcare spending are typically limited. Therefore, cost-effectiveness analyses have become an important and useful tool in informing which procedure or treatment to implement into practice. Cost-effectiveness analysis is a type of economic analysis that compares both the clinical outcomes and the costs of new treatment options against current treatment options or standards of care. For a clinician to be able to apply the results of a cost-effectiveness analysis to their practice, they must be able to critically review the available literature. Conducting an economic analysis is a challenging process, which has resulted in a number of published economic analyses that are of lower quality and may be fraught with bias. It is important that the reader of an economic analysis or cost-effectiveness analysis have the skills required to properly evaluate and critically appraise the methodology used before applying the recommendations to their practice. Using the principles of evidence-based medicine and the questions outlined in the Journal of the American Medical Association's Users' Guide to the Medical Literature, this article attempts to illustrate how to critically appraise a cost-effectiveness analysis in the orthopaedic surgery literature.
abstract_id: PUBMED:27021746
Estimating the Cost-Effectiveness of Implementation: Is Sufficient Evidence Available? Background: Timely implementation of recommended interventions can provide health benefits to patients and cost savings to the health service provider. Effective approaches to increase the implementation of guidance are needed. Since investment in activities that improve implementation competes for funding against other health generating interventions, it should be assessed in term of its costs and benefits.
Objective: In 2010, the National Institute for Health and Care Excellence released a clinical guideline recommending natriuretic peptide (NP) testing in patients with suspected heart failure. However, its implementation in practice was variable across the National Health Service in England. This study demonstrates the use of multi-period analysis together with diffusion curves to estimate the value of investing in implementation activities to increase uptake of NP testing.
Methods: Diffusion curves were estimated based on historic data to produce predictions of future utilization. The value of an implementation activity (given its expected costs and effectiveness) was estimated. Both a static population and a multi-period analysis were undertaken.
Results: The value of implementation interventions encouraging the utilization of NP testing is shown to decrease over time as natural diffusion occurs. Sensitivity analyses indicated that the value of the implementation activity depends on its efficacy and on the population size.
Conclusions: Value of implementation can help inform policy decisions of how to invest in implementation activities even in situations in which data are sparse. Multi-period analysis is essential to accurately quantify the time profile of the value of implementation given the natural diffusion of the intervention and the incidence of the disease.
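To make the diffusion-and-value logic concrete, the sketch below mirrors the kind of multi-period calculation the abstract describes: predict future uptake from a diffusion curve, then value an implementation activity as the net benefit of the extra uptake it is expected to generate, less its cost. The logistic curve and every parameter value are assumptions for illustration; the study's actual data and model are not reproduced here.

    # Minimal sketch of a multi-period value-of-implementation calculation.
    # All parameters are hypothetical; the abstract itself gives no numbers.
    import math

    def logistic_uptake(t, ceiling=0.9, rate=0.5, midpoint=4.0):
        """Predicted proportion of eligible patients receiving NP testing in year t."""
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    eligible_per_year = 50_000        # incident patients with suspected heart failure
    net_benefit_per_patient = 40.0    # monetary net benefit of testing one patient (GBP)
    uptake_boost = 0.10               # assumed absolute uptake increase from the activity
    implementation_cost = 150_000.0   # assumed one-off cost of the activity
    horizon_years = 5

    value = 0.0
    for t in range(1, horizon_years + 1):
        natural = logistic_uptake(t)
        boosted = min(natural + uptake_boost, 1.0)   # extra uptake shrinks as diffusion saturates
        value += (boosted - natural) * eligible_per_year * net_benefit_per_patient

    print(f"Expected value of implementation: {value - implementation_cost:,.0f} GBP")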
abstract_id: PUBMED:28675422
Reviving Cochrane's contribution to evidence-based medicine: bridging the gap between evidence of efficacy and evidence of effectiveness and cost-effectiveness. Throughout the quarter century since the advent of evidence-based medicine (EBM), medical research has prioritized 'efficacy' (i.e. internal validity) using randomized controlled trials. EBM has consistently neglected 'effectiveness' and 'cost-effectiveness', identified in the pioneering work of Archie Cochrane as essential for establishing the external (i.e. clinical) validity of health care interventions. Neither Cochrane nor other early pioneers appear to have foreseen the extent to which EBM would be appropriated by the pharmaceutical and medical devices industries, which are responsible for extensive biases in clinical research due to selective reporting, exaggeration of benefits, minimization of risks, and misrepresentation of data. The promise of EBM to effect transformational change in health care will remain unfulfilled until (i) studies of effectiveness and cost-effectiveness are pursued with some of the same fervour that previously succeeded in elevating the status of the randomized controlled trial, and (ii) ways are found to defeat threats to scientific integrity posed by commercial conflicts of interest.
abstract_id: PUBMED:23804510
Evidence synthesis for decision making 6: embedding evidence synthesis in probabilistic cost-effectiveness analysis. When multiple parameters are estimated from the same synthesis model, it is likely that correlations will be induced between them. Network meta-analysis (mixed treatment comparisons) is one example where such correlations occur, along with meta-regression and syntheses involving multiple related outcomes. These correlations may affect the uncertainty in incremental net benefit when treatment options are compared in a probabilistic decision model, and it is therefore essential that methods are adopted that propagate the joint parameter uncertainty, including correlation structure, through the cost-effectiveness model. This tutorial paper sets out 4 generic approaches to evidence synthesis that are compatible with probabilistic cost-effectiveness analysis. The first is evidence synthesis by Bayesian posterior estimation and posterior sampling where other parameters of the cost-effectiveness model can be incorporated into the same software platform. Bayesian Markov chain Monte Carlo simulation methods with WinBUGS software are the most popular choice for this option. A second possibility is to conduct evidence synthesis by Bayesian posterior estimation and then export the posterior samples to another package where other parameters are generated and the cost-effectiveness model is evaluated. Frequentist methods of parameter estimation followed by forward Monte Carlo simulation from the maximum likelihood estimates and their variance-covariance matrix represent a third approach. A fourth option is bootstrap resampling--a frequentist simulation approach to parameter uncertainty. This tutorial paper also provides guidance on how to identify situations in which no correlations exist and therefore simpler approaches can be adopted. Software suitable for transferring data between different packages, and software that provides a user-friendly interface for integrated software platforms, offering investigators a flexible way of examining alternative scenarios, are reviewed.
abstract_id: PUBMED:27284904
Does the use of efficacy or effectiveness evidence in cost-effectiveness analysis matter? Objective: To test the association of clinical evidence type, efficacy-based or effectiveness-based ("E"), versus whether or not asthma interventions' cost-effectiveness findings are favorable.
Data Sources: We conducted a systematic review of PubMed, EMBASE, Tufts CEA registry, Cochrane CENTRAL, and the UK National Health Services Economic Evaluation Database from 2009 to 2014.
Study Selection: All cost-effectiveness studies evaluating asthma medication(s) were included. Clinical evidence type, "E," was classified as efficacy-based if the evidence was from an explanatory randomized controlled trial(s) or meta-analysis, while evidence from pragmatic trial(s) or observational study(s) was classified as effectiveness-based. We defined three times the World Health Organization cost-effectiveness willingness-to-pay (WTP) threshold or less as a favorable cost-effectiveness finding. Logistic regression tested the likelihood of favorable versus unfavorable cost-effectiveness findings against the type of "E."
Results And Conclusions: 25 cost-effectiveness studies were included. Ten (40.0%) studies were effectiveness-based, whereas 15 (60.0%) were efficacy-based. Of 17 studies using endpoints that could be compared to the WTP threshold, 7 of 8 (87.5%) effectiveness-based studies yielded favorable cost-effectiveness results, whereas 4 of 9 (44.4%) efficacy-based studies yielded favorable cost-effectiveness results. The adjusted odds ratio was 15.12 (95% confidence interval, 0.59 to 388.75) for effectiveness-based versus efficacy-based studies achieving favorable cost-effectiveness findings. More asthma cost-effectiveness studies used efficacy-based evidence. Studies using effectiveness-based evidence trended toward being more likely to report favorable cost-effectiveness findings than those using efficacy-based evidence. Health policy decision makers should pay attention to the type of clinical evidence used in cost-effectiveness studies for accurate interpretation and application.
abstract_id: PUBMED:21329415
Compilation of cost-effectiveness evidence for different heart conditions and treatment strategies. Objectives: Despite the continuing interest in health economic research, we could find no accessible data set on cost-effectiveness, useful as practical information to decision makers who must allocate scarce resources within the cardiovascular field. The aim of this paper was to present cost-effectiveness ratios, based on a systematic literature search for the treatment of heart diseases.
Design: A comprehensive literature search on cost-effectiveness analyses of intervention strategies for the treatment of heart diseases was conducted. We compiled available cost-effectiveness ratios for different heart conditions and treatment strategies, in a cost-effectiveness ranking table. The cost-effectiveness ratios were expressed as a cost per quality adjusted life year (QALY) or life year gained.
Results: Cost-effectiveness ratios, ranging from dominant to those costing more than 1,000,000 Euros per QALY gained, and bibliographic references are provided for. The table was categorized according to disease group, making the ranking table readily available.
Conclusions: Cost-effectiveness ranking tables provide a means of presenting cost-effectiveness evidence. They provide valid information within a limited space aiding decision makers on the allocation of health care resources. This paper represents an extensive compilation of health economic evidence for the treatment of heart diseases.
Answer: The question "Estimating the Cost Effectiveness of Implementation: Is Sufficient Evidence Available?" relates to the availability and adequacy of evidence to estimate the cost-effectiveness of implementing specific healthcare interventions or practices. The abstracts provided offer insights into various aspects of cost-effectiveness analysis (CEA) in healthcare and the evidence required to support such analyses.
One study demonstrates the use of multi-period analysis together with diffusion curves to estimate the value of investing in implementation activities to increase uptake of a recommended intervention, in this case, natriuretic peptide (NP) testing in patients with suspected heart failure. The study indicates that the value of implementation interventions can decrease over time as natural diffusion occurs, and that the value depends on the efficacy of the implementation activity and the population size. It suggests that value of implementation can inform policy decisions even when data are sparse, and emphasizes the importance of multi-period analysis to accurately quantify the value of implementation given the natural diffusion of the intervention and the incidence of the disease (PUBMED:27021746).
Another abstract discusses the GRADE approach to integrating cost-effectiveness evidence into health-related recommendations, outlining how resource utilization and cost-effectiveness analyses can be incorporated using the GRADE Evidence to Decision (EtD) frameworks. It provides practical information for presenting cost-effectiveness data and its integration in the development of health-related recommendations (PUBMED:37597696).
The abstract from the Centers for Medicare and Medicaid Services Quality Measures study reveals that cost-effectiveness evidence was consistent with quality measures when available, but most quality measures were based on subjective ratings or outcome measures, posing a challenge in identifying supporting economic evidence. This suggests that while some evidence is available, it may not be sufficient for all quality measures, and there is a need for refining and aligning quality measures with cost-effectiveness evidence (PUBMED:34711358).
In summary, while there is evidence available to estimate the cost-effectiveness of certain healthcare interventions, the sufficiency of this evidence can vary depending on the specific intervention, the quality of the measures used, and the methodologies employed in the analysis. There is a need for more comprehensive and robust evidence, particularly in areas where current measures are based on subjective ratings or do not specify an intervention or process. Additionally, the integration of cost-effectiveness evidence into healthcare decision-making processes, such as through the GRADE EtD frameworks, is crucial for ensuring that healthcare resources are allocated efficiently and effectively. |
Instruction: Are prisoners reliable survey respondents?
Abstracts:
abstract_id: PUBMED:11196252
Survey of forensic psychiatrists on evaluation and treatment of prisoners on death row. Psychiatrists have debated their role in evaluating prisoners accused of capital crimes and in treating prisoners on death row when restoration of competence would result in execution. Despite debate, there are no previous surveys of psychiatrists' opinions on this issue. We sent an anonymous questionnaire to all board-certified forensic psychiatrists in the United States. Of the 456 forensic psychiatrists identified, 290 (64%) returned the survey. Most respondents supported a role, in at least some cases, for psychiatric evaluation of prisoners accused of capital crimes. Respondents were divided on whether or not psychiatrists should treat incompetent death row prisoners if restoration of competence would result in execution. Attitudes about the ethical acceptability of capital punishment were associated with views about the psychiatrists' role but were not determinative in every case.
abstract_id: PUBMED:33751153
The mental health of ex-prisoners: analysis of the 2014 English National Survey of Psychiatric Morbidity. Purpose: Prisoners experience extremely high rates of psychiatric disturbance. However, ex-prisoners have never previously been identified in representative population surveys to establish how far this excess persists after release. Our purpose was to provide the first community-based estimate of ex-prisoners' mental health in England using the data from the 2014 Adult Psychiatric Morbidity Survey (APMS).
Methods: APMS 2014 provides cross-sectional data from a random sample (N = 7546) of England's household population aged 16 or above. Standardised instruments categorised psychiatric disorders and social circumstances. Participants who had been in prison were compared with the rest of the sample.
Results: One participant in seventy had been in prison (1.4%; 95% CI 1.1-1.7; n = 103). Ex-prisoners suffered an excess of current psychiatric problems, including common mental disorders (CMDs), psychosis, post-traumatic disorder, substance dependence, and suicide attempts. They were more likely to screen positive for attention-deficit/hyperactivity disorder and autistic traits, to have low verbal IQ, and to lack qualifications. They disclosed higher rates of childhood adversity, including physical and sexual abuse and local authority care. The odds (1.88; 95% CI 1.02-3.47) of CMDs were nearly doubled in ex-prisoners, even after adjusting for trauma and current socioeconomic adversity.
Conclusions: Prison experience is a marker of enduring psychiatric vulnerability, identifying an important target population for intervention and support. Moreover, the psychiatric attributes of ex-prisoners provide the context for recidivism. Without effective liaison between the criminal justice system and mental health services, the vulnerability of ex-prisoners to relapse and to reoffending will continue, with consequent personal and societal costs.
abstract_id: PUBMED:26141499
Serological and Behavioral Survey on HIV/AIDS among prisoners in Nouakchott (Mauritania). In Mauritania, epidemiological data estimate national HIV prevalence at less than 1%. Our study is the first combined serological and behavioral survey on HIV/AIDS conducted among prisoners in Mauritania. It was a cross-sectional survey conducted with anonymity and informed consent. The study covered a sample of 296 prisoners drawn from a population of 706 prisoners held in Nouakchott. The sex ratio was 14.6. The refusal rate for blood sampling was 4.7%. HIV prevalence in the sample was 3.9%. While 53.37% of prisoners were familiar with the concept of seropositivity, only 7.4% had knowledge of the routes of HIV transmission untainted by false beliefs. The results showed that 99% of prisoners knew that the condom is a means of protection against HIV infection, but the majority of prisoners also held many false beliefs about protection against HIV. Indeed, 98.49% of respondents said they protected themselves by avoiding sex with strangers, and 94.97% of them thought that sex with young girls or virgins protects against HIV. Nearly one quarter of the prisoners did not have a good perception of the risk of contracting HIV in prison, although homosexual relations between prisoners were reported. This study showed that prisoners in Mauritania are a group vulnerable to HIV because the prevalence of HIV in this group was higher than the national prevalence and because this subpopulation was unfamiliar with the disease and adopted risk behaviors.
abstract_id: PUBMED:34244077
Developing a short screener for acquiescent respondents. Background: Acquiescent response style (ARS) refers to survey respondents' tendency to choose response categories agreeing with questions regardless of their content and is hypothesized to be a stable respondent trait. While what underlies acquiescence is debatable, the effect of ARS on measurement is clear: bias through artificially increased agreement ratings. Because certain population subgroups (e.g., racial/ethnic minorities in the U.S.) are associated with systematically higher ARS, this raises concerns for research involving those groups. For this reason, it may be necessary to classify respondents as acquiescers or nonacquiescers, which allows independent analysis or accounting for this stylistic artifact. However, this classification is challenging, because ARS is latent, observed only as a by-product of collected data.
Objectives: To propose a screener that identifies respondents as acquiescers.
Methods: With survey data collected for ARS research, various ARS classification methods were compared for validity as well as implementation practicality.
Results: The majority of respondents were classified consistently as acquiescers or nonacquiescers under the various classification methods.
Conclusions: We propose a method based on illogical responses given to two balanced, theoretically distant multi-item measurement scales as a screener.
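The screener idea, flagging respondents whose answers to two balanced, theoretically distant scales are logically inconsistent, lends itself to a short illustration. The sketch below is an assumption-laden toy version, not the authors' instrument: the item names, the agreement cut-off, and the flagging threshold are all hypothetical.

    # Sketch of an acquiescence screener based on logically inconsistent agreement.
    # Item names, the agreement cut-off, and the flagging threshold are all assumptions.
    AGREE = 4  # responses of 4 or 5 on a 5-point scale count as agreement

    # Pairs of items that cannot plausibly both be endorsed by the same respondent.
    opposed_pairs = [("life_satisfied", "life_dissatisfied"),
                     ("prefers_routine", "prefers_novelty")]

    def is_acquiescer(responses, min_inconsistent=2):
        """Flag a respondent who agrees with both items in several opposed pairs."""
        inconsistent = sum(
            1 for a, b in opposed_pairs
            if responses.get(a, 0) >= AGREE and responses.get(b, 0) >= AGREE
        )
        return inconsistent >= min_inconsistent

    example = {"life_satisfied": 5, "life_dissatisfied": 4,
               "prefers_routine": 4, "prefers_novelty": 5}
    print(is_acquiescer(example))  # True under these assumptions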
abstract_id: PUBMED:21117913
Are prisoners reliable survey respondents? A validation of self-reported traumatic brain injury (TBI) against hospital medical records. Aims: To compare prisoners' self-reported history of TBI associated with hospital attendance with details extracted from relevant hospital medical records and to identify factors associated with the level of agreement between the two sources.
Methods: From a sample of prison entrants, this study obtained a history of TBIs for which medical attention was sought at a hospital. Audit tools were developed for data extraction relevant to any possible TBI from records at a total of 23 hospitals located within New South Wales, Australia. The level of agreement between self-report and hospital records was compared in relation to demographic, psychological and criminographic characteristics.
Results: Of the 200 participants in the study, 164 (82%) reported having sustained a past TBI giving a total of 420 separate TBI incidents. Of these, 156 (37%) were alleged to have resulted in attendance at a hospital emergency department including 112 (72%) at a hospital accessible for the validation exercise. For 93/112 (83%) of reported TBIs, a corresponding hospital medical record was located of which 78/112 (70%) supported the occurrence of a TBI. Lower education and a lifetime history of more than seven TBIs were associated with less agreement between self-report and medical record data with regard to specific details of the TBI.
Conclusions: Overall, these findings suggest that prisoners' self-report of TBI is generally accurate when compared with the 'gold standard' of hospital medical record. This finding is contrary to the perception of this group as 'dishonest' and 'unreliable'.
abstract_id: PUBMED:35012501
Reliability of prisoners' survey responses: comparison of self-reported health and biomedical data from an australian prisoner cohort. Objective: Prisoner health surveys primarily rely on self-report data. However, it is unclear whether prisoners are reliable health survey respondents. This paper aimed to determine the level of agreement between self-report and biomedical tests for a number of chronic health conditions.
Method: This study was a secondary analysis of existing data from three waves (1996, 2001, 2009) of the New South Wales (NSW) Inmate Health Survey. The health surveys were cross-sectional in nature and included a stratified random sample of men (n=2,114) from all NSW prisons. Self-reported histories of hepatitis, sexually transmissible infections, and diabetes were compared to objective biomedical measures of these conditions.
Results: Overall, sensitivity (i.e., respondents who self-reported having the condition also had markers indicative of the condition on biomedical tests) was high for hepatitis C (96%) and hepatitis B (83%), but low for all other assessed conditions (ranging from 9.1% for syphilis using RPR to 64% for diabetes). However, Kappa scores indicated substantial agreement only for hepatitis C. That is, false positives and false negatives occurred beyond chance, leading to poor agreement for all other assessed conditions.
Conclusions: Prisoners may have been exposed to serious health conditions while failing to report a history of infection. It may be possible that prisoners do not get tested given the asymptomatic presentation of some conditions, were unaware of their health status, have limited health-service usage preventing the opportunity for detection, or are subject to forgetting or misunderstanding prior test results. These findings demonstrate the importance of the custodial environment in screening for health conditions and referral for treatment should this be needed. Testing on entry, periodically during incarceration, and prior to release is recommended.
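The agreement statistics reported above (sensitivity and Cohen's kappa) come from cross-classifying self-report against a biomedical reference test. The sketch below shows the standard calculation on a 2x2 table; the counts are invented for illustration and are not the survey's data.

    # Sensitivity and Cohen's kappa from a 2x2 table of self-report vs. biomedical test.
    # The counts below are hypothetical and only illustrate the calculation.
    tp, fp = 96, 10     # tp: biomedical+ & self-report+,  fp: biomedical- & self-report+
    fn, tn = 4, 890     # fn: biomedical+ & self-report-,  tn: biomedical- & self-report-
    n = tp + fp + fn + tn

    sensitivity = tp / (tp + fn)                     # self-report+ among the biomedically positive
    p_observed = (tp + tn) / n                       # raw agreement
    p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (p_observed - p_expected) / (1 - p_expected)

    print(f"sensitivity = {sensitivity:.2f}, kappa = {kappa:.2f}")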
abstract_id: PUBMED:30984041
Suicide in Older Prisoners in Germany. As in many countries, the numbers of older prisoners are rising in Germany, but scientific information on this group is scarce. For the current study, a survey was used that included all prison suicides in Germany between the years of 2000 and 2013. Suicide rates of the elderly prisoners exceeded the suicide rates of the general population and the same age group. We observed a continuous decrease in the suicide rate of elderly prisoners. When compared to the younger suicide victims in prison, significantly more elderly suicide victims were: female, of German nationality, remand prisoners, or serving a life sentence. In Germany, elderly prisoners are a vulnerable subpopulation of the prison population. Higher suicide rates than in the same age group in the general population indicate unmet needs regarding mental disorders and their specific treatment.
abstract_id: PUBMED:1543938
A survey of pre-arrest drug use in sentenced prisoners. The paper presents the results of a retrospective, self-report survey of pre-arrest drug use in a representative sample of 1751 men serving a prison sentence. Reported drugs used were cannabis (34%), opiates (9%), amphetamine (9%) and cocaine (5%), including 1% 'crack' users. Pre-arrest injecting was reported by 11% of inmates, including 68% of all opiate users and 57% of amphetamine users. Drug dependence was reported by 11%, including 7% dependent on opiates, 2% on amphetamines and 1% on cocaine. Relative to other drugs, the figure for cocaine is higher than is suggested by a previous clinic survey. Pre-arrest cannabis use was reported by 54% of black prisoners and 34% of white. White prisoners are more likely to report use of 'hard' drugs, drug dependence and injecting, but this masks a higher rate of cocaine use by black prisoners. Opiate use varied between health regions, from 3% of prisoners in the West Midlands to 25% of those from the Mersey region. These findings have implications for service provision and for an understanding of cultural influences on illicit drug use.
abstract_id: PUBMED:12201070
The National Survey of Psychiatric Morbidity among prisoners and the future of prison healthcare. It has long been known that psychiatric disorders are highly prevalent among prisoners (Coid, 1984; Gunn et al., 1991; Maden et al., 1995; Joukamaa, 1995; Bland et al., 1998; Lamb and Weinberger, 1998). However, the Survey of Psychiatric Morbidity Among Prisoners in England and Wales (Singleton et al., 1998) represents a considerable advance on earlier surveys. By using the same standardized psychiatric assessment procedures, and similar questions on medication, service use and social functioning, its findings can be compared with previous national surveys of adults living in private households (Meltzer et al., 1995), residents in institutions (Meltzer et al., 1996), homeless persons (Gill et al., 1996), and with the forthcoming household survey in England, Wales and Scotland. It should also inform the future organisation of healthcare for prisoners, following recent recommendations from a joint Home Office/Department of Health Working Party that Health Authorities must work with prisons in their catchment areas to carry out joint health needs assessments, agree prison healthcare improvement strategies and jointly plan and commission services (HM Prison Service and NHS Executive 1999). The ultimate test of the survey will be whether it provides a benchmark to evaluate the future effectiveness of the new policy changes.
abstract_id: PUBMED:24201457
Preliminary report from a prison survey: should prisoners be considered as organ donors? Background: Opinions on letting prisoners donate organs appear increasingly. The aim of this preliminary study was to evaluate attitudes toward transplantation among inmates from a single prison in Poland.
Material And Methods: We administered a questionnaire consisting of 14 open queries about the knowledge, attitude, and personal views on organ donation to 100 male prisoners from the Second Penitentiary in Lodz, Poland. Completion of the form was anonymous and self-directed under supervision of the interviewer.
Results: Transplantation as a treatment option was understood by 90% of inmates. Prisoners' main sources of information on transplantology were newspapers and television (54%). The majority of prisoners (92%) were positive about transplantation; a smaller number of inmates (72%) knew about transplantation-related legal regulations in Poland. The terms "Central Register of Refusals" and "presumed consent" were understood by 63% and 61%, respectively, of the surveyed group. Most (77%) respondents knew that brain death is irreversible and 68% accepted these circumstances for donation of organs. The majority of inmates (74%) were fairly positive about donating their own organs and 60% said they would agree to donate an organ from a deceased family member. Prisoners rarely discussed transplantation issues (37%) with family members. The vast majority of prisoners (82%) said they trusted the medical and transplant communities.
Conclusions: We conclude that surveyed prisoners have a basic understanding about transplantation. The majority of respondents were in favor of organ donation and willing to donate their own organs. However, further studies to evaluate opinions on organ transplantation with larger groups of inmates are needed to help set new boundaries for prisoner organ donations.
Answer: Based on the provided abstracts, it appears that prisoners can be considered reliable survey respondents in certain contexts. For instance, a study comparing prisoners' self-reported history of traumatic brain injury (TBI) with hospital medical records found that prisoners' self-report of TBI was generally accurate when compared with hospital medical records, suggesting that prisoners can be reliable in reporting certain types of information (PUBMED:21117913). Additionally, a study on the reliability of prisoners' survey responses in relation to self-reported health and biomedical data found high sensitivity for hepatitis C and substantial agreement for hepatitis C, indicating that prisoners can reliably self-report certain health conditions (PUBMED:35012501).
However, the reliability of prisoners as survey respondents may vary depending on the type of information being reported and the context of the survey. For example, the study on HIV/AIDS among prisoners in Mauritania highlighted that prisoners had many false beliefs about protection against HIV, which could affect the reliability of their responses in surveys related to knowledge and behaviors concerning HIV/AIDS (PUBMED:26141499). Furthermore, the development of a short screener for acquiescent respondents suggests that some prisoners may have a tendency to agree with survey questions regardless of their content, which could introduce bias and affect the reliability of survey data (PUBMED:34244077).
In summary, while prisoners can provide reliable information in certain areas, such as self-reported medical history that can be validated against medical records, there may be limitations to their reliability in other areas, such as knowledge-based surveys where misconceptions may be prevalent. It is important to consider the specific context and subject matter of the survey when assessing the reliability of prisoners as survey respondents. |
Instruction: The volume-quality relationship of mental health care: does practice make perfect?
Abstracts:
abstract_id: PUBMED:15569901
The volume-quality relationship of mental health care: does practice make perfect? Objective: An extensive literature has demonstrated a relationship between hospital volume and outcomes for surgical care and other medical procedures. The authors examined whether an analogous association exists between the volume of mental health delivery and the quality of mental health care.
Method: The study used data for the 384 health maintenance organizations participating in the Health Employer Data and Information Set (HEDIS), covering 73 million enrollees nationwide. Analyses examined the association between three measures of mental health volume (total annual ambulatory visits, inpatient discharges, and inpatient days) and the five HEDIS measures of mental health performance (two measures of follow-up after psychiatric hospitalization and three measures of outpatient antidepressant management), with adjustment for plan and enrollee characteristics.
Results: Plans in the lowest quartile of outpatient and inpatient mental health volume had an 8.45-fold (95% CI [confidence interval]=4.97-14.37) to 21.09-fold (95% CI=11.32-39.28) increase in the odds of poor 7- and 30-day follow-up after discharge from inpatient psychiatric hospitalization. Low-volume plans had a 3.49-fold (95% CI=2.15-5.67) to 5.42-fold (95% CI=3.21-9.15) increase in the odds of poor performance on the acute, continuation, and provider measures of antidepressant treatment.
Conclusions: The large and consistent association between mental health volume and performance suggests parallels with the medical and surgical literature. As with that previous literature, further work is needed to better understand the mechanisms underlying this association and the potential implications for using volume as a criterion in plan choice.
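Figures such as "8.45 (95% CI 4.97-14.37)" are odds ratios with confidence intervals, here estimated from adjusted models. The sketch below shows the unadjusted version of that calculation on a 2x2 table; the cell counts are assumptions chosen for illustration, not the HEDIS data.

    # Unadjusted odds ratio and Wald 95% CI from a hypothetical 2x2 table
    # (low-volume vs. higher-volume plans by poor vs. adequate follow-up).
    # The published estimates are model-adjusted, so these cells are illustrative only.
    import math

    a, b = 60, 40    # low-volume plans:    poor follow-up, adequate follow-up
    c, d = 30, 170   # higher-volume plans: poor follow-up, adequate follow-up

    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")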
abstract_id: PUBMED:29695225
Inpatient Volume and Quality of Mental Health Care Among Patients With Unipolar Depression. Objective: The relationship between inpatient volume and the quality of mental health care remains unclear. This study examined the association between inpatient volume in psychiatric hospital wards and quality of mental health care among patients with depression admitted to wards in Denmark.
Methods: In a nationwide, population-based cohort study, 17,971 patients (N=21,120 admissions) admitted to psychiatric hospital wards between 2011 and 2016 were identified from the Danish Depression Database. Inpatient volume was categorized into quartiles according to the individual ward's average caseload volume per year during the study period: low volume (quartile 1, <102 inpatients per year), medium volume (quartile 2, 102-172 inpatients per year), high volume (quartile 3, 173-227 inpatients per year) and very high volume (quartile 4, >227 inpatients per year). Quality of mental health care was assessed by receipt of process performance measures reflecting national clinical guidelines for care of depression.
Results: Compared with patients admitted to low-volume psychiatric hospital wards, patients admitted to very-high-volume wards were more likely to receive a high overall quality of mental health care (≥80% of the recommended process performance measures) (adjusted relative risk [ARR]=1.78, 95% confidence interval [CI]=1.02-3.09) as well as individual processes of care, including a somatic examination (ARR=1.35, CI=1.03-1.78).
Conclusions: Admission to very-high-volume psychiatric hospital wards was associated with a greater chance of receiving guideline-recommended process performance measures for care of depression.
abstract_id: PUBMED:36404415
Better together: Relationship quality and mental health among cardiac patients and spouses. Reductions in marital relationship quality are pervasive post-cardiac event. It is not yet understood how relationship quality is linked to mental health outcomes in couples where one member has established cardiovascular disease (CVD) and the interdependence within dyads is seldom measured. This research is required as psychological distress has been independently linked to CVD incidence, morbidity, and mortality. This study assessed associations of relationship quality with depression and anxiety among patients with CVD and their spouses. Participants completed questionnaires measuring four dimensions of relationship quality and mental health. Data were analyzed using an Actor-Partner Interdependence Model with hierarchical moderation analyses. 181 dyads (N = 362 participants) comprised the study sample. Most patients had coronary artery disease (66.3%) and 25.9% were female. Patients reported higher relationship satisfaction and fewer anxiety symptoms than did spouses. Patients and spouses with high dyadic consensus and affectional expression reported fewer mental health symptoms, but only when the other partner also perceived high levels of consensus and affectional expression in the relationship. Patients and spouses with low dyadic cohesion reported worse mental health symptoms (actor effects), but those effects were no longer significant when both the patient and the spouse appraised the relationship as having high levels of dyadic cohesion. Taken together, relationship quality is linked to mental health symptoms in patients with CVD and their spouses. Longitudinal and experimental studies are now warranted to further substantiate the cross-sectional findings of this study.
abstract_id: PUBMED:36714376
Relationship Quality and Mental Health Implications for Adolescents during the COVID-19 Pandemic: a Longitudinal Study. Although parent-adolescent and peer-adolescent relationship quality are critical for adolescent wellbeing during typical stressful life events, the unique features of the COVID-19 pandemic put into question whether strong parent-adolescent and peer-adolescent relationship quality functioned as protective factors of adolescent mental health in this context. The current longitudinal study examined a community sample of adolescents across 3 time points, each 6 months apart (Time 1: Fall, 2019; n = 163, 50.9% male; mean age = 15.75 years, SD = 1.02). Results showed that increases in depression symptoms, perceived stress, and emotion dysregulation from Fall 2019 to Fall 2020 were predicted by changes in parent, but not peer relationship quality. The current study demonstrates that adolescent-parent relationship quality may be protective against mental health difficulties during the COVID-19 pandemic, while adolescent-peer relationship quality may not. Identifying protective factors that may play a role in mitigating the impact of the pandemic, and other such widespread health crises, on youth mental health is critical in reducing the long-term psychological harm of the viral outbreak, as well as promoting adolescent wellbeing and resilience.
abstract_id: PUBMED:26725292
Admission Volume and Quality of Mental Health Care Among Danish Patients With Recently Diagnosed Schizophrenia. Objective: The relationship between admission volume and the quality of mental health care remains unclear. This study examined the association between admission volume of psychiatric hospital units and quality of mental health care among patients with recently diagnosed schizophrenia (past year) admitted to units in Denmark.
Methods: In a nationwide population-based cohort study, 3,209 patients admitted to psychiatric hospital units between 2004 and 2011 were identified from the Danish Schizophrenia Registry. Admission volume was categorized into four quartiles according to the individual unit's average caseload volume per year during the study period: low volume (quartile 1, ≤75 admissions per year), medium volume (quartile 2, 76-146 admissions per year), high volume (quartile 3, 147-256 admissions per year) and very high volume (quartile 4, >256 admissions per year). Quality of mental health care was defined as having received processes of care recommended in guidelines.
Results: Compared with patients admitted to low-volume psychiatric hospital units, patients admitted to very-high-volume units were more likely to receive high overall quality of mental health care (≥80% of recommended processes of care) (risk ratio [RR]=1.40, 95% confidence interval [CI]=1.03-1.91) and to receive several specific processes of care, including assessment of psychopathology by a specialist in psychiatry (RR=1.05, CI=1.01-1.10) and psychoeducation (RR=1.16, CI=1.05-1.28). Moreover, patients admitted to high-volume units were more likely to have a suicide risk assessment at discharge (RR=1.14, CI=1.07-1.21).
Conclusions: Admission to very-high-volume and high-volume psychiatric hospital units was associated with a greater chance of receiving guideline-recommended processes of care among patients admitted with recently diagnosed schizophrenia.
abstract_id: PUBMED:31754947
Child Challenging Behavior Influences Maternal Mental Health and Relationship Quality Over Time in Fragile X Syndrome. Parenting children with neurodevelopmental disabilities is often challenging. Biological mothers of children with Fragile X Syndrome (FXS) may be susceptible to increased risk of mental health problems. This study examined the longitudinal relationships between maternal mental health, child challenging behaviors, and mother-child relationship quality in children and adolescents with FXS. Fifty-five mother-child dyads were followed from childhood into adolescence. The findings suggest that child challenging behaviors, maternal mental health, and mother-child relationship quality were stable during that period. Additionally, elevated levels of child challenging behaviors negatively impacted maternal mental health. Finally, child challenging behaviors, in combination with maternal mental health, influenced mother-child relationship quality. Clinical implications are discussed.
abstract_id: PUBMED:28561917
Impact of Providing Compassion on Job Performance and Mental Health: The Moderating Effect of Interpersonal Relationship Quality. Purpose: To examine the relationships of providing compassion at work with job performance and mental health, as well as to identify the role of interpersonal relationship quality in moderating these relationships.
Design And Methods: This study adopted a two-stage survey completed by 235 registered nurses employed by hospitals in Taiwan. All hypotheses were tested using hierarchical regression analyses.
Findings: The results show that providing compassion is an effective predictor of job performance and mental health, whereas interpersonal relationship quality can moderate the relationships of providing compassion with job performance and mental health.
Conclusions: When nurses are frequently willing to listen, understand, and help their suffering colleagues, the enhancement engendered by providing compassion can improve the provider's job performance and mental health. Creating high-quality relationships in the workplace can strengthen the positive benefits of providing compassion.
Clinical Relevance: Motivating employees to spontaneously exhibit compassion is crucial to an organization. Hospitals can establish value systems, belief systems, and cultural systems that support a compassionate response to suffering. In addition, nurses can internalize altruistic belief systems into their own personal value systems through a long process of socialization in the workplace.
abstract_id: PUBMED:26151646
College student mental health and quality of workplace relationships. Objective: The goal of this study was to examine the effect of quality of workplace relationships on the mental health of employed undergraduates, with work-related variables as a potential mechanism.
Participants: Participants were 170 employed students (76% female, average age = 19.9) recruited in March 2011. Most worked part-time and had been at their jobs over a year.
Methods: Students were recruited from an undergraduate introductory psychology course and completed online surveys about the quality of workplace relationships, mental health (ie, somatic stress symptoms, depression, anxiety, and life satisfaction), and work-related variables (ie, job satisfaction, support, turnover and burnout).
Results: Students who reported having workplace relationships with co-occurring positivity and negativity had worse self-reported mental health outcomes than students reporting having wholly positive relationships. The relationship between workplace relationship quality and mental health was mediated by negative work-related variables.
Conclusions: Workplace relationships-even in part-time employment settings-influence college students' mental health.
abstract_id: PUBMED:29892314
The Relationship of Spiritual Health with Quality of Life, Mental Health, and Burnout: The Mediating Role of Emotional Regulation. Objective: The World Health Organization's definition of health now stands open to severe criticism due to changes in today's world and the accompanying mental void; in addition to physical, psychological, and social aspects, spiritual health and its interaction with the other aspects has been studied in scientific literature and recent research. The present study was conducted to investigate the mediating role of emotional regulation in the relationship between spiritual health and quality of life, psychological health, and burnout. Method: In this study, 231 staff from Baqiyatallah University of Medical Sciences completed the Spiritual Well-Being Scale (SWBS), Difficulties in Emotion Regulation Scale (DERS), World Health Organization Quality of Life-BREF (WHOQOL-BREF), General Health Questionnaire-28 (GHQ-28), and Maslach Burnout Inventory (MBI). The gathered data were analyzed using Pearson correlation, hierarchical regression analysis, and Sobel's test. Results: All variables were correlated with one another (p<0.001). The hierarchical regression analysis and Sobel's test indicated that emotional regulation has a partial mediating role in the relationship between spiritual health and quality of life (β=0.53, Z=4.05, p<0.001) and a complete mediating role in the relationships between spiritual health and mental health (β=0.68, Z=5.62, p<0.001) and burnout (β=0.70, Z=6.12, p<0.001). Conclusion: There is a complex and non-linear relationship between spiritual health and the areas of quality of life, mental health and burnout. This relationship is potentially influenced by emotional regulation.
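The abstract above reports Sobel's test Z values for mediated (indirect) effects. As a point of reference, the sketch below shows how a Sobel z statistic is computed for an indirect effect a*b; the coefficient values are invented for illustration and are not the estimates from the study, and the helper name sobel_test is ours.

```python
import math
from scipy import stats

def sobel_test(a, se_a, b, se_b):
    """Sobel z-test for an indirect (mediated) effect a*b.

    a, se_a : coefficient and standard error for predictor -> mediator
    b, se_b : coefficient and standard error for mediator -> outcome
    Returns (z, two-sided p-value).
    """
    indirect = a * b
    se_indirect = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = indirect / se_indirect
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return z, p

# Hypothetical path estimates, NOT the values reported in the study above.
z, p = sobel_test(a=0.45, se_a=0.08, b=0.52, se_b=0.09)
print(f"Sobel z = {z:.2f}, p = {p:.4f}")
```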
abstract_id: PUBMED:36518012
The Mediating Effect of Caregiving Relationship Quality on the Association Between Caregiving Stressors and Mental Health Problems Among Older Spousal Caregivers. Providing care to a spouse can be especially challenging for older adults given their compounding stressors resulting from aging and caregiving. This cross-sectional study examines the relationships between caregiving stressors and caregiver mental health problems and the potential mediator (i.e., caregiving relationship quality) of these associations. A total of 431 Americans (≥65 years) were selected from the National Study of Caregiving. Path analysis shows that care assistance was positively associated with caregiver mental health problems, and this association was mediated by negative relationship quality (Indirect effect = .14, p = .016). Moreover, role overload was positively associated with caregiver mental health problems, which was mediated by negative relationship quality (indirect effect = .13, p = .002). Findings suggest that caregiving stressors can adversely affect mental health by exacerbating negative relationship quality. Interventions that limit negative exchanges and increase compassionate communications between older spousal caregivers and their care-receiving partners are needed.
Answer: The volume-quality relationship in mental health care suggests that higher volumes of mental health service delivery are associated with better quality of care. This relationship is analogous to findings in surgical care and other medical procedures, where higher hospital volumes are linked to better outcomes.
For instance, health maintenance organizations (HMOs) with lower volumes of outpatient and inpatient mental health services were found to have significantly higher odds of poor follow-up after psychiatric hospitalization and poor performance on outpatient antidepressant management measures (PUBMED:15569901). Similarly, in Denmark, patients admitted to very-high-volume psychiatric hospital wards were more likely to receive a high overall quality of mental health care and specific processes of care, such as a somatic examination, compared to those admitted to low-volume wards (PUBMED:29695225). This pattern was also observed among Danish patients with recently diagnosed schizophrenia, where admission to very-high-volume and high-volume psychiatric hospital units was associated with a greater likelihood of receiving guideline-recommended processes of care (PUBMED:26725292).
These findings indicate that higher volumes of mental health care delivery are associated with better quality of care, which could be due to various factors such as increased provider experience, better-established protocols, and more robust systems for delivering care. However, the mechanisms underlying this association require further investigation to understand how volume can be used as a criterion for plan choice and to improve mental health care quality across different settings. |
Instruction: Physician versus paramedic in the setting of ground forces operations: are they interchangeable?
Abstracts:
abstract_id: PUBMED:17436776
Physician versus paramedic in the setting of ground forces operations: are they interchangeable? Background: The setting of military ground force operations can be demanding and requires a matched medical assistance plan. A major consideration is the type of medical caregiver that is assigned to the mission. We studied the similarities, differences, advantages, and disadvantages of physicians versus paramedics in this scenario.
Methods: We interviewed 20 ground force physicians, highly experienced in this setting. We summarized their responses and formulated quantitative decision-making tables regarding two sorts of missions: a long-duration mission, far from friendly definitive care, and a short-duration mission, close to friendly hospitals.
Results: The major areas in which physicians and paramedics differ, pertinent to a ground force operation, are: formal education, on-job training, knowledge base, ability to treat a wide variety of medical conditions, ability to perform manual lifesaving procedures, social and moral impact, availability, physical fitness, combat skills, and cost. Of a maximum score of 100 points, for a long-term mission a physician scores 77.7 points while a paramedic scores 63.6 points. The scores for a short-term mission are 72.7 and 67.9, respectively.
Discussion: Physicians and paramedics are distinct groups of medical caregivers and this is also true for the setting of ground force operations. They are not interchangeable. Our data show that a physician has a relative advantage over a paramedic, especially in long-term missions, far from friendly facilities.
Conclusion: A physician is the first choice for all kinds of military ground force missions while a paramedic can be a reasonable substitute for missions of short duration, close to definitive care.
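The study above condenses expert ratings into quantitative decision-making tables that yield a score out of 100 for each caregiver type and mission profile. The abstract does not list the criteria weights, so the sketch below uses entirely hypothetical weights and ratings only to illustrate how such a weighted table could be tallied; it is not a reconstruction of the published tables.

```python
# Illustrative weighted decision table; criteria, weights, and ratings are
# hypothetical placeholders, not the values used in PUBMED:17436776.
criteria_weights = {
    "knowledge_base": 0.25,
    "lifesaving_procedures": 0.25,
    "availability": 0.20,
    "fitness_and_combat_skills": 0.15,
    "cost": 0.15,
}

ratings = {  # per-criterion scores on a 0-100 scale, invented for illustration
    "physician": {"knowledge_base": 95, "lifesaving_procedures": 90,
                  "availability": 60, "fitness_and_combat_skills": 60, "cost": 50},
    "paramedic": {"knowledge_base": 65, "lifesaving_procedures": 70,
                  "availability": 85, "fitness_and_combat_skills": 80, "cost": 85},
}

def weighted_score(provider):
    """Weighted sum of criterion ratings; the weights sum to 1, so the result is out of 100."""
    return sum(criteria_weights[c] * ratings[provider][c] for c in criteria_weights)

for provider in ratings:
    print(f"{provider}: {weighted_score(provider):.1f} / 100")
```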
abstract_id: PUBMED:31718381
Paramedic-Delivered Fibrinolysis in the Treatment of ST-Elevation Myocardial Infarction: Comparison of a Physician-Authorized versus Autonomous Paramedic Approach. Background: For those patients who receive fibrinolysis in the treatment of ST-elevation myocardial infarction (STEMI), early treatment, i.e., within 2 hours of symptom onset, confers the greatest clinical benefit. This rationale underpins paramedic-delivered fibrinolysis in the prehospital setting. However, the current New Zealand approach requiring paramedics to first gain physician authorization, has proved inefficient and time consuming, particularly due to technological failings. Therefore, this study aimed to trial a new autonomous paramedic-delivered fibrinolysis model, examining the impact on time-to-treatment, paramedic diagnostic accuracy and patient outcomes. Methods: Utilizing a prospective observational approach, over a 24-month period, paramedics identified patients with a clinical presentation and electrocardiogram features consistent with STEMI, and initiated fibrinolysis. These patients were compared to a historic cohort who received fibrinolysis by paramedics within the same regions but following physician authorization. The main outcome measures were pain-to-needle (PTN) time and accuracy of paramedic diagnosis. A secondary end-point was 30-day and 6-month mortality and hospital length of stay (LOS). Results: A total of 174 patients received fibrinolysis (mean age, 64 years, SD ± 11.2). Median PTN time was 87 minutes (IQR = 58) for the historic cohort (n = 96), versus 65 minutes (IQR = 31) for the experimental cohort (n = 78), (p = 0.007). Autonomous paramedic diagnosis showed a sensitivity of 96% (95% CI 89-99) and specificity of 91% (95% CI 76-98). A significant reduction in both 30-day mortality and hospital LOS was observed among the experimental cohort (p = 0.04 and <0.001, respectively). No significant difference was observed between groups in terms of 6-month mortality. Conclusions: Prehospital fibrinolysis provided autonomously by paramedics without direct physician oversight is safe and feasible. Moreover, this independent approach can significantly improve time-to-treatment, resulting in short term mortality benefit and reduced hospital LOS.
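The fibrinolysis study reports paramedic diagnostic sensitivity of 96% and specificity of 91% with 95% confidence intervals. For readers who want to reproduce that kind of summary from a 2x2 classification table, a minimal sketch follows; the counts are hypothetical placeholders (the abstract does not give the raw confusion matrix), and the Wilson interval is one common choice rather than necessarily the method the authors used.

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical 2x2 counts; the study's raw counts are not reported in the abstract.
tp, fn = 75, 3    # STEMI present: correctly vs incorrectly classified
tn, fp = 30, 3    # STEMI absent: correctly vs incorrectly classified

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# 95% confidence intervals for each proportion (Wilson score method).
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")

print(f"Sensitivity {sensitivity:.0%} (95% CI {sens_ci[0]:.0%}-{sens_ci[1]:.0%})")
print(f"Specificity {specificity:.0%} (95% CI {spec_ci[0]:.0%}-{spec_ci[1]:.0%})")
```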
abstract_id: PUBMED:31313273
Paramedic versus physician-staffed ambulances and prehospital delays in the management of patients with ST-segment elevation myocardial infarction. Background: Time delays to reperfusion therapy in ST-segment elevation myocardial infarction (STEMI) still remain a considerable drawback in many healthcare systems. Emergency medical service (EMS) has a critical role in the early management of STEMI. Under investigation herein was whether the use of physician-staffed ambulances leads to shorter pre-hospital delays in STEMI patients.
Methods: This was an observational and retrospective study, using data from the registry of the Silesian regional EMS system in Katowice, Poland and the Polish Registry on Acute Coronary Syndromes (PL-ACS) for a study period of January 1, 2013 to December 31, 2016. The study population (n = 717) was divided into two groups: group 1 (n = 546 patients) - physician-staffed ambulances and group 2 (n = 171 patients) - paramedic-staffed ambulances.
Results: Responses during the day and night shifts were similar. Paramedic-led ambulances more often transmitted 12-lead electrocardiogram (ECG) to the percutaneous coronary intervention centers. All EMS time intervals were similar in both groups. The type of EMS dispatched to patients (physician-staffed vs. paramedic/nurse-only staffed ambulance), adjusted for ECG transmission and sex, had no impact on in-hospital mortality (odds ratio [OR] 1.41; 95% confidence interval [CI] 0.79-1.95; p = 0.4). However, service time exceeding 42 min was an independent predictor of in-hospital mortality (OR 4.19; 95% CI 1.27-13.89; p = 0.019). In-hospital mortality rate was higher in the two upper quartiles of service time in the entire study population.
Conclusions: These findings suggest that both physician-led and paramedic-led ambulances meet the criteria set out by the Polish and European authorities. All EMS time intervals are similar regardless of the type of EMS unit dispatched. A physician being present on board did not have a prognostic impact on outcomes.
abstract_id: PUBMED:24285705
Does an instrumented treadmill correctly measure the ground reaction forces? Since the 1990s, treadmills have been equipped with multi-axis force transducers to measure the three components of the ground reaction forces during walking and running. These measurements are correctly performed if the whole treadmill (including the motor) is mounted on the transducers. In this case, the acceleration of the treadmill centre of mass relative to the reference frame of the laboratory is nil. The external forces exerted on one side of the treadmill are thus equal in magnitude and opposite in direction to the external forces exerted on the other side. However, uncertainty exists about the accuracy of these measures: due to friction between the belt and the tread-surface, due to the motor pulling the belt, some believe that it is not possible to correctly measure the horizontal components of the forces exerted by the feet on the belt. Here, we propose a simple model of an instrumented treadmill and we demonstrate (1) that the forces exerted by the subject moving on the upper part of the treadmill are accurately transmitted to the transducers placed under it and (2) that all internal forces - including friction - between the parts of the treadmill are cancelling each other.
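The treadmill paper's core argument can be summarized with a single free-body balance, assuming (as the abstract states) that the entire treadmill, motor included, rests on the transducers. The notation below is ours, not the paper's: M is the treadmill mass, F_t the transducer reaction on the treadmill, and F_s the force the subject's foot applies to the belt.

```latex
% Newton's second law for the whole treadmill treated as one free body:
M\,\ddot{\mathbf{r}}_{\mathrm{cm}} \;=\; M\mathbf{g} + \mathbf{F}_{t} + \mathbf{F}_{s}
\;\approx\; \mathbf{0}
\quad\Longrightarrow\quad
-\mathbf{F}_{t} \;=\; M\mathbf{g} + \mathbf{F}_{s}.
```

The transducers therefore register the static weight offset Mg plus all three components of the foot force F_s; friction between belt and tread-surface and the motor's pull on the belt are internal action-reaction pairs within this free body and cancel, which is the paper's point (2).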
abstract_id: PUBMED:29729612
Joint kinematics and ground reaction forces in overground versus treadmill graded running. Background: Treadmills are often used to assess running biomechanics; however, the validity of applying results from treadmill graded running to overground graded running is currently unknown.
Research Question: The purpose of this study was to investigate whether treadmill and overground graded running have comparable kinematics and ground reaction force parameters.
Methods: Eleven healthy male adults ran overground and on an instrumented treadmill as motion capture and force platform data were collected for the following conditions: downhill running at a slope of -8° at 10, 13 and 16 km⋅h⁻¹; level running at 10 and 13 km⋅h⁻¹; uphill running at a slope of +8° at 8, 10 and 13 km⋅h⁻¹. Sagittal joint angles at heel strike, mid-stance, and toe-off were computed for the ankle, knee and hip. Ground reaction force parameters including peak average and instantaneous normal loading rate, peak impact and active normal force, peak tangential (braking and propulsive) forces, and normal and tangential impulses were also calculated.
Results: Joint kinematics and ground reaction forces for level running were generally similar between overground and treadmill conditions. The following variables were significantly higher during overground uphill running (mean difference ± SD): average normal loading rate (14.4 ± 7.1 BW⋅s⁻¹), normal impulse (0.04 ± 0.02 BW⋅s), propulsive impulse (0.04 ± 0.02 BW⋅s), and vertical center of mass excursion (0.092 ± 0.031 m). The following variables were significantly higher during overground downhill running (mean difference ± SD): ankle plantarflexion at toe-off (-5.39 ± 6.19°) and vertical center of mass excursion (0.046 ± 0.039 m).
Significance: These findings suggest that subtle differences in kinematics and ground reaction forces exist between overground and treadmill graded running. These differences aside, we believe that overground kinematics and ground reaction forces in graded running are reasonably replicated on a treadmill.
abstract_id: PUBMED:28926753
Undergraduate paramedic student psychomotor skills in an obstetric setting: An evaluation. The clinical education of paramedic students is an international concern. In Australia, student placements are commonly undertaken with local district ambulance services, however these placements are increasingly limited. Clinical placements within inter-professional settings represent an innovative yet underdeveloped area of investigation. This paper addresses that gap by reporting a pilot evaluation of paramedic student clinical placements in a specialised obstetrics setting. Using a case study approach, the evaluation aimed to identify paramedic psychomotor skills that could be practised in this setting, and understand the nature of key learning events. A purposive sample of paramedic students was recruited following completion of the obstetrics placement. A combination of student reflection and assessed psychomotor skills data were collected from clinical placement logs. Content analysis of all data was conducted inductively and deductively, as appropriate. Findings indicated a comprehensive range of psychomotor skills can be practised in this setting, with over thirty psychomotor skills identified directly related to the paramedic curriculum; and seven psychomotor skills indirectly related. The themes finding confidence in maternity care, watching the experts, and putting theory into practice provide narrative insight into the clinical learning experience of paramedic students in this setting. Further research is recommended to build upon this pilot.
abstract_id: PUBMED:31340513
Curve Similarity Model for Real-Time Gait Phase Detection Based on Ground Contact Forces. This paper proposed a novel method to adaptively detect gait patterns in real time through the ground contact forces (GCFs) measured by load cell. The curve similarity model (CSM) is used to identify the division of off-ground and on-ground statuses, and differentiate gait patterns based on the detection rules. Traditionally, published threshold-based methods detect gait patterns by means of setting a fixed threshold to divide the GCFs into on-ground and off-ground statuses. However, the threshold-based methods in the literature are neither an adaptive nor a real-time approach. In this paper, the curve is composed of a series of continuous or discrete ordered GCF data points, and the CSM is built offline to obtain a training template. Then, the testing curve is compared with the training template to determine the degree of similarity. If the computed degree of similarity is less than a given threshold, they are considered to be similar, which would lead to the division of off-ground and on-ground statuses. Finally, gait patterns could be differentiated according to the status division based on the detection rules. In order to test the detection error rate of the proposed method, a method in the literature is introduced as the reference method to obtain comparative results. The experimental results indicated that the proposed method could be used for real-time gait pattern detection, detect the gait patterns adaptively, and obtain a low error rate compared with the reference method.
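The abstract describes comparing a test curve of ground contact forces against an offline training template and treating the two as similar when the computed similarity score falls below a threshold, after which on-ground/off-ground status and gait patterns follow from detection rules. The exact similarity measure and rules are not given in the abstract, so the sketch below improvises a normalized Euclidean distance and a simple threshold purely to illustrate the general shape of such a detector; it is not the published CSM algorithm.

```python
import numpy as np

def curve_distance(window, template):
    """Dissimilarity between a GCF window and a stored template.
    Both are 1-D arrays of equal length; lower values mean 'more similar'.
    (The actual CSM metric in PUBMED:31340513 is not specified in the abstract;
    a normalized Euclidean distance is used here as a stand-in.)"""
    w = (window - window.mean()) / (window.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    return np.linalg.norm(w - t) / np.sqrt(len(w))

def detect_status(gcf_stream, template, threshold=0.5):
    """Label each sliding window as 'on-ground' (similar to the loaded-stance
    template) or 'off-ground' (dissimilar). Threshold value is illustrative."""
    win = len(template)
    labels = []
    for i in range(len(gcf_stream) - win + 1):
        d = curve_distance(np.asarray(gcf_stream[i:i + win]), template)
        labels.append("on-ground" if d < threshold else "off-ground")
    return labels
```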
abstract_id: PUBMED:36385782
Predicting vertical ground reaction forces from 3D accelerometry using reservoir computers leads to accurate gait event detection. Accelerometers are low-cost measurement devices that can readily be used outside the lab. However, determining isolated gait events from accelerometer signals, especially foot-off events during running, is an open problem. We outline a two-step approach where machine learning serves to predict vertical ground reaction forces from accelerometer signals, followed by force-based event detection. We collected shank accelerometer signals and ground reaction forces from 21 adults during comfortable walking and running on an instrumented treadmill. We trained one common reservoir computer using segmented data using both walking and running data. Despite being trained on just a small number of strides, this reservoir computer predicted vertical ground reaction forces in continuous gait with high quality. The subsequent foot contact and foot off event detection proved highly accurate when compared to the gold standard based on co-registered ground reaction forces. Our proof-of-concept illustrates the capacity of combining accelerometry with machine learning for detecting isolated gait events irrespective of mode of locomotion.
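The two-step approach described above (a reservoir computer mapping shank accelerometry to vertical ground reaction force, followed by force-based event detection) can be illustrated with a minimal echo-state-style network. Everything below, including reservoir size, spectral radius, leak rate, ridge penalty, and the synthetic signals, is an invented stand-in and not the authors' trained model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res=200, spectral_radius=0.9, input_scale=0.5):
    """Random input and recurrent weights; recurrent matrix rescaled to the target spectral radius."""
    W_in = rng.uniform(-input_scale, input_scale, (n_res, n_in))
    W = rng.normal(size=(n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, inputs, leak=0.3):
    """Drive the leaky-integrator reservoir with the 3-axis accelerometer series."""
    states = np.zeros((len(inputs), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

def train_readout(states, target_vgrf, ridge=1e-3):
    """Ridge-regression readout mapping reservoir states (plus bias) to vertical GRF."""
    S = np.hstack([states, np.ones((len(states), 1))])
    A = S.T @ S + ridge * np.eye(S.shape[1])
    return np.linalg.solve(A, S.T @ target_vgrf)

# Usage sketch with synthetic data standing in for real recordings.
acc = rng.normal(size=(1000, 3))                            # toy 3-D shank acceleration
vgrf = np.clip(np.sin(np.linspace(0, 60, 1000)), 0, None)   # toy vertical force trace
W_in, W = make_reservoir(n_in=3)
states = run_reservoir(W_in, W, acc)
w = train_readout(states, vgrf)
pred_vgrf = np.hstack([states, np.ones((len(states), 1))]) @ w  # feed into event detection
```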
abstract_id: PUBMED:31615404
Women's experience of unplanned out-of-hospital birth in paramedic care. Background: Healthcare literature describes predisposing factors, clinical risk, maternal and neonatal clinical outcomes of unplanned out-of-hospital birth; however, there is little quality research available that explores the experiences of mothers who birth prior to arrival at hospital.
Methods: This study utilised a narrative inquiry methodology to explore the experiences of women who birth in paramedic care.
Results: The inquiry was underscored by 22 narrative interviews of women who birthed in paramedic care in Queensland, Australia between 2011 and 2016. These data identified factors that contributed to the planned hospital birth occurring in the out-of-hospital setting. Women in this study began their story by discussing previous birth experience and their knowledge, expectations and personal beliefs concerning the birth process. Specific to the actual birth event, women reported feeling empowered, confident and exhilarated. However, some participants also identified concerns with paramedic practice: lack of privacy, poor interpersonal skills, and a lack of consent for certain procedures.
Conclusions: This study identified several factors, and subsets of those factors, that contributed to women's experiences of a planned hospital birth occurring in the out-of-hospital setting. Women described opportunities for improvement in the care provided by paramedics, specifically some deficiencies in technical and interpersonal skills.
abstract_id: PUBMED:24835563
Ground reaction forces during treadmill running in microgravity. Astronauts perform treadmill exercise during long-duration space missions to counter the harmful effects of microgravity exposure upon bone, muscle, and cardiopulmonary health. When exercising in microgravity, astronauts wear a harness and bungee system that provides forces that maintain attachment to the treadmill. Typical applied forces are less than body weight. The decreased gravity-replacement force could result in differences in ground-reaction force at a given running speed when compared to those achieved in normal gravity, which could influence the adaptive response to the performed exercise. Seven astronauts (6 m/1 f) who completed approximately 6-month missions on the International Space Station (ISS) completed a preflight (1G) and multiple in-flight (0G) data collection sessions. Ground-reaction forces were measured during running at speeds of 8.0 kph and greater on an instrumented treadmill in the lab and on the ISS. Ground-reaction forces in 0G were less than in 1G for a given speed depending upon the gravity-replacement force, but did increase with increased speed and gravity-replacement force. Ground-reaction forces attained in 1G during slower running could be attained by increasing running speed and/or increasing gravity-replacement forces in 0G. Loading rates in 1G, however, could not be replicated in 0G. While current gravity-replacement force devices are limited in load delivery magnitude, we recommend increasing running speeds to increase the mechanical loads applied to the musculoskeletal system during 0G treadmill exercise, and to potentially increase exercise session efficiency.
Answer: Physicians and paramedics are not interchangeable in the setting of ground forces operations. A study that interviewed 20 ground force physicians with experience in such settings found that there are significant differences between physicians and paramedics in terms of formal education, on-job training, knowledge base, ability to treat a wide variety of medical conditions, ability to perform manual lifesaving procedures, social and moral impact, availability, physical fitness, combat skills, and cost. In a quantitative analysis, physicians scored higher than paramedics for both long-term and short-term missions, with a more pronounced advantage in long-term missions far from friendly facilities. Therefore, while a physician is the preferred choice for all kinds of military ground force missions, a paramedic can be a reasonable substitute for short-duration missions that are close to definitive care (PUBMED:17436776). |
Instruction: Off-pump versus on-pump coronary artery bypass: does number of grafts performed represent a selection bias in comparative studies?
Abstracts:
abstract_id: PUBMED:16212076
Off-pump versus on-pump coronary artery bypass: does number of grafts performed represent a selection bias in comparative studies? Results from a matched cohort comparison. Background: Several retrospective studies comparing off-pump and on-pump coronary surgery and the largest randomized studies published to date showed a lower number of grafts performed in patients submitted to off-pump coronary artery bypass surgery (OPCAB). These findings bring about the question of the general applicability of the results. We eliminated the selection bias correlated with the number of grafts per patient by comparing the short-term outcomes of patients undergoing OPCAB and standard coronary artery bypass grafting (CABG) matched for number of grafts.
Methods: Eighty-seven consecutive patients undergoing OPCAB (group A) were selected from the database of our Institution during a 2-year period. Matching was performed by iterative selection prioritizing, in the following sequence: number of grafts, EuroSCORE, and age. A total of 87 patients operated upon with the on-pump technique represented the control group (group B).
Results: There were no significant differences in preoperative characteristics between the two groups. The number of grafts per patient was 2.2 +/- 0.5 in group A and 2.2 +/- 0.5 in group B. Early mortality did not differ between the two groups and it was 2.2% (2 patients) in group A and 3.4% (3 patients) in group B (p = NS). The incidence of myocardial infarction did not differ between the two groups. No patient in either group had stroke or coma. Five (5.7%) patients in group A and 7 (8.0%) patients in group B had atrial fibrillation (p = NS).
Conclusions: We were unable to demonstrate any significant differences in short-term mortality or morbidity outcome between OPCAB and standard CABG patients. Our findings suggest that excellent results can be obtained with both surgical approaches.
abstract_id: PUBMED:19324136
Fewer grafts performed in off-pump bypass surgery: patient selection or incomplete revascularization? Background: Comparisons of off-pump (OPCAB) versus conventional on-pump coronary artery bypass (CCAB) consistently report fewer grafts per patient with OPCAB. Performing fewer grafts than indicated based on angiographic assessment could result in incomplete revascularization. We questioned whether OPCAB influenced surgeons to perform fewer grafts than needed.
Methods: Preoperative angiographic and surgical data were collected prospectively on 945 patients undergoing coronary artery bypass grafting (370 OPCAB, 575 CCAB) at 8 hospitals between February 1, 2004, and July 31, 2004. The number of grafts needed per patient was determined from the reported number of vessels with angiographic stenoses of 50% or greater, and compared with the number received per patient, stratified by coronary artery bypass grafting technique.
Results: The OPCAB and CCAB groups were demographically similar. The mean number of grafts needed per patient was significantly less in the OPCAB group (2.95 versus 3.48), accounting for fewer grafts received in that group (2.75 versus 3.36). The ratio of grafts (received/needed) was the same in both groups. Patients receiving more than three grafts were more likely to have CCAB (71.2%), whereas those receiving fewer than three grafts were almost as likely to have OPCAB as CCAB (55.5%). The rate of 1-year major adverse events (death, myocardial infarction, repeat revascularization) was the same in OPCAB and CCAB (15.5% versus 14.1%; p = 0.57).
Conclusions: Completeness of revascularization, determined by comparing the number of grafts performed to the number needed, was equivalent in OPCAB and CCAB patients, and 18-month clinical outcomes were equivalent. Preferential selection of patients needing more bypass grafts to CCAB results in the lower mean number of grafts per patient with OPCAB.
abstract_id: PUBMED:28942940
Meta-Analysis Comparing ≥10-Year Mortality of Off-Pump Versus On-Pump Coronary Artery Bypass Grafting. Off-pump coronary artery bypass grafting (CABG) is suggested to be associated with an increase in long-term (≥5-year) all-cause mortality. To determine whether off-pump CABG is associated with an increase in very long-term (≥10-year) all-cause mortality, we performed a meta-analysis of propensity-score matched observational comparative studies of off-pump versus on-pump CABG. MEDLINE and EMBASE were searched through May 2017. A hazard ratio of follow-up (including early) all-cause mortality for off-pump versus on-pump CABG was extracted from each individual study. Study-specific estimates were combined using inverse variance-weighted averages of logarithmic hazard ratios in the random-effects model. Of 164 potentially relevant studies, our search identified 16 propensity-score matched observational comparative studies of off-pump versus on-pump CABG with ≥10-year follow-up enrolling a total of 82,316 patients. A pooled analysis of all the 16 studies demonstrated that off-pump CABG was significantly associated with an increase in all-cause mortality (hazard ratio 1.07, 95% confidence interval 1.03 to 1.12, p for effect = 0.0008; p for heterogeneity = 0.30, I2 = 12%). In a sensitivity analysis, exclusion of any single hazard ratio from the analysis (leave-one-out meta-analysis) did not substantively alter the overall result. There was no evidence of a significant publication bias. In conclusion, off-pump CABG is associated with an increase in very long-term (≥10 years) all-cause mortality compared with on-pump CABG.
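The pooling step described above, inverse variance-weighted averaging of logarithmic hazard ratios under a random-effects model, is sketched below with invented per-study numbers. The DerSimonian-Laird estimator of the between-study variance is used here because it is the most common choice; the abstract does not state which estimator the authors applied.

```python
import numpy as np

def random_effects_pool(log_hr, se):
    """Inverse-variance random-effects pooling of log hazard ratios
    (DerSimonian-Laird estimate of the between-study variance tau^2)."""
    log_hr, se = np.asarray(log_hr), np.asarray(se)
    w = 1.0 / se**2                               # fixed-effect weights
    mu_fe = np.sum(w * log_hr) / np.sum(w)
    Q = np.sum(w * (log_hr - mu_fe) ** 2)         # Cochran's Q
    df = len(log_hr) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                 # between-study variance
    w_re = 1.0 / (se**2 + tau2)                   # random-effects weights
    mu = np.sum(w_re * log_hr) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu), i2

# Hypothetical per-study log HRs and standard errors (not the 16 studies above).
hr, lo, hi, i2 = random_effects_pool(
    log_hr=[0.05, 0.10, 0.02, 0.12], se=[0.04, 0.05, 0.06, 0.07])
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```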
abstract_id: PUBMED:31549144
On-pump and off-pump coronary artery bypass grafting for patients needing at least two grafts: comparative outcomes at 20 years. Objectives: Despite evidence from several randomized controlled trials and observational studies validating short-term safety and efficacy of off-pump coronary artery bypass grafting (CABG), concerns persist regarding the impact of off-pump CABG on long-term survival and freedom from reintervention. This persistent scepticism regarding off-pump CABG prompted us to review our practice of CABG over the last 20 years with a view to comparing the impact of off-pump and on-pump CABG on short-term and long-term outcomes in a high-volume off-pump coronary surgery centre.
Methods: We retrospectively analysed prospectively collected data from the Patients Analysis and Tracking System database (Dendrite Clinical Systems, Oxford, UK) for all isolated first-time CABG procedures with at least 2 grafts performed at our institution from January 1996 to September 2017. Over the study period, 5995 off-pump CABG and 4875 on-pump CABG were performed by surgeons with exclusive off-pump and on-pump practices, respectively. Multivariable logistic regression and the Cox model were used to investigate the effect of off-pump versus on-pump procedures on short-term outcomes and long-term survival. Propensity score matching was used to compare the 2 matched groups.
Results: Off-pump CABG was associated with a lower risk for 30-day mortality [odds ratio (OR) 0.42, 95% confidence interval (CI) 0.32-0.55; P < 0.001], reintubation/tracheostomy (OR 0.58, 95% CI 0.47-0.72; P < 0.001) and re-exploration for bleeding (OR 0.48, 95% CI 0.37-0.62; P < 0.001). The benefit in terms of operative deaths from off-pump was significant in those with Society of Cardio-Thoracic Surgery logistic EuroSCORE >2 (interaction P = 0.04). When compared with on-pump CABG, off-pump CABG did not significantly reduce the risk of stroke (OR 0.96, 95% CI 0.88-1.12; P = 0.20) and postoperative haemofiltration (OR 0.98, 95% CI 0.86-1.20; P = 0.35). At the median follow-up of 12 years (interquartile range 6-17, max 21), off-pump CABG did not affect late survival [log rank P = 0.24; hazard ratio (HR) 0.95, 95% CI 0.89-1.02] or the need for reintervention (log rank P = 0.12; HR 1.19, 95% CI 0.95-1.48).
Conclusions: This large volume, single-centre study with the longest reported follow-up confirms that off-pump CABG performed by experienced surgeons, who perform only off-pump procedures in a high-volume off-pump coronary surgery centre, is associated with lower risk of operative deaths, fewer postoperative complications and similar 20-year survival and freedom from reintervention rates compared with on-pump CABG.
abstract_id: PUBMED:31409492
Off-pump versus on-pump coronary artery bypass grafting in moderate renal failure. Objectives: Off-pump coronary artery bypass (OPCAB) may benefit select high-risk patients. We sought to analyze the long-term outcomes of OPCAB versus on-pump coronary artery bypass (ONCAB) in patients with moderate renal failure.
Methods: A retrospective cohort analysis of primary isolated CAB surgery performed in Ontario, Canada, from October 2008 to March 2016 in the CorHealth Ontario Cardiac Registry identified 50,115 cases. Of these, 7782 (15.5%) had estimated glomerular filtration rate (eGFR) of 30 to 59 mL/min/1.73 m2. OPCAB was compared to ONCAB after propensity score matching.
Results: Following propensity score matching, 1578 patient pairs were formed. The total number of bypass grafts was higher in ONCAB (3.31 ± 1.01 vs 3.12 ± 1.14; P < .01) and more arterial grafts were used in OPCAB (1.55 ± 0.71 vs 1.14 ± 0.58; P < .01). OPCAB was associated with lower rates of in-hospital stroke (0.7% vs 2.2%; P < .01), renal failure requiring dialysis (1.2% vs 2.9%; P < .01), and blood transfusion (52.4% vs 69.3%; P < .01). There was no difference in perioperative mortality (2.4% vs 3.0%; P = .36) between OPCAB and ONCAB, respectively. At 8-year follow-up, survival probability was not different when comparing OPCAB versus ONCAB: 62% versus 65%, respectively (hazard ratio, 0.98; 95% confidence interval, 0.84-1.13; P = .38). Cumulative incidence of permanent dialysis did not differ at 8-year follow-up: 7% versus 7%, respectively (hazard ratio, 1.01; 95% confidence interval, 0.72-1.43; P = .74).
Conclusions: OPCAB is associated with improved in-hospital renal outcomes, but is not associated with changes in short- or long-term mortality, or with the long-term cumulative incidence of end-stage renal failure requiring permanent dialysis in patients with moderate renal failure.
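Several of the comparisons in this set (e.g., PUBMED:28942940, PUBMED:31409492, PUBMED:32173100) rest on propensity-score matching of off-pump and on-pump patients. A minimal sketch of the usual recipe (a logistic model for the probability of receiving off-pump surgery, followed by greedy 1:1 nearest-neighbour matching on the logit within a caliper) is given below; the covariate matrix, caliper width, and function names are illustrative assumptions, not details taken from any of the cited studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated, caliper=0.2):
    """1:1 nearest-neighbour matching on the logit of the propensity score.

    X        : (n, p) array of baseline covariates
    treated  : (n,) boolean array, True = off-pump (treatment)
    caliper  : maximum allowed distance, in SD of the logit score
    Returns a list of (treated_index, control_index) matched pairs.
    """
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    logit = np.log(ps / (1 - ps))
    max_dist = caliper * logit.std()

    t_idx = np.where(treated)[0]
    c_idx = list(np.where(~treated)[0])
    pairs = []
    for i in t_idx:                     # greedy matching without replacement
        if not c_idx:
            break
        d = np.abs(logit[c_idx] - logit[i])
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            pairs.append((i, c_idx.pop(j)))
    return pairs
```

After matching, covariate balance is typically checked (for example with standardized mean differences) before outcomes are compared within the matched pairs.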
abstract_id: PUBMED:32173100
Ten-year outcomes after off-pump versus on-pump coronary artery bypass grafting: Insights from the Arterial Revascularization Trial. Objective: We performed a post hoc analysis of the Arterial Revascularization Trial to compare 10-year outcomes after off-pump versus on-pump surgery.
Methods: Among 3102 patients enrolled, 1252 (40% of total) and 1699 patients received off-pump and on-pump surgery (151 patients were excluded because of other reasons); 2792 patients (95%) completed 10-year follow-up. Propensity matching and mixed-effect Cox model were used to compare long-term outcomes. Interaction term analysis was used to determine whether bilateral internal thoracic artery grafting was a significant effect modifier.
Results: One thousand seventy-eight matched pairs were selected for comparison. A total of 27 patients (2.5%) in the off-pump group required conversion to on-pump surgery. The off-pump and on-pump groups received a similar number of grafts (3.2 ± 0.89 vs 3.1 ± 0.8; P = .88). At 10 years, when compared with on-pump, there was no significant difference in death (adjusted hazard ratio for off-pump, 1.1; 95% confidence interval, 0.84-1.4; P = .54) or the composite of death, myocardial infarction, stroke, and repeat revascularization (adjusted hazard ratio, 0.92; 95% confidence interval, 0.72-1.2; P = .47). However, off-pump surgery performed by low volume off-pump surgeons was associated with a significantly lower number of grafts, increased conversion rates, and increased cardiovascular death (hazard ratio, 2.39; 95% confidence interval, 1.28-4.47; P = .006) when compared with on-pump surgery performed by on-pump-only surgeons.
Conclusions: The findings showed that in the Arterial Revascularization Trial, off-pump and on-pump techniques achieved comparable long-term outcomes. However, when off-pump surgery was performed by low-volume surgeons, it was associated with a lower number of grafts, increased conversion, and a higher risk of cardiovascular death.
abstract_id: PUBMED:31826727
Hybrid Coronary Revascularization Versus Off-Pump Coronary Artery Bypass Grafting: Comparative Effectiveness Analysis With Long-Term Follow-up. Background Hybrid coronary revascularization (HCR) involves the integration of coronary artery bypass grafting (CABG) and percutaneous coronary intervention to treat multivessel coronary artery disease. Our objective was to perform a comparative analysis with long-term follow-up between HCR and conventional off-pump CABG. Methods and Results We compared all double off-pump CABG (n=216) and HCR (n=147; robotic-assisted minimally invasive direct CABG of the left internal thoracic artery to the left anterior descending artery and percutaneous coronary intervention to one of the non-left anterior descending vessels) performed at a single institution between March 2004 and November 2015. To adjust for the selection bias of receiving either off-pump CABG or HCR, we performed a propensity score analysis using inverse-probability weighting. Both groups had similar results in terms of re-exploration for bleeding, perioperative myocardial infarction, stroke, blood transfusion, in-hospital mortality, and intensive care unit length of stay. HCR was associated with a higher in-hospital reintervention rate (CABG 0% versus HCR 3.4%; P=0.03), lower prolonged mechanical ventilation (>24 hours) rate (4% versus 0.7%; P=0.02), and shorter hospital length of stay (8.1±5.8 versus 4.5±2.1 days; P<0.001). After a median follow-up of 81 (48-113) months for the off-pump CABG and 96 (53-115) months for HCR, the HCR group of patients had a trend toward improved survival (85% versus 96%; P=0.054). Freedom from any form of revascularization was similar between the 2 groups (92% versus 91%; P=0.80). Freedom from angina was better in the HCR group (73% versus 90%; P<0.001). Conclusions HCR seems to provide, in selected patients, a shorter postoperative recovery, with similar excellent short- and long-term outcomes when compared with standard off-pump CABG.
abstract_id: PUBMED:18828462
On-pump versus off-pump coronary artery bypass surgery: a comparison of two consecutive series. Background Data: On-pump and off-pump techniques are both widely used approaches to coronary artery bypass surgery. Yet, statistically valid comparisons of the results between the two groups have been limited, in part, by patient selection bias.
Methods: Two hundred sixty-nine consecutive patients undergoing off-pump coronary artery bypass and 379 consecutive patients undergoing on-pump bypass were compared in a retrospective chart review. The two groups were compared for preoperative characteristics as well as operative outcomes. To avoid selection bias, no on-pump coronary artery bypass surgery was performed during the off-pump coronary artery bypass series, and no off-pump procedures were performed during the on-pump series.
Results: There was no statistical difference in the groups pre-operatively except that there were slightly more patients with three-vessel disease in the on-pump group and more patients with single vessel disease in the off-pump group. Significant benefits were found in the off-pump group in that they required fewer re-operations for bleeding (0.8% vs. 5.7%, p-value < 0.002), and they left the hospital with higher hematocrits (32.1% vs. 30.8%, p-value < 0.001). Patients who had off-pump coronary artery bypass also had fewer sternal dehiscences (0% vs. 1.8%, p-value < 0.027). More patients receiving off-pump bypass demonstrated the need for prolonged mechanical ventilation (8.2% vs. 2.5%, p-value < 0.027), and they also had significantly fewer grafts (3 vs. 3.2, p-value < 0.005). There was no statistically significant difference among the other outcomes investigated.
Conclusions: While there were no significant differences in some of the outcomes studied, others showed significant advantages in favor of off-pump surgery. Substantial advantages in off-pump coronary artery bypass were seen in bleeding reduction, improved sternal healing, and higher discharge hematocrits despite fewer transfusions. These advantages and others reported in specific high-risk patient groups, combined with documented cost reductions, warrant continued use of off-pump techniques. Off-pump coronary artery bypass is a safe, proven method with significant advantages over on-pump methods and, when appropriate, should be offered to patients undergoing coronary bypass surgery.
abstract_id: PUBMED:15975355
On-pump versus off-pump surgical revascularization for left main stem stenosis: risk adjusted outcomes. Background: Recent publications have shown coronary surgery is safe and effective in patients with critical left main stem stenosis when using off-pump coronary surgery techniques. However, these studies were small and did not adjust for differences in case mix.
Methods: Between April 1997 and March 2003, 1,197 consecutive patients with critical left main stem stenosis (> 50%) underwent coronary surgery. Two hundred and fifty-nine (21.6%) of these patients had off-pump coronary surgery, while 938 (78.4%) received on-pump coronary surgery. Multivariate logistic regression and Cox proportional hazards analysis were used to assess the effect of off-pump coronary surgery on outcomes, while adjusting for patient characteristics (treatment selection bias). Treatment selection bias was controlled by constructing a propensity score from core patient characteristics. The propensity score was the probability of receiving off-pump coronary surgery and was included along with the comparison variable in the multivariable analyses of outcome.
Results: After adjusting for the propensity score, the requirement for inotropic support (22.4% versus 35.3%; p < 0.001) and prolonged length of stay (5.3% versus 9.3%; p = 0.034) were significantly reduced in patients receiving off-pump coronary surgery. There was a trend to suggest that off-pump patients had a lower incidence of stroke and chest infection. The adjusted freedom from death in off-pump patients at 2 years was 94.6% compared with 93.6% for on-pump patients (p = 0.54).
Conclusions: After risk adjustment, patients with critical left main stem stenosis can undergo off-pump coronary surgery safely, with results comparable with on-pump coronary surgery.
abstract_id: PUBMED:24609839
Off-pump versus on-pump revascularization in females: a meta-analysis of observational studies. Background: Coronary revascularization in female patients presents several challenges, including smaller target vessels and smaller conduits. Furthermore, late presentation and more co-morbidities than males may increase complication rates. The aim of this study was to assess whether off-pump coronary artery bypass (OPCAB) improves outcomes when compared to on-pump coronary artery bypass (ONCAB) in the female population.
Methods: A systematic literature review identified six observational studies, incorporating 23313 patients (n=9596 OPCAB, 13717 ONCAB). These were meta-analyzed using random effects modeling. Heterogeneity, subgroup analysis, quality scoring and publication bias were assessed. The primary endpoints were 30-day mortality and major cardiac, respiratory and renal complications. The secondary endpoint was the number of grafts per patient.
Results: No statistically significant difference was observed in 30-day mortality between the OPCAB and ONCAB groups (4.8% vs. 0.7%; OR 0.96; 95% CI [0.41, 2.24], p=0.92). Significant inter-study heterogeneity was also present (I2=94%) and was not explained by study size or quality. Peri-operative myocardial infarction (OR 0.65; 95% CI [0.51, 0.84], p=0.0009) was significantly lower with OPCAB without significant heterogeneity; however, OPCAB did not significantly alter other morbidity outcomes. OPCAB was associated with a trend towards fewer grafts per patient than ONCAB.
Conclusions: OPCAB is a safe alternative to ONCAB in the surgical revascularisation of female patients and may reduce post-operative myocardial infarction (MI). However, this does not translate into a reduction in 30-day mortality and OPCAB does not significantly improve other cardiovascular, renal or neurological outcomes. Late outcome data remains lacking and a well-structured, randomized trial is required to answer vital questions regarding the effect of OPCAB on women in the long-term.
Answer: Yes, the number of grafts performed can represent a selection bias in comparative studies of off-pump versus on-pump coronary artery bypass surgery (CABG). Several studies have indicated that patients undergoing off-pump coronary artery bypass surgery (OPCAB) tend to receive fewer grafts compared to those undergoing standard on-pump CABG (PUBMED:16212076; PUBMED:19324136). This discrepancy raises concerns about whether OPCAB influences surgeons to perform fewer grafts than needed, potentially leading to incomplete revascularization (PUBMED:19324136).
To address this potential bias, one study matched patients undergoing OPCAB and standard CABG for the number of grafts, EuroSCORE, and age, finding no significant differences in short-term mortality or morbidity outcomes between the two groups (PUBMED:16212076). Another study found that the completeness of revascularization, determined by comparing the number of grafts performed to the number needed, was equivalent in OPCAB and CCAB patients, suggesting that the lower mean number of grafts per patient with OPCAB was due to preferential selection of patients needing more bypass grafts to CCAB (PUBMED:19324136).
However, a meta-analysis comparing long-term mortality (≥10 years) found that off-pump CABG was associated with an increase in all-cause mortality compared to on-pump CABG, suggesting that the number of grafts could have long-term implications (PUBMED:28942940). Another study with a 20-year follow-up indicated that off-pump CABG performed by experienced surgeons in a high-volume off-pump coronary surgery center was associated with lower risk of operative deaths, fewer postoperative complications, and similar long-term survival and freedom from reintervention rates compared with on-pump CABG (PUBMED:31549144).
In conclusion, while short-term outcomes may not differ significantly when matching for the number of grafts, the number of grafts performed can still represent a selection bias in comparative studies of off-pump versus on-pump CABG, and it may have implications for long-term outcomes. |
Instruction: Is this D vitamin to worry about?
Abstracts:
abstract_id: PUBMED:37369545
Examining Associations Between Metacognitive Beliefs and Type II Worry: The Specificity of Negative Metacognitive Beliefs to State Type II Worry During a Worry Episode. The metacognitive model of generalized anxiety disorder (GAD) considers Type II worry, which represents one's tendency to negatively appraise worry, as a defining feature of GAD, and negative metacognitive beliefs are central to eliciting Type II worry during worry episodes. Extant research has found that individuals experiencing GAD report elevated Type II worry, and that negative metacognitive beliefs correlate with Type II worry. However, because of how Type II worry was assessed in existing studies, it remains unclear if negative metacognitive beliefs relate to state Type II worry specifically during a worry episode. This study sought to fill that gap in the existing literature among a sample of individuals experiencing elevated GAD symptom severity (N = 106). Participants completed an assessment of GAD symptom severity and metacognitive beliefs and later attended an in-person study session where they completed a worry induction, after which state Type II worry, conceptualized as the strength of negative appraisals of worry, was assessed. Metacognitive beliefs generally positively correlated with state Type II worry, with negative metacognitive beliefs being the only metacognitive belief domain that correlated with state Type II worry in multivariate analyses. Implications for how these results support the metacognitive model of GAD and treatment implications are discussed.
abstract_id: PUBMED:30013842
Assessing metacognitive beliefs about worry: validation of German versions of the Why Worry Scale II and the Consequences of Worry Scale. Background: Metacognitive beliefs have been proposed to play a key role in initiating and maintaining worry. The Why Worry-Scale-II (WW-II) and Consequences of Worry Scale (COWS) are self-report questionnaires assessing positive and negative metacognitive beliefs. The main goal of this study was to validate German versions of these two questionnaires.
Method: N = 603 participants completed a questionnaire battery, including the two self-report measures of metacognitive beliefs. We conducted confirmatory factor analyses, calculated internal consistencies, and examined convergent and divergent validity. In addition, the questionnaires' power in predicting worry, repetitive negative thinking (RNT), and generalized anxiety disorder (GAD) symptoms was investigated.
Results: The factor structure of the original versions could be replicated for both measures. Furthermore, the translated questionnaires demonstrated excellent internal consistency and evidence of convergent and divergent validity. Importantly, they also possessed predictive power in explaining worry, RNT and GAD symptoms, even over and above the Metacognitions Questionnaire-30 (MCQ-30) as the current gold standard.
Conclusions: Overall, our findings suggest that the WW-II and COWS show solid psychometric properties and are useful in measuring metacognitive beliefs independently from the MCQ-30.
abstract_id: PUBMED:35810601
Visual worry in patients with schizophrenia. Objective: Worrying is a pervasive transdiagnostic symptom in schizophrenia. It is most often associated in the literature with verbal modality due to many studies of its presence in generalised anxiety disorder. The current study aimed to elucidate worry in different sensory modalities, visual and verbal, in individuals with schizophrenia.
Method: We tested persons with schizophrenia (n = 92) and healthy controls (n = 138) in a cross-sectional design. We used questionnaires of visual and verbal worry (original Worry Modality Questionnaire), trait worry (Penn State Worry Questionnaire) and general psychopathology symptoms (General Functioning Questionnaire-58 and Brief Psychiatric Rating Scale).
Results: Both visual and verbal worry were associated with psychotic, anxiety and general symptoms of psychopathology in both groups with medium to large effect sizes. Regression analyses indicated that visual worry was a single significant predictor of positive psychotic symptoms in a model with verbal and trait worry, both in clinical and control groups (β's of 0.49 and 0.38, respectively). Visual worry was also a superior predictor of anxiety and general psychopathology severity (β's of 0.34 and 0.37, respectively) than verbal worry (β's of 0.03 and -0.02, respectively), under control of trait worry, in the schizophrenia group. We also proposed two indices of worry modality dominance and analysed profiles of dominating worry modality in both groups.
Conclusions: Our study is the first to demonstrate that visual worry might be of specific importance for understanding psychotic and general psychopathology symptoms in persons with schizophrenia.
abstract_id: PUBMED:34973394
Mind the "worry fatigue" amid Omicron scares. In addition to worry, the accumulated unknowns and uncertainties about COVID-19 may also result in "worry fatigue" that could harm the public's vigilance towards the pandemic and their adherence to preventive measures. Worry could be understood as future-oriented concerns and challenges that could result in negative outcomes, whereas worry fatigue is the feeling of extreme burden and burnout associated with too much unresolved worry. As the world embraces its second COVID-19 winter, along with the pandemic-compromised holiday season, the Omicron variant has been declared a variant of concern by the World Health Organization. However, the fluid and unpredictable nature of COVID-19 variants dictates that, instead of definitive answers that could ease people's worry about Omicron, divisive debates and distracting discussions that could further exacerbate people's worry fatigue might be the norm in the coming months. This means that, amid the ever-changing public health guidance, the forever-breaking news reports, and the always-debatable media analyses, government and health officials need to be more invested in addressing people's potential worry and worry fatigue about the pandemic, to ensure the public's rigorous cooperation and compliance with safety measures.
abstract_id: PUBMED:24839945
The language of worry: examining linguistic elements of worry models. Despite strong evidence that worry is a verbal process, studies examining linguistic features in individuals with generalised anxiety disorder (GAD) are lacking. The aim of the present study is to investigate language use in individuals with GAD and controls based on GAD and worry theoretical models. More specifically, the degree to which linguistic elements of the avoidance and intolerance of uncertainty worry models can predict diagnostic status was analysed. Participants were 19 women diagnosed with GAD and 22 control women and their children. After participating in a diagnostic semi-structured interview, dyads engaged in a free-play interaction where mothers' language sample was collected. Overall, the findings provided evidence for distinctive linguistic features of individuals with GAD. That is, after controlling for the effect of demographic variables, present tense, future tense, prepositions and number of questions correctly classified those with GAD and controls such that a considerable amount of the variance in diagnostic status was explained uniquely by language use. Linguistic confirmation of worry models is discussed.
abstract_id: PUBMED:34456783
Association Between Daily Worry, Pathological Worry, and Fear of Progression in Patients With Cancer. Background: Fear of progression (FoP), or fear of cancer recurrence (FCR), is characterized by worries or concerns about negative illness-related future events. Actually, to worry is a common cognitive process that, in its non-pathological form, belongs to daily life. However, worry can also become pathological appearing as a symptom of mental disorders. This study aimed at investigating the associations among daily worry, pathological worry, and FoP in patients with cancer. Methods: This is a cross-sectional study that includes 328 hospitalized patients with cancer. Patients filled out the FoP Questionnaire (FoP-Q), the Worry Domains Questionnaire (WDQ) for the assessment of daily worry, and the Penn State Worry Questionnaire (PSWQ) for the assessment of pathological worry. Depressive, anxiety, and somatic symptoms were measured with modules of the Patient Health Questionnaire [Patient Health Questionnaire-Depressive Symptoms (PHQ-2), Generalized Anxiety Disorder-2 (GAD-2), and Patient Health Questionnaire-Somatic Symptoms (PHQ-15)]. Furthermore, a structured clinical interview was conducted for the assessment of anxiety disorders. The hierarchical multiple linear regression analysis was used to identify factors independently associated with FoP. Results: Mean age of the participants was M = 58.5 years (SD = 12.8), and 64.6% were men. FoP and worry were significantly intercorrelated (r = 0.58-0.78). The level of FoP was most strongly associated with daily worry (β = 0.514, p < 0.001), followed by pathological worry (β = 0.221, p < 0.001). Further significant determinants were younger age and depressive and anxiety symptoms. Clinical variables were not independently associated with FoP. The final model explained 74% of the variance. Discussion: Fear of progression is strongly associated with daily worry and pathological worry. These results bring up the question of whether FoP is an expression of a general tendency to worry. Whether a general tendency to worry, in fact, represents an independent vulnerability factor for experiencing FCR/FoP needs to be investigated in a longitudinal research design.
abstract_id: PUBMED:36628379
COVID-19 Worry and Related Factors: Turkish Adaptation and Psychometric Properties of the COVID-19 Worry Scale. Background: This study aimed to evaluate the psychometric properties of the Coronavirus Worry Scale and related factors with COVID-19 worry.
Methods: The data were collected through an online survey from 846 participants, and the final sample was 804 after excluding missing data. The psychometric properties of the Turkish Coronavirus Worry Scale were assessed through exploratory factor analysis, confirmatory factor analysis, internal consistency reliability analysis, and Pearson product-moment correlations with other psychological constructs. Finally, one-way analysis of variance and the independent samples t-test were used to compare the Coronavirus Worry Scale scores across different socio-demographic and clinical variables. Higher Coronavirus Worry Scale scores suggested higher COVID-19 worry.
Results: Exploratory factor analysis revealed the single-factor structure of the Turkish Coronavirus Worry Scale, and confirmatory factor analysis confirmed this single-factor structure with good model fit. This scale had good internal consistency reliability (Cronbach's α = 0.92, McDonald's ω = 0.92). The Coronavirus Worry Scale scores were significantly positively correlated with the Coronavirus Anxiety Scale (r = 0.41, P < .01), Fear of COVID-19 Scale (r = 0.67, P < .01), Obsession with COVID-19 Scale (r = 0.54, P < .01), and Depression Anxiety Stress Scale-21 (r = 0.36, P < .01). COVID-19 worry was higher in females, those who had a chronic disease, those who had lost first-degree or other relatives or close friends to COVID-19, or those who had never been vaccinated for COVID-19. Those who obeyed the COVID-19 rules, such as wearing masks and physical distancing, had higher Coronavirus Worry Scale scores. Also, those who avoided crowded environments to protect themselves from COVID-19 transmission had higher Coronavirus Worry Scale scores.
Conclusion: These findings show that the Turkish Coronavirus Worry Scale is a valid and reliable instrument for assessing COVID-19 worry.
abstract_id: PUBMED:26511764
Reducing worry and subjective health complaints: A randomized trial of an internet-delivered worry postponement intervention. Objectives: Several studies have shown that perseverative, worrisome thoughts are prospectively related to subjective health complaints (SHC) and that a short worry postponement intervention can decrease these complaints. As SHC and worry are prevalent and costly, we tested whether the intervention can be offered online to reduce these complaints in the general population.
Design: A randomized parallel-group trial was conducted with self-selected participants from the general population.
Methods: Via the research website, 996 participants were instructed to register their worrying for 6 consecutive days. The intervention group was instructed to postpone worry to a special 30-min period in the early evening. The Subjective Health Complaints inventory, as administered before and after the intervention, and daily worry frequency and duration were considered the primary outcomes.
Results: Three hundred and sixty-one participants completed the study. Contrary to our expectation, the registration group (n = 188) did not differ from the intervention group (n = 163) in SHC (ηp² = .000, CI [0.000-0.003]), or in worry frequency or duration. Nevertheless, the different worry parameters were moderately related to SHC (r between .238 and .340, p ≤ .001).
Conclusions: In contrast to previous studies using paper-and-pencil versions of the worry postponement intervention, this study suggests that a direct online implementation was not effective in reducing SHC and worry. Overall, participants had high trait worry levels and reported difficulty with postponing worrying. Reducing SHC and worries via the Internet might require more elaborate interventions that better incorporate the advantages of delivering interventions online.
Statement Of Contribution: What is already known on this subject? The perseverative cognition hypothesis argues that perseverative cognition, such as worry and rumination, acts as a mediator by which psychosocial stress may produce negative health effects. Prior research has indeed shown that worry and subjective health complaints (SHC) are prospectively related, but causality studies - that is, showing that changes in worry induce changes in health outcomes - are scarce and have mainly been conducted in young samples. These studies showed that reducing worry, using a worry postponement intervention, can reduce daily worrying and SHC. What does this study add? Trait and daily worrying are associated with SHC. An online worry postponement intervention is ineffective in reducing worry and SHC. Paper-and-pencil interventions cannot directly be used as online interventions.
abstract_id: PUBMED:27792968
Thinking about worry: Investigation of the cognitive components of worry in children. Background: Although worry has been extensively studied in adults, investigation of worry and its associated cognitive variables remains in its infancy in paediatric samples.
Aims: This study aimed to investigate 1) whether the child cognitive variables of intolerance of uncertainty (IU), positive beliefs about worry (PBW), negative beliefs about worry (NBW), negative problem orientation (NPO) and cognitive avoidance (CA) were associated with child worry, 2) whether parental worry and cognitive variables were associated with child worry, 3) whether the relationship between child and parent worry was mediated by child cognitive variables and 4) whether the relationships between specific parent cognitive factors and child worry were mediated by specific child cognitive factors.
Method: Participants were 114 children plus one of their parents. Parents and children completed questionnaires assessing each of the five cognitive variables and worry.
Results: When examined together, child NBW and CA significantly predicted child worry. Parent worry, IU and CA were significantly and positively related to child worry. The relationship between parent worry and child worry was mediated by child NBW and CA. The relationship between parent IU, NPO and CA and child worry was mediated by child IU, NPO and CA respectively.
Conclusions: Child NBW and CA may be particularly important to child worry and may represent potential treatment targets. Parental worry and cognitive variables may play a role in the development and / or maintenance of child worry.
abstract_id: PUBMED:37873941
An examination of worry and self-distancing as coping strategies for anxiety-provoking experiences in individuals high in worry. Objectives: This preliminary online study investigated the short-term effects of self-distancing, worry, and distraction on anxiety and worry-related appraisals among individuals high in worry.
Design And Methods: N = 104 community members high in trait worry were randomly assigned to think about a personally identified worry-provoking situation using self-distancing (SC), worry (WC), or distraction (DC). Participants rated their anxiety (Visual Analogue Scale for Anxiety) and appraisals of the situation (Perceived Probability, Coping, and Cost Questions) at post-task and one-day follow-up.
Results: Mixed factorial ANOVAs revealed an increase in anxiety within the WC (d = .475) and no difference in anxiety within the SC (d = .010) from pre- to post-task. There was no difference in anxiety within the DC (p = .177). Participants within the SC reported a decrease in the perceived cost associated with their identified situation from pre- to post-task (d = .424), which was maintained at one-day follow-up (d = .034). Participants reported an increase in perceived ability to cope from post-task to one-day follow-up (d = .236), and from pre-task to one-day follow-up (d = .338), regardless of condition.
Conclusions: Self-distancing may prevent increases in anxiety and catastrophizing while reflecting on a feared situation.
Answer: The provided abstracts do not contain any information regarding vitamin D or its relation to worry or any health concerns. The abstracts focus on various aspects of worry, such as metacognitive beliefs about worry, worry in different sensory modalities, worry fatigue amid the COVID-19 pandemic, linguistic elements of worry models, associations between worry and health complaints, and coping strategies for anxiety-provoking experiences in individuals with high levels of worry. None of the abstracts mention vitamin D or its potential effects on worry or mental health. Therefore, based on the provided abstracts, there is no information to suggest that vitamin D is something to worry about.
Instruction: Immediate reexploration for the perioperative neurologic event after carotid endarterectomy: is it worthwhile?
Abstracts:
abstract_id: PUBMED:11107077
Immediate reexploration for the perioperative neurologic event after carotid endarterectomy: is it worthwhile? Purpose: When managing a new neurologic deficit after carotid endarterectomy (CEA), the surgeon is often preoccupied with determining the cause of the problem, requesting diagnostic tests, and deciding whether the patient should be surgically reexplored. The goal of this study was to analyze a series of perioperative neurologic events and to determine if careful analysis of their timing and mechanisms can predict which cases are likely to improve with reoperation.
Methods: A review of 2024 CEAs performed from 1985 to 1997 revealed 38 patients who manifested a neurologic deficit in the perioperative period (1.9%). These cases form the focus of this analysis.
Results: The causes of the events included intraoperative clamping ischemia in 5 patients (13.2%); thromboembolic events in 24 (63.2%); intracerebral hemorrhage in 5 (13.2%); and deficits unrelated to the operated artery in 4 (10.5%). Neurologic events manifesting in the first 24 hours after surgery were significantly more likely to be caused by thromboembolic events than by other causes of stroke (88.0% vs. 12.0%, P<.002); deficits manifesting after the first 24 hours were significantly more likely to be related to other causes. Of 25 deficits manifesting in the first 24 hours after surgery, 18 underwent immediate surgical reexploration. Intraluminal thrombus was noted in 15 of the 18 reexplorations (83.3%); any technical defects were corrected. After the 18 reexplorations, in 12 cases there was either complete resolution of or significant improvement in the neurologic deficit that had been present (66.7%).
Conclusions: Careful analysis of the timing and presentation of perioperative neurologic events after CEA can predict which cases are likely to improve with reoperation. Neurologic deficits that present during the first 24 hours after CEA are likely to be related to intraluminal thrombus formation and embolization. Unless another etiology for stroke has clearly been established, we think immediate reexploration of the artery without other confirmatory tests is mandatory to remove the embolic source and correct any technical problems. This will likely improve the neurologic outcome in these patients, because an uncorrected situation would lead to continued embolization and compromise.
abstract_id: PUBMED:37839660
Impact of intraoperative blood products, fluid administration, and persistent hypothermia on bleeding leading to reexploration after cardiac surgery. Objective: Risk factors for severe postoperative bleeding after cardiac surgery remain multiple and incompletely elucidated. We evaluated the impact of intraoperative blood product transfusions, intravenous fluid administration, and persistently low core body temperature (CBT) at intensive care unit arrival on risk of perioperative bleeding leading to reexploration.
Methods: We retrospectively queried our tertiary care center's Society of Thoracic Surgeons Institutional Database for all index, on-pump, adult cardiac surgery patients between July 2016 and September 2022. Intraoperative fluid (crystalloid and colloid) and blood product administrations, as well as perioperative CBT data, were harvested from electronic medical records. Linear and nonlinear mixed models, treating surgeon as a random effect to account for inter-surgeon practice differences, were used to assess the association between above factors and reexploration for bleeding.
Results: Of 4037 patients, 151 (3.7%) underwent reexploration for bleeding. Reexplored patients experienced remarkably greater postoperative morbidity (23% vs 6%, P < .001) and 30-day mortality (14% vs 2%, P < .001). In linear models, progressively increasing IV crystalloid administration (adjusted odds ratio, 1.11, 95% confidence interval, 1.03-1.19) and decreasing CBT on intensive care unit arrival (adjusted odds ratio, 1.20; 95% confidence interval, 1.05-1.37) were associated with greater risk of bleeding leading to reexploration. Nonlinear analysis revealed increasing risk after ∼6 L of crystalloid administration and a U-shaped relationship between CBT and reexploration risk. Intraoperative blood product transfusion of any kind was not associated with reexploration.
Conclusions: We found evidence of both dilution- and hypothermia-related effects associated with perioperative bleeding leading to reexploration in cardiac surgery. Interventions targeting modification of such risk factors may decrease the rate of this complication.
abstract_id: PUBMED:2805307
Reexploration for thrombosis in carotid endarterectomy. We reviewed the records of patients undergoing carotid endarterectomy and manifesting either postoperative stroke or thrombosis by oculopneumoplethysmography (OPG-Gee) to analyze the etiology of stroke and to determine the indications for reexploration. Of 900 consecutive elective endarterectomies performed during an 8-year period, 41 patients experienced a perioperative stroke, carotid thrombosis, or both. These patients were subdivided into three groups: group 1, 22 patients with perioperative stroke and carotid thrombosis; group 2, six patients with carotid thrombosis but without symptoms; and group 3, 13 patients with postoperative stroke but no thrombosis. In group 1, 17 patients were reexplored (group 1a), and five were observed without reexploration (group 1b). In group 2, three of the patients were reexplored (group 2a), and the remaining three were observed (group 2b). None of the group 3 patients were reexplored. In group 1a, four (23%) patients awoke from anesthesia with neurological deficits, whereas in group 3, nine (69%) patients awoke with such deficits. Follow-up at 30 days revealed that 76% of group 1a patients demonstrated improvement in symptoms, whereas similar results were seen in only 20% of group 1b patients and 23% of group 3 patients. These trends were maintained throughout the follow-up period of 1-5 years. Those patients who were asymptomatic, group 2, with thrombosis were more likely to have been operated on for asymptomatic carotid stenosis. Thrombosis was the most common cause of postoperative stroke (63%) in patients after carotid endarterectomy.(ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:33487041
Impact of intraoperative neurologic deficits in carotid endarterectomy under regional anesthesia. Objective: Patients undergoing carotid endarterectomy (CEA) may experience neurologic deficits during carotid cross-clamping due to secondary cerebral hypoperfusion. An associated risk of postoperative stroke incidence is also well established. This work aimed to assess the postoperative adverse events related to neurologic deficits in the awake test after clamping and to determine their predictive factors. Methods. From January 2012 to January 2018, 79 patients from a referral hospital who underwent CEA with regional anesthesia for carotid stenosis and manifested neurologic deficits were gathered. Consecutively selected controls (n = 85) were submitted to the same procedure without developing neurological changes. Postoperative complications such as stroke, myocardial infarction, all-cause death, and Clavien-Dindo classification were assessed 30 days after the procedure. Univariate and binary logistic regressions were performed for data assessment. Results. Patients with clamping-associated neurologic deficits were significantly more obese than the control group (aOR = 9.30; 95% CI: 2.57-33.69; p = .01). Lower degree of ipsilateral stenosis and higher degree of contralateral stenosis were independently related to clamping intolerance (aOR = 0.70; 95% CI: 0.49-0.99; p = .047 and aOR = 1.30; 95% CI: 1.06-1.50; p = .009, respectively). Neurologic deficits were a main 30-day stroke predictor (aOR = 4.30; 95% CI: 1.10-16.71; p = .035). Conclusions. Neurologic deficits during carotid clamping are a predictor of perioperative stroke. Body mass index > 30 kg/m2, a lower degree of ipsilateral stenosis, and a higher degree of contralateral stenosis are independent predictors of neurologic deficits and, therefore, might play a role in the prevention of procedure-related stroke.
abstract_id: PUBMED:10805896
Immediate postoperative thrombolytic therapy: an aggressive strategy for neurologic salvage when cerebral thromboembolism complicates carotid endarterectomy. A 42-year-old man with a high-grade left internal carotid artery (ICA) stenosis demonstrated on a duplex scan was referred to us. A cerebral arteriogram confirmed a greater than 90% left internal carotid stenosis, but with the unexpected finding of a moderate amount of thrombus in the proximal ICA. He underwent emergent left carotid endarterectomy, but during the operation, only a small amount of thrombus was identified as adherent to the atherosclerotic plaque. He awakened in the operating room with a dense right hemiplegia and aphasia. Immediate reexploration demonstrated a patent endarterectomy site, a distal thromboembolectomy was performed without extraction of thrombus, and urokinase (250,000 Units) was infused into the distal ICA. He reawakened with an unchanged right hemiplegia and aphasia. The patient then underwent an urgent postoperative carotid and cerebral arteriogram that demonstrated an embolus to the middle cerebral artery. He was treated with the superselective infusion of urokinase (500,000 Units), with almost complete resolution of the clot. Over the course of the next 48 hours, the patient made a nearly complete neurologic recovery, and he was discharged from the hospital with only a slight facial droop. At 2 months' follow-up he was completely neurologically healthy. To our knowledge, this is the first reported case of urokinase administered in the immediate postoperative period in the angiography suite to treat a thromboembolus complicating a carotid endarterectomy.
abstract_id: PUBMED:8624200
Mechanisms of neurologic deficits and mortality with carotid endarterectomy. Objective: To evaluate the incidence and etiology of perioperative complications of carotid endarterectomy.
Design: Retrospective review of carotid endarterectomies performed over 13 years. Risk factors, indications, results of electroencephalographic (EEG) monitoring, and outcomes were evaluated.
Setting: University medical center.
Patients: Three hundred sixty-seven consecutive primary carotid endarterectomies were performed on 336 patients. Indications for operation included transient ischemic attack (48.5%), asymptomatic stenosis (24%), stroke (17%), nonlateralizing ischemia (9.5%), and stroke-in-evolution (1%).
Main Outcome Measures: Postoperative neurologic deficits (permanent and transient) and deaths were correlated with preoperative symptoms, probable mechanism of the neurologic event, intraoperative EEG changes, and the use of intraoperative shunts.
Results: Four new permanent neurologic deficits (1.1%) and one transient postoperative deficit were noted. Of the five deficits, three were related to undiagnosed intraoperative cerebral ischemia and two were related to perioperative emboli. Three perioperative deaths (0.8%) occurred: two of myocardial infarction and one of an intracerebral hemorrhage from a ruptured arteriovenous malformation. Intraoperative EEG tracings for the most recent consecutive 175 procedures were analyzed. Shunts were used in 45 patients (26%), 38 of whom demonstrated significant EEG changes with carotid clamping.
Conclusions: Carotid endarterectomy can be performed with a low risk of stroke (1.1%) and death (0.8%). Stroke was due to cerebral ischemia or embolization. With meticulous surgical technique, death is due to myocardial ischemia and not neurologic events.
abstract_id: PUBMED:21129904
Urgent carotid endarterectomy to prevent recurrence and improve neurologic outcome in mild-to-moderate acute neurologic events. Objectives: This study evaluated the safety and benefit of urgent carotid endarterectomy (CEA) in patients with carotid disease and an acute stable neurologic event.
Methods: The study involved patients with acute neurologic impairment, defined as ≥ 4 points on the National Institutes of Health Stroke Scale (NIHSS) evaluation related to a carotid stenosis ≥ 50% who underwent urgent CEA. Preoperative workup included neurologic assessment with the NIHSS on admission or immediately before surgery and at discharge, carotid duplex scanning, transcranial Doppler ultrasound imaging, and head computed tomography or magnetic resonance imaging. End points were perioperative (30-day) neurologic mortality, significant NIHSS score improvement or worsening (defined as a variation ≥ 4), and hemorrhagic or ischemic neurologic recurrence. Patients were evaluated according to their NIHSS score on admission (4-7 or ≥ 8), clinical and demographic characteristics, timing of surgery (before or after 6 hours), and presence of brain infarction on neuroimaging.
Results: Between January 2005 and December 2009, 62 CEAs were performed at a mean of 34.2 ± 50.2 hours (range, 2-280 hours) after the onset of symptoms. No neurologic mortality or significant NIHSS score worsening was detected. The NIHSS score decreased in all but four patients, with no new ischemic lesions detected. The mean NIHSS score was 7.05 ± 3.41 on admission and 3.11 ± 3.62 at discharge in the entire group (P < .01). Patients with an NIHSS score of ≥ 8 on admission had a larger score reduction than those with a lower NIHSS score (NIHSS 4-7: mean 4.95 ± 1.03 preoperatively vs 1.31 ± 1.7 postoperatively; NIHSS ≥ 8: mean 10.32 ± 1.94 vs 4.03 ± 3.67; P < .001).
Conclusions: In patients with acute neurologic event, a high NIHSS score does not contraindicate early surgery. To date, guidelines recommend treatment of symptomatic carotid stenosis ≤ 2 weeks from onset of symptoms to minimize the neurologic recurrence. Our results suggest that minimizing the time for intervention not only reduces the risk of recurrence but can also improve neurologic outcome.
abstract_id: PUBMED:27838111
A short time interval between the neurologic index event and carotid endarterectomy is not a risk factor for carotid surgery. Objective: Current guidelines recommend that carotid endarterectomy (CEA) be performed as early as possible after the neurologic index event in patients with 50% to 99% carotid artery stenosis. However, recent registry data showed that patients treated ≤48 hours had a significantly increased perioperative risk. Therefore, the aim of this single-center study was to determine the effect of the time interval between the neurologic index event and CEA on the periprocedural complication rate at our institution.
Methods: Prospectively collected data for 401 CEAs performed between 2004 and 2014 for symptomatic carotid stenosis were analyzed. Patients were divided into four groups according to the interval between the last neurologic event and surgery: group I, 0 to 2 days; group II, 3 to 7 days; group III, 8 to 14 days; and group IV, 15 to 180 days. The primary end point was the combined rate of in-hospital stroke or mortality. Data were analyzed by way of χ2 tests and multivariable regression analysis.
Results: The patients (68% men) had a median age of 70 years (interquartile range, 63-76 years). The index events included transient ischemic attack in 43.4%, amaurosis fugax in 25.4%, and an ipsilateral stroke in 31.2%. CEA was performed using the eversion technique in 61.1% of patients, and 50.1% were treated under locoregional anesthesia. The perioperative combined stroke and mortality rate was 2.5% (10 of 401), representing a perioperative mortality rate of 1.0% and stroke rate of 1.5%. Overall, myocardial infarction, cranial nerve injuries, and postoperative bleeding occurred in 0.7%, 2.2%, and 1.7%, respectively. We detected no significant differences for the combined stroke and mortality rate by time interval: 3% in group I, 3% in group II, 2% in group III, and 2% in group IV. Multivariable regression analysis showed no significant effect of the time interval on the primary end point.
Conclusions: The combined mortality and stroke rate was 2.5% and did not differ significantly between the four different time interval groups. CEA was safe in our cohort, even when performed as soon as possible after the index event.
abstract_id: PUBMED:31857527
Perioperative Stroke in Carotid Artery Stenting as a Surrogate Marker and Predictor for 30-day Postprocedural Mortality - A Pooled Analysis of 156,000 Patients with Carotid Artery Disease. Background: Carotid artery stenting (CAS) is being recognized as an effective alternative to carotid endarterectomy (CEA). CAS is especially preferred over CEA in high-risk surgical patients with severe carotid stenosis. However, CAS carries an increased risk of stroke and transient ischemic attack (TIA).
Objective: To assess the association between periprocedural stroke/TIA and 30-day mortality in carotid stenosis patients undergoing CAS.
Methods: We searched PubMed, Embase, and Web of Science for relevant publications. Studies reporting on perioperative neurologic status (stroke/TIA) and 30-day mortality in patients undergoing CAS were included. Sensitivity, specificity, pooled odds ratio (OR), and relative risk (RR) of perioperative stroke in predicting 30-day mortality following CAS were calculated.
Results: A total of 146 studies with 156,854 patients were included in the meta-analysis. The mean patient age was 70.7 years, and 57.6% were males. Only 26.5% of the CAS cohort were symptomatic and 15.2% had bilateral carotid disease. The incidences of perioperative TIA and stroke were 2.4 and 2.7 per 100 CAS procedures, respectively. Around 11.8% of stroke events were fatal. The pooled OR of 30-day mortality after perioperative stroke was 24.58 (95% CI, 19.92-30.32) and the pooled RR was 21.65 (95% CI, 17.87-26.22). Perioperative stroke had a sensitivity of 42.0% (95% CI 37.8-46.4%) and specificity of 97.0% (95% CI 96.7-97.3%) in predicting 30-day mortality.
Conclusions: Perioperative stroke drastically increases the risk of 30-day mortality. The occurrence of perioperative stroke exhibited high specificity but modest sensitivity in predicting 30-day mortality following CAS. This highlights the importance of neurophysiologic monitoring to detect intraoperative cerebral ischemia and perform timely interventions.
abstract_id: PUBMED:2709521
Comparison of saphenous vein patch, polytetrafluoroethylene patch, and direct arteriotomy closure after carotid endarterectomy. Part I. Perioperative results. To evaluate the perioperative outcomes and the immediate increases in size after patch closure, 140 carotid endarterectomies were randomized into one of three groups: direct no-patch closure, saphenous vein patch closure, and polytetrafluoroethylene patch closure. Seven patients (4.4%) experienced signs of cerebral ischemia in the immediate postoperative period. In three cases this was transient and reversible. In the other four reexploration was undertaken and carotid thrombosis was corrected by thrombectomy. The condition of one of these patients deteriorated to a permanent stroke, whereas the other patients made a complete recovery. Neurologic complications were more frequent in the no-patch group, but the differences between the groups were not significant. The incidence of perioperative internal carotid stenosis, aneurysmal dilatation, and other morphologic abnormalities was assessed in 131 intravenous digital subtraction angiograms taken before the patient was discharged from the hospital. Eight (17.0%) of the endarterectomies in the no-patch group were narrowed by 30% to 50% diameter stenosis, whereas none of the patched arteries had more than 30% stenosis. In contrast, dilatation of the common or internal carotid artery to more than twice the measured diameter was absent in non-patched arteries but was present in seven (17.0%) saphenous patch closures and four (9.23%) polytetrafluoroethylene patch closures. We conclude that patch closure after carotid endarterectomy is less likely to cause stenosis in the perioperative period. Poly-tetrafluoroethylene patches resist dilatation better than do saphenous vein patches and are less likely to become aneurysmal.
Answer: Immediate reexploration after carotid endarterectomy (CEA) can be worthwhile in certain circumstances. A study analyzing 2024 CEAs found that neurologic deficits presenting within the first 24 hours after surgery were significantly more likely to be caused by thromboembolic events than by other causes of stroke. Immediate surgical reexploration in these cases revealed intraluminal thrombus in 83.3% of the reexplored patients, and after reexploration, 66.7% of patients showed either complete resolution of or significant improvement in the neurologic deficit (PUBMED:11107077). This suggests that careful analysis of the timing and presentation of perioperative neurologic events can predict which cases are likely to improve with reoperation.
Another study reviewing 900 consecutive elective endarterectomies found that thrombosis was the most common cause of postoperative stroke (63%) in patients after CEA. In patients with perioperative stroke and carotid thrombosis, 76% of those who were reexplored demonstrated improvement in symptoms, whereas similar results were seen in only 20% of patients who were not reexplored (PUBMED:2805307). This further supports the notion that reexploration for thrombosis in CEA can be beneficial.
Moreover, a case report of a patient who underwent emergent CEA and experienced a postoperative thromboembolism complicating the procedure was successfully treated with immediate postoperative thrombolytic therapy, leading to a nearly complete neurologic recovery (PUBMED:10805896). This case highlights that aggressive strategies for neurologic salvage, including immediate reexploration and thrombolytic therapy, can be effective in certain situations.
In conclusion, immediate reexploration for perioperative neurologic events after CEA is worthwhile, especially when the neurologic deficit presents within the first 24 hours and is likely due to thromboembolic events. Such an approach can lead to significant improvement in neurologic outcomes for patients experiencing these complications.
Instruction: Is It Useful to Measure Efficiency Indices of a Deceased-Donor Kidney Transplant Program in One Intensive Care Unit?
Abstracts:
abstract_id: PUBMED:26293019
Is It Useful to Measure Efficiency Indices of a Deceased-Donor Kidney Transplant Program in One Intensive Care Unit? Background: Before 2010, donor detection rate and donor conversion rate at our tertiary level care institution were low. To assess the effectiveness of the implemented organizational changes, an analysis of organizational indicators with the use of the DOPKI (Improving the Knowledge and Practices in Organ Donation) project was conducted.
Methods: Three groups of DOPKI indicators were used: indicators of the potential for deceased organ donation, indicators on areas for improvement in the deceased donation process, and indicators of program effectiveness. We compared the 3-year period before instituting organizational measures with the 3-year period after the changes.
Results: Significant differences in almost all DOPKI indicators were found. Most importantly, the number of actual donors has increased significantly, pointing to the effectiveness of the organizational measures that we put in place in 2010. In addition, the study highlights the value of the use of DOPKI indicators in one intensive care unit to improve the transplant program on a hospital level.
Conclusions: We conclude by arguing that despite the lack of a uniform national database, DOPKI indicators could still be useful for improving the quality of donor programs.
abstract_id: PUBMED:34884335
Outcomes of Deceased Donor Kidney Transplantation in the Eurotransplant Senior Program with a Focus on Recipients ≥75 Years. To evaluate the outcomes of kidney transplantations (KTs) in the Eurotransplant Senior Program (ESP) with a focus on the very old, defined as recipients ≥75 years. This retrospective clinical study included 85 patients, who under the ESP protocol underwent deceased donor kidney transplantation from January 2010 to July 2018 at the Charité-Universitätsmedizin Berlin in Germany. Recipients were divided into three age groups, i.e., Group 65-69, Group 70-74, and Group ≥75, and compared. Prognostic risk factors for short- and long-term outcomes of kidney transplantations were investigated. Graft survival at 1 and 5 years was, respectively, 90.7% and 68.0% for Group 65-69, 88.9% and 76.2% for Group 70-74, and 100% and 71.4% for Group ≥75. Patient survival at 1 and 5 years was, respectively, 92.9% and 68.0% for Group 65-69, 85.7% and 61.5% for Group 70-74, and 100% and 62.5% for Group ≥75. Serum creatinine did not significantly differ between the three groups, with the exception of serum creatinine at 1 year. Increased recipient age and prolonged time on dialysis correlated with an increased occurrence of postoperative complications. An increase in BMI, pretransplant diabetes mellitus, and prolonged time on dialysis correlated with the occurrence of delayed graft function (DGF). History of smoking was identified as an independent risk factor for events of rejection. Increased human leukocyte antigen mismatches (HLA-MM) and prolonged cold ischemia time (CIT) correlated with higher rates of intensive care unit (ICU) treatment. This study supports kidney transplantation for the very old. End-stage renal disease (ESRD) patients ≥75 years of age who underwent kidney transplantation experienced comparable results to their younger counterparts. A comprehensive evaluation of ESRD patients with consideration of prognostic risk factors is the most suitable means of identifying adequate kidney transplant candidates.
abstract_id: PUBMED:34376939
Deceased Donor Renal Transplantation: A Single Center Experience. Introduction: Deceased donor kidney transplants are still not common across India. This study was done to assess various measures taken at a single-center level to increase the organ donation rate and to analyse the outcomes of transplants performed from these donors.
Methods: All deceased donor renal transplants performed from November 2011 to February 2017 were analysed for patient and death-censored graft survival, rate of delayed graft function, rate of rejection, and mortality. Kaplan-Meier analysis for survival curves was used.
Results: Organ donation rate at our center improved from one donation every alternate year in 2004 to a peak of 44 donations in 2017. Patient survival was 93.42%, 89.44%, 85.53%, and death censored graft survival was 94.07%, 88.21%, and 82.86% at 1, 2 and 3 years respectively. Mean duration of hemodialysis pre transplantation was 34.6 ± 27.43 months.
Conclusions: This study has shown that steps taken at a single-center level alone can also significantly improve organ donation rates. Employment of dedicated professionals, including transplant surgeons and coordinators, developing a protocol-based approach for referral, and early counseling in triage, along with regular audits, can help to establish a deceased donor program with acceptable outcomes elsewhere in the country.
abstract_id: PUBMED:37342743
Deceased Donor Kidney Transplantation Outcomes at a Sri Lankan Center: A Comprehensive Single-Center Analysis. Background Chronic kidney disease (CKD) causes significant morbidity and mortality in patients and incurs a huge burden on healthcare expenses globally. Renal replacement therapy becomes imperative when patients reach end-stage renal disease. Kidney transplant is the best modality of choice for the majority of patients, and deceased donor kidney transplantation is the major contributor in the majority of countries. We present an outcome study in Sri Lanka for deceased donor kidney transplantation. Methodology This is an observational study conducted at the Nephrology Unit 1 at the National Hospital of Sri Lanka, Colombo, in patients who had undergone deceased donor kidney transplantation from July 2018 to mid-2020. We studied the outcomes of these patients for one year, including delayed graft function, acute rejection, infection, and mortality. Ethical clearance was obtained from the ethical review committee of the National Hospital of Sri Lanka, Colombo, and the University of Colombo. Results The study included 27 participants with a mean age of 55 ± 9.519 years. Diabetes mellitus (69.2%), hypertension (11.5%), chronic glomerulonephritis (7.7%), chronic pyelonephritis (7.7%), and obstructive uropathy (3.8%) were the etiological factors of CKD. Basiliximab was used as an induction agent, and a tacrolimus-based triple-drug regimen was used for maintenance in all patients. The mean cold ischemic time was 9 ± 3.861 hours. The majority (44%) of recipients had an O-positive blood group. At one year, the mean serum creatinine was 1.40 ± 0.686 mg/dL, and the mean estimated glomerular filtration rate was 62 ± 21.281 mL/minute/1.73 m2. Delayed graft function occurred in 25.9% of the recipients, and 22.2% had acute transplant rejection. Postoperative infection was observed in 44.4% of recipients. One year after transplantation, 22% of the recipients died. Infection was the cause of death in 83% of recipients (five of six patients). The causes of death in the study sample were pneumonia (50%), including pneumocystis pneumonia (17%), myocardial infarction (17%), mucormycosis (16%), and other infections (17%). There was no significant association between outcomes at one year with age, gender, causes of CKD, or postoperative complications. Conclusions Our study found that the one-year survival rate following deceased donor kidney transplantation in Sri Lanka is relatively low, with infections being the leading cause of mortality. The high infection rate during the early post-transplant period underscores the need for enhanced infection prevention and control measures. Although we did not observe any significant association between the outcomes and the variables studied, it is important to note that the small sample size of our study may have influenced this finding. Future research with larger sample sizes may provide more insights into the factors influencing post-transplant outcomes in Sri Lanka.
abstract_id: PUBMED:32545566
Should We Perform Old-for-Old Kidney Transplantation during the COVID-19 Pandemic? The Risk for Post-Operative Intensive Stay. Health care systems worldwide have been facing major challenges since the outbreak of the SARS-CoV-2 pandemic. Kidney transplantation (KT) has been tremendously affected due to limited personal protective equipment (PPE) and intensive care unit (ICU) capacities. To provide valid information on risk factors for ICU admission in a high-risk cohort of old kidney recipients from old donors in the Eurotransplant Senior Program (ESP), we retrospectively conducted a bi-centric analysis. Overall, 17 (16.2%) patients out of 105 KTs were admitted to the ICU. They had a lower BMI, and both coronary artery disease (CAD) and hypertensive nephropathy were more frequent. A risk model combining BMI, CAD and hypertensive nephropathy gained a sensitivity of 94.1% and a negative predictive value of 97.8%, rendering it a valuable search test, but with low specificity (51.1%). ICU admission also proved to be an excellent parameter identifying patients at risk for short patient and graft survivals. Patients admitted to the ICU had shorter patient (1-year 57% vs. 90%) and graft (5-year 49% vs. 77%) survival. To conclude, potential kidney recipients with a low BMI, CAD and hypertensive nephropathy should only be transplanted in the ESP in times of SARS-CoV-2 pandemic if the local health situation can provide sufficient ICU capacities.
abstract_id: PUBMED:28751577
Outcomes of Deceased Donor Kidney Offers to Patients at the Top of the Waiting List. Background And Objectives: Transplant centers may accept or refuse deceased-donor kidneys that are offered to their patients at the top of the waiting list. We sought to determine the outcomes of deceased-donor kidney offers and their association with characteristics of waitlisted patients and organ donors.
Design, Setting, Participants, & Measurements: We examined all 7 million deceased-donor adult kidney offers in the United States from 2007 to 2012 that led to eventual transplantation. Data were obtained from the national organ allocation system through the United Network of Organ Sharing. The study cohort consisted of 178,625 patients waitlisted for a deceased-donor kidney transplant and 31,230 deceased donors. We evaluated offers made to waitlisted patients and their outcomes (transplantation or specific reason for refusal).
Results: Deceased-donor kidneys were offered a median of seven times before being accepted for transplantation. The most common reasons for refusal of an offer were donor-related factors, e.g., age or organ quality (3.2 million offers, 45.0%), and transplant center bypass, e.g., minimal acceptance criteria not met (3.2 million offers, 44.0%). After adjustment for characteristics of waitlisted patients, organ donors, and transplant centers, male (odds ratio [OR], 0.93; 95% confidence interval [95% CI], 0.91 to 0.95) and Hispanic (OR, 0.96; 95% CI, 0.93 to 0.99) waitlisted patients were less likely to have an offer accepted than female and white patients, respectively. The likelihood of offer acceptance varied greatly across transplant centers (interquartile ratio, 2.28).
Conclusions: Transplant centers frequently refuse deceased-donor kidneys. Such refusals differ by patient and donor characteristics, may contribute to disparities in access to transplantation, and vary greatly across transplant centers.
Podcast: This article contains a podcast at https://www.asn-online.org/media/podcast/CJASN/2017_07_27_Huml.mp3.
abstract_id: PUBMED:35373010
Number of Donor Renal Arteries and Early Outcomes after Deceased Donor Kidney Transplantation. Background: Anatomic abnormalities increase the risk of deceased donor kidney discard, but their effect on transplant outcomes is understudied. We sought to determine the effect of multiple donor renal arteries on early outcomes after deceased donor kidney transplantation.
Methods: For this retrospective cohort study, we identified 1443 kidneys from 832 deceased donors with ≥1 kidney transplanted at our center (2006-2016). We compared the odds of delayed graft function and 90-day graft failure using logistic regression. To reduce potential selection bias, we then repeated the analysis using a paired-kidney cohort, including kidney pairs from 162 donors with one single-artery kidney and one multiartery kidney.
Results: Of 1443 kidneys included, 319 (22%) had multiple arteries. Multiartery kidneys experienced longer cold ischemia time, but other characteristics were similar between groups. Delayed graft function (50% multiartery versus 45% one artery, P=0.07) and 90-day graft failure (3% versus 3%, P=0.83) were similar between groups before and after adjusting for donor and recipient characteristics. In the paired kidney analysis, cold ischemia time was significantly longer for multiartery kidneys compared with single-artery kidneys from the same donor (33.5 versus 26.1 hours, P<0.001), but delayed graft function and 90-day graft failure were again similar between groups.
Conclusions: Compared with single-artery deceased donor kidneys, those with multiple renal arteries are harder to place, but experience similar delayed graft function and early graft failure.
abstract_id: PUBMED:27555674
The development and current status of Intensive Care Unit management of prospective organ donors. Introduction: Despite continuous advances in transplant medicine, there is a persistent worldwide shortage of organs available for donation. There is a growing body of research that supports that optimal management of deceased organ donors in Intensive Care Unit can substantially increase the availability of organs for transplant and improve outcomes in transplant recipients.
Methods: A systematic literature review was performed, comprising a comprehensive search of the PubMed database for relevant terms, as well as individual assessment of references included in large original investigations, and comprehensive society guidelines.
Results: In addition to overall adherence to catastrophic brain injury guidelines, optimization of physiologic state in accordance with established donor management goals (DMGs), and establishment of system-wide processes for ensuring early referral to organ procurement organizations (OPOs), several specific critical care management strategies have been associated with improved rates and outcomes of renal transplantation from deceased donors. These include vasoactive medication selection, maintenance of euvolemia, avoidance of hydroxyethyl starch, glycemic control, targeted temperature management, and blood transfusions if indicated.
Conclusions: Management of deceased organ donors should focus first on maintaining adequate perfusion to all organ systems through adherence to standard critical care guidelines, early referral to OPOs, and family support. Furthermore, several specific DMGs and strategies have been recently shown to improve both the rates and outcomes of organ transplantation.
abstract_id: PUBMED:36090778
Deceased Donor Characteristics and Kidney Transplant Outcomes. Kidney transplantation is the therapy of choice for people living with kidney failure who are suitable for surgery. However, the disparity between supply versus demand for organs means many either die or are removed from the waiting-list before receiving a kidney allograft. Reducing unnecessary discard of deceased donor kidneys is important to maximize utilization of a scarce and valuable resource but requires nuanced decision-making. Accepting kidneys from deceased donors with heterogenous characteristics for waitlisted kidney transplant candidates, often in the context of time-pressured decision-making, requires an understanding of the association between donor characteristics and kidney transplant outcomes. Deceased donor clinical factors can impact patient and/or kidney allograft survival but risk-versus-benefit deliberation must be balanced against the morbidity and mortality associated with remaining on the waiting-list. In this article, the association between deceased kidney donor characteristics and post kidney transplant outcomes for the recipient are reviewed. While translating this evidence to individual kidney transplant candidates is a challenge, emerging strategies to improve this process will be discussed. Fundamentally, tools and guidelines to inform decision-making when considering deceased donor kidney offers will be valuable to both professionals and patients.
abstract_id: PUBMED:25282158
Is the Kidney Donor Risk Index a step forward in the assessment of deceased donor kidney quality? The allocation of deceased donor kidneys has become more complex because of the increasing spectrum of donors and recipients age and comorbidities. Several scoring systems have been proposed to evaluate the donor quality of deceased donor kidneys, based on clinical, pathological or combined parameters to predict the risk of renal allograft failure. Nonetheless, besides the dichotomous extended criteria donor (ECD) score, none of the others have been used in clinical practice because of numerous reasons, ranging from lack of robust validation to the technical challenges associated with the evaluation of donor biopsies. Recently, the Kidney Donor Risk Index (KDRI) and Profile Index (KDPI) were introduced in the USA as a refined version of the ECD score. This scoring system is based on 10 donor factors, therefore providing a finely granulated evaluation of donor quality without the need of a kidney biopsy.Here, we review the advantages and drawbacks of the main scoring systems, and we describe the components of the KDRI and KDPI. It is an easily accessible online tool, based solely on donor factors readily available at the moment of the donor offer. Importantly, the KDPI has also been made part of the 'longevity matching' allocation in the USA, where the best kidneys are allocated to the recipients with the longest predicted post-transplant survival. The KDRI should provide us with a robust qualitative evaluation of deceased donor quality, and therefore will probably play a role in deceased donor kidney allocation policies across Europe in the near future. Hopefully, the KDRI and the KDPI should help transplant programmes to better allocate the scarce resource of deceased donor kidneys.
Answer: Yes, it is useful to measure efficiency indices of a deceased donor kidney transplant program in one intensive care unit. The study conducted to assess the effectiveness of organizational changes implemented in a tertiary level care institution used DOPKI indicators, which include indicators of the potential for deceased organ donation, areas for improvement in the deceased donation process, and indicators of program effectiveness. The study found significant differences in almost all DOPKI indicators after the organizational measures were put in place in 2010, most notably an increase in the number of actual donors. This points to the effectiveness of the organizational measures and highlights the value of using DOPKI indicators in one intensive care unit to improve the transplant program on a hospital level. The study concludes that DOPKI indicators could be useful for improving the quality of donor programs, even in the absence of a uniform national database (PUBMED:26293019).
Instruction: Is Obesity a Risk Factor for Adverse Events After Knee Arthroscopy?
Abstracts:
abstract_id: PUBMED:27013106
Is Obesity a Risk Factor for Adverse Events After Knee Arthroscopy? Purpose: To evaluate how body mass index (BMI) affects rates of 30-day complication, hospital readmissions, and mortality in patients undergoing knee arthroscopy.
Methods: Patients undergoing knee arthroscopy procedures between 2006 and 2013 were identified in the American College of Surgeons National Surgical Quality Improvement Program database. Patient demographics and preoperative risk factors including BMI were analyzed for postoperative complications within 30 days. Cochran-Armitage testing was performed to detect differences in complication rates across BMI categories according to World Health Organization classification. The independent risk of BMI was assessed using multivariate regression analysis.
Results: Of 41,919 patients with mean age 48 years undergoing knee arthroscopy, 20% were classified as normal weight (BMI 18.5 to 24), 35% overweight (BMI 25 to 29), 24% obese class I (BMI 30 to 34), 12% class II (BMI 35 to 40), and 9% class III (BMI ≥40). Risk of complication increased significantly with increasing BMI (normal: 1.5%, overweight: 1.6%, obese class I: 1.7%, obese class II: 1.8%, obese class III: 1.9%, P = .043). On multivariate analysis, there was no increased risk of postoperative complication directly attributed to patient BMI. Independent risk factors for medical and surgical complications after knee arthroscopy included American Society of Anesthesiologists (ASA) rating (class 4 v class 1 odds ratio [OR]: 5.39 [95% confidence interval: 3.11-9.33], P < .001), functional status for activities of daily living (dependent v independent OR: 2.13 [1.42, 3.31], P < .001), history of renal comorbidity (presence v absence OR: 5.10 [2.30, 11.29], P < .001), and previously experienced history of wound infection prior to current surgery (presence v absence OR: 4.91 [2.88, 8.39], P < .001).
Conclusions: More than 40% of knee arthroscopy patients qualify as obese. Although univariate analysis suggests that obesity is associated with increased postoperative complications within 30 days of surgery, BMI alone does not predict complications. Independent predictors of complications include patients with high ASA classification, dependent functional status, renal comorbidities, and a recent history of wound infection.
Level Of Evidence: Level IV, prognostic case series.
abstract_id: PUBMED:38379989
Risk factors for venous thromboembolism following knee arthroscopy: A systematic review and meta-analysis of observational studies. Objectives: To evaluate the factors associated with an increased risk of venous thrombosis after arthroscopic knee surgery.
Methods: PubMed, EMBASE, and the Cochrane Library were searched from their inception to April 4, 2023. Observational studies investigating venous thrombosis following arthroscopic knee surgery were included. The Newcastle-Ottawa Scale (NOS) was used to evaluate the methodological quality of included studies. The odds ratios (ORs) and 95% confidence intervals (CIs) pertaining to each risk factor were synthesized through a random-effects model in STATA 14 software.
Results: The protocol of this meta-analysis has been registered on PROSPERO (CRD42023410283). A total of 22 observational studies were included in the systematic review, all of which were of moderate or high methodological quality. The results of the meta-analysis revealed that several factors were significantly associated with an elevated risk of venous thrombosis following arthroscopic knee surgery. These factors included age (mean age ≥30 years) [OR = 1.08, 95%CI (1.04, 1.13), P = 0.001], overweight or obesity [OR = 1.31, 95%CI (1.13, 1.52), P<0.001], oral contraceptive use [OR = 1.90, 95%CI (1.52, 2.37), P<0.001], and smoking history [OR = 1.35, 95%CI (1.06, 1.71), P = 0.014]. Furthermore, the subgroup analysis indicated that patients with an average age over 50 years [OR = 3.18, 95%CI (1.17, 8.66), P = 0.001] and those who underwent surgery with a tourniquet for ≥90 min [OR = 4.79, 95%CI (1.55, 14.81), P = 0.007] were at a significantly increased risk of venous thrombosis after knee arthroscopy.
Conclusion: Age, obesity, oral contraceptives, smoking history, and prolonged tourniquet use may increase the risk of venous thrombosis after arthroscopic knee surgery. The incidence of venous thrombosis after knee arthroscopy is on a downward trend, but due to its severity, increasing awareness of risk factors and implementing effective prophylaxis are important tasks for clinicians to prevent the risk of venous thrombosis after knee arthroscopy.
abstract_id: PUBMED:29114639
Knee Septic Arthritis after Arthroscopy: Incidence, Risk Factors, Functional Outcome, and Infection Eradication Rate. Purpose Septic knee arthritis following arthroscopy is a rare but dreaded complication. The definition and management of deep knee infections are widely discussed in the literature. In this review, the literature regarding infections after knee arthroscopy is analyzed, highlighting the incidence, causative bacteria, and risk factors, as well as clinical outcomes. Methods We performed a review of the literature matching the following key words: "septic arthritis" OR "infection" AND "arthroscopy" AND "knee." Knee arthroscopic procedures, such as debridement, meniscectomy, meniscus repair, synovectomy, microfracture, and lateral release, were considered. Complex procedures, such as ligament reconstruction, fractures, or complex cartilage repair techniques, were not included. Results Thirteen studies were included in this review. Incidence of infection ranged from 0.009 to 1.1% in patients undergoing simple arthroscopic procedures. Staphylococci are the most commonly isolated organisms from postarthroscopy infection. Use of intraoperative intra-articular steroids, smoking, obesity, male sex, diabetes, number of procedures performed during surgery, time of surgery, and tourniquet time of more than 60 minutes have been identified as risk factors for knee infection. Conclusion Postarthroscopy septic arthritis of the knee causes significant morbidity, usually requiring readmission to the hospital, at least one additional operation, and prolonged antibiotic therapy, both intravenous and oral. Prompt diagnosis and treatment are associated with a high success rate. Level of Evidence Level IV, systematic review of I-IV studies.
abstract_id: PUBMED:33155684
Ultrasound-Assisted Posterior Knee Arthroscopy: A Description of the Technique. Entering the posterior knee with arthroscopy can be difficult. Scar tissue, a tumor, and the obese patient can make instrument placement difficult and risk iatrogenic injury. Ultrasound can be used to visualize the posterior knee and provide direct guidance of instrumentation. We describe the technique and indications for using ultrasound during arthroscopy. Accurate and atraumatic insertion of instruments can be performed with no damage to total knee components or the knee joint. Ultrasound guidance should be considered during difficult posterior knee arthroscopy.
abstract_id: PUBMED:33250328
Prior Knee Arthroscopy Increases the Failure Rate of Subsequent Unicompartmental Knee Arthroplasty. Background: In selected patients, knee arthroscopy is performed prior to unicompartmental knee arthroplasty (UKA) to treat symptomatic mechanical pathology, delay arthroplasty, and assess the knee compartments. The purpose of this study was to determine if knee arthroscopy prior to UKA is associated with increased rates of UKA failure or conversion to total knee arthroplasty (TKA).
Methods: Data were collected from the Humana insurance database from 2007-2017. Patients who underwent knee arthroscopy within two years prior to UKA were identified and matched with controls based on age, gender, Charlson Comorbidity Index, smoking status, and obesity. Rates of conversion to TKA and failure for various causes were compared between cohorts.
Results: Prior to propensity matching, 8353 UKA patients met inclusion criteria. Of these, 1079 patients (12.9%) underwent knee arthroscopy within two years of UKA and were matched to 1079 patients (controls) who did not undergo knee arthroscopy in the two years preceding UKA. No differences in demographics/comorbidities existed among cohorts. Compared to controls, the knee arthroscopy cohort was more likely to experience failure for aseptic loosening (2.4% vs 1.1%; OR 2.166; P = .044) and significantly more likely to require conversion to TKA (10.4% vs 4.9%; OR 2.113; P < .001) within two years of UKA.
Conclusion: Knee arthroscopy within two years of UKA is associated with an increased rate of UKA conversion to TKA and a higher rate of UKA failure from aseptic loosening. Although clinicians should be mindful of this association when performing knee arthroscopy in patients who may be indicated for future UKA, further research is needed to better characterize these findings.
abstract_id: PUBMED:30733034
Body Mass Index as a Risk Factor for 30-Day Postoperative Complications in Knee, Hip, and Shoulder Arthroscopy. Purpose: To use the American College of Surgeons National Surgical Quality Improvement Program database to determine whether body mass index (BMI) is associated with 30-day postoperative complications following arthroscopic surgery.
Methods: Cases of elective knee, hip, and shoulder arthroscopy were identified. A retrospective comparative analysis was conducted, and the overall rates of morbidity, mortality, readmission, reoperation, and venous thromboembolism (VTE) were compared using univariate analyses and binary logistic regressions to ascertain the adjusted effect of BMI, with and without diabetes, on morbidity, readmission, reoperation, and VTE.
Results: There were 141,335 patients who met the criteria. The most common complications were deep vein thrombosis (0.27%), superficial surgical site infection (0.17%), urinary tract infection (0.13%), and pulmonary embolism (0.11%). Obesity class III with diabetes was a risk factor for morbidity (odds ratio [OR] = 1.522; 95% confidence interval [CI], 1.101-2.103) and readmission (OR = 2.342; 95% CI, 1.998-2.745) following all procedures, while obesity class I was protective toward reoperation (OR = 0.687, 95% CI, 0.485-0.973). Underweight patients were at higher risk for morbidity following shoulder arthroscopy (OR = 3.776; 95% CI, 1.605-8.883), as were the class I obese (OR = 1.421; 95% CI, 1.010-1.998) and class II obese (OR = 1.726, 95% CI, 1.159-2.569). BMI did not significantly affect morbidity following knee arthroscopy. VTE risk factors included being overweight (OR = 1.474; 95% CI, 1.088-1.996) or diabetic with class I obesity (OR = 1.469; 95% CI, 1.027-2.101).
Conclusions: Arthroscopic procedures are safe with very low complication rates. However, underweight and class I and class II obese patients are at higher risk for morbidity following shoulder arthroscopy, and diabetic patients with class III obesity are at higher risk for morbidity and readmission following all arthroscopy. Because BMI is a modifiable risk factor, these patients should be evaluated carefully before being considered for outpatient arthroscopic surgery.
Level Of Evidence: Level III, retrospective comparative study.
abstract_id: PUBMED:27459139
Total Knee Arthroplasty After Knee Arthroscopy in Patients Older Than 50 Years. Several orthopedic registries have described the incidence of total knee arthroplasty (TKA) in patients who have undergone knee arthroscopy. Patient risk factors may play a role in the conversion rate from knee arthroscopy to TKA. This study quantifies the incidence of conversion of knee arthroscopy to TKA from a US mixed-payer database and describes some common patient risk factors for conversion. The medical records of more than 50 million patients who were treated between 1998 and 2014 were mined with a commercially available software platform. During the study period, a total of 68,090 patients older than 50 years underwent knee arthroscopy for partial meniscectomy, chondroplasty, or debridement. Reported rates of TKA at 1, 2, and 3 years after arthroscopy were 10.1%, 13.7%, and 15.6%, respectively. Obesity, depressive disorder, rheumatoid arthritis, diabetes, and age 70 years and older were associated with increased relative risk of conversion to TKA at 2 years. When obesity was combined individually with the top 5 other risk factors, no combination produced a higher relative risk than that of obesity alone. Patients who were 50 to 54 years of age had the lowest incidence of conversion to TKA (8.3%, P<.001). Men had a lower incidence of conversion to TKA (11.3%) than women (15.8%, P<.001). This information can help surgeons to counsel patients on the incidence of TKA after knee arthroscopy and identify preoperative risk factors that increase risk. [Orthopedics. 2016; 39(6):e1041-e1044.].
abstract_id: PUBMED:33894744
Bilateral pulmonary embolism without deep venous thrombosis was observed after knee arthroscopy: a case report. Background: Symptomatic pulmonary embolism (PE) after knee arthroscopy is extremely rare. If the embolism is not treated promptly, the patient may die. Bilateral pulmonary embolism with associated pulmonary infarct without concomitant deep vein thrombosis has never been reported following routine knee arthroscopy.
Case Presentation: A 50-year-old female patient with no risk factors other than hypertension, obesity, varicose veins in the ipsilateral lower extremities and elevated triglycerides (TG) presented to our ward. She had experienced sudden chest tightness, polypnea and fainting after going to the bathroom on the morning of the second postoperative day and received emergency medical attention. Colour ultrasonography of the extremities showed no deep vein thrombosis. Lung computed tomography angiography (CTA) showed multiple embolisms scattered in both pulmonary artery branches. Thus, emergency interventional thrombolysis therapy was performed, followed by postoperative symptomatic treatment with drugs with thrombolytic, anticoagulant and protective activities. One week later, lung CTA showed a significant improvement in the PEs compared with those in the previous examination. Since the aetiology of the PE had been discerned and no obvious symptoms remained, the patient was discharged.
Conclusion: Although knee arthroscopy is a minimally invasive and quick procedure, the risk factors for PE in the perioperative period should be considered and fully evaluated to enhance PE detection. Moreover, a timely diagnosis and effective treatment are important measures to prevent and cure PE after knee arthroscopy. Finally, clear guidelines regarding VTE thromboprophylaxis following knee arthroscopy in patients with a low risk of VTE development are needed.
abstract_id: PUBMED:30730416
Symptomatic Venous Thromboembolism After Adolescent Knee Arthroscopy. Background: The frequency of knee arthroscopy procedures is increasing in pediatric and adolescent patients. In general, complications after these procedures in adolescents are uncommon. The purposes of this study are to report the incidence of venous thromboembolism (VTE) in adolescent patients after knee arthroscopy procedures, as well identify risk factors in this patient population.
Methods: Medical records were reviewed in all pediatric and adolescent patients (≤19 y) who underwent an arthroscopic knee procedure from 2010 to 2014 and were diagnosed with a symptomatic VTE in the postoperative period. Demographic features were recorded, and included age, sex, body mass index, clinical characteristics (diagnosis, type of surgical intervention, tourniquet time), VTE risk factors [family history of VTE, obesity (body mass index >30), oral contraceptive use, and smoking use/exposure] and treatment (anticoagulation type/duration).
Results: Out of 2783 patients who underwent knee arthroscopy during the 5-year study period, 7 patients (3 males, 4 females, mean age, 16.9 y, range, 15 to 18) developed a symptomatic postoperative VTE (incidence, 0.25%, 95% confidence interval, 0.11%-0.54%). There were 6 unilateral deep venous thrombosis, and 1 bilateral deep venous thrombosis. Arthroscopic procedures performed in this cohort included anterior cruciate ligament reconstruction (3), isolated lateral release (1), meniscectomy (2), and patellar realignment with arthroscopic lateral release, open tibial tubercle osteotomy, and open proximal medial retinacular reefing (1). VTE was diagnosed an average of 9 days following surgery (range, 3 to 16). All patients were initially treated with low-molecular-weight heparin, and 2 were converted to warfarin. Mean duration of anticoagulation treatment was 64 days (range, 28 to 183). All patients had at least 1 identifiable medical or surgical risk factor, including oral contraceptive use (2), smoking (2), obesity (2), an arthroscopically assisted open procedure (4), or tourniquet time >60 minutes (3).
Conclusions: VTE after adolescent knee arthroscopy has not been well described. The incidence is ∼0.25%. Previously established risk factors for VTE were present in 100% of the affected population. Low-molecular-weight heparin was used to successfully treat this complication.
Level Of Evidence: Level IV.
abstract_id: PUBMED:34604486
Time Trends in Patient Characteristics and In-Hospital Adverse Events for Primary Total Knee Arthroplasty in the United States: 2010-2017. Background: Perioperative care for total knee arthroplasty (TKA) has improved over time. We present an analysis of inpatient safety after TKA.
Methods: 14,057 primary TKAs captured by the Medicare Patient Safety Monitoring System between 2010 and 2017 were retrospectively reviewed. We calculated changes in demographics, comorbidities, and adverse events (AEs) over time. Risk factors for AEs were also assessed.
Results: Between 2010 and 2017, there was an increased prevalence of obesity (35.1% to 57.6%), tobacco smoking (12.5% to 17.8%), and renal disease (5.2% to 8.9%). There were reductions in coronary artery disease (17.3% to 13.4%) and chronic warfarin use (6.7% to 3.1%). Inpatient AEs decreased from 4.9% to 2.5% (P < .01), primarily driven by reductions in anticoagulant-associated AEs, including major bleeding and hematomas (from 2.8% to 1.0%, P < .001), catheter-associated urinary tract infections (1.1% to 0.2%, P < .001), pressure ulcers (0.8% to 0.2%, P < .001), and venous thromboembolism (0.3% to 0.1%, P = .04). The adjusted annual decline in the risk of developing any in-hospital AE was 14% (95% confidence interval [CI] 10%-17%). Factors associated with developing an AE were advanced age (odds ratio [OR] = 1.01, 95% CI 1.00-1.01), male sex (OR = 1.21, 95% CI 1.02-1.44), coronary artery disease (OR = 1.35, 95% CI 1.07-1.70), heart failure (OR = 1.70, 95% CI 1.20-2.41), and renal disease (OR = 1.71, 95% CI 1.23-2.37).
Conclusions: Despite increasing prevalence of obesity, tobacco smoking, and renal disease, inpatient AEs after primary TKA have decreased over the past several years. This improvement is despite the increasing complexity of the inpatient TKA population over time.
Answer: Yes, obesity is considered a risk factor for adverse events after knee arthroscopy. Several studies have found an association between higher body mass index (BMI) and increased rates of complications following knee arthroscopy. For instance, one study found that the risk of complication increased significantly with increasing BMI, with rates of 1.5% in normal weight individuals, 1.6% in overweight, 1.7% in obese class I, 1.8% in obese class II, and 1.9% in obese class III patients (PUBMED:27013106). However, this same study noted that BMI alone did not predict complications when other factors were considered, such as high American Society of Anesthesiologists (ASA) classification, dependent functional status, renal comorbidities, and a recent history of wound infection.
Another study highlighted that overweight or obesity was significantly associated with an elevated risk of venous thrombosis following arthroscopic knee surgery (PUBMED:38379989). Additionally, a systematic review identified obesity as a risk factor for knee infection after arthroscopy (PUBMED:29114639). Furthermore, a retrospective comparative study found that obesity class III with diabetes was a risk factor for morbidity and readmission following arthroscopic surgery (PUBMED:30733034).
In summary, while obesity is associated with an increased risk of postoperative complications within 30 days of knee arthroscopy, it is not the sole predictor of complications. Other independent risk factors also play a significant role in the likelihood of adverse events occurring after knee arthroscopy. Clinicians should be aware of these associations and consider them when evaluating patients for surgery, as well as in postoperative care and monitoring. |
Instruction: Is it necessary to resect the diseased esophagus in performing reconstruction for corrosive esophageal stricture?
Abstracts:
abstract_id: PUBMED:11423265
Is it necessary to resect the diseased esophagus in performing reconstruction for corrosive esophageal stricture? Objective: The incidence of carcinoma of the esophagus among patients with chronic esophageal stricture caused by ingestion of corrosive agents is reported to be significantly higher than that of the general population. The question of whether or not a resection of the diseased esophagus should be included in the surgical reconstruction procedure for an undilatable esophageal stricture continues to be controversial.
Methods: During the 12-year period from 1988 to 1999, a total of 54 consecutive patients with caustic stricture of the esophagus were treated in our department. We retrospectively reviewed these cases and analyzed the incidence of cicatricial carcinoma among the patients and the risk of esophagectomy according to the procedures performed.
Results: We found seven cases of esophageal cancer among these patients. There was no significant increase in mortality or morbidity related to esophagectomy.
Conclusions: Considering the high incidence of cicatricial carcinoma arising from the stricture sites as well as the possible chance of hidden malignancy, we concluded that the simultaneous resection of the esophagus with reconstruction for patients with chronic intractable caustic stricture would give the patients a better probability of being completely cured of the disease.
abstract_id: PUBMED:32494524
Thoracolaparoscopic-Assisted Esophagectomy for Corrosive-Induced Esophageal Stricture. Corrosive-induced stricture of the digestive tract is a dreaded complication following corrosive ingestion. When surgical reconstruction is needed, esophagectomy helps to avoid the long-term complications related to leaving behind the scarred native esophagus. We tried to ascertain the feasibility and safety of a thoracolaparoscopic-assisted esophagectomy in such a setting. A 32-year-old male presented with a corrosive-induced esophageal stricture that led to progressive dysphagia not amenable to endoscopic dilatation. A thoracoscopic approach was used for mobilization of the scarred esophagus under vision. A laparoscopic approach was used in mobilizing the stomach and creating a conduit. Esophagogastric anastomosis was performed in the neck. The patient had an uneventful recovery postoperatively and was discharged after six days on a semisolid diet. Thoracolaparoscopic-assisted esophagectomy can be safely performed for corrosive strictures of the esophagus. Besides improving the ease of performing the procedure, it also helps mitigate the morbidity associated with conventional open surgery in such cases.
abstract_id: PUBMED:19386567
Development of scar cancer after subtotal oesophagectomy for corrosive injury Introduction: The incidence of cicatricial carcinoma of the scarred esophagus in patients with corrosive injuries is relatively high. Therefore, the necessity to resect the diseased oesophagus was raised, as opposed to carrying out a simple bypass reconstruction only.
Case Report: A 56-year-old female patient with a past medical history of lye consumption presented with a stricture of the esophagus. She underwent resection of the diseased esophagus with mediastinal colon interposition. Twenty-eight years after surgery, the patient had symptoms of progressive dysphagia and loss of weight caused by scar cancer of the esophagus. After neoadjuvant chemoradiotherapy, resection of the remaining oesophagus was performed with free jejunal transplantation. On postoperative day 14 the patient was discharged with no complications and good swallowing function.
Conclusion: In our case, scar cancer developed 28 years after oesophageal resection and more than 50 years after the corrosive injury. This case is another argument for simple bypass.
abstract_id: PUBMED:2818056
Reconstruction of the esophagus with the left colon. This report reviews our experience with 96 patients with benign or malignant stricture of the esophagus who underwent interposition of the left colon with or without esophageal resection from July 1982 to June 1987. There were 67 male and 29 female patients ranging in age from 8 to 80 years. Thirty-seven patients had fibrotic stricture secondary to corrosive injury of the esophagus, 42 had cancer of the esophagus, and 17 had cancer of the gastric cardia. The incidence of postoperative complications and surgical mortality, respectively, was 16.2% and 2.7% for patients with corrosive stricture of the esophagus, 35.7% and 11.9% for patients with cancer of the esophagus, and 35.2% and 5.8% for patients with cancer of the gastric cardia. Reconstruction resulted in good function in 75.6% of the patients with corrosive stricture of the esophagus, 66.6% of the patients with cancer of the esophagus, and 70.5% of patients with cancer of the gastric cardia. The morbidity and mortality were higher in the group with malignant esophageal strictures because of advanced age, poor general condition of the patient, and extent of the surgical procedure needed. Cervical anastomotic leakage was the most frequently encountered complication (13.5%), and all the poor-function results were caused by this complication. In our experience, reconstruction of the esophagus with left colon is a satisfactory method that can be accomplished with acceptable morbidity and mortality. The left colon is a durable and functional substitute.
abstract_id: PUBMED:35509761
Robotic Ivor-Lewis Esophagectomy for Corrosive-Induced Esophageal Stricture. Corrosive-induced stricture of the esophagus is associated with long-standing morbidity. Though required in particular situations, esophagectomy circumvents the long-term complications of the remnant scarred native esophagus. We performed a robotic Ivor-Lewis esophagectomy for corrosive esophageal stricture and demonstrated its feasibility for the same. A young male patient presented with a history of caustic ingestion, leading to a long segment stricture in the lower third of the esophagus. He developed absolute dysphagia, which was refractory to endoscopic dilatation. A robotic approach was utilized to create a gastric conduit followed by intrathoracic esophagogastric anastomosis. He had a smooth postprocedure course, was discharged on a soft diet on the seventh postoperative day, and is doing well after six months of follow-up. The robotic Ivor-Lewis approach can be safely performed for corrosive esophageal stricture, akin to esophageal malignancy. Besides the comfort of performing the procedure, especially intra-thoracic anastomosis, it helps alleviate the chances of mucocele formation and sequelae of cervical neck anastomosis.
abstract_id: PUBMED:26822961
Is rigid endoscopy necessary with childhood corrosive ingestion? a retrospective comparative analysis of 458 cases. The aim of this study was to determine the necessity of endoscopy in cases in which a corrosive substance was ingested and to find a practical way to avoid unnecessary endoscopies for similar cases in the future. The clinical records of 458 hospitalized cases with clinical histories of corrosive substance ingestion between January 2007 and December 2013 were retrospectively reviewed. The demographics of the cases, the ingested substances, and the rigid endoscopy findings were evaluated. The three most commonly ingested corrosive agents were household bleach (22.9%), household degreaser (15.9%), and drain cleaner (13.1%). Rigid esophagoscopy was performed in 367 of the 458 cases. Corrosive agents were grouped according to their purpose of household use; eight groups were created. The degree of corrosive injury observed in the different groups was compared with the degree of injury caused by household bleach. Among the corrosive agent groups, dishwashing machine products (Gr.1), laundry products (Gr.2), liquid cleaners (Gr.3), and household bleach (Gr.4) did not cause high-grade injuries. The resulting injuries and esophagoscopy results among the above groups, whether symptomatic or not, did not differ from one another. Corrosive agents such as drain cleaner (Gr.6), household degreaser (Gr.7), and several other acidic products (Gr.8) caused high-grade injuries in the esophagus; however, lime remover/HCl (Gr.5) did not. Thus, hospitalization and rigid endoscopy seem unnecessary to assess esophageal injury in most cases, if the ingested corrosive agent fits into group 1, 2, 3, or 4 and if the patient can be easily fed. Esophagoscopy is useful to shorten the hospitalization times in cases where strong corrosive agents were ingested, such as those in groups 5, 6, 7, and 8.
abstract_id: PUBMED:8693857
Anastomosis suture technique and complications of esophagocoloplasty in corrosive lesions For the reconstruction of the esophagus after corrosive stenosis, a colon transplant is usually used. In all esophagocoloplasties, three anastomoses are necessary: the anastomosis that continues the alimentary tract, the anastomosis of the distal part of the transplant with the stomach or duodenum, and the most important proximal anastomosis of the esophagus (or pharynx) with the transplant. In the period of 29 years (from January 1, 1964 until December 31, 1993), 250 esophagocoloplasties with 750 anastomoses were performed at the Institute for Digestive Diseases in Belgrade in patients with corrosive stenosis of the esophagus. All the anastomoses were sewn in two layers with interrupted or continuous stitches, except for the anastomosis with the pharynx where, owing to the structure of the wall, only a one-layer continuous stitch was possible. Of the 750 anastomoses, anastomotic leakage occurred in 30 patients (4%) and ended lethally in only 4 patients (0.5%). Stenosis of the anastomosis occurred in 18 patients (2.4%).
abstract_id: PUBMED:28333543
Corrosive-Induced Carcinoma of Esophagus: Esophagographic and CT Findings. Objective: The purpose of this study was to evaluate the esophagographic and CT findings of corrosive esophageal cancer.
Materials And Methods: The records of all patients who presented with corrosive esophageal strictures at one institution between June 1989 and April 2015 were retrospectively identified. The search yielded the records of 15 patients with histopathologically proven esophageal cancer. Esophagograms (13 patients) and chest CT images (14 patients) were interpreted independently by two reviewers. Esophagographic findings included the location of tumor, morphologic type, presence and length of mucosal irregularity, presence of asymmetric involvement, and presence of rigidity. CT findings included presence and type of esophageal wall thickening, pattern of enhancement, presence of periesophageal infiltration, and presence of hilar or mediastinal lymphadenopathy.
Results: Esophagography showed that the tumor was involved with the stenotic portion in 10 of the 13 patients (76.9%). The most common morphologic feature was a polypoid mass, in 10 patients. In 12 patients (92.3%), mucosal irregularities were observed; the mean affected length was 4.92 cm. Asymmetric involvement and rigidity were observed in nine patients (69.2%). On CT scans, eccentric wall thickening was observed in 10 of the 14 patients (71.4%), homogeneous enhancement in nine (64.2%), and periesophageal infiltration in 11 (78.5%).
Conclusion: Esophagography commonly shows corrosive esophageal cancer as a polypoid mass with long-segment mucosal irregularities at the stenotic portion, asymmetric involvement, and rigidity. CT shows eccentric esophageal wall thickening with homogeneous enhancement and periesophageal infiltration, which are suggestive of the development of malignancy in patients with corrosive esophageal strictures.
abstract_id: PUBMED:6674770
Carcinoma of the esophagus engrafted on corrosive stricture of the esophagus A highly significant incidence of esophageal cancer engrafted on corrosive stricture of the esophagus has been statistically evaluated. So far, only 8 cases in Japan and over 100 cases in the foreign literature have been reported. In 1976, we reported the first case of carcinoma of the esophagus associated with corrosive stricture of the esophagus in Japan. Our second case, operated on in 1975, was pathologically confirmed to be an adenosquamous carcinoma of the esophagus, which had never been reported in the literature. In this paper, the surgical and pathological problems of carcinoma of the esophagus associated with corrosive stricture of the esophagus are discussed, together with an analysis of the 8 cases reported in Japan.
abstract_id: PUBMED:22552862
Laparoscopic retrosternal bypass for corrosive stricture of the esophagus. Introduction: Surgical management of corrosive stricture of the esophagus entails replacement of the scarred esophagus with a gastric or colonic conduit. This has traditionally been done using the conventional open surgical approach. We herein describe the first ever reported minimally invasive technique for performing retrosternal esophageal bypass using a stomach conduit.
Methods: Patients with corrosive stricture involving the esophagus alone with a normal stomach were selected. The surgery was performed with the patient in supine position using four abdominal ports and a transverse skin crease neck incision. Steps included mobilization of the stomach and division of the gastroesophageal junction, creation of a retrosternal space, transposition of stomach into the neck (via retrosternal space), and a cervical esophagogastric anastomosis.
Results: Four patients with corrosive stricture of the esophagus underwent this procedure. The average duration of surgery was 260 (240-300) min. All patients could be ambulated on the first postoperative day and were allowed oral liquids between the fifth and seventh day. At mean follow-up of 6.5 (3-9) months, all are euphagic to solid diet and have excellent cosmetic results.
Conclusions: Laparoscopic bypass for corrosive stricture of the esophagus using a gastric conduit is technically feasible. It results in early postoperative recovery, effective relief of dysphagia, and excellent cosmesis in these young patients.
Answer: The necessity of resecting the diseased esophagus in performing reconstruction for corrosive esophageal stricture is a topic of debate. Some studies and case reports suggest that resection of the diseased esophagus should be included in the surgical reconstruction procedure for chronic intractable caustic stricture. This is due to the high incidence of cicatricial carcinoma arising from the stricture sites and the possibility of hidden malignancy. Resection along with reconstruction is believed to give patients a better probability of being completely cured of the disease (PUBMED:11423265).
However, there are also arguments for simple bypass without resection. A case report described the development of scar cancer 28 years after esophageal resection and more than 50 years after the corrosive injury, suggesting that simple bypass might be another viable option (PUBMED:19386567).
Different surgical techniques have been employed for the reconstruction of the esophagus after corrosive injury, including thoracolaparoscopic-assisted esophagectomy (PUBMED:32494524), robotic Ivor-Lewis esophagectomy (PUBMED:35509761), and laparoscopic retrosternal bypass (PUBMED:22552862). These methods aim to avoid the long-term complications related to leaving behind the scarred native esophagus and have been reported to be safe and feasible, with good postoperative outcomes.
In summary, while there is evidence supporting the resection of the diseased esophagus during reconstruction to prevent the high risk of carcinoma development (PUBMED:11423265), alternative approaches that do not involve resection are also practiced and can be successful in certain cases (PUBMED:19386567, PUBMED:32494524, PUBMED:35509761, PUBMED:22552862). The decision to resect or not may depend on individual patient factors, the extent of the stricture, and the surgeon's experience and preference. |
Instruction: Is there a place for intra-aortic balloon counterpulsation support in acute right ventricular failure by pressure-overload?
Abstracts:
abstract_id: PUBMED:26142968
Is there a place for intra-aortic balloon counterpulsation support in acute right ventricular failure by pressure-overload? Background: Most therapeutic strategies for acute right ventricular failure (RVF) due to pressure overload are directed at improving cardiac output and coronary perfusion pressure with vasopressive agents. The potential role of intra-aortic balloon counterpulsation (IABP) support remains questionable. This study investigates the contribution of IABP in acute RVF due to pressure overload, in comparison with phenylephrine (PE) and norepinephrine (NOR).
Methods: Acute RVF was induced by fixed pulmonary artery constriction in 6 pigs, targeting a 50% reduction of cardiac output. Assessment of the treatment interventions included biventricular PV-loop analysis and continuous measurement of aortic and right coronary artery flow.
Results: Restoration of baseline cardiac output was only observed with administration of NOR (Baseline = 3.82±1.52 l/min - RVF = 2.03±0.59 l/min - IABP = 2.45±0.62 l/min - PE = 2.98±0.63 l/min - NOR = 3.95±0.73 l/min, p<0.001). NOR had the greatest effect on biventricular contractility (PRSW-slope-RV: IABP +24% - PE +59% - NOR +208%, p<0.001 and PRSW-slope-LV: IABP +36% - PE +53% - NOR +196%, p<0.001), heart rate acceleration (IABP +7% - PE +12% - NOR +51%, p<0.001), and RCA flow (IABP +31% - PE +58% - NOR +180%, p<0.001), concomitant with a higher increase of the LV-to-RV pressure ratio (IABP: +7% versus -3%, PE: +36% versus +8%, NOR: +101% versus 42%). The hemodynamic contribution of IABP was limited, apart from a modest improvement of LV compliance during PE and NOR infusion.
Conclusion: In a model of acute pressure-overload RV failure, IABP appears to offer limited hemodynamic benefit. The administration of norepinephrine is most effective in correcting systemic output and myocardial perfusion, by adding an inotropic and chronotropic effect to systemic vasopressor support.
abstract_id: PUBMED:19766243
Right ventricular failure resulting from pressure overload: role of intra-aortic balloon counterpulsation and vasopressor therapy. Background: Augmentation of coronary perfusion may improve right ventricular (RV) failure following acute increases of RV afterload. We investigated whether intra-aortic balloon counterpulsation (IABP) can improve cardiac function by enhancing myocardial perfusion and reversing compromised biventricular interactions using a model of acute pressure overload.
Materials And Methods: In 10 anesthetized pigs, RV failure was induced by pulmonary artery constriction, and systemic hypertension strategies with IABP, phenylephrine (PE), or the combination of both were tested. Systemic and ventricular hemodynamics [cardiac index (CI), ventricular pressures, coronary driving pressures (CDP)] were measured and echocardiography was used to assess tricuspid valve regurgitation, septal positioning (eccentricity index (ECI)), and changes in ventricular and septal dimensions and function [myocardial performance index (MPI), peak longitudinal strain].
Results: Pulmonary artery constriction resulted in doubling of RV systolic pressure (54 ± 4 mm Hg), RV distension, severe TR (4+) with decreased RV function (strain: -33%; MPI: +56%), septal flattening (Wt%: -35%) and leftward septal shift (ECI: 1.36), resulting in global hemodynamic deterioration (CI: -51%; SvO(2): -26%), and impaired CDP (-30%; P<0.05). IABP support alone failed to improve RV function despite higher CDP (+33%; P<0.05). Systemic hypertension by PE improved CDP (+70%), RV function (strain: +22%; MPI: -21%), septal positioning (ECI: 1.12) and minimized TR, but LV dysfunction (strain: -25%; MPI: +31%) occurred after LV afterloading (P<0.05). With IABP, less PE (-41%) was needed to maintain hypertension and CDP was further augmented (+25%). IABP resulted in LV unloading and restored LV function, and increased CI (+46%) and SvO(2) (+29%; P<0.05).
Conclusions: IABP with minimal vasopressors augments myocardial perfusion pressure and optimizes RV function after pressure-induced failure.
abstract_id: PUBMED:9436561
Intraaortic balloon counterpulsation improves right ventricular failure resulting from pressure overload. Background: Right ventricular (RV) dysfunction is common after heart transplantation, and myocardial ischemia is considered to be a significant contributor. We studied whether intraaortic balloon counterpulsation would improve cardiac function using a model of acute RV pressure overload.
Methods: In 10 anesthetized sheep, RV failure was induced using a pulmonary artery constrictor. Baseline measurements included mean systemic blood pressure, RV peak systolic pressure, cardiac index, and RV ejection fraction. Myocardial and organ perfusion were measured using radioactive microspheres.
Results: After pulmonary artery constriction, there was an increase in RV peak systolic pressure (32 +/- 2 to 60 +/- 3 mm Hg; p < 0.01) and a decrease in mean systemic blood pressure (68 +/- 4 to 49 +/- 2 mm Hg; p < 0.01), RV ejection fraction (0.51 +/- 0.04 to 0.16 +/- 0.02; p < 0.01), and cardiac index (2.48 +/- 0.04 to 1.02 +/- 0.11; p < 0.01). Blood flow to the RV did not change significantly, but there was a significant reduction in blood flow to the left ventricle. The initiation of intraaortic balloon counterpulsation (1:1) using a 40-mL intraaortic balloon inserted through the left femoral artery resulted in an increase in mean systemic blood pressure (49 +/- 2 to 61 +/- 3 mm Hg; p < 0.01), cardiac index (1.02 +/- 0.11 to 1.45 +/- 0.14; p < 0.05), RV ejection fraction (0.16 +/- 0.02 to 0.23 +/- 0.02; p < 0.01), and blood flow to the left ventricle.
Conclusions: In a model of right heart failure, the institution of intraaortic balloon counterpulsation caused a significant improvement in cardiac function. Although RV ischemia was not demonstrated, the augmentation of left coronary artery blood flow by intraaortic balloon counterpulsation and subsequent improvement in left ventricular function suggest that left ventricular ischemia contributes to RV dysfunction, presumably through a ventricular interdependence mechanism. Therefore, study of the safety and efficacy of intraaortic balloon counterpulsation in the management of patients with acute right heart dysfunction is warranted.
abstract_id: PUBMED:6476946
Pulmonary artery balloon counterpulsation for right ventricular failure: I. Experimental results. The effects of pulmonary artery balloon counterpulsation (PABC) as a circulatory assist for the failing right ventricle were investigated. Sixteen anesthetized dogs underwent instrumentation to measure cardiac output and to record pressures in both ventricles, the pulmonary artery, and the aorta. Autonomic control of the heart was surgically and pharmacologically ablated. A specially designed counterpulsation balloon was inserted through the right ventricular (RV) outflow tract into the pulmonary artery. Pulmonary hypertension, induced acutely by the microembolization of the pulmonary circulation with glass beads combined with infusion of serotonin, served as a model for development of acute RV failure. Immediate effects of PABC were investigated in 10 dogs during normal function and failure of the right ventricle at different levels of preload. After further embolization which caused progressive cardiogenic shock, the effects of 10 minutes of PABC, and of its withdrawal, were examined. In all cases, PABC immediately decreased RV preload and afterload. In the failing right ventricle, counterpulsation also significantly increased cardiac output. Progressive cardiogenic shock was successfully reversed by PABC; after 10 minutes of counterpulsation, increases in cardiac output (+53%), arterial pressure (+55%), and RV minute work (+62%) were observed, paralleled by a fall in RV preload (-22%). After PABC was discontinued, the circulatory status again began to deteriorate. We conclude that PABC effectively improves function of the failing right ventricle caused by acute pulmonary hypertension.
abstract_id: PUBMED:3994443
Pulmonary artery balloon counterpulsation for treatment of intraoperative right ventricular failure. Pulmonary artery balloon counterpulsation was used in 3 patients who underwent open-heart operation for the treatment of acquired cardiac lesions. This form of support was initiated because the patients could not be weaned from cardiopulmonary bypass even with intraaortic balloon counterpulsation and maximal pharmacological support. After pulmonary artery balloon pumping was instituted, cardiopulmonary bypass was successfully terminated in all 3 patients. One of them is alive and well one year after operation.
abstract_id: PUBMED:25588185
S3-Guideline: Recommendations for intra-aortic balloon pumping in cardiac surgery Although intra-aortic balloon pumping (IABP) is the most frequently used mechanical cardiac assist device in cardiothoracic surgery, guidelines exist only for selected aspects of aortic counterpulsation, including prophylactic and postoperative use. In contrast, evidence-based recommendations are still lacking concerning intraoperative use, management, contraindications and other relevant issues. According to international surveys, important aspects of IABP usage show a wide variation in clinical practice. The results of a national questionnaire performed before initiation of this guideline confirmed these findings and demonstrated a clear need for the development of a consensus-based guideline. Therefore, the presented multidisciplinary S3-guideline was developed under the direction of the German Society for Thoracic and Cardiovascular Surgery (Deutsche Gesellschaft für Thorax-, Herz- und Gefäßchirurgie, DGTHG) to make evidence-based recommendations for the usage of aortic counterpulsation after cardiothoracic surgery according to the requirements of the Association of the Scientific Medical Societies in Germany (AWMF) and the Medical Centre for Quality (Ärztliches Zentrum für Qualität, ÄZQ). Main topics discussed in this guideline involve IABP support in the prophylactic, preoperative, intraoperative and postoperative setting as well as the treatment of right heart failure, contraindications, anticoagulation, monitoring, weaning, and limitations of IABP therapy. The presented 15 key messages of the guideline were approved after two consensus meetings under moderation of the AWMF with participation of the German Society of Cardiology (DGK), German Society of Anaesthesiology and Intensive Care Medicine (DGAI), German Interdisciplinary Association for Intensive Care (DIVI) and the German Society for Cardiovascular Engineering (DGfK).
abstract_id: PUBMED:6476947
Pulmonary artery balloon counterpulsation for right ventricular failure: II. Clinical experience. The use of pulmonary artery balloon counterpulsation (PABC) provided immediate salvage following cardiac surgical procedures in 2 patients with biventricular failure in whom inotropic drugs and intraaortic balloon counterpulsation did not provide sufficient support to allow weaning from cardiopulmonary bypass. Although both patients eventually died, the hemodynamic effectiveness of PABC was documented. The various clinical settings for right ventricular as well as biventricular failure are reviewed, the currently available options for treatment are summarized, and the directions for future laboratory investigation and possible clinical applications are presented.
abstract_id: PUBMED:7431972
Pulmonary artery balloon counterpulsation for acute right ventricular failure. The development and availability of right ventricular assist devices have not kept pace with the evolution of devices designed to mechanically support the systemic circulation. This report describes the application of the counterpulsation concept to the pulmonary circuit to unload the failing right ventricle and augment pulmonary blood flow. Conventional, widely available balloon pumping equipment was employed. Use of this double balloon pump system enabled a patient to be weaned from cardiopulmonary bypass after all other measures had failed. Other relevant potential clinical applications for this technique are discussed.
abstract_id: PUBMED:18400825
Circulatory support with right ventricular assist device and intra-aortic balloon counterpulsation in a patient with right ventricular failure after pulmonary embolectomy. Severe pulmonary embolism may lead to acute right ventricular failure despite immediate surgical embolectomy, which is regarded as the treatment of choice after recent CABG surgery. We report a case of a patient with massive pulmonary thromboembolism which resulted in acute right ventricular failure following early surgical embolectomy. Pulmonary embolism developed two days after an elective off-pump CABG surgery. We observed severe circulatory collapse, which resulted in cardiac arrest and proved refractory to pharmacological treatment after immediate cardiopulmonary resuscitation. Intra-aortic balloon pumping was used in an attempt to improve hemodynamic performance during surgical skin preparation. After completion of the embolectomy, the patient could not be weaned from CPB; given clinical signs of low cardiac output and an akinetic right ventricle, the decision was made to support right ventricular function with a centrifugal pump. The substantial improvement of right ventricular function observed in the next 24 h allowed weaning the patient from right ventricular support. In spite of hemodynamic recovery, the patient remained in a coma on discharge from the cardiac-surgical ICU after 18 days, and died 10 days later from systemic infection.
abstract_id: PUBMED:27230622
The impact of a failing right heart in patients supported by intra-aortic balloon counterpulsation. Background: Intra-aortic balloon pumps (IABPs) provide primarily left ventricular support, yet few data detail the efficacy of this temporary mechanical circulatory support device in patients with concomitant right ventricular failure. We compared the efficacy of IABPs in cardiogenic shock patients with isolated left ventricular versus biventricular failure.
Methods: IABP-treated cardiogenic shock patients were identified from our center between 2006 and 2012, with patients stratified by either isolated left ventricular failure or biventricular failure. We compared baseline characteristics and 72-hour and 30-day outcomes between groups. Outcomes of interest included escalation of mechanical circulatory support, a clinical definition of IABP failure, and death.
Results: Among 107 patients, 60 patients (56%) had isolated left ventricular failure compared with 47 patients (44%) having biventricular failure. Patients with isolated left ventricular failure were older and more likely to have coronary artery disease ( p<0.05, both). Patients with biventricular failure more often required escalation of mechanical circulatory support at both 72 hours (21% vs. 2%, p<0.001) and 30 days (36% vs. 30%). However, there was no significant difference between groups for failure of IABP therapy at 72 hours ( p=0.27) or at 30 days ( p=0.62) and death at 30 days ( p=0.98). In adjusted analysis, there was no significant difference between groups with regard to risk for a clinical definition of IABP failure at 30 days (odds ratio=0.85, 95% confidence interval (0.27, 2.69)).
Conclusions: IABP-treated cardiogenic shock patients with biventricular failure more often required early escalation of mechanical circulatory support. However, there were no significant differences by type of ventricular failure with regard to 30-day outcomes.
Answer: The role of intra-aortic balloon counterpulsation (IABP) in the management of acute right ventricular failure (RVF) due to pressure overload has been explored in several studies, with varying conclusions.
Some studies suggest that IABP may offer limited hemodynamic benefit in acute pressure-overload RV failure. For instance, a study on pigs with induced acute RVF by fixed pulmonary artery constriction found that norepinephrine was more effective than IABP in restoring baseline cardiac output and improving myocardial perfusion, suggesting that IABP's contribution was modest (PUBMED:26142968).
Another study in anesthetized pigs with RV failure induced by pulmonary artery constriction showed that IABP alone failed to improve RV function despite higher coronary driving pressures. However, when combined with minimal vasopressors, IABP augmented myocardial perfusion pressure and optimized RV function after pressure-induced failure (PUBMED:19766243).
In contrast, a study in anesthetized sheep demonstrated that IABP improved cardiac function in a model of right heart failure, suggesting that augmentation of left coronary artery blood flow by IABP and subsequent improvement in left ventricular function could benefit RV dysfunction through a ventricular interdependence mechanism (PUBMED:9436561).
Clinical experiences with pulmonary artery balloon counterpulsation (PABC) have also been reported. PABC was used in patients who could not be weaned from cardiopulmonary bypass even with IABP and maximal pharmacological support, and it successfully terminated cardiopulmonary bypass in all patients in one study (PUBMED:3994443). However, the long-term outcomes were not as favorable, with patients eventually dying despite the hemodynamic effectiveness of PABC (PUBMED:6476947).
In the context of cardiogenic shock, a study found that patients with biventricular failure who were treated with IABP more often required early escalation of mechanical circulatory support, but there were no significant differences in 30-day outcomes when compared to patients with isolated left ventricular failure (PUBMED:27230622).
In summary, while IABP may provide some hemodynamic support in acute RVF due to pressure overload, its efficacy appears to be limited and may be more beneficial when used in conjunction with vasopressors. The evidence suggests that IABP's role in managing acute RVF by pressure overload is not definitive and may depend on the individual patient's condition and the presence of biventricular involvement. |
Instruction: Car mass and fatality risk: has the relationship changed?
Abstracts:
abstract_id: PUBMED:8279608
Car mass and fatality risk: has the relationship changed? Objectives: The finding that the relative safety disadvantage of small compared with large cars is less for post-1980 cars than for pre-1980 cars has stimulated speculation that increasing fuel economy standards would increase fatalities less than previously expected. Fatal crashes between two cars of similar model year were examined to see whether this would be the case.
Methods: Driver fatality risk in relation to car mass was examined with Fatal Accident Reporting System data for crashes between two cars of a specific model year.
Results: The relative risk for driver fatality in the lighter car compared with the other driver's risk in a car 50% heavier was as follows: for 1966 through 1979 cars, the risk was between 3.7 and 5.1; for 1984 cars, 2.6; and for 1990 cars, 4.1.
Conclusions: The results suggest that the lesser mass effect observed for mid-1980s cars occurred because improved crashworthiness features appeared in small cars earlier than in large cars. As all cars are redesigned, the relationship between risk and mass can be expected to approach that observed earlier in pre-1980 cars. If so, future fatality increases from fuel economy increases will be greater than estimated on the basis of mid-1980 data.
abstract_id: PUBMED:1636830
Car size or car mass: which has greater influence on fatality risk? Objectives: Proposed increases in corporate average fuel economy standards would probably lead to lighter cars. Well-established relationships between occupant risk and car mass predict consequent additional casualties. However, if size, not mass, is the causative factor in these relationships, then decreasing car mass need not increase risk. This study examines whether mass or size is the causative factor.
Methods: Data from the Fatal Accident Reporting System are used to explore relationships between car mass, car size (as represented by wheelbase), and driver fatality risk in two-car crashes.
Results: When cars of identical (or similar) wheelbase but different mass crash into each other, driver fatality risk depends strongly on mass; the relationship is quantitatively similar to that found in studies that ignore wheelbase. On the other hand, when cars of similar mass but different wheelbase crash into each other, the data reveal no dependence of driver fatality risk on wheelbase.
Conclusions: Mass is the dominant causative factor in relationships between driver risk and car size in two-car crashes, with size, as such, playing at most a secondary role. Reducing car mass increases occupant risk.
abstract_id: PUBMED:4096791
Fatality risk for belted drivers versus car mass. This study was performed to determine how the likelihood of a belted driver being killed in a single car crash depends on the mass of the car. This was done by applying the pedestrian fatality exposure approach to the subset of fatalities in the Fatal Accident Reporting System (FARS) for which the driver was coded as using a shoulder belt and/or a lap belt. Combining the 1975 through 1982 data provided a sufficiently large population of belted drivers to perform the analysis. In the exposure approach used, the number of car drivers killed in single car crashes is divided by the number of nonoccupant fatalities (pedestrians or motorcyclists) associated with the same group of cars. The ratio is interpreted to reflect the physical effect of car mass, essentially independent of driver behavior effects. In the present application, car mass effects for belted drivers were determined by considering the number of belted drivers killed divided by the number of nonoccupants killed in crashes involving cars whose drivers were coded in the FARS files as being belted. Because the belt use of surviving drivers is, to some extent, self-reported, it is considered that the data given in the report should not be used to estimate the effectiveness of seat belts in preventing fatalities. The results are presented as graphical and analytical comparisons of fatality likelihood versus car mass for belted and unbelted drivers. It is concluded that the effect of car mass on relative driver fatality likelihood is essentially the same for belted and unbelted drivers (for example, the present analysis indicates that a belted driver in a 900 kg car is 2.3 times as likely to be killed in a single car crash as is the belted driver in an 1800 kg car; the corresponding ratio determined here for unbelted drivers is 2.4). As a consequence of this conclusion, the relative effectiveness of seat belts in preventing driver fatalities is similar for cars of different masses.
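The "pedestrian fatality exposure approach" described above reduces to a simple ratio. The notation below is introduced here only to make the comparison explicit and is not taken from the paper; the 2.3 and 2.4 figures are the ones quoted in the abstract.

\[
  R(m) \;=\; \frac{D(m)}{N(m)}, \qquad
  \frac{R(900\ \mathrm{kg})}{R(1800\ \mathrm{kg})} \;\approx\; 2.3 \ \text{(belted)}, \quad \approx 2.4 \ \text{(unbelted)},
\]

where \(D(m)\) is the number of drivers killed in single-car crashes in cars of mass \(m\) and \(N(m)\) is the number of nonoccupant fatalities (pedestrians or motorcyclists) associated with the same group of cars.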
abstract_id: PUBMED:11441734
Causal influence of car mass and size on driver fatality risk. Objectives: This study estimated how adding mass, in the form of a passenger, to a car crashing head-on into another car affects fatality risks to both drivers. The study distinguished the causal roles of mass and size.
Methods: Head-on crashes between 2 cars, one with a right-front passenger and the other with only a driver, were examined with Fatality Analysis Reporting System data.
Results: Adding a passenger to a car led to a 14.5% reduction in driver risk ratio (risk to one driver divided by risk to the other). To divide this effect between the individual drivers, the author developed equations that express each driver's risk as a function of causal contributions from the mass and size of both involved cars. Adding a passenger reduced a driver's frontal crash fatality risk by 7.5% but increased the risk to the other driver by 8.1%.
Conclusions: The presence of a passenger reduces a driver's frontal crash fatality risk but increases the risk to the driver of the other car. The findings are applicable to some single-car crashes, in which the driver risk decrease is not offset by any increase in harm to others. When all cars carry the same additional cargo, total population risk is reduced.
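The two individual-driver effects quoted above combine multiplicatively into the reported change in the driver risk ratio; the back-of-the-envelope check below is ours, not the author's, but it shows the numbers are mutually consistent.

\[
  \frac{\text{risk ratio with passenger}}{\text{risk ratio without passenger}}
  \;=\; \frac{1 - 0.075}{1 + 0.081}
  \;=\; \frac{0.925}{1.081}
  \;\approx\; 0.856,
\]

i.e. roughly a 14% reduction, in line with the reported 14.5% reduction in the driver risk ratio.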
abstract_id: PUBMED:31239920
The Forensic Anthropologist in the Mass Fatality Context. Mass fatality incidents require a multi-agency, multidisciplinary response to effectively and efficiently manage the recovery and identification of human remains. The forensic anthropologist is uniquely suited for a significant role in the disaster response, demonstrated in the recovery and triage of human remains, interpretation of skeletal trauma, and identification of victims. However, the majority of published literature discusses these response operations in the context of large-scale incidents with significant numbers of highly fragmented and commingled human remains, which does not reflect the operational reality of mass fatality incidents in the United States. This article provides a realistic definition of the term "mass fatality incident" for medicolegal jurisdictions and provides the contributions of the forensic anthropologist for all types of incidents.
abstract_id: PUBMED:31239985
Trends in United States Mass Fatality Incidents and Recommendations for Medical Examiners and Coroners. It is imperative that medicolegal jurisdictions prepare for the occurrence of a mass fatality incident. Despite the trend to plan for catastrophic and complicated incidents, this analysis of recent mass fatality events seeks to better inform authorities regarding the scale and types of incidents that could potentially impact their jurisdiction. The guidance provided by this study serves as a tool to guide the development of plans, acquisition of appropriate resources, and training of staff. To perform this analysis, data were collected from mass fatality incidents occurring in the United States from January 1, 2000 to December 31, 2016 that resulted in ten or more fatalities. Specific data points were collected for each incident including the date, location, number of fatalities, incident type (e.g., man-made or natural), incident subtype, and description (e.g., mass shooting, hurricane, aviation). A total of 137 incidents fit the criteria for inclusion in the analysis, resulting in a total of 8462 fatalities. The average number of incidents was eight per year during the study period. The analysis demonstrates that most mass fatality incidents (88.8%) result in between ten and 50 fatalities and are variable based on incident type and geographic location. This study includes several large-scale incidents, which as outliers have influenced fatality management operations and preparedness efforts on a national level. In particular, the World Trade Center attack of September 11, 2001 and subsequent remains recovery and identification operations have served to inform the New York City Office of Chief Medical Examiner of the capabilities required to manage a complex, protracted victim identification process involving extensive body fragmentation and commingling. While the World Trade Center attack has been shown to be outside the normal trends of mass fatality incidents, it has nonetheless offered the medicolegal community several invaluable lessons.
abstract_id: PUBMED:7999205
Driver injury and fatality risk in two-car crashes versus mass ratio inferred using Newtonian mechanics. This paper aims at explaining the results of a recent empirical study that found that when cars of unequal mass crash into each other, the ratio of driver fatality risk in the lighter car to risk in the heavier car (the fatality risk ratio) increased as a power function of the ratio of the mass of the heavier car to that of the lighter car (the mass ratio). The present study uses two sources of information to examine the relationship between these same quantities: first, calculations based on Newtonian mechanics, which show that when two cars crash head-on into each other, the ratio of their changes in speed (delta-v) is inversely proportional to mass ratio; second, National Accident Sampling System data, which show how delta-v affects driver injury risk. The study is performed for fatalities and severe injuries and for unbelted and belted drivers. Combining the two sources of information gives the result that fatality risk ratio increases as a power function of mass ratio, the same functional form found in the empirical study. Because the study is rooted in Newtonian mechanics, it clearly and directly identifies physical mechanisms involved and leads to the conclusion that mass, as such, causes large differences in driver injury and fatality risk when cars of unequal mass crash into each other.
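A brief sketch of the mechanics argument summarized above, written out for clarity (the power-law form of the risk-versus-delta-v relationship is an assumption consistent with, but not stated in, the abstract):

    Conservation of momentum for the two-car system: m1 * delta_v1 = m2 * delta_v2,
    so delta_v1 / delta_v2 = m2 / m1 = mu (the mass ratio).

    If individual driver risk grows with speed change roughly as r proportional to (delta_v)^k,
    then the fatality risk ratio between the lighter and the heavier car is
    R = r1 / r2 = (delta_v1 / delta_v2)^k = mu^k,

i.e. a power function of the mass ratio, the functional form the abstract reports; the exponent k is not given by mechanics and must be estimated from crash data such as the National Accident Sampling System.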
abstract_id: PUBMED:27139212
Assessment of Mass Fatality Preparedness and Response Content in Dental Hygiene Education. When mass fatality incidents (MFIs) occur, they can quickly overwhelm local, state, and government agencies, resources, and personnel. It is important to have a rapid and effective response with skilled, multidisciplinary victim identification teams since specific skill sets are necessary to participate in mass fatality preparedness and response. The aims of this study were to determine the extent of formal education related to mass fatality preparedness and response training in U.S. dental hygiene programs and to assess program directors' perceptions of the need for such training. A 23-item cross-sectional survey was emailed to 319 U.S. dental hygiene programs in 2015. Survey questions addressed if the program offered mass fatality preparedness and response training to its students and how much training was given, as well as collecting respondents' demographics and opinions regarding education and training. An overall response rate of 36% was obtained, with 111 program chairs completing the survey. The results showed that only a small percentage of responding programs incorporated coursework related to mass fatality and preparedness in their curricula. Of the responding programs, 84% had no formal instruction on the role of a dental hygienist in MFIs; however, 53 of 69 program directors agreed or strongly agreed that the role of dental hygienists in MFIs should be covered in dental hygiene curricula. The top three barriers to incorporating such training reported by respondents were time requirements, lack of faculty expertise, and lack of equipment. Future research is needed to establish standardized competencies for mass fatality preparedness and response in dental hygiene education.
abstract_id: PUBMED:32713413
COVID-19 and Mass Fatality Management: A Public Health Challenge. The COVID-19 pandemic has posed a serious question about preparedness to deal with mass fatality. Current trends suggest that there will be more bodies than the available capacity and resources can handle. International agencies have alerted governments that the number of deaths may overwhelm local capacity to handle dead bodies properly. Mass fatality management and planning are important for respecting the dignity of the deceased and their surviving families. Inadequate capacity to deal with dead bodies may affect the psychological well-being of survivors, resulting in distress to families and communities.
abstract_id: PUBMED:28101750
Mortuary operations following mass fatality natural disasters: a review. This is a critical review of best-practice approaches to mortuary operations in preparation for, and in response to, mass fatality natural disaster events, as identified from published articles. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) Statement guided the identification of potential articles to use in this critical review. Subsequent searches were also conducted to identify articles relating to heat wave and flood mortality. All identified peer-reviewed studies published in English which discussed the preparation and response of mortuaries to mass fatality natural disasters occurring in developed countries were included. Using the PRISMA-P method of identifying articles, 18 articles were selected for inclusion in this review. Although numerous articles describe the mortuary response to mass fatality incidents, few analyzed the response or discussed the roles that supported and enabled the organization to undertake the task of identifying disaster victims. It is thus difficult to determine objectively whether the actions and activities outlined in the articles represent best practice.
Answer: The relationship between car mass and fatality risk has been a subject of study, and the findings suggest that this relationship has indeed changed over time. For cars manufactured between 1966 and 1979, the ratio of driver fatality risk in the lighter car to the risk in a car 50% heavier ranged between 3.7 and 5.1. For 1984 model cars, this ratio was lower at 2.6, and for 1990 model cars it was 4.1 (PUBMED:8279608). This indicates that the relative safety disadvantage of small compared to large cars was less for post-1980 cars than for pre-1980 cars. The lesser mass effect observed for mid-1980s cars was attributed to improved crashworthiness features appearing in small cars earlier than in large cars. As all cars are redesigned, the relationship between risk and mass is expected to revert to that observed in pre-1980 cars, suggesting that future fatality increases from fuel economy increases could be greater than estimated from mid-1980s data (PUBMED:8279608).
Another study confirmed that car mass is the dominant causative factor in relationships between driver risk and car size in two-car crashes, with size playing at most a secondary role. Reducing car mass was found to increase occupant risk (PUBMED:1636830). Furthermore, the effect of car mass on relative driver fatality likelihood was found to be essentially the same for belted and unbelted drivers, indicating that the relative effectiveness of seat belts in preventing driver fatalities is similar for cars of different masses (PUBMED:4096791).
In summary, the relationship between car mass and fatality risk has changed over time, with the relative safety disadvantage of smaller cars decreasing in the mid-1980s due to improved crashworthiness. However, as car designs continue to evolve, the relationship is expected to align more closely with the patterns observed before the 1980s, potentially leading to higher fatality rates as fuel economy standards drive the production of lighter vehicles. |
Instruction: Willingness to pay: a method for measuring preferences for maternity care?
Abstracts:
abstract_id: PUBMED:35660773
Critiquing the evolution of maternity care preferences research: A systematic mixed studies review. Objective: Whether women's preferences for maternity care are informed remains unclear, suggesting that maternal preferences may not accurately represent what women truly want. The aim of this study was to understand and critique research on women's maternity care preferences published since 2010.
Design: Systematic mixed studies review. CINAHL, EMBASE, MEDLINE, and ProQuest Nursing and Allied Health electronic databases were searched from January 2010 to April 2022.
Findings: Thirty-five articles were included. Models of care and mode of birth were the most frequently investigated preference topics. Roughly three-quarters of included studies employed a quantitative design. Few studies assessed women's baseline knowledge regarding the aspects of maternity care investigated, and three provided information to help inform women's maternity care preferences. Over 85% of studies involved women who were either pregnant at the time of investigation or had previously given birth, and 71% employed study designs where women were required to select from pre-determined response options to describe their preferences. Two studies asked women about their preferences in the face of unlimited access and availability to specific maternity care services.
Key Conclusions: Limited provision of supporting information, the predominant inclusion of women with experience using maternity care services, and limited use of mixed methods may have hindered the collection of accurate information from women about their preferences.
Implications For Practice: Women's maternity care preferences research since 2010 may only present a limited version of what they want.
abstract_id: PUBMED:34695625
Call (and pay) the midwife: A discrete choice experiment on mothers' preferences and their willingness to pay for midwifery care. Background: Mothers in Germany are entitled to midwifery care; however, they face a lack of skilled professionals. While the reliability of the access to midwifery is of great public interest, we know little about clients' preferences.
Objectives: We conduct a discrete choice experiment to study preferences and willingness to accept copayment for the entire scope of midwifery care (pregnancy, delivery, and postnatal care). We thereby aim to provide policy recommendations for priority setting in times of scarcity. Furthermore, we evaluate to what extent midwives' education matters to parents and assess the degree of support for the latest Midwifery Reform Act, which transfers education from vocational schools to universities.
Design: Discrete choice experiment with separated adaptive dual response.
Settings: Online Survey promoted through Facebook to parents in Germany.
Respondents: 2080 respondents completed the experiment. They all have or are expecting at least one natural child, mainly born between 2018 and 2020 (87%). The average respondent is female (99%), 33 years old, with a university degree (50%).
Methods: We use a d-optimal fractional factorial design and obtain individual parameter estimates through a Multinomial Logit analysis with Hierarchical Bayes estimation techniques. We calculate willingness to pay and importance weights and simulate uptake probabilities for different packages of care. To avoid extreme choice behavior, we apply separated adaptive dual response.
Results: Home visits during the postnatal phase are most important (importance weight 50%); online support is demanded when no personal support is available. We find that 1:1 care during delivery is highly preferred, but one midwife supporting two women intrapartum is still acceptable. The midwife's education plays a minor role with an importance weight of 3%; however, we find a preference for midwives trained at vocational schools rather than at universities.
Conclusions: In times of scarcity, postnatal care in the form of home visits should be prioritized over pregnancy counseling, and online services should be promoted as an add-on but not as a substitute for personal support. There is a high level of willingness to accept co-financing to ensure the availability of services usually covered by health insurance.
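The willingness-to-pay and importance-weight calculations mentioned in the Methods above follow standard discrete-choice arithmetic: an attribute's monetary value is its part-worth utility divided by the negative of the cost coefficient, and its importance is its share of the total utility range across attributes. A minimal Python sketch of that arithmetic, using invented coefficients and simplified attribute names that are not taken from the study:

    # Minimal sketch (not the authors' code); all coefficient values are invented.
    import math

    # Part-worth utilities from a multinomial logit model (one non-reference level per attribute).
    part_worths = {
        "postnatal_home_visits": 1.20,
        "one_to_one_care_during_delivery": 0.80,
        "university_trained_midwife": 0.05,
    }
    cost_coef = -0.01  # utility change per additional euro of monthly copayment

    # Willingness to pay for an attribute level = -(part-worth) / (cost coefficient).
    wtp = {name: -beta / cost_coef for name, beta in part_worths.items()}

    # Importance weight = the attribute's utility range as a share of the summed ranges
    # (with one non-reference level per attribute, the range is simply |part-worth|).
    total_range = sum(abs(beta) for beta in part_worths.values())
    importance = {name: abs(beta) / total_range for name, beta in part_worths.items()}

    # Simulated uptake of two hypothetical care packages via logit choice shares.
    utility_a = part_worths["postnatal_home_visits"] + cost_coef * 50  # package A, 50 EUR copayment
    utility_b = part_worths["one_to_one_care_during_delivery"]         # package B, no copayment
    share_a = math.exp(utility_a) / (math.exp(utility_a) + math.exp(utility_b))

    for name in part_worths:
        print(f"{name}: WTP ~ {wtp[name]:.0f} EUR/month, importance ~ {importance[name]:.0%}")
    print(f"Simulated uptake of package A: {share_a:.0%}")

The study itself estimated individual-level coefficients with Hierarchical Bayes, so its published willingness-to-pay figures and importance weights summarize distributions of these quantities across respondents rather than a single set of coefficients.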
abstract_id: PUBMED:27677443
Care Consistency With Documented Care Preferences: Methodologic Considerations for Implementing the "Measuring What Matters" Quality Indicator. A basic tenet of palliative care is discerning patient treatment preferences and then honoring these preferences, reflected by the inclusion of "Care Consistency With Documented Care Preferences" as one of 10 "Measuring What Matters" quality indicators. Measuring What Matters indicators are intended to serve as a foundation for quality measurement in health care settings. However, there are a number of logistic and practical issues to be considered in the application of this quality indicator to clinical practice. In this brief methodologic report, we describe how care consistency with documented care preferences has been measured in research on patients near the end of life. Furthermore, we outline methodologic challenges in using this indicator in both research and practice, such as documentation, specificity and relevance, preference stability, and measuring nonevents. Recommendations to strengthen the accuracy of measurement of this important quality marker in health care settings include consistent recording of preferences in the medical record, considerations for selection of treatment preferences for tracking, establishing a protocol for review of preferences, and adoption of a consistent measurement approach.
abstract_id: PUBMED:9534503
Willingness to pay: a method for measuring preferences for maternity care? Background: The aim of this study was to assess the feasibility of the use of "willingness to pay" as a measure of the benefits of intrapartum care.
Methods: A questionnaire was mailed to 150 pregnant women booking at Aberdeen Maternity Hospital in the northeast of Scotland, giving information on options for intrapartum care compiled from a recent randomized trial of care in a midwife-managed delivery unit versus care in a consultant-led labor ward. Women were asked which type of care they preferred and what would be their maximum willingness to pay for their preferred option. Data were also collected on demographic and clinical characteristics.
Results: Most women (55%) expressed a preference for care in a midwives unit. However, strength of preference, as reflected in willingness to pay, was greater among those in the smaller group, who expressed a preference for care in a consultant-led labor ward. The willingness-to-pay results were not associated with ability to pay.
Conclusions: These data should be used together with cost data to decide on provision of care. Given the strength of preference of the minority group, and if the cost implications are not too great, a flexible service that takes account of women's wishes should be provided, even if this goes against the trend for care of those at low risk. By analyzing choice of care by income groups and social class groupings, it is possible to examine whether willingness-to-pay results are associated with indicators of ability to pay. In this case, they were not. Willingness to pay has an advantage in allowing respondents to account for more than just health gain when valuing different types of care.
abstract_id: PUBMED:28419708
Willingness to Pay for a Maternity Waiting Home Stay in Zambia. Introduction: Complications of pregnancy and childbirth can pose serious risks to the health of women, especially in resource-poor settings. Zambia has been implementing a program to improve access to emergency obstetric and neonatal care, including expansion of maternity waiting homes: residential facilities located near a qualified medical facility where a pregnant woman can wait to give birth. Yet it is unclear how much support communities and women would be willing to provide to help fund the homes and increase sustainability.
Methods: We conducted a mixed-methods study to estimate willingness to pay for maternity waiting home services based on a survey of 167 women, men, and community elders. We also collected qualitative data from 16 focus group discussions to help interpret our findings in context.
Results: The maximum willingness to pay was 5.0 Zambian kwacha or $0.92 US dollars per night of stay. Focus group discussions showed that willingness to pay is dependent on higher quality of services such as food service and suggested that the pricing policy (by stay or by night) could influence affordability and use.
Discussion: While Zambians seem to value and be willing to contribute a modest amount for maternity waiting home services, planners must still address potential barriers that may prevent women from staying at the shelters. These include cash availability and affordability for the poorest households.
abstract_id: PUBMED:36084519
Trust in the publicly financed care system and willingness to pay for long-term care: A discrete choice experiment in Denmark. Aging populations put pressure on the provision and financing of long-term care (LTC) services in many countries. The projected increase in LTC expenditures may in particular constitute a threat to the future sustainability of public budgets in welfare states, where LTC is financed through taxes. To accommodate the increasing number of 80+ year-olds in society, policy-makers and service administrators need a better understanding of care preferences among future older adults: What types of services do older citizens prefer most, and which factors shape their LTC preferences? A discrete choice experiment (DCE) was administered to a representative sample of the Danish population aged 54-64 from May to July 2019 (n = 1154), investigating which factors shape individuals' preferences and willingness-to-pay (WTP) for their future LTC. Our results reveal that respondents are willing to make additional out-of-pocket payments to supplement the care provided for free by the municipality. The WTP was highest for services such as receiving help from a regular care team ($129 per month) and an extra shower a week ($116 per month). Moreover, we find heterogeneous care preferences, with three user characteristics associated with higher WTP for services: higher education, high wealth, and a low trust in the publicly financed care system. Our results raise concerns that inequalities between relatively more- and less-resourceful older adults may increase in Scandinavian-type welfare states in the future. Such increasing inequality in service provision may undermine citizens' trust in and support of the publicly financed care system.
abstract_id: PUBMED:35125736
Respectful Maternity Care Initiative: A Qualitative Study. Aim: To assess the available standards for respectful maternity care in a public maternity hospital by evaluation of responses to a questionnaire given to birthing women.
Methodology: Assessment was done to find out the level of respectful maternity care provided under the most sensitive and important areas, namely (1) confidentiality and privacy, (2) physical harm or ill treatment, (3) dignity and respect, (4) left without care, (5) right to information, informed consent, and choice/preferences, by obtaining the response of birthing women.
Results: Confidentiality and privacy: No birthing woman (0%) reported being dissatisfied with the privacy provided at any time during her hospital stay. Physical harm or ill treatment: Notably, no woman reported being ill-treated or physically harmed. Dignity and respect: Nearly 95% of birthing women expressed satisfaction with this important aspect of maternity care; a very small percentage (5.1%) were not completely satisfied. Left without care or attention given at all times: 1.9% of women felt that they were not given an immediate response when they called for any need. Right to information, informed consent, and choice/preferences: The great majority of women (95.7%) were satisfied with the methods used by hospital staff regarding the right to information, informed consent, and practices.
Conclusion: The responses indicate that a significant majority of birthing women received respectful maternity care at the Government Hospital for Women and Children.
abstract_id: PUBMED:26771063
Maternity-care: measuring women's perceptions. Purpose: Achieving maternity-care outcomes that align with women's needs, preferences and expectations is important but theoretically driven measures of women's satisfaction with their entire maternity-care experience do not appear to exist. The purpose of this paper is to outline the development of an instrument to assess women's perception of their entire maternity-care experience.
Design/methodology/approach: A questionnaire was developed on the basis of previous research and informed by a framework of standard service quality categories covering the spectrum of typical consumer concerns. A pilot survey with a sample of 195 women who had recent experience of birth was undertaken to establish valid and reliable scales pertaining to different stages of maternity care. Exploratory factor analysis was used to interpret scales and convergent validity was assessed using a modified version of the Client Satisfaction Questionnaire.
Findings: Nine theoretically informed, reliable and valid stand-alone scales measuring the achievement of different dimensions of women's expectancies of public maternity care were developed. The study scales are intended for use in identifying some potential areas of focus for quality improvement in the delivery of maternity care.
Research Limitations/implications: Reliable and valid tools for monitoring the extent to which services respond to women's expectations of their entire maternity care form part of the broader toolkit required to adequately manage health-care quality. This study offers guidance on the make-up of such tools.
Originality/value: The scales produced from this research offer a means to assess maternity care across the full continuum of care and are brief and easy to use.
abstract_id: PUBMED:28166481
The benefits, risks and costs of privacy: patient preferences and willingness to pay. Objective: Multiple surveys show that patients want medical privacy; however, there are costs to maintaining privacy. There are also risks if information is not shared. A review of previous surveys found that most surveys asked questions about patients' privacy concerns and willingness to share their medical information. We found only one study that asked about sharing medical information for better care and no survey that asked patients about the risk, cost or comparison between medical privacy and privacy in other areas. To fill this gap, we designed a survey to: (1) compare medical privacy preferences to privacy preferences in other areas; (2) measure willingness to pay the cost of additional privacy measures; and (3) measure willingness to accept the risks of not sharing information.
Methods: A total of 834 patients attending physician offices at 14 sites completed all or part of an anonymous questionnaire.
Results: Over 95% of patients were willing to share all their medical information with their treating physicians. There was no difference in willingness to share between primary care and specialty sites including psychiatry and an HIV clinic. In our survey, there was no difference in sharing preference between standard medical information and information with additional legal protections including genetic testing, drug/alcohol treatment and HIV results. Medical privacy was ranked lower than sharing social security and credit card numbers, but was deemed more private than other information including tax returns and handgun purchases. There was no statistical difference for any questions by site except for HIV/AIDS clinic patients ranking privacy of the medical record more important than reducing high medical costs and risk of medical errors (p < .05). Most patients were willing to spend a modest amount of additional time for privacy, but few were willing to pay more for additional medical privacy. Most patients were unwilling to take on additional risks to keep medical information hidden.
Conclusions: Patients were very willing to share medical information with their providers. They were able to see the importance of sharing medical information to provide the best possible care. They were unwilling to hide information from providers if there was increased medical risk. Patients were willing to spend additional time for privacy, but most were unwilling to spend extra money. Sixty-eight percent of patients favored reducing medical costs over privacy.
abstract_id: PUBMED:26635471
Understanding patient preferences and willingness to pay for hemophilia therapies. Background: Despite clearly improved clinical outcomes for prophylaxis compared to on-demand therapy, on average only 56% of patients diagnosed with severe hemophilia receive prophylactic factor replacement therapy in the US. Prophylaxis rates generally drop as patients transition from childhood to adulthood, partly due to patients becoming less adherent when they reach adulthood. Assessment of patient preferences is important because these are likely to translate into increased treatment satisfaction and adherence. In this study, we assessed preferences and willingness to pay (WTP) for on-demand, prophylaxis, and longer acting prophylaxis therapies in a sample of US hemophilia patients.
Methods: Adult US hemophilia patients and caregivers (N=79) completed a discrete-choice survey that presented a series of trade-off questions, each including a pair of hypothetical treatment profiles. Using a mixed logit model for analysis, we compared the relative importance of five treatment characteristics: 1) out-of-pocket treatment costs (paid by patients), 2) factor dose adjustment, 3) treatment side effects, 4) availability of premixed factor, and 5) treatment effectiveness and dosing frequency. Based on these attribute estimates, we calculated patients' WTP.
Results: Out-of-pocket treatment costs (P<0.001), side effects (P<0.001), and treatment effectiveness and dosing frequency (P<0.001) were found to be statistically significant in the model. Patients were willing to pay US $410 (95% confidence interval: $164-$656) out of pocket per month for thrice-weekly prophylaxis therapy compared to on-demand therapy and $360 (95% confidence interval: $145-$575) for a switch from thrice-weekly to once-weekly prophylaxis therapy.
Conclusion: Improvements in treatment effectiveness and dosing frequency, treatment side effects, and out-of-pocket costs per month were the greatest determinants of hemophilia treatment choice and WTP. The positive preferences and WTP for longer acting prophylactic therapies suggest that the uptake is likely to increase adherence, improving treatment outcomes. These preferences should also inform the Food and Drug Administration's assessment of new longer acting hemophilia therapies.
Answer: The concept of "willingness to pay" (WTP) has been explored as a method for measuring preferences for maternity care in several studies. WTP is an economic concept that reflects the maximum amount an individual is willing to spend to acquire a good or service, in this case, maternity care. This method can provide insights into how much value individuals place on different aspects of care and can inform healthcare providers and policymakers about consumer preferences and priorities.
In the study by PUBMED:9534503, the feasibility of using WTP as a measure of the benefits of intrapartum care was assessed. The study found that most women expressed a preference for care in a midwives unit, and the strength of preference, as reflected in WTP, was greater among those who preferred care in a consultant-led labor ward. The results suggested that WTP could be used alongside cost data to inform the provision of care, advocating for a flexible service that accounts for women's wishes.
Similarly, PUBMED:28419708 investigated WTP for a maternity waiting home stay in Zambia. The study estimated that the maximum WTP was modest, and focus group discussions indicated that WTP is dependent on the quality of services provided. This study highlighted that while there is a willingness to contribute towards maternity waiting home services, potential barriers such as cash availability and affordability must be addressed.
In the context of measuring women's perceptions of their entire maternity-care experience, PUBMED:26771063 developed a set of reliable and valid scales assessing how well public maternity care meets women's expectations across the full continuum of care. Such satisfaction-based instruments complement WTP approaches, which attach a monetary value to women's preferences rather than measuring perceived quality directly.
Overall, WTP has been utilized as a method for measuring preferences for maternity care, providing valuable information on how women value different aspects of care and what they are willing to pay for. This information can be critical for designing maternity care services that are responsive to women's preferences and for making informed policy decisions (PUBMED:9534503, PUBMED:28419708, PUBMED:26771063). |
Instruction: Is laparoscopic adjustable gastric banding a day surgery procedure?
Abstracts:
abstract_id: PUBMED:32309408
Laparoscopic adjustable gastric banding, the past, the present and the future. The laparoscopic implantation of an adjustable gastric band (LAGB) was first described in 1993. Thereafter, the LAGB underwent numerous modifications, revisions and refinements to reach its current form. The procedure quickly became one of the most common bariatric surgical operations in the world in the first decade of the 2000s but, over the last few years, it has fallen to the fourth most common procedure. A series of more or less clear reasons led to this decline of LAGB. Knowledge of the history of the LAGB, of its evolution over the years and of its limitations is key to recognizing the reasons behind its decline. The adjustability and complete reversibility of LAGB make this surgical procedure a "bridge treatment" toward the specific goal of eradicating obesity.
abstract_id: PUBMED:25147629
Recurrent aspiration pneumonia after laparoscopic adjustable gastric banding for obesity surgery. Laparoscopic adjustable gastric banding (LAGB) is an increasingly common therapeutic option in the management of obesity and certain obesity-related comorbid conditions. As it gains popularity for its advantages of being minimally invasive and reversible, clinicians should be aware of growing evidence of esophageal and pulmonary complications, which may be irreversible and associated with long-term morbidity. We report a case of esophageal and pulmonary complications in a patient with successful weight loss after lap-band surgery, which necessitated removal of the band.
abstract_id: PUBMED:23255999
Band misplacement: a rare complication of laparoscopic adjustable gastric banding. Introduction: Laparoscopic adjustable gastric banding (LAGB) is considered to be a very effective minimally invasive procedure for treating morbidly obese patients. Nevertheless, there are numerous complications that a good surgeon should be aware of. Most of them have been widely presented in the literature.
Aim: In this study we focus on a rare but important complication: ante-gastric positioning of the band.
Material And Methods: Between January 2005 and May 2008, 122 patients (88 female and 34 male) with a mean body mass index (BMI) of 48.5 kg/m(2) (range 35-80 kg/m(2)) underwent the LAGB procedure. The average time of hospitalization was 2.47 days. The first radiological control with band calibration was performed 6 weeks after the operation. Consecutive follow-up depended on the percent excess weight loss (EWL%).
Results: Of the 122 patients, 4 (3.3%) presented herein had a band misplaced in the ante-gastric position. There were three out of five surgeons who faced complications of this type. The most and the least experienced team members avoided misplacing the band. Two physicians encountered it at the beginning of their learning curve, and for one it was not related to the process of education. Among other postoperative complications there were two incidents of band slippage, 2 patients had their port localization corrected and in one case drain disconnection occurred. There were no mortalities.
Conclusions: Ante-gastric positioning of the band was the most common cause of obesity surgery failure in our group of patients. It was very difficult to recognize during the typical postoperative checkups; hence the question arises whether it has been overlooked in other studies.
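Several of these abstracts report outcomes as percent excess weight loss (EWL% or %EWL). A worked example with invented numbers (the formula is the commonly used one, but studies differ in their ideal-weight convention, so the BMI-25 reference below is an assumption): a patient 1.70 m tall weighing 140 kg preoperatively has BMI = 140 / (1.70 × 1.70) ≈ 48.4 kg/m(2); taking the ideal weight as that at BMI 25, i.e. 25 × 1.70 × 1.70 ≈ 72 kg, the excess weight is 140 - 72 = 68 kg, so a loss of 34 kg corresponds to %EWL = 34 / 68 × 100 = 50%, the threshold several of these studies use to define a good or excellent result.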
abstract_id: PUBMED:23255982
Laparoscopic adjustable gastric banding. A prospective randomized study comparing the Swedish Adjustable Gastric Band and the MiniMizer Extra: one-year results. Introduction: A number of different adjustable gastric bands are available for laparoscopic adjustable gastric banding (LAGB). Few attempts have been made to compare the influence of band design differences on efficacy and complication rates, and conflicting results have emerged from comparative studies.
Aim: To compare SAGB (Swedish Adjustable Gastric Band) and MiniMizer Extra adjustable gastric bands.
Material And Methods: One hundred and three patients were included in the prospective randomized study. All patients underwent LAGB. The SAGB was used in 49 and MiniMizer Extra in 54 patients. The primary endpoint was weight loss, and secondary endpoints were complication rate, correction of co-morbidities and improvement of quality of life.
Results: There were no early complications. A significant difference in the proportion of patients who have reached good or excellent weight loss results (≥ 50% of initial excess body mass index loss) was found in favour of the MiniMizer Extra group (29.6% vs. 8.2%, p = 0.006). No difference was found in other weight loss parameters, resolution of co-morbidities and improvement of quality of life. One oesophageal dilatation and one leakage were diagnosed in the MiniMizer Extra group. Five band penetrations (9.3%) were diagnosed in the MiniMizer Extra group and no penetrations in the SAGB group (p = 0.069).
Conclusions: No major significant differences were found between the compared bands. Further results need to be confirmed by longer follow-up.
abstract_id: PUBMED:26452483
Concurrent Large Para-oesophageal Hiatal Hernia Repair and Laparoscopic Adjustable Gastric Banding: Results from 5-year Follow Up. Objective: The objective of the study is to identify the efficacy and safety of combining laparoscopic adjustable gastric banding with repair of large para-oesophageal hernias.
Background: Para-oesophageal hernias are more common in the obese with higher recurrence rates following repair. The effect and safety of combining para-oesophageal hernia repair with laparoscopic adjustable gastric banding is unknown.
Methods: One-hundred fourteen consecutive patients undergoing primary laparoscopic adjustable gastric banding with concurrent repair of a large para-oesophageal hernia were prospectively identified and matched to a control group undergoing primary laparoscopic adjustable gastric banding only. Weight loss and complication data were retrieved from a prospectively maintained database, and a standardised bariatric outcome questionnaire was used to assess post-operative symptoms, satisfaction with surgery and satiety scores.
Results: At a mean follow up of 4.9 ± 2.1 years, total weight loss was 16.4 ± 9.9% in the hernia repair group and 17.6 ± 12.6% in the control group (p = 0.949), with 17 vs. 11% loss to follow up rates (p = 0.246). There was no statistically significant difference in revisional surgery rate, and symptomatic recurrence of hiatal hernia was documented in four patients in the hernia repair group (3.5%). No statistically significant difference in mean reflux (9.9 vs. 10.3, p = 0.821), dysphagia (20.7 vs. 20.1, p = 0.630) or satiety scores was identified.
Conclusions: Concurrent repair of large para-oesophageal hiatal hernia and laparoscopic adjustable gastric banding placement is safe and effective both in terms of symptom control and weight loss over the intermediate term. In obese patients with large hiatal hernias, consideration should be given to combining repair of the hernia with a bariatric procedure.
abstract_id: PUBMED:25623917
Laparoscopic Roux-en-Y gastric bypass versus laparoscopic adjustable gastric banding in the super-obese: peri-operative and early outcomes. Introduction: Controversy exists over the choice between laparoscopic Roux-en-Y gastric bypass and laparoscopic adjustable gastric banding in super-obese patients.
Methods: This is a retrospective review of prospectively collected data. A total of 102 consecutive super-obese (body mass index >50) patients underwent laparoscopic Roux-en-Y gastric bypass (Group 1), and 79 consecutive ones underwent laparoscopic adjustable gastric banding (Group 2). Early complications and weight loss outcomes were evaluated.
Results: No significant difference was found in operative mean (± standard deviation) time (93.5 ± 33 vs 87.7 ± 39 min, p = 0.29), hospital stay (2.68 ± 2.27 vs 2.75 ± 1.84 days, p = 0.80), or overall early postoperative morbidity (17.65% and 10.12%, p = 0.20). Intra-operative complications occurred in six patients (5.9%) in Group 1 and none in Group 2 (0.0%, p = 0.04). Mean excess weight loss percent at 6 and 12 months in Group 1 was 44.75% ± 11.84% and 54.71% ± 18.18% versus 26.20% ± 12.42% and 31.55% ± 19.79% in Group 2 (p < 0.001).
Conclusion: There seems to be no significant differences in early complications between laparoscopic Roux-en-Y gastric bypass and laparoscopic adjustable gastric banding operations in the short term. Weight loss and excess weight loss percent at 6 and 12 months are significantly better after laparoscopic Roux-en-Y gastric bypass.
abstract_id: PUBMED:23256007
Own experience improving port implantation in laparoscopic adjustable gastric banding. Introduction: Laparoscopic adjustable gastric banding (LAGB) is a method frequently used for treating obesity. It requires periodic band regulation associated with the need for port puncture. However, there is always a substantial risk of port rotation.
Aim: This publication presents a solution to this problem for the MiniMizer Extra band.
Material And Methods: One thousand one hundred and twenty-four individuals were operated on for obesity in the Department of Gastroenterological, Oncological and General Surgery of the Medical University of Lodz between 2005 and 2009. In 637 patients LAGB was performed. These LAGB patients were divided into three groups. In group I (20 patients) MiniMizer Extra bands were placed without port stabilization. In the second group (292 patients) MiniMizer Extra band placement with port stabilization was commenced. In the third group (325 patients) bands of other manufacturers (AMI, Inamed, Midband, Obtech) were used without port stabilization. The port was implanted into the subcutaneous tissue in the left subcostal region, medial to the left working tool trocar position.
Results: Port rotation was observed at the very first band adjustment in 3 of the 325 patients (0.92%) with a band other than MiniMizer Extra and in 11 of the first 20 patients (55%) with a MiniMizer Extra band placed without port stabilization. A different technique of port stabilization was applied in the further 292 patients receiving a MiniMizer Extra band, and no port rotation was noted.
Conclusions: We believe that additional port stabilization is necessary for the MiniMizer Extra because of its frequent port rotation. At the same time, our method is easy to apply, does not prolong the procedure significantly and secures comfortable access to the port.
abstract_id: PUBMED:32117500
Is it possible to improve long-term results of laparoscopic adjustable gastric banding with appropriate patient selection? Introduction: The gastric band is still offered as a good bariatric option for highly motivated and carefully selected patients. The question is whether this faith is justified or not.
Aim: To assess long-term clinical outcomes of patients who underwent laparoscopic adjustable gastric banding (LAGB) at a single bariatric center and to examine variables associated with patients' adherence to scheduled postoperative appointments.
Material And Methods: A retrospective review of patients who underwent LAGB between 2004 and 2009 was performed. The initial cohort included 167 patients. Data regarding sex, age, preoperative weight, hometown population and distance from the bariatric center, and gastric band volume were collected. Compliance was measured as the number of postoperative appointments. Clinical outcome was defined as percent excess weight loss (%EWL) at the end of the observation period or at band removal.
Results: The LAGB was performed in 167 patients between 2004 and 2009. The mean follow-up time was 90 ±24 months. Five (3%) patients were lost to follow-up; 37 (22.2%) had their band removed. The remaining 125 (74.8%) patients retained their bands and were included in the analysis. The mean %EWL was 33.0 ±26.6%. Thirty-one (18.6%) patients achieved %EWL > 50%.
Conclusions: This study found that LAGB was not an effective bariatric procedure in long-term observation. Only 25% of 125 patients who maintained a functioning band achieved %EWL > 50%. Compliance was the only independent prognostic factor for weight loss. Other factors had no influence on outcome.
abstract_id: PUBMED:37200715
Factors Associated With Weight Loss After Laparoscopic Adjustable Gastric Banding in Adolescents With Severe Obesity. Childhood obesity is associated with many comorbidities. Bariatric surgery is known to be effective in reducing weight in adolescents.
Objectives: The primary outcome was to identify somatic or psychosocial factors associated with success at 24 months after a laparoscopic adjustable gastric banding (LAGB) procedure in our cohort of adolescents with severe obesity. Secondary endpoints were to describe weight loss outcomes, comorbidity resolution, and complications.
Methods: We retrospectively reviewed the medical records of patients who had a LAGB placed between 2007 and 2017. Factors associated with success at 24 months after LAGB were investigated, with success defined as a positive percentage of excess weight loss (%EWL) at 24 months.
Results: Forty-two adolescents underwent a LAGB procedure, the mean %EWL was 34.1% at 24 months, with improvement in most comorbidities and without major complications. Having lost weight before surgery was associated with success, whereas a high body mass index at surgery was associated with a higher risk of failure. No other factor was found to be associated with success.
Conclusion: Comorbidities mostly improved 24 months after LAGB and no major complication occurred. Having lost weight before surgery was associated with a successful surgery, whereas a high body mass index at surgery increases the risk of failure.
abstract_id: PUBMED:30309778
Large series examining laparoscopic adjustable gastric banding as a salvage solution for failed gastric bypass. Background: The Roux-en-Y gastric bypass (RYGB) has long been considered the gold standard of weight loss procedures. However, there is limited evidence on revisional options with both minimal risk and long-term weight loss results.
Objective: To examine percent excess weight loss, change in body mass index (BMI), and complications in patients who underwent laparoscopic adjustable gastric banding (LAGB) over prior RYGB.
Setting: Academic hospital.
Methods: Retrospective analysis of a single-center prospectively maintained database. Three thousand ninety-four LAGB placements were reviewed; 139 were placed in patients with prior RYGB.
Results: At the time of LAGB, the median BMI was 41.3. After LAGB, we observed weight loss or stabilization in 135 patients (97%). The median maximal weight loss after LAGB was 37.7% excess weight loss and -7.1 change in BMI (P < .0001). At last follow-up visit, the median weight loss was 27.5% excess weight loss and -5.3 change in BMI (P < .0001). Median follow-up was 2.48 years (0.01-11.48): 68 of 132 eligible (52%) with 3-year follow-up, 12 of 26 eligible (44%) with 6-year follow-up, and 3 of 3 eligible (100%) with >10-year follow-up. Eleven bands required removal, 4 for erosion, 4 for dysphagia, and 3 for nonband-related issues.
Conclusions: LAGB over prior RYGB is a safe operation, which reduces the surgical risks and nutritional deficiencies often seen in other accepted revisional operations. Complication rates were consistent with primary LAGB. Weight loss is both reliable and lasting, and it can be considered as the initial salvage procedure in patients with failed gastric bypass surgery.
Answer: No, laparoscopic adjustable gastric banding (LAGB) is not typically a day surgery procedure. The average time of hospitalization reported in one study was 2.47 days (PUBMED:23255999), indicating that patients are generally required to stay in the hospital for observation and recovery for at least a couple of days after the procedure. |