Dataset schema: id (string, length 16-27) · title (string, length 18-339) · abstract (string, length 95-38.7k) · category (string, length 7-44)
10.1101/2022.03.15.22272432
Comorbidities Diminish the Likelihood of Seropositivity After SARS-CoV-2 Vaccination
Background: The impact of chronic health conditions (CHCs) on serostatus after severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) vaccination is unknown. Methods: We assessed serostatus after SARS-CoV-2 vaccination among fully vaccinated residents of Jefferson County, Kentucky, USA, aged 18 years and older, recruited between April 2021 and August 2021. Serostatus was determined by measuring SARS-CoV-2 Spike protein-specific immunoglobulin G (Spike IgG) antibodies via enzyme-linked immunosorbent assay (ELISA) in peripheral blood samples. Results: Of the 5,178 fully vaccinated participants, 51 were seronegative and 5,127 were seropositive. Chronic kidney disease (CKD) (OR=13.49; 95% CI: 4.88-37.3; P<0.0001) and autoimmune disease (OR=11.34; 95% CI: 5.21-24.69; P<0.0001) showed the strongest associations with negative serostatus in fully vaccinated participants. The absence of any CHC was strongly associated with positive serostatus (OR=0.37; 95% CI: 0.19-0.73; P=0.003). The risk of negative serostatus increased with two CHCs (OR=2.82; 95% CI: 1.14-7) and with three or more CHCs (OR=4.52; 95% CI: 1.68-12.14). Similarly, use of two or more CHC-related medications was significantly associated with seronegative status (OR=6.08; 95% CI: 2.01-18.35). Conclusions: The presence of any CHC, especially CKD or autoimmune disease, increased the likelihood of seronegative status among individuals fully vaccinated against SARS-CoV-2. This risk increased with the number of comorbidities, especially with multiple medications. Absence of any CHC was protective and increased the likelihood of a positive serological response post-vaccination. These results will help develop appropriate guidelines for booster doses and targeted vaccination programs.
allergy and immunology
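The odds ratios above come from logistic regression of serostatus on comorbidity indicators. A minimal sketch of how such an OR and its 95% CI are obtained, on synthetic data (the CKD prevalence and coefficients below are illustrative, not the study's):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5178                                  # matches the study's sample size
ckd = rng.binomial(1, 0.05, n)            # hypothetical CKD indicator
# Synthetic outcome: seronegativity is rare overall, more likely with CKD
p = 1 / (1 + np.exp(-(-4.7 + 2.6 * ckd)))
seronegative = rng.binomial(1, p)

fit = sm.Logit(seronegative, sm.add_constant(ckd)).fit(disp=0)
odds_ratio = np.exp(fit.params[1])
lo, hi = np.exp(fit.conf_int()[1])        # 95% CI on the odds-ratio scale
print(f"OR={odds_ratio:.2f}; 95% CI: {lo:.2f}-{hi:.2f}; P={fit.pvalues[1]:.2g}")
```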
10.1101/2022.03.15.22272450
The external validity of four risk scores predicting 30-day mortality after surgery
Background: Surgical risk prediction tools can facilitate shared decision-making and efficient allocation of perioperative resources. Such tools should be externally validated in target populations prior to implementation. Methods: Predicted risk of 30-day mortality was retrospectively derived for surgical patients at Royal Perth Hospital from 2014 to 2021 using the Surgical Outcome Risk Tool (SORT) and the related NZRISK (n=44,031 patients, 53,395 operations). In a sub-population (n=31,153), the Physiology and Operative Severity Score for the enumeration of Mortality (POSSUM) and the Portsmouth variant of this (P-POSSUM) were matched from the Copeland Risk Adjusted Barometer (C2-Ai, Cambridge, UK). The primary outcome was risk score discrimination of 30-day mortality, evaluated by area under the receiver operating characteristic curve (AUROC) statistics. Calibration plots and outcomes according to risk decile and time were also explored. Results: All four risk scores showed high discrimination (AUROC) for 30-day mortality (SORT=0.922, NZRISK=0.909, P-POSSUM=0.893, POSSUM=0.881) but consistently over-predicted risk. SORT exhibited the best discrimination and calibration. Thresholds denoting the highest and second-highest deciles of SORT risk (>3.92% and 1.52-3.92%) captured the majority of deaths (76% and 13%, respectively) and hospital-acquired complications. Year-on-year, SORT calibration drifted towards over-prediction, reflecting a decrease in 30-day mortality over time despite an increase in surgical population risk. Conclusions: SORT was the best-performing risk score for predicting 30-day mortality after surgery. Categorising patients based on SORT into low, medium (80-90th percentile) and high risk (90-100th percentile) can guide future allocation of perioperative resources. No tool was sufficiently calibrated to support shared decision-making based on absolute predictions of risk.
anesthesia
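Discrimination (AUROC) and calibration, the two criteria evaluated above, can be checked as in this sketch on synthetic data; the over-prediction pattern is built in deliberately (true risk set to 60% of predicted risk), so all numbers are illustrative:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(1)
n = 50_000
predicted_risk = rng.beta(0.5, 30, n)        # right-skewed 30-day mortality risks
true_risk = 0.6 * predicted_risk             # built-in over-prediction
died = rng.binomial(1, true_risk)

print(f"AUROC = {roc_auc_score(died, predicted_risk):.3f}")  # discrimination

# Calibration by risk decile: observed < predicted indicates over-prediction
obs, pred = calibration_curve(died, predicted_risk, n_bins=10, strategy="quantile")
for o, p in zip(obs, pred):
    print(f"predicted {p:.2%}  observed {o:.2%}")

# Decile thresholds analogous to the high-risk cutoffs reported above
q80, q90 = np.quantile(predicted_risk, [0.8, 0.9])
print(f"second-highest decile: {q80:.2%}-{q90:.2%}; highest decile: >{q90:.2%}")
```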
10.1101/2022.03.16.22272460
Longitudinal assessment of lung function in Swiss childhood cancer survivors
Rationale: Childhood cancer survivors (CCSs) are at increased risk of pulmonary morbidity due to exposure to lung-toxic treatments, including specific chemotherapies, radiotherapy, and surgery. Longitudinal data on lung function in CCSs, with information on how outcomes change over time, are scarce. Objectives: To investigate lung function trajectories in CCSs over time and to assess the association with lung-toxic treatment. Methods: This retrospective, multi-center cohort study included CCSs who were diagnosed between 1990 and 2013 in Switzerland and had been exposed to lung-toxic chemotherapeutics or thoracic radiotherapy. Pulmonary function tests (PFTs) were obtained from hospital charts. We systematically assessed the quality of PFTs and calculated z-scores and percentage predicted of forced expiratory volume in the first second (FEV1), forced vital capacity (FVC), FEV1/FVC ratio, total lung capacity (TLC) and diffusing capacity for carbon monoxide (DLCO) based on recommended reference equations. We described lung function over time and determined risk factors for change in FEV1 and FVC using multivariable linear regression. Results: We included 790 PFTs from 183 CCSs, with a median age of 12 years (IQR 7-14) at diagnosis. Common diagnoses were lymphoma (55%), leukemia (11%) and CNS tumors (12%). Median follow-up time was 5.5 years. Half (49%) of CCSs had at least one abnormal pulmonary function parameter, with restrictive impairment being common (22%). Trajectories of FEV1 and FVC started at z-scores of -1.5 at diagnosis and remained low throughout follow-up. CCSs treated with thoracic surgery started particularly low, with an FEV1 of -1.08 z-scores (-2.02 to -0.15) and an FVC of -1.42 z-scores (-2.27 to -0.57) compared to those without surgery. In CCSs exposed to lung-toxic chemotherapeutics, FEV1 z-scores increased slightly over time (0.12 per year; 95% CI 0.02-0.21). Conclusion: The large proportion of CCSs with reduced lung function identified in this study underlines the need for more research and long-term surveillance of this vulnerable population.
pediatrics
10.1101/2022.03.15.22272307
Age-based differences in quantity and frequency of consumption when screening for harmful alcohol use
Aims: Survey questions on usual quantity and frequency of alcohol consumption are regularly used in screening tools to identify drinkers requiring intervention. The aim of this study is to examine age-based differences in quantity and frequency of alcohol consumption on the Alcohol Use Disorders Identification Test (AUDIT) and how these relate to the prediction of harmful or dependent drinking. Design: Cross-sectional survey. Setting: Australia. Participants: Data were taken from 17,399 respondents aged 18 and over who reported any alcohol consumption in the last year in the 2016 National Drug Strategy Household Survey, a broadly representative cross-sectional survey on substance use. Measurement: Respondents were asked about their frequency of consumption, usual quantity per occasion and the other items of the AUDIT. Findings: In older drinkers, quantity per occasion (β=0.53, 95% CI 0.43-0.64, in 43-47-year-olds as an example) was a stronger predictor of dependence than frequency of consumption (β=0.24, 95% CI 0.17-0.31). In younger drinkers the reverse was true, with frequency a stronger predictor (β=0.54, 95% CI 0.39-0.69, in 23-27-year-olds) than quantity (β=0.26, 95% CI 0.18-0.34, in 23-27-year-olds). Frequency of consumption was not a significant predictor of dependence in respondents aged 73 and over (β=-0.03, 95% CI -0.08 to 0.02). Similar patterns were found when predicting harmful drinking. Despite this, because frequency of consumption increased steadily with age, the frequency question was responsible for at least 65% of AUDIT scores in drinkers aged 53 and over. Conclusions: The items with a weaker association with dependent or harmful drinking in younger and older drinkers are the same items with the strongest influence on overall AUDIT scores. Further investigation into age-specific scoring of screening tools is recommended.
public and global health
10.1101/2022.03.15.22272448
Impact of COVID-19 on Life Expectancy among Native Americans
Background: There has been little systematic research on the mortality impact of COVID-19 in the Native American population. Objective: We provide the first estimates of the loss of life expectancy directly due to COVID-19 in 2020 and 2021 for the Native American population. Methods: We use several sources of data (the 2019 life table for Native Americans recently released by the National Center for Health Statistics, provisional COVID-19 deaths by age and race/ethnicity for 2020 and 2021, and population estimates from the US Census Bureau), along with multiple decrement techniques, to calculate life tables for Native Americans that include COVID-19 mortality. Results: Native Americans had much lower life expectancy than other groups before the pandemic, and COVID-19 set this population back even further: the estimated loss in life expectancy at birth due to COVID-19 for Native Americans is 2.5 years in 2020 and 2.8 years in 2021. Conclusions: These results underscore the disproportionate share of COVID-19 deaths experienced by Native Americans: a loss in 2020 due to COVID-19 that is almost three times as large as that for Whites and about a half-year greater than that for the Black population. Despite a successful vaccination campaign among Native Americans, the estimated loss in life expectancy at birth due to COVID-19 in 2021 unexpectedly exceeds that in 2020. Contribution: The increased loss in life expectancy in 2021, despite higher vaccination rates than in other racial/ethnic groups, highlights the huge challenges faced by Native Americans in their efforts to control the deleterious consequences of the pandemic.
public and global health
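The multiple decrement technique mentioned above amounts to recomputing a period life table after adding cause-specific death rates to baseline mortality. A minimal abridged life table sketch with made-up rates (not NCHS data) that returns the implied loss in life expectancy at birth:

```python
import numpy as np

# Ten-year age groups; all rates are invented for illustration (per person-year)
mx_base = np.array([0.005, 0.0005, 0.001, 0.002, 0.004,
                    0.008, 0.018, 0.045, 0.110, 0.250])   # baseline mortality
mx_covid = np.array([0.0, 0.00002, 0.0001, 0.0003, 0.0008,
                     0.002, 0.004, 0.008, 0.015, 0.020])  # added COVID-19 rates

def e0(mx, n=10):
    """Life expectancy at birth from an abridged life table (deaths mid-interval)."""
    qx = n * mx / (1 + (n / 2) * mx)                 # death probability per interval
    lx = np.concatenate([[1.0], np.cumprod(1 - qx)[:-1]])  # survivors at interval start
    dx = lx * qx                                     # deaths within interval
    Lx = n * (lx - dx) + (n / 2) * dx                # person-years lived
    Lx[-1] = lx[-1] / mx[-1]                         # open-ended final interval
    return Lx.sum()

print(f"loss in e0 due to COVID-19: {e0(mx_base) - e0(mx_base + mx_covid):.2f} years")
```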
10.1101/2022.03.16.22272477
Demographic and clinical profile of black patients with chronic kidney disease attending Charlotte Maxeke Johannesburg Academic Hospital (CMJAH) in Johannesburg, South Africa
Background: The prevalence of chronic kidney disease (CKD) is increasing worldwide; black patients have an increased risk of developing CKD and end-stage kidney disease (ESKD) at significantly higher rates than other races. Methods: A cross-sectional study was carried out on black patients with CKD attending the kidney outpatient clinic at Charlotte Maxeke Johannesburg Academic Hospital (CMJAH) in South Africa between September 2019 and March 2020. Demographic and clinical data were extracted from ongoing kidney outpatient clinic records and interviews, and were recorded in a questionnaire. Patients provided blood and urine for laboratory investigations as standard of care; data were descriptively and inferentially analysed using STATA version 17. Multivariable logistic regression analysis was used to identify demographic and clinical variables associated with advanced CKD. Results: A total of 312 black patients with CKD were enrolled during the study period; 58% of patients had advanced CKD, of whom 31.5% had grossly increased proteinuria, 96.7% had hypertension, 38.7% had diabetes mellitus and 38.1% had both hypertension and diabetes mellitus. For patients with advanced CKD, the median age was 61 (IQR 51-69) years, eGFR 33 (IQR 30-39) mL/min/1.73 m2, serum bicarbonate 22 (IQR 20-24), haemoglobin 12.9 (IQR 11.5-14.0) g/dl, serum transferrin 2.44 (IQR 2.23-2.73) g/L, serum uric acid 0.43 (IQR 0.37-0.53) and serum potassium 4.4 (IQR 3.9-4.8) mmol/L. The prevalence of metabolic acidosis was 62.4%, anaemia 46.4%, gout 30.9%, low transferrin levels 16.6% and hyperkalaemia 8.8% among those with advanced CKD, while the prevalence of metabolic acidosis and anaemia was 46.6% and 25.9%, respectively, in those with early CKD. Variables with higher odds of advanced CKD after multivariable logistic regression analysis were hypertension (OR 3.3, 95% CI 1.2-9.2, P=0.020), diabetes mellitus (OR 1.8, 95% CI 1.1-3.3, P=0.024), severe proteinuria (OR 3.5, 95% CI 1.9-6.5, P=0.001), angina (OR 2.5, 95% CI 1.2-5.1, P=0.008), anaemia (OR 2.9, 95% CI 1.7-4.9, P=0.001), hyperuricaemia (OR 2.4, 95% CI 1.4-4.1, P=0.001), and metabolic acidosis (OR 2.0, 95% CI 1.2-3.1, P=0.005). Other associations with advanced CKD were widowed status (OR 3.2, 95% CI 1.4-7.4, P=0.006), low transferrin (OR 2.4, 95% CI 1.1-5.1, P=0.028), hyperkalaemia (OR 5.4, 95% CI 1.2-24.1, P=0.029), allopurinol use (OR 2.4, 95% CI 1.4-4.3, P=0.005) and doxazosin use (OR 1.9, 95% CI 1.2-3.1, P=0.006). Conclusion: Hypertension and diabetes mellitus were strongly associated with advanced CKD, suggesting a need for primary and secondary population-based prevention measures. Metabolic acidosis, anaemia with low transferrin levels, hyperuricaemia and hyperkalaemia were highly prevalent in our patients, including those with early CKD, and were strongly associated with advanced CKD, calling for a proactive role of clinicians and dietitians in supporting CKD patients in meeting their daily dietary requirements, towards preventing and slowing the progression of CKD.
nephrology
10.1101/2022.03.16.22272456
Plasma MIA, CRP, and albumin predict cognitive decline in Parkinson's Disease
Objective: Using a multi-cohort, Discovery-Replication-Validation design, we sought new plasma biomarkers that predict which PD individuals will experience cognitive decline. Methods: In 108 Discovery Cohort PD individuals and 83 Replication Cohort PD individuals, we measured 940 plasma proteins on an aptamer-based platform. Using proteins associated with subsequent cognitive decline in both cohorts, we trained a logistic regression model to predict which PD patients showed fast (>=1 point drop/year on the Montreal Cognitive Assessment (MoCA)) vs. slow (<1 point drop/year on MoCA) cognitive decline in the Discovery Cohort, testing it in the Replication Cohort. We developed alternate assays for the top three proteins and confirmed their ability to predict cognitive decline (defined by change in MoCA or development of incident Mild Cognitive Impairment (MCI) or dementia) in a Validation Cohort of 118 PD individuals. We investigated the top plasma biomarker for causal influence by Mendelian randomization. Results: A model with only three proteins (Melanoma Inhibitory Activity Protein (MIA), C-Reactive Protein (CRP), and albumin) separated Fast vs. Slow cognitive decline subgroups with an AUC of 0.80 in the Validation Cohort. Validation Cohort PD individuals in the top quartile of risk for cognitive decline based on this model were 4.4 times more likely to develop incident MCI or dementia than those in the lowest quartile. Genotypes at MIA SNP rs2233154 were associated with MIA levels and cognitive decline, providing evidence for MIA's causal influence. Conclusions: An easily obtained plasma-based predictor identifies PD individuals at risk for cognitive decline. MIA may participate causally in the development of cognitive decline.
neurology
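A three-protein logistic model of this kind, with its AUC and a top-versus-bottom risk-quartile comparison, can be sketched as follows; the coefficients, sample size, and outcome rates are synthetic (and the sample inflated for stability), not the study's values:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 1000                                        # inflated vs. the cohort, for stability
mia, crp, alb = rng.normal(size=(3, n))         # standardized synthetic protein levels
logit = -0.4 + 0.9 * mia + 0.6 * crp - 0.7 * alb   # made-up coefficients
fast_decline = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([mia, crp, alb])
risk = LogisticRegression().fit(X, fast_decline).predict_proba(X)[:, 1]
print(f"AUC = {roc_auc_score(fast_decline, risk):.2f}")

q1, q4 = np.quantile(risk, [0.25, 0.75])        # risk-quartile comparison
ratio = fast_decline[risk >= q4].mean() / fast_decline[risk <= q1].mean()
print(f"top vs. bottom risk quartile: {ratio:.1f}x the rate of fast decline")
```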
10.1101/2022.03.15.22272326
Understanding social and environmental factors and their contribution mechanisms to HIV transmission among heterosexual men in Indonesia
The number of HIV infections among heterosexual men in Indonesia continues to increase. This paper describes social and environmental factors and the mechanisms through which these factors may have contributed to the transmission of HIV among men in Indonesia. A qualitative design using one-on-one, face-to-face in-depth interviews was employed to collect data from men living with HIV in Yogyakarta and Belu from June to December 2019. Participants (n=40) were recruited using the snowball sampling technique. The logical model for socio-environmental determinants diagnosis was used to conceptualise the study and discuss the findings. The findings showed that social factors, such as peer influence on sex, condom use and injecting drug use, were contributing factors for HIV transmission among the participants. Other factors and drivers of HIV transmission included mobility, migration, and the environments in which the participants lived, worked and interacted, which facilitated their engagement in high-risk behaviours. The findings indicate the need for wide dissemination of information and education about HIV and condoms for men within communities and migration areas in Indonesia and other similar settings globally, in order to increase their understanding of the means of HIV transmission and of condom use for HIV prevention.
hiv aids
10.1101/2022.03.15.22272229
Transmission blocking activity of low dose tafenoquine in healthy volunteers experimentally infected with Plasmodium falciparum
Background: Blocking the transmission of parasites from humans to mosquitoes is a key component of malaria control. Tafenoquine exhibits activity against all stages of the malaria parasite and may have utility as a transmission-blocking agent. We aimed to characterize the transmission-blocking activity of low-dose tafenoquine. Methods: Healthy adults were inoculated with P. falciparum 3D7-infected erythrocytes on day 0. Piperaquine was administered on days 9 and 11 to clear asexual parasitemia while allowing gametocyte development. A single 50 mg oral dose of tafenoquine was administered on day 25. Transmission was determined by enriched membrane feeding assays pre-dose and at 1, 4 and 7 days post-dose. Artemether-lumefantrine was administered following the final assay. Outcomes were the reduction in mosquito infection and gametocytemia post-tafenoquine, and safety parameters. Results: Six participants were enrolled, and all were infective to mosquitoes pre-tafenoquine, with a median 86% (range: 22-98%) of mosquitoes positive for oocysts and 57% (range: 4-92%) positive for sporozoites. By day 4 post-tafenoquine, the oocyst and sporozoite positivity rates had reduced by a median 35% (IQR: 16-46%) and 52% (IQR: 40-62%), respectively, and by day 7, by 81% (IQR: 36-92%) and 77% (IQR: 52-98%), respectively. The decline in gametocyte density post-tafenoquine was not significant. No significant participant safety concerns were identified. Conclusion: Low-dose tafenoquine reduces P. falciparum transmission to mosquitoes, with a delay in effect. Trial registration: Australian New Zealand Clinical Trials Registry (ACTRN12620000995976). Funding: QIMR Berghofer Medical Research Institute.
infectious diseases
10.1101/2022.03.13.22271992
The retirement-health puzzle: Breathe a sigh of relief at retirement?
Objectives: While the health effects of retirement have been well studied, existing findings remain inconclusive, and the mechanisms underlying the link between retirement and health are unclear. This study therefore aimed to evaluate the effects of retirement on health and its potential mediators. Methods: Using a national household survey conducted annually from 2004 to 2019 in Japan (the Japan Household Panel Survey), we evaluated the effects of retirement among Japanese men aged 50 or older on their health, in addition to other outcomes that could be attributed to health changes associated with retirement (i.e., health behaviours, psychological well-being, time use for unpaid activities, and leisure activities). As outcomes were not measured every year, we analysed 5,794-10,682 person-year observations from 975-1,469 unique individuals. To address the potential endogeneity of retirement, we adopted an instrumental variable fixed-effects approach based on policy changes in pension-eligible ages for employee pensions. Results: We found that retirement improved psychological well-being, exercise habits, and time spent on unpaid work. The psychological benefits of retirement were no longer observed at longer durations after retirement, whereas the healthy habits and unpaid activities continued. Moreover, health-related improvements after retirement occurred mostly in the higher-income group. Discussion: Enhancement of personal quality of life owing to increased leisure time and reduced work stress, in addition to lifestyle changes, may be key to understanding the health benefits of retirement. Considering the mechanisms behind retirement-health relationships and potential heterogeneous effects is essential for healthy retirement lives when increasing the retirement age. Highlights: (1) In line with theory, previous studies report mixed results on the effects of retirement on health. (2) Empirical evidence on the mechanisms underlying the link between retirement and health is scarce. (3) Retirement effects on health and potential mediators are evaluated with a quasi-experimental approach. (4) Retirement improves psychological well-being, exercise habits, time spent on unpaid work, and satisfaction with leisure. (5) Health-related improvements after retirement occur mostly in the higher-income group.
health economics
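The instrumental variable fixed-effects approach can be illustrated by hand: demean each variable within individual (the fixed-effects transformation), then use crossing the pension-eligible age as an instrument for retirement. With one instrument and one endogenous regressor, 2SLS reduces to the IV ratio. All parameters below are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
n_id, n_t = 1000, 8
ids = np.repeat(np.arange(n_id), n_t)
age = np.tile(np.arange(55, 55 + n_t), n_id) + rng.integers(0, 5, n_id).repeat(n_t)
eligible = (age >= 61).astype(float)          # instrument: above pension-eligible age
effect_u = rng.normal(size=n_id).repeat(n_t)  # individual fixed effect
retired = ((0.5 * eligible + 0.1 * effect_u
            + rng.normal(0, 0.5, n_id * n_t)) > 0.4).astype(float)
wellbeing = 0.3 * retired + effect_u + rng.normal(0, 1, n_id * n_t)  # true effect 0.3

def within(x, ids):
    """Fixed-effects (within) transformation: demean by individual."""
    return x - (np.bincount(ids, weights=x) / np.bincount(ids))[ids]

y, d, z = within(wellbeing, ids), within(retired, ids), within(eligible, ids)
beta_iv = (z @ y) / (z @ d)                   # just-identified 2SLS = IV ratio
print(f"IV fixed-effects estimate: {beta_iv:.3f} (true value 0.3)")
```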
10.1101/2022.03.11.22272013
"Now, I know my life is not over!": Introduction and Adaptation of the RESPECT HIV Intervention, OraQuick, and Trauma-Informed Care for Female Victims of Non-Partner Sexual Violence in Haiti
Introduction: In the Cite Soleil (CS) shantytown of Haiti, non-partner sexual violence (NPSV) is widespread and involves multiple assailants who do not use condoms and inflict intentional coital injuries. HIV prevalence in Haiti is 2.2%; CS HIV prevalence is 3.6%. Shame, guilt, self-blame and societal stigma impede access to HIV testing/treatment in a context of low confidentiality. In that context, NPSV victims often succumb to AIDS. Culturally adapted evidence-based HIV interventions (EBIs) can increase HIV awareness and reduce HIV risk. Methods: Following the ADAPT-ITT model, we used purposive sampling to recruit and interview key stakeholders (age 18 and older) in four focus groups of victims and health providers (N=32, 8 per focus group), as part of the adaptation of an HIV EBI (RESPECT) with an orally administered rapid HIV antibody test (OraQuick) to increase HIV awareness and testing and to reduce HIV risk for victims of NPSV. We also introduced trauma-informed care (TIC) to address the post-assault trauma of NPSV victims. Stakeholders were introduced to RESPECT, participated in RESPECT role plays, interpreted OraQuick HIV screen results after viewing a demonstration of sample collection, and provided feedback on TIC. ATLAS.ti facilitated thematic content analysis of the focus group transcripts. Results: Participants unanimously (100%) reported that RESPECT, OraQuick, and TIC were acceptable, feasible, and useful for increasing HIV awareness; reducing shame, guilt, and trauma; and empowering NPSV victims to reduce the risk of HIV acquisition/transmission in future consensual relationships. Conclusion: Establishing the acceptability, feasibility and effectiveness of RESPECT, OraQuick, and TIC in CS is a crucial first step towards responding to the HIV prevention and trauma needs of NPSV victims.
hiv aids
10.1101/2022.03.14.22272314
COVID-19 Prediction using Genomic Footprint of SARS-CoV-2 in Air, Surface Swab and Wastewater
Importance: Genomic footprints of pathogens shed by infected individuals can be traced in environmental samples, and analysis of these samples can be employed for noninvasive surveillance of infectious diseases. Objective: To evaluate the efficacy of environmental surveillance of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) for predicting COVID-19 cases in a college dormitory. Design: Using a prospective experimental design, air, surface swab, and wastewater samples were collected from a college dormitory from March to May 2021. Students were randomly screened for COVID-19 during the study period. SARS-CoV-2 in environmental samples was concentrated with electronegative filtration and quantified using Volcano 2nd Generation qPCR. Descriptive analyses were conducted to examine the associations between time-lagged SARS-CoV-2 in environmental samples and clinically diagnosed COVID-19 cases. Setting: This study was conducted in a residential dormitory at the University of Miami, Coral Gables campus, FL, USA. The dormitory housed about 500 students. Participants: Students from the dormitory were randomly screened for COVID-19 on 2-3 days per week while entering or exiting the dormitory. Main Outcome: Clinically diagnosed COVID-19 cases were our main outcome of interest. We hypothesized that SARS-CoV-2 detection in environmental samples was an indicator of the presence of local COVID-19 cases in the dormitory, and that SARS-CoV-2 can be detected in environmental samples several days prior to the clinical diagnosis of COVID-19 cases. Results: SARS-CoV-2 genomic footprints were detected in air, surface swab and wastewater samples on 52 (63.4%), 40 (50.0%) and 57 (68.6%) days, respectively, during the study period. On 19 (24%) of 78 days, SARS-CoV-2 was detected in all three sample types. Clinically diagnosed COVID-19 cases were reported on 11 days during the study period, and SARS-CoV-2 was detected two days before case diagnosis on 11 (100%), 9 (81.8%) and 8 (72.7%) of those days in air, surface swab and wastewater samples, respectively. Conclusion: Proactive environmental surveillance of SARS-CoV-2 or other pathogens in community/public settings has the potential to guide targeted measures to contain and/or mitigate infectious disease outbreaks. Key Points: Question: How effective is environmental surveillance of SARS-CoV-2 in public places for early detection of COVID-19 cases in a community? Findings: All clinically confirmed COVID-19 cases were predicted with the aid of 2-day-lagged SARS-CoV-2 measurements in environmental samples in a college dormitory. However, prediction efficiency varied by sample type: prediction was best for air samples, followed by wastewater and surface swab samples. SARS-CoV-2 was also detected in these samples on days without any reported cases of COVID-19, suggesting underreporting of COVID-19 cases. Meaning: SARS-CoV-2 can be detected in environmental samples several days prior to clinical reporting of COVID-19 cases. Thus, proactive environmental surveillance of the microbiome in public places can serve as a means for early detection of location- and time-specific outbreaks of infectious diseases, and can also be used to detect underreporting of infectious diseases.
infectious diseases
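The time-lagged association used above, i.e. checking whether the environmental signal was positive k days before each diagnosed case, can be sketched as follows (synthetic daily detection indicators; the study's 2-day lag is kept):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
days = pd.date_range("2021-03-01", periods=78, freq="D")
air = pd.Series(rng.binomial(1, 0.63, len(days)), index=days)    # daily detection flag
cases = pd.Series(rng.binomial(1, 0.14, len(days)), index=days)  # daily case flag

lag = 2                                   # the lag examined in the study
case_days = cases[cases == 1].index
hits = air.reindex(case_days - pd.Timedelta(days=lag)).fillna(0)
print(f"{int(hits.sum())}/{len(case_days)} case days had a positive "
      f"air sample {lag} days earlier")
```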
10.1101/2022.03.16.22272337
Determinants and causes of early neonatal mortality: Hospital based retrospective cohort study in Somali region of Ethiopia
Background: Early neonatal mortality occurs when a newborn dies within the first seven days of life. Despite interventions, newborn mortality in Ethiopia has grown over time (33 deaths per 1,000 live births). Determinants vary with the level of neonatal mortality. The study's goal was to determine the magnitude of early newborn death, as well as its causes and drivers, in the Newborn Intensive Care Unit (NICU) of a referral hospital in Ethiopia's Somali region. Methods: A health-facility-based retrospective record review was conducted between May 2019 and May 2021 in Shiek Hassan Yabare Referral Hospital of Jigjiga University, Ethiopia. All neonates aged 0 to 7 days admitted to the NICU and registered in the new NICU registration book from May 2019 to May 2021 with complete data were included. KoboToolbox was used for data collection, and data were analyzed in SPSS version 20. A logistic regression model was used to estimate determinants. Results: The early neonatal mortality rate (death between 0 and 7 days) in Ethiopia's Somali region is estimated at 130 per 1,000 live births; that is, 130 newborns in every 1,000 live births did not survive to their seventh day. Hypothermia, prematurity, maternal death at birth and shorter length of stay in the NICU increased the odds of early neonatal mortality, while neonatal resuscitation had a protective effect. Similarly, birth asphyxia, prematurity, sepsis, and congenital abnormalities were major causes of admission and death in the NICU. Conclusion: The magnitude of early neonatal mortality is considerable and its causes are preventable. Enhancing quality of care, including infection prevention and hypothermia management through mentorship, and encouraging Kangaroo Mother Care practice are necessary at childbirth and in the NICU of the hospital.
intensive care and critical care medicine
10.1101/2022.03.15.22272376
The Gut Microbiome and their Metabolites in Human Blood Pressure Variability
Blood pressure (BP) variability is an independent risk factor for cardiovascular events. Recent evidence supports a role for the gut microbiota in BP regulation. However, whether the gut microbiome is associated with BP variability is yet to be determined. Here, we aimed to investigate the interplay between the gut microbiome and its metabolites in relation to BP variability. Ambulatory BP monitoring was performed in 69 participants from Australia (55.1% women; mean±SD 59.8±7.26 years old, 25.2±2.83 kg/m2). These data were used to determine night-time dipping, morning BP surge (MBPS) and BP variability as the standard deviation (SD). The gut microbiome was determined by 16S rRNA sequencing, and metabolite levels by gas chromatography. We identified specific taxa associated with systolic BP variability, night-time dipping and MBPS. Notably, Alistipes finegoldii and Lactobacillus spp. were only present in participants within the normal ranges of BP variability, MBPS and dipping, while Prevotella spp. and various Clostridium spp. were present in extreme dippers and in the highest quartiles of BP SD and MBPS. There was a negative association between MBPS and microbial α-diversity (r=-0.244, P=0.046). MBPS was also negatively associated with total plasma levels of the microbial metabolites called short-chain fatty acids (SCFAs) (r=-0.305, P=0.020), particularly acetate (r=-0.311, P=0.017). In conclusion, gut microbiome diversity, levels of microbial metabolites, and the bacteria Alistipes finegoldii and Lactobacillus were associated with lower BP variability, and Clostridium and Prevotella with higher BP variability. Thus, our findings suggest the gut microbiome and its metabolites may be involved in the regulation of BP variability.
cardiovascular medicine
10.1101/2022.03.15.22271255
Conditions of Confinement in U.S. Carceral Facilities during COVID-19: Individuals Speak: Incarcerated during the COVID-19 Epidemic (INSIDE)
Objectives: We aimed to describe conditions of confinement among people incarcerated in the United States during the coronavirus disease 2019 (COVID-19) pandemic and to assess the feasibility of a community-science data collection approach. Methods: We developed a web-based survey with community partners to collect information on confinement conditions (COVID-19 safety, basic needs, support). Formerly incarcerated adults released after March 1, 2020, or non-incarcerated adults in communication with an incarcerated person (proxies) were recruited through social media from July 25, 2020, through March 27, 2021. Descriptive statistics were estimated in aggregate and separately by proxy or formerly incarcerated status. Additionally, we compared responses between proxy and formerly incarcerated respondents using chi-square or Fisher's exact tests as appropriate, based on alpha=0.05. Results: Of 378 responses, 94% were by proxy, and 76% reflected state prison conditions. Participants reported inability to physically distance (≥6 ft at all times) (92%), and inadequate access to soap (89%), water (46%), toilet paper (49%) and showers (68%). Among people who received mental healthcare before the pandemic, 75% reported reduced care. We found that responses were consistent between formerly incarcerated people and proxy respondents. Conclusions: Our findings suggest that a community-science approach to data collection is feasible. Based on these findings, COVID-19 safety and basic needs were not sufficiently addressed within some carceral settings. We therefore recommend that the lived experiences of incarcerated individuals be included in making informed and equitable policy decisions.
epidemiology
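The chi-square-versus-Fisher decision mentioned above typically hinges on expected cell counts. A small sketch with a hypothetical 2x2 table (counts are made up, not the survey's):

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: respondent type (proxy / formerly incarcerated)
# by reported inability to physically distance (yes / no)
table = np.array([[325, 30],
                  [ 21,  2]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
if (expected < 5).any():                  # small expected counts: use Fisher's exact
    _, p = fisher_exact(table)
    print(f"Fisher's exact test: P = {p:.3f}")
else:
    print(f"Chi-square test: P = {p_chi2:.3f}")
```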
10.1101/2022.03.15.22272323
Detecting Patient Position Using Bed-Reaction Forces and Monitoring Skin-Bed Interface Forces for Pressure Injury Prevention and Management
Pressure injuries are largely preventable, yet they affect one in four Canadians across all healthcare settings. A key best practice for preventing and treating pressure injuries is to minimize prolonged tissue deformation by ensuring at-risk individuals are repositioned regularly (typically every 2 hours). However, adherence to repositioning is poor in clinical settings and expected to be even worse in homecare settings. Our team has designed a position detection system for home use that uses machine learning approaches to predict a patient's position in bed using data from load cells under the bed legs. The system predicts the patient's position as one of three position categories: left-side lying, right-side lying, or supine. The objectives of this project were: i) to determine whether measuring ground-truth patient position with an inertial measurement unit can improve the system's accuracy (predicting left-side lying, right-side lying, or supine); ii) to determine the range of transverse pelvis angles (TPA) that fully offloaded each of the greater trochanters and the sacrum; and iii) to evaluate the potential benefit of predicting the individual's position with higher precision (classifying position into more than three categories), taking into account a potential drop in prediction accuracy as well as the range of TPA over which the greater trochanters and sacrum were fully offloaded. Data from 18 participants were combined with previous data sets to train and evaluate classifiers that predict the participant's TPA using four different position bin sizes (~70°, 45°, ~30°, and 15°), where patients are left-side lying at -90°, right-side lying at 90° and supine at 0°, and to evaluate the effects of increasing precision on performance. A leave-one-participant-out cross-validation approach was used to select the best-performing classifier, which was found to have an accuracy of 84.03% with an F1 score of 0.8399. Skin-bed interface forces were measured using force-sensitive resistors placed on the greater trochanters and sacrum. Complete offloading of the sacrum was only achieved for positions with TPA <-90° or >90°, indicating there was no benefit to predicting with greater precision than the three categories: left, right, and supine.
nursing
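Leave-one-participant-out cross-validation, as used to select the classifier above, is what scikit-learn calls LeaveOneGroupOut with participant IDs as groups. A minimal sketch on synthetic load-cell features (all data and the injected signal are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(5)
n_participants, n_per = 18, 200
groups = np.repeat(np.arange(n_participants), n_per)     # participant IDs
X = rng.normal(size=(n_participants * n_per, 4))         # stand-in load-cell features
position = rng.integers(0, 3, n_participants * n_per)    # 0=left, 1=supine, 2=right
X[:, 0] += (position - 1) * 1.5                          # inject a learnable signal

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, position, groups=groups, cv=LeaveOneGroupOut())
print(f"leave-one-participant-out accuracy: {scores.mean():.3f}")
```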
10.1101/2022.03.14.22272394
Detection of bacterial co-infections and prediction of fatal outcomes in COVID-19 patients presenting to the emergency department using a 29 mRNA host response classifier
Objective: Clinicians in the emergency department (ED) face challenges in concurrently assessing patients with suspected COVID-19 infection, detecting bacterial co-infection, and determining illness severity, since current practices require separate workflows. Here we explore the accuracy of the IMX-BVN-3/IMX-SEV-3 29 mRNA host response classifiers in simultaneously detecting SARS-CoV-2 infection, bacterial co-infections, and predicting the clinical severity of COVID-19. Methods: 161 patients with PCR-confirmed COVID-19 (52.2% female, median age 50.0 years, 51% hospitalized, 5.6% deaths) were enrolled at the Stanford Hospital ED. RNA was extracted from 2.5 mL whole blood collected in PAXgene Blood RNA tubes, and 29 host mRNAs expressed in response to the infection were quantified using NanoString nCounter. Results: The IMX-BVN-3 classifier identified SARS-CoV-2 infection in 151 patients, for a sensitivity of 93.8%. Six of the 10 patients not detected by the classifier had positive COVID tests more than 9 days prior to enrollment, and the remaining patients oscillated between positive and negative results in subsequent tests. The classifier also predicted that 6 (3.7%) patients had a bacterial co-infection. Clinical adjudication confirmed that 5/6 (83.3%) of these patients had bacterial infections, i.e., Clostridioides difficile colitis (n=1), urinary tract infection (n=1), and clinically diagnosed bacterial infections (n=3), for a specificity of 99.4%. 2/101 (2.0%) patients in the IMX-SEV-3 Low and 7/60 (11.7%) in the Moderate severity classifications died within thirty days of enrollment. Conclusions: The IMX-BVN-3/IMX-SEV-3 classifiers accurately identified patients with COVID-19 and bacterial co-infections, and predicted patients' risk of death. A point-of-care version of these classifiers, under development, could improve ED patient management, including more accurate treatment decisions and optimized resource utilization.
infectious diseases
10.1101/2022.03.16.22272493
Outdoor long-range transmission of COVID-19 and patient zero
Following the outdoor model of risk assessment developed in one of our previous studies, we demonstrate in the present work that long-range transport of infectious aerosols can create a patient "zero" at downwind distances beyond one hundred kilometers. The very low probability of this outdoor transmission can be compensated by a high number and density of susceptible individuals, such as occurs in large cities.
infectious diseases
10.1101/2022.03.16.22272483
Automation of plate inoculation and reading reduces process time in the clinical microbiology laboratory, compared to a manual workflow
We evaluated the benefits of automation for clinical microbiology laboratory workflow, comparing the time required for culture inoculation and plate reading between manual processes and BD Kiestra (hereafter Kiestra) automation. Kiestra automation consists of different modules, each of which facilitates a different task; together the modules cover the workflow from specimen inoculation to results reporting. We tracked the steps and measured the associated times-to-completion for both manual and automated workflows, including the number of plate touches, hands-on time (HOT), total technologist time (TTT), and walk-away time (WAT). The manual and automated processes each included 90 samples, for a total of 180 agar plates. Three media-quantity protocols were used to mimic common laboratory samples: 1) 30 urine specimens on one biplate; 2) 30 urine specimens divided across two plates; and 3) 30 blood or wound specimens divided across three plates; all cultures were incubated for a minimum of 18 hours prior to reading or imaging. Automation reduced HOT by 85% and overall plate touches by 88%. Automated inoculation (through implementation of the BD InoqulA module) resulted in a 100% reduction in plate touches and created 81.5 minutes of WAT. Automation of culture reading (through implementation of the BD ReadA incubation/plate imaging module and BD Synapsys informatics) reduced HOT by 53%. Overall, laboratory automation resulted in shorter TTT and created WAT when compared to manual processes. Automation can facilitate increased processing capacity, more efficient use of the labor force, and reduced time to results in the clinical microbiology laboratory.
infectious diseases
10.1101/2022.03.16.22272481
Human cell-camouflaged nanomagnetic scavengers restore immune homeostasis in a rodent model with bacteremia
Bloodstream infection caused by antimicrobial-resistant pathogens is a global concern because it is difficult to treat with conventional therapy. Here we report scavenger magnetic nanoparticles enveloped by nanovesicles derived from blood cells (MNVs), which magnetically eradicate an extreme range of pathogens in an extracorporeal circuit. We quantitatively show that glycophorin A and complement receptor (CR) 1 on red blood cell (RBC)-MNVs predominantly capture human fecal bacteria, carbapenem-resistant E. coli, extended-spectrum beta-lactamase-positive (ESBL-positive) E. coli, vancomycin-intermediate S. aureus (VISA), endotoxins, and proinflammatory cytokines in human blood. Additionally, CR3 and CR1 on white blood cell-MNVs mainly contribute to depleting the envelope proteins of Zika virus, SARS-CoV-2, and their variants in human blood. Supplementing opsonins into the blood significantly augments the pathogen removal efficiency owing to combinatorial interactions between pathogens and CR1 and CR3 on MNVs. The extracorporeal blood cleansing enables full recovery of lethally infected rodents within seven days when they are treated twice in series. We also validate that parameters reflecting immune homeostasis, such as blood cell counts, cytokine levels, and transcriptomic changes, are restored in the blood of the fatally infected rats after treatment.
infectious diseases
10.1101/2022.03.16.22272491
Performance Evaluation of VIDAS Diagnostic Assays Detecting Dengue Virus NS1 Antigen, and Anti-Dengue Virus IgM and IgG Antibodies: a Multicenter, International Study
Dengue is a serious mosquito-transmitted disease caused by the dengue virus (DENV). Rapid and reliable diagnosis of DENV infection is urgently needed in dengue-endemic regions. We describe here the performance evaluation of the CE-marked VIDAS® dengue immunoassays developed for the automated detection of DENV NS1 antigen and anti-DENV IgM and IgG antibodies. A multicenter concordance study was conducted in 1,296 patients from dengue-endemic regions in Asia, Latin America, and Africa. VIDAS® dengue results were compared to those of competitor enzyme-linked immunosorbent assays (ELISA). The VIDAS® dengue assays showed high precision (CV ≤ 10.7%) and limited cross-reactivity (≤ 15.4%) with other infections. VIDAS® DENGUE NS1 Ag showed high positive and negative percent agreement (92.8% PPA, 91.7% NPA) in acute patients within 0-5 days of symptom onset. VIDAS® Anti-DENGUE IgM and IgG showed moderate to high concordance with ELISA (74.8% to 90.6%) in post-acute and recovery patients. PPA was further improved in combined VIDAS® NS1/IgM (96.4% in 0-5 days acute patients) and IgM/IgG (91.9% in post-acute patients) tests. Altogether, the VIDAS® dengue NS1, IgM and IgG assays performed well, either alone or in combination, and should be suitable for the accurate diagnosis of DENV infection in dengue-endemic regions.
infectious diseases
10.1101/2022.03.16.22272499
Should individuals be paid to participate in mass drug administration to control malaria?
Mass drug administration (MDA) is a malaria control strategy in which antimalarial drugs are offered to a whole community. Although MDA can potentially clear malaria from a community, it is not routinely used in eradication efforts because of ethical concerns and past failures to achieve lasting elimination. One potential means of improving the outcome of MDA is to incentivize individuals to participate, for example through monetary payments. In this study, our aim is to inform the decision to use MDA to eradicate malaria and to explore whether individuals should be incentivized to participate. Through the lens of a mathematical model, we clarify how the costs and benefits of MDA are context-dependent. We highlight that in a community experiencing a sufficiently large improvement in clinical case management, relative to the prevalence of malaria in the community, the addition of MDA can catalyze stable elimination. However, participation rates are critical, and individuals who avoid every round of treatment can prevent elimination. We explore how, in this scenario, individual incentives could change the cost-benefit breakdown, measured at three levels: personal, local community, and wider community.
infectious diseases
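The point that individuals who avoid every round can prevent elimination can be reproduced in a toy two-group SIS model: a fraction of the population participates in every MDA round, the rest never do. This is not the authors' model; all parameters, round timings, and the fade-out threshold are illustrative:

```python
import numpy as np

def simulate(participation, rounds=(0, 30, 60, 90), days=3 * 365, dt=0.1,
             beta=0.04, clear=0.02, efficacy=0.95):
    """Toy SIS malaria model with MDA. A fraction `participation` takes every
    round; the rest never do. Endemic equilibrium is 1 - clear/beta = 0.5."""
    I_p = I_n = 0.5                      # infected fractions: participants / avoiders
    p, t, pending = participation, 0.0, list(rounds)
    while t < days:
        if pending and t >= pending[0]:
            I_p *= 1 - efficacy          # MDA clears infections in participants only
            pending.pop(0)
        overall = p * I_p + (1 - p) * I_n
        if overall < 1e-4:               # proxy for stochastic fade-out (illustrative)
            return 0.0
        foi = beta * overall             # force of infection, homogeneous mixing
        I_p += dt * (foi * (1 - I_p) - clear * I_p)
        I_n += dt * (foi * (1 - I_n) - clear * I_n)
        t += dt
    return overall

for part in (0.90, 0.99, 1.00):
    print(f"participation {part:.0%}: prevalence after 3 years = {simulate(part):.3f}")
```

In this toy setup, anything short of complete participation leaves a reservoir among never-participants and prevalence rebounds, while full participation drives the system below the fade-out threshold.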
10.1101/2022.03.16.22272513
Humoral and cellular immune responses to CoronaVac assessed up to one year after vaccination
Background: The Sinovac SARS-CoV-2 inactivated vaccine (CoronaVac) has been demonstrated to be safe, well tolerated, and efficacious in preventing mild and severe COVID-19. Although different studies have demonstrated its short-term immunogenicity, long-term evaluations of the cellular and humoral response are still lacking. Methods: Cellular and humoral responses were assessed after enrollment of volunteers in the PROFISCOV phase 3 double-blind, randomized, placebo-controlled clinical trial to evaluate CoronaVac. Assays were performed using flow cytometry to evaluate the cellular immune response and an antigen-binding electrochemiluminescence assay to detect antigen-specific antibodies to the virus. Results: Fifty-three volunteers were selected for long-term assessment of their SARS-CoV-2-specific immune responses. CD4+ T cell responses (including circulating follicular helper (cTfh, CD45RA- CXCR5+) cells expressing CD40L, as well as non-cTfh cells expressing CXCR3) were observed early upon the first vaccine dose, increased after the second dose, and remained stable for 6 months. Memory CD4+ T cells were detected in almost all vaccinees, the majority being central memory T cells. IgG levels against Wuhan/WH04/2020 N, S and receptor binding domain (RBD) antigens, and against the S and RBD antigens of the variants of concern (VOCs, including B.1.1.7/Alpha, B.1.351/Beta and P.1/Gamma), peaked 14 days after the second vaccine dose and were mostly stable over a 1-year period. Conclusions: The CoronaVac two-dose regimen induces a potent and durable SARS-CoV-2-specific cellular response. The cellular reaction is part of a coordinated immune response that includes high levels of specific IgG against parental and VOC strains of SARS-CoV-2, still detected after one year. Funding: Fundação Butantan, Instituto Butantan and São Paulo Research Foundation (FAPESP) (grants 2020/10127-1 and 2020/06409-1). This work was also supported by NIH contract 75N93019C00065 (A.S., D.W.). PATH facilitated reagent donations for this work with support from the Bill & Melinda Gates Foundation (INV-021239). Under the grant conditions of the foundation, a Creative Commons Attribution 4.0 Generic License has already been assigned to the Author Accepted Manuscript version that might arise from this submission.
allergy and immunology
10.1101/2022.03.16.22272464
The transcriptional profile of keloidal Schwann cells
Recently, a specific Schwann cell type with pro-fibrotic and tissue-regenerative properties has been identified that contributes to keloid formation. In the present study, we reanalysed published single-cell RNA sequencing (scRNAseq) studies of keloids, healthy skin and normal scars to reliably determine the specific gene expression profile of the keloid-specific Schwann cell type in more detail. We were able to confirm the presence of the repair-like, pro-fibrotic Schwann cell type in the datasets of all three studies and identified a specific gene set for these Schwann cells. In contrast to keloids, in normal scars the number of Schwann cells was not increased, nor was their gene expression profile distinctly different from that of Schwann cells of normal skin. In addition, our bioinformatics analysis provided evidence for a role of transcription factors of the Krüppel-like factor family, and of members of the immediate early response genes, in the de-differentiation process of keloidal Schwann cells. Together, our analysis strengthens the role of the pro-fibrotic Schwann cell type in the formation of keloids. Knowledge of the exact gene expression profile of these Schwann cells will facilitate their identification in other organs and diseases.
dermatology
10.1101/2022.03.16.22272492
Estimation of time-varying causal effects with multivariable Mendelian randomization: some cautionary notes
Introduction: For many exposures present across the life course, the effect of the exposure may vary over time. Multivariable Mendelian randomization (MVMR) is an approach that can assess the effects of related risk factors using genetic variants as instrumental variables. Recently, MVMR has been used to estimate the effects of an exposure during distinct time periods. Methods: We investigated the behaviour of estimates from MVMR in a simulation study covering different time-varying causal scenarios. We also performed an applied analysis to consider how MVMR estimates of the effect of body mass index on systolic blood pressure vary depending on the time periods considered. Results: Estimates from MVMR in the simulation study were close to the true values when the outcome model was correctly specified: that is, when the outcome was a discrete function of the exposure at the precise timepoints at which the exposure was measured. However, in more realistic cases, MVMR estimates were not only incorrect, but misleading. For example, in one scenario MVMR estimates for early life were clearly negative despite the true causal effect being constant and positive. In the applied example, estimates were highly variable depending on the time period in which genetic associations with the exposure were estimated. Conclusions: The poor performance of MVMR for studying time-varying causal effects can be attributed to model misspecification and violation of the exclusion restriction assumption. We urge caution about quantitative conclusions from such analyses, and even about qualitative interpretations of the direction, or presence or absence, of a causal effect during a given time period.
epidemiology
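The correctly specified case described above, where MVMR recovers the true effects, can be sketched with simulated summary statistics: regress per-variant outcome associations on per-variant associations with the exposure at both timepoints. Misspecifying the outcome model (e.g., letting the outcome depend on exposure between the measured timepoints) is what breaks these estimates. Synthetic parameters throughout:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 20_000, 50                             # individuals, genetic variants
G = rng.binomial(2, 0.3, size=(n, m)).astype(float)
a_early = rng.normal(0, 0.1, m)               # variant effects on early-life exposure
a_late = 0.6 * a_early + rng.normal(0, 0.1, m)  # correlated effects on later exposure
x_early = G @ a_early + rng.normal(size=n)
x_late = G @ a_late + rng.normal(size=n)
y = 0.5 * x_early + 0.5 * x_late + rng.normal(size=n)   # constant true effect 0.5

def beta_hat(G, v):
    """Per-variant univariable regression coefficients (summary statistics)."""
    Gc, vc = G - G.mean(0), v - v.mean()
    return (Gc * vc[:, None]).sum(0) / (Gc ** 2).sum(0)

bx = np.column_stack([beta_hat(G, x_early), beta_hat(G, x_late)])
by = beta_hat(G, y)
mvmr = np.linalg.lstsq(bx, by, rcond=None)[0]  # MVMR: regress by on both bx columns
print(f"MVMR estimates (early, late): {mvmr.round(2)} vs true (0.5, 0.5)")
```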
10.1101/2022.03.16.22272518
Comparison of accelerometry-based measures of physical activity
Background: Given the evolution of processing and analyzing accelerometry data over the past decade, it is of utmost importance that we as a field understand how newer summary measures (e.g., MIMS) compare to long-established ones (e.g., ActiGraph activity counts). Objective: Our study aims to compare and harmonize accelerometry-based measures of physical activity (PA) to increase the comparability, generalizability, and translation of findings across studies using objective measures of PA. Methods: High-resolution accelerometry data were collected from 655 participants in the Baltimore Longitudinal Study of Aging who wore an ActiGraph GT9X device at the wrist continuously for a week. Data were summarized at the minute level as activity counts (AC; the measure obtained from ActiGraph's ActiLife software) and as MIMS, ENMO, MAD, and AI (open-source measures implemented in R). The correlation between AC and the other measures was quantified both marginally and conditionally on age, sex and BMI. Next, each pair of measures was harmonized using nonparametric regression of minute-level measurements. Results: The study sample had the following characteristics: mean (SD) age of 69.8 (14.2) years, BMI of 27.3 (5.0) kg/m2, 54.5% female, and 67.9% white. The marginal participant-specific correlations between AC and MIMS, ENMO, MAD, and AI were 0.988, 0.867, 0.913 and 0.970, respectively. After harmonization, the mean absolute percentage error for predicting total AC from MIMS, ENMO, MAD, and AI was 2.5%, 14.3%, 11.3% and 6.3%, respectively. The accuracy for predicting sedentary minutes based on AC (AC > 1853) using MIMS, ENMO, MAD and AI was 0.981, 0.928, 0.904, and 0.960, respectively. An R package with a unified interface for computing the open-source measures from raw accelerometry data was developed and published as SummarizedActigraphy. Conclusions: Our comparison of accelerometry-based measures of PA enables researchers to extend the knowledge from the thousands of manuscripts that have been published using ActiGraph AC to MIMS and other measures, by demonstrating their high correlation and providing a harmonization mapping.
epidemiology
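The harmonization step, nonparametric regression of one minute-level measure on another, can be sketched with LOWESS plus interpolation. The AC-MIMS relationship below is invented, and a single aggregate error stands in for the study's participant-level MAPE, so the result is illustrative only:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(8)
n = 2000
mims = rng.gamma(2, 5, n)                          # synthetic minute-level MIMS
ac = np.clip(120 * mims ** 1.1 + rng.normal(0, 50, n), 0, None)  # invented AC

# Harmonization mapping: nonparametric regression of AC on MIMS
fit = lowess(ac, mims, frac=0.3, return_sorted=True)
predict = lambda x: np.interp(x, fit[:, 0], fit[:, 1])

total_err = abs(predict(mims).sum() - ac.sum()) / ac.sum() * 100
print(f"percentage error in total AC: {total_err:.1f}%")
```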
10.1101/2022.03.16.22272521
The decay of coronavirus in sewage pipes and the development of a predictive model for the estimation of SARS-CoV-2 infection cases based on wastewater surveillance
Wastewater surveillance serves as a promising approach to elucidate the silent transmission of SARS-CoV-2 in a given community by detecting the virus in wastewater treatment facilities. This study monitored the viral RNA abundance at one wastewater treatment plant (WWTP) and three communities during the COVID-19 outbreak in the Yanta district of Xian city from December 2021 to January 2022. To further understand the decay of the coronavirus in sewage pipes, avian infectious bronchitis virus (IBV) was seeded into two recirculating water systems, which were operated for 90 days. Based on the viral abundance in the wastewater of Xian and the above data on the decay of coronavirus in sewage pipes, Monte Carlo simulations were performed to estimate the number of infected cases in Xian. The results suggested that the delta variant was first detected on Dec-10, five days earlier than the reported date of clinical samples. SARS-CoV-2 was detected on December 18 in the monitored community, two days earlier than the first case, and was consecutively detected at the following two sampling times. In pipelines without biofilms, high temperature significantly reduced the viral RNA abundance by 2.18 log10 GC/L over a 20 km travel distance, while only a 1.68 log10 GC/L reduction was observed in the pipeline with a low water temperature. After 90 days of operation, the biofilm had matured in the pipelines of both systems. Reductions in viral RNA abundance of 2.14 and 4.79 log10 GC/L were observed in the low- and high-temperature systems with mature biofilms, respectively. Based on the above results, we adjusted the input parameters for the Monte Carlo simulation and estimated 23.3, 50.1, 127.3 and 524.2 infected persons on December 14, 18, 22 and 26, respectively, which is largely consistent with the clinical reports. This work highlights the viability of wastewater surveillance for the early warning of COVID-19 at both the community and city levels, which represents a valuable complement to clinical approaches.
epidemiology
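The back-calculation in this abstract is not fully specified; the following is a hedged R sketch of a Monte Carlo estimate of infected persons from measured RNA loads, with all distributions and parameter values invented for illustration only.

set.seed(1)
n_sim <- 10000
rna_load     <- rlnorm(n_sim, meanlog = log(1e12), sdlog = 0.3)   # GC/day arriving at the WWTP (assumed)
decay_log10  <- runif(n_sim, 1.7, 2.2)                            # log10 loss in the sewer network (assumed)
shed_per_cap <- rlnorm(n_sim, meanlog = log(1e10), sdlog = 0.5)   # GC shed per infected person per day (assumed)

infected <- rna_load * 10^decay_log10 / shed_per_cap              # correct for in-pipe decay, divide by shedding
quantile(infected, c(0.05, 0.5, 0.95))                            # central estimate with uncertainty interval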
10.1101/2022.03.16.22272469
General Framework for Evaluating Outbreak Prediction, Detection, and Annotation Algorithms
The COVID-19 pandemic has highlighted and accelerated the use of algorithmic decision support for public health. The latter's potential impact and risk of bias and harm urgently call for scrutiny and evaluation standards. One example is the early detection of local infectious disease outbreaks. Whereas many statistical models have been proposed and disparate systems are routinely used, each tailored to specific data streams and uses, no systematic evaluation strategy of their performance in a real-world context exists. One difficulty in evaluating outbreak prediction, detection, or annotation lies in the scales of different approaches: How to compare slow but fine-grained genetic clustering of individual samples with rapid but coarse anomaly detection based on aggregated syndromic reports? Or alarms generated for different, overlapping geographical regions or demographics? We propose a general, data-driven, user-centric framework for evaluating heterogeneous outbreak algorithms. Discrete outbreak labels and case counts are defined on a custom data grid, and associated target probabilities are then computed and compared with algorithm output. The latter is defined as discrete "signals" generated for a number of grid cells (the finest available in the benchmarking data set) with different weights and prior outbreak information, from which estimated outbreak label probabilities are then derived. The prediction performance is quantified through a series of metrics, including the confusion matrix, regression scores, and mutual information. The dimensions of the data grid can be weighted by the user to reflect epidemiological criteria.
epidemiology
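Two of the metrics named above are easy to make concrete. A small self-contained R sketch, with invented grid-cell labels and algorithm signals, computing a confusion matrix and the mutual information between truth and output:

set.seed(42)
truth  <- rbinom(500, 1, 0.1)                          # outbreak label per grid cell (toy)
signal <- ifelse(runif(500) < 0.8, truth, 1 - truth)   # noisy algorithm "signal" (toy)

table(truth = truth, signal = signal)                  # confusion matrix

mutual_information <- function(x, y) {                 # in bits
  p  <- table(x, y) / length(x)
  px <- rowSums(p); py <- colSums(p)
  sum(ifelse(p > 0, p * log2(p / outer(px, py)), 0))
}
mutual_information(truth, signal)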
10.1101/2022.03.16.22272497
Variant-specific burden of SARS-CoV-2 in Michigan: March 2020 through November 2021
Accurate estimates of total burden of SARS-CoV-2 are needed to inform policy, planning and response. We sought to quantify SARS-CoV-2 cases, hospitalizations, and deaths by age in Michigan. COVID-19 cases reported to the Michigan Disease Surveillance System were multiplied by age and time-specific adjustment factors to correct for under-detection. Adjustment factors were estimated in a model fit to incidence data and seroprevalence estimates. Age-specific incidence of SARS-CoV-2 hospitalization, death, and vaccination, and variant proportions were estimated from publicly available data. We estimated substantial under-detection of infection that varied by age and time. Accounting for under-detection, we estimate cumulative incidence of infection in Michigan reached 75% by mid-November 2021, and over 87% of Michigan residents were estimated to have had ≥1 vaccination dose and/or previous infection. Comparing pandemic waves, the relative burden among children increased over time. Adults ≥80 years were more likely to be hospitalized or die if infected in fall 2020 than if infected during later waves. Our results highlight the ongoing risk of periods of high SARS-CoV-2 incidence despite widespread prior infection and vaccination. This underscores the need for long-term planning for surveillance, vaccination, and other mitigation measures amidst continued response to the acute pandemic.
epidemiology
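The correction step lends itself to a one-line illustration. An R sketch with invented adjustment factors and population sizes (the study estimates its factors from incidence and seroprevalence data):

reported <- c(`0-17` = 1000, `18-64` = 5000, `65+` = 1200)      # reported cases (toy)
adj      <- c(`0-17` = 6.0,  `18-64` = 3.5,  `65+` = 2.0)       # under-detection factors (assumed)
pop      <- c(`0-17` = 2.1e6, `18-64` = 6.0e6, `65+` = 1.8e6)   # population sizes (assumed)

infections <- reported * adj                                    # corrected infections
round(100 * infections / pop, 2)                                # cumulative incidence (%)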
10.1101/2022.03.16.22272440
Serological Responses to the First Three Doses of SARS-CoV-2 Vaccination in Inflammatory Bowel Disease: A Prospective Cohort Study
BackgroundIndividuals with inflammatory bowel disease (IBD) who are immunocompromised may have a reduced serological response to the SARS-CoV-2 vaccine. We investigated serological responses following 1st, 2nd, and 3rd doses of SARS-CoV-2 vaccination in those with IBD. MethodsA prospective cohort study of persons with IBD (n = 496) assessed serological response 1-8 weeks after 1st dose vaccination, 1-8 weeks after 2nd dose, 8 or more weeks after 2nd dose, and at least 1 week after 3rd dose. Seroconversion and geometric mean titer (GMT) with 95% confidence intervals (CI) were assessed for antibodies to the SARS-CoV-2 spike protein. Multivariable linear regression models assessed the adjusted fold change (FC) in antibody levels. ResultsSeroconversion and GMT increased from post-1st dose to 1-8 weeks post-2nd dose (81.6%, 1814 AU/mL vs. 98.7%, 9229 AU/mL, p<0.001), decreased after 8 weeks post-2nd dose (94.9%, 3002 AU/mL, p<0.001), and rebounded post-3rd dose (99.6%, 14639 AU/mL, p<0.001). Prednisone was the only IBD-related medication associated with diminished antibody response after 3rd-dose vaccination (FC: 0.07 [95% CI: 0.02, 0.20]). Antibody levels steadily decline following the 2nd (FC: 0.92 [95% CI: 0.90, 0.94] per week) and 3rd dose (FC: 0.88 [95% CI: 0.84, 0.92] per week) of the SARS-CoV-2 vaccine. ConclusionA three-dose regimen of vaccination to SARS-CoV-2 yields a robust antibody response for those with IBD across all classes of IBD therapies except for prednisone. The decaying antibody levels following the 3rd dose of the vaccine should be monitored in future studies.
gastroenterology
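For readers unfamiliar with the quantities reported, the following is a hedged R sketch of a geometric mean titer with 95% CI and a covariate fold change estimated on the log scale, using simulated titers rather than the study's data.

set.seed(7)
titer <- rlnorm(100, meanlog = log(9000), sdlog = 1)            # antibody levels in AU/mL (simulated)
lt    <- log(titer)

exp(mean(lt))                                                   # geometric mean titer (GMT)
exp(mean(lt) + c(-1, 1) * qt(0.975, 99) * sd(lt) / sqrt(100))   # 95% CI for the GMT

prednisone <- rbinom(100, 1, 0.1)                               # hypothetical exposure indicator
fit <- lm(lt ~ prednisone)                                      # add covariates to obtain adjusted estimates
exp(coef(fit)["prednisone"])                                    # fold change (FC) vs. unexposed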
10.1101/2022.03.16.22272467
Selective visuoconstructional impairment following mild COVID-19 with inflammatory and neuroimaging correlation findings.
People recovered from COVID-19 may still present complications, including respiratory and neurological sequelae. In other viral infections, cognitive impairment occurs due to brain damage or dysfunction caused by vascular lesions and inflammatory processes. Persistent cognitive impairment compromises daily activities and psychosocial adaptation. Some level of neurological and psychiatric consequences was expected and described in severe cases of COVID-19. However, it is debatable whether neuropsychiatric complications are related to COVID-19 itself or to the unfolding of a severe infection. Nevertheless, the majority of cases recorded worldwide were mild to moderate self-limited illness in non-hospitalized people. Thus, it is important to understand the implications of mild COVID-19, which represents the largest and most understudied pool of COVID-19 cases. We investigated adults at least four months after recovering from mild COVID-19, who were assessed by neuropsychological, ocular and neurological tests, immune marker assays, and structural MRI and 18FDG-PET neuroimaging to shed light on putative brain changes and clinical correlations. In approximately one-quarter of mild-COVID-19 individuals, we detected a specific visuoconstructive deficit, which was associated with changes in molecular and structural brain imaging and correlated with upregulation of peripheral immune markers. Our findings provide evidence of a neuroinflammatory burden causing cognitive deficit in an already large and growing fraction of the world population. Amid a multitude of mild COVID-19 cases, action is required for a more comprehensive assessment and follow-up of the cognitive impairment, allowing a better understanding of symptom persistence and the need for rehabilitation of the affected individuals.
psychiatry and clinical psychology
10.1101/2022.03.16.22272412
Enabling Health Outcomes of Nature-based Interventions: A Systematic Scoping Review
BackgroundThe burden of poor mental health and non-communicable disease is increasing, and some practitioners are turning to nature to provide the solution. Nature-based interventions could offer cost-effective solutions that benefit both human health and the environment by reconnecting individuals with nature. Importantly, the relative success of these interventions depends upon the accessibility of green and blue spaces, and the way in which people engage with them. Aims and ObjectivesA scoping review was conducted to establish the evidence base for nature-based interventions as a treatment for poor mental and physical health, and to assess whether and how enablers influence engagement with natural outdoor environments. MethodsThis scoping review followed the PRISMA-ScR and the associated Cochrane guidelines for scoping reviews. A literature search was performed across five databases and the grey literature, and articles were selected based on key inclusion and exclusion criteria. Exposure was active engagement with natural environments. The primary outcome was mental health and the secondary outcome was physical health, both defined using established metrics. All data were extracted to a charting table and reported as a narrative synthesis. ResultsThe final analysis included thirty-nine studies. Most of these focused on green spaces, with only five dedicated to blue spaces. Six nature-based health intervention types were identified: (i) educational interventions, (ii) physical activity in nature, (iii) wilderness therapy, (iv) leisure activities, (v) gardening and (vi) changes to the built environment. Of the 39 studies, 92.2% demonstrated consistent improvements across health outcomes when individuals engaged with natural outdoor environments (NOEs). Furthermore, of 153 enablers that were found to influence engagement, 78% facilitated engagement while 22% reduced it. Aspects such as the sense of wilderness, accessibility, opportunities for physical activity and the absence of noise/air pollution all increased engagement. ConclusionFurther research is still needed to establish the magnitude and relative effect of nature-based interventions, as well as to quantify the compounding effect of enablers on mental and physical health. This must be accompanied by a global improvement in study design. Nevertheless, this review has documented the increasing body of heterogeneous evidence in support of NBIs as effective tools to improve mental, physical, and cognitive health outcomes. Enablers that facilitate greater engagement with natural outdoor environments, such as improved biodiversity, sense of wilderness and accessibility, as well as opportunities for physical activity and an absence of pollution, will likely improve the impact of nature-based interventions and further reduce public health inequalities.
public and global health
10.1101/2022.03.16.22272515
Did national holidays accelerate COVID-19 diffusion in Bangladesh? A case study
Bangladesh registered 1,573,828 confirmed cases of COVID-19, and the death toll crossed the grim milestone of 27,946 across the country as of 9th December, 2021. Despite the enforcement of stringent COVID-19 measures, including nationwide lockdowns, travel bans, tighter curbs on nonessential activities, and social distancing, the country witnessed an accelerated diffusion of coronavirus cases during national events and festivals in 2020. The present study aims to examine the association between national events/festivals and the transmission dynamics of COVID-19 by looking at the instantaneous reproduction number, Rt, of the 64 districts in Bangladesh. We further illustrate the COVID-19 diffusion explicitly in Dhaka Division during the first phase of the pandemic. The comprehensive analysis shows an escalation of the Rt value in Dhaka and in all industrialized cities during major events such as garment industry reopening and religious holidays in Bangladesh. Based on the analysis, a set of measures has also been suggested to mitigate future pandemic risks while running national festival activities. Highlights: Bangladesh confirmed 1,573,828 coronavirus cases and 27,946 deaths due to the current COVID-19 outbreak. The country observed significant COVID-19 diffusion in its business hubs during national holidays. Dhaka, the capital of Bangladesh, is the epicenter of the ongoing pandemic. The calculated Rt value shows an escalation in Dhaka and its neighboring cities at the time of national events. The Bangladesh Government needs to consider interdisciplinary approaches and contextual policies to contain future pandemics during national events.
public and global health
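The abstract does not state how Rt was computed; one widely used option for the instantaneous reproduction number is the Cori et al. estimator in the EpiEstim R package. A sketch with toy incidence and placeholder serial-interval parameters (not the study's values):

library(EpiEstim)

incid <- c(5, 8, 12, 20, 35, 50, 64, 80, 95, 110, 120, 115, 100, 90)       # daily cases (toy)
res <- estimate_R(incid,
                  method = "parametric_si",
                  config = make_config(list(mean_si = 5.2, std_si = 2.8)))  # placeholder serial interval
res$R[, c("t_start", "t_end", "Mean(R)")]                                   # sliding-window Rt estimates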
10.1101/2022.03.16.22272510
Acceptance of COVID-19 vaccine among sub-Saharan Africans (SSA): a comparative study of residents and diaspora dwellers
The COVID-19 vaccines are being rolled out across all the Sub-Saharan Africa (SSA) countries, with countries setting targets for achieving full vaccination rates. The aim of this study was to compare the uptake of, resistance to and hesitancy toward the COVID-19 vaccine between SSA residents living locally and those in the diaspora. This was a cross-sectional study conducted using a web- and paper-based questionnaire to obtain relevant information on COVID-19 vaccine acceptance. The survey items included questions on demography, uptake and planned acceptance or non-acceptance of the COVID-19 vaccines among SSAs. Multinomial logistic regression was used to determine probabilities of outcomes for factors associated with COVID-19 vaccination resistance and hesitancy among SSA respondents residing within and outside Africa. Uptake of COVID-19 vaccines varied between local (14.2%) and diaspora (25.3%) residents. There was more resistance to the COVID-19 vaccine among locals (68.1%) and across the sociodemographic variables of sex [adjusted Relative Risk (ARR) = 0.73, 95% CI: 0.58-0.93], primary or lower [ARR = 0.22, 95% CI: 0.12-0.40] and bachelor's degree [ARR = 0.58, 95% CI: 0.43-0.77] educational levels, occupation [ARR = 0.32, 95% CI: 0.25-0.40] and working status [ARR = 1.40, 95% CI: 1.06-1.84]. COVID-19 vaccine hesitancy was almost identical between locals and diasporans (17.7% and 17.8%, respectively), significant only among healthcare workers [ARR = 0.46, 95% CI: 0.16-1.35] in the diaspora after adjusting for the variables. Similarly, knowledge and perception of the COVID-19 vaccine among locals were substantial, but only perception was notably associated with resistance [ARR = 0.86, 95% CI: 0.82-0.90] and hesitancy [ARR = 0.85, 95% CI: 0.80-0.90] to the vaccine. Differences exist in the factors that influence COVID-19 vaccine acceptance between local SSA residents and those in the diaspora. Knowledge about COVID-19 vaccines affects the uptake of, resistance to, and hesitancy toward the COVID-19 vaccine. Information campaigns focusing on the efficacy and safety of vaccines could lead to improved acceptance of COVID-19 vaccines.
public and global health
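A minimal R sketch of the multinomial model described, with simulated data and invented covariates; exponentiated coefficients play the role of the adjusted relative risks (ARR) reported above.

library(nnet)
set.seed(3)

n <- 1000
d <- data.frame(
  outcome   = factor(sample(c("acceptance", "hesitant", "resistant"), n, replace = TRUE),
                     levels = c("acceptance", "hesitant", "resistant")),
  sex       = factor(sample(c("female", "male"), n, replace = TRUE)),
  residence = factor(sample(c("local", "diaspora"), n, replace = TRUE))
)
fit <- multinom(outcome ~ sex + residence, data = d, trace = FALSE)
exp(coef(fit))   # relative risk ratios for hesitant/resistant vs. acceptance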
10.1101/2022.03.16.22272476
Socioeconomic, demographic and environmental factors inform intervention prioritization in urban Nigeria
Nigeria is one of three countries projected to have the largest absolute increase in the size of its urban population, and this could intensify malaria transmission in cities. Accelerated urban population growth is outpacing the availability of affordable housing and basic services and resulting in living conditions that foster vector breeding and heterogeneous malaria transmission. Understanding community determinants of malaria transmission in urban areas informs the targeting of interventions to populations at greatest risk. In this study, we analyzed cluster-level data from the Demographic and Health Surveys (DHS) and the Malaria Indicator Survey (MIS) as well as geospatial covariates to describe malaria burden and its determinants in areas administratively defined as urban in Nigeria. Overall, we found low malaria test positivity across urban areas. We observed declines in the test positivity rate over time and identified the percentage of individuals with post-primary education, the percentage of individuals in the rich wealth quintiles, the percentage of individuals living in improved housing in 2015, all-age population density, median age, the percentage of children under the age of five that sought medical treatment for fever, total precipitation and enhanced vegetation index as key community predictors of malaria transmission intensity. The unrepresentativeness of the DHS and MIS in urban settings at the state and geopolitical-zone level, regional differences in malaria seasonality across Nigeria, and information detection bias were among the likely factors that limited our ability to compare malaria burden across geographic space and ultimately drove model uncertainty. Nevertheless, study findings provide a starting point for informing decisions on intervention prioritization within urban spaces and underscore the need for improved regionally-focused surveillance systems in Nigeria.
public and global health
10.1101/2022.03.16.22272463
Examining Drivers of COVID-19 Vaccine Hesitancy in Ghana: The Roles of Political Allegiance, Misinformation Beliefs, and Sociodemographic Factors
The vast majority of people in the world who are unvaccinated against COVID-19 reside in low- and middle-income countries in sub-Saharan Africa. This includes Ghana, where only 14.4% of the country is considered fully vaccinated as of March 2022. A key factor negatively impacting vaccination campaigns is vaccine hesitancy, defined as the delay in the acceptance, or blunt refusal, of vaccines. Three online cross-sectional surveys of Ghanaian citizens were conducted in August 2020 (N = 3048), March 2021 (N = 1558), and June 2021 (N = 1295) to observe temporal trends of vaccine hesitancy in Ghana and to examine key groups and predictors associated with hesitancy. Quantitative measurements of hesitancy and participants' subjective reasons for hesitancy were assessed, including predictors such as misinformation beliefs, political allegiance, and demographic and socioeconomic factors. Descriptive statistics were employed to analyse temporal trends in hesitancy between surveys, and logistic regression analyses were conducted to identify key predictors of hesitancy. Findings revealed that overall hesitancy decreased from 36.8% (95% CI: 35.1%-38.5%) in August 2020 to 17.2% (95% CI: 15.3%-19.1%) in March 2021. However, hesitancy increased to 23.8% (95% CI: 21.5%-26.1%) in June 2021. Key reasons for refusing the vaccine in June 2021 included not having enough vaccine-related information (50.6%) and concerns over vaccine safety (32.0%). Groups most likely to express hesitancy included Christians, urban residents, opposition political party voters, people with more years of education, females, people who received COVID-19 information from internet sources, and people who expressed uncertainty about their beliefs in common COVID-19 misinformation. Groups with increased willingness to vaccinate included elected political party voters and people who reported receiving information about COVID-19 from the Ghana Health Service. This study provides knowledge on Ghanaian population confidence and concerns about COVID-19 immunisations and can support the development of locally-tailored health promotion strategies.
public and global health
10.1101/2022.03.16.22272489
Validation of a rapid potency assay for cord blood stem cells using phospho flow cytometry: the IL-3-pSTAT5 assay
Public cord blood banks (CBBs) are required to measure cord blood unit (CBU) potency before release, allowing for the identification of units that may be unsuitable for hematopoietic transplantation. We have developed a rapid flow cytometry assay based on the measurement of STAT5 phosphorylation of CD34+ stem cells in response to IL-3 stimulation: the IL-3-pSTAT5 assay. To adapt the assay from a research setting to its implementation within our CBB's regulated operations, we proceeded with a full method validation and a comparison of the correlation between the IL-3-pSTAT5 assay results and the colony-forming unit (CFU) assay results. A total of 60 CBUs cryopreserved in vials were analyzed by flow cytometry to determine the sensitivity, specificity, intra-assay precision, robustness, reproducibility and inter-laboratory agreement of the assay. The CFU assay was also performed on the same samples for comparison purposes. The assay threshold was established at 50% CD34+CD45+pSTAT5+, which provides 100% sensitivity and 98.3% specificity. An average intra-assay CV of 7.3% was determined. All results met our qualitative acceptance criteria regarding the inter-user and inter-laboratory agreements, IL-3 stimulation time, post-thaw incubation delay and staining time. The IL-3-pSTAT5 assay results correlated well with the total CFU counts determined using the CFU assay (r2 = 0.82, n = 56). This study shows that our rapid flow cytometry assay can be successfully validated and that the potency data obtained display good sensitivity, specificity and robustness. These results demonstrate the feasibility of implementing this assay within CBB operations as a validated CB potency assay.
transplantation
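The validation statistics reported are straightforward to reproduce in principle. An R sketch with invented measurements, applying the 50% CD34+CD45+pSTAT5+ threshold and computing sensitivity, specificity and the intra-assay CV:

pstat5 <- c(62, 71, 55, 48, 80, 35, 90, 66)                     # % responding CD34+ cells (toy)
potent <- c(TRUE, TRUE, TRUE, FALSE, TRUE, FALSE, TRUE, TRUE)   # reference classification (toy)
call   <- pstat5 >= 50                                          # assay call at the 50% threshold

sum(call & potent)   / sum(potent)                              # sensitivity
sum(!call & !potent) / sum(!potent)                             # specificity

replicates <- c(63, 59, 66)                                     # repeat runs of one unit (toy)
100 * sd(replicates) / mean(replicates)                         # intra-assay CV (%)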
10.1101/2022.03.16.22272485
Pride and adversity among nurses and physicians during the pandemic in two US healthcare systems: a mixed methods analysis
Our aims were to examine themes of the most difficult or distressing events reported by healthcare workers during the first wave of the COVID-19 pandemic in two US health care systems, in order to identify common themes and to relate them to both behavioral theory and measures of anxiety and depression. We conducted a cross-sectional survey during the early phases of the COVID-19 pandemic in the US. We measured symptoms of anxiety and depression separately, captured demographics, and asked two open-ended questions regarding events that were the most difficult or stressful and events that reinforced pride. The open-ended questions were independently coded into themes developed by the authors and mapped to factors related to fostering well-being according to Self-Determination Theory. We recruited 874 nurses and 248 physicians. About half shared their most distressing experiences as well as the experiences they were most proud of related to their professions. Themes that emerged from these narratives were congruent with the prediction of Self-Determination Theory that autonomy-supportive experiences will foster pride, while autonomy-thwarting experiences will cause distress. Those who reported distressing events were more anxious and depressed compared to those who did not. Among those who reported incidents that reinforced pride in the profession, depression was rarer compared to those who did not. These trends were evident after allowing for medical history and other covariates in logistic regressions. Causal claims from our analysis should be made with caution due to the cross-sectional study design. Understanding of perceptions of the pandemic by nurses and physicians may help identify sources of distress and means of reinforcing pride in the professions, thereby helping nurses and physicians cope with disasters, and may shape workplace policies during disasters that foster well-being among first responders. No Patient or Public Contribution: We studied physicians and nurses themselves.
occupational and environmental health
10.1101/2022.03.16.22272475
Health service research definition builder: An R Shiny application for exploring diagnosis codes associated with services reported in routinely collected health data
Many administrative health data-based studies define patient cohorts using procedure and diagnosis codes. The impact these criteria have on a study's final cohort is not always transparent to co-investigators or other audiences if access to the research data is restricted. We developed a SAS and R Shiny interactive research support tool which generates and displays the diagnosis code summaries associated with a selected medical service or procedure. This allows non-analyst users to interrogate claims data and groupings of reported diagnosis codes. The SAS program uses a tree classifier to find diagnosis codes associated with the service claims, compared against a matched, random sample of claims without the service. Claims are grouped based on the overlap of these associated diagnosis codes. The Health Services Research (HSR) Definition Builder Shiny application uses this input to create interactive tables and graphics, which update estimated claim counts of the selected service as users select inclusion and exclusion criteria. This tool can help researchers develop preliminary and shareable cohort definitions for administrative health data research. It allows an additional validation step of examining the frequency of all diagnosis codes associated with a service, reducing the risk of incorrectly included or omitted codes in the final definition. In our results, we explore use of the application on three example services in 2016 US Medicare claims for patients aged over 65: knee arthroscopy, spinal fusion procedures and urinalysis. Readers can access the application at https://kelsey209.shinyapps.io/hsrdefbuilder/ and the code at https://github.com/kelsey209/hsrdefbuilder.
health informatics
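The tree-classifier step can be sketched in a few lines of R. Everything below is hypothetical (simulated claims, invented diagnosis-code flags); the actual tool is implemented in SAS with an R Shiny front end, and its classifier details may differ.

library(rpart)
set.seed(9)

n <- 2000
has_service <- rbinom(n, 1, 0.5)                                # service present on claim?
dx_M17 <- rbinom(n, 1, ifelse(has_service == 1, 0.45, 0.05))    # related diagnosis flag (invented)
dx_R31 <- rbinom(n, 1, 0.10)                                    # unrelated diagnosis flag (invented)
claims <- data.frame(has_service = factor(has_service), dx_M17, dx_R31)

tree <- rpart(has_service ~ dx_M17 + dx_R31, data = claims, method = "class")
tree$variable.importance   # diagnosis codes most associated with the service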
10.1101/2022.03.14.22272342
Tongue Coating in COVID-19 Patients: A Case-Control Study
It has been suggested that COVID-19 patients have distinct tongue features, which may help to monitor the development of their condition. To determine whether there are tongue coating features specific to COVID-19, this study investigated the difference in tongue coating between COVID-19 subjects and subjects with other acute inflammatory diseases characterized by fever. Tongue images taken with smartphones from three age-matched groups, namely, a COVID group (n=92), a non-COVID febrile group (n=92), and a normal control group (n=92), were analyzed by two blinded raters according to a tongue coating scoring scheme, which assessed the levels of thick fur, slimy or greasy fur, discolored fur and a composite index of tongue coating. Compared with control, significant increases in all coating indexes were found in the COVID group (P<0.001), as well as in the non-COVID febrile group (P<0.001). However, no difference was observed between the COVID and non-COVID febrile groups for any of the coating indexes measured. In COVID-19 subjects, scores of coating indexes had weak but significant correlations with certain inflammatory biomarkers, including the WBC count and the neutrophil-lymphocyte ratio. It is concluded that COVID-19 subjects have pathological tongue coating patterns that are associated with inflammatory responses, and these coating patterns can help to indicate the direction of disease development.
infectious diseases
10.1101/2022.03.16.22272031
Feasibility and Sensitivity of Saliva GeneXpert MTB/RIF Ultra for Tuberculosis Diagnosis in Adults in Uganda
The objective of this prospective, observational study, carried out at China-Uganda Friendship Hospital-Naguru in Kampala, Uganda, was to determine the performance of GeneXpert MTB/RIF Ultra (Xpert) testing on saliva for active tuberculosis (TB) disease among consecutive adults undergoing diagnostic evaluation. We calculated sensitivity to determine the diagnostic performance in comparison to the composite reference standard of Mycobacterium tuberculosis liquid and solid cultures on two spot sputum specimens. Xpert Ultra on saliva had a sensitivity of 90% (95% confidence interval [CI], 81-96%); this was similar to that of sputum fluorescence smear microscopy (FM) at 87% (95% CI, 77-94%). Sensitivity was 24% lower (95% CI for difference 2-48%, p=0.003) among persons living with HIV (71%, 95% CI 44-90%) than among persons living without HIV (95%, 95% CI 86-99%) and 46% lower (95% CI for difference 14-77%, p<0.0001) among sputum microscopy-negative patients (50%, 95% CI 19-81%) than among sputum microscopy-positive patients (96%, 95% CI 87-99%). Semi-quantitative Xpert grade was higher in sputum than in paired saliva samples from the same patient. In conclusion, saliva specimens appear to be feasible and similarly sensitive to sputum for active TB diagnosis using molecular testing, suggesting promise as a non-sputum diagnostic test for active TB in high-burden settings.
infectious diseases
10.1101/2022.03.16.22272374
Deep Learning-Derived Myocardial Strain
BackgroundEchocardiographic strain measurements require extensive operator experience and have significant inter-vendor variability. This study sought to develop an automated deep learning strain (DLS) analysis pipeline and validate its performance both externally and prospectively. MethodsThe DLS pipeline takes blood pool semantic segmentation results from the EchoNet-Dynamic network and derives longitudinal strain from the frame-by-frame change in the length of the left ventricular endocardial contour. The pipeline was developed using 7,465 echocardiographic videos, with preprocessing steps optimized to determine the change in endocardial length from systole to diastole. It was evaluated on a large external retrospective dataset and was prospectively compared with manual within-patient acquisition of repeated measures by two experienced sonographers and two separate vendor speckle-tracking methods on different machines. ResultsIn the external validation set, the DLS method maintained moderate agreement (intraclass correlation coefficient (ICC) 0.58, p<0.001) with a bias of -2.33% (limits of agreement -10.61 to 5.93%). The absolute difference in measurements was independent of subjective image quality (β: 0.12, SE: 0.10, p=0.21). Compared to readers on repeated measures, our method has reduced variability (standard deviation: 1.35 vs. 2.55%) and better inter-vendor agreement (ICC: 0.45 vs. 0.29). ConclusionsThe DLS measurement provides lower variance than human measurements and similar quantitative results. The method is rapid, consistent, vendor-agnostic, publicly released, and robust across a wide range of imaging qualities.
cardiovascular medicine
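The strain definition used by the pipeline, as described, reduces to a simple formula on contour lengths. A hedged R sketch (frame selection and units are assumptions):

# Longitudinal strain (%) from frame-by-frame endocardial contour length,
# taking the longest contour as end-diastole and the shortest as end-systole.
strain_pct <- function(contour_len) {
  l_ed <- max(contour_len)
  l_es <- min(contour_len)
  100 * (l_es - l_ed) / l_ed              # negative values indicate shortening
}
strain_pct(c(17.8, 17.2, 16.1, 15.0, 14.6, 15.3, 16.7))   # toy contour lengths (cm)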
10.1101/2022.03.09.22272142
Prevalence of intraventricular haemorrhage and determinants of its early outcomes among preterm neonates at the newborn unit of a teaching hospital in Western Kenya
Intraventricular haemorrhage (IVH), screened for using cranial ultrasound (cUS), is a major cause of morbidity and mortality among preterm neonates. Despite the adverse neonatal outcomes attributed to IVH, there are limited studies conducted in sub-Saharan Africa on IVH occurrence and the determinants of its early outcomes. This study assessed the proportion of neonates with IVH, its determinants and the early outcomes of preterm neonates with the condition. This was a prospective descriptive study conducted at the newborn unit of Moi Teaching and Referral Hospital in Western Kenya from March 2020 to March 2021. Neonates were sampled systematically, and their clinical characteristics and those of their mothers were collected from medical records. A cUS screening was conducted on the third and fourteenth day of life, while hydrocephalus was screened for using serial weekly measurements of head circumference and mortality was assessed within the first 28 days of life. Bivariate analysis was used to test for associations between patient characteristics, occurrence of IVH and early outcomes. Confounders were controlled for using a multivariate logistic regression model. We enrolled 201 preterm neonates, of whom 105 (52.2%) were male and 68 (33.8%) had IVH. Among neonates with IVH, 46 (67.6%) had mild while 22 (32.4%) had severe IVH. Antenatal steroids significantly reduced the risk of IVH, while extreme/very low gestational age and extremely low birthweight significantly increased the risk of IVH two- and three-fold, respectively. Neonates with thrombocytopenia and those on mechanical ventilation were significantly more likely to be diagnosed with IVH. Early outcomes of IVH were hydrocephalus (4.0%) and mortality (19.1%). Intraventricular haemorrhage was seen in one-third of the neonates enrolled, with the majority presenting with lower IVH grades. The IVH risk increased significantly among neonates with very low gestational age and extremely low birthweight but was lowered by antenatal steroid use. Mortality rates were significantly higher among neonates with thrombocytopenia.
pediatrics
10.1101/2022.03.15.22272437
The Influence of Contextual Factors on Maternal Healthcare Utilization in sub-Saharan Africa: A Scoping Review of Multilevel Models
IntroductionSub-Saharan Africa still bears the heaviest burden of maternal mortality among the regions of the world, with an estimated 201,000 (66%) women dying annually due to pregnancy- and childbirth-related complications. Utilisation of maternal healthcare services, including antenatal care, skilled delivery and postnatal care, contributes to a reduction of maternal and child mortality and morbidity. Factors influencing use of maternal healthcare occur at both the individual and contextual levels. The objective of this study was to systematically examine the evidence regarding the influence of contextual factors on uptake of maternal health care in sub-Saharan Africa. Materials and MethodsThe scoping review process involved searching several electronic databases, identifying articles corresponding to the inclusion criteria and selecting them for extraction and analysis. Peer-reviewed multilevel studies on maternal healthcare utilisation in sub-Saharan Africa published between 2007 and 2019 were selected. Two reviewers independently evaluated each study for inclusion, and conflicts were resolved by consensus. ResultsWe synthesised 34 studies that met the inclusion criteria out of a total of 1,654 initial records. Most of the studies were single-country and cross-sectional in nature and involved two-level multilevel logistic regression models. The findings confirm the important role played by contextual factors in determining use of available maternal health care services in sub-Saharan Africa. The levels of educational status, poverty, media exposure, autonomy, empowerment and access to health facilities within communities are some of the major drivers of use of maternal health services. ConclusionsThis review highlights the potential of addressing high-level factors in bolstering maternal health care utilisation in sub-Saharan Africa. Societies that prioritise the betterment of social conditions in communities and address problematic gender norms will have a good chance of improving maternal health care utilisation and reducing maternal and child mortality. Better multilevel explanatory mechanisms that incorporate social theories are recommended for understanding use of maternal health care services in sub-Saharan Africa.
sexual and reproductive health
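The two-level models the review summarizes typically look like the following R sketch, with simulated individuals nested in communities and invented covariates (lme4::glmer is one common choice; the reviewed studies used a variety of software).

library(lme4)
set.seed(11)

n <- 3000
d <- data.frame(
  community = factor(sample(1:150, n, replace = TRUE)),   # level-2 units
  education = rbinom(n, 1, 0.4),
  media     = rbinom(n, 1, 0.5)
)
u <- rnorm(150, sd = 0.5)                                 # community random effects (simulated)
d$anc4 <- rbinom(n, 1, plogis(-0.5 + 0.6 * d$education + 0.4 * d$media +
                              u[as.integer(d$community)]))  # e.g., 4+ antenatal visits

fit <- glmer(anc4 ~ education + media + (1 | community), data = d, family = binomial)
VarCorr(fit)   # between-community variance in the log-odds of utilisation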
10.1101/2022.03.15.22272436
Assessment of prescribing in under-five pediatric outpatients in Nigeria: an application of the POPI (Pediatrics: Omission of Prescription and Inappropriate Prescription) tool
BackgroundEnsuring the right drug for the right clinical condition in children under five years of age will dramatically reduce morbidity and mortality rates in developing countries, where these values are alarmingly high. This study evaluated prescribing for children under the age of five attending pediatric outpatient clinics at three Central hospitals in Delta State, Nigeria, using the Pediatrics: Omission of Prescriptions and Inappropriate Prescription (POPI) tool. MethodsThis was a prospective descriptive study of prescriptions made for children aged 0 to 59 months who attended the clinics between August and November 2018. Prescriptions were evaluated using the POPI tool; the occurrence of potentially inappropriate prescriptions and prescribing omissions was reported as percentages, and the types of inappropriate prescriptions and prescription omissions were reported as frequencies. Relationships between inappropriate prescriptions, omissions of prescriptions, and the categorical variables of age group and sex were examined; p < .05 was considered significant. ResultsA total of 1,327 prescriptions from the three centers were analyzed. There was a preponderance of infants (>1 month-12 months of age) in the study (43.0%) and a somewhat even gender distribution. Overall, 29.8% of the prescriptions studied had at least one occurrence of inappropriate prescription. The use of H1 antagonists with sedative or atropine-like effects accounted for the majority of inappropriate prescriptions (49.5%), while the prescription of drinkable amoxicillin or other antibiotics in doses other than mg was the most frequent omission of prescription (97.2%). There was a significant relationship between the occurrence of inappropriate prescription and age group (p < 0.001). ConclusionThe occurrence of inappropriate prescriptions and omissions of prescriptions was high and effectively detected by the POPI tool. Measures should be taken to improve prescribing in order to reduce morbidity and mortality in children below five years.
pediatrics
10.1101/2022.03.17.22272569
Toxoplasma gondii infection and insomnia: a case control seroprevalence study
AimTo determine the association between Toxoplasma gondii (T. gondii) infection and insomnia. Materials and methodsThrough an age- and gender-matched case-control study, 577 people with insomnia (cases) and 577 people without insomnia (controls) were tested for anti-T. gondii IgG and IgM antibodies using commercially available enzyme immunoassays. ResultsAnti-T. gondii IgG antibodies were found in 71 (12.3%) of 577 individuals with insomnia and in 46 (8.0%) of 577 controls (OR=1.62; 95% CI: 1.09-2.39; P=0.01). Men with insomnia had a higher (16/73: 21.9%) seroprevalence of T. gondii infection than men without insomnia (5/73: 6.8%) (OR: 3.81; 95% CI: 1.31-11.06; P=0.009). The rate of high (>150 IU/ml) anti-T. gondii IgG antibody levels in cases was higher than that in controls (OR=2.21; 95% CI: 1.13-4.31; P=0.01). Men with insomnia had a higher (8/73: 11.0%) rate of high anti-T. gondii IgG antibody levels than men without insomnia (0/73: 0.0%) (P=0.006). The rate of high anti-T. gondii IgG antibody levels in cases >50 years old (11/180: 6.1%) was higher than that (3/180: 1.7%) in controls of the same age group (OR: 3.84; 95% CI: 1.05-14.00; P=0.05). No difference in the rate of IgM seropositivity between cases and controls was found (OR=1.33; 95% CI: 0.57-3.11; P=0.50). ConclusionsResults of this seroepidemiological study suggest that infection with T. gondii is associated with insomnia. Men older than 50 years with T. gondii exposure might be prone to insomnia. Further research is needed to confirm the association of T. gondii seropositivity and serointensity with insomnia.
epidemiology
10.1101/2022.03.17.22272535
Comparison of the 2021 COVID-19 'Roadmap' Projections against Public Health Data
Control and mitigation of the COVID-19 pandemic in England has relied on a combination of vaccination and non-pharmaceutical interventions (NPIs) such as closure of non-essential shops and leisure activities, closure of schools, social distancing, mask wearing, testing followed by isolation, and general public health awareness. Some of these measures are extremely costly (both economically and socially), so it is important that they are relaxed promptly but without overwhelming already burdened health services. The eventual policy was a Roadmap of four relaxation steps throughout 2021, taking England from lockdown to the cessation of all restrictions on social interaction, with a minimum of five weeks between each step to allow the data to reflect the changes in restrictions and the results to be analysed. Here we present a retrospective analysis of our six Roadmap documents generated in 2021 to assess the likely impacts of future relaxation steps in England. In each case we directly compare results generated at the time with more recent public health data (primarily hospital admissions, but also hospital occupancy and deaths) to understand discrepancies and potential improvements. We conclude that, in general, the model projections generated a reliable estimation of medium-term hospital admission trends, with the data points up to September 2021 generally lying within our 95% projection intervals. The greatest uncertainties in the modelled scenarios came from estimates of vaccine efficacy, hampered by the lack of data in the early stages of the Alpha and Delta variant waves, and from assumptions about human behaviour in the face of changing restrictions and changing risk. These are clearly avenues for future study.
epidemiology
10.1101/2022.03.17.22272555
Relative effectiveness of booster vs. 2-dose mRNA Covid-19 vaccination in the Veterans Health Administration: Self-controlled risk interval analysis
ImportancePrevious studies have analyzed the effectiveness of booster mRNA Covid-19 vaccination and compared it with the 2-dose primary series for both the Delta and Omicron variants. Observational studies that estimate effectiveness by comparing outcomes among vaccinated and unvaccinated individuals may suffer from residual confounding and exposure misclassification. ObjectiveTo estimate the relative effectiveness of booster vaccination versus the 2-dose primary series with a self-controlled study design. Design, Setting and ParticipantsWe used the Veterans Health Administration (VHA) Corporate Data Warehouse to identify U.S. Veterans enrolled in care ≥2 years who received the 2-dose primary mRNA Covid-19 vaccine series and an mRNA Covid-19 booster following the expanded recommendation for booster vaccination, and who had a positive SARS-CoV-2 test during the Delta (9/23/2021-11/30/2021) or Omicron (1/1/22-3/1/22) predominant period. Among them, we conducted a self-controlled risk interval (SCRI) analysis to compare the odds of SARS-CoV-2 infection during a booster exposure interval versus a control interval. ExposuresControl interval (days 4-6 post-booster vaccination, presumably prior to gain of booster immunity) and booster exposure interval (days 14-16 post-booster vaccination, presumably following gain of booster immunity). Outcomes and MeasuresPositive PCR or antigen SARS-CoV-2 test. Separately for the Delta and Omicron periods, we used conditional logistic regression to calculate odds ratios (OR) of a positive test for the booster versus control interval and calculated the relative effectiveness of the booster versus the 2-dose primary series as (1-OR)*100. The SCRI approach implicitly controlled for time-fixed confounders. ResultsWe found 42 individuals with a positive SARS-CoV-2 test in the control interval and 14 in the booster exposure interval during the Delta period, and 137 and 66, respectively, in the Omicron period. For the booster versus the 2-dose primary series, the odds of infection were 70% (95%CI: 42%, 84%) lower during the Delta period and 56% (95%CI: 38%, 67%) lower during Omicron. Results were similar for ages <65 and ≥65 years in the Omicron period. In sensitivity analyses among those with prior Covid-19 history and with age stratification, ORs were similar to the main analysis. ConclusionsBooster vaccination was more effective relative to the 2-dose primary series; the relative effectiveness was consistent across age groups and was higher during the Delta-predominant period than during the Omicron period.
epidemiology
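The SCRI contrast can be reproduced approximately from the counts in the abstract. A hedged R sketch using the Omicron-period counts (137 control-interval vs. 66 exposure-interval events); the crude ratio approximates, but does not exactly reproduce, the published adjusted estimate:

library(survival)

n_cases  <- 137 + 66
id       <- rep(seq_len(n_cases), each = 2)
exposure <- rep(c(0, 1), n_cases)                              # 0 = control, 1 = booster interval
event    <- as.integer(ifelse(exposure == 0, id <= 137, id > 137))

fit <- clogit(event ~ exposure + strata(id))                   # each person is their own stratum
or  <- exp(coef(fit)[["exposure"]])
c(OR = or, relative_effectiveness_pct = (1 - or) * 100)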
10.1101/2022.03.17.22272541
Effects of Transcranial Direct Current Stimulation in Children and Young People with Psychiatric Disorders: A Systematic Review
BackgroundTranscranial Direct Current Stimulation (tDCS) has demonstrated benefits in adults with various psychiatric disorders, but its clinical utility in children and young people (CYP) remains unclear. ObjectiveThis PRISMA systematic review used published and ongoing studies to examine the effects of tDCS on disorder-specific symptoms, mood and neurocognition in CYP with psychiatric disorders. MethodsWe searched Medline via PubMed, Embase, PsycINFO via OVID, and ClinicalTrials.gov up to January 2022. Eligible studies involved multiple-session (i.e. treatment) tDCS in CYP (≤25 years old) with psychiatric disorders. Two independent raters assessed the eligibility of studies and extracted data using a custom-built form. ResultsOf 28 eligible studies (participant N = 379), the majority (n = 23) reported an improvement in at least one outcome measure of disorder-specific symptoms. Few studies (n = 9) examined tDCS effects on mood and/or neurocognition, but findings were mainly positive. Overall, tDCS was well tolerated with minimal side effects. Of 11 eligible ongoing studies, many are sham-controlled RCTs (n = 9) with better blinding techniques and a larger estimated participant enrolment (M = 74.7; range: 11-172) than published studies. ConclusionsFindings provide encouraging evidence of tDCS-related improvement in disorder-specific symptoms, but evidence remains limited, especially in terms of mood and neurocognitive outcomes. Ongoing studies appear to be of improved methodological quality; however, future studies should broaden outcome measures to more comprehensively assess the effects of tDCS and develop dosage guidance (i.e. treatment regimens).
pharmacology and therapeutics
10.1101/2022.03.17.22272591
Updated population-level estimates of child restraint practices among children aged 0-12 years in Australia, ten years after introduction of age-appropriate restraint use legislation.
Children must correctly use an appropriate restraint type to receive optimal crash protection. To this end, legislation was introduced in Australia in 2009/2010 requiring correct use of age-appropriate restraints among all children up to age 7 years. Prior to this legislation, population-referenced estimates of restraint practices in NSW, Australia in 2008 showed suboptimal restraint practices to be widespread. There have been no updated estimates since that time. In this study, we aim to update these estimates using data collected approximately 10 years after the introduction of the legislation. A stratified cluster sample was constructed to collect population-referenced observational data from children aged 0-12 years across the Greater Sydney region of NSW. Data collection methods mimicked those used in the 2008 study. Randomly selected children were observed in their own restraints in their own vehicles by trained researchers. Appropriate use was assessed based on compliance with current NSW legislation. Correct use was defined as use exactly as the manufacturer intended. Errors were categorised as minor or serious based on their likely impact on crash protection, and on whether they related to installing the restraint in the vehicle or securing the child in the restraint. Almost all children were appropriately restrained (99.3%, 95%CI 98.4-100). However, less than half were correctly restrained (Any error = 40.4%, 95%CI 32.6-48.3; Any serious error = 43.8%, 95%CI 35.0-52.7). Both securing- and installation-related errors remain common. There were variations in the extent of errors in use across restraint types, with the fewest errors among seat belt users and the most among booster seat users. While, overall, the odds of incorrect use decreased with increasing age, the relationship between age and incorrect use was different among booster seat users, with decreasing odds for every year older. While there was no attempt to test the impact of the legislation on restraint use, it was clear there had been substantial increases in the proportion of children using appropriate forms of restraint over the last 10 years. However, there appears to have been no real change in rates of correct use.
public and global health
10.1101/2022.03.17.22272588
Distinct genome-wide DNA methylation and gene expression signatures in classical monocytes from African American patients with systemic sclerosis
BackgroundSystemic sclerosis (SSc) is a multisystem autoimmune disorder that has an unclear etiology and disproportionately affects women and African Americans. Despite this, African Americans are dramatically underrepresented in SSc research. Additionally, monocytes show heightened activation in SSc and in African Americans relative to European Americans. In this study, we sought to investigate DNA methylation and gene expression patterns in classical monocytes in a health disparity population. MethodsClassical monocytes (CD14++CD16-) were FACS-isolated from 34 self-reported African American women. Samples from 12 SSc patients and 12 healthy controls were hybridized on the MethylationEPIC BeadChip array, while RNA-seq was performed on 16 SSc patients and 18 healthy controls. Analyses were performed to identify differentially methylated CpGs (DMCs), differentially expressed genes (DEGs), and CpGs associated with changes in gene expression (eQTM analysis). ResultsWe observed modest DNA methylation and gene expression differences between cases and controls. The genes harboring the top DMCs, the top DEGs, and the top eQTM loci were enriched for metabolic processes. Genes involved in immune processes and pathways showed a weak upregulation in the transcriptomic analysis. While many genes were newly identified, several others have been previously reported as differentially methylated or expressed in different blood cells from patients with SSc, supporting their potential dysregulation in SSc. ConclusionsWhile contrasting with results found in several blood cell types in largely European-descent groups, the results of this study reflect the variation in DNA methylation and gene expression that exists among individuals of different genetic, clinical, social, and environmental backgrounds. This finding supports the importance of including diverse, well-characterized patients to understand the different roles of DNA methylation and gene expression variability in the dysregulation of classical monocytes in diverse populations, which might help explain the health disparities.
rheumatology
10.1101/2022.03.16.22272527
Severe Acute Respiratory Coronavirus-2 Antibody and T cell response after a third vaccine dose in hemodialysis patients compared with healthy controls
Hemodialysis patients (HD patients) have a high health risk from Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) infection. In this study, we assess the impact of a third vaccine dose (3D) on antibody levels and T cell response in HD patients and compare the results to those of a healthy control group. We conducted a prospective cohort study of 60 HD patients and 65 healthy controls. All of them received two doses of the Comirnaty mRNA vaccine and a third mRNA vaccine dose (Spikevax or Comirnaty). The SARS-CoV-2 S antibody response in all participants was measured 6 months after the second vaccine dose and 6 to 8 weeks after administration of the 3D. We also assessed IFN-γ secretion 6-8 weeks after the 3D in 24 healthy controls, 17 HD patients with a normal antibody response and 20 HD patients with a low or no antibody response after the second dose. The groups were compared using univariate quantile regressions and multiple analyses. Adverse effects of the vaccines were assessed via a questionnaire. After the 3D, the SARS-CoV-2-specific antibody and IFN-γ titers of most HD patients were comparable to those of healthy controls. A subgroup of HD patients who had shown a diminished antibody response after the first two vaccine doses developed a significantly lower antibody and IFN-γ response compared to responder HD patients and controls, even after the 3D. A new strategy is needed to protect this patient group from severe COVID-19.
nephrology
10.1101/2022.03.17.22272589
A multiplexed Cas13-based assay with point-of-care attributes for simultaneous COVID-19 diagnosis and variant surveillance
Point-of-care (POC) nucleic acid detection technologies are poised to aid gold-standard technologies in controlling the COVID-19 pandemic, yet shortcomings in the capability to perform critically needed complex detection--such as multiplexed detection for viral variant surveillance--may limit their widespread adoption. Herein, we developed a robust multiplexed CRISPR-based detection assay using LwaCas13a and PsmCas13b to simultaneously diagnose SARS-CoV-2 infection and pinpoint the causative SARS-CoV-2 variant of concern (VOC)--including the globally dominant VOCs Delta (B.1.617.2) and Omicron (B.1.1.529)--all while maintaining high levels of accuracy upon the detection of multiple SARS-CoV-2 gene targets. The platform has several attributes suitable for POC use: premixed, freeze-dried reagents for easy use and storage; convenient direct-to-eye or smartphone-based readouts; and a one-pot variant of the multiplexed detection. To reduce reliance on proprietary reagents and enable sustainable use of such a technology in low- and middle-income countries, we locally produced and formulated our own recombinase polymerase amplification reaction and demonstrated its equivalent efficiency to commercial counterparts. Our tool--CRISPR-based detection for simultaneous COVID-19 diagnosis and variant surveillance which can be locally manufactured--may enable sustainable use of CRISPR diagnostic technologies for COVID-19 and other diseases in POC settings.
infectious diseases
10.1101/2022.03.17.22272538
From Delta to Omicron SARS-CoV-2 variant: switch to saliva sampling for higher detection rate
BackgroundReal-time reverse transcription polymerase chain reaction (RT-PCR) testing on a nasopharyngeal swab is the current standard for SARS-CoV-2 virus detection. Since collection of this sample type is experienced as uncomfortable by patients, saliva and oropharyngeal swab collections should be considered as alternative specimens. ObjectivesEvaluation of the relative performance of oropharyngeal swab, nasopharyngeal swab and saliva for RT-PCR-based SARS-CoV-2 Delta (B.1.617.2) and Omicron (B.1.1.529) variant detection. Study designNasopharyngeal swab, oropharyngeal swab and saliva were collected from 246 adult patients who presented for SARS-CoV-2 testing at the screening centre in Ypres (Belgium). RT-PCR SARS-CoV-2 detection was performed on all three sample types separately. Variant type was determined for each positive patient using whole genome sequencing or the Allplex SARS-CoV-2 variants I and II Assay. Results and conclusionsSaliva is superior to nasopharyngeal swab for the detection of the Omicron variant. For the detection of the Delta variant, nasopharyngeal swab and saliva can be considered equivalent specimens. Oropharyngeal swab is the least sensitive sample type and shows little added value when collected in addition to a single nasopharyngeal swab. Highlights: (i) saliva is the preferred sample type for Omicron variant (B.1.1.529) detection; (ii) nasopharyngeal swab and saliva are equivalent for Delta variant (B.1.617.2) detection; (iii) oropharyngeal swab is the least preferred sample type for SARS-CoV-2 detection.
infectious diseases
10.1101/2022.03.17.22272572
Molecular and spatial epidemiology of HCV among people who inject drugs in Boston, Massachusetts
Integration of genetic, social network, and spatial data has the potential to improve understanding of transmission dynamics in established HCV epidemics. Sequence data were analyzed from 63 viremic people who inject drugs recruited in the Boston area through chain referral or time-location sampling. HCV subtype 1a was most prevalent (57.1%), followed by subtype 3a (33.9%). The phylogenetic distances between sequences were no shorter when comparing individuals within versus across networks, nor when stratified by location or time of first injection. Social and spatial networks, while interesting, may be too ephemeral to inform transmission dynamics when the date and location of infection are indeterminate.
infectious diseases
10.1101/2022.03.17.22272570
Detection of Bourbon virus specific serum neutralizing antibodies in human serum in Missouri, USA
Bourbon virus (BRBV) was first discovered in 2014 in a fatal human case. Since then it has been detected in the Amblyomma americanum tick in the states of Missouri and Kansas in the United States. Despite the high prevalence of BRBV in ticks in these states, very few human cases have been reported, and the true infection burden of BRBV in the community is unknown. Here, we developed two virus neutralization assays, a VSV-BRBV pseudotyped rapid assay and a BRBV focus reduction neutralization assay, to assess the seroprevalence of BRBV neutralizing antibodies in human sera collected in 2020 in St. Louis, Missouri. Of 440 human serum samples tested, three (0.7%) were able to potently neutralize both VSV-BRBV and wild-type BRBV. These findings suggest that human infections with BRBV are more common than previously recognized. ImportanceSince the discovery of Bourbon virus (BRBV) in 2014, a total of five human cases have been identified, including two fatal cases. BRBV is thought to be transmitted by the Lone Star tick, which is prevalent in the East, Southeast, and Midwest of the United States (US). BRBV has been detected in ticks in Missouri and Kansas, and serological evidence suggests that it is also present in North Carolina. However, the true infection burden of BRBV in humans is not known. In the present study, we developed two virus neutralization assays to assess the seroprevalence of BRBV-specific antibodies in human sera collected in 2020 in St. Louis, Missouri. We found that a small subset of individuals is seropositive for neutralizing antibodies against BRBV. Our data suggest that BRBV infection in humans is more common than previously thought.
infectious diseases
10.1101/2022.03.09.22272531
Pattern and risk factors of menstrual regulation service use among ever-married women in Bangladesh: evidence from a nationally representative cross-sectional survey
BackgroundAround 47% of the total conceptions in Bangladesh are unintended, which leads to several adverse consequences, including maternal and child mortality. Availability of menstrual regulation (MR) services and their use can help women end a conception at an earlier stage, thereby reducing the adverse consequences of unintended pregnancy. We explored the prevalence and determinants of MR service knowledge and use among ever-married women in Bangladesh. MethodsData from a total of 20,127 ever-married women in the 2017 Bangladesh Demographic and Health Survey were analyzed. Knowledge about menstrual regulation (MR) and its use were our outcomes of interest. Several individual-, household- and community-level factors were considered as explanatory variables. Multilevel mixed-effects Poisson regression models were used to determine the factors associated with MR service knowledge and use in Bangladesh. ResultsAround 71% of the women analyzed reported knowing about MR services, while only 7% reported having used them within three years of the survey date. MR service knowledge was higher among women with increased age and education and those engaged in income-generating employment. Knowledge about MR services was also higher among women whose husbands were more educated and engaged in physical work or business. Rural women, and women who resided in communities with lower poverty and higher illiteracy, reported lower knowledge of MR services. MR service use was higher among older women, women whose husbands were either physical workers or businessmen, women with a greater number of children, and women living in communities with lower poverty. Lower use of MR services was found among women who resided in the Chattogram, Khulna, and Mymensingh divisions and among women who resided in communities with higher illiteracy. ConclusionUse of MR services is very low in Bangladesh. This could contribute to the adverse consequences of unintended pregnancy, including higher maternal and child mortality. Policies and programs are needed to make women aware of MR.
public and global health
10.1101/2022.03.17.22272516
SARS-CoV-2 virus dynamics in recently infected people - data from a household transmission study
We used daily real-time reverse-transcription polymerase chain reaction (rRT-PCR) results from 67 cases of SARS-CoV-2 infection in a household transmission study to examine the trajectory of cycle threshold (Ct) values, an inverse correlate of viral RNA concentration, from nasal specimens collected between April 2020 and May 2021. Ct values varied over the course of infection, across RT-PCR platforms, and by participant age. Specimens collected from children and adolescents showed higher Ct values, and adults aged ≥50 years showed lower Ct values, than adults aged 18-49 years. Ct values were lower on days when participants reported experiencing symptoms.
infectious diseases
10.1101/2022.03.16.22272508
COVID-19 Disease Model with Reservoir of Infection : Cleaning Surfaces and Wearing Masks Strategies
At the end of 2019, a new coronavirus (called SARS-CoV-2) appeared in China, spreading from China to the rest of the world at the beginning of 2020 and causing a new disease called COVID-19. It is well known that COVID-19 spreads between humans through the air, by coughing and sneezing, or by contact. In this paper, we develop a mathematical SIR model which takes into account disease transmission by coughing and sneezing and the latency period, which is represented by time delays. We prove that the latency period has no effect on the dynamics of the propagation and transmission of the coronavirus, and that at a critical value of the basic reproduction number a transcritical bifurcation may occur: the disease disappears for values smaller than this critical value and persists otherwise. Finally, we carry out numerical simulations to illustrate our theoretical results. Our study confirms that cleaning surfaces and wearing masks are controlling strategies for limiting the propagation of COVID-19.
epidemiology
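As a companion to the abstract above, here is a minimal, illustrative sketch of an SIR model extended with an environmental reservoir compartment. It is not the authors' delayed system, and all parameter names and values (beta_direct, cleaning, and so on) are assumptions chosen only to show how surface cleaning and mask wearing enter such a model.

```python
def simulate(days=180, dt=0.1, beta_direct=0.25, beta_reservoir=0.05,
             shedding=0.5, cleaning=0.7, gamma=1/7, N=1e6):
    # State: susceptible, infectious, recovered, environmental reservoir W.
    S, I, R, W = N - 1.0, 1.0, 0.0, 0.0
    for _ in range(int(days / dt)):
        # Direct (airborne) plus reservoir-mediated (contact) infection.
        new_inf = (beta_direct * I / N + beta_reservoir * W / (1 + W)) * S * dt
        recov = gamma * I * dt
        S -= new_inf
        I += new_inf - recov
        R += recov
        # Infectious people shed onto surfaces; cleaning decays the reservoir.
        W += (shedding * I / N - cleaning * W) * dt
    return round(R)  # final epidemic size

# Lower beta_direct (masks) or higher cleaning both shrink the outbreak:
print(simulate(cleaning=0.7))
print(simulate(cleaning=2.0))
print(simulate(beta_direct=0.15))
```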
10.1101/2022.03.17.22272411
Assessing performance and clinical usefulness in prediction models with survival outcomes: practical guidance for Cox proportional hazards models
Risk prediction models need thorough validation to assess their performance. Validation of models for survival outcomes poses challenges due to the censoring of observations and the varying time horizons at which predictions can be made. We aim to describe measures to evaluate predictions and the potential improvement in decision making from survival models based on Cox proportional hazards regression. As a motivating case study, we consider the prediction of the composite outcome of recurrence and death (the event) in breast cancer patients following surgery. We developed a Cox regression model with three predictors, as in the Nottingham Prognostic Index, in 2982 women (1275 events within 5 years of follow-up) and externally validated this model in 686 women (285 events within 5 years). The improvement in performance was assessed following the addition of circulating progesterone as a prognostic biomarker. The model predictions can be evaluated across the full range of observed follow-up times or for the event occurring by a fixed time horizon of interest. We first discuss recommended statistical measures that evaluate model performance in terms of discrimination, calibration, or overall performance. Further, we evaluate the potential clinical utility of the model to support clinical decision making. SAS and R code is provided to illustrate apparent, internal, and external validation, both for the three-predictor model and when adding progesterone. We recommend the proposed set of performance measures for transparent reporting of the validity of predictions from survival models.
epidemiology
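The workflow described above can be sketched in a few lines of Python. The paper itself provides SAS and R code; the example below is an independent re-creation with simulated data and hypothetical predictor names, showing external discrimination and fixed-horizon risk prediction for a Cox model.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Simulated stand-in for a three-predictor prognostic dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "size": rng.normal(size=500),
    "grade": rng.integers(1, 4, 500),
    "nodes": rng.poisson(2, 500),
    "time": rng.exponential(5, 500),    # years to event or censoring
    "event": rng.integers(0, 2, 500),   # 1 = recurrence/death observed
})
train, test = df.iloc[:350], df.iloc[350:]

cph = CoxPHFitter().fit(train, duration_col="time", event_col="event")

# Discrimination on external data: Harrell's C. Higher partial hazard
# means shorter predicted survival, hence the minus sign.
c = concordance_index(test["time"], -cph.predict_partial_hazard(test),
                      test["event"])

# Fixed-horizon prediction: risk of the event within 5 years.
surv = cph.predict_survival_function(test, times=[5.0])
risk_5y = 1 - surv.iloc[0]
print(round(c, 3), risk_5y.head())
```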
10.1101/2022.03.18.22272415
Diagnostic performance of non-invasive fibrosis markers for chronic hepatitis B in sub-Saharan Africa: a Bayesian individual patient data meta-analysis
ObjectiveIn sub-Saharan Africa, hepatitis B is the principal cause of liver disease. Non-invasive biomarkers of liver fibrosis are needed to identify patients requiring antiviral treatment. We assessed the aspartate aminotransferase-to-platelet ratio index (APRI), the gamma-glutamyl transferase-to-platelet ratio (GPR) and FIB-4 for diagnosing significant fibrosis and cirrhosis in an individual patient data (IPD) meta-analysis. DesignIn total, 3,549 patients from 12 cohorts of HBsAg-positive individuals in 8 sub-Saharan African countries were included. Transient elastography was used as a reference test for cirrhosis (>12.2 kPa), excluding patients who were pregnant, had hepatitis C, D, or HIV co-infection, were on hepatitis B therapy, or had acute hepatitis. A bivariate Bayesian IPD model was fitted with patient-level covariates and study-level random effects. ResultsAPRI and GPR had the best discriminant performance (area under the receiver operating characteristic curve 0.81 and 0.82) relative to FIB-4 (0.77) for cirrhosis. The World Health Organization (WHO) recommended APRI threshold of ≥2.0 was associated with a sensitivity and specificity (95% credible interval) of 16.5% (12.5-20.5) and 99.5% (99.2-99.7) for cirrhosis. For APRI, we identified an optimised rule-in threshold for cirrhosis (cut-off 0.65) with a sensitivity and specificity of 56.2% (50.5-62.2) and 90.0% (89.0-91.0), and an optimised rule-out threshold (cut-off 0.36) with a sensitivity and specificity of 80.6% (76.1-85.1) and 64.3% (62.8-65.8). ConclusionsThe WHO-recommended APRI threshold of 2.0 is too high to diagnose cirrhosis in sub-Saharan Africa. We identified new, optimised rule-in and rule-out thresholds for cirrhosis, with direct consequences for treatment guidelines in this setting.
gastroenterology
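The three indices compared in this meta-analysis are computed from routine laboratory values. Below is a small Python helper implementing their standard published formulas (ULN = upper limit of normal; platelets in 10^9/L; the example patient values are hypothetical).

```python
import math

def apri(ast, ast_uln, platelets):
    # APRI = (AST / ULN) * 100 / platelet count
    return (ast / ast_uln) * 100 / platelets

def gpr(ggt, ggt_uln, platelets):
    # GPR = (GGT / ULN) / platelet count * 100
    return (ggt / ggt_uln) / platelets * 100

def fib4(age, ast, alt, platelets):
    # FIB-4 = age * AST / (platelets * sqrt(ALT))
    return age * ast / (platelets * math.sqrt(alt))

# Example patient; thresholds such as APRI >= 2.0 (WHO) or the optimised
# rule-in/rule-out cut-offs of 0.65 / 0.36 from the abstract then apply.
print(apri(ast=80, ast_uln=40, platelets=150))    # -> 1.33
print(gpr(ggt=90, ggt_uln=60, platelets=150))
print(fib4(age=45, ast=80, alt=60, platelets=150))
```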
10.1101/2022.03.16.22271100
Effectiveness of whole virus COVID-19 vaccine at protecting health care personnel against SARS-CoV-2 infections in Lima, Peru
In February 2021, Peru launched a vaccination campaign among healthcare personnel using the inactivated whole-virus BBIBP-CorV COVID-19 vaccine. Two doses of BBIBP-CorV vaccine are recommended, 21 days apart. Data on BBIBP-CorV vaccine effectiveness will inform the use and acceptance of vaccination with this vaccine. We evaluated BBIBP-CorV vaccine effectiveness among an existing multi-year influenza cohort at two hospitals in Lima. We analyzed data on 290 participants followed between February and May 2021. Participants completed a baseline questionnaire and provided weekly self-collected anterior nasal swabs tested for SARS-CoV-2 by rRT-PCR for sixteen weeks. We performed multivariable logistic regression models adjusting for pre-selected characteristics (age, sex, exposure to COVID-19 patients, work in an intensive care unit or emergency department, BMI, and exposure time in days). BBIBP-CorV vaccine effectiveness was calculated for the period after the two-week post-vaccination window as (1 - odds ratio for testing SARS-CoV-2 positive) x 100%. SARS-CoV-2 was detected by rRT-PCR in 25 (9%) participants during follow-up (February-May 2021). The follow-up period ranged from 1 to 11 weeks (median: 2 weeks). The adjusted vaccine effectiveness against SARS-CoV-2 infection was estimated as 95% (95% CI: 70%, 99%) among fully vaccinated cohort participants and 100% (95% CI: 88%, 100%) among those partially vaccinated. During the study period, vaccination of healthcare personnel with the BBIBP-CorV vaccine was effective at reducing SARS-CoV-2 infections in the weeks immediately following vaccination. This information can be used to support vaccination efforts in the region, especially among those who may be concerned about its effectiveness.
public and global health
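The vaccine-effectiveness formula quoted above is straightforward to apply once the adjusted odds ratio is available. A minimal sketch follows, with simulated data and hypothetical covariates standing in for the cohort's real variables; with real data the odds ratio would come from the fitted adjusted model exactly as described.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 290
X = sm.add_constant(np.column_stack([
    rng.integers(0, 2, n),     # fully vaccinated (exposure of interest)
    rng.normal(40, 10, n),     # age
    rng.integers(0, 2, n),     # works in ICU/emergency department
]))
y = rng.integers(0, 2, n)      # rRT-PCR positive during follow-up

fit = sm.Logit(y, X).fit(disp=0)
odds_ratio = np.exp(fit.params[1])   # coefficient on vaccination status
ve = (1 - odds_ratio) * 100          # VE = (1 - OR) x 100%
print(f"adjusted VE = {ve:.1f}%")
```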
10.1101/2022.03.17.22272479
Characteristics of mental health stability during COVID-19: An online survey with people residing in the Liverpool City Region
Background and aimDespite the significant mental health challenges the COVID-19 pandemic and its associated government measures have presented, research has shown that the majority of people have adapted and coped well. The aim of this study was i) to determine the proportion of people with mental stability and volatility during the pandemic in a North West urban environment sample, and ii) to establish group differences in psychosocial variables. Mental stability and volatility refer to the extent to which individuals reported change in levels of common mental health symptoms over the course of 12 weeks. MethodA two-wave online survey (N = 163) was used to explore the psychological and social impact of the pandemic on relatively disadvantaged neighbourhoods within the Liverpool City Region over 12 weeks. Kruskal-Wallis tests with post-hoc comparisons were used to determine how people with mental stability and volatility differed on factors categorised within an ecological framework of resilience (individual, community, societal, and COVID-19 specific). ResultsIndividuals categorised as stable in terms of mental health symptoms (63.6%) had better mental and physical health, were more tolerant of uncertainty, and reported higher levels of resilience and wellbeing compared to very volatile people (19.8%). These individuals also reported feeling less socially isolated, experienced a greater sense of belonging to their community, which was more likely to fulfil their needs, and were more likely to have access to green space nearby for their recommended daily exercise. Stable individuals did not report worrying any more during the pandemic than usual and tolerated uncertainty better compared to those in the volatile group. ImplicationsThe majority of participants in this sample were mentally stable and coping well with the challenges presented by the pandemic. The resilience of these individuals was related to key place-based factors such as a strong sense of community and usable local assets. The data showcase the role of place-based social determinants in supporting resilience and thereby highlight key preventative measures for public mental health during times of international crisis.
public and global health
10.1101/2022.03.16.22272509
Patient-reported respiratory outcome measures in the recovery of adults hospitalised with COVID-19: A systematic review and meta-analysis
BackgroundAcute COVID-19 clinical symptoms have been clearly documented, but long-term functional and symptomatic recovery from COVID-19 is less well described. MethodsA systematic review and meta-analysis were conducted to describe patient-reported outcome measures (PROMs) in adults at least 8 weeks post hospital discharge for COVID-19. Comprehensive database searches in accordance with the PRISMA statement were carried out up to 31/05/2021. Data were narratively synthesized, and a series of meta-analyses were performed using the random-effects inverse-variance method. ResultsAcross 49 studies from 14 countries, with 2-12 months of follow-up, the most common persisting symptom reported was fatigue, with meta-analysis finding 36.6% (95% CI 27.6 to 46.6, n=14) reporting it at 2-4 months, decreasing slightly to 32.5% still reporting it at >4 months (95% CI 22.6 to 44.2, n=15). This was followed by dyspnoea. A modified MRC score (mMRC) ≥1 was reported in 48% (95% CI 30 to 37, n=5) at 2-4 months, reducing to 32% (95% CI 22 to 43, n=7) at >4 months. Quality of life (QOL) as assessed by the EQ-5D-5L VAS remained reduced at >4 months (73.6, 95% CI 68.1 to 79.1, n=6). Hospitalisation with COVID-19 also resulted in persisting sick leave, changes in scope of work, and continued use of primary and secondary healthcare. ConclusionThe symptomatic and functional impact of COVID-19 continues to be felt by patients months after discharge from hospital. This widespread morbidity points towards a multi-disciplinary approach to aid functional recovery.
respiratory medicine
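For readers unfamiliar with the random-effects inverse-variance method named above, a compact DerSimonian-Laird implementation is sketched below. The study-level proportions and variances are illustrative, not the review's actual data.

```python
import numpy as np

def dl_pool(theta, var):
    """DerSimonian-Laird random-effects pooling of study estimates."""
    w = 1 / var                                   # fixed-effect weights
    theta_fe = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fe) ** 2)       # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(theta) - 1)) / c)   # between-study variance
    w_re = 1 / (var + tau2)                       # random-effects weights
    pooled = np.sum(w_re * theta) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Illustrative study proportions (e.g. % reporting fatigue) and variances:
props = np.array([0.30, 0.41, 0.35, 0.39])
var = np.array([0.0008, 0.0015, 0.0011, 0.0020])
print(dl_pool(props, var))   # pooled proportion with 95% CI
```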
10.1101/2022.03.15.22272441
Selection of appropriate reference genes for normalization of qRT-PCR based gene expression analysis in SARS-CoV-2, and Covid-associated Mucormycosis infection
Selection of reference genes in quantitative PCR is critical to determining accurate and reliable mRNA expression. Despite this knowledge, not a single study has investigated the expression stability of housekeeping genes to determine their suitability as reference genes in SARS-CoV-2 or COVID-19-associated mucormycosis (CAM) infections. Herein, we address these gaps by investigating the expression stability of the nine most commonly used housekeeping genes, including TBP, CypA, B2M, 18S, PGC-1, GUSB, HPRT-1, β-ACTIN and GAPDH, in patients of varying severity (asymptomatic, mild, moderate and severe). We observed significant differences in the expression of candidate genes across the disease spectrum. Next, using various statistical algorithms (delta Ct, NormFinder, BestKeeper, RefFinder and GeNorm), we observed that CypA demonstrated the most consistent expression across patients of varying severity and emerged as the most suitable reference gene in COVID-19 and CAM infections. Incidentally, the most commonly used reference gene, GAPDH, showed the greatest variation and was found to be the least suitable. Lastly, a comparative evaluation of NRF2 expression was performed following normalization with GAPDH and CypA to highlight the relevance of an appropriate reference gene. Our results reinforce the idea that housekeeping genes should be selected only after validation, especially during infections.
infectious diseases
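One of the stability algorithms cited above, the comparative delta-Ct method, can be sketched compactly: a candidate gene is ranked by how little its Ct differences against every other candidate vary across samples. The Ct data below are simulated, and the gene list is a subset for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
genes = ["CypA", "GAPDH", "B2M", "TBP"]
ct = rng.normal(22, 1.0, size=(40, 4))   # samples x candidate genes

def delta_ct_stability(ct):
    """Mean SD of pairwise delta-Ct values; lower = more stable."""
    n = ct.shape[1]
    score = np.zeros(n)
    for i in range(n):
        sds = [np.std(ct[:, i] - ct[:, j]) for j in range(n) if j != i]
        score[i] = np.mean(sds)
    return score

# Rank candidates from most to least stable:
for g, s in sorted(zip(genes, delta_ct_stability(ct)), key=lambda t: t[1]):
    print(f"{g}: {s:.3f}")
```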
10.1101/2022.03.17.22272389
Suppression of de novo antibody responses against SARS-CoV2 and the Omicron variant after mRNA vaccination and booster in patients with B cell malignancies undergoing active treatment, but maintenance of pre-existing antibody levels against endemic viruses.
The impact of SARS-CoV-2 vaccination in cancer patients remains incompletely understood given the heterogeneity of cancer and cancer therapies. We assessed vaccine-induced antibody responses to the SARS-CoV-2 Omicron (B.1.1.529) variant in 57 patients with B cell malignancies, with and without active B cell-targeted therapy. Ancestral- and Omicron-reactive antibody levels were determined by ELISA and neutralization assays. In over one third of vaccinated patients at the pre-booster timepoint, there were no ELISA-detectable antibodies against either the ancestral strain or the Omicron variant. The lack of vaccine-induced antibodies was seen predominantly in patients receiving active therapy such as anti-CD20 monoclonal antibodies (mAb) or Bruton's tyrosine kinase inhibitors (BTKi). While booster immunization was able to induce detectable antibodies in a small fraction of seronegative patients, the benefit was disproportionately evident in patients not on active therapy. Importantly, in patients with post-booster ELISA-detectable antibodies, there was a positive correlation of antibody levels against the ancestral strain and the Omicron variant. Booster immunization increased overall antibody levels, including neutralizing antibody titers against the ancestral strain and the Omicron variant, but predominantly in patients without active therapy. Furthermore, ancestral-strain neutralizing antibody titers were about 5-fold higher than those against Omicron, suggesting that even with booster administration, there may be reduced protection against the Omicron variant. Interestingly, in almost all patients regardless of active therapy, including those unable to generate detectable antibodies against the SARS-CoV-2 spike, we observed comparable levels of EBV-, influenza-, and common cold coronavirus-reactive antibodies, demonstrating that B cell-targeting therapies primarily impair de novo but not pre-existing antibody levels. These findings suggest that patients with B cell malignancies on active therapy may be at disproportionately higher risk from new versus endemic viral infections, and they suggest utility for vaccination prior to B cell-targeted therapy.
infectious diseases
10.1101/2022.03.17.22272446
Founder effect contributes to the unique pattern of SARS-CoV-2 variant B.1.1.519 emergence in Alaska
Alaska is the geographically largest state in the United States, with the lowest population density and a mix of urban centers and isolated rural communities. Differences in population dynamics between Alaska and the contiguous United States may have contributed to a unique pattern of emergence and spread of SARS-CoV-2 variants observed in early 2021. Here we examined 2,323 virus genomes from Alaska and 278,635 virus genomes from the contiguous United States collected between the first week of December 2020 and the last week of June 2021. We focused on this timeframe because of the notable emergence and spread of the SARS-CoV-2 lineage B.1.1.519 observed in Alaska. We found that this variant was consistently detected in Alaska from the end of January through June of 2021, with a peak prevalence in April of 77.9%, unlike the rest of the United States, where peak prevalence was 4.6%. In Alaska, the earlier emergence of B.1.1.519 coincided with a later peak of Alpha (B.1.1.7) when compared to the rest of the United States. We also observed differences in the composition of lineages and variants over time between the two most populated regions of Alaska. Although there was a modest increase in COVID-19 cases during the peak incidence of B.1.1.519, it is difficult to disentangle the influence of social dynamics on changes in COVID-19 incidence during this time. We suggest that viral characteristics, such as amino acid substitutions in the spike protein, and a founder effect likely contributed to the unique spread of B.1.1.519 in Alaska.
infectious diseases
10.1101/2022.03.17.22272427
An immune dysfunction score for stratification of patients with acute infection based on whole blood gene expression
Dysregulated host responses to infection can lead to organ dysfunction and sepsis, causing millions of deaths globally each year. To alleviate this burden, improved prognostication and biomarkers of response are urgently needed. We investigated the use of whole blood transcriptomics for stratification of patients with severe infection by integrating data from 3,149 samples of sepsis patients and healthy individuals into a gene expression reference map. We used this map to derive a quantitative sepsis response signature (SRSq) score reflective of immune dysfunction and predictive of clinical outcomes, which can be estimated using a 19-gene signature. Finally, we built a machine learning framework, SepstratifieR, to deploy SRSq in sepsis, H1N1 influenza, and COVID-19, demonstrating clinically relevant stratification across diseases and revealing the physiological alterations linking immune dysregulation to mortality. Our method enables early identification of individuals with dysfunctional immune profiles, thus bringing us closer to precision medicine in infection.
intensive care and critical care medicine
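The abstract does not spell out how SRSq is computed from the 19-gene signature, so the sketch below is a generic illustration (explicitly not the authors' derivation) of one common way a gene signature is collapsed into a per-sample score: z-scoring each gene against a reference cohort and projecting onto fixed weights. All data and weights are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
ref = rng.normal(8, 1.5, size=(3149, 19))   # reference expression matrix
weights = rng.normal(size=19)               # hypothetical signature loadings

mu, sd = ref.mean(axis=0), ref.std(axis=0)

def signature_score(sample):
    """Standardise a 19-gene profile against the reference, then project."""
    z = (sample - mu) / sd
    return float(z @ weights)   # a scalar per-patient score

new_patient = rng.normal(8, 1.5, size=19)
print(signature_score(new_patient))
```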
10.1101/2022.03.16.22272519
A log-odds system for waning and boosting of COVID-19 vaccine effectiveness
Immunity to SARS-CoV-2 following vaccination wanes over time in a non-linear fashion, making modelling of likely population impacts of COVID-19 policy options challenging. We observed that it was possible to mathematize non-linear waning of vaccine effectiveness (VE) on the percentage scale as linear waning on the log-odds scale, and developed a random effects logistic regression equation based on UK Health Security Agency data to model VE against Omicron following two and three doses of a COVID-19 vaccine. VE on the odds scale reduced by 47% per month for symptomatic infection after two vaccine doses, lessening to 35% per month for hospitalisation. Waning on the odds scale after triple dose vaccines was 35% per month for symptomatic disease and 19% for hospitalisation. This log-odds system for estimating waning and boosting of COVID-19 VE provides a simple solution that may be used to parametrize SARS-CoV-2 immunity over time parsimoniously in epidemiological models.
epidemiology
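The paper's central device, linear waning on the log-odds scale, can be made concrete in a few lines. In the sketch below, the starting VE is an illustrative placeholder, while the 47%-per-month odds decay echoes the figure quoted in the abstract for symptomatic infection after two doses.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def expit(x):
    return 1 / (1 + np.exp(-x))

ve0 = 0.65                    # assumed VE shortly after the last dose
monthly_odds_decay = 0.47     # 47% per month reduction on the odds scale

# Linear waning in log-odds: odds(VE_t) = odds(VE_0) * (1 - decay)^t
months = np.arange(0, 7)
ve_t = expit(logit(ve0) + months * np.log(1 - monthly_odds_decay))
for m, v in zip(months, ve_t):
    print(f"month {m}: VE = {v:.1%}")
```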
10.1101/2022.03.17.22272529
Protection of prior natural infection compared to mRNA vaccination against SARS-CoV-2 infection and severe COVID-19 in Qatar
BACKGROUNDProtection conferred by natural SARS-CoV-2 infection versus COVID-19 vaccination has not been investigated in rigorously controlled studies. We compared head-to-head protection conferred by natural infection to that from the BNT162b2 (Pfizer-BioNTech) and mRNA-1273 (Moderna) vaccines in Qatar, between February 28, 2020 and March 6, 2022. METHODSTwo national matched retrospective target-trial cohort studies were conducted to compare incidence of SARS-CoV-2 infection and COVID-19 hospitalization and death among those with a documented primary infection to incidence among those with a two-dose primary-series vaccination. Associations were estimated using Cox proportional-hazards regression models. RESULTSThe overall adjusted hazard ratio (AHR) for infection was 0.46 (95% CI: 0.45-0.48) comparing those with a prior infection to those vaccinated with BNT162b2, and 0.51 (95% CI: 0.48-0.53) comparing those with a prior infection to those vaccinated with mRNA-1273. For BNT162b2, the AHR decreased gradually from 0.55 (95% CI: 0.46-0.65) in the fourth month after primary infection/vaccination to 0.31 (95% CI: 0.27-0.37) in the eighth month, while for mRNA-1273, it decreased from 0.80 (95% CI: 0.59-1.07) to 0.35 (95% CI: 0.29-0.41) over the same time period. During the Omicron wave, the AHR was [~]0.50 for BNT162b2 and [~]0.60 for mRNA-1273. The overall AHR for any severe, critical, or fatal COVID-19 (against all variants) was 0.32 (95% CI: 0.10-1.00) for BNT162b2, and 0.58 (95% CI: 0.14-2.43) for mRNA-1273. CONCLUSIONSNatural infection was associated with stronger and more durable protection against infection, regardless of the variant, than mRNA primary-series vaccination. Nonetheless, vaccination remains the safest and optimal tool of protection against infection and COVID-19 hospitalization and death.
epidemiology
10.1101/2022.03.18.22272466
Risk of diabetes and cardiovascular diseases in women with pregnancies complicated by vaginal bleeding: Danish population-based cohort study
ObjectiveTo investigate an association between childbirths affected by vaginal bleeding within 20 gestational weeks and womens subsequent risks of diabetes and cardiovascular diseases. DesignCohort study in Denmark, 1979-2018. SettingDanish nationwide registries. ParticipantsBetween 1979 and 2017, we identified 3,085,955 pregnancies among 1,329,331 women. Of these pregnancies, 69,230 ended in a childbirth affected by vaginal bleeding (VB) within 20 gestational weeks; 2,180,855 ended in a VB-unaffected childbirth; 578,355 ended in a termination, and 258,430 ended in a miscarriage. Main outcome measuresThe outcomes were diabetes types 1 and 2, hypertension, ischaemic heart disease, including myocardial infarction, atrial fibrillation or flutter, heart failure, ischaemic and haemorrhagic stroke, coronary artery bypass grafting, and percutaneous coronary intervention. We computed the incidence rates per 10,000 person-years and cumulative incidences of cardiovascular outcomes at up to 40 years of follow-up. Hazard ratios (HRs) with 95% confidence intervals adjusted for age, calendar year, reproductive history, baseline comorbidities, and socioeconomic factors were computed using conventionally and inverse probability of treatment weighted Cox proportional hazards regression. ResultsAt the end of the follow-up, among women with VB-affected vs VB-unaffected childbirths, the cumulative risks of the outcome events were 4.5% vs 3.5% for diabetes type 1; 13.8% vs 11.1% for diabetes type 2; 25.1% vs 21.4% for hypertension; 10.6% vs 8.2% for ischaemic heart disease, including 2.9% vs 2.6% for myocardial infarction; 5.3% vs 4.4% for atrial fibrillation or flutter; 2.3% vs 2.0% for heart failure; 5.1% vs 4.0% for ischaemic stroke; 1.8% vs 1.4% for haemorrhagic stroke; 0.5% vs 0.4% for coronary artery bypass graft; and 2.4% vs 1.9% for percutaneous coronary intervention. Adjusted HRs were 1.2 to 1.3-fold increased for diabetes types 1 and 2, hypertension, ischaemic heart disease, atrial fibrillation or flutter, heart failure, ischaemic and haemorrhagic stroke as well as a coronary artery bypass grafting. Following womens first pregnancies and in sensitivity analyses, the HRs remained increased. For comparisons of VB-affected childbirths with terminations, HRs were up to 1.3-fold increased for diabetes and 1.1-fold increased for hypertension, ischaemic heart disease, atrial fibrillation or flutter, and haemorrhagic stroke. HRs were unity when we compared having a VB-affected childbirth with having a miscarriage. ConclusionsWomens risks of diabetes types 1 and 2 and multiple cardiovascular outcomes were increased on the relative scale following childbirths affected by VB within 20 gestational weeks when compared with VB-unaffected childbirths and terminations, but not when compared with miscarriages. O_TEXTBOXWHAT IS ALREADY KNOWN ON THIS TOPICO_LIVaginal bleeding is a common complication affecting up to 30% of clinically detected pregnancies. C_LIO_LIAlthough an association between miscarriages and greater risk of cardiovascular events is known, the risk of cardiovascular outcomes following a childbirth complicated by vaginal bleeding not ending in a miscarriage is not well investigated. 
C_LI WHAT THIS STUDY ADDSO_LIThis nationwide registry-based cohort study with up to 40 years of follow-up showed that compared with having a childbirth without vaginal bleeding, having a childbirth with vaginal bleeding within 20 gestational weeks was associated with womans increased risk of diabetes types 1 and 2, hypertension, ischaemic heart disease, including myocardial infarction, atrial fibrillation or flutter, heart failure, coronary artery bypass grafting, ischaemic and haemorrhagic stroke. C_LIO_LIThe cardiovascular risks were increased following womens first pregnancies and when additionally adjusted for smoking and body-mass index. C_LIO_LIThe long-term cardiovascular risk profile of women with vaginal bleeding-affected childbirths was similar to that of women with miscarriages. C_LI C_TEXTBOX
epidemiology
10.1101/2022.03.16.22272523
Mobility was a Significant Determinant of Reported COVID-19 Incidence During the Omicron Surge in the Most Populous U.S. Counties
BackgroundSignificant immune escape by the Omicron variant, along with the emergence of widespread worry fatigue, have called into question the robustness of the previously observed relation between population mobility and COVID-19 incidence. MethodsWe employed principal component analysis to construct a one-dimensional summary indicator of six Google mobility categories. We related this mobility indicator to case incidence among 111 of the most populous U.S. counties during the Omicron surge from December 2021 through February 2022. ResultsReported COVID-19 incidence peaked earlier and declined more rapidly among those counties exhibiting more extensive decline in mobility between December 20 and January 3. Based upon a fixed-effects, longitudinal cohort model, we estimated that every 1-percent decline in mobility between December 20 and January 3 was associated with a 0.63 percent decline in peak incidence during the week ending January 17 (95% confidence interval, 0.40-0.86 percent). Based upon a cross-sectional analysis including mean household size and vaccination participation as covariates, we estimated that the same 1-percent decline in mobility was associated with a 0.36 percent decline in cumulative reported COVID-19 incidence from January 10 through February 28 (95% CI, 0.18-0.54 percent). ConclusionOmicron did not simply sweep through the U.S. population until it ran out of susceptible individuals to infect. To the contrary, a significant fraction managed to avoid infection by engaging in risk-mitigating behaviors. More broadly, the behavioral response to perceived risk should be viewed as an intrinsic component of the natural course of epidemics in humans.
epidemiology
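Constructing a one-dimensional mobility indicator with principal component analysis, as described above, might look like the following sketch. The county-by-category matrix here is simulated; the real analysis used Google mobility data for 111 counties.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Rows = counties, columns = six mobility categories (% change vs baseline).
mobility = rng.normal(size=(111, 6))

z = StandardScaler().fit_transform(mobility)      # standardise categories
pc1 = PCA(n_components=1).fit_transform(z).ravel()

# pc1 is the one-dimensional summary indicator; it can then be related to
# county-level COVID-19 incidence, e.g. in a fixed-effects longitudinal model.
print(pc1[:5])
```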
10.1101/2022.03.17.22272494
Bone morphogenetic protein 2 is a new molecular target linked to nonalcoholic fatty liver disease with potential value as non-invasive screening tool.
Nonalcoholic fatty liver disease (NAFLD) is the commonest cause of chronic liver disease worldwide, with nonalcoholic steatohepatitis (NASH) being its most clinically relevant form. Given the risks associated with taking a liver biopsy, the design of accurate non-invasive methods to identify NASH patients is of utmost importance. BMP2 plays a key role in metabolic homeostasis; however, little is known about its involvement in NAFLD onset and progression. This study aimed to elucidate the impact of BMP2 on NAFLD pathophysiology. Hepatic and circulating levels of BMP2 were quantified in serum and liver specimens from 115 biopsy-proven NAFLD patients and 75 subjects with histologically normal liver (NL). In addition, BMP2 content and release were determined in cultured human hepatocytes upon palmitic acid (PA) overload. We found that BMP2 expression was abnormally increased in livers from NAFLD patients compared with subjects with NL, and this was reflected in higher serum BMP2 levels. Notably, we observed that PA upregulated BMP2 expression and secretion by human hepatocytes. An algorithm based on serum BMP2 levels and variables clinically relevant to NAFLD showed an AUROC of 0.886 (95%CI, 0.83-0.94) to discriminate NASH. We used this algorithm to develop SAN (Screening Algorithm for NASH): a SAN < 0.2 implied a low risk and a SAN ≥ 0.6 indicated a high risk of NASH diagnosis. This proof-of-concept study shows BMP2 to be a new molecular target linked to NAFLD and introduces SAN as a simple and efficient algorithm to screen individuals at risk for NASH.
gastroenterology
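The SAN decision rule quoted in the abstract (< 0.2 low risk, ≥ 0.6 high risk) can be applied as below. How the SAN score itself is computed from serum BMP2 and clinical covariates is not specified here, so the score is treated as a given model output.

```python
def san_category(san: float) -> str:
    """Apply the published SAN thresholds to a precomputed score."""
    if san < 0.2:
        return "low risk of NASH"
    if san >= 0.6:
        return "high risk of NASH"
    return "indeterminate: consider further work-up"

for s in (0.1, 0.4, 0.75):
    print(s, "->", san_category(s))
```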
10.1101/2022.03.16.22271274
How can instructions and feedback with external focus be shaped to enhance motor learning in children? A systematic review
AimThis systematic review investigates the effectiveness of instructions and feedback with an external focus, applied with reduced frequency, self-controlled timing, and/or in visual or auditory form, on the performance of functional gross motor tasks in children aged 2 to 18 with typical or atypical development. MethodsFour databases (PubMed, Web of Science, Scopus, Embase) were systematically searched (last updated May 31st 2021). Inclusion criteria were: 1. children aged 2 to 18 years old; 2. instructions/feedback with external focus applied with reduced frequency, self-controlled timing, and/or in visual or auditory form as intervention, to learn functional gross motor tasks; 3. instructions/feedback with external focus applied with continuous frequency, instructor-controlled timing, and/or in verbal form as control; 4. a performance measure as outcome; 5. (randomized) controlled studies. Article selection and risk of bias assessment (with the Cochrane risk of bias tools) were conducted by two reviewers independently. Due to heterogeneity in study characteristics and incompleteness of the reported data, a best-evidence synthesis was performed. ResultsThirteen studies of low methodological quality were included, investigating the effectiveness of reduced frequencies (n = 8), self-controlled timing (n = 5) and visual form (n = 1) on the motor performance of inexperienced typically (n = 348) and atypically (n = 195) developing children, for acquisition, retention and/or transfer. For accuracy, conflicting or no evidence was found for most comparisons at most time points. However, there was moderate evidence that self-controlled feedback was most effective for retention, and limited evidence that visual analogy was most effective for retention and transfer. For improving quality of movement, there was limited evidence that continuous frequency was most effective for retention and transfer. ConclusionMore methodologically sound studies are needed to draw conclusions about the preferred frequency, timing or form. However, we cautiously advise considering self-controlled feedback, visual instructions, and continuous frequency. RegistrationProspero CRD42021225723
pediatrics
10.1101/2022.03.17.22270808
TARGET-HF: Developing a model for detecting incident heart failure among symptomatic patients in general practice using routine health care data
BackgroundTimely diagnosis of heart failure (HF) is essential to optimize treatment opportunities that improve symptoms, quality of life, and survival. While most patients consult their general practitioner (GP) prior to HF, the early stages of HF may be difficult to identify. An integrated clinical support tool may aid in identifying patients at high risk of HF. We therefore constructed a prediction model using routine health care data. MethodsOur study involved a dynamic cohort of patients (≥35 years) who consulted their GP with either dyspnea and/or peripheral edema within the Amsterdam metropolitan area in 2011-2020. The outcome of interest was incident HF, verified by an expert panel. We developed a regularized multivariable proportional hazards model (TARGET-HF). The model was evaluated with bootstrapping on an isolated validation set and compared to an existing model developed with hospital insurance data as well as to patient age as a sole predictor. ResultsData from 31,905 patients were included (40% male, median age 60), of whom 1,301 (4.1%) were diagnosed with HF over 124,676 person-years of follow-up. Data were allocated to a development (n=25,524) and validation (n=6,381) set. TARGET-HF attained a C-statistic of 0.853 (95% CI:0.834-0.872) on the validation set, providing better discrimination than age alone (C=0.822, 95% CI:0.801-0.842, p<0.001) and the hospital-based model (C=0.824, 95% CI:0.802-0.843, p<0.001). ConclusionThe TARGET-HF model illustrates that routine consultation codes can be used to build a performant model to identify patients at risk of HF at the time of GP consultation.
primary care research
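A regularized proportional hazards model of the TARGET-HF type could be prototyped as follows. The features, penalty strength, and data are placeholders, not the study's routine-care consultation codes.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
# Hypothetical binary/continuous features standing in for consultation codes.
df = pd.DataFrame(rng.normal(size=(1000, 5)),
                  columns=[f"code_{i}" for i in range(5)])
df["age"] = rng.normal(60, 10, 1000)
df["time"] = rng.exponential(4, 1000)     # years to HF or censoring
df["event"] = rng.integers(0, 2, 1000)    # incident HF (expert-verified)

dev, val = df.iloc[:800], df.iloc[800:]

# penalizer adds L2 regularization to the Cox partial likelihood.
cph = CoxPHFitter(penalizer=0.1).fit(dev, "time", "event")

# C-statistic on the held-out set, analogous to the paper's validation.
print(cph.score(val, scoring_method="concordance_index"))
```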
10.1101/2022.03.16.22272262
Survey of nationwide public perceptions regarding acceptance of wastewater used for community health monitoring in the United States
During the coronavirus disease 2019 (COVID-19) pandemic, researchers looked for evidence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RNA in feces dissolved in wastewater samples to assess levels of infection across communities. This activity is known colloquially as sewer monitoring and as wastewater-based epidemiology in academic settings. When used for public health surveillance in the United States, wastewater monitoring is not regulated, although general ethical principles have been described. Prior to this study, no nationwide data existed regarding the public's perceptions of wastewater being used for community health monitoring. Using an online survey distributed to a representative sample of adults in the United States (N=3,083), we investigated the public's perceptions regarding what is monitored, where monitoring occurs, and privacy concerns related to wastewater monitoring as a public health surveillance tool. Further, the Privacy Attitudes Questionnaire assessed respondents' general privacy boundaries. The results suggest that respondents supported using wastewater for health monitoring, but within some bounds. Participants were most likely to support or strongly support monitoring for disease (95%), environmental toxins (94%), and terroristic threats (90%, e.g., anthrax). Two-thirds of respondents endorsed no prohibition on the locations monitored, while the most common category of location respondents wanted prohibited from monitoring was personal residences. Additionally, the findings suggest that those younger in age and living in an urban area were more supportive of wastewater monitoring than older, suburban dwellers.
public and global health
10.1101/2022.03.17.22272421
Direct estimates of absolute ventilation in primary health care clinics in South Africa
BackgroundHealthcare facilities are important sites for the transmission of pathogens spread via bioaerosols, such as Mycobacterium tuberculosis (Mtb). Natural ventilation can play an important role in reducing this transmission. In primary health care (PHC) clinics in low- and middle-income settings, susceptible people, including healthcare workers, are exposed to individuals with infectious pulmonary tuberculosis. We measured rates of natural ventilation in PHC clinics in KwaZulu-Natal and Western Cape provinces, South Africa. Methods and FindingsWe measured ventilation in clinic spaces using a tracer-gas release method. In spaces where this was not possible, we estimated ventilation using data on indoor and outdoor carbon dioxide levels, under reasonable assumptions about occupants' metabolic rates. Ventilation was measured i) under usual conditions and ii) with all windows and doors fully open. We used these ventilation rates to estimate the risk of Mtb transmission using the Wells-Riley equation. We obtained ventilation measurements in 33 clinical spaces in 10 clinics: 13 consultation rooms, 16 waiting areas and 4 other clinical spaces. Under usual conditions, the absolute ventilation rate was much higher in waiting rooms (median 1769 m3/hr, range 338-4815 m3/hr) than in consultation rooms (median 197 m3/hr, range 0-1451 m3/hr). Ventilation was better in permanent than in temporary structures. Compared with usual conditions, fully opening existing doors and windows resulted in a median two-fold increase in ventilation. Our Wells-Riley estimates show that, following sustained exposure, or contact with highly infectious index cases, some risk of Mtb infection may persist even in the best-ventilated clinical spaces unless other components of transmission risk are also addressed. ConclusionsAmong the clinical spaces studied, we observed substantial variation in natural ventilation. Ventilation interventions may have considerable impact on Mtb transmission in this setting. We recommend these form part of a package of infection prevention and control interventions.
occupational and environmental health
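The Wells-Riley equation used in this study relates infection probability to quanta generation and room ventilation: P = 1 - exp(-Iqpt/Q). A small sketch applying it to the median ventilation rates reported above follows; the quanta generation and breathing rates are illustrative assumptions, not values from the paper.

```python
import math

def wells_riley(infectors, quanta_per_hour, breathing_m3_per_hour,
                hours, ventilation_m3_per_hour):
    """P = 1 - exp(-I*q*p*t / Q), the Wells-Riley infection probability."""
    dose = (infectors * quanta_per_hour * breathing_m3_per_hour * hours
            / ventilation_m3_per_hour)
    return 1 - math.exp(-dose)

# One infectious index case, 8 hours of exposure, assumed q = 8 quanta/hr
# and p = 0.6 m3/hr, at the two median ventilation rates from the abstract:
print(wells_riley(1, 8, 0.6, 8, 1769))   # median waiting room
print(wells_riley(1, 8, 0.6, 8, 197))    # median consultation room
```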
10.1101/2022.03.17.22272385
Immunogenomic profiling of lung adenocarcinoma reveals high-grade growth patterns are associated with an immunogenic tumor microenvironment
Lung cancer is the leading cause of cancer-related mortality in the United States. Lung adenocarcinoma (LUAD) is the most common subtype and the most epidemiologically and genetically heterogeneous. Pathologists have routinely observed phenotypic heterogeneity among LUAD primary tumors, as reflected by distinct patterns of tumor growth. However, despite prior reports of an association between the immune-genomic environment and prognosis, this information is not utilized clinically. Herein, applying multiplatform immune-genomic analysis, we investigate two distinct classification systems and demonstrate that high-grade patterns of growth are associated with a distinct immunogenic tumor microenvironment that is predicted to respond favorably to immunotherapy, a finding of growing importance in the era of adjuvant and neoadjuvant immunotherapy.
oncology
10.1101/2022.03.16.22272511
Monoclonal antibody and antiviral therapy for treatment of mild-to-moderate COVID-19 in pediatric patients
The recent surge of the SARS-CoV-2 Omicron variant (B.1.1.529) coincided with new treatment options for mild-to-moderate COVID-19 in high-risk adolescents and adults. In this report we describe patient characteristics, treatment-related process measures, and outcomes associated with early COVID-19 therapy in high-risk pediatric patients.
infectious diseases
10.1101/2022.03.17.22272414
Modelling the impact of non-pharmaceutical interventions on workplace transmission of SARS-CoV-2 in the home-delivery sector
ObjectiveWe aimed to use mathematical models of SARS-CoV-2 to assess the potential efficacy of non-pharmaceutical interventions on transmission in the parcel delivery and logistics sector. MethodsWe developed a network-based model of workplace contacts based on data and consultations from companies in the parcel delivery and logistics sector. We used this in stochastic simulations of disease transmission to predict the probability of workplace outbreaks in this setting. Individuals in the model have different viral load trajectories based on SARS-CoV-2 in-host dynamics, which couple to their infectiousness and test-positive probability over time, in order to determine the impact of testing and isolation measures. ResultsThe baseline model (without any interventions) showed different workplace infection rates for staff in different job roles. Based on our assumptions about contact patterns in the parcel delivery work setting, we found that when a delivery driver was the index case, on average they infected only 0.18 other employees, while for warehouse and office workers this rose to 0.93 and 2.58, respectively. In the large-item delivery setting these figures were 0.83, 0.94, and 1.61, respectively. Nonetheless, the vast majority of simulations resulted in 0 secondary cases among customers (even without contact-free delivery). Our results showed that a combination of social distancing, office staff working from home, and fixed driver pairings (all interventions carried out by the companies we consulted) reduces the risk of workplace outbreaks by 3-4 times. ConclusionThis work suggests that, without interventions, significant transmission could occur in these workplaces, but that they pose minimal risk to customers. We found that identifying and isolating regular close contacts of infectious individuals (i.e. house-shares, carpools, or delivery pairs) is an efficient measure for stopping workplace outbreaks. Regular testing can make these isolation measures even more effective but also increases the number of staff isolating at one time. It is therefore more efficient to combine these measures with social distancing and contact reduction interventions, as these reduce both transmission and the number of people needing to isolate. IMPORTANCEDuring the COVID-19 pandemic the home-delivery sector has been vital to maintaining people's access to certain goods and sustaining levels of economic activity for a variety of businesses. However, this important work necessarily involves contact with a large number of customers as well as colleagues. This means that questions have often been raised about whether enough is being done to keep customers and staff safe. Estimating the potential risk to customers and staff is complex, but here we tackle this problem by building a model of workplace and customer contacts, from which we simulate SARS-CoV-2 transmission. By involving industry representatives in the development of this model, we have simulated interventions that have either been applied or considered, and so the findings of this study are extremely relevant to decisions made in that sector moving forward. Furthermore, we can learn generic lessons from this specific case study which apply to many types of shared workplace, as well as highlighting implications of the highly stochastic nature of disease transmission in small populations.
epidemiology
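The authors' network model is far richer than can be reproduced here, but a toy branching-process sketch using the role-specific mean secondary cases quoted above conveys how outbreak-size distributions arise from a single index case.

```python
import numpy as np

rng = np.random.default_rng(6)
# Mean workplace secondary cases per index case, from the abstract:
mean_secondary = {"driver": 0.18, "warehouse": 0.93, "office": 2.58}

def outbreak_size(role, generations=5):
    """Simulate a Poisson branching process seeded by one index case."""
    total, current = 1, 1
    for _ in range(generations):
        current = rng.poisson(mean_secondary[role] * current)
        total += current
        if current == 0:
            break
    return total

for role in mean_secondary:
    sizes = [outbreak_size(role) for _ in range(10_000)]
    print(role, "mean outbreak size:", round(float(np.mean(sizes)), 2))
```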
10.1101/2022.03.17.22271783
Spatially constrained ICA enables robust detection of schizophrenia from very short resting-state fMRI data
Resting-state functional network connectivity (rsFNC) has shown utility for identifying characteristic functional brain patterns in individuals with psychiatric and mood disorders, providing a promising avenue for biomarker development. However, several factors have precluded widespread clinical adoption of rsFNC diagnostics, namely a lack of standardized approaches for capturing comparable and reproducible imaging markers across individuals, as well as the disagreement on the amount of data required to robustly detect intrinsic connectivity networks (ICNs) and diagnostically relevant patterns of rsFNC at the individual subject level. Recently, spatially constrained independent component analysis (scICA) has been proposed as an automated method for extracting ICNs standardized to a chosen network template while still preserving individual variation. Leveraging the novel scICA methodology, which solves the former challenge of standardized neuroimaging markers, we investigate the latter challenge of identifying a minimally sufficient data length for clinical applications of resting-state fMRI (rsfMRI). Using a dataset containing individuals with schizophrenia and controls (M = 310) as well as simulated rsfMRI, we evaluate the robustness of ICN and rsFNC estimates at both the subject- and group-level, as well as the performance of diagnostic classification, with respect to the length of the rsfMRI time course. We found individual estimates of ICNs and rsFNC from the full-length (5 minute) reference time course were sufficiently approximated with just 3-3.5 minutes of data (r = 0.85, 0.88, respectively), and significant differences in group-average rsFNC could be sufficiently approximated with even less data, just 2 minutes (r = 0.86). The results from the shorter clinical data were consistent with the results from the longer simulated data, reliably estimating both individual- and group-level metrics from the full-length (30 minute) reference with just 3-4 minutes of data (r = 0.85 - 0.88). Furthermore, we found a model trained on 2 minutes of data retained 97-98% classification accuracy relative to that of the full-length reference model. Our results suggest that clinical rsfMRI scans, when decomposed with scICA, could potentially be shortened to just 2-4 minutes without significant loss of individual rsFNC information or classification performance of longer scan lengths.
psychiatry and clinical psychology
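The truncation analysis described above reduces, in its simplest static form, to comparing connectivity matrices estimated from a shortened scan against the full-length reference. A minimal sketch of that comparison follows; it assumes the ICN time courses have already been extracted (e.g., by scICA) and uses random data purely as a placeholder.

```python
import numpy as np

def fnc_vector(ts: np.ndarray) -> np.ndarray:
    """Vectorized upper triangle of the network-by-network correlation
    matrix. `ts` is a (timepoints, networks) array of ICN time courses."""
    corr = np.corrcoef(ts, rowvar=False)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

def truncation_similarity(ts: np.ndarray, n_keep: int) -> float:
    """Correlation between rsFNC estimated from the first n_keep volumes
    and rsFNC estimated from the full-length scan."""
    return float(np.corrcoef(fnc_vector(ts), fnc_vector(ts[:n_keep]))[0, 1])

# Placeholder data: 150 volumes (~5 min at an assumed TR of 2 s), 53 networks.
rng = np.random.default_rng(0)
ts = rng.standard_normal((150, 53))
print(truncation_similarity(ts, n_keep=90))   # keep ~3 min of data
```

On real data, the similarity r as a function of n_keep is what determines the "minimally sufficient" scan length the study reports.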
10.1101/2022.03.18.22272284
Medical Evaluation of Unanticipated Monogenic Disease Risks Identified through Newborn Genomic Screening: Findings from the BabySeq Project
Genomic sequencing of healthy newborns to screen for medically important genetic information has long been anticipated but data around downstream medical consequences are lacking. Among 159 infants randomized to the sequencing arm in the BabySeq Project, an unanticipated monogenic disease risk (uMDR) was discovered in 18 (11.3%). We assessed uMDR actionability by visualizing scores from a modified ClinGen Actionability SemiQuantitative Metric and tracked medical outcomes in these infants for 3-5 years. All uMDRs scored as highly actionable (mean 9, range: 7-11 on a 0-12 scale) and had readily available clinical interventions. In 4 cases, uMDRs revealed unsuspected genetic etiologies for existing phenotypes, and in the remaining 14 cases provided risk stratification for future surveillance. In 8 cases, uMDRs prompted screening for multiple at-risk family members. These results suggest that actionable uMDRs are more common than previously thought and support ongoing efforts to evaluate population-based newborn genomic screening.
genetic and genomic medicine
10.1101/2022.03.16.22272517
Phenotypic variability and Gastrointestinal Manifestations/Interventions for growth in Ogden syndrome (also known as NAA10-related Syndrome)
Our study of 61 children with Ogden syndrome, an X-linked disorder due to NAA10 gene mutations, demonstrated a high prevalence of growth failure, with weight and height percentiles often in the failure-to-thrive (FTT) diagnostic range, although dramatic weight fluctuations and phenotypic variability are evident in the growth parameters of this population. Although never previously explored in depth, the gastrointestinal pathology associated with OS includes feeding difficulties in infancy, dysphagia, GERD/silent reflux, vomiting, constipation, diarrhea, bowel incontinence, and the presence of eosinophils on esophageal endoscopy, in order from most to least prevalent. Additionally, the gastrointestinal symptom profile for children with this syndrome has been expanded to include eosinophilic esophagitis, cyclic vomiting syndrome, Mallory-Weiss tears, abdominal migraine, esophageal dilation, and subglottic stenosis. Although the exact cause of poor growth in OS probands is unclear and the degree to which GI symptomatology contributes to this problem remains uncertain, an analysis including nine G-tube- or GJ-tube-fed probands demonstrates that G/GJ-tubes are overall efficacious with respect to improvements in weight gain and caregiving. The choice to insert a gastrostomy (G) or gastrojejunal (GJ) tube to aid with weight gain is often a challenging decision for parents, who may alternatively choose to rely on oral feeding, caloric supplementation, calorie tracking, and feeding therapy. If OS children are not tracking above the FTT range past 1 year of age despite such efforts, we recommend they promptly undergo G-tube placement to avoid prolonged growth failure. If G-tubes are not inducing weight gain soon after insertion, recommendations include altering the formula, increasing caloric input, or exchanging the G-tube for a GJ-tube by means of a minimally invasive procedure. Future directions could include a prospective natural history study investigating whether G/GJ-tube insertion affects the cognitive trajectory, rate of reaching developmental milestones, and GI symptomatology of OS children in a positive or negative manner.
genetic and genomic medicine
10.1101/2022.03.18.22270280
Ticagrelor responsive platelet genes are associated with platelet function and bleeding
ImportanceTicagrelor inhibits platelet function, prevents myocardial infarction, and causes bleeding. A comprehensive analysis of the on- and off-target platelet effects of ticagrelor that underlie its clinical effects is lacking. ObjectiveTo test the hypothesis that platelet transcripts that change in response to ticagrelor exposure are associated with platelet function or bleeding. DesignA discovery cohort of healthy volunteers was sequentially exposed to aspirin, aspirin washout, and ticagrelor. Messenger RNA sequencing (mRNAseq) of purified platelets was performed pre/post each exposure. We defined the ticagrelor exposure signature (TES) as the ratio of mean expression of up- vs. down-regulated genes under ticagrelor, prioritized based on lasso regression, weighted gene co-expression networks, and isoform-level analyses. A separate healthy cohort was recruited to validate ticagrelor's effects on TES genes measured using NanoString. Platelet function was measured at baseline and in response to ticagrelor exposure in all participants. Self-reported bleeding was systematically queried during periods of ticagrelor exposure. SettingAn early-phase, academic, clinical research unit. ParticipantsSelf-reported healthy volunteers aged > 30 and < 75 years, non-smoking, taking no daily prescribed medications. ExposuresTicagrelor (90 mg twice daily) and aspirin (81 mg/day and 325 mg/day), each for 4 weeks. Main outcomes and measuresExpression levels of platelet messenger RNA, platelet count, mean platelet volume, 9 different measures of ex vivo platelet function (aggregated into a previously described platelet function score), and self-reported bleeding at baseline and after each exposure. ResultsIn the discovery cohort (n = 58, mean age 43, 39 female), platelet mRNAseq identified (FDR < 5%) 1820 up- and 1589 down-regulated genes associated with ticagrelor exposure. We prioritized 84 of these transcripts to calculate a TES score, which was increased by ticagrelor and unaffected by either dose of aspirin. In an independent cohort (n = 49, mean age 44, 24 female) we validated that ticagrelor exposure increases TES scores (beta = 0.48, SE = 0.08, p < 0.0001). In combined analyses of the discovery and validation cohorts, when TES levels were calculated using baseline platelet RNA, higher TES levels were associated with lower levels of baseline platelet function (meta-analysis beta = -0.60, standard error [SE] = 0.29, P = 0.04) and with self-reported bleeding during ticagrelor exposure (meta-analysis beta = 0.28, SE = 0.14, P = 0.04). In contrast, we found no associations between bleeding and baseline platelet count, platelet volume, or platelet function. Conclusions and RelevanceTicagrelor exposure reproducibly and specifically changes a set of platelet transcripts, the baseline levels of which are a biomarker for platelet function and bleeding tendency on ticagrelor. Key Points QuestionWhat are the global effects of ticagrelor exposure on platelets beyond platelet inhibition? FindingsIn an experimental human study of different antiplatelet therapies, we comprehensively characterized the effects of ticagrelor on platelet messenger RNA (mRNA). We found that 4 weeks of 90 mg twice-daily ticagrelor therapy specifically and reproducibly changes the levels of selected platelet mRNAs.
At baseline, volunteers with levels of platelet gene expression that mimicked ticagrelor exposure had lower levels of platelet function and, when exposed to ticagrelor, a greater tendency for minor bleeding. MeaningBy using ticagrelor exposure as a molecular probe, we identified a platelet RNA biomarker that may identify patients at higher risk of ticagrelor-associated bleeding.
cardiovascular medicine
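The TES definition given in the abstract (ratio of mean expression of up- vs. down-regulated signature genes) translates directly into code. The sketch below uses hypothetical gene names and values; whether expression is log-scaled before averaging is an assumption the abstract does not specify.

```python
import numpy as np

def tes_score(expr: dict[str, float],
              up_genes: list[str],
              down_genes: list[str]) -> float:
    """TES as defined in the abstract: ratio of mean expression of
    up- vs. down-regulated signature genes for one sample.
    `expr` maps gene name -> normalized expression value."""
    up = np.mean([expr[g] for g in up_genes])
    down = np.mean([expr[g] for g in down_genes])
    return float(up / down)

# Hypothetical gene names and expression values, for illustration only;
# the study prioritized 84 real transcripts for its signature.
expr = {"UP_A": 8.2, "UP_B": 7.9, "DOWN_A": 5.1, "DOWN_B": 4.8}
print(tes_score(expr, ["UP_A", "UP_B"], ["DOWN_A", "DOWN_B"]))
```

A higher score indicates a platelet transcriptome that looks more "ticagrelor-exposed", which is the quantity the study relates to baseline platelet function and bleeding.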
10.1101/2022.03.14.22272390
AI-Augmented Clinical Decision Support in a Patient-Centric Precision Oncology Registry
PurposexDECIDE is a clinical decision support system, accessed through a web portal and powered by a "Human-AI Team", that offers oncology healthcare providers a set of treatment options personalized for their cancer patients, and provides outcomes tracking through an observational research protocol. This article describes the xDECIDE process and the AI-assisted technologies that ingest semi-structured electronic medical records to identify and then standardize clinico-genomic features, generate a structured personal health record (PHR), and produce ranked treatment options based on clinical evidence, expert insights, and the real-world evidence generated within the system itself. MethodPatients may directly enroll in the IRB-approved pan-cancer XCELSIOR registry (NCT03793088). Patient consent permits data aggregation, continuous learning from clinical outcomes, and sharing of limited datasets within the research team. Assisted by numerous AI-based technologies, the xDECIDE team aggregates and processes patients' electronic medical records, and applies multiple levels of natural language processing (NLP) and machine learning to generate a structured case summary and a standardized list of patient features. Next, a ranked list of treatment options is created by an ensemble of AI-based models, called xCORE. The output of xCORE is reviewed by molecular pharmacologists and expert oncologists in a virtual tumor board (VTB). Finally, a report is produced that includes a ranked list of treatment options and supporting scientific and medical rationales. Treating physicians can use an interactive portal to view all aspects of these data and associated reports, and to continuously monitor their patients' information. The xDECIDE system, including xCORE, is self-improving; feedback improves aspects of the process through machine learning, knowledge ingestion, and outcomes-directed process improvement. ResultsAt the time of writing, over 2,000 patients have enrolled in XCELSIOR, including over 650 with CNS cancers, over 300 with pancreatic cancer, and over 100 each with ovarian, colorectal, and breast cancers. Over 150 VTBs of CNS cancer patients and ~100 VTBs of pancreatic cancer patients have been performed. In the course of these discussions, ~450 therapeutic options have been discussed and over 2,000 consensus rationales have been delivered. Further, over 500 treatment rationale statements ("rules") have been encoded to improve algorithmic decision-making between similar therapeutics or regimens in the context of individual patient features. We have recently deployed the xCORE AI-based treatment-ranking algorithm for validation in real-world patient populations. ConclusionClinical decision support through xDECIDE is available for oncologists to use in their standard practice of medicine by enrolling a patient in the XCELSIOR trial and accessing xDECIDE through its web portal. This system can help identify potentially effective treatment options individualized for each patient, based on sophisticated integration of real-world evidence, human expert knowledge and opinion, and scientific and clinical publications and databases.
health informatics
10.1101/2022.03.16.22271361
Hospital length of stay in a mixed Omicron and Delta epidemic in New South Wales, Australia
AimTo estimate the length-of-stay distributions of hospitalised COVID-19 cases during a mixed Omicron-Delta epidemic in New South Wales, Australia (16 Dec 2021 - 7 Feb 2022), and compare these to estimates produced over a Delta-only epidemic in the same population (1 Jul 2021 - 15 Dec 2021). BackgroundThe distribution of the duration that clinical cases of COVID-19 occupy hospital beds (the length of stay) is a key factor in determining how incident caseloads translate into health-system burden as measured through ward and ICU occupancy. ResultsUsing data on the hospital stays of 19,574 individuals, we performed a competing-risk survival analysis of COVID-19 clinical progression. During the mixed Omicron-Delta epidemic, we found that the mean length of stay for individuals who were discharged directly from ward without an ICU stay was, for age groups 0-39, 40-69 and 70+ respectively, 2.16 (95% CI: 2.12-2.21), 3.93 (95% CI: 3.78-4.07) and 7.61 days (95% CI: 7.31-8.01), compared to 3.60 (95% CI: 3.48-3.81), 5.78 (95% CI: 5.59-5.99) and 12.31 days (95% CI: 11.75-12.95) across the preceding Delta epidemic (15 Jul 2021 - 15 Dec 2021). We also considered data on the stays of individuals within the Hunter New England Local Health District, where it was reported that Omicron was the only circulating variant, and found mean ward-to-discharge lengths of stay of 2.05 (95% CI: 1.80-2.30), 2.92 (95% CI: 2.50-3.67) and 6.02 days (95% CI: 4.91-7.01) for the same age groups. ConclusionsHospital length of stay was substantially reduced across all clinical pathways during the mixed Omicron-Delta epidemic compared to the prior Delta epidemic. These changes in length of stay have contributed to lessened health-system burden despite a greatly increased infection burden, and should be considered in future planning of the response to the COVID-19 pandemic in Australia and internationally.
epidemiology
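A minimal sketch of the competing-risks approach described above, assuming the Python lifelines library and a simplified event coding (0 = censored, 1 = ward-to-discharge, 2 = ICU transfer); the synthetic gamma-distributed stays are placeholders, not the NSW data. The key point is that competing events are modeled explicitly rather than treated as censoring.

```python
import numpy as np
import pandas as pd
from lifelines import AalenJohansenFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    # Days on ward until the first of the competing events (synthetic).
    "days": rng.gamma(shape=2.0, scale=2.0, size=n),
    # 0 = censored, 1 = discharged from ward, 2 = transferred to ICU.
    "event": rng.choice([0, 1, 2], size=n, p=[0.10, 0.75, 0.15]),
})

# Aalen-Johansen estimate of the cumulative incidence of ward-to-discharge,
# treating ICU transfer as a competing risk rather than censoring it.
ajf = AalenJohansenFitter()
ajf.fit(df["days"], df["event"], event_of_interest=1)
print(ajf.cumulative_density_.tail())
```

Fitting this separately by age group and epidemic period would yield pathway-specific length-of-stay distributions analogous to those the study compares.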
10.1101/2022.03.15.22272434
Good recovery of immunization stress-related responses presenting as cluster of stroke-like events following CoronaVac and ChAdOx1 vaccinations
BackgroundImmunization stress-related responses presenting as stroke-like symptoms can develop following COVID-19 vaccination. This study aimed to describe the clinical characteristics of immunization stress-related responses causing stroke-like events following COVID-19 vaccination. MethodsWe conducted a retrospective study of the secondary data of reported adverse events following COVID-19 immunization that presented with neurologic manifestations. Between March 1 and July 31, 2021, we collected and analyzed the medical records of 221 patients diagnosed with stroke-like symptoms following immunization. Demographic and medical data included sex, age, vaccine type, dose sequence, time to event, laboratory data, and recovery status as defined by the modified Rankin score (i.e., defining the degree of severity/dependence, with higher scores indicating greater disability). The affected side was evaluated for associations with the injection site. ResultsIn total, 221 patients were diagnosed with immunization stress-related responses (stroke-like symptoms) following CoronaVac (Sinovac) or ChAdOx1 (AstraZeneca) vaccination. Most patients (83.7%) were women. The median (interquartile range) age of onset was 34 (28-42) years in patients receiving CoronaVac and 46 (33.5-60) years in those receiving ChAdOx1. The median interval between vaccination and symptom onset for each vaccine type was 60 (16-960) min and 30 (8.8-750) min, respectively. Sensory symptoms were the most common symptomatology. Most patients (53.8%) developed symptoms on the left side of the body; 99.5% of the patients receiving CoronaVac and 90% of those receiving ChAdOx1 recovered well (modified Rankin scores ≤2, indicating slight or no disability). ConclusionsImmunization stress-related responses presenting as stroke-like symptoms can develop following COVID-19 vaccination. Symptoms are more likely to occur on the injection side and are transient (i.e., without permanent pathological deficits). Public education and preparedness are important for administering successful COVID-19 vaccination programs.
neurology
10.1101/2022.03.19.22272575
Breakthrough Covid-19 cases despite tixagevimab and cilgavimab (Evusheld™) prophylaxis in kidney transplant recipients
While the combination of casirivimab-imdevimab (Ronapreve, Roche/Regeneron) has been shown to confer satisfactory protection against the Delta variant in kidney transplant recipients (KTRs) with COVID-19, it has limited neutralizing activity against the current variants of concern (Omicron BA.1, BA.1.1 and BA.2). In contrast, the cilgavimab-tixagevimab combination (Evusheld, AstraZeneca) retains partial neutralizing activity against Omicron in vitro. We examined whether pre-exposure prophylaxis with Evusheld can effectively protect KTRs against the Omicron variant. Of the 416 KTRs who received intramuscular prophylactic injections of Evusheld (150 mg tixagevimab and 150 mg cilgavimab), 39 (9.4%) developed COVID-19. With the exception of one patient, all were symptomatic. Hospitalization and admission to an intensive care unit were required for 14 (35.9%) and three patients, respectively. Two KTRs died of COVID-19-related acute respiratory distress syndrome. SARS-CoV-2 sequencing was carried out in 15 cases (BA.1, n = 5; BA.1.1, n = 9; BA.2, n = 1). Viral neutralizing activity of the serum against the BA.1 variant was negative in the 12 tested patients, suggesting that this prophylaxis strategy provides insufficient protection against this variant of concern. Pre-exposure prophylaxis with Evusheld does not adequately protect KTRs against Omicron. Further clarification of the optimal dosing can assist our understanding of how best to harness its protective potential.
infectious diseases
10.1101/2022.03.15.22272430
Omicron Spread in Vaccinated Jurisdictions: a Statistical Study
A correlation hypothesis between the level of vaccination and the rate of spread of the new Covid-19 variant is investigated, based on case and vaccination data from European and North American jurisdictions available in the public domain at a time point past the crest of the Omicron wave in most jurisdictions. Statistical variables describing the rate of spread, based on observed new-case statistics, are defined and discussed. An unexpected moderate positive correlation between vaccination level and the rate of variant spread, as measured by two related parameters, is reported for the dataset in the study. While a weak negative correlation was not statistically excluded, the analysis of the data in the study statistically excluded a moderate to strong negative correlation. The results of this work, if confirmed by further independent studies, could have implications for the development of policies aimed at controlling the future course of the Covid-19 pandemic.
epidemiology
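The core computation in a study like this is a correlation between a jurisdiction-level vaccination variable and a spread-rate parameter, together with an interval that shows which correlation strengths are statistically excluded. A minimal sketch with made-up numbers, using a Fisher z-transform confidence interval:

```python
import numpy as np
from scipy import stats

def pearson_with_ci(x, y, alpha=0.05):
    """Pearson r with a Fisher z-transform confidence interval."""
    r, p = stats.pearsonr(x, y)
    z, se = np.arctanh(r), 1.0 / np.sqrt(len(x) - 3)
    zcrit = stats.norm.ppf(1 - alpha / 2)
    return r, p, (np.tanh(z - zcrit * se), np.tanh(z + zcrit * se))

# Illustrative (made-up) jurisdiction-level data: full-vaccination level
# (%) vs. an observed spread-rate parameter for the Omicron wave.
vax = np.array([55, 60, 62, 68, 70, 74, 78, 81, 85, 90])
rate = np.array([0.9, 1.1, 1.0, 1.3, 1.2, 1.4, 1.3, 1.5, 1.4, 1.6])
r, p, ci = pearson_with_ci(vax, rate)
print(f"r = {r:.2f}, p = {p:.3f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

If the lower bound of the interval sits above, say, -0.3, a moderate-to-strong negative correlation is excluded even when a weak negative one is not, which is the form of conclusion the abstract draws.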
10.1101/2022.03.17.22272543
An investigation into the wellbeing of optometry students.
IntroductionWellbeing is synonymous with positive mental health and impacts the efficacy of student learning. The wellbeing of optometry students is an understudied topic, and the wellbeing of optometry students studying in a blended undergraduate course during the COVID-19 pandemic was also unknown. The aim of this study was to determine the status of optometry students' wellbeing during COVID-19 by identifying their experiences of symptoms of anxiety and depression, to determine to what extent students experience these symptoms, and to identify the specific factors that influenced their wellbeing. MethodologyParticipants from four year groups completed online questionnaires. The response rate was 78.38% (n=87). The Zung self-rating depression and anxiety scale questionnaires were used to determine whether students identified with a given list of symptoms commonly linked to anxiety or depression. Students' wellbeing was further investigated through open-ended questions. ResultsParticipants experienced normal levels of anxiety symptoms, and most participants experienced mild to moderate depression symptoms. Of concern are the severe depression symptoms identified in the third- and fourth-year student cohorts. Mental Health, Academics, Lifestyle, Relationships and Sleep were the main themes identified as influencing the students' general wellbeing. Uncertainty and Physical Health were additional themes influencing wellbeing specifically related to COVID-19. ContributionThis preliminary study into the wellbeing of optometry students was undertaken in a unique timeframe due to the COVID-19 pandemic. The results provide a platform for determining baseline wellbeing in future student cohorts and an exploratory identification of factors causing stress and anxiety. The impact of COVID-19 on the wellbeing of students is evident. ConclusionOptometry students do experience symptoms of depression. COVID-19 has had a considerable impact on their academic experience and their wellbeing.
medical education
10.1101/2022.03.16.22272457
Meta-analysis fine-mapping is often miscalibrated at single-variant resolution
Meta-analysis is pervasively used to combine multiple genome-wide association studies (GWAS) into a more powerful whole. To resolve causal variants, meta-analysis studies typically apply summary-statistics-based fine-mapping methods in the same way they are applied to single-cohort studies. However, it is unclear whether the heterogeneous characteristics of each cohort (e.g., ancestry, sample size, phenotyping, genotyping, or imputation) affect fine-mapping calibration and recall. Here, we first demonstrate that meta-analysis fine-mapping is substantially miscalibrated in simulations when different genotyping arrays or imputation panels are included. To mitigate these issues, we propose a summary-statistics-based QC method, SLALOM, that identifies suspicious loci for meta-analysis fine-mapping by detecting outliers in association statistics based on ancestry-matched local LD structure. Having validated SLALOM's performance in simulations and the GWAS Catalog, we applied it to 14 disease endpoints from the Global Biobank Meta-analysis Initiative and found that 68% of loci showed suspicious patterns that call into question fine-mapping accuracy. These predicted suspicious loci were significantly depleted for having likely causal variants, such as nonsynonymous variants, as a lead variant (2.8x; Fisher's exact P = 6.2 x 10^-4). Compared to fine-mapping results in individual biobanks, we found limited evidence of fine-mapping improvement in the GBMI meta-analyses. Although a full solution requires complete synchronization across cohorts, our approach identifies likely spurious results in meta-analysis fine-mapping. We urge extreme caution when interpreting fine-mapping results from meta-analysis.
genetic and genomic medicine
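Outlier detection of the kind SLALOM performs builds on a DENTIST-style consistency test: given the LD correlation r between a variant and the lead variant, the variant's z-score should be close to r times the lead z-score. The sketch below is our simplified reading of that statistic, not the authors' exact implementation.

```python
from scipy import stats

def dentist_s(z_variant: float, z_lead: float, r: float) -> tuple[float, float]:
    """DENTIST-S-style consistency statistic: under the null, a variant's
    z-score should equal r * z_lead (plus noise), so the statistic below
    follows a chi-square distribution with 1 degree of freedom."""
    t = (z_variant - r * z_lead) ** 2 / (1.0 - r ** 2)
    return t, stats.chi2.sf(t, df=1)

# Illustrative case: a variant in strong LD (r = 0.9) with a lead variant
# (z = 10) but with a much weaker observed z-score -- a suspicious pattern
# that would flag the locus for cautious interpretation.
print(dentist_s(z_variant=3.0, z_lead=10.0, r=0.9))
```

Loci where many variants fail this consistency check are exactly the "suspicious" loci whose fine-mapping results the study recommends treating with caution.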
10.1101/2022.03.16.22272200
An observational study of uptake and adoption of the NHS App in England
ObjectivesThis study aimed to evaluate patterns of uptake and adoption of the NHS App. Data metrics from the NHS App were used to assess acceptability by looking at total app downloads, registrations, appointment bookings, GP health records viewed, and prescriptions ordered. The impact of the UK COVID-19 lockdown and the introduction of the COVID Pass were also explored to assess App usage and uptake. MethodsDescriptive statistics and an interrupted time series analysis were used to examine monthly NHS App metrics at the GP practice level from January 2019 to May 2021 in the population of England. Interrupted time series models were used to identify changes in level and trend in App usage and the different functionalities before and after the first COVID-19 lockdown. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines were used for reporting and analysis. ResultsBetween January 2019 and May 2021, there were a total of 8,524,882 NHS App downloads and 4,449,869 registrations. There was a 4-fold increase in app downloads from April 2021 (650,558 downloads) to May 2021 (2,668,535 downloads), when the COVID Pass feature was introduced. The areas with the highest number of App registrations proportional to the GP patient population were in Hampshire, Southampton and Isle of Wight CCG, and the lowest in Blackburn with Darwen CCG. After the announcement of the first lockdown (March 2020), a positive and significant trend in the number of login sessions was observed, at 602,124 (p=0.004) additional logins a month. National NHS App appointment bookings ranged from 298 to 42,664 bookings per month during the study period. The number of GP health records viewed increased by an average of 371,656 (p=0.001) views per month and the number of prescriptions ordered increased by an average of 19,934 (p<0.001) prescriptions per month following the first lockdown. ConclusionThis analysis has shown that uptake and adoption of the NHS App were positive post-lockdown and increased significantly after the COVID Pass feature was introduced, but further research is needed to measure the extent to which the App improves patient experience and influences health service access and care outcomes.
health informatics
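Interrupted time series models of the kind described above are typically fit as segmented regressions with a level-change indicator and a post-interruption trend term. A minimal sketch with synthetic monthly login counts, assuming statsmodels; the lockdown index and all numbers are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
months = np.arange(29)            # Jan 2019 .. May 2021 (29 months)
lockdown = 14                     # index of March 2020
post = (months >= lockdown).astype(int)
t_since = np.clip(months - lockdown, 0, None)

# Synthetic monthly login counts with a level and trend change at lockdown.
logins = (50_000 + 2_000 * months + 300_000 * post
          + 600_000 * t_since + rng.normal(0, 50_000, months.size))

df = pd.DataFrame({"logins": logins, "t": months,
                   "post": post, "t_since": t_since})
# 'post' estimates the level change; 't_since' the change in monthly trend.
model = smf.ols("logins ~ t + post + t_since", data=df).fit()
print(model.params)
```

The coefficient on t_since plays the role of the study's "additional logins per month" estimates following the first lockdown.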
10.1101/2022.03.15.22271655
Using Machine Learning Algorithms to predict sepsis and its stages in ICU patients
Sepsis is a blood-poisoning condition that occurs when the body shows a dysregulated host response to an infection, causing organ failure or tissue damage that may increase the mortality rate in ICU patients. As sepsis has become a major health problem, the hospital cost of treating it increases every year. Different methods have been developed to monitor sepsis electronically, but it is necessary to predict sepsis as early as possible, before clinical reports or traditional methods can, because every hour of delayed treatment increases the risk of mortality. For the early detection of sepsis, specifically in ICU patients, different machine learning models, i.e., linear learner, multilayer perceptron neural networks, random forest, LightGBM, and XGBoost, were trained on the dataset proposed by the PhysioNet/Computing in Cardiology Challenge 2019. This study shows that machine learning algorithms can accurately predict sepsis at the time of a patient's admission to the ICU by using six vital signs extracted from the records of patients over the age of 18 years. After comparative analysis of the machine learning models, the XGBoost model achieved the highest accuracy of 0.98, precision of 0.97, and recall of 0.98 under the precision-recall curve on the publicly available data. Early prediction of sepsis can help clinicians implement supportive treatments and reduce the mortality rate as well as healthcare expenses.
health informatics
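As a sketch of the modeling setup described above (not the challenge pipeline itself), the code below trains an XGBoost classifier on six synthetic "vital sign" features and reports accuracy, precision, and recall; the features and labels are placeholders standing in for the PhysioNet 2019 variables.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
n = 5_000
# Six synthetic stand-ins for vital signs (e.g., HR, O2Sat, Temp, SBP,
# MAP, Resp) -- placeholders, not the PhysioNet 2019 challenge data.
X = rng.normal(size=(n, 6))
# Synthetic sepsis label loosely coupled to the features.
y = (X @ rng.normal(size=6) + rng.normal(0, 1, n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred),
      "precision:", precision_score(y_te, pred),
      "recall:", recall_score(y_te, pred))
```

On real ICU data, class imbalance is severe, so precision-recall metrics (as the study reports) are more informative than accuracy alone.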