id | title | abstract | category |
---|---|---|---|
10.1101/2022.03.02.22271654 | Personalised structural connectomics for moderate-to-severe traumatic brain injury | Graph theoretical analysis of the structural connectome has been employed successfully to characterise brain network alterations in patients with traumatic brain injury (TBI). However, heterogeneity in neuropathology is a well-known issue in the TBI population, such that group comparisons of patients against controls are confounded by within-group variability. Recently, novel single-subject profiling approaches have been developed to capture inter-patient heterogeneity. We present a personalised connectomics approach that examines structural brain alterations in six chronic patients with moderate-to-severe TBI who underwent anatomical and diffusion magnetic resonance imaging (MRI). We generated individualised profiles of lesion characteristics and network measures (including personalised graph metric GraphMe plots, and nodal and edge-based brain network alterations) and compared them against healthy reference cases (N=12) to assess brain damage qualitatively and quantitatively at the individual level. Our findings revealed clinically significant alterations of brain networks with high variability between patients. Our profiling can be used by clinicians to formulate a neuroscience-guided integrative rehabilitation program for TBI patients, and for designing personalised rehabilitation protocols based on their unique lesion load and connectome. | neurology |
10.1101/2022.03.02.22271635 | Faster post-malnutrition weight gain during childhood is associated with risk of non-communicable disease in adult survivors of severe malnutrition | BackgroundNutritional rehabilitation during severe malnutrition (SM) aims to rapidly restore body size and minimize poor short-term outcomes. We hypothesized that too rapid weight gain during and after treatment might however predispose to cardiometabolic risk in adult life.
MethodsWeight and height during hospitalization and one year post-hospitalization were abstracted from hospital records of children who survived SM. Six definitions of post-malnutrition weight gain/growth were analysed as continuous variables, quintiles and latent classes in age-sex and minimum weight-for-age z-scores-adjusted regression models against adult anthropometry, body composition (DEXA), blood pressure, blood glucose, insulin, and lipids.
Results60% of 278 participants were male, mean (SD) age 28.2 (7.7) years, mean (SD) BMI 23.6 (5.2) kg/m2. Mean admission age for SM was 10.9 months (range 0.3-36.3 months) and 207/270 (77%) were wasted (weight-for-height z-score<-2). During childhood, mean rehabilitation weight gain (SD) was 10.1(3.8) g/kg/day and 0.8(0.5) g/kg/day in the first year post-hospitalization. Rehabilitation weight gain >12.9 g/kg/day was associated with higher adult BMI (difference=0.5kg/m2, 95%CI: 0.1-0.9, p = 0.02), waist circumference (difference=1.4cm, 95%CI: 0.4-2.4, p=0.005), fat mass (difference = 1.1kg, 95%CI: 0.2-2, p=0.02), fat mass index (difference=0.32, 95%CI: -0.0001-0, p=0.05), and android fat mass (difference=0.09 kg, 95%CI: 0.01-0.2, p=0.03). Rehabilitation (g/month) and post-hospitalization (g/kg/month) weight gain were associated with greater lean mass (difference = 0.7 kg, 95% CI: 0.1, 1.3, p = 0.02) (difference=1.3kg, 95% CI: 0.3-2.4, p=0.015) respectively.
ConclusionRehabilitation weight gain exceeding 13g/kg/day was associated with adult adiposity in young, normal-weight adult SM survivors. This raises questions around existing malnutrition weight gain targets and warrants further studies exploring optimal post-malnutrition growth. | nutrition |
10.1101/2022.03.02.22271773 | A Systematic Review and Meta-Analysis of Digital Application use in Clinical Research in Pain Medicine | ImportancePain is a silent global epidemic impacting approximately a third of the population. Pharmacological and surgical interventions are primary modes of treatment. Cognitive/behavioural management approaches and interventional pain management strategies are approaches that have been used to assist with the management of chronic pain. Accurate data collection and reporting treatment outcomes are vital to addressing the challenges faced. In light of this, we conducted a systematic evaluation of the current digital application landscape within chronic pain medicine.
ObjectiveThe primary objective was to consider the prevalence of digital application usage for chronic pain management. These digital applications included mobile apps, web apps, and chatbots.
Data SourcesWe conducted searches on PubMed and ScienceDirect for studies that were published between 1st January 1990 and 1st January 2021.
Study SelectionOur review included studies that involved the use of digital applications for chronic pain conditions. There were no restrictions on the country in which the study was conducted. Only studies that were peer-reviewed and published in English were included. Four reviewers had assessed the eligibility of each study against the inclusion/exclusion criteria. Out of the 84 studies that were initially identified, 38 were included in the systematic review.
Data Extraction and SynthesisThe AMSTAR guidelines were used to assess data quality. This assessment was carried out by 3 reviewers. The data were pooled using a random-effects model.
Main Outcome(s) and Measure(s)Before data collection began, the primary outcome was to report on the prevalence of digital application usage for chronic pain conditions. We also recorded the type of digital application studied (e.g. mobile application, web application) and, where the data was available, the prevalence of pain intensity, pain inferences, depression, anxiety, and fatigue.
Results38 studies were included in the systematic review and 22 studies were included in the meta-analysis.
The digital interventions were categorised into web applications, mobile applications, and chatbots, with pooled prevalence of 0.22 (95% CI -0.16, 0.60), 0.30 (95% CI 0.00, 0.60) and -0.02 (95% CI -0.47, 0.42) respectively. Pooled standard mean differences for symptomatologies of pain intensity, depression, and anxiety symptoms were 0.25 (95% CI 0.03, 0.46), 0.30 (95% CI 0.17, 0.43) and 0.37 (95% CI 0.05, 0.69) respectively.
A sub-group analysis was conducted on pain intensity due to the heterogeneity of the results (I2=82.86%; p=0.02). After stratifying by country, we found that digital applications were more likely to be effective in some countries (e.g. USA, China) than others (e.g. Ireland, Norway).
Conclusions and RelevanceThe use of digital applications in improving pain-related symptoms shows promise, but further clinical studies would be needed to develop more robust applications. | pain medicine |
10.1101/2022.03.01.22271607 | Prescription opioid-related alterations to amygdalar and thalamic functional networks in chronic knee pain: A retrospective case control resting-state connectivity study. | ObjectiveLong-term opioid use is associated with diminished pain relief, hyperalgesia, and addiction, which are not well understood. This study aimed to characterise opioid-related brain network alterations in chronic pain, focused on the right amygdala and left mediodorsal thalamic nuclei, which play key roles in affective pain processing and are particularly rich in mu opioid receptors (MOR).
SubjectsParticipants on opioid prescriptions with painful knee osteoarthritis and matched non-opioid using control pain participants.
Methods and designSeed-based functional connectivity (FC) maps from resting-state fMRI data were compared between groups.
ResultsWe found right amygdala hyperconnectivity with the posterior default mode network (pDMN) and the dorsomedial prefrontal cortex in opioid users in contrast to anti-correlations in controls. Conversely, opioid users showed predominant hypoconnectivity of the left dorsomedial thalamic seed with the cingulate cortex except for the subgenual part displaying an anti-correlation in opioid users and no association in non-users. Opioid users also showed higher negative affect in exploratory post-hoc tests suggesting a potential contribution of trait anxiety to amygdala-pDMN FC alteration.
ConclusionOpioid use-related hyperconnectivity of the right amygdalar network likely reflects maladaptive mechanisms involving negative affect and network plasticity. Hypoconnectivity of the mediodorsal thalamic nuclei with the anterior and mid cingulate, on the other hand, may reflect impaired resilience in line with previously reported compensatory MOR upregulation. In conclusion, this study provides new insight into possible brain mechanisms underlying adverse effects of prolonged opioids in chronic pain and offers candidate network targets for novel interventions. | pain medicine |
10.1101/2022.03.03.22271834 | The role of alternative splicing in CEP290-related disease pathogenesis | Primary ciliopathies are a group of inherited developmental disorders resulting from defects in the primary cilium. Mutations in CEP290 (Centrosomal protein of 290kDa) are the most frequent cause of recessive ciliopathies (incidence up to 1:15,000). Pathogenic variants span the full length of this large (93.2kb) 54 exon gene, causing phenotypes ranging from isolated inherited retinal dystrophies (IRDs; Leber Congenital Amaurosis, LCA) to a pleiotropic range of severe syndromic multi-organ ciliopathies affecting retina, kidney and brain. Most pathogenic CEP290 variants are predicted null (37% nonsense, 42% frameshift), but there is no clear genotype-phenotype association. Almost half (26/53) of the coding exons in CEP290 are in-phase "skiptic" (or skippable) exons. Variants located in skiptic exons could be removed from CEP290 transcripts by skipping the exon, and nonsense-associated altered splicing (NAS) has been proposed as a mechanism that attenuates the pathogenicity of nonsense or frameshift CEP290 variants. Here, we have used in silico bioinformatic techniques to study the propensity of CEP290 skiptic exons for NAS. We then used CRISPR-Cas9 technology to model CEP290 frameshift mutations in induced pluripotent stem cells (iPSCs) and analysed their effects on splicing and ciliogenesis. We identified exon 36, a hotspot for LCA mutations, as a strong candidate for NAS that we confirmed in mutant iPSCs that exhibited sequence-specific exon skipping. Exon 36 skipping did not affect ciliogenesis, in contrast to a larger frameshift mutant that significantly decreased cilia size and incidence in iPSCs. We suggest that sequence-specific NAS provides the molecular basis of genetic pleiotropy for CEP290-related disorders. | genetic and genomic medicine |
10.1101/2022.03.02.22271795 | Association of clinical outcome assessments of mobility capacity and incident disability in community-dwelling older adults - a systematic review and meta-analysis | IMPORTANCEThe predictive value of common performance-based outcome assessments of mobility capacity on incident disability in activities of daily living in community-dwelling older adults remains uncertain.
OBJECTIVETo synthesize all available research on the association between mobility capacity and incident disability in non-disabled older adults.
DATA SOURCESMEDLINE, EMBASE and CINAHL databases were searched without any limits or restrictions.
STUDY SELECTIONPublished reports of longitudinal cohort studies that estimated a direct association between baseline mobility capacity, assessed with a standardized outcome assessment, and subsequent development of disability, including initially non-disabled older adults.
DATA EXTRACTION AND SYNTHESISData extraction was completed by independent pairs of reviewers. The risk of bias was assessed using the Quality in Prognosis Studies (QUIPS) tool. Random-effect models were used to explore the objective. The certainty of evidence was assessed using GRADE.
MAIN OUTCOME AND MEASURESThe main outcome measures were the pooled relative risks (RR) per one conventional unit per mobility assessment for incident disability in activities of daily living.
RESULTSA total of 40 reports were included, evaluating 85,515 and 78,379 participants at baseline and follow-up, respectively (median mean age: 74.6 years). The median disability rate at follow-up was 12.0% (IQR: 5.4%-23.3%). The overall risk of bias was judged as low, moderate and high in 6 (15%), 6 (15%), and 28 (70%) reports, respectively.
For usual and fast gait speed, the RR per -0.1 m/s was 1.23 (95% CI: 1.18-1.28; 26,638 participants) and 1.28 (95% CI: 1.19-1.38; 8,161 participants), respectively. Each point decrease in Short Physical Performance Battery score increased the risk of incident disability by 30% (RR = 1.30, 95% CI: 1.23- 1.38; 9,183 participants). The RR of incident disability by each second increase in Timed Up and Go test and Chair Rise Test performance was 1.15 (95% CI: 1.09-1.21; 30,426 participants) and 1.07 (95% CI: 1.04-1.10; 9,450 participants), respectively.
CONCLUSIONS AND RELEVANCEAmong community-dwelling non-disabled older adults, a poor mobility capacity is a potent modifiable risk factor for incident disability. Mobility impairment should be mandated as a quality indicator of health for older people.
Key Points Question: What are the associations between clinical outcome assessments of mobility capacity and incident disability in community-dwelling older adults?
FINDINGSIn this systematic review and meta-analysis that included 40 reports and data of 85,515 older adults, the risk ratios of incident disability on activities of daily living were 1.23, 1.30, 1.15, and 1.07 for usual gait speed, Short Physical Performance Battery, Timed Up and Go test, and Chair Rise Test, respectively, per one conventional unit.
MEANINGCommon assessments of mobility capacity may help identify older people at risk of incident disability and should be routinely established in regular health examinations. | geriatric medicine |
10.1101/2022.03.02.22271583 | Cost-effectiveness of the digital therapeutics for essential hypertension | BackgroundHypertension increases the risk of cardiovascular and other diseases. Lifestyle modification is a significant component of nonpharmacological treatments for hypertension. We previously reported the clinical efficacy of digital therapeutics (DTx) in the HERB-DH1 pivotal trial. However, there is still a lack of cost-effectiveness assessments evaluating the impact of prescription DTx. This study aimed to analyze the cost-effectiveness of using prescription DTx in treating hypertension.
MethodsWe developed a monthly cycle Markov model and conducted Monte Carlo simulations using the HERB-DH1 trial data to investigate quality-adjusted life-years (QALYs) and the cost of DTx for hypertension plus guideline-based lifestyle modification consultation treatment as usual (TAU), comparing DTx+TAU and TAU-only groups with a lifetime horizon. The model inputs were obtained from published academic papers, publicly available data, and expert assumptions. The incremental cost-effectiveness ratio (ICER) per QALY was used as the benchmark for cost-effectiveness. We performed probabilistic sensitivity analyses (PSAs) using the Monte Carlo simulation with 2 million sets.
ResultsThe DTx+TAU strategy produced 18.778 QALY and was associated with ¥3,924,075 ($34,122) expected costs, compared with 18.686 QALY and ¥3,813,358 ($33,160) generated by the TAU-only strategy over a lifetime horizon, resulting in an ICER of ¥1,199,880 ($10,434)/QALY gained for DTx+TAU. The monthly cost and attrition rate of DTx for hypertension have a significant impact on ICERs. In the PSA, the probability of the DTx arm being a cost-effective option was 87.8% at a threshold value of ¥5 million ($43,478)/QALY gained.
ConclusionsThe DTx+TAU strategy was more cost-effective than the TAU-only strategy. | health economics |
10.1101/2022.03.03.22271835 | Acute and long-term impacts of COVID-19 on economic vulnerability: a population-based longitudinal study (COVIDENCE UK) | BackgroundSocio-economic deprivation is well recognised as a risk factor for developing COVID-19. However, the impact of COVID-19 on economic vulnerability has not previously been characterised.
ObjectiveTo determine whether COVID-19 has a significant impact on adequacy of household income to meet basic needs (primary outcome) and work absence due to sickness (secondary outcome), both at the onset of illness (acutely) and subsequently (long-term).
DesignMultivariate mixed regression analysis of self-reported data from monthly on-line questionnaires, completed 1st May 2020 to 28th October 2021, adjusting for baseline characteristics including age, sex, socioeconomic status and self-rated health.
Setting and ParticipantsParticipants (n=16,910) were UK residents aged 16 years or over participating in a national longitudinal study of COVID-19 (COVIDENCE UK).
ResultsIncident COVID-19 was independently associated with increased odds of participants reporting household income as being inadequate to meet their basic needs, both acutely (adjusted odds ratio [aOR] 1.39, 95% confidence interval [CI] 1.12 to 1.73) and in the long-term (aOR 1.15, 95% CI 1.00 to 1.33). Exploratory analysis revealed the long-term association to be restricted to those who reported long COVID, defined as the presence of symptoms lasting more than 4 weeks after the acute episode (aOR 1.39, 95% CI 1.10 to 1.77). Incident COVID-19 was associated with increased odds of reporting sickness absence from work in the long-term (aOR 5.29, 95% CI 2.76 to 10.10) but not acutely (aOR 1.34, 95% CI 0.52 to 3.49).
ConclusionsWe demonstrate an independent association between COVID-19 and increased risk of economic vulnerability, both acutely and in the long-term. Taking these findings together with pre-existing research showing that socio-economic disadvantage increases the risk of developing COVID-19, this may generate a vicious cycle of impaired health and poor economic outcomes.
Trial registrationNCT04330599
Summary Box What is already known on this topic:
- Socioeconomic deprivation is recognised as a major risk factor for incidence and severity of COVID-19 disease, mediated via factors including increased occupational and household exposure to SARS-CoV-2 and greater physical vulnerability due to comorbidities.
- The potential for COVID-19 to act as a cause, rather than a consequence, of economic vulnerability has not previously been characterised.

What this study adds:
- We demonstrate an independent association between incident COVID-19 and subsequent self-report of household income being inadequate to meet basic needs, both acutely and in the long term.
- Incident COVID-19 was also associated with increased odds of subsequent self-report of sickness absence from work in the long-term. | health economics |
10.1101/2022.03.02.22271734 | An observational study of the association between COVID-19 vaccination rates and participation in a vaccine lottery | ObjectivesAre financial incentives from entry in a vaccine lottery associated with a higher probability of vaccination for COVID-19?
DesignA cross-sectional study with adjustment for covariates using logistic regression
SettingOctober and November 2021, Australia.
Participants2,375 respondents of the Taking the Pulse of the Nation Survey
InterventionsParticipation in the Million Dollar Vaccination Lottery
Primary and secondary outcome measuresThe proportion of respondents who had any vaccination, a first dose only, or second dose compared to all other respondents
ResultsThose who participated in the lottery were 2.27 times more likely to be vaccinated after the lottery opened on October 1st than those who did not. This was driven by those receiving second doses. Lottery participants were 1.38 times more likely to receive their first dose after October 1st and 2.31 times more likely to receive their second dose after October 1st.
ConclusionsLottery participation is associated with a higher vaccination rate, with this effect dominated by a higher rate of second doses. There is a smaller insignificant difference for those receiving a first dose, suggesting lotteries may not be as effective at reducing vaccine hesitancy, compared to nudging people to get their second dose more quickly.
Strengths and limitations of this study:
- We use a nationally representative sample of individuals.
- We distinguish between the association between lottery participation and first and second doses.
- We adjust for a rich set of individual characteristics associated with vaccination status.
- The strong association for second dose vaccinations may reflect some individuals who had already scheduled their second dose after the lottery opened, potentially leading to an overestimate of the association.

Ethics statementThis study was approved by the University of Melbourne Faculty of Business and Economics & Melbourne Business School Human Ethics Advisory Group (Ref: 2056754.1). | health economics |
10.1101/2022.03.02.22271779 | Prevalence of Bacterial Coinfection and Patterns of Antibiotics Prescribing in Patients with COVID-19: A Systematic review and Meta-Analysis | BackgroundEvidence around prevalence of bacterial coinfection and pattern of antibiotic use in COVID-19 is controversial although high prevalence rates of bacterial coinfection have been reported in previous similar global viral respiratory pandemics. Early data on the prevalence of antibiotic prescribing in COVID-19 indicates conflicting low and high prevalence of antibiotic prescribing which challenges antimicrobial stewardship programmes and increases risk of antimicrobial resistance (AMR).
AimTo determine current prevalence of bacterial coinfection and antibiotic prescribing in COVID-19 patients
Data SourceOVID MEDLINE, OVID EMBASE, Cochrane and MedRxiv between January 2020 and June 2021.
Study EligibilityEnglish language studies of laboratory-confirmed COVID-19 patients which reported (a) prevalence of bacterial coinfection and/or (b) prevalence of antibiotic prescribing with no restrictions to study designs or healthcare setting
ParticipantsAdults (aged ≥18 years) with RT-PCR confirmed diagnosis of COVID-19, regardless of study setting.
MethodsSystematic review and meta-analysis. Proportion (prevalence) data was pooled using random effects meta-analysis approach; and stratified based on region and study design.
ResultsA total of 1058 studies were screened, of which 22 hospital-based studies were eligible, comprising 76,176 COVID-19 patients. Pooled estimates for the prevalence of bacterial co-infection and antibiotic use were 5.62% (95% CI 2.26 - 10.31) and 61.77% (CI 50.95 - 70.90), respectively. Sub-group analysis by region demonstrated that bacterial co-infection was more prevalent in North American studies (7.89%, 95% CI 3.30-14.18).
ConclusionPrevalence of bacterial coinfection in COVID-19 is low, yet prevalence of antibiotic prescribing is high, indicating the need for targeted COVID-19 antimicrobial stewardship initiatives to reduce the global threat of AMR. | infectious diseases |
10.1101/2022.03.02.22271724 | Morning SARS-CoV-2 testing yields better detection of infection due to higher viral loads in saliva and nasal swabs upon waking | BackgroundThe analytical sensitivities of SARS-CoV-2 diagnostic tests span 6 orders of magnitude. Optimizing sample-collection methods to achieve the most reliable detection for a given sensitivity would increase the effectiveness of testing and minimize COVID-19 outbreaks.
MethodsFrom September 2020 to April 2021 we performed a household-transmission study in which participants self-collected samples every morning and evening throughout acute SARS-CoV-2 infection. Seventy mildly symptomatic participants collected saliva and, of those, 29 also collected nasal-swab samples. Viral load was quantified in 1194 saliva and 661 nasal-swab samples using a high-analytical-sensitivity RT-qPCR assay (LOD, 1,000 SARS-CoV-2 RNA copies/mL).
FindingsViral loads in both saliva and nasal-swab samples were significantly higher in morning-collected samples than evening-collected samples after symptom onset. We used these quantitative measurements to infer which diagnostic tests would have detected infection (based on sample type and test analytical sensitivity). We find that morning collection would have resulted in significantly improved detection and that this advantage would be most pronounced for tests with low to moderate analytical sensitivity, which would likely have missed infections if sampling in the evening.
InterpretationCollecting samples for COVID-19 testing in the morning offers a simple and low-cost improvement to clinical diagnostic sensitivity of low- to moderate-analytical-sensitivity tests. The phenomenon of higher viral loads in the morning may also have implications related to when transmission is more likely to occur.
FundingBill & Melinda Gates Foundation, Ronald and Maxine Linde Center for New Initiatives (Caltech), Jacobs Institute for Molecular Engineering for Medicine (Caltech)
Research in Context Evidence before this study: Reliable COVID-19 diagnostic testing is critical to reducing transmission of SARS-CoV-2 and reducing cases of severe or fatal disease, particularly in areas with limited vaccine access or uptake. Saliva and anterior-nares nasal swabs are common sample types; however, different diagnostic tests using these sample types have a range of analytical sensitivities spanning 6 orders of magnitude, with limits of detection (LODs) between 10^2 and 10^8 genomic copy equivalents of SARS-CoV-2 RNA (copies) per mL of sample. Due to limitations in clinical laboratory capacity, many low-resource settings rely on COVID-19 tests that fall on the moderate (LODs of 10^4 to 10^5 copies/mL) to lower (LODs of 10^5 to 10^8 copies/mL) end of this spectrum of analytical sensitivity. Alterations in sample collection methods, including time of sample collection, may improve the performance of these diagnostics.
Added value of this studyThis study quantifies viral loads from saliva and nasal-swab samples that were longitudinally self-collected by symptomatic patients in the morning immediately after waking and in the evening just prior to sleeping throughout the course of acute SARS-CoV-2 infection. The study cohort was composed of mildly or moderately symptomatic individuals (outpatients). This analysis demonstrates significantly higher viral loads in samples collected in the morning, relative to those collected in the evening. When using moderate to lower analytical sensitivity test methods, these loads are inferred to result in significantly better detection of infected individuals in the morning.
Implications of available evidenceThese findings suggest that samples collected in the morning immediately after waking will better detect SARS-CoV-2 infection in symptomatic individuals tested by moderate to lower analytical sensitivity COVID-19 diagnostic tests (LODs at or above 10^4 viral copies per mL of sample), such as many rapid antigen tests currently available. | infectious diseases |
10.1101/2022.03.01.22271473 | Deep learning based electrocardiographic screening for chronic kidney disease | BackgroundUndiagnosed chronic kidney disease (CKD) is a common and usually asymptomatic disorder that causes a high burden of morbidity and early mortality worldwide. We developed a deep learning model for CKD screening from routinely acquired ECGs.
MethodsWe collected data from a primary cohort with 111,370 patients which had 247,655 ECGs between 2005 and 2019. Using this data, we developed, trained, validated, and tested a deep learning model to predict whether an ECG was taken within one year of the patient receiving a CKD diagnosis. The model was additionally validated using an external cohort from another healthcare system which had 312,145 patients with 896,620 ECGs from between 2005 and 2018.
ResultsUsing 12-lead ECG waveforms, our deep learning algorithm achieved discrimination for CKD of any stage with an AUC of 0.77 (95% CI 0.76-0.77) in a held-out test set and an AUC of 0.71 (0.71-0.71) in the external cohort. Our 12-lead ECG-based model performance was consistent across the severity of CKD, with an AUC of 0.75 (0.74-0.77) for mild CKD, an AUC of 0.76 (0.75-0.77) for moderate-severe CKD, and an AUC of 0.78 (0.77-0.79) for ESRD. In our internal health system with 1-lead ECG waveform data, our model achieved an AUC of 0.74 (0.74-0.75) in detecting any stage CKD. In the external cohort, our 1-lead ECG-based model achieved an AUC of 0.70 (0.70-0.70). In patients under 60 years old, our model achieved high performance in detecting any stage CKD with both 12-lead (AUC 0.84 [0.84-0.85]) and 1-lead ECG waveforms (AUC 0.82 [0.81-0.83]).
ConclusionsOur deep learning algorithm was able to detect CKD using ECG waveforms, with particularly strong performance in younger patients and patients with more severe stages of CKD. Given the high global burden of undiagnosed CKD, further studies are warranted to evaluate the clinical utility of ECG-based CKD screening. | cardiovascular medicine |
10.1101/2022.03.03.22271695 | Stability of diagnostic coding of psychiatric outpatient visits across the transition from the second to the third version of the Danish National Patient Registry | BackgroundIn Denmark, data on hospital contacts are reported to the Danish National Patient Registry (DNPR). The ICD-10 main diagnoses from the DNPR are often used as proxies for mental disorders in psychiatric research. With the transition from the second version of the DNPR (DNPR2) to the third (DNPR3) in February-March 2019, the way main diagnoses are coded in relation to outpatient treatment changed substantially. Specifically, in the DNPR2, each outpatient treatment course was labelled with only one main diagnosis. In the DNPR3, however, each visit during an outpatient treatment course is labelled with a main diagnosis. We assessed whether this change led to a break in the diagnostic time-series represented by the DNPR, which would pose a threat to the research relying on this source.
MethodsAll main diagnoses from outpatients attending the Psychiatric Services of the Central Denmark Region from 2013 to 2021 (n=100,501 unique patients) were included in the analyses. The stability of the DNPR diagnostic time-series at the ICD-10 subchapter level was examined by comparing means across the transition from the DNPR2 to the DNPR3.
ResultsWhile the proportion of psychiatric outpatients with diagnoses from some ICD-10 subchapters changed statistically significantly from the DNPR2 to the DNPR3, the changes were small in absolute terms (e.g., +0.9% for F2 - psychotic disorders and +0.1% for F3 - mood disorders).
ConclusionThe change from the DNPR2 to the DNPR3 is unlikely to pose a substantial threat to the validity of most psychiatric research at the diagnostic subchapter level.
Data availabilityDue to restrictions in Danish law for protecting patient privacy, the data used in this study is only available for research projects conducted by employees in the Central Denmark Region following approval from the Legal Office under the Central Denmark Region (in accordance with the Danish Health Care Act [§]46, Section 2). However, similar data can be accessed through Statistics Denmark. Danish institutions can apply for authorization to work with data within Statistics Denmark, and such organisations can provide access to affiliated researchers inside and outside of Denmark.
Significant Outcomes
- A significant administrative change in diagnostic coding of psychiatric outpatient treatment in Denmark did not induce marked destabilisation in the incidence of psychiatric diagnoses or the number of diagnoses received by each patient.
- In the DNPR3, most outpatient treatment courses are labelled with the same main diagnosis (subchapter level) at the first and last visit.

Limitations
- Analyses were performed on data from the Central Denmark Region. While the administrative change was at the national level, and results should generalise, this is not testable from these data.
- Data were obtained from the Business Intelligence Office. While this office receives data from the same source as the National Patient Registry, and the data should be identical, this could not be tested. | epidemiology
10.1101/2022.03.03.22271769 | The Coronavirus Calendar (CoronaCal): a Simplified SARS-CoV-2 Test System for Sampling and Retrospective Analysis | BackgroundThe testing of saliva samples for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RNA has become a useful and common method to diagnose coronavirus disease 2019 (Covid-19). However, there are limited examples of serial testing with correlated clinical metadata, especially in the outpatient setting.
MethodWe developed a method to collect serial saliva samples on ordinary white printer paper, which can subsequently be analyzed for the presence of SARS-CoV-2 RNA using established polymerase chain reaction (PCR) procedures. The collection system consists of a biological diary (CoronaCal) in which subjects dab their saliva onto ovals printed on the paper. The dried samples are covered with a sticker that includes a symptom checklist to create a biological diary. Each sheet of letter paper can accommodate up to 14 serial samples.
ResultsIn a pilot study, ten subjects used CoronaCals for durations of nine to 44 days. SARS-CoV-2 RNA was extracted and detected in CoronaCals from nine of nine people with either Covid-19 symptoms or exposure to someone with Covid-19, and in zero of one asymptomatic person. The CoronaCals were stored for up to 70 days at room temperature during collection and then frozen for up to four months before analysis, suggesting that SARS-CoV-2 RNA is stable once dried onto paper. Interestingly, the temporal pattern of symptoms was not well correlated with SARS-CoV-2 RNA in serial daily collections for up to 44 days. In addition, SARS-CoV-2 positivity was discontinuous over time in most cases but persisted for up to 24 days.
ConclusionsWe conclude that sampling of saliva on simple paper CoronaCals may provide a useful method to study the natural history and epidemiology of Covid-19. The CoronaCal collection and testing method we developed is also easy to implement, inexpensive, non-invasive, and scalable. More broadly, the approach can be used to archive biological samples for retrospective analysis to deepen epidemiological understanding during viral disease outbreaks and to provide information about the natural history of emerging infections. | epidemiology |
10.1101/2022.03.02.22271732 | Quantifying the impact of association of environmental mixture in a type 1 and type 2 error balanced framework | In environmental epidemiology, analysis of environmental mixtures in association with health effects is gaining popularity. Such models mostly focus on inference for hypotheses or on summarizing the strength of association through regression coefficients and corresponding estimates of precision. Nonetheless, when a decision is made against the alternative hypothesis, it becomes increasingly difficult to tease apart whether the decision is influenced by sample size or represents a genuine absence of association, and whether the result warrants further investigation. Similarly, when a decision is made in favour of the alternative hypothesis, a significant association may indicate the influence of a large sample rather than a strong effect. Moreover, disparate type 1 and type 2 errors might render these inferences unreliable. Using Cohen's f2 to evaluate the strength of explanatory associations in a more fundamental way, we herein propose a new concept, optimal impact, to quantify the maximum explanatory association solely contributed by an environmental mixture after controlling for confounders and covariates, such that the type 2 error remains at its minimum. Optimal impact is built upon a novel hypothesis testing procedure in which the rejection region is determined so that type 1 and type 2 errors are balanced. Even when an association does not achieve statistical significance, its optimal impact might deem it meaningful and strong enough for further investigation. This idea extends naturally to estimating sample size when designing studies, by striking a balance between explanatory precision and utility. The properties of this framework are carefully studied and detailed results are established.
A straightforward application of this procedure is illustrated using an exposure-mixture analysis of per- and poly-fluoroalkyl substances and metals with serum cholesterol, using data from the 2017-2018 US National Health and Nutrition Examination Survey. | epidemiology
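The Cohen's f2 effect size that this framework builds on is a standard quantity computable from the R^2 of nested regression models: f2 = (R2_full - R2_reduced) / (1 - R2_full). The sketch below is illustrative only, using synthetic data and ordinary least squares; it is not the paper's optimal-impact procedure, which additionally chooses a rejection region that balances type 1 and type 2 errors.

```python
import numpy as np

def r_squared(X, y):
    # Ordinary least-squares fit; returns the coefficient of determination R^2.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

def cohens_f2(X_reduced, X_full, y):
    # Local effect size of the extra predictors (e.g., an exposure mixture)
    # after the covariates in X_reduced are controlled for.
    r2_full = r_squared(X_full, y)
    r2_reduced = r_squared(X_reduced, y)
    return (r2_full - r2_reduced) / (1 - r2_full)

# Synthetic example: 2 covariates plus a 3-component "mixture".
rng = np.random.default_rng(0)
n = 500
covariates = rng.normal(size=(n, 2))
exposure = rng.normal(size=(n, 3))
y = covariates @ [0.5, -0.3] + exposure @ [0.4, 0.2, 0.1] + rng.normal(size=n)

X_reduced = np.column_stack([np.ones(n), covariates])   # intercept + covariates
X_full = np.column_stack([X_reduced, exposure])          # ... + mixture
f2 = cohens_f2(X_reduced, X_full, y)
```

Because the mixture truly contributes to `y` here, `f2` comes out positive; with a null mixture it would hover near zero, which is the regime where the paper's balanced-error framework is argued to be most informative.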
10.1101/2022.03.02.22271771 | Effects of BA.1/BA.2 subvariant, vaccination, and prior infection on infectiousness of SARS-CoV-2 Omicron infections | BACKGROUNDQatar experienced a large SARS-CoV-2 Omicron (B.1.1.529) wave that started on December 19, 2021 and peaked in mid-January, 2022. We investigated effects of Omicron subvariant (BA.1 and BA.2), previous vaccination, and prior infection on infectiousness of Omicron infections, between December 23, 2021 and February 20, 2022.
METHODSUnivariable and multivariable regression analyses were conducted to estimate the association between the RT-qPCR cycle threshold (Ct) value of PCR tests (a proxy for SARS-CoV-2 infectiousness) and each of the Omicron subvariants, mRNA vaccination, prior infection, reason for RT-qPCR testing, calendar week of RT-qPCR testing (to account for phases of the rapidly evolving Omicron wave), and demographic factors.
RESULTSCompared to BA.1, BA.2 was associated with 3.53 fewer cycles (95% CI: 3.46-3.60), signifying higher infectiousness. Ct value decreased with time since second and third vaccinations. Ct values were highest for those who received their boosters in the month preceding the RT-qPCR test, at 0.86 cycles (95% CI: 0.72-1.00) higher than for unvaccinated persons. Ct value was 1.30 (95% CI: 1.20-1.39) cycles higher for those with a prior infection compared to those without prior infection, signifying lower infectiousness. Ct value declined gradually with age. Ct value was lowest for those who were tested because of symptoms and was highest for those who were tested for travel-related purposes. Ct value was lowest during the exponential-growth phase of the Omicron wave and was highest after the wave peaked and was declining.
CONCLUSIONSThe BA.2 subvariant appears substantially more infectious than the BA.1 subvariant. This may reflect higher viral load and/or longer duration of infection, thereby explaining the rapid expansion of this subvariant in Qatar. | epidemiology |
10.1101/2022.03.03.22271613 | Process evaluation protocol for the SLEEP study: a hybrid digital CBT-I intervention for working people with insomnia | Introduction: The Supporting Employees with Insomnia and Emotional Regulation Problems (SLEEP) pilot uses hybrid digital Cognitive Behavioural Therapy for Insomnia (CBT-I) to help working people address their sleep and emotion regulation problems, using cognitive, behavioural, and psychoeducation techniques. A process evaluation within this trial will improve our understanding of how the intervention brought about change, including identifying barriers and facilitators to engagement and subsequent change, and the extent to which contextual factors played a role. Methods and analysis: This qualitative process evaluation will use semi-structured interviews, conducted via online videoconferencing, to explore participant experiences of the SLEEP intervention. Twenty-five participants who completed the SLEEP intervention (16 %) will be randomly sampled. Interviews will be analysed using a thematic and framework analytic approach. A codebook style thematic analysis will be used, and a framework focusing on the research questions will be applied to the codebook. The final report will present themes generated, alongside the finalised codebook. The report resulting from this research will adhere to CORE-Q quality guidelines. Ethics and dissemination: Full approval for the SLEEP study was given by the University of Warwick Biomedical and Research Ethics Committee (BSREC 45/20-21). All data collection adheres to data collection guidelines. Participants provided written informed consent for the main trial, and all interviewees provided additional written and verbal (audio-recorded) consent. The results of the process evaluation will be published in a peer-reviewed journal and presented at conferences. A lay report will be provided to all participants. Trial registration: ISRCTN13596153; Pre-results. 
| psychiatry and clinical psychology |
10.1101/2022.03.02.22271807 | Investigating neurophysiological effects of a short course of tDCS for cognition in schizophrenia: a target engagement study. | BackgroundCognitive impairment is highly prevalent in schizophrenia and treatment options are severely limited. Development of effective treatments will rely on successful engagement of biological targets. There is growing evidence that the cognitive impairments in schizophrenia are related to impairments in prefrontal cortical inhibition and dysfunctional cortical oscillations.
MethodsIn the current study we sought to investigate whether a short course of transcranial Direct Current Stimulation (tDCS) could modulate these pathophysiological targets. Thirty participants with schizophrenia were recruited and underwent neurobiological assessment (Transcranial Magnetic Stimulation combined with EEG [TMS-EEG] and task-related EEG) and assessment of cognitive functioning (n-back task and the MATRICS Consensus Cognitive Battery). Participants were then randomized to receive 5 sessions of either active or sham anodal tDCS to the left prefrontal cortex. Twenty-four hours after the last tDCS session participants repeated the neurobiological and cognitive assessments. Neurobiological outcome measures were TMS-evoked potentials (TEPs), TMS-related oscillations and oscillatory power during a 2-back task. Cognitive outcome measures were d prime and accurate reaction time on the 2-back and MATRICS scores.
ResultsFollowing active tDCS there was a significant reduction in the N40 TEP amplitude in the left parieto-occipital region. There were no other significant changes.
ConclusionsFuture interrogation of evidence-based therapeutic targets in large-scale RCTs is required. | psychiatry and clinical psychology
10.1101/2022.03.03.22271801 | Predicting sex, age, general cognition and mental health with machine learning on brain structural connectomes | There is increasing expectation that advanced, computationally expensive machine learning techniques, when applied to large population-wide neuroimaging datasets, will help to uncover key differences in the human brain in health and disease. We take a comprehensive approach to explore how multiple aspects of brain structural connectivity can predict sex, age, general cognitive function and general psychopathology, testing machine learning algorithms ranging from a deep learning model (BrainNetCNN) to classical methods. We modelled N = 8,183 structural connectomes from UK Biobank using six different structural network weightings obtained from diffusion MRI. Streamline count generally provided the highest prediction accuracies in all prediction tasks. Deep learning did not improve on the prediction accuracies of simpler linear models. Further, high correlations between model coefficients from deep learning and linear models suggested that similar decision mechanisms were used. This highlights that model complexity is unlikely to improve detection of associations between structural connectomes and complex phenotypes. | psychiatry and clinical psychology
10.1101/2022.03.03.22271828 | Determinants of COVID-19 vaccine acceptability in Mozambique: the role of institutional trust | BackgroundVaccination plays an imperative role in protecting public health and preventing avoidable mortality. Yet, the reasons for vaccine hesitancy are not well understood. This study investigates the factors associated with the acceptability of COVID-19 vaccine in Mozambique.
MethodsThe data came from the three waves of the COVID-19 Knowledge, Attitudes and Practices (KAP) survey which followed a cohort of 1,371 adults in Mozambique over three months (N=3,809). Data collection was through a structured questionnaire using telephone interviewing (CAPI). Multilevel regression analysis was conducted to identify the trajectories of, and the factors associated with COVID-19 vaccine acceptability.
ResultsThere was great volatility in COVID-19 vaccine acceptability over time. Institutional trust was consistently and strongly correlated with different measures of vaccine acceptability. There was a greater decline in vaccine acceptability in people with lower institutional trust. The positive correlation between institutional trust and vaccine acceptability was stronger in younger than older adults. Vaccine acceptability also varied by gender and marital status.
ConclusionsVaccine acceptability is sensitive to news and information circulated in the public domain. Institutional trust is a central driver of vaccine acceptability and contributes to the resilience of the health system. Our study highlights the importance of health communication and building a trustful relationship between the general public and public institutions in the context of a global pandemic. | public and global health |
10.1101/2022.03.02.22271790 | Deep Sequence Modeling for Pressure Controlled Mechanical Ventilation | This paper presents a deep neural network approach to simulating the pressure of a mechanical ventilator. In a traditional mechanical ventilator, the control pressure is monitored by a medical practitioner and can behave inaccurately, missing the target pressure. Building on recent studies, this paper provides a simulator based on a deep sequence model that predicts the airway pressure in the respiratory circuit during the inspiratory phase of a breath, given a time series of control parameters and lung attributes. This approach demonstrates that neural-network-based controllers track pressure waveforms significantly better than the current industry standard and provides insights for building effective and robust pressure-controlled mechanical ventilators. | respiratory medicine
10.1101/2022.03.02.22271810 | Deep sequencing of early T stage colorectal cancers reveals disruption of homologous recombination repair in microsatellite stable tumours with high mutational burdens | Early T stage colorectal cancers (CRC) that invade lymph nodes (Stage IIIA) are greatly under-represented in large-scale genomic mapping projects such as TCGA datasets. We retrieved 10 Stage IIIA CRC cases, matched these to 16 Stage 1 CRC cases (T1 depth without lymph node metastasis) and carried out deep sequencing of 409 genes using the IonTorrent system. Tumour mutational burdens (TMB) ranged from 2.4-77.2/Mb sequenced. Using mean TMB as a cut-point to define groups, TMB-low (TMB-L) specimens showed higher frequency of KRAS and TP53 mutations in Stage IIIA compared to Stage I, consistent with TCGA data. TMB-High (TMB-H) specimens consisted of both microsatellite instable high (MSI-H) and microsatellite stable (MSS) genotypes. Comparison of TMB-H with TMB-L groups revealed clear differences in mutations of ATM, KDM5C, PIK3CA and APC. APC was less frequently mutated in the TMB-H group although variant composition was more diverse. Variants in ATM were restricted to the TMB-H group, and in four of five MSS specimens we observed the co-occurrence of mutations in homologous recombination repair (HRR) genes in either two of ATM, CDK12, PTEN or ATR, with at least one of these being a truncating mutation. No MSI-H specimens carried nonsense mutations in these genes. These findings add to our knowledge of early T stage CRC and highlight a potential therapeutic vulnerability in the HRR pathway of TMB-H MSS CRC. | oncology |
10.1101/2022.03.03.22271700 | Clonality and timing of relapsing colorectal cancer metastasis revealed through whole-genome single-cell sequencing | BACKGROUNDRecurrence of tumor cells following local and systemic therapy is a significant hurdle in cancer. Most patients with metastatic colorectal cancer (mCRC) will relapse, despite resection of the metastatic lesions. A better understanding of the evolutionary history of recurrent lesions is thus required to identify the spatial and temporal patterns of metastatic progression and expose the genetic determinants of therapeutic resistance.
METHODSUtilizing a robust Bayesian phylogenetic approach, we analyzed a unique single-cell whole-genome sequencing dataset comprising 60 cells sampled from metastatic and recurrent hepatic lesions of a patient with a long-term disease course to investigate the origin, timing, and clonality of a colorectal metastatic relapse. We further tracked the changes in the size of the malignant cell population and evaluated the impact of the treatment strategy on the mutational landscape of this tumor.
RESULTSOur results suggest that the recurrent lesion originated from the clonal expansion of a single drug-resistant metastatic lineage, which began to expand around one year before surgical resection of the relapse. We additionally observed substantial variability in the substitution rates along the tumor cell phylogeny and found a large number of mutations specific to the ancestral lineage that gave rise to the relapse, including non-silent mutations in CRC genes. Moreover, our results point to a substantial contribution of chemotherapy exposure to the overall mutational burden.
CONCLUSIONSOur study suggests that resistant colorectal metastatic clones can quickly grow, even under strong drug-imposed pressure, highlighting the importance of profiling the genomic landscape of tumor lesions to identify mutations potentially contributing to treatment failure. | oncology |
10.1101/2022.03.02.22271785 | Identifying SARS-CoV-2 Variants of Concern through saliva-based RT-qPCR by targeting recurrent mutation sites | SARS-CoV-2 variants of concern (VOCs) continue to pose a public health threat, which necessitates a real-time monitoring strategy to complement whole genome sequencing. Thus, we investigated the efficacy of competitive probe RT-qPCR assays for six mutation sites identified in SARS-CoV-2 VOCs and, after validating the assays with synthetic RNA, performed these assays on positive saliva samples. When compared with whole genome sequencing results, the S{Delta}69-70 and ORF1a{Delta}3675-3677 assays demonstrated 93.60% and 68.00% accuracy, respectively. The SNP assays (K417T, E484K, E484Q, L452R) demonstrated 99.20%, 96.40%, 99.60%, and 96.80% accuracy, respectively. Lastly, we screened 345 positive saliva samples from December 7-22, 2021 using Omicron-specific mutation assays and were able to quickly identify rapid spread of Omicron in Upstate South Carolina. Our workflow demonstrates a novel approach for low-cost, real-time population screening of VOCs.
ImportanceSARS-CoV-2 variants of concern and their many sublineages can be characterized by mutations present within their genetic sequences. These mutations can provide selective advantages such as increased transmissibility and antibody evasion, which influences public health recommendations such as mask mandates, quarantine requirements, and treatment regimens. Our real-time RT-qPCR workflow allows for strain identification of SARS-CoV-2 positive saliva samples by targeting common mutation sites shared between VOCs and detecting single nucleotides present at the targeted location. This differential diagnostic system can quickly and effectively identify a wide array of SARS-CoV-2 strains, which can provide more informed public health surveillance strategies in the future. | infectious diseases |
10.1101/2022.03.02.22271777 | Reverse Inflammaging: Long-term effects of HCV cure on biological age | Background and AimsChronic hepatitis C virus (HCV) infection can be cured with direct-acting antiviral agents (DAA). However, not all sequelae of chronic hepatitis C appear to be completely reversible after sustained virologic response (SVR). Recently, chronic viral infections have been shown to be associated with biological age acceleration defined by the epigenetic clock. The aim of this study was to investigate whether chronic HCV infection is associated with epigenetic changes and biological age acceleration and whether this is reversible after SVR.
MethodsWe included 54 well-characterized patients with chronic hepatitis C at three time points: DAA treatment initiation, end of treatment, and long-term follow-up (median 96 weeks after end of treatment). Genome-wide DNA methylation status from peripheral blood mononuclear cells (PBMCs) was generated and used to calculate epigenetic age acceleration (EAA) using Horvath's clock.
ResultsHCV patients had an overall significant EAA of 3.12 years at baseline compared with -2.61 years in the age-matched reference group (p<0.00003). HCV elimination resulted in a significant long-term increase in DNA methylation dominated by hypermethylated CpGs in all patient groups. Accordingly, EAA decreased to 1.37 years at long-term follow-up. The decrease in EAA was significant only between the end of treatment and follow-up (p=0.01). Interestingly, eight patients who developed hepatocellular carcinoma after SVR had the highest EAA and showed no evidence of reversal after SVR.
ConclusionsOur data contribute to the understanding of the biological impact of HCV elimination after DAA and demonstrate that HCV elimination can lead to "reverse inflammaging". In addition, we provide new conceptual ideas for the use of biological age as a potential biomarker for HCV sequelae after SVR.
Lay SummaryChronic hepatitis C virus infection is now curable with direct-acting antiviral agents (DAA), but are its concomitant conditions and sequelae also fully reversible after cure? Recent data demonstrate that chronic viral infections lead to an increase in biological age as measured by epigenetic DNA methylation status. Using a unique cohort of hepatitis C patients with and without cirrhosis, as well as progression to HCC, we demonstrated that these epigenetic changes and the concomitant increase in biological age are also observed in chronic HCV infection. Our data further suggest that this effect is partially reversible in the long-term course after sustained virological response (SVR) to DAA therapy and that biological regeneration occurs. The recovery effect appears to depend on disease course and was significantly lower in patients with progression to HCC. This suggests the use of biological age based on epigenetic state as a potential biomarker for HCV sequelae.
Graphical abstract: (figure not shown)

Highlights
- Patients with chronic hepatitis C have accelerated epigenetic age compared with healthy controls.
- DAA treatment and HCV elimination partially reverse the accelerated epigenetic age in the long-term follow-up.
- Patients who developed hepatocellular carcinoma after HCV elimination did not show reversal of accelerated epigenetic aging during the follow-up. | infectious diseases |
10.1101/2022.03.02.22271814 | Monitoring and understanding household clustering of SARS-CoV-2 cases using surveillance data in Fulton County, Georgia | BackgroundHouseholds are important for SARS-CoV-2 transmission due to high intensity exposure in enclosed living spaces over prolonged durations. Using contact tracing, the secondary attack rate in households is estimated at 18-20%, yet no studies have examined COVID-19 clustering within households to inform testing and prevention strategies. We sought to quantify and characterize household clustering of COVID-19 cases in Fulton County, Georgia and further explore age-specific patterns in household clusters.
MethodsWe used state surveillance data to identify all PCR- or antigen-confirmed cases of COVID-19 in Fulton County, Georgia. Household clustered cases were defined as cases with a matching residential address and positive sample collection dates within 28 days of one another. We described the proportion of COVID-19 cases that were clustered, stratified by age and over time, and explored trends in the age of the first diagnosed case within clusters and age patterns between the first diagnosed case and subsequent household cases.
ResultsBetween 6/1/20-10/31/21, there were 106,233 COVID-19 cases with an available address reported in Fulton County. Of these, 31,449 (37%) were from 12,955 household clusters. Children were more likely to be in household clusters than any other age group, and children increasingly accounted for the first diagnosed household case, rising from 11% in February 2021 to a high of 31% in August 2021. Bubble plot density of the age of the first diagnosed case against the ages of subsequent household cases mirrors age-specific patterns in household social mixing.
DiscussionOne-third of COVID-19 cases in Fulton County were part of a household cluster. The high proportion of children in household clusters reflects a higher probability of living in larger homes with caregivers or other children. The increasing probability of children being the first diagnosed case coincides with temporal trends in vaccine roll-out among the elderly in March 2021 and the return to in-person schooling for the Fall 2021 semester. While vaccination remains the most effective intervention for reducing household clustering, other household-level interventions should also be emphasized, such as timely testing of household members to prevent ongoing transmission. | public and global health
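The cluster definition used in this study (same residential address, positive sample collection dates within 28 days of one another) can be operationalised in a few lines. The sketch below is a hypothetical illustration, not the authors' code; in particular, it interprets "within 28 days of one another" by chaining consecutive same-address cases that are at most 28 days apart, which is one of several possible readings.

```python
from datetime import date
from itertools import groupby

def household_clusters(cases, window_days=28):
    """cases: iterable of (address, collection_date) tuples.
    Returns clusters (lists of dates, len >= 2): same-address cases whose
    consecutive collection dates are <= window_days apart are chained."""
    clusters = []
    for _, grp in groupby(sorted(cases), key=lambda c: c[0]):
        dates = sorted(d for _, d in grp)
        chain = [dates[0]]
        for d in dates[1:]:
            if (d - chain[-1]).days <= window_days:
                chain.append(d)
            else:
                if len(chain) > 1:
                    clusters.append(chain)
                chain = [d]          # too far apart: start a new episode
        if len(chain) > 1:
            clusters.append(chain)
    return clusters

cases = [
    ("12 Oak St", date(2021, 8, 1)),
    ("12 Oak St", date(2021, 8, 10)),   # within 28 days: clustered
    ("12 Oak St", date(2021, 10, 1)),   # >28 days later: separate episode
    ("48 Elm Ave", date(2021, 8, 3)),   # lone case: not a cluster
]
clusters = household_clusters(cases)
```

Running this on the toy data yields a single two-case cluster at "12 Oak St"; the proportion of cases that are clustered then follows directly from the cluster sizes.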
10.1101/2022.03.02.22271757 | Laboratory verification of new commercial lateral flow assays for Cryptococcal antigen (CrAg) detection against the predicate IMMY CrAg LFA in a reference laboratory in South Africa | BackgroundReflex Cryptococcal antigen (CrAg) testing in HIV-positive patients is done routinely at 47 laboratories in South Africa on samples with a confirmed CD4 count <100 cells/µl, using the IMMY Lateral Flow Assay (LFA) as the standardized predicate method.
ObjectiveThis study aimed to verify the diagnostic performance of newer CrAg LFA assays against the predicate method.
MethodsRemnant CD4 samples collected between February and June 2019, with confirmed predicate LFA CrAg results, were retested on settled plasma with the (i) IMMY CrAg semi-quantitative (SQ) LFA; (ii) Bio-Rad RDT CryptoPS SQ; and (iii) Dynamiker CrAg SQ assays, within 24 hours of predicate testing. Sensitivity/specificity analyses were conducted comparing the predicate versus the newer assays, with McNemar's test p-values reported for comparative results (p<0.05 considered significant). Positivity grading was noted for the IMMY SQ and Bio-Rad assays.
ResultsOf the 254 samples tested, 228 had comparative CrAg results across all assays. The predicate method reported 85 CrAg-positive results (37.2%), compared to between 35.08% and 37.28% for the Bio-Rad, IMMY SQ and Dynamiker assays. The IMMY CrAg SQ grading (+1 to +5) showed 67% of CrAg-positive results had a grading ≥3, indicative of higher CrAg concentration (infection severity). False-negative results across all assays were <2%, with sensitivity >95% for all. False-positive results were highest for the Dynamiker LFA (14%), with a specificity of 77% (p=0.001). IMMY SQ and Bio-Rad assay specificities exceeded 90% (p=0.6 and 0.12). Internal quality control showed 100% accuracy for all assays.
ConclusionPerformance verification of the newer CrAg LFA assays under typical laboratory conditions varied, with the best results from the IMMY SQ and Bio-Rad assays. The high burden of HIV and cryptococcal disease in South Africa requires high specificity and sensitivity (>90%) to prevent unnecessary treatment/hospitalization. The added value of positivity grading for patient management needs confirmation. | infectious diseases
10.1101/2022.03.03.22271843 | Adiposity and the risk of dementia: mediating effects from inflammation and lipid levels | ObjectiveWhile midlife adiposity is a risk factor for dementia, adiposity in late-life appears to be associated with lower risk. What drives the associations is poorly understood, especially the inverse association in late-life.
MethodsUsing results from genome-wide association studies, we identified inflammation and lipid metabolism as biological pathways involved in both adiposity and dementia. To test if these factors mediate the effect of midlife and/or late-life adiposity on dementia, we then used cohort data from the Swedish Twin Registry, with measures of adiposity and potential mediators taken in midlife (age 40-64, n=5999) or late-life (age 65-90, n=7257). Associations between body-mass index (BMI), waist-hip ratio (WHR), C-reactive protein (CRP), lipid levels, and dementia were tested in survival and mediation analyses.
ResultsOne standard deviation (SD) higher WHR in midlife was associated with 25% (95% CI 2-52%) higher dementia risk, with slight attenuation when adjusting for BMI. No evidence of mediation through CRP or lipid levels was present. After age 65, one SD higher BMI, but not WHR, was associated with 8% (95% CI 1-14%) lower dementia risk. The association was partly mediated by higher CRP and suppressed by lower high-density lipoprotein.
InterpretationThe negative effects of midlife adiposity on dementia risk were driven directly by factors associated with body fat distribution, and not mediated by inflammation or lipid levels. There was an inverse association between late-life adiposity and dementia risk, especially where the body's inflammatory response and lipid homeostasis are intact. | epidemiology
10.1101/2022.03.03.22271836 | The relationship between BMI and COVID-19: exploring misclassification and selection bias in a two-sample Mendelian randomisation study | ObjectiveTo use the example of the effect of body mass index (BMI) on COVID-19 susceptibility and severity to illustrate methods to explore potential selection and misclassification bias in Mendelian randomisation (MR) of COVID-19 determinants.
DesignTwo-sample MR analysis.
SettingSummary statistics from the Genetic Investigation of ANthropometric Traits (GIANT) and COVID-19 Host Genetics Initiative (HGI) consortia.
Participants681,275 participants in GIANT and more than 2.5 million people from the COVID-19 HGI consortia.
ExposureGenetically instrumented BMI.
Main outcome measuresSeven case/control definitions for SARS-CoV-2 infection and COVID-19 severity: very severe respiratory confirmed COVID-19 vs not hospitalised COVID-19 (A1) and vs population (those who were never tested, tested negative or had unknown testing status (A2)); hospitalised COVID-19 vs not hospitalised COVID-19 (B1) and vs population (B2); COVID-19 vs lab/self-reported negative (C1) and vs population (C2); and predicted COVID-19 from self-reported symptoms vs predicted or self-reported non-COVID-19 (D1).
ResultsWith the exception of the A1 comparison, genetically higher BMI was associated with higher odds of COVID-19 in all comparison groups, with odds ratios (OR) ranging from 1.11 (95%CI: 0.94, 1.32) for D1 to 1.57 (95%CI: 1.39, 1.78) for A2. As a method to assess selection bias, we found no strong evidence of an effect of COVID-19 on BMI in a no-relevance analysis, in which COVID-19 was considered the exposure, although measured after BMI. We found evidence of genetic correlation between COVID-19 outcomes and potential predictors of selection determined a priori (smoking, education, and income), which could either indicate selection bias or a causal pathway to infection. Results from multivariable MR adjusting for these predictors of selection yielded similar results to the main analysis, suggesting the latter.
ConclusionsWe have proposed a set of analyses for exploring potential selection and misclassification bias in MR studies of risk factors for SARS-CoV-2 infection and COVID-19 and demonstrated this with an illustrative example. Although selection by socioeconomic position and related traits is present, MR results are not substantially affected by selection/misclassification bias in our example. We recommend that the methods we demonstrate, for which we provide detailed analytic code, be used in MR studies assessing risk factors for COVID-19, and in other MR studies where such biases are likely in the available data.
SummaryWhat is already known on this topic- Mendelian randomisation (MR) studies have been conducted to investigate the potential causal relationship between body mass index (BMI) and COVID-19 susceptibility and severity.
- There are several sources of selection (e.g. when only subgroups with specific characteristics are tested or respond to study questionnaires) and misclassification (e.g. those not tested are assumed not to have COVID-19) that could bias MR studies of risk factors for COVID-19.
- Previous MR studies have not explored how selection and misclassification bias in the underlying genome-wide association studies could bias MR results.
What this study adds- Using the most recent release of the COVID-19 Host Genetics Initiative data (with data up to June 2021), we demonstrate a potential causal effect of BMI on susceptibility to detected SARS-CoV-2 infection and on severe COVID-19 disease, and that these results are unlikely to be substantially biased due to selection and misclassification.
- This conclusion is based on no evidence of an effect of COVID-19 on BMI (a no-relevance control study, as BMI was measured before the COVID-19 pandemic) and finding genetic correlation between predictors of selection (e.g. socioeconomic position) and COVID-19 for which multivariable MR supported a role in causing susceptibility to infection.
- We recommend studies use the set of analyses demonstrated here in future MR studies of COVID-19 risk factors, or other examples where selection bias is likely. | epidemiology |
10.1101/2022.03.02.22271806 | Covid-19 Exposure Assessment Tool (CEAT): Easy-to-use tool to quantify exposure based on airflow, group behavior, and infection prevalence in the community | The COVID-19 Exposure Assessment Tool (CEAT) allows users to compare respiratory relative risk to SARS-CoV-2 for various scenarios, providing understanding of how combinations of protective measures affect exposure, dose, and risk. CEAT incorporates mechanistic, stochastic and epidemiological factors including the: 1) emission rate of virus, 2) viral aerosol degradation and removal, 3) duration of activity/exposure, 4) inhalation rates, 5) ventilation rates (indoors/outdoors), 6) volume of indoor space, 7) filtration, 8) mask use and effectiveness, 9) distance between people, 10) group size, 11) current infection rates by variant, 12) prevalence of infection and immunity in the community, 13) vaccination rates of the community, and 14) implementation of COVID-19 testing procedures. Demonstration of CEAT, from published studies of COVID-19 transmission events, shows the model accurately predicts transmission. We also show how health and safety professionals at NASA Ames Research Center used CEAT to manage potential risks posed by SARS-CoV-2 exposures. Given its accuracy and flexibility, the wide use of CEAT will have a long lasting beneficial impact in managing both the current COVID-19 pandemic as well as a variety of other scenarios. | epidemiology |
10.1101/2022.03.01.22271698 | Integrative analyses identify potential key genes and calcium-signaling pathway in familial AVNRT using whole-exome sequencing | BackgroundAtrioventricular nodal reentrant tachycardia (AVNRT) is a common arrhythmia. Growing evidence suggests that family aggregation and genetic factors are involved in AVNRT. However, in families with a history of AVNRT, disease-causing genes have not been reported.
ObjectiveTo investigate the genetic contribution of familial AVNRT using a whole-exome sequencing (WES) approach.
MethodsBlood samples were collected from 20 patients from nine families with a history of AVNRT and 100 control participants, and we systematically analyzed mutation profiles using whole-exome sequencing. Gene-based burden analysis, integration of previous sporadic AVNRT data, pedigree-based co-segregation, protein-protein interaction network analysis, single-cell RNA sequencing, and confirmation of animal phenotype were performed.
ResultsAmong 95 reference genes, seven pathogenic genes were identified in both sporadic and familial AVNRT: CASQ2, AGXT, ANK2, SYNE2, ZFHX3, GJD3, and SCN4A. Among the 37 reference genes from sporadic AVNRT, five pathogenic genes were identified in patients with both familial and sporadic AVNRT: LAMC1, RYR2, COL4A3, NOS1, and ATP2C2. Considering the unique internal pathogenic gene within pedigrees, three genes, TRDN, CASQ2, and WNK1, were likely to be the pathogenic genes in familial AVNRT. Notably, the core calcium-signaling pathway may be closely associated with the occurrence of AVNRT, including CASQ2, RYR2, TRDN, NOS1, ANK2, and ATP2C2.
ConclusionThese results revealed the underlying mechanism for familial AVNRT. | pathology |
10.1101/2022.03.03.22271848 | Psychosis spectrum symptoms among individuals with schizophrenia-associated copy number variants and evidence of cerebellar correlates of symptom severity | BackgroundThe 3q29 deletion (3q29Del) is a copy number variant (CNV) with the highest known effect size for psychosis-risk (>40-fold increased risk). Systematic research on this CNV offers promising avenues for identifying mechanisms underlying psychosis and related disorders. Relative to other high-impact CNVs like 22q11.2Del, far less is known about the phenotypic presentation and pathophysiology of 3q29Del. Emerging findings indicate that posterior fossa abnormalities are common among 3q29Del carriers; however, their clinical relevance is unknown.
MethodsHere, we report the first in-depth evaluation of psychotic symptoms in study participants with 3q29Del (N=23), using the Structured Interview for Psychosis-Risk Syndromes (SIPS), and compare to SIPS data from 22q11.2Del participants (N=31) and healthy controls (N=279). We also investigate the relationship between psychotic symptoms, cerebellar morphology, and cystic/cyst-like malformations of the posterior fossa in 3q29Del by structural brain imaging.
ResultsCumulatively, 48% of the 3q29Del sample exhibited a psychotic disorder (N=4) or attenuated positive symptoms (N=7), with three individuals with attenuated symptoms meeting the frequency and timing criteria for clinical high risk for psychosis. Males with 3q29Del scored higher in negative symptoms than females. 3q29Del participants had more severe ratings than controls on all domains and exhibited less severe negative symptoms than 22q11.2Del participants. An inverse relationship was identified between positive symptom severity and cerebellar cortex volume in 3q29Del, while cystic/cyst-like malformations yielded no clinical link with psychosis.
ConclusionsOverall, our findings establish the unique and shared profiles of psychotic symptoms across two CNVs and highlight cerebellar involvement in elevated psychosis-risk in 3q29Del. | psychiatry and clinical psychology |
10.1101/2022.03.03.22271787 | Association between COVID-19 Risk-Mitigation Behaviors and Specific Mental Disorders in Youth | ImportanceAlthough studies of adults show that pre-existing mental disorders increase risk for COVID-19 infection and severity, there is limited information about this association among youth. Mental disorders in general as well as specific types of disorders may influence their ability to comply with risk-mitigation strategies to reduce COVID-19 infection and transmission.
ObjectiveTo examine associations between specific mental disorders and COVID-19 risk-mitigation practices among 314 female and 514 male youth.
DesignYouth compliance (rated as "Never," "Sometimes," "Often," or "Very often/Always") with risk mitigation was reported by parents on the CoRonavIruS Health Impact Survey (CRISIS) in January 2021. Responses were summarized using factor analysis of risk mitigation, and their associations with lifetime mental disorders (assessed via structured diagnostic interviews) were identified with linear regression analyses (adjusted for covariates). All analyses used R Project for Statistical Computing for Mac (v.4.0.5).
SettingThe Healthy Brain Network (HBN) in New York City. Participants314 female and 514 male youth (ages 5-21)
Main Outcome(s) and Measure(s)COVID-19 risk mitigation behaviors among youth
ResultsA two-factor model was the best-fitting solution. Factor 1 (avoidance behaviors) included avoiding groups, indoor settings, and other people's homes; avoidance was more likely among youth with any anxiety disorder (p=.01). Factor 2 (hygiene behaviors) included using hand sanitizer, washing hands, and maintaining social distance; practicing hygiene was less likely among youth with ADHD (combined type) (p=.02). Mask wearing, which did not load on either factor, was not associated with any mental health disorder.
Conclusion and RelevanceFindings suggest that education and monitoring of risk-mitigation strategies in certain subgroups of youth may reduce risk of exposure to COVID-19 and other contagious diseases. Additionally, they highlight the need for greater attention to vaccine prioritization for individuals with ADHD.
Key PointsQuestionAre mental disorders among youth associated with COVID-19 risk-mitigation behaviors?
FindingsBased on the parent CoRonavIruS Health Impact Survey (CRISIS) of 314 females and 514 males aged 5-21, youth with anxiety disorders were more likely to avoid high-risk exposure settings, and those with ADHD (combined type) were less likely to follow hygiene practices. In contrast, mask wearing was not associated with youth mental disorders.
MeaningSpecific types of disorders in youth may interfere with their ability to employ risk-mitigation strategies that may lead to greater susceptibility to COVID-19. | psychiatry and clinical psychology |
10.1101/2022.03.03.22271837 | Digitizing Non-Invasive Neuromodulation Trials: Scoping Review, Process Mapping, and Recommendations from a Delphi Panel | Although relatively costly and non-scalable, non-invasive neuromodulation interventions are treatment alternatives for neuropsychiatric disorders. The recent developments of highly-deployable transcranial electric stimulation (tES) systems, combined with mobile-Health technologies, could be incorporated in digital trials to overcome methodological barriers and increase equity of access. We convened 61 highly-productive specialists and contacted 8 tES companies to assess 71 issues related to tES digitalization readiness, and processes, barriers, advantages, and opportunities for implementing tES digital trials. Delphi-based recommendations (>60% agreement) were provided. Device appraisal showed moderate digitalization readiness, with high safety and possibility of trial implementation, but low connectivity. Panelists recognized the potential of tES for scalability, generalizability, and leverage of digital trials processes; although they reached no consensus about aspects regarding methodological biases. We further propose and discuss a conceptual framework for exploiting shared aspects between mobile-Health tES technologies with digital trials methodology to drive future efforts for digitizing tES trials.
Graphical Abstract. Consensus Roadmap
(A) Recruitment process. The study procedure started with defining the components of the research problem by the core research team. After defining the problems, two different sets of participants (the steering committee (SC), including key leaders of the field identified by the core team, and the expert panel (EP), a more diverse group of experts identified from publication counts in a systematic review) were invited to participate in a Delphi study. The study facilitators (first and last authors) led the communications with the SC to design the initial questionnaire through an iterative approach. (B) Evidence synthesis. To collect the available evidence, companies producing portable tES (ptES) devices, suggested by the SC and EP, were contacted to provide details about the available devices. For mapping methodological processes of digitizing tES trials, two distinct assessments, SIPOC (Suppliers, Inputs, Process, Outputs and Customer) and SWOT (Strengths, Weaknesses, Opportunities, and Threats), were performed and embedded into the questionnaire. (C) Consensus development. In the next phase, the questionnaire was validated and finalized by collecting and summarizing opinions. Afterwards, the SC and EP responded to the final questionnaire and results were analyzed, providing a list of recommendations for running tES digital trials based on a pre-registered consensus threshold. | health informatics
10.1101/2022.03.03.22271504 | Precision recruitment for high-risk participants in a COVID-19 research study | Studies for developing diagnostics and treatments for infectious diseases usually require observing the onset of infection during the study period. However, when the base rate of infection is low, the cohort size required to measure an effect becomes large, and recruitment becomes costly and prolonged. We describe an approach for reducing recruiting time and resources in a COVID-19 study by targeting recruitment to high-risk individuals. Our approach is based on a direct and longitudinal connection with research participants and computes individual risk scores from individually permissioned socioeconomic and behavioural data, in combination with predicted local prevalence data. When we used these scores to recruit a balanced cohort of participants for a COVID-19 detection study, we obtained a 4-7-fold greater COVID-19 infection incidence compared with similar real-world study cohorts. | infectious diseases
10.1101/2022.03.03.22271525 | Features of an aseasonal 2021 RSV epidemic in the UK and Ireland: analysis of the first 10,000 patients | BronchStart is a prospective cohort study of infants with clinical bronchiolitis attending Emergency Departments in the United Kingdom and Ireland. We found the 2021 summer lower respiratory tract infection peak, although temporally disrupted and with an attenuated disease burden, predominantly affected younger age groups as in previous years. | pediatrics |
10.1101/2022.03.03.22271793 | Reasons underlying the intention to vaccinate children aged 5-11 against COVID-19: A cross-sectional study of parents in Israel, November 2021 | BackgroundVaccination is a key tool to mitigate impacts of the COVID-19 pandemic. In Israel, COVID-19 vaccines became available to adults in December 2020 and to 5-11-year-old children in November 2021. Ahead of the vaccine roll-out in children, we aimed to determine whether parents intended to vaccinate their children and describe reasons for their intentions.
MethodsWe recruited parents on social media and collected information on parental socio-demographic characteristics, COVID-19 vaccine history, intention to vaccinate their children against COVID-19, and reasons for parental decisions, using an anonymous online survey. We identified associations between parental characteristics and intention to vaccinate children using a logistic regression model and described reasons for intentions to vaccinate or not using proportions together with 95% confidence intervals (CI).
Results1837 parents participated. Parental non-vaccination and having experienced major vaccination side effects were strongly associated with non-intention to vaccinate their children (OR 0.09 and 0.18 respectively, p<0.001). Compared with others, parents who were younger, lived in the socio-economically deprived periphery, and belonged to the Arab population had lower intentions to vaccinate their children. Commonly stated reasons for non-intention to vaccinate included vaccine safety and efficacy concerns (53%, 95%CI 50-56) and the belief that COVID-19 was a mild disease (73%, 95%CI 73-79). The most frequently mentioned motive for intending to vaccinate children was returning to normal social and educational life (89%, 95%CI 87-91).
ConclusionParental socio-demographic background and their own vaccination experience was associated with intention to vaccinate their children aged 5-11. Intention to vaccinate was mainly for social and economic reasons rather than health, whereas non-intention to vaccinate mainly stemmed from health concerns. Understanding rationales for COVID-19 vaccine rejection or acceptance, as well as parental demographic data, can pave the way for intentional educational campaigns to encourage not only vaccination against COVID-19, but also regular childhood vaccine programming.
Highlights- Parental intention to vaccinate children aged 5-11 is much lower than vaccine coverage in parental age groups
- Being unvaccinated and having experienced side effects following vaccination were the greatest negative predictors in parents of intention to vaccinate their children
- Parents were more likely to accept a COVID-19 vaccine for their children to allow them to return to daily social life and to ensure economic security in the family
- Parents were more likely to reject a COVID-19 vaccination for health reasons such as safety concerns or due to the belief that COVID-19 was a mild disease in children | epidemiology
10.1101/2022.03.03.22271866 | Monocyte, Neutrophil and Whole Blood Transcriptome Dynamics Following Ischemic Stroke | BackgroundAfter ischemic stroke (IS), peripheral leukocytes infiltrate the damaged region and modulate the response to injury. Peripheral blood cells display distinctive gene expression signatures post IS and these transcriptional programs reflect changes in immune responses to IS. Dissecting the temporal dynamics of gene expression after IS improves our understanding of immune and clotting responses at the molecular and cellular level that are involved in acute brain injury and may assist with time-targeted, cell-specific therapy.
MethodsThe transcriptomic profiles from peripheral monocytes, neutrophils, and whole blood from 38 ischemic stroke patients and 18 controls were analyzed with RNAseq as a function of time and etiology after stroke. Differential expression analyses were performed at 0-24 h, 24-48 h, and >48 h following stroke.
ResultsUnique patterns of temporal gene expression and pathways were distinguished for monocytes, neutrophils and whole blood, with enrichment of interleukin signaling pathways for different timepoints and stroke etiologies. Compared to control subjects, gene expression was generally up-regulated in neutrophils and generally down-regulated in monocytes over all times for cardioembolic, large vessel and small vessel strokes. Self-Organizing Maps identified gene clusters with similar trajectories of gene expression over time for different stroke causes and sample types. Weighted Gene Co-expression Network Analyses identified modules of co-expressed genes that significantly varied with time after stroke and included hub genes of immunoglobulin genes in whole blood.
ConclusionsAltogether, the identified genes and pathways are critical for understanding how the immune and clotting systems change over time after stroke. This study identifies potential time- and cell-specific biomarkers and treatment targets. | neurology |
10.1101/2022.03.03.22271831 | Zonal Model of Aerosol Persistence in ICUs: Utilization of Time and Space-resolved Sensor Network | The COVID-19 pandemic heightened public awareness about airborne particulate matter (PM) due to the spread of infectious diseases via aerosols. The persistence of potentially infectious aerosol in public spaces, particularly medical settings, deserves close investigation; however, approaches for rapidly parameterizing the temporospatial distribution of particles released by an infected individual have not been reported in literature. This paper presents a methodology for mapping the movement of aerosol plumes using a network of low-cost PM sensors in ICUs. Mimicking aerosol generation by a patient, we tracked aerosolized NaCl particles functioning as tracers for potentially infectious aerosols. In positive (closed door) and neutral-pressure (open door) ICUs, an aerosol spike was detected outside the room, with up to 6% or 19% of all PM escaping through the door gaps, respectively. The outside sensors registered no aerosol spike in negative-pressure ICUs. The K-means clustering analysis of temporospatial data suggests three distinct zones: (1) near the aerosol source, (2) room periphery, and (3) immediately outside the room. These zones inform two-phase aerosol plume behavior: dispersion of the original aerosol spike throughout the room, and evacuation phase where "well-mixed" PM decayed uniformly. Decay rates were calculated for 4 ICUs in positive, neutral, and negative mode, with negative modes decaying the fastest. This research demonstrates the methodology for aerosol persistence monitoring in medical settings; however, it is limited by a relatively small data set. Future studies need to evaluate medical settings with high risks of infectious disease, assess risks of airborne disease transmission, and optimize hospital infrastructure.
Significance StatementAirborne infectious diseases, including COVID-19, are a major concern for patients and hospital staff. Here, we develop a systematic methodology for mapping the temporospatial distribution of aerosols in ICUs using a network of low-cost particulate matter (PM) sensors. Our method of analysis provides a perspective on the exfiltration efficiency of ICUs as well as the benefits of rooms that have a negative-pressure mode. Our methods could be extended to other public spaces with a high risk of infectious disease to optimize infrastructure and assess the risk of airborne disease. | occupational and environmental health |
10.1101/2022.03.03.22271867 | A multi-analyte serum biomarker panel for early detection of pancreatic adenocarcinoma | PurposeWe determined whether a large, multi-analyte panel of circulating biomarkers can improve detection of early-stage pancreatic ductal adenocarcinoma (PDAC).
Experimental DesignWe defined a biologically relevant subspace of blood analytes based on previous identification in premalignant lesions or early-stage PDAC and evaluated each in pilot studies. The 31 analytes that met minimum diagnostic accuracy were measured in serum of 837 subjects (461 healthy, 194 benign pancreatic disease, 182 early stage PDAC). We used machine learning to develop classification algorithms using the relationship between subjects based on their changes across the predictors. Model performance was subsequently evaluated in an independent validation data set from 186 additional subjects.
ResultsA classification model was trained on 669 subjects (358 healthy, 159 benign, 152 early-stage PDAC). Model evaluation on a hold-out test set of 168 subjects (103 healthy, 35 benign, 30 early-stage PDAC) yielded an area under the receiver operating characteristic (ROC) curve (AUC) of 0.920 for classification of PDAC from non-PDAC (benign and healthy controls) and an AUC of 0.944 for PDAC vs. healthy controls. The algorithm was then validated in 146 subsequent cases presenting with pancreatic disease (73 benign pancreatic disease, 73 early and late stage PDAC) as well as 40 healthy control subjects. The validation set yielded an AUC of 0.919 for classification of PDAC from non-PDAC and an AUC of 0.925 for PDAC vs. healthy controls.
ConclusionsIndividually weak serum biomarkers can be combined into a strong classification algorithm to develop a blood test to identify patients who may benefit from further testing. | oncology |
10.1101/2022.03.03.22271869 | Declines and pronounced state-level variation in tapentadol use in the US | BackgroundTapentadol is an opioid approved for the treatment of moderate-to-severe pain in the United States (US). Tapentadol is unique as it is the only Schedule II prescription drug that has dual modes of action as it combines agonist activity at the µ opioid receptor with norepinephrine reuptake inhibition. This descriptive study characterized tapentadol use in the US.
MethodsDrug distribution data from 2010 to 2020 were extracted for each state from the Drug Enforcement Administration. Use per state, corrected for population, was analyzed. The percentage of distribution channels (pharmacies, hospitals, and providers), the distributed amount of tapentadol, and the final adjusted quota of tapentadol were obtained. Data on tapentadol use as reported by the Medicare and Medicaid programs for 2010 to 2020 were also analyzed.
ResultsThe distributed amount of tapentadol was 3.5 tons in 2020 and on average, the final adjusted production quota was 207.2% greater than the distributed amount between 2010 and 2020. Distributed tapentadol was 1.3% of all Schedule II opioids distributed in 2020. Tapentadol use decreased by -53.8% between 2012 and 2020 in the US whereas New Hampshire was the only state that had a positive change (+13.1%). There were minor changes in the amounts of tapentadol distributed via various distribution channels (Pharmacies = 98.0%, hospitals = 1.9% in 2020). Tapentadol prescribed by Nurse Practitioners experienced the largest increase of +8.7% among all specialties to 18.0%, the highest percentage of Medicare claims of tapentadol in 2019. Diabetes prevalence was significantly correlated with tapentadol distribution in 2012 (r(50) = .44, p < .01) and 2020 (r(50) = .28, p < .05).
DiscussionThere has been a substantial decline over the past decade in tapentadol distribution and prescribing to Medicaid patients. The unusual tapentadol prescribing pattern in New Hampshire may warrant investigation regarding differing prescribers' attitudes towards tapentadol or the employment of tapentadol as part of a step-down therapy for opioid addiction. | pain medicine |
10.1101/2022.03.03.22271360 | FinnGen: Unique genetic insights from combining isolated population and national health register data | Population isolates such as Finland provide benefits in genetic studies because the allelic spectrum of damaging alleles in any gene is often concentrated on a small number of low-frequency variants (0.1% ≤ minor allele frequency < 5%), which survived the founding bottleneck, as opposed to being distributed over a much larger number of ultra-rare variants. While this advantage is well-established in Mendelian genetics, its value in common disease genetics has been less explored. FinnGen aims to study the genome and national health register data of 500,000 Finns, already reaching 224,737 genotyped and phenotyped participants. Given the relatively high median age of participants (63 years) and dominance of hospital-based recruitment, FinnGen is enriched for many disease endpoints often underrepresented in population-based studies (e.g., rarer immune-mediated diseases and late-onset degenerative and ophthalmologic endpoints). We report here a genome-wide association study (GWAS) of 1,932 clinical endpoints defined from nationwide health registries. We identify genome-wide significant associations at 2,491 independent loci. Among these, fine-mapping implicates 148 putatively causal coding variants associated with 202 endpoints, 104 with low allele frequency (AF<10%), of which 62 were over two-fold enriched in Finland.
We studied a benchmark set of 15 diseases that had previously been investigated in large genome-wide association studies. FinnGen discovery analyses were meta-analysed in Estonian and UK biobanks. We identify 30 novel associations, primarily low-frequency variants strongly enriched, in or specific to, the Finnish population and Uralic language family neighbors in Estonia and Russia.
These findings demonstrate the power of bottlenecked populations to find unique entry points into the biology of common diseases through low-frequency, high impact variants. Such high impact variants have a potential to contribute to medical translation including drug discovery. | genetic and genomic medicine |
10.1101/2022.03.03.22270812 | SARS-CoV-2 Omicron Variant Infection of Individuals with High Titer Neutralizing Antibodies Post-3rd mRNA Vaccine Dose | BackgroundVaccination with COVID-19 mRNA vaccines prevents hospitalization and severe disease caused by wildtype SARS-CoV-2 and several variants, and likely prevented infection when serum neutralizing antibody (NAb) titers were >1:160. Preventing infection limits viral replication resulting in mutation, which can lead to the emergence of additional variants.
MethodsDuring a longitudinal study to evaluate durability of a three-dose mRNA vaccine regimen (2 primary doses and a booster) using a rapid test that semi-quantitatively measures NAbs, the Omicron variant emerged and quickly spread globally. We evaluated NAb levels measured prior to symptomatic breakthrough infection, in groups infected prior to and after the emergence of Omicron.
ResultsDuring the SARS-CoV-2 Delta variant wave, 93% of breakthrough infections in our study occurred when serum NAb titers were <1:80. In contrast, after the emergence of Omicron, study participants with high NAb titers who had received booster vaccine doses became symptomatically infected. NAb titers prior to infection were ≥1:640 in 64% of the Omicron-infected population, ≥1:320 (14%), and ≥1:160 (21%).
DiscussionThese results indicate that high titers of NAbs elicited by currently available mRNA vaccines do not protect against infection with the Omicron variant, and that mild to moderate symptomatic infections did occur in a vaccinated and boosted population, although they did not require hospitalization. | infectious diseases |
10.1101/2022.03.03.22271766 | Evaluation of the role of home rapid antigen testing to determine isolation period after infection with SARS-CoV-2 | ImportanceRecent CDC COVID-19 isolation guidance for non-immunocompromised individuals with asymptomatic or mild infection allows ending isolation after 5 days if asymptomatic or afebrile with improving symptoms. The role of rapid antigen testing in further characterizing the risk of viral transmission to others is unclear.
ObjectiveUnderstand rates of rapid antigen test (RAT) positivity after day 5 from a positive COVID-19 test and the relationship of this result to symptoms and viral culture.
DesignIn this single center, observational cohort study, ambulatory individuals newly testing SARS-CoV-2 positive completed daily symptom logs, and RAT self-testing starting day 6 until negative. Anterior nasal and oral swabs were collected on a subset for viral culture.
Main Outcomes and MeasuresDay 6 SARS-CoV-2 RAT result, symptoms and viral culture.
Results40 individuals enrolled between January 5 and February 11, 2022 with a mean age of 32 years (range 22 to 57). 23 (58%) were women and 17 (42%) men. All were vaccinated. 33 (83%) were symptomatic. Ten (25%) tested RAT negative on day 6. 61 of 90 (68%) RATs performed on asymptomatic individuals after day 5 were positive. Day 6 viral cultures were positive in 6 (35%) of 17 individuals. A negative RAT or being asymptomatic on day 6 were 100% and 78% predictive respectively for negative culture, while improving symptoms was 69% predictive. A positive RAT was 50% predictive of positive culture.
Conclusion and RelevanceRATs are suboptimal in predicting viral culture results on day 6. Use of routine RATs to guide end of COVID-19 isolation could result in significant numbers of culture negative, potentially non-infectious individuals undergoing prolonged isolation. However, a negative RAT was highly predictive of being culture negative. Complete absence of symptoms was inferior to a negative RAT in predicting a negative culture result, but performed better than improving symptoms. If a positive viral culture is a proxy for infectiousness, these data may help further refine a safer strategy for ending isolation. | infectious diseases |
10.1101/2022.03.03.22271824 | Different forms of superspreading lead to different outcomes: heterogeneity in infectiousness and contact behavior relevant for the case of SARS-CoV-2 | Superspreading events play an important role in the spread of SARS-CoV-2 and several other pathogens. Hence, while the basic reproduction number of the original Wuhan SARS-CoV-2 is estimated to be about 3 for Belgium, there is substantial inter-individual variation in the number of secondary cases each infected individual causes. Multiple factors contribute to the occurrence of superspreading events: heterogeneity in infectiousness and susceptibility, variations in contact behavior, and the environment in which transmission takes place. While superspreading has been included in several infectious disease transmission models, our understanding of the effect that these different forms of superspreading have on the spread of pathogens and the effectiveness of control measures remains limited. To disentangle the effects of infectiousness-related heterogeneity on the one hand and contact-related heterogeneity on the other, we implemented both forms of superspreading in an individual-based model describing the transmission and spread of SARS-CoV-2 in the Belgian population. We considered its impact on viral spread as well as on the effectiveness of social distancing. We found that the effects of superspreading driven by heterogeneity in infectiousness are very different from the effects of superspreading driven by heterogeneity in contact behavior. On the one hand, a higher level of infectiousness-related heterogeneity results in fewer outbreaks occurring following the introduction of one infected individual. Outbreaks were also slower, with a lower peak which occurred at a later point in time, and a lower herd immunity threshold. Finally, the risk of resurgence of an outbreak following a period of lockdown decreased.
On the other hand, when contact-related heterogeneity was high, this also led to smaller final sizes, but caused outbreaks to be more explosive in regard to other aspects (such as higher peaks which occurred earlier, and a higher herd immunity threshold). Finally, the risk of resurgence of an outbreak following a period of lockdown increased. Determining the contribution of both sources of heterogeneity is therefore important but left to be explored further.
Author summaryTo investigate the effect of different sources of superspreading on disease dynamics, we implemented superspreading driven by heterogeneity in infectiousness and heterogeneity in contact behavior into an individual-based model for the transmission of SARS-CoV-2 in the Belgian population. We compared the impact of both forms of superspreading in a scenario without interventions as well as in a scenario in which a period of strict social distancing (i.e. a lockdown) is followed by a period of partial release. We found that both forms of superspreading have very different effects. On the one hand, increasing the level of infectiousness-related heterogeneity led to fewer outbreaks being observed following the introduction of one infected individual in the population. Furthermore, final outbreak sizes decreased, and outbreaks became slower, with lower and later peaks, and a lower herd immunity threshold. Finally, the risk for resurgence of an outbreak following a period of lockdown also decreased. On the other hand, when contact-related heterogeneity was high, this also led to smaller final sizes, but caused outbreaks to be more explosive regarding other aspects (such as higher peaks that occurred earlier). The herd immunity threshold also increased, as did the risk of resurgence of outbreaks. | infectious diseases |
10.1101/2022.02.28.22271676 | Development of a COVID-19 risk-assessment model for participants at an outdoor music festival: Evaluation of the validity and control measure effectiveness | We developed an environmental exposure model to estimate the Coronavirus Disease 2019 (COVID-19) risk among participants at an outdoor music festival and validated the model using a real cluster outbreak case. Furthermore, we evaluated the extent to which the risk could be reduced by additional infection control measures such as proof of a negative antigen test on the day of the event, wearing masks, disinfection of environmental surfaces, and vaccination. The total number of already- and newly-infected individuals who participated in the event according to the new model was 47.0 (95% uncertainty interval: 12.5-185.5), which is in good agreement with the reported value (45). Among the additional control measures, vaccination, mask-wearing, and disinfection of surfaces were determined to be effective. Based on the combination of all measures, a 94% risk reduction could be achieved. In addition to setting a benchmark for an acceptable number of newly-infected individuals at the time of an event, the application of this model will enable us to determine whether it is necessary to implement additional measures, limit the number of participants, or refrain from holding an event. | infectious diseases |
10.1101/2022.03.03.22271855 | SARS-CoV-2 Seroprevalence among health care workers after the first and second pandemic wave | BackgroundThe Grand Hôpital de Charleroi is a large non-academic Belgian hospital that treated a large number of COVID-19 inpatients. In the context of this pandemic, healthcare workers (HCWs) of all professions, and not only direct caregivers, are a frontline workforce in contact with suspected and confirmed COVID-19 cases and seem to be a high-risk group for exposure. The aim of our study was to estimate the prevalence of anti-SARS-CoV-2 antibodies in HCWs in our hospital after the first and the second pandemic wave, and also to characterize the distribution of this seroprevalence in relation to various criteria.
MethodsAt the end of the two recruitment periods, a total of 4008 serological tests were performed in this single-center cross-sectional study. After completing a questionnaire including demographic and personal data, possible previous COVID-19 diagnostic test results and/or the presence of symptoms potentially related to COVID-19, the study participants underwent blood sampling and serological testing using DiaSorin's LIAISON® SARS-CoV-2 S1/S2 IgG test for the first phase and LIAISON® SARS-CoV-2 TrimericS IgG test for the second phase of this study.
Results302 study participants (10.72%) in the first round of the study and 404 (33.92%) in the second round were positive for SARS-CoV-2 IgG antibodies. The prevalence of seropositivity observed after the second wave was 3.16 times higher than after the first wave. We confirmed that direct, prolonged and repeated contact with patients or their environment was a predominant seroconversion factor, but, more unexpectedly, that this was the case for all HCWs and not only caregivers. Finally, the notion of high-risk contact seemed more readily identifiable in one's workplace than in one's private life.
ConclusionOur study confirmed that HCWs are at a significantly higher risk of contracting COVID-19 than the general population, and suggests that repeated contacts with at-risk patients, regardless of the HCW's profession, represent the most important risk factor for seroconversion. (Clinicaltrials.gov number, NCT04723290) | infectious diseases |
10.1101/2022.03.03.22271803 | Testing Frequency Matters: An Evaluation of the Diagnostic Performance of a SARS-CoV-2 Rapid Antigen Test in United States Correctional Facilities | BackgroundThe CDC recommends serial rapid antigen assay collection within congregate facilities for screening and outbreak testing. Though modeling and observational studies from community and long-term care facilities have shown serial collection provides adequate sensitivity and specificity, the diagnostic accuracy of this testing strategy within correctional facilities remains unknown.
MethodsUsing Connecticut Department of Corrections (DOC) data from November 21st 2020 to June 15th 2021, we estimated the accuracy of a rapid assay, BinaxNOW, under three collection strategies: a single test in isolation, and two or three serial tests separated by 1-4 day intervals. Diagnostic accuracy metrics were estimated in relation to RT-PCRs collected within one day before the first or after the last included rapid antigen test in a series.
ResultsOf the 17,669 residents who contributed at least one RT-PCR or rapid antigen test during the study period, 3,979 contributed ≥1 paired rapid antigen test series. In relation to RT-PCR, the three-rapid-antigen-test strategy had a sensitivity of 89.6% (95% confidence interval: 86.1-92.6%) and specificity of 97.2% (CI: 95.1-98.3%). The sensitivities for the two- and one-rapid-antigen-test strategies were 75.2% and 52.8%, respectively, and the specificities were 98.5% and 99.4%, respectively. The sensitivity was higher among symptomatic residents and when the RT-PCR was collected before the rapid antigen tests.
ConclusionsWe found that serial collection of rapid antigen tests resulted in high diagnostic accuracy. These findings support serial testing within correctional facilities for outbreak investigation, screening, and when rapid detection is required (such as at intakes or transfers). | infectious diseases |
10.1101/2022.03.03.22271601 | Immunogenicity and reactogenicity against the SARS-CoV-2 variants following heterologous primary series involving CoronaVac and ChAdOx1 and BNT162b2 plus heterologous BNT162b2 booster vaccination: An open-label randomized study in healthy Thai adults. | We evaluated the immunogenicity and reactogenicity of heterologous COVID-19 primary series vaccination schedules. Participants were randomized to one of seven groups that received two-dose homologous BNT162b2 or heterologous combinations of CoronaVac, ChAdOx1 and BNT162b2, with 4 weeks interval. Of 210 participants, median age was 38 (19-60) years, 51% were female. The groups that received BNT162b2 as second dose induced the highest virus-specific IgG response against the ancestral strain [BNT162b2: geometric mean concentration (GMC) 2133-2249, 95%CI 1558 to 3055; ChAdOx1: 851-1201, 95%CI 649 to 1522; CoronaVac: 137-225, 95%CI 103-286 BAU/mL], neutralising antibodies (NAb) against Beta and Delta, and interferon gamma response. All groups induced low to negligible NAb against Omicron. A BNT162b2 booster (3rd dose) following heterologous CoronaVac and ChAdOx1 regimens induced >140-fold increase in NAb titres against Omicron. Our findings indicate that heterologous regimens using BNT162b2 as the second dose may be considered an alternative schedule to maximize immune response. | infectious diseases |
10.1101/2022.03.03.22271159 | Spike mutations of the SARS-CoV-2 Omicron: more likely to occur in the epitopes | Almost two years after the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreak in 2019, the pandemic continues worldwide. SARS-CoV-2 continues to mutate and evolve, which has further exacerbated the spread of the epidemic. The Omicron variant, which recently emerged in South Africa, spread quickly to other countries worldwide. However, the genetic characteristics of Omicron and their effect on epitopes are still unclear. In this study, we retrieved 800 SARS-CoV-2 full-length sequences from the GISAID database on 14 December 2021 (Alpha 110, Beta 101, Gamma 108, Delta 110, Omicron 107, Lambda 98, Mu 101, GH/490R 65). Overall, 1320 amino acid (AA) sites were mutated in these 800 SARS-CoV-2 sequences. Covariant network analysis showed that the covariant network of the Omicron variant was significantly different from those of other variants. Further, 218 of the 1320 AA sites occurred in the S gene, including 78 high-frequency mutations (>90%). Notably, we identified 25 unique AA mutations in Omicron, which may affect the transmission and pathogenicity of SARS-CoV-2. Finally, we analyzed the effect of Omicron on epitope peptides. As expected, 64.1% of mutations (25/39) in the Omicron variant were in epitopes, which was significantly higher than in other variants. These mutations may cause a poor response to vaccines against Omicron variants. In conclusion, the Omicron variant, as an emerging mutation, warrants alertness because it may lead to a poor vaccine response, and more data are needed to evaluate the virulence of and vaccine responses to this variant. | infectious diseases |
10.1101/2022.03.02.22271106 | Proteome reveals antiviral host response and NETosis during acute COVID-19 in high-risk patients | SARS-CoV-2 remains an acute threat to human health, endangering hospital capacities worldwide. Many studies have aimed at informing pathophysiologic understanding and identification of disease indicators for risk assessment, monitoring, and therapeutic guidance. While findings start to emerge in the general population, observations in high-risk patients with complex pre-existing conditions are limited.
To this end, we biomedically characterized quantitative proteomics in a hospitalized cohort of COVID-19 patients with mild to severe symptoms suffering from different (co)-morbidities in comparison to both healthy individuals and patients with non-COVID related inflammation. Deep clinical phenotyping enabled the identification of individual disease trajectories in COVID-19 patients. By the use of this specific disease phase assignment, proteome analysis revealed a severity dependent general type-2 centered host response side-by-side with a disease specific antiviral immune reaction in early disease. The identification of phenomena such as neutrophil extracellular trap (NET) formation and a pro-coagulatory response together with the regulation of proteins related to SARS-CoV-2-specific symptoms by unbiased proteome screening both confirms results from targeted approaches and provides novel information for biomarker and therapy development.
Graphical AbstractSARS-CoV-2 remains a challenging threat to our health care system, with many pathophysiological mechanisms not fully understood, especially in high-risk patients. Therefore, we characterized a cohort of hospitalized COVID-19 patients with multiple comorbidities by quantitative plasma proteomics and deep clinical phenotyping. Each patient's disease progression was determined, and the subsequently assigned proteome profiles were compared with a healthy and a chronically inflamed control cohort. The identified disease-phase- and severity-specific protein profiles revealed an antiviral immune response together with coagulation activation indicating NETosis, side-by-side with tissue remodeling related to the inflammatory signature. | respiratory medicine |
10.1101/2022.03.03.22271873 | Alzheimer's Disease Classification Using Cluster-based Labelling for Graph Neural Network on Tau PET Imaging and Heterogeneous Data | Graph neural networks (GNNs) offer the opportunity to provide new insights into the diagnosis, prognosis and contributing factors of Alzheimer's disease (AD). However, previous studies using GNNs on AD did not incorporate tau PET neuroimaging features or, more importantly, applied class labels based only on clinician diagnosis, which can be inconsistent. This study addresses these limitations by applying cluster-based labelling on heterogeneous data, including tau PET neuroimaging data, before applying a GNN for classification. The data comprised sociodemographic, medical/family history, cognitive and functional assessments, apolipoprotein E (APoE) ε4 allele status, and combined tau PET neuroimaging and MRI data. We identified 5 clusters embedded in a 5-dimensional nonlinear UMAP space. Further projection onto a 3-dimensional UMAP space for visualisation, and using association rule and information gain algorithms, revealed that the individual clusters have specific feature characteristics, specifically with respect to clinical diagnosis of AD, gender, parental history of AD, age, and underlying neurological risk factors. In particular, one cluster comprised mainly clinician diagnoses of AD. We re-labelled the diagnostic classes based around this AD cluster such that AD cases occurred only within this cluster, while those AD cases outside of it were re-labelled as a prodromal stage of AD. A post-hoc analysis supported the re-labelling of these cases given their similar brain tau PET levels, lying between those of the remaining AD and non-AD cases. We then trained a GNN model using the re-labelled data and compared the AD classification performance to that trained using clinician diagnostic labels.
We found that the GNN with re-labelled data had significantly higher average classification accuracy (p=0.011) and less variability (93.2±0.03%) than with clinician-labelled data (90.06±0.04%). During model training, the cluster-labelled GNN model also converged faster than that using clinician labels. Overall, our work demonstrates that more objective cluster-based labels are viable for training GNNs on heterogeneous data for diagnosing AD and cognitive decline, especially when aided by tau PET neuroimaging data. | health informatics
10.1101/2022.03.03.22271860 | Sero-surveillance for IgG to SARS-CoV-2 at antenatal care clinics in three Kenyan referral hospitals: repeated cross-sectional surveys 2020-21 | IntroductionThe high proportion of SARS-CoV-2 infections that have remained undetected presents a challenge to tracking the progress of the pandemic and estimating the extent of population immunity.
MethodsWe tested residual blood samples from women attending antenatal care services at three hospitals in Kenya between August 2020 and October 2021 using a validated IgG ELISA for the SARS-CoV-2 spike protein, and adjusted the results for assay sensitivity and specificity. We fitted a two-component mixture model as an alternative to the threshold analysis to estimate the proportion of individuals with past SARS-CoV-2 infection.
ResultsWe estimated seroprevalence in 2,981 women; 706 in Nairobi, 567 in Busia and 1,708 in Kilifi. By October 2021, 13% of participants were vaccinated (at least one dose) in Nairobi, 2% in Busia. Adjusted seroprevalence rose in all sites: from 50% (95%CI 42-58) in August 2020 to 85% (95%CI 78-92) in October 2021 in Nairobi; from 31% (95%CI 25-37) in May 2021 to 71% (95%CI 64-77) in October 2021 in Busia; and from 1% (95% CI 0-3) in September 2020 to 63% (95% CI 56-69) in October 2021 in Kilifi. Mixture modelling suggests that the adjusted cross-sectional prevalence estimates are underestimates; seroprevalence in October 2021 could be 74% in Busia and 72% in Kilifi.
ConclusionsThere has been substantial, unobserved transmission of SARS-CoV-2 in Nairobi, Busia and Kilifi Counties. Due to the length of time since the beginning of the pandemic, repeated cross-sectional surveys are now difficult to interpret without the use of models to account for antibody waning. | epidemiology |
10.1101/2022.03.03.22271862 | Burden of Disease from Contaminated Drinking Water in Countries with High Access to Safely Managed Water: A Systematic Review | BackgroundThe vast majority of residents of high-income countries (>90%) reportedly have high access to safely managed drinking water, the highest level of service under the Sustainable Development Goal (SDG) framework. Owing perhaps to the widely held perception of near universal access to high-quality water services in these countries, the burden of waterborne disease in these contexts is understudied.
MethodsWe conducted a systematic review of estimates of the disease burden attributed to drinking water in countries where >90% of the population has access to safely managed drinking water per official SDG monitoring. We searched PubMed, Web of Science, and SCOPUS for articles published before September 10, 2021.
FindingsWe identified 24 studies reporting estimates for the disease burden attributable to microbial contaminants. Across these studies, the population-weighted average burden of gastrointestinal illness risks attributed to drinking water was about 3,529 annual cases per 100,000 people. Beyond exposure to infectious agents, we also identified 10 studies reporting disease burden, predominantly, cancer risks, associated with chemical contaminants. Across these studies, the pooled population-weighted average of excess cancer cases attributable to drinking water was 1.8 annual cancer cases per 100,000 people.
InterpretationThese estimates exceed WHO-recommended normative targets for disease burden attributable to drinking water and highlight that there remains important preventable disease burden in these contexts. However, the available literature was scant and limited in geographic scope, disease outcomes, range of microbial and chemical contaminants, and inclusion of subpopulations that could most benefit from water infrastructure investments. These subpopulations include rural, low-income communities; Indigenous or Aboriginal peoples; and populations marginalized due to discrimination by race, ethnicity, or socioeconomic status. Studies quantifying drinking water-associated disease burden in countries with reportedly high access to safe drinking water, with a focus on specific subpopulations and promoting environmental justice, are needed.
Research in context. Evidence before this study: Although the burden of disease associated with unsafe drinking water in low- and middle-income countries has been subject of much research, there is uncertainty about the magnitude and nature of this disease burden in high-income countries and countries where access to safely managed drinking water is >90%. To assess the magnitude of this burden in countries with high levels of access to high quality water, we conducted a systematic review of estimates of the disease burden attributed to drinking water in countries where >90% of the population has access to safely managed drinking water as defined by the WHO/UNICEF Joint Monitoring Programme for Water Supply, Sanitation and Hygiene (JMP). We searched PubMed, Web of Science, and SCOPUS for relevant articles published before September 10, 2021. We searched for articles including "burden of disease" OR "disease burden" OR "gastrointestinal illness" anywhere in the text. From among these articles, we selected those with the terms "drinking water" OR "tap water." From this reduced set, we selected articles reporting on countries with >90% safe water access and including estimates of cases, hospitalizations, or deaths related to drinking water contamination. Additional articles were identified from consultations with subject-matter experts. Only articles in English were recovered, which may have biased our results towards specific countries.
Added value of this studyTo our knowledge, this is the first global review of its kind to identify burden of waterborne disease estimates associated with microbiological contaminants as well as chemical contaminants found in drinking water, specifically in countries with reportedly high access (>90%) to safely managed drinking water. Despite the perception that high quality water is ubiquitous in these countries, we found that burden estimates in these countries exceeded WHO-recommended targets for disease burden attributable to unsafe drinking water. This review also highlighted the gaps in our understanding of waterborne disease burden across and within countries where access to high quality drinking water is common.
Implications of all the available evidenceEven though the levels of access to safely managed drinking water are high in many high-income and some middle-income countries, there remains a potentially important preventable disease burden in these high-income countries. Our review suggests that further burden research expanding the scope of contaminants, disease outcomes, geographic area, and vulnerable subpopulations within countries is necessary. Currently, available evidence is insufficient to support rational decision-making and to prioritize public investment to protect the health of marginalized communities. These groups include rural, low-income communities; Indigenous or Aboriginal peoples; and populations marginalized due to discrimination by race, ethnicity, or socioeconomic status. | epidemiology |
10.1101/2022.03.03.22271313 | Open label phase I/II clinical trial and predicted efficacy of SARS-CoV-2 RBD protein vaccines SOBERANA 02 and SOBERANA Plus in children | ObjectivesTo evaluate heterologous scheme in children 3-18 y/o using two SARS-CoV-2 r-RBD protein vaccines.
MethodsA phase I/II open-label, adaptive and multicenter trial evaluated the safety and immunogenicity of two doses of SOBERANA 02 and a heterologous third dose of SOBERANA Plus in 350 children 3-18 y/o in Havana, Cuba. Primary outcomes were safety (in phase I) and safety/immunogenicity (in phase II) measured by anti-RBD IgG ELISA, molecular and live-virus neutralization tests and specific T-cell response. Immunobridging and prediction of efficacy were additional analyses.
ResultsLocal pain was the only adverse event with >10% frequency; none was serious or severe. Two doses of SOBERANA 02 elicited a humoral immune response similar to natural infection; the third dose significantly increased the response in all children, similar to that achieved in vaccinated young adults and higher than in convalescent children. The neutralizing titer was evaluated in a subset of participants: all had neutralizing antibodies vs. alpha and delta; 97.9% vs. beta. GMT was 173.8 (CI 95% 131.7; 229.5) vs. alpha, 142 (CI 95% 101.3; 198.9) vs. delta and 24.8 (CI 95% 16.8; 36.6) vs. beta. An efficacy over 90% was estimated.
ConclusionThe heterologous scheme was safe and immunogenic in children 3-18 y/o. Trial registry: https://rpcec.sld.cu/trials/RPCEC00000374 | pediatrics
10.1101/2022.03.03.22271827 | Immunogenicity and Safety of Booster Dose of S-268019-b or Tozinameran in Japanese Participants: An Interim Report of Phase 2/3, Randomized, Observer-Blinded, Noninferiority Study | In this randomized, observer-blinded, phase 2/3 study, S-268019-b (n=101), a recombinant spike protein vaccine, was analyzed for noninferiority versus tozinameran (n=103), when given as a booster ≥6 months after a 2-dose tozinameran regimen in Japanese adults without prior COVID-19 infection. Interim results showed noninferiority of S-268019-b versus tozinameran in co-primary endpoints for neutralizing antibodies on day 29: geometric mean titer (GMT) (124.97 versus 109.70; adjusted-GMT ratio [95% CI], 1.14 [0.94-1.39]; noninferiority P-value, <0.0001) and seroresponse rate (both 100%; noninferiority P-value, 0.0004). Both vaccines elicited anti-spike-protein immunoglobulin G antibodies, and produced T-cell response (n=29/group) and neutralizing antibodies against Delta and Omicron pseudovirus and live virus variants (n=24/group) in subgroups. Most participants reported low-grade reactogenicity on days 1-2, the most frequent being fatigue, fever, myalgia, and injection-site pain. No serious adverse events were reported. In conclusion, S-268019-b was safe and showed robust immunogenicity as a booster, supporting its use as a COVID-19 booster vaccine.
JRCT IDjRCT2031210470
Highlights:
- Third COVID-19 vaccine dose (booster) enhances immune response
- Interim phase 2/3 data for booster ≥6 months after the 2nd dose in Japan are shown
- S-268019-b was noninferior to tozinameran in inducing neutralizing antibodies
- Sera boosted with either vaccine neutralized Delta and Omicron virus variants
- S-268019-b was safe, and results support its use as a booster in vaccinated adults | infectious diseases
10.1101/2022.03.03.22271839 | Individualised Profiling of White Matter Organisation in Moderate-to-Severe Traumatic Brain Injury Patients Using TractLearn: A Proof-of-Concept Study | Approximately 65% of moderate-to-severe traumatic brain injury (m-sTBI) patients present with poor long-term behavioural outcomes, which can significantly impair activities of daily living. Numerous diffusion-weighted MRI studies have linked these poor outcomes to decreased white matter integrity of several commissural tracts, association fibres and projection fibres in the brain. However, these studies focused on group-based analyses, which are unable to deal with the substantial between-patient heterogeneity in m-sTBI. As a result, there is increasing interest in conducting individualised neuroimaging analyses. Here, we generated a detailed subject-specific characterisation of microstructural organisation of white matter tracts in 5 chronic patients with m-sTBI (29-49 y, 2 females). We developed an imaging analysis framework using fixel-based analysis and TractLearn to determine whether the values of fibre density of white matter tracts at the individual patient level deviate from the healthy control group (n = 12, 8 females, mean age = 35.7 y, range 25-64 y). Our individualised analysis confirmed unique white matter profiles, and the heterogeneous nature of m-sTBI to properly characterise the extent of brain abnormality. Future studies incorporating clinical data, as well as utilising larger reference samples and examining the test-retest reliability of the fixel-wise metrics are warranted. This proof-of-concept study suggests that these resulting individual profiles may assist clinicians in planning personalised training programs for chronic m-sTBI patients, which is necessary to achieve optimal behavioural outcomes and improved quality of life. | radiology and imaging
10.1101/2022.03.02.22271774 | Human cytomegalovirus strain diversity and dynamics reveal the donor lung as a major contributor after transplantation | Mixed human cytomegalovirus (HCMV) strain infections are frequent in lung transplant recipients (LTRs). To date, the influence of the donor (D) and recipient (R) HCMV-serostatus on intra-host HCMV strain composition and replication dynamics after transplantation is only poorly understood.
Here, we investigated ten pre-transplant lungs from HCMV-seropositive donors, and 163 sequential HCMV-DNA positive plasma and bronchoalveolar lavage samples from 50 LTRs with multiviremic episodes post-transplantation. The study cohort included D+R+ (38%), D+R- (36%), and D-R+ (26%) patients. All samples were subjected to quantitative genotyping by short amplicon deep sequencing, and 24 thereof were additionally PacBio long-read sequenced for genotype linkages.
We find that D+R+ patients show a significantly elevated intra-host strain diversity compared to D+R- and D-R+ patients (P=0.0089). Both D+ patient groups display significantly higher replication dynamics than D- patients (P=0.0061). Five out of ten pre-transplant donor lungs were HCMV-DNA positive, in three of which multiple HCMV strains were detected, indicating that multi-strain transmission via lung transplantation is likely. Using long reads, we show that intra-host haplotypes can share distinctly linked genotypes, which limits overall intra-host diversity in mixed infections. Together, our findings demonstrate donor-derived strains as a main source for increased HCMV strain diversity and dynamics post-transplantation, while a relatively limited number of intra-host strains may facilitate rapid adaptation to changing environments in the host. These results foster targeted strategies to mitigate the potential transmission of the donor strain reservoir with the allograft. | transplantation
10.1101/2022.03.03.22271349 | COVID-19 Hospitalisation in Portugal, the first year: Results from hospital discharge data | Using Portuguese diagnosis-related groups (DRGs) data we analysed the dynamics of the COVID-19 hospitalisation in Portugal, with the ultimate goal of estimating parameters to be used in COVID-19 mathematical modelling.
After processing the data, corresponding to the period between March 2020 and the end of March 2021, an exploratory analysis was conducted, in which the length of stay (LoS) in hospital care, the time until death (TuD), the number of admissions and mortality count were estimated and described with respect to the age and gender of the patients, but also by region of admission and evolution over time.
The median and mean LoS were estimated as 8 and 12.5 days (IQR: 4-15) for non-ICU patients, and as 18 and 24.3 days (IQR: 11-30) for ICU patients. The percentage of patients that, during their hospitalisation, required ICU care was determined to be 10.9%. In-hospital mortality of non-ICU patients (22.6%) and of ICU patients (32.8%) was also calculated. In a visual exploratory analysis we observed that these quantities changed with time, age group, gender and region.
Various univariable probability distributions were fitted to the LoS and TuD data, using maximum likelihood estimation (MLE) but also maximum goodness-of-fit estimation (MGE) to obtain the distribution parameters.
AMS Subject Classification: 62-07, 62E07, 62F10, 62P10 | epidemiology
10.1101/2022.03.06.22271797 | The reliability of two prospective cortical biomarkers for pain: EEG peak alpha frequency and TMS corticomotor excitability | Many pain biomarkers fail to move from discovery to clinical application, attributed to poor reliability and feasible classifications of at-risk individuals. Preliminary evidence has shown that higher pain sensitivity is associated with slow peak alpha frequency (PAF) and depression of corticomotor excitability (CME). The present study evaluated the reliability of these measures, specifically whether, over several days of pain, a) PAF remains stable and b) individuals show two stable and distinct CME responses: facilitation and depression. Seventy-five healthy participants were given an injection of nerve growth factor (NGF) into the right masseter muscle on Day 0 and Day 2, inducing sustained pain. Electroencephalography (EEG) to assess PAF and transcranial magnetic stimulation (TMS) to assess CME were recorded on Day 0, Day 2 and Day 5. PAF reliability was in the excellent range even without standard pre-processing and ~2 minutes recording length. Moreover, two distinct and stable CME responses were demonstrated: facilitation and depression. These findings support the notion that PAF is a stable trait characteristic, with reliability unaffected by pain, and excellent reliability achievable with minimal pre-processing and ~2 minutes recording, making it a readily translatable biomarker. Furthermore, the study showed novel evidence of two stable corticomotor adaptations to sustained pain. Overall, the study provides support for the reliability of PAF and CME as prospective cortical biomarkers. | pain medicine
10.1101/2022.03.03.22271885 | Machine Learning Based Reanalysis of Clinical Scores for Distinguishing Between Ischemic and Hemorrhagic Stroke in Low Resource Setting | BACKGROUNDIdentifying stroke and classifying it as ischemic or hemorrhagic using clinical scores alone faces two unaddressed issues. One pertains to over-estimation of the performance of the scores; the other involves the class-imbalanced nature of stroke data, which leads to biased accuracy. We conducted a quantitative comparison of existing scores, after correcting them for the above-stated issues. We explored the utility of Machine Learning theory to address overestimation of performance and class imbalance inherent in these clinical scores.
METHODSWe included validation studies of Siriraj (SS), Guys Hospital/Allen (GHS/AS), Greek (GS), and Besson (BS) Scores for stroke classification, from 2001-2021, identified from a systematic search on PubMed, ERIC, ScienceDirect, and IEEE-Xplore. From the included studies we extracted the reported cross-tabulations to identify the listed issues. Further, we mitigated these issues while recalculating all the performance metrics for a comparative analysis of the performance of SS, GHS/AS, GS, and BS.
RESULTSA total of 21 studies were included. Our calculated sensitivity range (IS-diagnosis) for SS is 40-90% (median 70%[IQR:57-73%], aggregate 71%[SD:15%]) as against reported 43-97% (78%[IQR:65-88%]), for GHS/AS 35-93% (64%[IQR:53-71%], 64%[SD:17%]) against 35-94% (73%[IQR:62-88%]), and for GS 60-74% (64%[IQR:62-69%], 69%[SD:7%]) against 74-94% (89%[IQR:81-92%]). Calculated sensitivity (HS-diagnosis), for SS, GHS/AS, and GS respectively, are 34-86% (59%[IQR:50-79%], 61%[SD:17%]), 20-73% (46%[IQR:34-64%], 44%[SD:17%]), and 11-80% (43%[IQR:27-62%], 51%[SD:35%]) against reported 50-95% (71%[IQR:64-82%]), 33-93% (63%[IQR:39-73%]), and 41-80% (78%[IQR:59-79%]). Calculated accuracy ranges, are 37-86% (67%[IQR:56-75%], 68%[SD:13%]), 40-87% (58%[IQR:47-61%], 59%[SD:14%]), and 38-76% (51%[IQR:45-63%], 61%[SD:19%]) while the weighted accuracy ranges are 37-85% (64%[IQR:54-73%], 66%[SD:12%]), 43-80% (53%[IQR:47-62%], 54%[SD:13%]), and 38-77% (51%[IQR:44-64%], 60%[SD:20%]). Only one study evaluated BS.
CONCLUSIONQuantitative comparison of existing scores indicated significantly lower ranges of performance metrics as compared to the ones reported by the studies. We conclude that published clinical scores for stroke classification over-estimate performance. We recommend inclusion of equivocal predictions while calculating performance metrics for such analysis. Further, the high variability in performance of clinical scores in stroke identification and classification could be improved upon by creating a global data-pool with statistically important attributes. Scores based on Machine Learning from such globally pooled data may perform better and generalise at scale. | neurology |
10.1101/2022.03.05.22271948 | A Systematic Review and Meta-analysis of Polycystic Ovary Syndrome and Mental Health among Black Asian Minority Ethnic populations | BackgroundPolycystic Ovary Syndrome (PCOS) is a chronic and common gynaecological condition impacting women of reproductive age. Women with PCOS have hormonal, ovulatory and metabolic dysfunction resulting in multiple symptoms. The correlation between hormonal imbalance and the impact on women's mental health (MH) has been researched for decades. However, the prevalence among different ethnicities has not been fully evaluated.
MethodsA systematic methodology was developed, and a protocol was published in PROSPERO (CRD42020210863) and a systematic review of publications between 1st January 1990-30th January 2021 was conducted. Multiple electronic databases were explored using keywords and MeSH terms. The finalised dataset was analysed using statistical methods such as random-effect models, subgroup analysis and sensitivity analysis.
FindingsWe included 30 studies reporting on 3,944 women with PCOS. The majority of studies addressed depression, anxiety, and common mental health disorders. Studies had fair to poor methodological quality and included observational studies and Randomised Clinical Trials (RCTs). Overall, 17% (95% CI: 7% to 29%) of women with PCOS have a clinical diagnosis of major or severe depression; 33% (95% CI: 26% to 40%) have elevated depressive symptoms or a clinical diagnosis of depression; 41% (95% CI: 28% to 54%) report anxiety symptoms; and 31% (95% CI: 15% to 51%) have a form of common mental health disorder or are taking psychiatric medication for anxiety and/or depression. The use of various tools to assess mental health symptoms was among the reasons for the substantial heterogeneity across studies.
InterpretationPCOS is associated with an increased risk of mental health disorders including depressive symptoms, anxiety, and other mental health conditions. While BAME populations account for about 20% of most of the samples studied, stratification by ethnicity was rarely attempted, which made it difficult to elucidate the MH impact of PCOS on different communities.
FundingNot applicable
PROSPERO registration numberCRD42020210863 | obstetrics and gynecology |
10.1101/2022.03.03.22271882 | Placental transcription profiling in 6-23 weeks' gestation reveals differential transcript usage in early development | The human placenta is a rapidly developing transient organ that is key to pregnancy success. Early development of the conceptus occurs in a low oxygen environment before oxygenated maternal blood begins to flow into the placenta at ~10-12 weeks gestation. This process is likely to substantially affect overall placental gene expression. Transcript variability underlying gene expression has yet to be profiled. In this study, accurate transcript expression profiles were identified for 84 human placental chorionic villus tissue samples collected across 6-23 weeks gestation. Differential gene expression (DGE), differential transcript expression (DTE) and differential transcript usage (DTU) between 6-10 weeks and 11-23 weeks gestation groups were assessed. In total, 229 genes had significant DTE yet no significant DGE. Integration of DGE and DTE analyses found that differential expression patterns of individual transcripts were commonly masked upon aggregation to the gene-level. Of the 611 genes that exhibited DTU, 534 had no significant DGE or DTE. The four most significant DTU genes, ADAM10, VMP1, GPR126, and ASAH1, were associated with hypoxia-responsive pathways. Transcript usage is a likely regulatory mechanism in early placentation. Identification of functional roles will facilitate new insight in understanding the origins of pregnancy complications. | obstetrics and gynecology
10.1101/2022.03.04.22271880 | Population-based analysis of radiation-induced gliomas after cranial radiotherapy for childhood cancers | BackgroundCranial radiotherapy (RT) is used to treat pediatric central nervous system (CNS) cancers and leukemias. RT carries a risk of secondary CNS malignancies, including radiation-induced gliomas, the epidemiology of which is poorly understood.
MethodsThis retrospective study using SEER registry data (1975-2016) included two cohorts. Cohort 1 included patients diagnosed with Grade III/IV or ungraded glioma as a second malignancy at least 2 years after receiving beam radiation and/or chemotherapy for a first malignancy diagnosed at ages 0-19 years, either a primary CNS tumor treated with RT (1a, n=57) or leukemia with unknown RT treatment (1b, n=20). Cohort 2 included patients with possible missed RIG who received RT for a primary CNS tumor diagnosed at 0-19 and then died of presumed progressive disease more than 5 years after diagnosis, since previous studies have documented many missed RIGs in this group (n=296). Controls (n=10,687) included all other patients ages 0-19 who received RT for a first CNS tumor or leukemia who did not fit inclusion criteria above.
ResultsFor Cohort 1 (likely/definite RIGs), 0.97% of patients receiving cranial RT went on to develop RIG. 3.39% of patients receiving cranial RT for primary CNS tumors fell in Cohort 2 (potential RIGs). Median latency to RIG diagnosis was 11.1 years; latency was significantly shorter for Cohort 1b (median 10.0, range 5.0-16.1) vs. 1a (12.0, 3.6-34.4, p=0.018). Median OS for Cohort 1 was 9.0 months. Receiving surgery, radiation, or chemotherapy was each associated with a non-statistically significant improvement in OS (p = 0.1-0.2). 1.8% of brain tumor deaths in the cohort fell in Cohort 1, with an additional 7.9% in Cohort 2.
ConclusionWithin the limitations of a population-based study, 1-4% of patients undergoing cranial RT for pediatric cancers later develop RIG, which is incurable and can occur anywhere from 3-35 years later. 2-10% of pediatric brain tumor deaths are attributable to RIG. Effective treatment of RIG remains unclear and is thus deserving of increased attention in preclinical and clinical studies. | oncology |
10.1101/2022.03.04.22271830 | Safety and Immunogenicity of a 100 μg mRNA-1273 Vaccine Booster for Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) | ImportanceDue to the emergence of highly transmissible SARS-CoV-2 variants, evaluation of boosters is needed.
ObjectivesEvaluate safety and immunogenicity of a 100-µg mRNA-1273 booster dose in adults.
DesignOpen-label, Phase 2/3 study.
SettingMulticenter study at 8 sites in the U.S.
ParticipantsThe mRNA-1273 100-µg booster was administered to adults who previously received a two-dose primary series of 100-µg mRNA-1273 in the phase 3 Coronavirus Efficacy (COVE) trial, at least 6 months earlier.
InterventionLipid nanoparticle containing 100 µg of mRNA encoding the spike glycoprotein of SARS-CoV-2 (Wuhan-HU-1).
Main Outcomes and MeasuresSolicited local and systemic adverse reactions, and unsolicited adverse events were collected after vaccination. Primary immunogenicity objectives were to demonstrate non-inferiority of the neutralizing antibody (nAb) response against SARS-CoV-2 based on the geometric mean titer (GMTs) and the seroresponse rates (SRRs) (booster dose vs. primary series in a historical control group). nAbs against SARS-CoV-2 variants were also evaluated.
ResultsThe 100-µg booster dose had a greater incidence of local and systemic adverse reactions compared to the second dose of mRNA-1273 as well as the 50-µg mRNA-1273 booster in separate studies. The geometric mean titers (GMTs; 95% CI) of SARS-CoV-2 nAbs against the ancestral SARS-CoV-2 at 28 days after the 100-µg booster dose were 4039.5 (3592.7,4541.8) and 1132.0 (1046.7,1224.2) at 28 days after the second dose in the historical control group [GMT ratio=3.6 (3.1,4.2)]. SRRs (95% CI) were 100% (98.6,100) at 28 days after the booster and 98.1% (96.7,99.1) 28 days after the second dose in the historical control group [percentage difference=1.9% (0.4,3.3)]. The GMT ratio (GMR) and SRR difference for the booster as compared to the primary series met the pre-specified non-inferiority criteria. Delta-specific nAbs also increased (GMT fold-rise=233.3) after the 100-µg booster of mRNA-1273.
Conclusions and RelevanceThe 100-µg mRNA-1273 booster induced a robust neutralizing antibody response against SARS-CoV-2, and reactogenicity was higher with the 100-µg booster dose compared to the authorized booster dose level in adults (50 µg). An mRNA-1273 100-µg booster dose can be considered when eliciting an antibody response might be challenging, such as in moderately or severely immunocompromised hosts.
Trial Registration: NCT04927065
Key PointsQuestion: What is the safety and immunogenicity of a booster dose of 100 μg of mRNA-1273 in adults who previously received the primary series of mRNA-1273?
Findings: In this open-label, Phase 2/3 study, the 100-μg booster dose of mRNA-1273 had a greater incidence of local and systemic adverse reactions compared to a 50-μg booster dose of mRNA-1273 or after the second dose of mRNA-1273 during the primary series. The 100-μg booster dose of mRNA-1273 induced a robust antibody response against the ancestral SARS-CoV-2 and variants.
Meaning: The mRNA-1273 100-μg booster dose might be considered when eliciting an antibody response might be challenging, such as in moderately or severely immunocompromised hosts. | infectious diseases
10.1101/2022.03.04.22271825 | Efficacy of LD Bio Aspergillus ICT Lateral flow assay for serodiagnosis of chronic pulmonary aspergillosis | BackgroundThe diagnosis of CPA relies on the detection of IgG Aspergillus antibody, which is not freely available, especially in resource-poor settings. Point-of-care tests like the LDBio Aspergillus ICT lateral flow assay, evaluated in only a few studies, have shown promising results for the diagnosis of CPA. However, no study has evaluated the diagnostic performance of the LDBio LFA in the setting of tuberculosis-endemic countries or compared it with that of IgG Aspergillus.
ObjectivesThis study aimed to evaluate the diagnostic performances of LDBio LFA in CPA and compare it with existing diagnostic algorithm utilising ImmunoCAP IgG Aspergillus.
MethodsSerial patients presenting with respiratory symptoms (cough, hemoptysis, fever, etc.) for >4 weeks were screened for eligibility. Relevant investigations, including direct microscopy and culture of respiratory secretions, IgG Aspergillus, chest imaging, etc., were done according to the existing algorithm. Serum from all patients was tested by LDBio LFA and IgG Aspergillus (ImmunoCAP Asp IgG), and their diagnostic performances were compared.
ResultsA total of 174 patients were included in the study with [~]66.7% patients having past history of tuberculosis. A diagnosis of CPA was made in 74 (42.5%) of patients. The estimated sensitivity and specificity of LDBio LFA was 67.6% (95% CI: 55.7%-78%) and 81% (95% CI: 71.9%-81%) respectively which increased to 73.3% (95% CI: 60.3%-83.9%) and 83.9% (95% CI: 71.7%-92.4%) respectively in patients with past history of tuberculosis. The sensitivity and specificity of IgG Aspergillus was 82.4% (95% CI: 71.8%-90.3%) and 82% (95% CI: 73.1-89%); 86.7% (95% CI: 75.4%-94.1%) and 80.4% (95% CI: 67.6%-89.8%) in the whole group and those with past history of tuberculosis respectively.
ConclusionsLDBio LFA is a point-of-care test with reasonable sensitivity and specificity. However, further tests may have to be done to rule-in or rule-out the diagnosis of CPA in the appropriate setting. | infectious diseases |
10.1101/2022.03.04.22271895 | Oxylipin profiling identifies a mechanistic signature of metabolic syndrome: results from two independent cohorts | BackgroundMetabolic syndrome (MetS) is a complex condition encompassing a constellation of cardiometabolic abnormalities. Integratively phenotyping the molecular pathways involved in MetS would help to deeply characterize its pathophysiology and to better stratify the risk of cardiometabolic diseases. Oxylipins are a superfamily of lipid mediators regulating most biological processes involved in cardiometabolic health.
MethodsA high-throughput validated mass spectrometry method allowing the quantitative profiling of over 130 oxylipins was applied to identify and validate the oxylipin signature of MetS in two independent case/control studies involving 476 participants.
ResultsWe have uncovered and validated an oxylipin signature of MetS (coined OxyScore) including 23 oxylipins and having high performances of classification and replicability (cross-validated AUCROC of 89%, 95% CI: 85%-93% and 78%, 95% CI: 72%-85% in the Discovery and Replication studies, respectively). Correlation analysis and comparison with a classification model incorporating both the oxylipins and the MetS criteria showed that the oxylipin signature brings consistent and complementary information supporting its clinical utility. Moreover, the OxyScore provides a unique mechanistic signature of MetS regarding the activation and/or negative feedback regulation of crucial molecular pathways that may help identify patients at higher risk of cardiometabolic diseases.
ConclusionOxylipin profiling identifies a mechanistic signature of metabolic syndrome that may help to enhance MetS phenotyping and ultimately to better predict the risk of cardiometabolic diseases via a better patient stratification. | endocrinology |
10.1101/2022.03.04.22271238 | The association between plasma chemokines and breast cancer risk and prognosis: A Mendelian randomization study | BackgroundDespite the potential role of several chemokines in the migration of cytotoxic immune cells to prohibit breast cancer cell proliferation, a comprehensive view of chemokines and the risk and prognosis of breast cancer is scarce, and little is known about their causal associations.
MethodsWith a two-sample Mendelian randomization (MR) approach, genetic instruments associated with 30 plasma chemokines were created. Their genetic associations with breast cancer risk and survival were extracted from the recent genome-wide association study, with available survival information for 96,661 patients. We further tested the associations between the polygenetic risk score (PRS) for chemokines and breast cancer in the UK Biobank cohort using logistic regression models. The association between chemokine expression in tumors and breast cancer survival was analyzed in the TCGA cohort with Cox regression models.
ResultsPlasma CCL5 was causally associated with the risk of breast cancer in the MR analysis, which was significant in the luminal and HER-2 enriched subtypes and further confirmed using PRS analysis (OR=0.94, 95% CI=0.89-1.00). A potential causal association with breast cancer survival was only found for plasma CCL19, especially for ER-positive patients. In addition, we also found an inverse association between CCL19 expression in tumors and breast cancer overall and relapse-free survival (HR=0.58, 95% CI=0.35-0.95).
ConclusionWe observed an inverse association between genetic predisposition to CCL5 and the risk of breast cancer, while CCL19 was associated with breast cancer survival. These associations suggested the potential of these chemokines as tools for breast cancer prevention and treatment. | epidemiology |
10.1101/2022.03.05.22271939 | Field evaluation of a novel, rapid diagnostic assay RLDT, and molecular epidemiology of enterotoxigenic E. coli among Zambian children presenting with diarrhea. | BackgroundEnterotoxigenic Escherichia coli (ETEC) is one of the top aetiologic agents of diarrhea in children under the age of 5 in low-middle income countries (LMICs). The lack of point of care diagnostic tools for routine ETEC diagnosis results in limited data regarding the actual burden and epidemiology in the endemic areas. We evaluated performance of the novel Rapid LAMP based Diagnostic Test (RLDT) for detection of ETEC in stool as a point of care diagnostic assay in a resource-limited setting.
MethodsWe conducted a cross-sectional study of 324 randomly selected stool samples from children under 5 presenting with moderate-to-severe diarrhea (MSD). The samples were collected between November 2012 and September 2013 at selected health facilities in Zambia. The RLDT was evaluated by targeting three ETEC toxin genes [heat-labile toxin (LT) and heat-stable toxins (STh and STp)]. Quantitative PCR was used as the "gold standard" to evaluate the diagnostic sensitivity and specificity of RLDT for detection of ETEC. We additionally described the prevalence and seasonality of ETEC.
ResultsFemale participants comprised 50.6% of the study population. The overall prevalence of ETEC was 19.8% by qPCR and 19.4% by RLDT. Children between 12 and 59 months had the highest prevalence, at 22%. The ETEC toxin distribution was LT (49%), ST (34%) and LT/ST (16%). The sensitivity and specificity of the RLDT compared to qPCR, using a Ct of 35 as the cutoff, were 90.7% and 97.5% for LT, 85.2% and 99.3% for STh, and 100% and 99.7% for STp, respectively.
ConclusionThe results of this study suggest that RLDT is sufficiently sensitive and specific and easy to implement in the endemic countries. Being rapid and simple, the RLDT also presents as an attractive tool for point-of-care testing at the health facilities and laboratories in the resource-limited settings.
Author SummaryETEC is one of the top causes of diarrheal diseases in low- and middle-income countries. The advancement of molecular diagnostics has made it possible to accurately detect ETEC in endemic areas. However, the complexity, infrastructure and cost implications of these tests have made it a challenge to routinely incorporate them in health facilities in endemic settings. The ETEC RLDT is a simple and cost-effective molecular tool that can be used to screen for ETEC in resource-limited settings. Here, we described the performance of the RLDT against qPCR as the gold standard. Our findings showed that the ETEC RLDT performs comparably to qPCR and would be a suitable screening tool in health facilities in resource-limited settings. | epidemiology
10.1101/2022.03.04.22271888 | A systematic review about skin lesions in children affected by Coronavirus Disease 2019 (excluding multisystem inflammatory syndrome in children): study protocol. | COVID-19 disease can give rise to a range of skin manifestations, some of which are specific to children and young patients. Given the different nature of the lesions in specific stages of the disease and the fact that they may be the only or predominant symptom of the disease, it is of great importance for the pediatrician and dermatologist to be familiar with COVID-19 skin lesions. The aim of this systematic review, conducted according to the PRISMA statement, is to investigate COVID-19-associated cutaneous involvement in children in terms of type of skin lesions, frequency, and time of onset, with the exclusion of those associated with multisystem inflammatory syndrome in children. A comprehensive literature search will be performed through PubMed between December 2019 and December 2021. The quality of each study will be assessed according to the STROBE tool for observational studies. This systematic review will provide evidence about the dermatological manifestations of COVID-19 disease in children published up to the end of 2021. Recognizing skin symptoms as potential manifestations of COVID-19 in children is important for a non-delayed diagnosis, with prompt activation of adequate tracing, isolation, and hygienic measures, and for timely treatment of unlikely but possible serious complications. | pediatrics
10.1101/2022.03.03.22271863 | Financing global health security: estimating the costs of pandemic preparedness in Global Fund eligible countries | IntroductionThe Global Fund to Fight AIDS, TB, and Malaria (the Global Fund) pivoted investments to support countries in their response to the COVID-19 pandemic. Recently, the Global Fund's Board approved global pandemic preparedness and response as part of their new six-year strategy from 2023-2028.
MethodsPrior research estimated that US$124 billion is required, globally, to build sufficient country-level capacity for health security, with US$76 billion needed over an initial three-year period. Action-based cost estimates generated from that research were coded as directly related, indirectly related, or unrelated to systems-strengthening efforts applicable to HIV, TB, and/or malaria.
ResultsOf approximately US$76 billion needed for country level capacity-building over the next three-year allocation period, we estimate that US$66 billion is needed in Global Fund-eligible countries, and over one-third relates directly or indirectly (US$6 billion and US$21 billion, respectively) to health systems strengthening efforts applicable to HIV, TB, and/or malaria disease programs currently supported by the Global Fund. Among these investments, cost drivers include financing for surveillance and laboratory systems, to combat antimicrobial resistance, and for training, capacity-building, and ongoing support for the healthcare and public health workforce.
ConclusionThis work highlights a potential strategic role for the Global Fund to contribute to health security while remaining aligned with its core mission. It demonstrates the value of action-based costing estimates to inform strategic investment planning in pandemic preparedness.
What is already known on this topicO_LIThe costs, globally, to build country-level public health capacity to address these gaps over the next five years have been previously estimated at US$96-$204 billion, with an estimated US$63-131 billion in investments required over the next three years.
C_LIO_LIResearch conducted prior to the COVID-19 pandemic indicated that over one-third of the Global Fund's budgets in 10 case-study countries aligned with health security priorities articulated by the Joint External Evaluation, particularly in the areas of laboratory systems, antimicrobial resistance, and workforce development.
C_LI
What this study addsO_LIWe estimate that over 85% of investments needed to build national capacities in health security, globally, over the next three years are in countries eligible for Global Fund support.
C_LIO_LIAreas of investment opportunity aligned with the Global Fund's core mandate include financing for surveillance and laboratory systems, combating antimicrobial resistance, and developing and supporting robust healthcare and public health workforces.
C_LI
How this study might affect research, practice or policyO_LIIn aggregate, global-level data highlight areas of opportunity for the Global Fund to expand and further develop its support of global health security in areas aligned with its mandate and programmatic scope.
C_LIO_LISuch investment opportunities have implications not only for existing budgeting and allocation processes, but also for implementation models, partners, programming, and governance structures, should these areas of potential expansion be prioritized.
C_LIO_LIThis work emphasizes a role for targeted, action-based cost estimation to identify gaps and to inform strategic investment decisions in global health.
C_LI | public and global health |
10.1101/2022.03.04.22271920 | Patients at risk of pulmonary fibrosis Post Covid-19: Epidemiology, pulmonary sequelaes and humoral response | BackgroundThe COVID-19 pandemic is one of the greatest public health problems. Our aims were to describe the epidemiological characteristics and to determine the amount of protective antibodies and their persistence after a primary COVID-19 infection in patients at risk of pulmonary fibrosis.
MethodsDescriptive epidemiological and follow-up study of the humoral response in patients at risk of post-COVID-19 pulmonary fibrosis, hospitalized between March and October 2020, who were followed up for one year after hospital discharge.
Results72 patients participated in the study; 52 had pre-existing chronic comorbidities. COVID-19 clinical severity was rated as mild in 6%, moderate in 58% and severe in 36%. After a year of follow-up, forty percent had pulmonary sequelae, the most frequent of which (20%) was mild pulmonary fibrosis. No case of reinfection was detected. All patients presented RBD IgG antibodies and 88% presented IgA antibodies after 8-9 months. The amount of RBD IgG was similar at 4-5 and 8-9 months post-COVID. There was no difference in the level of RBD IgG according to the severity of COVID-19 (p=0.441, p=0.594).
ConclusionsMild pulmonary fibrosis as a sequela is exceptional, yet it was detected in a high percentage of these patients. The amount of RBD IgG is maintained throughout the convalescent phase and seems to protect against new reinfections despite emerging viral variants. However, it does not seem to predict whether or not pulmonary fibrosis develops. | respiratory medicine
10.1101/2022.03.04.22271892 | A rapid review of the effectiveness of alternative education delivery strategies in medical, dental, nursing and pharmacy education during the COVID-19 pandemic | BackgroundEducation delivery in higher education institutions was severely affected by the COVID-19 pandemic, with emergency remote teaching developed and adapted promptly for the circumstances. This rapid review investigated the effectiveness of alternative education delivery strategies during the pandemic for medical, dental, nursing and pharmacy students, to help plan and adapt further education provision.
MethodsWe included 23 primary studies in undergraduate education, all published in 2020-2021; no relevant UK-based or postgraduate studies were found. Included studies comprised 10 single-cohort descriptive studies; 11 comparative descriptive studies; and two RCTs. There was considerable variability in terms of students, type of distance learning, platforms used and outcome measures.
ResultsIn medicine (n=14), self-reported competency and confidence, and demonstrable suturing skills were achieved through participating in remote learning. However, lower levels of knowledge were obtained by students who received virtual or blended learning compared to in-person teaching (low-very low confidence). Using bespoke interactive platforms in undergraduate medical training was superior to standard video (low confidence) or textbook presentations (very low confidence).
In dentistry (n=2), remote learning led to knowledge gained (low confidence), but self-reported practical and interpersonal skills were lower with remote rather than in-person learning (very low confidence).
In nursing (n=3), remote learning, when compared to in-person, resulted in similar knowledge and self-reported competency levels (very low confidence) pre-COVID, but confidence was higher when learning or assessment was conducted virtually (low confidence).
In pharmacy (n=4), virtual learning was associated with higher skills, but lower knowledge compared to in-person, pre-COVID; self-reported competency and confidence scores were similar between the two groups (very low confidence).
ConclusionsRemote teaching was valued, and learning was achieved, but the comparative effectiveness of virtual versus in-person teaching is less clear. Supplementary alternative or in-person practical sessions may be required post-emergency to address learning needs for some disadvantaged student groups. | medical education |
10.1101/2022.03.05.22271944 | Elevated Levels of IL-6 in IgA Nephropathy patients are induced by an Epigenetically-Driven Mechanism Triggered by Viral and Bacterial RNA | Recently, several models have been proposed to describe the pathogenesis of Immunoglobulin A nephropathy (IgAN), among them the multi-hit and the gut-microbiota models. These models explain the pathogenesis of IgAN as caused by the production of aberrant IgA, but further predisposing factors are believed to be present, including immunological, genetic, environmental, or nutritional factors that can influence the pathogenesis and that could be useful for the development of precision nephrology and personalized therapy.
Recently, the role of IL-6 in the pathogenesis has become increasingly important. IL-6 is essential for glomerular immunoglobulin A deposition and the development of renal pathology in Cd37-deficient mice, even if the reason why levels of IL-6 are elevated in IgAN patients is not well understood. One plausible hypothesis for the high levels of IL-6 in IgAN comes from our recent whole-genome DNA methylation screening in IgAN patients, which identified, among others, a hypermethylated region comprising Vault RNA 2-1 (VTRNA2-1), a non-coding RNA also known as the precursor of miR-886 (pre-mi-RNA). Consistently, VTRNA2-1 expression was found to be down-regulated in IgAN patients.
Here we confirm that VTRNA2-1 is expressed at low levels in IgAN subjects compared to HS, and we found that also in transplanted IgAN patients (TP-IgAN), compared to non-IgAN transplanted patients (TP), the VTRNA2-1 transcript was expressed at very low levels. We found that in IgAN patients with downregulated VTRNA2-1, PKR is overactivated, coherently with the role of VTRNA2-1, which binds to PKR and inhibits its phosphorylation. The loss of the natural restraint exerted by VTRNA2-1 causes, in turn, the activation of CREB by PKR. We found CREB, a classical cAMP-inducible CRE-binding factor that interacts with a region of the IL-6 promoter and leads to IL-6 production, overactivated both in IgAN and in TP-IgAN patients. Indeed, in the same patients, we found elevated levels of IL-6 correlating with CREB and PKR phosphorylation.
Since PKR is normally activated by bacterial and viral RNA, we hypothesized that these microorganisms can further activate the PKR/CREB/IL-6 pathway, leading to an excess of IL-6 production. This would explain the high levels of IL-6, the involvement of infections in the disease, the cases of IgAN associated with COVID-19 infection and with COVID-19 RNA vaccination, and recent data showing microbiota involvement in IgAN.
Indeed, we found that both RNA poly(I:C) and COVID-19 RNA-vaccine stimulation significantly increase IL-6 levels in IgAN patient PBMCs.
The PKR/CREB/IL-6 pathway may also be very important in the setting of renal transplantation. We found that this pathway is upregulated in IgAN transplanted patients as well. Recent studies showed that the cumulative risk of IgA nephropathy recurrence increases after transplant and is associated with a 3.7-fold greater risk of graft loss. Finally, we showed that IL-6 secretion can be reduced by the PKR inhibitor imoxin.
In conclusion, the discovery of the upregulated VTRNA2-1/PKR/CREB/IL-6 pathway in IgAN patients may provide a novel approach to treat the disease and may be useful for the development of precision nephrology and personalized therapy, possibly by checking the VTRNA2-1 methylation level in IgAN patients. | nephrology
10.1101/2022.03.04.22271901 | Understanding the comorbidity between posttraumatic stress severity and coronary artery disease using genome-wide information and electronic health records | BackgroundThe association between coronary artery disease (CAD) and posttraumatic stress disorder (PTSD) contributes to the high morbidity and mortality observed among affected individuals. To understand the dynamics underlying PTSD-CAD comorbidity, we conducted a genetically-informed causal inference analysis using large-scale genome-wide association (GWA) statistics and follow-up analysis using electronic health records (EHR) and PTSD Checklist (PCL-17 or PCL-6) assessments available from the Million Veteran Program (MVP) and the UK Biobank (UKB), respectively.
MethodsWe used GWA statistics from MVP, UKB, the Psychiatric Genomics Consortium, and the CARDIoGRAMplusC4D Consortium to perform a bidirectional, two-sample Mendelian randomization (MR) analysis to assess cause-effect relationships between CAD and PTSD. We also conducted a pleiotropic meta-analysis to investigate loci with concordant vs. discordant effects between the traits investigated. Leveraging individual-level information derived from MVP and UKB EHRs, we assessed longitudinal changes in the association between CAD and posttraumatic stress severity.
FindingsWe observed a genetic correlation of CAD with PTSD case-control and quantitative outcomes, ranging from 0.18 to 0.32. Our two-sample MR showed a significant bidirectional relationship between CAD and PTSD symptom severity. Genetically-determined PCL-17 total score was associated with increased CAD risk (odds ratio=1.04; 95% confidence interval, 95%CI=1.01-1.06). Conversely, CAD genetic liability was associated with reduced PCL-17 total score (beta=-0.42; 95%CI=-0.04 - -0.81). These estimates were consistent across datasets and were not affected by heterogeneity or horizontal pleiotropy. The pleiotropic meta-analysis between PCL-17 and CAD identified loci with concordant effects enriched for the platelet amyloid precursor protein pathway (p=2.97x10-7) and negative regulation of astrocyte activation (p=2.48x10-6), while discordant-effect loci were enriched for biological processes related to lipid metabolism (e.g., triglyceride-rich lipoprotein particle clearance, p=1.61x10-10). The EHR-based follow-up analysis highlighted that earlier CAD diagnosis is associated with increased PCL total score later in life, while lower PCL total score was associated with increased risk of a later CAD diagnosis (Mann-Kendall trend test: MVP tau=0.932, p<2x10-16; UKB tau=0.376, p=0.005).
InterpretationOur results highlight a complicated relationship between PTSD and CAD that may be affected by the long-term consequences of CAD on the mental health of the individuals affected.
FundingThis research was supported by funding from the VA Cooperative Studies Program (CSP, no. CSP575B) and the Veterans Affairs Office of Research and Development MVP (grant nos. MVP000 and VA Merit MVP025). | genetic and genomic medicine |
10.1101/2022.03.04.22271813 | Investigation of Genetic Variants and Causal Biomarkers Associated with Brain Aging | Delta age is a biomarker of brain aging that captures differences between the chronological age and the predicted biological brain age. Using multimodal data of brain MRI, genomics, and blood-based biomarkers and metabolomics in UK Biobank, this study investigates an explainable and causal basis of high delta age. A visual saliency map of brain regions showed that lower volumes in the fornix and the lower part of the thalamus are key predictors of high delta age. Genome-wide association analysis of the delta age using the SNP array and whole-exome sequencing (WES) data identified novel associated genes related to carcinogenesis, immune response, and neuron survival. Mendelian randomization (MR) for all metabolomic biomarkers and blood-related phenotypes showed that immune-related phenotypes have a causal impact on increasing delta age. Our analysis revealed regions in the brain that are susceptible to the aging process and provided evidence of the causal and genetic connections between immune responses and brain aging. | genetic and genomic medicine |
10.1101/2022.03.04.22270966 | Social connections at work and mental health during the first wave of the COVID-19 pandemic: Evidence from employees in Germany | Empirical evidence on the social and psychological impact of the COVID-19 pandemic in the workplace and the resulting consequences for the mental health of employees is still sparse. As a result, research on this subject is urgently needed to develop appropriate countermeasures. This study builds on Person-Environment fit theory to investigate social connections at work and mental health during the first wave of the COVID-19 pandemic. It analyses employees' needs for social connections and how social connections affect different mental health measures. Survey data were collected in May 2020 in an online survey of employees across Germany and analysed using response surface analysis. Mental health was measured as positive mental health and mental health disorders. Social connections were measured as social support and social interactions. 507 employees participated in the survey, and more than one third reported having less social support and social interaction at work than they desired (p < .001). This was associated with a decrease in mental health. In contrast, having more than the desired amount of social support was associated with a decrease, and having more than the desired amount of social interaction with an increase, in mental health. This study provides important early evidence on the impact of the first wave of the COVID-19 pandemic in the workplace. With it, we aim to stimulate further research in the field and provide early evidence on potential mental health consequences of social distancing measures - while also opening avenues to combat them. | health economics
10.1101/2022.03.04.22271701 | Health indicators as a measure of individual health status: public perspectives | ObjectiveWe examined the perspectives of the general public on 29 health indicators to provide evidence for further prioritizing the indicators, which were obtained from a literature review. Health status is distinct from disease status, which can refer, for example, to different stages of cancer.
DesignThis study uses a cross-sectional design.
SettingAn online survey was administered through Ohio University, ResearchMatch, and Clemson University.
ParticipantsParticipants included the general public who are 18 years or older. A total of 1153 valid responses were included in the analysis.
Primary outcomes measuresParticipants rated the importance of the 29 health indicators. The data were aggregated, cleaned, and analyzed in three ways: (1) to determine the agreement among the three samples on the importance of each indicator (IV = the three samples, DV = individual survey responses); (2) to examine the mean differences between the retained indicators with agreement across the three samples (IV = the identified indicators, DV = individual survey responses); and (3) to rank the groups of indicators after grouping the indicators with no mean differences (IV = the groups of indicators, DV = individual survey responses).
ResultsThe descriptive statistics indicate that the top five rated indicators are drug or substance abuse, smoking or tobacco use, alcohol abuse, major depression, and diet and nutrition. The importance of 13 of the 29 health indicators was agreed upon among the three samples. The 13 indicators were categorized into seven groups. Groups 1-3 were rated significantly higher than Groups 4-7.
ConclusionsThis study provides a baseline for further prioritizing the 29 health indicators, which can be used by electronic health record or personal health record system developers. Currently, self-rated health status is used predominantly. Our study provides a foundation to track and measure preventive services more accurately and to develop an individual health status index.
Strengths and limitations of this study: (1) The work establishes the foundation to measure individual health status more comprehensively and objectively; (2) it reflects perspectives from three communities with a relatively large sample size; (3) it provides the foundation to prioritize the 29 health indicators further; (4) with real-world longitudinal data, the public perspective data on individual health status measurement could be verified and validated further. | health informatics
10.1101/2022.03.05.22271945 | EFFECTS OF HIV INFECTION AND FORMER COCAINE DEPENDENCE ON NEUROANATOMICAL MEASURES AND NEUROCOGNITIVE PERFORMANCE | Evidence from animal research, postmortem analyses, and MRI investigations indicates substantial morphological alteration in brain structure as a function of HIV or cocaine dependence (CD). Although previous research on HIV+ active cocaine users suggests the presence of deleterious morphological effects in excess of either condition alone, an as-yet unexplored question is whether there is a similar deleterious interaction in HIV+ individuals with CD who are currently abstinent. To this end, the combinatorial effects of HIV and CD history on regional brain volume, cortical thickness, and neurocognitive performance were examined across four groups of participants: healthy controls, HIV-negative individuals with a history of CD, HIV+ individuals with no history of CD, and HIV+ individuals with a history of CD. Our analyses revealed no statistical evidence of an interaction between both conditions on brain morphometry and neurocognitive performance. While, descriptively, individuals with comorbid HIV and a history of CD exhibited the lowest neurocognitive performance scores, using Principal Component Analysis of neurocognitive testing data, HIV was identified as a primary driver of neurocognitive impairment. Higher caudate volume was evident in CD+ participants relative to CD- participants. Taken together, these data provide evidence of independent effects of HIV and CD history on brain morphometry and neurocognitive performance in cocaine-abstinent individuals. | hiv aids
10.1101/2022.03.04.22271915 | Severity of SARS-CoV-2 Infection in Pregnancy in Ontario: A Matched Cohort Analysis | BackgroundPregnancy represents a physiological state associated with increased vulnerability to severe outcomes from infectious diseases, both for the pregnant person and developing infant. The SARS-CoV-2 pandemic may have important health consequences for pregnant individuals, who may also be more reluctant than non-pregnant people to accept vaccination. We sought to estimate the degree to which increased severity of SARS-CoV-2 outcomes can be attributed to pregnancy.
MethodsOur study made use of a population-based SARS-CoV-2 case file from Ontario, Canada. Due to both varying propensity to receive vaccination, and changes in dominant circulating viral strains over time, a time-matched cohort study was performed to evaluate the relative risk of severe illness in pregnant women with SARS-CoV-2 compared to other SARS-CoV-2 infected women of childbearing age (10 to 49 years old). Risk of severe SARS-CoV-2 outcomes (hospitalization or intensive care unit (ICU) admission) was evaluated in pregnant women and time-matched non-pregnant controls using multivariable conditional logistic regression.
ResultsCompared to the rest of the population, non-pregnant women of childbearing age had an elevated risk of infection (standardized morbidity ratio (SMR) 1.28), while risk of infection was reduced among pregnant women (SMR 0.43). After adjustment for age, comorbidity, healthcare worker status, vaccination, and infecting viral variant, pregnant women had a markedly elevated risk of hospitalization (adjusted OR 4.96, 95% CI 3.86 to 6.37) and ICU admission (adjusted OR 6.58, 95% CI 3.29 to 13.18). The relative increase in hospitalization risk associated with pregnancy was greater in women without comorbidities than in those with comorbidities (P for heterogeneity 0.004).
InterpretationA time-matched cohort study suggests that while pregnant women may be at a decreased risk of infection relative to the rest of the population, their risk of severe illness is markedly elevated if infection occurs. Given the safety of SARS-CoV-2 vaccines in pregnancy, risk-benefit calculus strongly favours SARS-CoV-2 vaccination in pregnant women. | infectious diseases |
10.1101/2022.03.04.22271890 | Protective antibodies and T cell responses to Omicron variant three months after the booster dose of BNT162b2 vaccine | The high number of mutations in the Omicron variant of SARS-CoV-2 cause its immune escape when compared to the earlier variants of concern (VOC). At least three vaccine doses are required for the induction of Omicron neutralizing antibodies and further reducing the risk for hospitalization. However, most studies have focused on the immediate response after the booster vaccination, while the duration of the immune response is less well known. We here studied longitudinal serum samples from vaccinated individuals up to three months after their third dose of the BNT162b2 vaccine for their capacity to produce protective antibodies and T cell responses to Wuhan and Omicron variants. After the second dose, the antibody levels to the unmutated spike protein were significantly decreased at three months, and only 4% of the individuals were able to inhibit Omicron spike interaction compared to 47%, 38%, and 14% of individuals inhibiting wild-type, delta, and beta variant spike proteins. Nine months after the second vaccination, the antibody levels were similar to the levels before the first dose and none of the sera inhibited SARS-CoV-2 wild-type or any of the three VOCs. The booster dose remarkably increased antibody levels and their ability to inhibit all variants. Three months after the booster, the antibody levels and inhibition activity were trending lower but remained elevated and were not significantly different from their peak values at two weeks after the third dose. Although responsiveness towards mutated spike peptides was lost in less than 20% of vaccinated individuals, the wild-type spike-specific CD4+ and CD8+ memory T cells were still present at three months after the booster vaccination in the majority of studied individuals.
Our data show that two doses of the BNT162b2 vaccine are not sufficient to protect against the Omicron variant; however, the spike-specific antibodies and T cell responses are strongly elicited and well maintained three months after the third vaccination dose. | infectious diseases
10.1101/2022.03.04.22271540 | The mutational steps of SARS-CoV-2 to become like Omicron within seven months: the story of immune escape in an immunocompromised patient. | We studied a unique case of prolonged viral shedding in an immunocompromised patient that generated a series of SARS-CoV-2 immune escape mutations over a period of seven months. During the persisting SARS-CoV-2 infection seventeen non-synonymous mutations were observed, thirteen (13/17; 76.5%) of which occurred in the genomic region coding for spike. Fifteen (15/17; 88.2%) of these mutations have already been described in the context of variants of concern and include the prominent immune escape mutations S:E484K, S:D950N, S:P681H, S:N501Y, S:del(9), N:S235F and S:H655Y. Fifty percent of all mutations acquired by the investigated strain (11/22) are found in similar form in the Omicron variant of concern. The study shows the chronology of the evolution of intra-host mutations, which can be seen as the straight mutational response of the virus to specific antibodies and should therefore be given special attention in the rating of immune escape mutations of SARS-CoV-2. | infectious diseases |
10.1101/2022.03.05.22271962 | Q3: An ICU Acuity Score for Exploring the Effects of Changing Acuity Throughout the Stay | IntroWe develop a straightforward ICU acuity score (Q3) that is calculated every 3 hours throughout the first 10 days of the ICU stay. Q3 uses components of the Oxford Acute Severity of Illness Score (OASIS) and incorporates a new component score for vasopressor use. In well-behaved models of ICU mortality, the marginal effects of Q3 are significant across the first 10 days of the ICU stay. In separate models, Q3 has significant effects on ICU remaining length of stay. The score has implications for work that seeks to explain modifiable mechanisms of changing acuity during the ICU stay.
MethodsFrom the MIMIC-III database, select ICU stays from 5 adult ICUs were partitioned into consecutive 3-hour segments. For each segment, the number of vasopressors administered and all 10 OASIS component scores were computed. Models of ICU mortality were estimated. OASIS component effects were examined, and vasopressor count bins were weighted. Q3 was defined as the sum of 8 retained OASIS components and a new weighted vasopressor score. Models of ICU mortality quadratic in Q3 were estimated for each of the first 10 ICU days and were subjected to segment-level, location-specific tests of discrimination and calibration on newer ICU stays. Marginal effects of Q3 were computed at different levels of Q3 by ICU day, and average marginal effects of Q3 were computed at each location by ICU day. ICU remaining length of stay (LOS) models were also estimated and the effects of Q3 were similarly examined.
ResultsDaily ICU mortality models using Q3 show no evidence of misspecification (Pearson-Windmeijer p>0.05, Stukel p>0.05), discriminate well in all ICUs over the first 10 days (AUROC ~0.72-0.85), and are generally well calibrated (Hosmer-Lemeshow p>0.05, Spiegelhalter's z p>0.05). A one-unit increase in Q3 from typical levels (Q3=15) affects the odds of ICU mortality by a factor of 1.14 to 1.20, depending on ICU day (p<0.001), and the ICU remaining LOS by 5.8 to 9.6% (p<0.001). On average, a one-unit increase in Q3 increases the probability of ICU mortality by 1 to 2 percentage points depending on location and ICU day, and ICU remaining LOS by 5 to 10 hours depending on location and ICU day.
ConclusionQ3 significantly affects ICU mortality and ICU remaining LOS in different ICUs across the first 10 days of the ICU stay. Depending on location and ICU day, a one-unit increase in Q3 increases the probability of ICU mortality by 1-2 percentage points and ICU remaining LOS by 5 to 10 hours. Unlike static acuity scores or those updated infrequently, Q3 could be used in explanatory models to help elucidate mechanisms of changing ICU acuity. | intensive care and critical care medicine
10.1101/2022.03.04.22271893 | Platelet Dependent Thrombogenicity, Inflammation and Five Year Mortality in Patients with Non-ST Elevation Acute Coronary Syndrome and Type 2 Diabetes Mellitus: An Observational Study | Platelet dependent thrombogenicity is higher in type 2 diabetes mellitus (T2DM), but there are no data on the association between thrombus burden and mortality or vascular inflammation.
We studied 80 patients who received guideline based therapy (40 T2DM) after NSTE-ACS. Platelet dependent thrombus was assessed using the validated ex vivo Badimon Chamber and all patients were followed up for five years. Bio-markers of coagulation and inflammation were measured.
There were 17 (21.3%) deaths in total at 5 years (12 in the T2DM group, 71% of all deaths). Among the patients who died, thrombus burden was higher (μm² per mm, mean ± SD: 12,164 ± 3,235 vs 9,751 ± 4,333). Serum inflammatory cytokines, P-selectin and soluble CD40 ligand were higher in T2DM. Univariate analysis showed that thrombus area, diabetic status, age, interleukin-1 and P-selectin were independent predictors of mortality. Inflammatory biomarkers were associated with thrombus burden. If these findings can be confirmed in large-scale studies, they may lead to novel therapeutic strategies, especially in the current COVID-19 pandemic, which has renewed interest in this interplay between thrombus and inflammation. | cardiovascular medicine
10.1101/2022.03.05.22271952 | Sociodemographic Differences in Population-Level Immunosenescence in Older Age | BackgroundThe COVID-19 pandemic has highlighted the urgent need to understand variation in immunosenescence at the population-level. Thus far, population patterns of immunosenescence are not well described.
MethodsWe characterized measures of immunosenescence from newly released venous blood data from the nationally representative U.S. Health and Retirement Study (HRS) of individuals ages 56 years and older.
FindingsMedian values of the CD8+:CD4+, EMRA:Naive CD4+ and EMRA:Naive CD8+ ratios were higher among older participants and were lower in those with additional educational attainment. Generally, minoritized race and ethnic groups had immune markers suggestive of a more aged immune profile: Hispanics had a CD8+:CD4+ median value of 0.37 (95% CI: 0.35, 0.39) compared to 0.30 in Whites (95% CI: 0.29, 0.31). Blacks had the highest median value of the EMRA:Naive CD4+ ratio (0.08; 95% CI: 0.07, 0.09) compared to Whites (0.03; 95% CI: 0.028, 0.033). In regression analyses, race/ethnicity and education were associated with large differences in the immune ratio measures after adjustment for age and sex. For example, each additional level of education was associated with roughly an additional decade of immunological age, and the racial/ethnic differences were associated with two to four decades of additional immunological age.
InterpretationOur study provides novel insights into population variation in immunosenescence. This has implications for both risk of age-related disease and vulnerability to novel pathogens (e.g., SARS-CoV-2).
FundingThis study was partially funded by the U.S. National Institutes of Health, National Institute on Aging R00AG062749. AEA and GAN acknowledge support from the National Institutes of Health, National Institute on Aging R01AG075719. JBD acknowledges support from the Leverhulme Trust (Centre Grant) and the European Research Council grant ERC-2021-CoG-101002587
Research in context. Evidence before this study: Alterations in immunity with chronological aging have been consistently demonstrated across human populations. Some of the hallmark changes in adaptive immunity associated with aging, termed immunosenescence, include a decrease in naive T-cells, an increase in terminal effector memory cells, and an inverted CD8:CD4 T cell ratio. Several studies have shown that social and psychosocial exposures can alter aspects of immunity and lead to increased susceptibility to infectious diseases.
Added value of this studyWhile chronological age is known to impact immunosenescence, there are no studies examining whether social and demographic factors independently impact immunosenescence. This is important because immunosenescence has been associated with greater susceptibility to disease and lower immune response to vaccination. Identifying social and demographic variability in immunosenescence could help inform risk and surveillance efforts for preventing disease in older age. To our knowledge, we present one of the first large-scale population-based investigations of the social and demographic patterns of immunosenescence among individuals ages 50 and older living in the US. We found differences in the measures of immunosenescence by age, sex, race/ethnicity, and education, though the magnitude of these differences varied across immune measures and sociodemographic subgroup. Those occupying more disadvantaged societal positions (i.e., minoritized race and ethnic groups and individuals with lower educational attainment) experience greater levels of immunosenescence compared to those in less disadvantaged positions. Of note, the magnitude of effect of sociodemographic factors was larger than chronological age for many of the associations.
Implications for practice or policy and future researchThe COVID-19 pandemic has highlighted the need to better understand variation in adaptive and innate immunity at the population-level. While chronological age has traditionally been thought of as the primary driver of immunological aging, the magnitude of differences we observed by sociodemographic factors suggests an important role for the social environment in the aging human immune system. | epidemiology |
10.1101/2022.03.03.22271672 | Epidemiology of Respiratory Viruses in Acute Respiratory Illness in Malaysia: Patterns, Seasonality and Age Distribution | ObjectivesAcute Respiratory Infections (ARIs) are one of the leading causes of childhood morbidity and mortality worldwide. However, there is limited surveillance data on the epidemiological burden of respiratory pathogens in tropical countries like Malaysia. This study aims to estimate the prevalence of respiratory pathogens causing ARIs among children aged <18 years old in Malaysia and their epidemiological characteristics.
MethodsNasopharyngeal swab specimens received at 12 laboratories located in different states of Malaysia from 2015-2019 were studied. Detection of 18 respiratory pathogens was performed using multiplex PCR.
ResultsData from a total of 23,306 paediatric patients who presented with ARI over a five-year period were studied. Of these, 18,538 (79.5%) tested positive. The most prevalent respiratory pathogens detected in this study were enterovirus/rhinovirus (6,837/23,000; 29.7%), influenza virus (5,176/23,000; 22.5%) and respiratory syncytial virus (RSV) (3,652/23,000; 15.9%). Throughout the study period, RSV demonstrated the most pronounced seasonality, with peak infection occurring from July to September, whereas the influenza virus was detected year-round in Malaysia. No seasonal variation was noted in the other respiratory pathogens. The risk of RSV hospitalisation was significantly higher in children aged less than two years old, whereas hospitalisation rates for the influenza virus peaked in children aged 3-6 years old. | epidemiology
ConclusionsThis study provides insight into the epidemiology and the seasonality of the causative pathogens of ARI among the paediatric population in Malaysia. Knowledge of seasonal respiratory pathogens epidemiological dynamics will facilitate the identification of a target window for vaccination. | epidemiology |
10.1101/2022.03.04.22271903 | Prenatal vitamin intake in first month of pregnancy and DNA methylation in cord blood and placenta in two prospective cohorts | BackgroundPrenatal vitamin use is recommended before and during pregnancies for normal fetal development. Prenatal vitamins do not have a standard formulation, but many contain calcium, folic acid, iodine, iron, omega-3 fatty acids, zinc, and vitamins A, B6, B12, and D, and usually they contain higher concentrations of folic acid and iron than regular multivitamins in the U.S. Nutrient levels can impact epigenetic factors such as DNA methylation, but relationships between maternal prenatal vitamin use and DNA methylation have been relatively understudied. We examined use of prenatal vitamins in the first month of pregnancy in relation to cord blood and placenta DNA methylation in two prospective pregnancy cohorts: the Early Autism Risk Longitudinal Investigation (EARLI) and Markers of Autism Risk in Babies - Learning Early Signs (MARBLES) studies.
ResultsIn placenta, prenatal vitamin intake was marginally associated with -0.52% (95% CI: -1.04, 0.01) lower mean array-wide DNA methylation in EARLI, and associated with -0.60% (-1.08, -0.13) lower mean array-wide DNA methylation in MARBLES. There was little consistency in the associations between prenatal vitamin intake and single DNA methylation site effect estimates across cohorts and tissues, with only a few overlapping sites with correlated effect estimates. However, the single DNA methylation sites with p-value<0.01 (EARLI cord nCpGs=4,068, EARLI placenta nCpGs=3,647, MARBLES cord nCpGs=4,068, MARBLES placenta nCpGs=9,563) were consistently enriched in neuronal developmental pathways.
ConclusionsTogether, our findings suggest that prenatal vitamin intake in the first month of pregnancy may be related to lower placental global DNA methylation and related to DNA methylation in brain-related pathways in both placenta and cord blood. | epidemiology |
10.1101/2022.03.04.22271918 | Bladder Cancer Prognosis Using Deep Neural Networks and Histopathology Images | Recent studies indicate bladder cancer is among the top 10 most common cancers in the world [1]. Bladder cancer frequently recurs, and prognostic judgments may vary among clinicians. Classification of histopathology slides is essential for accurate prognosis and effective treatment of bladder cancer patients, as a favorable prognosis might help to inform less aggressive treatment plans. Developing automated and accurate histopathology image analysis methods can help pathologists in determining the prognosis of bladder cancer. In this study, we introduced Bladder4Net, a deep learning pipeline to classify whole-slide histopathology images of bladder cancer into two classes: low-risk (combination of PUNLMP and low-grade tumors) and high-risk (combination of high-grade and invasive tumors). This pipeline consists of 4 convolutional neural network (CNN) based classifiers to address the difficulties of identifying PUNLMP and invasive classes. We evaluated our pipeline on 182 independent whole-slide images from the New Hampshire Bladder Cancer Study (NHBCS) [22] [23] [24] collected from 1994 to 2004 and 378 external digitized slides from The Cancer Genome Atlas (TCGA) database [26]. The weighted average F1-score of our approach was 0.91 (95% confidence interval (CI): 0.86-0.94) on the NHBCS dataset and 0.99 (95% CI: 0.97-1.00) on the TCGA dataset. Additionally, we computed Kaplan-Meier survival curves for patients predicted as high-risk versus those predicted as low-risk. For the NHBCS test set, patients predicted as high-risk had worse overall survival than those predicted as low-risk, with a Log-rank P-value of 0.004. If validated through prospective trials, our model could be used in clinical settings to improve patient care. | pathology
10.1101/2022.03.04.22271917 | Social capital, urbanization level, and COVID-19 vaccination uptake in the United States: A national level analysis | Vaccination remains the most promising mitigation strategy for the COVID-19 pandemic. However, existing literature shows significant disparities in vaccination uptake in the United States. Using publicly available national-level data, we aimed to explore whether county-level social capital can further explain disparities in vaccination uptake rates after adjusting for demographic and social determinants of health (SDOH) variables, and whether the association between social capital and vaccination uptake varies by urbanization level. Bivariate analyses and hierarchical multivariable quasi-binomial regression analysis were conducted, and the regression analysis was then stratified by urban-rural status. The current study suggests that social capital contributes significantly to the disparities in vaccination uptake in the US. The results of the stratified analysis show common predictors of vaccine uptake but also suggest varying patterns by urbanization level in the associations of COVID-19 vaccination uptake with SDOH and social capital factors. The study provides a new perspective on addressing disparities in vaccination uptake through fostering social capital within communities, which may inform tailored public health intervention efforts to enhance social capital and promote vaccination uptake. | public and global health