id | title | abstract | category
---|---|---|---|
10.1101/2020.12.01.20241836 | Assessing the risk of vaccine-driven virulence evolution in SARS-CoV-2 | How might COVID-19 vaccines alter selection for increased SARS-CoV-2 virulence, or lethality? Framing current evidence surrounding SARS-CoV-2 biology and COVID-19 vaccines in the context of evolutionary theory indicates that prospects for virulence evolution remain uncertain. However, differential effects of vaccinal immunity on transmission and disease severity between respiratory compartments could select for increased virulence. To bound expectations for this outcome, we analyze an evo-epidemiological model. Synthesizing model predictions with vaccine efficacy data, we conclude that while vaccine-driven virulence evolution remains a theoretical risk, it is unlikely to threaten prospects for herd immunity in immunized populations. Given that this event would nevertheless impact unvaccinated populations, virulence should be monitored to facilitate swift mitigation efforts.
Significance statement: Vaccines can provide personal and population-level protection against infectious disease, but these benefits can exert strong selective pressures on pathogens. Virulence, or lethality, is one pathogen trait that can evolve in response to vaccination. We investigated whether COVID-19 vaccines could select for increased SARS-CoV-2 virulence by reviewing current evidence about vaccine efficacy and SARS-CoV-2 biology in the context of evolutionary theory, and subsequently analyzing a mathematical model. Our findings indicate that while vaccine-driven virulence evolution in SARS-CoV-2 is a theoretical risk, the consequences of this event would be limited for vaccinated populations. However, virulence evolution should be monitored, as the ramifications of a more virulent strain spreading into an under-vaccinated population would be more severe. | infectious diseases |
10.1101/2020.12.01.20242172 | Implications of delayed reopening in controlling the COVID-19 surge in Southern and West-Central USA | In the wake of the rapid surge in COVID-19 infected cases seen in Southern and West-Central USA in the period of June-July 2020, there is an urgent need to develop robust, data-driven models to quantify the effect which early reopening had on the infected case count increase. In particular, it is imperative to address the question: How many infected cases could have been prevented, had the worst affected states not reopened early? To address this question, we have developed a novel COVID-19 model by augmenting the classical SIR epidemiological model with a neural network module. The model decomposes the contribution of quarantine strength to the infection time series, allowing us to quantify the role of quarantine control and the associated reopening policies in the US states which showed a major surge in infections. We show that the upsurge in the infected cases seen in these states is strongly correlated with a drop in the quarantine/lockdown strength diagnosed by our model. Further, our results demonstrate that in the event of a stricter lockdown without early reopening, the number of active infected cases recorded on 14 July could have been reduced by more than 40% in all states considered, with the actual number of infections reduced being more than 100,000 for the states of Florida and Texas. As we continue our fight against COVID-19, our proposed model can be used as a valuable asset to simulate the effect of several reopening strategies on the infected count evolution for any region under consideration. | epidemiology |
10.1101/2020.12.01.20241778 | High efficacy of saliva in detecting SARS-CoV-2 by RT-PCR in adults and children | Background: RT-PCR of nasopharyngeal swabs (NPS) is the acknowledged gold standard for the detection of SARS-CoV-2 infection. Rising demands for repetitive screens and mass-testing necessitate, however, the development of additional test strategies. Saliva may serve as an alternative to NPS as its collection is simple, non-invasive and amenable to mass- and home-testing, but rigorous validation of saliva, particularly in children, is missing.
Methods: We conducted a large-scale head-to-head comparison of SARS-CoV-2 detection by RT-PCR in saliva and nasopharyngeal swab (NPS) of 1270 adults and children reporting to outpatient test centers and an emergency unit for an initial SARS-CoV-2 screen. The saliva collection strategy developed utilizes common, low-cost plastic tubes, does not create biohazard waste at collection and was tailored for self-collection and suitability for children.
Results: In total, 273 individuals tested SARS-CoV-2 positive in either NPS or saliva. SARS-CoV-2 RT-PCR results in the two specimens showed a high agreement (Overall Percent Agreement = 97.8%). Despite lower viral loads in the saliva of both adults and children, detection of SARS-CoV-2 in saliva compared well to NPS (Positive Percent Agreement = 92.5%). Importantly, in children, SARS-CoV-2 infections were more often detected in saliva than NPS (Positive Predictive Value = 84.8%), underlining that NPS sampling in children can be challenging.
Conclusions: The comprehensive parallel analysis reported here establishes saliva as a generally reliable specimen for the detection of SARS-CoV-2, with particular advantages for testing children, and is readily applicable for increasing and facilitating repetitive and mass-testing in adults and children.
Article Summary Main Points: Comparison with nasopharyngeal swabs in a large test center-based study confirms that saliva is a reliable and convenient material for the detection of SARS-CoV-2 by RT-PCR in adults and increases detection efficacy in children. | infectious diseases |
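A note on the agreement statistics quoted in the record above: overall and positive percent agreement are simple functions of the paired 2x2 counts from the two specimen types. The sketch below is a minimal illustration with hypothetical counts, not the study's data.

```python
# Illustrative computation of paired-agreement statistics between two
# specimen types (NPS vs saliva). Counts are hypothetical, not the study's.

def agreement(both_pos, nps_only, saliva_only, both_neg):
    """Overall and positive percent agreement, NPS taken as the comparator."""
    total = both_pos + nps_only + saliva_only + both_neg
    opa = (both_pos + both_neg) / total        # concordant results / all results
    ppa = both_pos / (both_pos + nps_only)     # saliva positives among NPS positives
    return 100 * opa, 100 * ppa

opa, ppa = agreement(both_pos=90, nps_only=8, saliva_only=4, both_neg=898)
print(f"OPA = {opa:.1f}%, PPA = {ppa:.1f}%")
```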
10.1101/2020.12.02.20234989 | Identifying US Counties with High Cumulative COVID-19 Burden and Their Characteristics | Identifying areas with high COVID-19 burden and their characteristics can help improve vaccine distribution and uptake, reduce burdens on health care systems, and allow for better allocation of public health intervention resources. Synthesizing data from various government and nonprofit institutions for 3,142 United States (US) counties as of 12/21/2020, we studied county-level characteristics that are associated with cumulative case and death rates using regression analyses. Our results showed that counties that are more rural, counties with more White/non-White segregation, and counties with higher percentages of people of color, in poverty, with no high school diploma, and with medical comorbidities such as diabetes and hypertension are associated with higher cumulative COVID-19 case and death rates. We identify the hardest-hit counties in the US using model-estimated case and death rates, which provide more reliable estimates of cumulative COVID-19 burdens than those using raw observed county-specific rates. Identification of counties with high disease burdens and understanding the characteristics of these counties can help inform policies to improve vaccine distribution, deployment and uptake, prevent overwhelming health care systems, and enhance testing access, personal protective equipment access, and other resource allocation efforts, all of which can help save more lives for vulnerable communities.
Significance statement: We found that counties that are more rural, counties with more White/non-White segregation, and counties with higher percentages of people of color, in poverty, with no high school diploma, and with medical comorbidities such as diabetes and hypertension are associated with higher cumulative COVID-19 case and death rates. We also identified individual counties with high cumulative COVID-19 burden. Identification of counties with high disease burdens and understanding the characteristics of these counties can help inform policies to improve vaccine distribution, deployment and uptake, prevent overwhelming health care systems, and enhance testing access, personal protective equipment access, and other resource allocation efforts, all of which can help save more lives for vulnerable communities. | epidemiology |
10.1101/2020.12.01.20241364 | Post-infectious inflammatory disease in MIS-C features elevated cytotoxicity signatures and autoreactivity that correlates with severity | Multisystem inflammatory syndrome in children (MIS-C) is a life-threatening post-infectious complication occurring unpredictably weeks after mild or asymptomatic SARS-CoV-2 infection in otherwise healthy children. Here, we define immune abnormalities in MIS-C compared to adult COVID-19 and pediatric/adult healthy controls using single-cell RNA sequencing, antigen receptor repertoire analysis, unbiased serum proteomics, and in vitro assays. Despite no evidence of active infection, we uncover elevated S100A-family alarmins in myeloid cells and marked enrichment of serum proteins that map to myeloid cells and pathways including cytokines, complement/coagulation, and fluid shear stress in MIS-C patients. Moreover, NK and CD8 T cell cytotoxicity genes are elevated, and plasmablasts harboring IgG1 and IgG3 are expanded. Consistently, we detect elevated binding of serum IgG from severe MIS-C patients to activated human cardiac microvascular endothelial cells in culture. Thus, we define immunopathology features of MIS-C with implications for predicting and managing this SARS-CoV-2-induced critical illness in children. | pediatrics |
10.1101/2020.12.02.20241703 | The contribution of evolutionary game theory to understanding and treating cancer | Evolutionary game theory mathematically conceptualizes and analyzes biological interactions where one's fitness not only depends on one's own traits, but also on the traits of others. Typically, the individuals are not overtly rational and do not select, but rather inherit their traits. Cancer can be framed as such an evolutionary game, as it is composed of cells of heterogeneous types undergoing frequency-dependent selection. In this article, we first summarize existing works where evolutionary game theory has been employed in modeling cancer and improving its treatment. Some of these game-theoretic models suggest how one could anticipate and steer cancer's eco-evolutionary dynamics into states more desirable for the patient via evolutionary therapies. Such therapies offer great promise for increasing patient survival and decreasing drug toxicity, as demonstrated by some recent studies and clinical trials. We discuss the clinical relevance of the existing game-theoretic models of cancer and its treatment, and opportunities for future applications. Moreover, we discuss the developments in cancer biology that are needed to better utilize the full potential of game-theoretic models. Ultimately, we demonstrate that viewing tumors with an evolutionary game theory approach has medically useful implications that can inform and create a lockstep between empirical findings and mathematical modeling. We suggest that cancer progression is an evolutionary game and needs to be viewed as such. | oncology |
10.1101/2020.12.02.20235879 | Predictive modeling of morbidity and mortality in COVID-19 hospitalized patients and its clinical implications. | Clinical data on 3,740 de-identified COVID-19-positive patients treated at NYU Langone Health (NYULH) were collected between January and August 2020. An XGBoost model trained on clinical data from the final 24 hours excelled at predicting mortality (AUC=0.92, specificity=86% and sensitivity=85%). Respiration rate was the most important feature, followed by SpO2 and age 75+. The model's performance in predicting the deceased outcome extended to 5 days prior, with AUC=0.81, specificity=70%, sensitivity=75%. When only using clinical data from the first 24 hours, AUCs of 0.79, 0.80, and 0.77 were obtained for the deceased, ventilated, or ICU admitted outcomes, respectively. Although respiration rate and SpO2 levels offered the highest feature importance, other canonical markers including diabetic history, age and temperature offered minimal gain. When lab values were incorporated, prediction of mortality benefited the most from blood urea nitrogen (BUN) and lactate dehydrogenase (LDH). Features predictive of morbidity included LDH, calcium, glucose, and C-reactive protein (CRP). Together, this work summarizes efforts to systematically examine the importance of a wide range of features across different endpoint outcomes and at different hospitalization time points. | health informatics |
10.1101/2020.12.02.20238907 | Peripheral and lung resident T cell responses against SARS-CoV-2 | Considering that SARS-CoV-2 interacts with the host at the respiratory tract mucosal interface, T cells strategically placed within these surfaces, namely resident memory T cells, will be essential to limit viral spread and disease. Importantly, these cells are mostly non-recirculating, which reduces the window of opportunity to examine circulating lymphocytes in blood as they home to the lung parenchyma. Here, we demonstrate that viral specific T cells can migrate and establish in the lung as resident memory T cells remaining detectable up to 10 months after initial infection. Moreover, focusing on the acute phase of the infection, we identified virus-specific T cell responses in blood with functional, migratory and apoptotic patterns modulated by viral proteins and associated with clinical outcome. Our study highlights IL-10 secretion by virus-specific T cells, associated with a better outcome, and the persistence of resident memory T cells as key players for future protection against SARS-CoV-2 infection. | infectious diseases |
10.1101/2020.12.02.20239194 | Identifying those at risk of reattendance at discharge from emergency departments using explainable machine learning | Short-term reattendances to emergency departments are a key quality of care indicator. Identifying patients at increased risk of early reattendance could help reduce the number of missed critical illnesses and could reduce avoidable utilization of emergency departments by enabling targeted post-discharge intervention. In this manuscript we present a retrospective, single-centre study where we created and evaluated an extreme gradient boosted decision tree model trained to identify patients at risk of reattendance within 72 hours of discharge from an emergency department (University Hospitals Southampton Foundation Trust, UK). Our model was trained using 35,447 attendances by 28,945 patients and evaluated on a hold-out test set featuring 8,847 attendances by 7,237 patients. The set of attendances from a given patient appeared exclusively in either the training or the test set. Our model was trained using both visit-level variables (e.g., vital signs, arrival mode, and chief complaint) and a set of variables available in a patient's electronic patient record, such as age and any recorded medical conditions. On the hold-out test set, our highest performing model obtained an AUROC of 0.747 (95% CI: 0.722-0.773) and an average precision of 0.233 (95% CI: 0.194-0.277). These results demonstrate that machine-learning models can be used to classify patients, with moderate performance, into low and high-risk groups for reattendance. We explained our model's predictions using SHAP values, a concept developed from coalitional game theory, capable of explaining predictions at an attendance level. We demonstrated how clustering techniques can be used to investigate the different sub-groups of explanations present in our patient cohort. | emergency medicine |
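For readers unfamiliar with the pipeline described in the record above, the following is a minimal sketch of attendance-level SHAP explanation of a gradient-boosted model followed by clustering of the explanation vectors. It uses synthetic data and hypothetical feature semantics; it is not the authors' code.

```python
# Sketch: explain a gradient-boosted classifier per-attendance with SHAP,
# then cluster the explanation vectors to find sub-groups of explanations.
import numpy as np
import shap
import xgboost
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # stand-in visit/EPR features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in reattendance label

model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)  # one row per attendance

# Cluster explanation vectors: attendances in the same cluster are flagged
# as high/low risk for similar reasons.
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(shap_values)
print(shap_values.shape, np.bincount(clusters))
```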
10.1101/2020.12.02.20242610 | The Moderating Role of Parental Sleep Knowledge on Children with Developmental Disabilities and Their Parents' Sleep | Background: Children with intellectual and developmental difficulties often experience sleep problems, which in turn may impact parental sleep patterns. This study explored the role of parental sleep knowledge as a moderator on the relationship between child sleep and parental sleep impairment.
Methods: 582 parents or caregivers (92.6% mothers) of children with different developmental disabilities (mean age = 9.34 years; 29.5% female), such as Down syndrome, participated in an online survey. Multiple regression analysis was conducted.
Results: Parental sleep knowledge of child sleep was a moderating variable in the relationship between child nocturnal sleep duration and parental sleep impairment. Although sleep knowledge was generally high in this sample, two specific knowledge gaps were identified, namely child sleep duration requirements and the recognition of signs of a well-rested child.
Conclusion: This study has provided evidence that increased parental sleep knowledge can positively impact both child and parental sleep outcomes. | psychiatry and clinical psychology |
10.1101/2020.12.04.20243923 | Correction of human forehead temperature variations measured by non-contact infrared thermometer | Elevated body temperature (fever) can be a common symptom of a medical condition, such as a viral or bacterial infection, including SARS-CoV-2 or influenza. Non-contact infrared thermometers can measure forehead temperature in a timely manner and were used to perform fast fever screening in populations. However, forehead temperature measurements differ greatly from basal body temperatures and are subject to large perturbations from the environment. Here we gathered a dataset of N=18,024 measurements using the same precision infrared sensor in different locations while tracking outside temperature, room temperature, time of measurement, and identity. We propose a method able to extract and remove the influence of external perturbations and, after calibration and temperature correction, set the threshold for fever based on local statistics to 37.38°C. This method can help manufacturers and decision-makers build and use more accurate tools so as to maximize both the sensitivity and specificity of the screening protocol.
| public and global health |
10.1101/2020.12.02.20242651 | Inflated false-negative rates in pooled RT-PCR tests of SARS-CoV-2 | PCR testing is an important tool to mitigate outbreaks of infectious diseases. One way of increasing testing throughput is by simultaneously testing multiple samples for the presence of a pathogen, a technique known as pooling. During the current COVID-19 pandemic, individuals are rapidly tested for the presence of SARS-CoV-2 in large numbers. Since testing is often a bottleneck in mitigating the spread of SARS-CoV-2, pooling is increasing in popularity. Most analyses of the error rates of pooling schemes assume that including more than a single infected sample in a pooled test does not increase the probability of a positive outcome. We challenge this assumption with experimental data and suggest a novel probabilistic model for the outcomes of pooled tests. As an application, we analyze the false-negative rates of one common pooling scheme known as Dorfman pooling. We show that the false-negative rates of Dorfman pooling increase when the prevalence of infection decreases. However, low infection prevalence is exactly the condition under which Dorfman pooling achieves its highest throughput. We therefore urge cautious use of pooling and the development of pooling schemes that correctly account for test error rates. | health informatics |
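To make the pooling argument in the record above concrete, here is a minimal illustrative model, an assumption for exposition rather than the paper's fitted model, in which a pool containing k infected samples escapes detection with probability q^k. Under this model the false-negative rate of Dorfman pooling rises as prevalence falls, because low-prevalence pools rarely contain more than one infected sample, reproducing the paper's qualitative finding.

```python
# Illustrative Dorfman-pooling false-negative rate for an infected individual,
# averaged over the number of other infected samples in their pool.
# Assumptions: pool of size n tests negative with probability q**k when it
# holds k infected samples; the individual follow-up test misses with prob q.
from scipy.stats import binom

def dorfman_fnr(prevalence, pool_size=8, q=0.1):
    fnr = 0.0
    for j in range(pool_size):  # j = other infected samples in the pool
        k = j + 1               # total infected, incl. the index sample
        p_j = binom.pmf(j, pool_size - 1, prevalence)
        pool_neg = q ** k
        fnr += p_j * (pool_neg + (1 - pool_neg) * q)
    return fnr

for p in (0.001, 0.01, 0.05):
    print(f"prevalence {p:.3f}: FNR = {dorfman_fnr(p):.3f}")  # falls as p rises
```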
10.1101/2020.12.02.20240648 | The impact of population-wide rapid antigen testing on SARS-CoV-2 prevalence in Slovakia | Slovakia conducted multiple rounds of population-wide rapid antigen testing for SARS-CoV-2 in late 2020, combined with a period of additional contact restrictions. Observed prevalence decreased by 58% (95% CI: 57-58%) within one week in the 45 counties that were subject to two rounds of mass testing, an estimate that remained robust when adjusting for multiple potential confounders. Adjusting for epidemic growth of 4.4% (1.1-6.9%) per day preceding the mass testing campaign, the estimated decrease in prevalence compared to a scenario of unmitigated growth was 70% (67-73%). Modelling suggests that this decrease cannot be explained solely by infection control measures, but requires the additional impact of isolation as well as quarantine of household members of those testing positive. | epidemiology |
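As a worked check of the growth adjustment described in the record above: assuming exponential counterfactual growth at 4.4% per day over the one-week window stated in the abstract (the exact combination method is our assumption), the 58% observed decrease translates to roughly the reported 70% decrease versus unmitigated growth.

```python
# Worked check: combine the observed decrease with counterfactual growth.
import math

observed_decrease = 0.58
growth_rate, days = 0.044, 7

remaining = 1 - observed_decrease              # observed prevalence ratio
counterfactual = math.exp(growth_rate * days)  # unmitigated growth factor
adjusted = 1 - remaining / counterfactual
print(f"adjusted decrease = {adjusted:.0%}")   # ~69%, matching the ~70% reported
```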
10.1101/2020.12.02.20242669 | Electrocorticography and stereo EEG provide distinct measures of brain connectivity: implications for network models | Brain network models derived from graph theory have the potential to guide functional neurosurgery, and to improve rates of post-operative seizure freedom for patients with epilepsy. A barrier to applying these models clinically is that intracranial EEG electrode implantation strategies vary by center, region and country, from cortical grid & strip electrodes (ECoG), to purely stereotactic depth electrodes (SEEG), to a mixture of both. To determine whether models derived from one type of study are broadly applicable to others, we investigate the differences in brain networks mapped by ECoG and SEEG in a cohort of patients who underwent surgery for temporal lobe epilepsy and achieved a favorable outcome. We show that networks derived from ECoG and SEEG define distinct relationships between resected and spared tissue, which may be driven by sampling bias of temporal depth electrodes in patients with predominantly cortical grids. We propose a method of correcting for the effect of internodal distance that is specific to electrode type and explore how additional methods for spatially correcting for sampling bias affect network models. Ultimately, we find that smaller surgical targets tend to have lower connectivity with respect to the surrounding network, challenging notions that abnormal connectivity in the epileptogenic zone is typically high. Our findings suggest that effectively applying computational models to localize epileptic networks requires accounting for the effects of spatial sampling, particularly when analyzing both ECoG and SEEG recordings in the same cohort, and that future network studies of epilepsy surgery should also account for differences in focality between resection and ablation. We propose that these findings are broadly relevant to intracranial EEG network modeling in epilepsy and represent an important step toward translating such models into patient care.
Author summary: Bernabei et al. report that electrocorticography and stereo EEG provide different quantifications of epileptogenic zone connectivity due to differences in electrode type and implant patterns. After correcting for sampling differences between modalities, they find that more focal forms of epilepsy surgery target regions of weaker connectivity compared to the remaining epileptic network. | neurology |
10.1101/2020.12.02.20242685 | Determinants of COVID-19 Incidence and Mortality in the US: Spatial Analysis | OBJECTIVES: The US continues to account for the highest proportion of the global Coronavirus Disease-2019 (COVID-19) cases and deaths. Currently, it is important to contextualize the spread and success of mitigation efforts. The objective of this study was to assess the ecological determinants (policy, health behaviors, socio-economic, physical environment, and clinical care) of COVID-19 incidence and mortality in the US.
METHODS: Data from the New York Times COVID-19 repository (01/21/2020-10/27/2020), 2020 County Health Rankings, 2016 County Presidential Election Returns, and 2018-2019 Area Health Resource File were used. County-level logged incidence and mortality rate/million were modeled using the Spatial Autoregressive Combined model and spatial lag model.
RESULTS: Counties with higher proportions of racial minorities (African American β = 0.007, Native American β = 0.008, Hispanic β = 0.015), non-English speakers (β = 0.010), population density ([logged] β = 0.028), and air pollution (β = 0.062) had significantly higher COVID-19 incidence rates. Similarly, counties with higher proportions of Republican voters (β = 0.017), excessive drinkers (β = 0.107), children in single-parent households (β = 0.018), uninsured adults (β = 0.038), racial minorities (African American β = 0.032, Native American β = 0.034, Hispanic β = 0.037), and females (β = 0.101), as well as higher population density ([logged] β = 0.270), air pollution (β = 0.130), and non-White/White residential segregation (β = 0.014), had significantly higher COVID-19 mortality rates. Additionally, longer state-level restrictions were associated with lower COVID-19 incidence and mortality rates.
CONCLUSIONS: The spatial models identified longer state-level restrictions, population density, air pollution, uninsured rate, and race/ethnicity as important determinants of the geographic disparities in COVID-19 incidence and mortality. | infectious diseases |
10.1101/2020.12.02.20242958 | Anemia prior to or during COVID-19 is a risk factor for rehospitalization after SARS-CoV-2 clearance | Background: As the number of new and recovering COVID-19 cases continues to rise, it has become evident that patients can experience symptoms and complications after viral clearance. Clinical biomarkers characterizing patients who are likely to experience these prolonged effects are unknown.
Methods: We conducted a retrospective study to compare longitudinal lab test measurements (hemoglobin, hematocrit, estimated glomerular filtration rate, serum creatinine, and blood urea nitrogen) in patients rehospitalized after PCR-confirmed SARS-CoV-2 clearance (n=104) versus patients not rehospitalized after viral clearance (n=278).
Findings: Compared to patients who were not rehospitalized after PCR-confirmed viral clearance, those who were rehospitalized had lower median hemoglobin levels in the year prior to COVID-19 diagnosis (Cohen's d = -0.50; p = 1.2×10⁻³) and during the active infection window (Cohen's d = -0.71; p = 4.6×10⁻⁸). Patients rehospitalized after viral clearance were also more likely to be diagnosed with moderate or severe anemia during the active infection window (OR = 2.18; p = 4.99×10⁻⁹).
Conclusions: The occurrence of moderate or severe anemia in hospitalized COVID-19 patients is strongly associated with rehospitalization after viral clearance. Whether interventions to mitigate anemia can improve long-term outcomes of COVID-19 patients should be further investigated.
Funding: This study was funded by nference. | infectious diseases |
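The effect sizes in the record above are Cohen's d values. For reference, a minimal implementation using the pooled standard deviation, shown here with hypothetical hemoglobin values rather than the study's data:

```python
# Cohen's d with pooled standard deviation; inputs are illustrative only.
import numpy as np

def cohens_d(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rehosp = [11.2, 10.8, 12.1, 10.5]       # hypothetical hemoglobin values, g/dL
not_rehosp = [12.4, 13.0, 12.2, 13.5]
print(f"d = {cohens_d(rehosp, not_rehosp):.2f}")  # negative: lower in rehospitalized
```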
10.1101/2020.12.03.20243196 | Mental health problems in the general population during and after the first lockdown phase due to the SARS-CoV-2 pandemic: Rapid review of multi-wave studies | Aims: The SARS-CoV-2 pandemic and the lockdown response are assumed to have increased mental health problems in general populations compared to pre-pandemic times. The aim of this paper is to review studies on the course of mental health problems during and after the first lockdown phase.
Methods: We conducted a rapid review of multi-wave studies in general populations with time points during and after the first lockdown phase. Repeated cross-sectional and longitudinal studies that utilized validated instruments were included. The main outcome was whether indicators of mental health problems have changed during and after the first lockdown phase. The study was registered with PROSPERO No. CRD42020218640.
Results: 23 studies with 56 indicators were included in the qualitative review. Studies that reported data from pre-pandemic assessments through lockdown indicated an increase in mental health problems. During lockdown, no uniform trend could be identified. After lockdown, mental health problems decreased slightly.
Conclusions: As mental health care utilization indicators and data on suicides do not suggest an increase in demand during the first lockdown phase, we regard the increase in mental health problems as general distress that is to be expected during a global health crisis. Several methodological, pandemic-related, response-related and health policy-related factors need to be considered when trying to gain a broader perspective on the impact of the first wave of the pandemic and the first phase of lockdown on general populations' mental health. | psychiatry and clinical psychology |
10.1101/2020.12.03.20242032 | Cortical Mechanisms of Visual Hypersensitivity in Women at Risk for Chronic Pelvic Pain | Multisensory hypersensitivity (MSH), which refers to persistent discomfort across sensory modalities, is a risk factor for chronic pain. Developing a better understanding of the neural contributions of disparate sensory systems to MSH may clarify its role in the development of chronic pain. We recruited a cohort of women (n=147) enriched with participants with menstrual pain at risk for developing chronic pain. Visual sensitivity was measured using a periodic pattern-reversal stimulus during EEG. Self-reported visual unpleasantness ratings were also recorded. Bladder pain sensitivity was evaluated with an experimental bladder-filling task associated with early clinical symptoms of chronic pelvic pain. Visual stimulation-induced unpleasantness was associated with bladder pain and with evoked primary visual cortex excitation; however, the relationship between unpleasantness and cortical excitation was moderated by bladder pain. Thus, future studies aimed at reversing the progression of MSH into chronic pain should prioritize targeting of cortical mechanisms responsible for maladaptive sensory input integration. | obstetrics and gynecology |
10.1101/2020.12.03.20243436 | Reduced frequency of perforin-positive CD8+ T cells in menstrual effluent of endometriosis patients compared to healthy controls | Background: Endometriosis is widespread among women of reproductive age and quite commonly reduces the quality of life of those affected through symptoms such as dysmenorrhea, dyspareunia or infertility. The scientific literature indicates many immunological changes, such as reduced cytotoxicity of natural killer cells or altered concentrations of cytokines and cell adhesion molecules. Frequently examined tissues are peripheral blood, endometrial tissue and peritoneal fluid. Yet, knowledge on immunological differences in menstrual effluent (ME) is scarce.
Methods: 12 women with endometriosis and 11 healthy controls were included in this study. ME was collected using menstrual cups and venous blood samples (PB) were taken. Mononuclear cells were obtained from ME (MMC) and PB (PBMC) and analyzed using flow cytometry. Furthermore, concentrations of cell adhesion molecules (ICAM-1 and VCAM-1) and cytokines (IL-6, IL-8 and TNF-α) were measured in ME and PB.
Results: CD8+ T cells obtained from ME were significantly less often perforin-positive in women with endometriosis compared to healthy controls. Additionally, plasma ICAM-1 concentrations were significantly lower in the endometriosis group. A comparison between MMC and PBMC revealed that MMC contained significantly fewer T cells and more B cells. The CD4/CD8 ratio was significantly higher in MMC, and Tregs were significantly less frequent in MMC. In ME, T cells and NK cells expressed significantly more CD69. NK cells obtained from ME were predominantly CD56bright/CD16dim and had a lower frequency of perforin+ cells compared to PBMC NK cells. NKp46 expression was significantly higher on NK cells from PBMC.
Conclusion: CD8+ T cells obtained from ME were significantly less often perforin-positive in endometriosis patients, indicating a reduced cytotoxic potential. MMC are distinctly different from PBMC and thus seem to be of endometrial origin. | obstetrics and gynecology |
10.1101/2020.12.04.20243766 | Initial vancomycin versus metronidazole for the treatment of first-episode non-severe Clostridioides difficile infection | Objective: Clostridioides difficile infection (CDI) is the leading cause of infectious nosocomial diarrhea. Although initial fidaxomicin or vancomycin treatment is recommended by most major guidelines to treat severe CDI, recommendations vary for first-episode non-severe CDI. Given the discrepancy in current treatment guidelines, we sought to evaluate the use of initial vancomycin versus metronidazole for first-episode non-severe CDI.
Methods: We conducted a retrospective cohort study of all adult inpatients with first-episode CDI at our institution from January 2013 to May 2018. The initial vancomycin versus initial metronidazole cohorts were examined using a multivariate logistic regression model.
Results: Patients (n = 737) had a median age of 72.3 years and 357 (48.4%) had hospital-acquired infection. Among patients with non-severe CDI (n = 326), recurrence, new incident infection, and 30-day mortality rates were 16.2%, 10.9%, and 5.3%, respectively, when treated with initial metronidazole, compared to 20.0%, 1.4%, and 10.0%, respectively, when treated with initial vancomycin. In an adjusted multivariable analysis, the use of initial vancomycin for the treatment of non-severe CDI was associated with a reduction in new incident infection (ORadj: 0.11; 95% CI: 0.02-0.86; P=0.035), compared to initial metronidazole.
Conclusions: Initial vancomycin was associated with a reduced rate of new incident infection in the treatment of adult inpatients with first-episode non-severe CDI. These findings support the use of initial vancomycin for all inpatients with CDI, when fidaxomicin is unavailable. | infectious diseases |
10.1101/2020.12.03.20243394 | Opportunities to catalyze improved healthcare access in pluralistic systems: a cross-sectional study in Haiti | Introduction: Gains to ensure global healthcare access are at risk of stalling because some old resilient challenges require new solutions. Our objective was to study a pluralistic healthcare system that is reliant on both conventional and non-conventional providers to discover opportunities to catalyze renewed progress.
Methods: A cross-sectional study was conducted among households with children less than 5 years of age in Haiti. Households were randomly sampled geographically with stratifications for population density. Household questionnaires with standardized cases (intentions) were compared to self-recall of health events (behaviors). The connectedness of households and their providers was determined by network analysis.
Results: A total of 568 households (incorporating 2,900 members) and 65 providers were enrolled. Households reported 636 health events in the prior month. Households sought care for 35% (n=220) of events and treated 44% (n=277) with home remedies. The odds of seeking care increased 217% for severe events (aOR=3.17; 95% CI 1.99-5.05; p<0.001). The odds of seeking care from a conventional provider increased by 37% with increasing distance (aOR=1.37; 95% CI 1.06-1.79; p=0.016). Despite stating an intention to seek care from conventional providers, there was a lack of congruence in practice that favored non-conventional providers (McNemar's chi-squared test, p<0.001). Care was sought from primary providers for 68% (n=150) of cases within a three-tiered network; 25% (n=38/150) were non-conventional.
Conclusion: Addressing geographic barriers, possibly with technology solutions, should be prioritized to meet healthcare seeking intentions while developing approaches to connect non-conventional providers into healthcare networks when geographic barriers cannot be overcome.
Article Summary:
- The study is inclusive of both conventional and non-conventional healthcare providers, reflecting Haiti's pluralistic healthcare system.
- The study utilized a randomized geospatial sampling method to ensure participants were geographically representative of all families in the study area.
- The study employed a unique application of network analysis to observe relationships between families and healthcare providers, in order to identify approaches to increase healthcare access in seemingly change-resilient pluralistic systems.
- A limitation of the approach of comparing healthcare-seeking intentions and behaviors was that the standardized case scenarios did not cover all health event scenarios.
- A limitation on enrollment was that non-conventional healthcare providers were more difficult to locate than conventional providers. | public and global health |
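The intention-versus-behavior comparison in the record above is a McNemar test on paired binary outcomes, which weighs only the discordant pairs. A minimal sketch with purely illustrative counts (statsmodels provides the test):

```python
# McNemar test on paired binary data; counts are illustrative only.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: stated intention (conventional / non-conventional);
# columns: observed behavior (conventional / non-conventional).
table = [[40, 85],
         [15, 80]]
result = mcnemar(table, exact=True)  # exact binomial test on the 85 vs 15 discordant pairs
print(f"statistic = {result.statistic}, p = {result.pvalue:.2e}")
```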
10.1101/2020.12.03.20242941 | Contrasting factors associated with COVID-19-related ICU admission and death outcomes in hospitalised patients by means of Shapley values | Identification of those at greatest risk of death due to the substantial threat of COVID-19 can benefit from novel approaches to epidemiology that leverage large datasets and complex machine-learning models, provide data-driven intelligence, and guide decisions such as intensive-care unit admission (ICUA). The objective of this study is two-fold, one substantive and one methodological: substantively to evaluate the association of demographic and health records with two related, yet different, outcomes of severe COVID-19 (viz., death and ICUA); methodologically to compare interpretations based on logistic regression and on gradient-boosted decision tree (GBDT) predictions interpreted by means of the Shapley impacts of covariates. The very different associations of some factors (e.g., obesity and chronic respiratory diseases) with death and ICUA may guide review of practice. Shapley explanation of GBDTs identified varying effects of some factors among patients, thus emphasising the importance of individual patient assessment. The results of this study are also relevant for the evaluation of complex automated clinical decision systems, which should optimise prediction scores whilst remaining interpretable to clinicians and mitigating potential biases.
Author summary: The design is a retrospective cohort study of 13,954 in-patients of ages ranging from 1 to 105 years (IQR: 56, 70, 81) with a confirmed diagnosis of COVID-19 by 28th June 2020. This study used multivariable logistic regression to generate odds ratios (ORs) multiply adjusted for 37 covariates (comorbidities, demographic, and others) selected on the basis of clinical interest and prior findings. Results were supplemented by gradient-boosted decision tree (GBDT) classification to generate Shapley values in order to evaluate the impact of the covariates on model output for all patients. Factors are differentially associated with death and ICUA and among patients.
Deaths due to COVID-19 were associated with immunosuppression due to disease (OR 1.39, 95% CI 1.10-1.76), type-2 diabetes (OR 1.31, 95% CI 1.17-1.46), chronic respiratory disease (OR 1.19, 95% CI 1.05-1.35), age (OR 1.56 per 10-year increment, 95% CI 1.52-1.61), and male sex (OR 1.54, 95% CI 1.42-1.68). Associations of ICUA with some factors differed in direction (e.g., age, chronic respiratory disease). Self-reported ethnicities were strongly but variably associated with both outcomes.
GBDTs had similar performance (ROC-AUC: ICUA 0.83, death 0.68 for GBDT; 0.80 and 0.68 for logistic regression). We derived importance scores based on Shapley values which were consistent with the ORs, despite the underlying machine-learning model being intrinsically different from the logistic regression. Chronic heart disease, hypertension, other comorbidities, and some ethnicities had Shapley impacts on death ranging from positive to negative among different patients, although they were consistently associated with ICUA for all. Immunosuppressive disease, type-2 diabetes, and chronic liver and respiratory diseases had positive impacts on death, with either positive or negative impacts on ICUA.
We highlight the complexity of informing clinical practice and public-health interventions. We recommend that clinical support systems should not only predict patients at risk, but also yield interpretable outputs for validation by domain experts. | health informatics |
10.1101/2020.12.03.20243725 | Head-to-head comparison of SARS-CoV-2 antigen-detecting rapid test with professional-collected nasal versus nasopharyngeal swab | Background: Nasopharyngeal (NP) swab samples for antigen-detecting rapid diagnostic tests (Ag-RDTs) require qualified healthcare professionals and are frequently perceived as uncomfortable by patients.
Methods: We performed a manufacturer-independent, prospective diagnostic accuracy study, comparing professional-collected nasal mid-turbinate (NMT) to nasopharyngeal swab, using the test kits of a WHO-listed SARS-CoV-2 Ag-RDT (STANDARD Q COVID-19 Ag Test, SD Biosensor), which is also being distributed by Roche. Individuals with high suspicion for COVID-19 infection were tested. The reference standard was RT-PCR using a combined oro-/nasopharyngeal swab sample. Percent positive and negative agreement, as well as sensitivity and specificity were calculated.
Results: Among the 179 participants, 41 (22.9%) tested positive for SARS-CoV-2 by RT-PCR. The positive percent agreement of the two different sampling techniques for the Ag-RDT was 93.5% (CI 79.3-98.2). The negative percent agreement was 95.9% (CI 91.4-98.1). The Ag-RDT with NMT-sampling showed a sensitivity of 80.5% (33/41 PCR positives detected; CI 66.0-89.8) and specificity of 98.6% (CI 94.9-99.6) compared to RT-PCR. The sensitivity with NP-sampling was 73.2% (30/41 PCR positives detected; CI 58.1-84.3) and specificity was 99.3% (CI 96.0-100). In patients with high viral load (>7.0 log10 SARS-CoV-2 RNA copies/swab), the sensitivity of the Ag-RDT with NMT-sampling was 100% and 94.7% with NP-sampling.
Conclusion: This study demonstrates that the sensitivity of a WHO-listed SARS-CoV-2 Ag-RDT using a professional nasal-sampling kit is at least equal to that of the NP-sampling kit, although confidence intervals overlap. Of note, differences in the IFUs of the test procedures could have contributed to different sensitivities. NMT-sampling can be performed with less training, reduces patient discomfort, and enables scaling of antigen testing strategies. Additional studies of patient self-sampling should be considered to further facilitate the scaling-up of Ag-RDT testing. | infectious diseases |
10.1101/2020.12.04.20244004 | Quantifying the impact of test-trace-isolate-quarantine (TTIQ) strategies on COVID-19 transmission | The test-trace-isolate-quarantine (TTIQ) strategy, where confirmed-positive pathogen carriers are isolated from the community and their recent close contacts are identified and pre-emptively quarantined, is used to break chains of transmission during a disease outbreak. The protocol is frequently followed after an individual presents with disease symptoms, at which point they will be tested for the pathogen. This TTIQ strategy, along with hygiene and social distancing measures, make up the non-pharmaceutical interventions that are utilised to suppress the ongoing COVID-19 pandemic. Here we develop a tractable mathematical model of disease transmission and the TTIQ intervention to quantify how the probability of detecting and isolating a case following symptom onset, the fraction of contacts that are identified and quarantined, and the delays inherent to these processes impact epidemic growth. In the model, the timing of disease transmission and symptom onset, as well as the frequency of asymptomatic cases, is based on empirical distributions of SARS-CoV-2 infection dynamics, while the isolation of confirmed cases and quarantine of their contacts is implemented by truncating their respective infectious periods. We find that a successful TTIQ strategy requires intensive testing: the majority of transmission is prevented by isolating symptomatic individuals and doing so in a short amount of time. Despite the lesser impact, additional contact tracing and quarantine increases the parameter space in which an epidemic is controllable and is necessary to control epidemics with a high reproductive number. TTIQ could remain an important intervention for the foreseeable future of the COVID-19 pandemic due to slow vaccine rollout and highly-transmissible variants with the potential for vaccine escape. Our results can be used to assess how TTIQ can be improved and optimised, and the methodology represents an improvement over previous quantification methods that is applicable to future epidemic scenarios.
Author summaryDetecting symptomatically-infected individuals and isolating them from the community is used to slow the spread of an infectious disease. Additional contact tracing and quarantine can further interrupt chains of disease transmission. These measures are employed globally to control the ongoing COVID-19 pandemic. Here we use a mathematical model to quantify how effective the test-trace-isolate-quarantine (TTIQ) intervention can be against SARS-CoV-2 spread, and how delays and inaccuracies in these processes can reduce this effectiveness. With this framework we seek to improve and optimise the TTIQ intervention and to understand the problems that we could face with new variants and/or vaccine escape. We show that increasing the detection of new infections, and doing so with minimal delay after symptom onset, is key to an effective intervention. | epidemiology |
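A minimal sketch of the truncation idea described above: isolating a case some days after symptom onset removes the tail of its infectivity profile, and multiplying by the detection probability gives the expected share of onward transmission prevented by isolation alone. The generation-time and incubation parameters below are illustrative placeholders, not the paper's fitted distributions.

```python
# Expected fraction of onward transmission removed by case isolation.
from scipy.stats import gamma

gen_time = gamma(a=6.5, scale=0.62)       # infectivity profile proxy, mean ~4 days
incubation = 5.0                          # assumed fixed incubation period, days

def fraction_prevented(delay_to_isolation, p_detect):
    t_iso = incubation + delay_to_isolation
    return p_detect * gen_time.sf(t_iso)  # infectivity remaining after isolation time

for d in (0, 2, 4):
    print(d, round(fraction_prevented(d, p_detect=0.8), 3))
```

Shorter delays keep more of the infectivity tail inside the truncated window, which is the "minimal delay after symptom onset" message of the summary.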
10.1101/2020.12.03.20242412 | Anomalous Epithelial Variations and Ectopic Inflammatory Response in Chronic Obstructive Pulmonary Disease | Phenotypic alterations in the lung epithelium have been widely implicated in chronic obstructive pulmonary disease (COPD) pathogenesis, but the precise mechanisms orchestrating this persistent inflammatory process remain unknown due to the complexity of lung parenchymal and mesenchymal architecture. To identify cell type-specific mechanisms and cell-cell interactions among the multiple lung resident cell types and inflammatory cells that contribute to COPD progression, we profiled 52,764 cells from lungs of COPD patients, non-COPD smokers, and never smokers using single-cell RNA sequencing technology. We predicted pseudotime of cell differentiation and cell-to-cell interaction networks in COPD. While epithelial components in never-smokers were relatively uniform, smoker groups exhibited extensive heterogeneity in epithelial cells, particularly in alveolar type 2 (AT2) clusters. Among AT2 cells, which are generally regarded as alveolar progenitors, we identified a unique subset that significantly increased in COPD patients and specifically expressed a series of chemokines and PD-L1. A trajectory analysis revealed that the inflammatory AT2 cell subpopulation followed a unique differentiation path, and a prediction model of cell-to-cell interactions inferred significantly increased intercellular networks of inflammatory AT2 cells. Our results identify previously uncharacterized cell subsets and provide insight into the biological and clinical characteristics of COPD pathogenesis. | respiratory medicine |
10.1101/2020.12.04.20244244 | On the Effects of Misclassification in Estimating Efficacy With Application to Recent COVID-19 Vaccine Trials | The recent trials for proposed COVID-19 vaccines have garnered a considerable amount of attention and, as of this writing, extensive vaccination efforts are underway. The first two vaccines approved in the United States are the Moderna and Pfizer vaccines, both with estimated efficacy near 95%. One question which has received limited attention, and which we address here, is what effect false positives or false negatives have on the estimated efficacy. Expressions for potential bias due to misclassification of COVID status are developed, as are general formulas to adjust for misclassification, allowing for either differential or non-differential misclassification. These results are illustrated with numerical investigations pertinent to the Moderna and Pfizer trials. The general conclusion, fortunately, is that potential misclassification of COVID status would almost always lead to underestimation of the efficacy and that correcting for false positives or negatives will typically lead to even higher estimated efficacy. | infectious diseases |
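A minimal sketch of the misclassification logic on hypothetical attack rates: false positives inflate the apparent attack rate in both arms, biasing the efficacy estimate toward zero, and a Rogan-Gladen-style correction recovers the true rates when sensitivity and specificity are known. All numbers below are illustrative, not trial data.

```python
# Apparent vs corrected vaccine efficacy under non-differential misclassification.
def rogan_gladen(observed_rate, sensitivity, specificity):
    return (observed_rate + specificity - 1) / (sensitivity + specificity - 1)

def efficacy(rate_vacc, rate_plac):
    return 1 - rate_vacc / rate_plac

se, sp = 0.90, 0.99
true_vacc, true_plac = 0.0005, 0.01                     # assumed true attack rates
obs_vacc = se * true_vacc + (1 - sp) * (1 - true_vacc)  # true + false positives
obs_plac = se * true_plac + (1 - sp) * (1 - true_plac)

print(f"true VE      {efficacy(true_vacc, true_plac):.3f}")   # 0.950
print(f"apparent VE  {efficacy(obs_vacc, obs_plac):.3f}")     # biased toward 0
print(f"corrected VE {efficacy(rogan_gladen(obs_vacc, se, sp), rogan_gladen(obs_plac, se, sp)):.3f}")
```

Because attack rates are small, even a 1% false-positive rate swamps the vaccinated arm's true cases, which is why misclassification almost always understates efficacy.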
10.1101/2020.12.05.20243964 | Double Mammary In Situ: Predicting Feasibility of Right Mammary Artery In Situ for Circumflex Coronary Artery System | ObjectivesDouble (bilateral) mammary artery in situ revascularization seems less attractive to surgeons because of the limited length of the right mammary artery and scarce or no means of estimating its length.
MethodsWe selected patients in whom both mammary arteries were used for revascularization and divided them into two groups: in situ and Y-graft groups. We used preoperative chest X-rays to build a predictive model with neural networks to predict the feasibility of in situ bilateral mammary revascularization.
ResultsThe predictive model predicted a positive outcome with 96% accuracy (p < 0.01); its sensitivity and specificity were 96% and 95%, respectively. These results indicate that neural networks can predict the feasibility of double mammary grafting from chest X-rays.
ConclusionsA chest X-ray-based model can accurately predict the feasibility of in situ bilateral mammary artery revascularization. | cardiovascular medicine |
10.1101/2020.12.04.20243675 | Malaria awareness of adults in high, moderate and low transmission settings: A cross-sectional study in rural East Nusa Tenggara Province, Indonesia | IntroductionThe 2009 Indonesian roadmap to malaria elimination indicated that the nation is progressing towards achieving malaria elimination by 2030. Currently, most districts in the western part of Indonesia have eliminated malaria; however, none of the districts in East Nusa Tenggara Province (ENTP) have met the targets set. This study aims to investigate the status of malaria awareness among rural adults in ENTP.
MethodsA cross-sectional study was conducted between October and December 2019 in high, moderate, and low malaria endemic settings (MES) in ENTP. A total of 1495 participants, recruited by a multi-stage sampling method, were interviewed using a validated questionnaire after informed consent was obtained. A malaria awareness index was developed based on ten questions. Logistic regression was applied to investigate the significance of associations of malaria awareness with the three malaria endemic settings.
ResultsParticipants were between 18 and 89 years of age; 51.4% were female and 45.5% had completed primary education. The malaria awareness index was very low (48.8%, 95% confidence interval (CI): 45.2 - 52.4). Malaria awareness of rural adults residing in low endemic settings was three times higher compared to those living in high endemic settings (odds ratio (OR): 3.11, 95% CI: 2.40 - 4.03, p < 0.001), and basic malaria knowledge of participants living in the low malaria endemic setting was almost five times higher than that in the high endemic setting (OR: 4.66, 95% CI: 3.50 - 6.20, p < 0.001). Of all participants, 81.3% (95% CI: 79.1 - 83.5) were aware that malaria could be prevented and 75.1% (95% CI: 72.6 - 77.6) knew at least one prevention measure. Overall, awareness of fever as the main symptom of malaria, of mosquito bites as the transmission mode of malaria, and of seeking treatment within 24 hours when suffering from malaria was poor: 37.9% (95% CI: 33.9 - 41.9), 59.1% (95% CI: 55.9 - 62.3), and 46.0% (95% CI: 42.3 - 49.7), respectively. The poor level of awareness differed significantly amongst the three MES, with the lowest level in the high endemic setting.
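The odds ratios above come from multivariable logistic regression; the sketch below shows only the crude analogue, an unadjusted OR with a Wald 95% CI computed from a 2x2 table. The counts are made up for illustration.

```python
# Unadjusted odds ratio (awareness by endemic setting) with a Wald 95% CI.
import numpy as np

aware_low, unaware_low = 220, 180        # low-endemic setting (hypothetical)
aware_high, unaware_high = 120, 280      # high-endemic setting (hypothetical)

or_ = (aware_low * unaware_high) / (unaware_low * aware_high)
se_log = np.sqrt(1/aware_low + 1/unaware_low + 1/aware_high + 1/unaware_high)
lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```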
ConclusionMalaria awareness of rural adults needs to be improved to deliver on Indonesia's national roadmap to malaria elimination. The results indicate that public health programs at the local government level should incorporate the malaria awareness index in their key strategic intervention packages to address local malaria awareness. | epidemiology |
10.1101/2020.12.04.20244194 | Multifaceted strategies for the control of COVID-19 outbreaks in long-term care facilities in Ontario, Canada | The novel coronavirus disease 2019 (COVID-19) has caused severe outbreaks in Canadian long-term care facilities (LTCFs). In Canada, over 80% of COVID-19 deaths during the first pandemic wave occurred in LTCFs. We sought to evaluate the effect of mitigation measures in LTCFs, including frequent testing of staff and vaccination of staff and residents. We developed an agent-based transmission model and parameterized it with disease-specific estimates, temporal sensitivity of nasopharyngeal and saliva testing, results of vaccine efficacy trials, and data from initial COVID-19 outbreaks in LTCFs in Ontario, Canada. Characteristics of staff and residents, including contact patterns, were integrated into the model with age-dependent risk of hospitalization and death. Estimates of infection and outcomes were obtained, and 95% credible intervals were generated using a bias-corrected and accelerated bootstrap method. Weekly routine testing of staff with a 2-day turnaround time reduced infections among residents by at least 25.9% (95% CrI: 23.3% - 28.3%), compared to baseline measures of mask-wearing, symptom screening, and staff cohorting alone. A similar reduction of hospitalizations and deaths was achieved in residents. Vaccination averted 2-4 times more infections in both staff and residents compared to routine testing, and markedly reduced hospitalizations and deaths among residents by 95.9% (95% CrI: 95.4% - 96.3%) and 95.8% (95% CrI: 95.5% - 96.1%), respectively, over 200 days from the start of vaccination. Vaccination could have a substantial impact on mitigating disease burden among residents, but may not eliminate the need for other measures before population-level control of COVID-19 is achieved. | epidemiology |
10.1101/2020.12.06.20244863 | Persistent Homology of Tumor CT Scans is Associated with Survival In Lung Cancer | PurposeRadiomics, the objective study of non-visual features in clinical imaging, has been useful in informing decisions in clinical oncology. However, radiomics currently lacks the ability to characterize the overall topological structure of the data. This niche can be filled by persistent homology, a form of topological data analysis that analyzes high-level structure. We hypothesized that persistent homology could be applied to lung tumor scans and related to clinical outcomes.
MethodsWe obtained segmented computed tomography lung scans (n = 565) from the NSCLC-Radiomics and NSCLC-Radiogenomics datasets in The Cancer Imaging Archive. For each scan, a cubical complex filtration based on Hounsfield units was generated. We calculated a feature curve that plotted the number of 0-dimensional topological features against each Hounsfield unit. The curve's first moment of distribution was used as a summary statistic to predict survival in a Cox proportional hazards model.
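A minimal sketch of the feature-curve construction on a toy volume: the number of 0-dimensional features (connected components) of the sublevel set is counted at each Hounsfield threshold, and the curve's first moment is taken as the summary statistic. A real analysis would use a dedicated cubical persistent-homology library; `scipy.ndimage.label` suffices to illustrate the Betti-0 curve.

```python
# 0D feature curve of a Hounsfield-unit sublevel filtration, and its first moment.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
volume = rng.integers(-1000, 400, size=(40, 40, 40))   # fake HU values, not a CT scan

thresholds = np.arange(-1000, 401, 20)
betti0 = np.array([ndimage.label(volume <= t)[1] for t in thresholds])

first_moment = (thresholds * betti0).sum() / betti0.sum()
print(first_moment)   # the summary statistic entered into the Cox model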
ResultsAfter controlling for tumor image size, age, and stage, the first moment of the 0D topological feature curve was associated with poorer survival (HR = 1.118; 95% CI = 1.026-1.218; p = 0.01). The patients in our study with the lowest first moment scores had significantly better survival (1238 days; 95% CI = 936-1599) compared to the patients with the highest first moment scores (429 days; 95% CI = 326-601; p = 0.0015).
ConclusionsWe have shown that persistent homology can generate useful clinical correlates from tumor CT scans. Our 0-dimensional topological feature curve statistic predicts survival in lung cancer patients. This novel statistic may be used in tandem with standard radiomics variables to better inform clinical oncology decisions. | oncology |
10.1101/2020.12.04.20244046 | A Comprehensive Epithelial Tubo-Ovarian Cancer Risk Prediction Model Incorporating Genetic and Epidemiological Risk Factors | BackgroundEpithelial tubo-ovarian cancer (EOC) has high mortality partly due to late diagnosis. Prevention is available but may be associated with adverse effects. A multifactorial risk model based on known genetic and epidemiological risk factors (RFs) for EOC can help identify females at higher risk who could benefit from targeted screening and prevention.
MethodsWe developed a multifactorial EOC risk model for females of European ancestry incorporating the effects of pathogenic variants (PVs) in BRCA1, BRCA2, RAD51C, RAD51D and BRIP1, a polygenic risk score (PRS) of arbitrary size, the effects of RFs and explicit family history (FH) using a synthetic model approach. The PRS, PV and RFs were assumed to act multiplicatively.
ResultsBased on a currently available PRS for EOC that explains 5% of the EOC polygenic variance, the estimated lifetime risks under the multifactorial model in the general population vary from 0.5% to 4.6% for the 1st to 99th percentiles of the EOC risk-distribution. The corresponding range for females with an affected first-degree relative is 1.9% to 10.3%. Based on the combined risk distribution, 33% of RAD51D PV carriers are expected to have a lifetime EOC risk of less than 10%. RFs provided the widest distribution, followed by the PRS. In an independent partial model validation, absolute and relative 5-year risks were well-calibrated in quintiles of predicted risk.
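A minimal sketch of how a multiplicative model maps risk-distribution percentiles to lifetime risks: if the combined log relative risk is normal and centred so the population-average RR is 1, the risk at a given percentile scales a baseline lifetime risk (rare-disease approximation). The baseline risk and variance below are assumptions for illustration, not the paper's fitted values.

```python
# Lifetime risk at percentiles of a mean-one lognormal relative-risk distribution.
import numpy as np
from scipy.stats import norm

baseline = 0.015        # assumed population-average lifetime EOC risk
sigma = 0.8             # assumed SD of the combined log-RR (PRS + RFs + FH)

for q in (0.01, 0.5, 0.99):
    rr = np.exp(norm.ppf(q) * sigma - sigma**2 / 2)   # centring keeps E[RR] = 1
    print(f"{q:>5.0%} percentile: lifetime risk {baseline * rr:.1%}")
```

Widening sigma spreads the 1st-99th percentile risk range, which is why adding RFs and FH to the PRS broadens the distribution reported above.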
ConclusionThis multifactorial risk model can facilitate stratification, in particular among females with FH of cancer and/or moderate- and high-risk PVs. The model is available via the CanRisk Tool (www.canrisk.org). | genetic and genomic medicine |
10.1101/2020.12.04.20243907 | How to predict relapse in leukaemia using time series data: A comparative in silico study. | Risk stratification and treatment decisions for leukemia patients are regularly based on clinical markers determined at diagnosis, while measurements of system dynamics are often neglected. However, there is increasing evidence that linking quantitative time-course information to disease outcomes can improve the predictions for patient-specific treatment responses.
We designed a synthetic experiment to compare different computational methods with respect to their ability to accurately predict relapse for chronic and acute myeloid leukemia treatment. Technically, we used clinical reference data to first fit a model and then generate de novo model simulations of individual patients' time courses, for which we can systematically tune data quality (i.e. measurement error) and quantity (i.e. number of measurements). On this basis, we compared the prediction accuracy of three different computational methods, namely mechanistic models, generalized linear models, and deep neural networks, that had been fitted to the reference data.
Our results show that data quality has a higher impact on prediction accuracy than the specific choice of the particular method. We further show that adapted treatment and measurement schemes can considerably improve the prediction accuracy.
Our proof-of-principle study highlights how computational methods and optimized data acquisition strategies can improve risk assessment and treatment of leukemia patients. | hematology |
10.1101/2020.12.04.20230755 | Reproducible breath metabolite changes in children with SARS-CoV-2 infection | SARS-CoV-2 infection is diagnosed through detection of specific viral nucleic acid or antigens from respiratory samples. These techniques are relatively expensive, slow, and susceptible to false-negative results. A rapid non-invasive method to detect infection would be highly advantageous. Compelling evidence from canine biosensors and studies of adults with COVID-19 suggests that infection reproducibly alters human volatile organic compound (VOC) profiles. To determine whether pediatric infection is associated with VOC changes, we enrolled SARS-CoV-2-infected and -uninfected children admitted to a major pediatric academic medical center. Breath samples were collected from children and analyzed through state-of-the-art GCxGC-ToFMS. Isolated features included 84 targeted VOCs. Candidate biomarkers that were correlated with infection status were subsequently validated in a second, independent cohort of children. We thus find that six volatile organic compounds are significantly and reproducibly increased in the breath of SARS-CoV-2-infected children. Three aldehydes (octanal, nonanal, and heptanal) drew special attention, as aldehydes are also elevated in the breath of adults with COVID-19. Together, these biomarkers demonstrate high accuracy for distinguishing pediatric SARS-CoV-2 infection and support the ongoing development of novel breath-based diagnostics. | infectious diseases |
10.1101/2020.12.07.20245241 | Lack of antibodies against seasonal coronavirus OC43 nucleocapsid protein identifies patients at risk of critical COVID-19 | Most COVID-19 patients experience a mild disease; a minority suffers from critical disease.
We report a biomarker validation study of 296 patients with confirmed SARS-CoV-2 infection from four tertiary care referral centers in Germany and France.
Patients with critical disease had significantly lower levels of anti-HCoV OC43 nucleocapsid protein antibodies compared to other COVID-19 patients (p=0.007). In multivariate analysis, OC43-negative inpatients had an increased risk of critical disease, higher than the risk conferred by increased age or BMI, and lower than that conferred by male sex. A risk stratification based on sex and OC43 serostatus was derived from this analysis.
Our results indicate that prior infections with seasonal human coronaviruses can protect against a severe course of COVID-19. Anti-OC43 antibodies should be measured for COVID-19 inpatients and considered as part of the risk assessment. We expect individuals tested negative for anti-OC43 antibodies to particularly benefit from vaccination, especially with other risk factors prevailing. | infectious diseases |
10.1101/2020.12.07.20245183 | Indicators of past COVID-19 infection status: Findings from a large occupational cohort of staff and postgraduate research students from a UK university | Background Definitive diagnosis of COVID-19 requires resources frequently restricted to the severely ill. Cohort studies must rely on surrogate indicators to define cases of COVID-19 in the community. We describe the prevalence and overlap of potential indicators including self-reported symptoms, suspicion, and routine test results, plus home antibody testing. Methods An occupational cohort of 2807 staff and postgraduate students at a large London university. Repeated surveys covering March to June 2020. Antibody test results from 'lateral flow' IgG/IgM cassettes in June 2020. Results 1882 participants had valid antibody test results, and 124 (7%) were positive. Core symptoms of COVID-19 were common (770 participants positive, 41%), although fewer met criteria on a symptom algorithm (n=297, 16%). Suspicion of COVID-19 (n=509, 27%) was much higher than positive external tests (n=39, 2%). Positive antibody tests were rare in people who had no suspicion (n=4, 1%) or no core symptoms (n=10, 2%). In those who reported external antibody tests, 15% were positive on the study antibody test, compared with 24% on earlier external antibody tests. Discussion Our results demonstrate the agreement between different COVID indicators. Antibody testing using lateral flow devices at home can detect asymptomatic cases and provide greater certainty to self-report; but due to weak and waning antibody responses to mild infection, may under-ascertain. Multiple indicators used in combination can provide a more complete story than one used alone. Cohort studies need to consider how they deal with different, sometimes conflicting, indicators of COVID-19 illness to understand its long-term outcomes. | epidemiology |
10.1101/2020.12.07.20245043 | Particle-Based COVID-19 Simulator with Contact Tracing and Testing | GoalThe COVID-19 pandemic has emerged as the most severe public health crisis in over a century. As of January 2021, there are more than 100 million cases and 2.1 million deaths. For informed decision making, reliable statistical data and capable simulation tools are needed. Our goal is to develop an epidemic simulator that can model the effects of random population testing and contact tracing.
MethodsOur simulator models individuals as particles with position, velocity, and epidemic status on a 2D map, and runs an SEIR epidemic model with contact tracing and testing modules. The simulator is available on GitHub under the MIT license.
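A minimal sketch of that particle representation, with illustrative parameters rather than the simulator's calibrated ones: individuals move on a unit map, susceptibles within a contact radius of an infectious particle may become exposed, and E-to-I and I-to-R transitions follow fixed sojourn times.

```python
# Toy particle-based SEIR dynamics on a 2D map (all parameters assumed).
import numpy as np

rng = np.random.default_rng(2)
N, steps = 2000, 500
S, E, I, R = 0, 1, 2, 3
pos = rng.random((N, 2))
vel = rng.normal(scale=0.002, size=(N, 2))
state = np.full(N, S)
state[rng.choice(N, 10, replace=False)] = I      # seed infections
timer = np.zeros(N)                              # days since exposure

radius, p_inf, t_lat, t_inf = 0.01, 0.05, 3, 7

for _ in range(steps):
    pos = (pos + vel) % 1.0                      # move on a wrap-around unit map
    for i in np.flatnonzero(state == I):         # contacts within radius may be exposed
        d2 = ((pos - pos[i])**2).sum(axis=1)
        contacts = (d2 < radius**2) & (state == S)
        state[contacts & (rng.random(N) < p_inf)] = E
    timer[state != S] += 1
    state[(state == E) & (timer > t_lat)] = I
    state[(state == I) & (timer > t_lat + t_inf)] = R

print(np.bincount(state, minlength=4))           # final S/E/I/R counts
```

Testing and contact tracing would act on this same state vector, e.g. by moving detected I particles and their recent contacts into an isolated compartment.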
ResultsThe results show that the synergistic use of contact tracing and massive testing is effective in suppressing the epidemic (the number of deaths was reduced by 72%).
ConclusionsThe Particle-based COVID-19 simulator enables the modeling of intervention measures, random testing, and contact tracing for epidemic mitigation and suppression.
Impact StatementOur particle-based epidemic simulator, calibrated with COVID-19 data, models each individual as a unique particle with a location, velocity, and epidemic state, enabling the consideration of contact tracing and testing measures. | epidemiology |
10.1101/2020.12.07.20230235 | A 6-mRNA host response whole-blood classifier trained using patients with non-COVID-19 viral infections accurately predicts severity of COVID-19 | BackgroundDetermining the severity of COVID-19 remains an unmet medical need. Our objective was to develop a blood-based host-gene-expression classifier for the severity of viral infections and validate it in independent data, including COVID-19.
MethodsWe developed the classifier for the severity of viral infections and validated it in multiple viral infection settings including COVID-19. We used training data (N=705) from 21 retrospective transcriptomic clinical studies of influenza and other viral illnesses looking at a preselected panel of host immune response messenger RNAs.
ResultsWe selected 6 host RNAs and trained a logistic regression classifier with a cross-validation area under the curve of 0.90 for predicting 30-day mortality in viral illnesses. Next, in 1,417 samples across 21 independent retrospective cohorts, the locked 6-RNA classifier had an area under the curve of 0.91 for discriminating patients with severe vs. non-severe infection. In further independent cohorts of prospectively (N=97) and retrospectively (N=100) enrolled patients with confirmed COVID-19, the classifier had an area under the curve of 0.89 and 0.87, respectively, for identifying patients with severe respiratory failure or 30-day mortality. Finally, we developed a loop-mediated isothermal gene expression assay for the 6-messenger-RNA panel to facilitate implementation as a rapid assay.
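A minimal sketch of the training step on synthetic data: fit a logistic regression on a small panel of host-response genes and estimate a cross-validated ROC-AUC. The expression matrix, labels, and CV scheme here are assumptions for illustration, not the study's pipeline.

```python
# 6-gene logistic classifier with cross-validated ROC-AUC on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(705, 6))                        # 6 host-response mRNAs (fake)
y = (X[:, :3].sum(axis=1) + rng.normal(size=705) > 1).astype(int)

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X, y,
                      cv=StratifiedKFold(5, shuffle=True, random_state=0),
                      scoring="roc_auc")
print(auc.mean())    # the study reports ~0.90 in cross-validation
```

"Locking" the classifier means freezing the fitted coefficients before applying it unchanged to the independent validation cohorts.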
ConclusionsWith further study, the classifier could assist in the risk assessment of patients with COVID-19 and other acute viral infections to determine severity and level of care, thereby improving patient management and reducing healthcare burden. | genetic and genomic medicine |
10.1101/2020.12.06.20244772 | Polygenic regulation of PTSD severity and outcomes among World Trade Center responders | Post-traumatic stress disorder (PTSD) is a debilitating psychiatric condition triggered by exposure to trauma. The study of PTSD is complicated by highly heterogeneous presentations and experiences of trauma between individuals. Capitalizing on the existence of the World Trade Center General Responder Cohort (WTC-GRC) of rescue, recovery and clean-up workers who responded during and in the aftermath of the World Trade Center (WTC) 9/11/2001 attacks, we studied genetic correlates of PTSD in a sample of 371 WTC responders, selected from the WTC-GRC utilizing stratified random sampling. This deeply phenotyped sample of WTC responders - ranging from no/low PTSD symptom levels to severe PTSD - provides a unique opportunity to study genetic risk factors for PTSD severity and chronicity following a single, shared, well-documented trauma, also incorporating measures of childhood and other lifetime traumas.
We examined associations of polygenic risk scores (PRS) - derived from a range of genome-wide association studies (GWAS) of behavioral traits, psychiatric disorders, and brain volumetric phenotypes - with PTSD severity and chronicity among these 371 individuals. Our results demonstrate significant genetic regulation of lifetime PTSD severity, assessed with the lifetime version of the Clinician-Administered PTSD Scale (CAPS), and chronicity, assessed with the past-month CAPS. PRS derived from GWAS of attention deficit-hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and brain imaging phenotypes (amygdala and putamen volumes) were associated with several PTSD symptom dimensions. Interestingly, we found a greater genetic contribution to PTSD among cases compared to our full sample. In addition, we tested for associations between PTSD and exposures to traumatic stressors, including WTC-related exposures, childhood trauma, and other lifetime traumatic life events, in our full sample. Together, polygenic risk and exposures to traumatic stress explained ~45% of variance in lifetime CAPS (R2=0.454), and ~48% of variance in past-month CAPS (R2=0.480) in the full sample.
These participants represent a highly vulnerable population, with exposures to severe trauma during 9/11 and the following days and months. The newly identified associations between PTSD and PRS of behavioral traits and brain volume phenotypes, as well as replicated associations with PRS of other psychiatric disorders, may contribute to understanding the biological factors associated with risk for and chronicity of PTSD. In particular, the identification of neuroimaging phenotypes indicates that coupling neuroimaging with genetic risk score calculations may predict PTSD outcomes. | genetic and genomic medicine |
10.1101/2020.12.07.20245415 | Plasma S1P links to hypertension and biomarkers of inflammation, metabolism and cardiovascular disease - findings from a translational investigation | Sphingosine-1-phosphate (S1P) is an important regulator of immune cell trafficking and vascular dysfunction, contributing to the development and progression of overt hypertension. Although targeting S1P signaling has revealed therapeutic potential in different experimental hypertension studies, validations of S1P-blood pressure (BP) associations in humans are lacking. In a translational approach, we explored the associations between plasma S1P and BP in a family-based study cohort (the Malmö Offspring Study (MOS); N=1026) and in a longitudinally conducted murine hypertension cohort.
In MOS, linear multivariate regression analyses showed that plasma S1P associates with increased systolic BP (β=1.06, P=0.015). Study subjects with systolic BP ≥140 mmHg presented with significantly higher S1P plasma concentrations compared to subjects with BP <120 mmHg, independent of age and sex. The S1P-BP association was validated in a murine model, where plasma S1P increased with systolic BP (r=0.7018, R2=0.4925; P<0.0001). In a sub-sample of MOS (N=444), proteomic profiling for markers of inflammation, metabolism and cardiovascular disease using Proximity Extension Assays revealed multiple significant S1P associations, some of them with marked sex-specificity. In vitro and ex vivo validation of the identified S1P associations disclosed augmented expression of different vascular dysfunction and inflammation markers in response to S1P.
Our translational findings show a link between plasma S1P and systolic BP, as well as several inflammation and cardiovascular disease markers, and suggest S1P's biomarker potential. This encourages further studies to investigate its predictive capacity for hypertensive disease or the therapeutic potential of its signaling axis. | cardiovascular medicine |
10.1101/2020.12.01.20214551 | Breast cancer: Emerging principles of metastasis, adjuvant and neoadjuvant treatment from cancer registry data | BackgroundGrowing primary breast cancers (PT) can initiate local (LR), regional (pLN), and distant metastases (MET). Characteristics of these progressions such as initiation, frequency, growth duration and treatment success describe principles of these processes. They are bottlenecks through which scientific and molecular biological concepts and hypotheses must fit.
MethodsPopulation-based data from the Munich Cancer Registry over 4 time periods since 1978, with the most important prognostic factors and up-to-date follow-up, are analyzed. With 66,818 patients, reliable data are obtained on the initiation of METs, growth time and survival, even in small subgroups. Together with results of clinical trials on prevention and adjuvant treatment (AT), principles for tumor growth, the MET process and AT are derived.
ResultsThe median growth periods for PT/MET/LR/pLN are 12.5/8.8/5/3.5 years. Even though 30% of METs only appear after 10 years of MET-free time, a delayed initiation or cascade-like initiation of METs, e.g. from pLNs, cannot be derived from the data. This implies a principle of immediate MET initiation by the PT. The growth rate of the PT can vary by a factor of 10 or more and can be transferred to the MET. Nevertheless, the ratio of PT to MET growth times is a less variable value of about 1.4. Principles of AT are the 50% eradication of 1st and 2nd PTs and the selective and partial eradication of bone and lung METs with successful ATs, which cannot be improved by extending the duration of ATs. These principles reveal, among other things, that there is no rationale for the accepted long-term endocrine ATs, for a breast cancer risk due to hormone replacement therapies, or for a cascading initiation of METs.
ConclusionA paradigm with ten principles for the MET process and ATs can be derived from real-world data and clinical trials. The principles show limits and opportunities for innovation, including through alternative interpretations of well-known studies. The outlined MET process should be generalizable to all solid tumors. | oncology |
10.1101/2020.12.07.20245282 | Effect of COVID-19 response policies on walking behavior in US cities | The COVID-19 pandemic has caused mass disruption to our daily lives. Mobility restrictions implemented to reduce the spread of COVID-19 have impacted walking behavior, but the magnitude and spatio-temporal aspects of these changes have yet to be explored. Walking is the most common form of physical activity and non-motorized transport, and so has an important role in our health and economy. Understanding how COVID-19 response measures have affected walking behavior of populations and distinct subgroups is paramount to help devise strategies to prevent the potential health and societal impacts of declining walking levels. In this study, we integrated mobility data from mobile devices and area-level data to study the walking patterns of 1.62 million anonymous users in 10 metropolitan areas in the United States (US). The data covers the period from mid-February 2020 (pre-lockdown) to late June 2020 (easing of lockdown restrictions). We detected when users were walking, measured distance walked and time of the walk, and classified each walk as recreational or utilitarian. Our results revealed dramatic declines in walking, especially utilitarian walking, while recreational walking has recovered and even surpassed the levels before the pandemic. However, our findings demonstrated important social patterns, widening existing inequalities in walking behavior across socio-demographic groups. COVID-19 response measures had a larger impact on walking behavior for those from low-income areas, of low education, and high use of public transportation. Provision of equal opportunities to support walking could be key to opening up our society and the economy. | epidemiology |
10.1101/2020.12.08.20242495 | Investigating pleiotropy between depression and autoimmune diseases using the UK Biobank | BackgroundEpidemiological studies have shown increased comorbidity between depression and autoimmune diseases. The mechanisms driving the comorbidity are poorly understood, and a highly powered investigation is needed to understand the relative importance of shared genetic influences. We investigated the evidence for pleiotropy from shared genetic risk alleles between these traits in the UK Biobank (UKB).
MethodsWe defined autoimmune and depression cases using information from hospital episode statistics, self-reported conditions and medications, and mental health questionnaires. Pairwise comparisons of depression prevalence between autoimmune cases and controls, and vice-versa, were performed. Cross-trait polygenic risk score (PRS) analyses were performed to test for pleiotropy, i.e. testing whether PRS for depression could predict autoimmune disease status, and vice-versa.
ResultsWe identified 28k cases of autoimmune diseases (pooling across 14 traits) and 324k autoimmune controls, and 65k cases of depression and 232k depression controls. The prevalence of depression was significantly higher in autoimmune cases compared to controls, and vice-versa. PRS for myasthenia gravis and psoriasis were significantly higher in depression cases compared to controls (p < 5.2x10^-5, R2 <= 0.04%). PRS for depression were significantly higher in inflammatory bowel disease, psoriasis, psoriatic arthritis, rheumatoid arthritis and type 1 diabetes cases compared to controls (p < 5.8x10^-5, R2 range 0.06% to 0.27%), and lower in coeliac disease cases compared to controls (p < 5.4x10^-7, R2 range 0.11% to 0.15%).
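A minimal sketch of one cross-trait PRS test on simulated data: regress case/control status on a standardized PRS, report the OR per SD, and compute Nagelkerke R2 from the model log-likelihoods. Covariates are omitted for brevity, and all data and effect sizes are assumptions.

```python
# Cross-trait PRS association: OR per SD and Nagelkerke R2 on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 20000
prs = rng.normal(size=n)                       # standardized depression PRS (fake)
p = 1 / (1 + np.exp(-(-3.0 + 0.08 * prs)))     # small assumed true effect
y = rng.binomial(1, p)

res = sm.Logit(y, sm.add_constant(prs)).fit(disp=0)
or_per_sd = np.exp(res.params[1])
r2_nagelkerke = (1 - np.exp(2 * (res.llnull - res.llf) / n)) / \
                (1 - np.exp(2 * res.llnull / n))
print(or_per_sd, r2_nagelkerke)                # expect a modest OR and a tiny R2
```

The tiny R2 for a detectable OR mirrors the conclusion below: shared genetic factors are real but explain only a small share of the cross-trait risk.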
ConclusionsConsistent with the literature, depression was more common in individuals with autoimmune diseases compared to controls, and vice-versa, in the UKB. PRS showed some evidence for involvement of shared genetic factors, but the modest R2 values suggest that shared genetic architecture accounts for only a small proportion of the increased risk across traits. | psychiatry and clinical psychology |
10.1101/2020.12.08.20246025 | Characterising long term Covid-19: a living systematic review | BackgroundWhile it is now apparent clinical sequelae (often called Long Covid) may persist after acute Covid-19, their nature, frequency, and aetiology are poorly characterised. This study aims to regularly synthesise evidence on Long Covid characteristics, to inform clinical management, rehabilitation, and interventional studies to improve long term outcomes.
MethodsA living systematic review. Medline, CINAHL (EBSCO), Global Health (Ovid), WHO Global Research Database on Covid-19, LitCOVID, and Google Scholar were searched up to 17th March 2021. Published studies including at least 100 people with confirmed or clinically suspected Covid-19 at 12 weeks or more post-onset were included. Results were analysed using descriptive statistics and meta-analyses to estimate prevalence with 95% confidence intervals (CIs).
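A minimal sketch of the prevalence pooling on made-up study counts: a DerSimonian-Laird random-effects meta-analysis of logit-transformed proportions, back-transformed to the prevalence scale. The review does not specify this exact estimator, so treat the choice as an assumption.

```python
# Random-effects pooled prevalence (DerSimonian-Laird on the logit scale).
import numpy as np

events = np.array([40, 55, 130, 23, 61])     # symptom counts per study (hypothetical)
n = np.array([100, 180, 400, 90, 150])       # study sizes (hypothetical)

p = events / n
y = np.log(p / (1 - p))                      # logit prevalence
v = 1 / events + 1 / (n - events)            # within-study variance on logit scale

w = 1 / v
q = (w * (y - (w * y).sum() / w.sum())**2).sum()
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_re = 1 / (v + tau2)                        # random-effects weights
mu = (w_re * y).sum() / w_re.sum()
se = np.sqrt(1 / w_re.sum())
pooled = 1 / (1 + np.exp(-np.array([mu, mu - 1.96 * se, mu + 1.96 * se])))
print(pooled)                                # pooled prevalence and 95% CI bounds
```

A large between-study variance tau2 widens the CI, which is why the heterogeneous studies below yield such broad prevalence intervals.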
ResultsThirty-nine studies were included: 32 cohort, six cross-sectional, and one case-control. Most showed high or moderate risk of bias. None were set in low-income countries, and few studies included children. The studies reported on 10,951 people (48% female) in 12 countries. Most followed up participants after hospital discharge (78%, 8,520/10,951). The longest mean follow-up was 221.7 (SD: 10.9) days post Covid-19 onset. An extensive range of symptoms with widely varying prevalence was reported, most commonly weakness (41%; 95% CI 25% to 59%), malaise (33%; 95% CI 15% to 57%), fatigue (31%; 95% CI 24% to 39%), concentration impairment (26%; 95% CI 21% to 32%), and breathlessness (25%; 95% CI 18% to 34%). Other frequent symptoms included musculoskeletal, neurological, and psychological symptoms. 37% (95% CI 18% to 60%) of people reported reduced quality of life.
ConclusionLong Covid is a complex condition with heterogeneous symptoms. The nature of the studies precludes a precise case definition or evaluation of risk factors. There is an urgent need for prospective, robust, standardised controlled studies into aetiology, risk factors, and biomarkers to characterise Long Covid in different at-risk populations and settings.
Systematic review registrationThe protocol was prospectively registered on the PROSPERO database (CRD42020211131).
Section 1: What is already known?
- A significant number of people continue to describe ongoing symptoms long after the acute phase of Covid-19, often referred to as Long Covid.
- Long Covid is a heterogeneous condition with an uncertain prevalence, for which there is currently no precise case definition.
Section 2: What are the new findings?
- This living systematic review provides a comprehensive summary of peer-reviewed published evidence on persistent symptoms of Covid-19 and will be regularly updated as new evidence emerges.
- The breadth of reported symptoms suggests a complex, heterogeneous condition affecting both those who were hospitalised and those managed in the community.
- Our review identifies weakness (41%; 95% CI 25% to 59%), general malaise (33%; 95% CI 15% to 57%), fatigue (31%; 95% CI 24% to 39%), concentration impairment (26%; 95% CI 21% to 32%) and breathlessness (25%; 95% CI 18% to 34%) as the most common symptoms.
Section 3: What do the new findings imply?
- The current evidence base of the clinical spectrum of Long Covid is limited, based on heterogeneous data, and vulnerable to biases, hence caution should be used when interpreting or generalising the results.
- Our review identifies areas where further Long Covid research is critically needed to help characterise Long Covid in different populations and define its aetiology, risk factors, and biomarkers, as well as the impact of variants of concern and vaccination on long-term outcomes. | public and global health |
10.1101/2020.12.08.20243337 | The genetic case for cardiorespiratory fitness as a clinical vital sign and the routine prescription of physical activity in healthcare | BackgroundCardiorespiratory fitness (CRF) and physical activity (PA) are well-established predictors of morbidity and all-cause mortality. However, CRF is not routinely measured and PA not routinely prescribed as part of standard healthcare. The American Heart Association (AHA) recently presented a scientific case for the inclusion of CRF as a clinical vital sign based on epidemiological and clinical observation. Here, we leverage genetic data in the UK Biobank (UKB) to strengthen the case for CRF as a vital sign, and make a case for the prescription of PA.
MethodsWe derived two CRF measures from the heart rate data collected during a submaximal cycle ramp test: CRF-vo2max, an estimate of the participant's maximum volume of oxygen uptake per kilogram of body weight per minute; and CRF-slope, an estimate of the rate of increase of heart rate during exercise. Average PA over a 7-day period was derived from a wrist-worn activity tracker. After quality control, 70,783 participants had data on the two derived CRF measures, and 89,683 had PA data. We performed genome-wide association study (GWAS) analyses by sex, and post-GWAS techniques to understand the genetic architecture of the traits and prioritize functional genes for follow-up.
ResultsWe found strong evidence that genetic variants associated with CRF and PA influenced genetic expression in a relatively small set of genes in heart, artery, lung, skeletal muscle, and adipose tissue. These functionally relevant genes were enriched among genes known to be associated with coronary artery disease (CAD), type 2 diabetes (T2D), and Alzheimer's disease (three of the top 10 causes of death in high-income countries), as well as Parkinson's disease, pulmonary fibrosis, and blood pressure, heart rate, and respiratory phenotypes. Genetic variation associated with lower CRF and PA was also correlated with several disease risk factors (including greater body mass index, body fat and multiple obesity phenotypes); a typical T2D profile (including higher insulin resistance, higher fasting glucose, impaired beta-cell function, hyperglycaemia, hypertriglyceridemia); increased risk for CAD and T2D; and a shorter lifespan.
ConclusionsGenetics supports three decades of evidence for the inclusion of CRF as a clinical vital sign. Given the genetic, clinical, and epidemiological evidence linking CRF and PA to increased morbidity and mortality, regular measurement of CRF as a marker of health and routine prescription of PA could be a prudent strategy to support public health. | genetic and genomic medicine |
10.1101/2020.12.08.20246041 | Intention of health care workers to accept COVID-19 vaccination and related factors: a systematic review and meta-analysis | Considering the medical and economic burden of the coronavirus disease 2019 (COVID-19), high COVID-19 vaccination coverage among health care workers (HCWs) is an urgent need. The aim of this systematic review and meta-analysis was to estimate the intention of HCWs to accept COVID-19 vaccination and to identify related factors. We searched PubMed, Medline, Scopus, Web of Science, ProQuest, CINAHL and medRxiv until July 14, 2021. The heterogeneity between results was very high and thus we applied a random-effects model to estimate pooled effects. We performed subgroup and meta-regression analyses to identify possible sources of heterogeneity. Twenty-four studies, including 39,617 HCWs, met the inclusion criteria. The overall proportion of HCWs that intend to accept COVID-19 vaccination was 63.5% (95% confidence interval: 56.5-70.2%), with a wide range among studies from 27.7% to 90.1%. The following factors were associated with increased HCWs' willingness to get vaccinated against COVID-19: male gender, older age, white HCWs, physician profession, higher education level, comorbidity among HCWs, seasonal influenza vaccination, stronger vaccine confidence, positive attitude towards a COVID-19 vaccine, fear about COVID-19, individual perceived risk about COVID-19, and contact with suspected or confirmed COVID-19 patients. The reluctance of HCWs to vaccinate against COVID-19 could diminish the trust of individuals and trigger a ripple effect in the general public. Since vaccination is a complex behavior, understanding the way that HCWs make the decision to accept or not accept COVID-19 vaccination will give us the opportunity to develop the appropriate interventions to increase COVID-19 vaccination uptake.
Key Messages
- The overall proportion of health care workers that intend to accept COVID-19 vaccination was moderate.
- Several factors affect health care workers' willingness to get vaccinated against COVID-19.
- COVID-19 vaccine hesitancy among health care workers should be eliminated to inspire the general public towards a positive attitude regarding a novel COVID-19 vaccine. | public and global health |
10.1101/2020.12.09.20242396 | Through The Back Door: Expiratory Accumulation of SARS-Cov-2 in the Olfactory Mucosa as Mechanism for CNS Penetration | IntroductionSARS-CoV-2 is a respiratory virus thought to enter the organism through aerosol or fomite transmission to the nose, eyes and oropharynx. It is responsible for various clinical symptoms, including hyposmia and other neurological manifestations. Current literature suggests the olfactory mucosa as a port of entry to the CNS, but how the virus reaches the olfactory groove is still unknown. Because the first neurological symptoms of invasion (hyposmia) do not coincide with the first signs of infection, the hypothesis of direct contact through airborne droplets during primary infection, and therefore during inspiration, is not plausible. The aim of this study is to evaluate whether secondary spread to the olfactory groove in a retrograde manner during expiration is more probable.
MethodsFour three-dimensional virtual models were obtained from actual CT scans and used to simulate expiratory droplets. The volume mesh consists of 25 million cells; the simulated condition is a steady expiration driving a flow rate of 270 ml/s for a duration of 0.6 seconds. The droplet diameter is 5 µm.
ResultsThe analysis of the simulations shows that droplets have a high probability of being deposited in the rhinopharynx and on the tails of the middle and upper turbinates. The probability of droplets reaching the olfactory mucosa during the expiratory phase is lower than for other nasal areas, but consistent.
DiscussionThe data obtained from these simulations demonstrate that the virus can be deposited in the olfactory groove during expiration. Even if the total amount in a single act is scarce, it must be considered that this act is repeated tens of thousands of times a day, and that the source of contamination acts continuously on a timescale of several days. The present results also imply that CNS penetration of SARS-CoV-2 through the olfactory mucosa might be considered a complication and, consequently, prevention strategies should be considered in diseased patients. | otolaryngology |
10.1101/2020.12.09.20246389 | Untargeted metabolomics of COVID-19 patient serum reveals potential prognostic markers of both severity and outcome. | The diagnosis of COVID-19 is normally based on the qualitative detection of viral nucleic acid sequences. Properties of the host response are not measured but are key in determining outcome. Although metabolic profiles are well suited to capture host state, most metabolomics studies are either underpowered, measure only a restricted subset of metabolites, compare infected individuals against uninfected control cohorts that are not suitably matched, or do not provide a compact predictive model.
Here we provide a well-powered, untargeted metabolomics assessment of 120 COVID-19 patient samples acquired at hospital admission. The study aims to predict the patients infection severity (i.e., mild or severe) and potential outcome (i.e., discharged or deceased).
High resolution untargeted LC-MS/MS analysis was performed on patient serum using both positive and negative ionization modes. A subset of 20 intermediary metabolites predictive of severity or outcome was selected based on univariate statistical significance, and a multiple-predictor Bayesian logistic regression model was created. The predictors were selected for their relevant biological function and include cytosine and ureidopropionate (indirectly reflecting viral load), kynurenine (reflecting host inflammatory response), and multiple short-chain acylcarnitines (energy metabolism), among others.
Currently, this approach predicts outcome and severity with a Monte Carlo cross-validated area under the ROC curve of 0.792 (SD 0.09) and 0.793 (SD 0.08), respectively. A blind validation study on an additional 90 patients predicted outcome and severity with ROC AUCs of 0.83 (CI 0.74 - 0.91) and 0.76 (CI 0.67 - 0.86). Prognostic tests based on the markers discussed in this paper could allow improvement in the planning of COVID-19 patient treatment. | infectious diseases |
10.1101/2020.12.09.20246397 | A genomic epidemiology study of multidrug-resistant Escherichia coli, Klebsiella pneumoniae and Acinetobacter baumannii in two intensive care units in Hanoi, Vietnam | BackgroundVietnam has high rates of antimicrobial resistance (AMR) but limited capacity for genomic surveillance. This study used whole genome sequencing (WGS) to examine the prevalence and transmission of three key AMR pathogens in two intensive care units in Hanoi, Vietnam.
MethodsA prospective surveillance study of all adults admitted to intensive care units (ICUs) at the National Hospital for Tropical Diseases (NHTD) and Bach Mai Hospital (BMH) was conducted between June 2017 and January 2018. Clinical and environmental samples were cultured on selective media, characterised using MALDI-TOF MS, and sequenced on the Illumina platform. Phylogenies based on the de novo assemblies (SPAdes) were constructed using Mafft (PARsnp), Gubbins and RAxML. Resistance genes were detected using Abricate against the NCBI database.
Findings3,153 Escherichia coli, Klebsiella pneumoniae and Acinetobacter baumannii isolates from 369 patients were analysed. Phylogenetic analysis revealed predominant lineages within A. baumannii (global clone [GC]2, sequence types [ST]2, ST571) and K. pneumoniae (ST15, ST16, ST656, ST11, ST147) isolates. Colonisation was most common with E. coli (88.9%) followed by K. pneumoniae (62.4%). Of the E. coli, 91% carried a blaCTX-M variant, while 81% of K. pneumoniae isolates carried blaNDM (54%) and/or blaKPC (45%). Transmission analysis using single nucleotide polymorphisms (SNPs) identified 167 clusters involving 251 (68%) patients, in some cases involving patients from both ICUs. There were no significant differences between the lineages or AMR genes recovered between the two ICUs.
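A minimal sketch of the clustering step on a toy alignment: pairwise SNP distances between isolates, with putative transmission clusters defined as connected components under a distance threshold. The 10-SNP cutoff and the binary alignment below are illustrative assumptions, not the study's parameters.

```python
# Transmission clusters as connected components of a SNP-distance graph.
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(5)
aln = rng.integers(0, 2, size=(30, 2000))          # 30 isolates, binary SNP sites
for i in range(0, 30, 3):                          # plant near-identical trios
    aln[i + 1] = aln[i]
    aln[i + 2] = aln[i]
    aln[i + 1, :5] ^= 1                            # a few SNPs apart

snp_dist = squareform(pdist(aln, metric="hamming") * aln.shape[1])
threshold = 10                                     # assumed SNP cutoff
n_clusters, labels = connected_components(snp_dist <= threshold, directed=False)
print(n_clusters, np.bincount(labels))             # cluster count and sizes
```

Clusters spanning patients from both ICUs, as in the findings above, would appear here as components containing isolates labeled with different ward identifiers.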
InterpretationThis study represents the largest prospective surveillance study of key AMR pathogens in Vietnamese ICUs. Clusters of closely related isolates in patients across both ICUs suggests recent transmission prior to ICU admission in other healthcare settings or in the community.
FundingThis work was funded by the Medical Research Council Newton Fund, United Kingdom; the Ministry of Science and Technology, Vietnam; and the Wellcome Trust, United Kingdom.
Research in context
Evidence before this studyGlobally, antimicrobial resistance (AMR) is projected to cause 10 million deaths annually by 2050. Ninety percent of these deaths are expected to occur in low- and middle-income countries (LMICs), but attributing morbidity and mortality to AMR is difficult in the absence of comprehensive data. Whilst efforts have been made to improve AMR surveillance in these settings, this is often hampered by limited expertise, laboratory infrastructure and financial resources.
Added value of this studyThis is the largest prospective surveillance study of three key AMR pathogens (E. coli, K. pneumoniae and A. baumannii) conducted in critical care settings in Vietnam. Sampling was restricted to patients who were colonised or infected with extended spectrum beta-lactamase (ESBL) producing and/or carbapenem-resistant organisms. Colonisation with more than one organism was very common, with multidrug-resistant (MDR) E. coli being predominant in stool samples. A small number of predominant lineages were identified for K. pneumoniae and A. baumannii, while the E. coli isolates were highly genetically diverse. A large number of genomic clusters were identified within the two ICUs, some of which spanned both ICUs. There were no significant differences between lineages or AMR genes between the two ICUs.
Implications of all the available evidenceThis study found high rates of colonisation and infection with three key AMR pathogens in adults admitted to two Vietnamese ICUs. Whilst transmission was common within ICUs the finding of similar lineages and AMR genes in both ICUs suggests that dissemination of AMR occurs prior to ICU admission, either in referral hospitals or in community settings prior to hospital admission. Strategies to tackle AMR in Vietnam will need to account for this by extending surveillance more widely across hospital and community settings. | infectious diseases |
10.1101/2020.12.09.20246157 | Post Hoc Evaluation of Probabilistic Model Forecasts: A COVID-19 Case Study | To combat the spread of coronavirus disease 2019 (COVID-19), decision-makers and the public may desire forecasts of the cases, hospitalizations, and deaths that are likely to occur. Thankfully, dozens of COVID-19 forecasting models exist, and many of their forecasts have been made publicly available. However, there has been little published peer-reviewed information regarding the performance of these models, and what is available has focused mostly on the performance of their central estimates (i.e., predictive performance). Little has been reported on the accuracy of their uncertainty estimates (i.e., probabilistic performance), which could inform users how often they would be surprised by observations outside forecasted confidence intervals. To address this gap in knowledge, we borrow from the literature on formally elicited expert judgment to demonstrate one commonly used approach for resolving this issue. For two distinct periods of the pandemic, we applied the Classical Model (CM) to evaluate probabilistic model performance and constructed a performance-weighted ensemble based on this evaluation. Some models which exhibited good predictive performance were found to have poor probabilistic performance, and vice versa. Only two of the nine models considered exhibited superior predictive and probabilistic performance. Additionally, the CM-weighted ensemble outperformed the equal-weighted and predictive-weighted ensembles. With its limited scope, this study does not provide definitive conclusions on model performance. Rather, it highlights the evaluation methodology and indicates the utility associated with using the CM when assessing probabilistic performance and constructing high performing ensembles, not only for COVID-19 modeling but other applications as well.
Significance StatementCoronavirus disease 2019 (COVID-19) forecasting models can provide critical information for decision-makers and the public. Unfortunately, little information on their performance has been published, particularly regarding the accuracy of their uncertainty estimates (i.e., probabilistic performance). To address this research gap, we demonstrate the Classical Model (CM), a commonly used approach from the literature on formally elicited expert judgment, which considers both the tightness of forecast confidence intervals and the frequency with which confidence intervals contain the observation. Two models exhibited superior performance, and the CM-based ensemble consistently outperformed the other constructed ensembles. While these results are not definitive, they highlight the evaluation methodology and indicate the value associated with using the CM when assessing probabilistic performance and constructing high performing ensembles. | epidemiology |
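For readers wanting to see the mechanics of the CM evaluation described above, the following is a minimal Python sketch of its calibration component: observations are binned by each model's forecast quantiles, and the empirical bin frequencies are tested against their theoretical values. The three-quantile format, function name, and use of SciPy are illustrative assumptions, not the study's code; the full CM also multiplies calibration by an information score to form performance weights, which this sketch omits.

```python
import numpy as np
from scipy.stats import chi2

def cm_calibration_score(quantiles, observations):
    """Calibration component of Cooke's Classical Model (illustrative).

    quantiles: (N, 3) array of each forecast's 5%, 50% and 95% quantiles;
    observations: length-N array of the values later observed.
    """
    q = np.asarray(quantiles, dtype=float)
    y = np.asarray(observations, dtype=float)
    # Bin each observation by the inter-quantile interval it lands in.
    bins = np.array([np.searchsorted(q[i], y[i]) for i in range(len(y))])
    s = np.bincount(bins, minlength=4) / len(y)  # empirical bin frequencies
    p = np.array([0.05, 0.45, 0.45, 0.05])       # theoretical frequencies
    # Relative entropy I(s, p); empty bins contribute nothing.
    with np.errstate(divide="ignore", invalid="ignore"):
        relent = float(np.where(s > 0, s * np.log(s / p), 0.0).sum())
    # Calibration score: tail probability of the chi-square statistic 2*N*I.
    return chi2.sf(2 * len(y) * relent, df=len(p) - 1)
```

A well-calibrated forecaster scores near 1; one whose intervals rarely contain the observation scores near 0.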
10.1101/2020.12.09.20246736 | Whole genome sequencing association analysis of quantitative red blood cell phenotypes: the NHLBI TOPMed program | Whole genome sequencing (WGS), a powerful tool for detecting novel coding and non-coding disease-causing variants, has largely been applied to clinical diagnosis of inherited disorders. Here we leveraged WGS data in up to 62,653 ethnically diverse participants from the NHLBI Trans-Omics for Precision Medicine (TOPMed) program and assessed statistical association of variants with seven red blood cell (RBC) quantitative traits. We discovered 14 single variant-RBC trait associations at 12 genomic loci. Several of the RBC trait-variant associations (RPN1, ELL2, MIDN, HBB, HBA1, PIEZO1, G6PD) were replicated in independent GWAS datasets imputed to the TOPMed reference panel. Most of these newly discovered variants are rare/low frequency, and several are observed disproportionately among non-European Ancestry (African, Hispanic/Latino, or East Asian) populations. We identified a 3bp indel p.Lys2169del (common only in the Ashkenazi Jewish population) of PIEZO1, a gene responsible for the Mendelian red cell disorder hereditary xerocytosis [OMIM 194380], associated with higher MCHC. In stepwise conditional analysis and in gene-based rare variant aggregated association analysis, we identified several of the variants in HBB, HBA1, TMPRSS6, and G6PD that represent the carrier state for known coding, promoter, or splice site loss-of-function variants that cause inherited RBC disorders. Finally, we applied base and nuclease editing to demonstrate that the sentinel variant rs112097551 (nearest gene RPN1) acts through a cis-regulatory element that exerts long-range control of the gene RUVBL1 which is essential for hematopoiesis. Together, these results demonstrate the utility of WGS in ethnically-diverse population-based samples and gene editing for expanding knowledge of the genetic architecture of quantitative hematologic traits and suggest a continuum between complex trait and Mendelian red cell disorders. | hematology |
10.1101/2020.12.10.20246892 | Stability of the stroke code during the COVID-19 pandemic in the region of Madrid: a retrospective cohort study | Acute Stroke (AS) is the most common time-dependent disease attended by the Emergency Medicine Service (EMS) of Madrid (SUMMA 112). The Community of Madrid has been one of the regions in Spain most affected by the coronavirus disease 2019 (COVID-19) pandemic. A significant reduction in AS hospital admissions has been reported during the COVID-19 pandemic compared to the same period one year before. As international clinical practice guidelines recommend that patients with suspected acute stroke access care via the EMS, it is important to know whether the pandemic has jeopardized urgent pre-hospital stroke care, the first medical contact for most patients. We aimed to examine the impact of COVID-19 on stroke codes (SC) in our EMS across three periods: the COVID-19 period, the same period the year before, and the 2019-2020 seasonal influenza period. Methods: We compared the SC frequency among the periods with a high cumulative infection rate (above the median of the series) of the first wave of COVID-19, of seasonal influenza, and of the same period of the year before. Results: 1,130 SC were attended during the three periods. No significant reduction in SC was found during the COVID-19 pandemic. The reduction in hospital admissions might be attributable to patients attending the hospital by their own means. The maximum SC workload seen during seasonal influenza was not reached during the pandemic. We detected a non-significant deviation from the SC protocol, with a slight increase in transfers to hospitals without stroke units. | emergency medicine |
10.1101/2020.12.10.20247452 | Trends in prevalence of diabetes subgroups in U.S. adults: A data-driven cluster analysis spanning three decades from NHANES (1988-2018) | AIMSData-driven diabetes subgroups were proposed as an alternative to address diabetes heterogeneity. However, changes in trends for these subgroups have not been reported. Here, we analyzed trends of diabetes subgroups, stratified by sex, race, education level, age categories and time since diabetes diagnosis in the U.S.
METHODSWe used data from consecutive NHANES cycles spanning the 1988-2018 period. Diabetes subgroups (mild obesity-related [MOD], severe insulin-deficient [SIDD], severe insulin-resistant [SIRD], and mild age-related diabetes [MARD]) were classified using validated self-normalizing neural networks. Severe autoimmune diabetes (SAID) was assessed for NHANES-III. Prevalence was estimated using examination sample weights, considering bi-cyclic changes (BC) to evaluate trends and changes over time.
RESULTSDiabetes prevalence in the US increased from 7.5% (95%CI 7.1-7.9) in 1988-1989 to 13.9% (95%CI 13.4-14.4) in 2016-2018 (BC 1.09%, 95%CI 0.98-1.31, p<0.001). Non-Hispanic Blacks had the highest prevalence. Overall, MOD, MARD, and SIDD increased during the studied period. In particular, Non-Hispanic Blacks had sharp increases in MARD and SIDD, Mexican Americans in SIDD, and non-Hispanic Whites in MARD. Males, subjects with secondary/high school education, and adults aged 40-64 years had the highest increase in MOD prevalence. Trends in diabetes subgroups were sustained after stratifying by time since diabetes diagnosis.
CONCLUSIONSThe prevalence of diabetes and its subgroups in the U.S. increased from 1988 to 2018. These trends differed across sex, ethnicities, education, and age categories, indicating significant heterogeneity in diabetes within the U.S. Obesity burden, population aging, socioeconomic disparities, and lifestyle aspects could be implicated in the rising trends of diabetes in the U.S. | endocrinology |
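As a small illustration of how examination sample weights enter the prevalence estimates above, consider this hedged Python sketch; it omits the design-based variance estimation that NHANES analyses require, and all names are illustrative.

```python
import numpy as np

def weighted_prevalence(has_condition, sample_weights):
    """Survey-weighted prevalence: weighted share of positive indicators."""
    w = np.asarray(sample_weights, dtype=float)
    x = np.asarray(has_condition, dtype=float)  # 0/1 subgroup indicator
    return float(np.sum(w * x) / np.sum(w))
```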
10.1101/2020.12.10.20246827 | Projecting the impact of a two-dose COVID-19 vaccination campaign in Ontario, Canada | BackgroundA number of highly effective COVID-19 vaccines have been developed and approved for mass vaccination. We evaluated the impact of vaccination on COVID-19 outbreak and disease outcomes in Ontario, Canada.
MethodsWe used an agent-based transmission model and parameterized it with COVID-19 characteristics, demographics of Ontario, and age-specific clinical outcomes. We implemented a two-dose vaccination program according to tested schedules in clinical trials for Pfizer-BioNTech and Moderna vaccines, prioritizing healthcare workers, individuals with comorbidities, and those aged 65 and older. Daily vaccination rate was parameterized based on vaccine administration data. Using estimates of vaccine efficacy, we projected the impact of vaccination on the overall attack rate, hospitalizations, and deaths. We further investigated the effect of increased daily contacts at different stages during vaccination campaigns on outbreak control.
ResultsMaintaining non-pharmaceutical interventions (NPIs) with an average of 74% reduction in daily contacts, vaccination with Pfizer-BioNTech and Moderna vaccines was projected to reduce hospitalizations by 27.3% (95% CrI: 22.3% - 32.4%) and 27.0% (95% CrI: 21.9% - 32.6%), respectively, over a one-year time horizon. The largest benefits of vaccination were observed in preventing deaths with reductions of 31.5% (95% CrI: 22.5% - 39.7%) and 31.9% (95% CrI: 22.0% - 41.4%) for Pfizer-BioNTech and Moderna vaccines, respectively, compared to no vaccination. We found that an increase of only 10% in daily contacts at the end of lockdown, when vaccination coverage with only one dose was 6%, would trigger a surge in the outbreak. Early relaxation of population-wide measures could lead to a substantial increase in the number of infections, potentially reaching levels observed during the peak of the second wave in Ontario.
ConclusionsVaccination can substantially mitigate ongoing COVID-19 outbreaks. Sustaining population-wide NPIs, to allow for a sufficient increase in population-level immunity through vaccination, is essential to prevent future outbreaks. | epidemiology |
10.1101/2020.12.10.20247023 | Low case numbers enable long-term stable pandemic control without lockdowns | The traditional long-term solutions for epidemic control involve eradication or population immunity. Here, we analytically derive the existence of a third viable solution: a stable equilibrium at low case numbers, where test-trace-and-isolate policies partially compensate for local spreading events, and only moderate restrictions remain necessary. In this equilibrium, daily cases stabilize around ten new infections per million people or less. However, stability is endangered if restrictions are relaxed or case numbers grow too high. The latter destabilization marks a tipping point beyond which the spread self-accelerates. We show that a lockdown can reestablish control and that recurring lockdowns are not necessary given sustained, moderate contact reduction. We illustrate how this strategy profits from vaccination and helps mitigate variants of concern. This strategy reduces cumulative cases (and fatalities) 4x more than strategies that only avoid hospital collapse. In the long term, immunization, large-scale testing, and international coordination will further facilitate control. | public and global health |
10.1101/2020.12.10.20247270 | A Decision Analytics Model to Optimize Investment in Interventions Targeting the HIV PrEP Cascade of Care | ObjectivesGaps between recommended and actual levels of HIV preexposure prophylaxis (PrEP) use remain among men who have sex with men (MSM). Interventions can address these gaps, but it is unknown how public health initiatives should invest prevention funds into these interventions to maximize their population impact.
DesignWe used a stochastic network-based HIV transmission model for MSM in the Atlanta area paired with an economic budget optimization model.
MethodsThe model simulated MSM participating in up to three real-world PrEP cascade interventions designed to improve initiation, adherence, or persistence. The primary outcome was infections averted over 10 years. The budget optimization model identified the investment combination under different budgets that maximized this outcome given intervention costs from a payer perspective.
ResultsFrom the base 15% PrEP coverage level, the three interventions could increase coverage to 27%, resulting in 12.3% of infections averted over 10 years. Uptake of each intervention was interdependent: maximal use of the adherence and persistence interventions depended on new PrEP users generated by the initiation intervention. As the budget increased, optimal investment involved a mixture of the initiation and persistence interventions, but not the adherence intervention. If adherence intervention costs were halved, the optimal investment was roughly equal across interventions.
ConclusionsInvestments into the PrEP cascade through initiatives should account for the interactions of the interventions as they are collectively deployed. Given current intervention efficacy estimates, the total population impact of each intervention may be improved with greater total budgets or reduced intervention costs. | hiv aids |
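As a toy illustration of the budget optimization step described in the entry above, the sketch below allocates a fixed budget across three interventions to maximize infections averted under assumed linear returns. All costs, effect sizes, and caps are invented, and the study's network model captures interdependencies that this linear program deliberately ignores.

```python
from scipy.optimize import linprog

# Invented per-participant costs and infections averted for the three
# PrEP-cascade interventions (initiation, adherence, persistence).
cost_per_unit = [120.0, 80.0, 150.0]
averted_per_unit = [0.009, 0.004, 0.011]
budget = 250_000.0
max_units = [800.0, 800.0, 800.0]  # reachable participants per intervention

result = linprog(
    c=[-a for a in averted_per_unit],     # linprog minimizes, so negate
    A_ub=[cost_per_unit], b_ub=[budget],  # total spend within budget
    bounds=list(zip([0.0, 0.0, 0.0], max_units)),
)
allocation, total_averted = result.x, -result.fun
```

With these toy numbers, the program saturates the interventions with the best averted-per-dollar ratios first, mirroring the intuition that optimal investment shifts as the budget grows.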
10.1101/2020.12.10.20247205 | Diverse Functional Autoantibodies in Patients with COVID-19 | COVID-19 manifests with a wide spectrum of clinical phenotypes that are characterized by exaggerated and misdirected host immune responses1-8. While pathological innate immune activation is well documented in severe disease1, the impact of autoantibodies on disease progression is less defined. Here, we used a high-throughput autoantibody discovery technique called Rapid Extracellular Antigen Profiling (REAP) to screen a cohort of 194 SARS-CoV-2 infected COVID-19 patients and healthcare workers for autoantibodies against 2,770 extracellular and secreted proteins (the "exoproteome"). We found that COVID-19 patients exhibit dramatic increases in autoantibody reactivities compared to uninfected controls, with a high prevalence of autoantibodies against immunomodulatory proteins including cytokines, chemokines, complement components, and cell surface proteins. We established that these autoantibodies perturb immune function and impair virological control by inhibiting immunoreceptor signaling and by altering peripheral immune cell composition, and found that murine surrogates of these autoantibodies exacerbate disease severity in a mouse model of SARS-CoV-2 infection. Analysis of autoantibodies against tissue-associated antigens revealed associations with specific clinical characteristics and disease severity. In summary, these findings implicate a pathological role for exoproteome-directed autoantibodies in COVID-19 with diverse impacts on immune functionality and associations with clinical outcomes. | infectious diseases |
10.1101/2020.12.11.20247106 | Concurrent validity of an Estimator of Weekly Alcohol Consumption (EWAC) based on the Extended AUDIT | Background and AimsThe 3-question Alcohol Use Disorders Identification Test (AUDIT-C) is frequently used in healthcare for screening and brief advice about levels of alcohol consumption. AUDIT-C scores (0-12) provide feedback as categories of risk rather than estimates of actual alcohol intake, an important metric for behaviour change. The study aimed to (a) develop a continuous metric from the Extended AUDIT-C, expressed in United Kingdom (UK) units (8 g pure ethanol), offering equivalent accuracy and providing a direct estimator of weekly alcohol consumption (EWAC), and (b) evaluate the EWAC's bias and error using the Graduated-Frequency (GF) questionnaire as a reference standard of alcohol consumption.
DesignCross-sectional diagnostic study based on a nationally-representative survey.
SettingsCommunity-dwelling households in England.
Participants22,404 household residents aged ≥16 years reporting drinking alcohol at least occasionally.
MeasurementsComputer-assisted personal interviews consisting of (a) AUDIT questionnaire with extended response items (the Extended AUDIT) and (b) GF. Primary outcomes were: mean deviation <1 UK unit (metric of bias); root mean squared deviation <2 UK units (metric of total error) between EWAC and GF. The secondary outcome was the receiver operating characteristic area under the curve for predicting alcohol consumption in excess of 14 and 35 UK units.
FindingsEWAC had a positive bias of 0.2 UK units [95% confidence interval: 0.08, 0.4] compared with GF. Deviations were skewed: while the mean error was ±11 UK units/week [9.5, 11.9], in half of participants the deviation between EWAC and GF was between 0 and ±2.1 UK units/week. EWAC predicted consumption in excess of 14 UK units/week with a significantly greater area under the curve (0.918 [0.914, 0.923]) than AUDIT-C (0.870 [0.864, 0.876]) or the full AUDIT (0.854 [0.847, 0.860]).
ConclusionsA new estimator of weekly alcohol consumption (EWAC), which uses answers to the Extended Alcohol Use Disorders Identification Test (Extended AUDIT-C), meets the targeted bias tolerance. It is superior in accuracy to AUDIT-C and the full AUDIT when predicting consumption thresholds, making it a reliable complement to the Extended AUDIT-C for health promotion interventions. | addiction medicine |
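A minimal Python sketch of the two primary outcome metrics in the entry above (mean deviation as bias, root mean squared deviation as total error); the function and variable names are illustrative.

```python
import numpy as np

def agreement_metrics(ewac, gf):
    """Bias (mean deviation) and total error (root mean squared deviation)
    of the estimator against the reference, in UK units/week."""
    d = np.asarray(ewac, dtype=float) - np.asarray(gf, dtype=float)
    bias = d.mean()                    # study target: < 1 UK unit
    rmsd = np.sqrt(np.mean(d ** 2))    # study target: < 2 UK units
    return bias, rmsd
```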
10.1101/2020.12.10.20247361 | COVID-19 PREDICTION IN SOUTH AFRICA: ESTIMATING THE UNASCERTAINED CASES- THE HIDDEN PART OF THE EPIDEMIOLOGICAL ICEBERG | Understanding the impact of non-pharmaceutical interventions, as well as accounting for the unascertained cases, remains a critical challenge for epidemiological models of the transmission dynamics of COVID-19 spread. In this paper, we propose a new epidemiological model (eSEIRD) that extends widely used epidemiological models such as the extended Susceptible-Infected-Removed model (eSIR) and SAPHIRE (initially developed and used for analyzing data from Wuhan). We fit these models to the daily ascertained infected (and removed) cases from March 15, 2020 to December 31, 2020 in South Africa, which reported the largest number of confirmed COVID-19 cases and deaths in the WHO African region. Using the eSEIRD model, the COVID-19 transmission dynamics in South Africa were characterized by an estimated basic reproduction number (R0) starting at 3.22 (95%CrI: [3.19, 3.23]), then dropping below 2 following a mandatory lockdown implementation and subsequently increasing to 3.27 (95%CrI: [3.27, 3.27]) by the end of 2020. The initial decrease of the effective reproduction number followed by an increase suggests the effectiveness of early interventions and the combined effect of relaxing strict interventions and the emergence of a new coronavirus variant in South Africa. The low estimated ascertainment rate was found to vary from 1.65% to 9.17% across models and time periods. The overall infection fatality ratio (IFR) was estimated as 0.06% (95%CrI: [0.04%, 0.22%]) accounting for unascertained cases and deaths, while the reported case fatality ratio was 2.88% (95% CrI: [2.45%, 6.01%]). The models predict that from December 31, 2020, to April 1, 2021, the cumulative number of infected would reach roughly 70% of the total population in South Africa. Besides providing insights on the COVID-19 dynamics in South Africa, we develop powerful forecasting tools that enable estimation of ascertainment rates and IFR while quantifying the effect of intervention measures on COVID-19 spread. | epidemiology |
10.1101/2020.12.11.20246694 | Did people really drink bleach to prevent COVID-19? A tale of problematic respondents and a guide for measuring rare events in survey data | Society is becoming increasingly dependent on survey research. However, surveys can be impacted by participants who are non-attentive, respond randomly to survey questions, and misrepresent who they are and their true attitudes. The impact that such respondents can have on public health research has rarely been systematically examined. In this study we examine whether Americans began to engage in dangerous cleaning practices to avoid COVID-19 infection. Prior findings reported by the CDC have suggested that people began to engage in highly dangerous cleaning practices during the COVID-19 pandemic, including ingesting household cleansers such as bleach. In a series of studies totaling close to 1,400 respondents, we show that 80-90% of reports of household cleanser ingestion are made by problematic respondents. These respondents report impossible claims, such as recently having had a fatal heart attack and eating concrete for its iron content, at a similar rate to ingesting household cleaners. Additionally, respondents' frequent misreading or misinterpretation of the intent of questions accounted for the rest of such claims. Once inattentive, mischievous, and careless respondents are removed from the analytic sample, we find no evidence that people ingest cleansers to prevent COVID-19 infection. The relationship between dangerous cleaning practices and health outcomes also becomes non-significant once problematic respondents are removed from the analytic sample. These results show that reported ingestion of household cleaners and other similarly dangerous practices are an artifact of problematic respondent bias. The implications of these findings for public health and medical survey research, as well as best practices for avoiding problematic respondents in surveys, are discussed. | public and global health |
10.1101/2020.12.12.20247890 | Relationships of Alzheimer's disease and apolipoprotein E genotypes with small RNA and protein cargo of brain tissue extracellular vesicles | Alzheimer's disease (AD) is a public health crisis that grows as populations age. Hallmarks of this neurodegenerative disease include aggregation of beta-amyloid peptides and hyperphosphorylated tau proteins in the brain. Variants of the APOE gene are the greatest known risk factors for sporadic AD. As emerging players in AD pathophysiology, extracellular vesicles (EVs) contain proteins, lipids, and RNAs and are involved in disposal of cellular toxins and intercellular communication. AD-related changes in the molecular composition of EVs may contribute to pathophysiology and lend insights into disease mechanisms. We recently adapted a method for separation of brain-derived EVs (bdEVs) from post-mortem tissues. Using this method, we isolated bdEVs from AD patients with different APOE genotypes and controls. bdEVs were counted, sized, and subjected to parallel small RNA sequencing and proteomic analysis. Although overall bdEV concentration was not affected by AD, we observed a shift towards smaller particles in AD. Also, numerous bdEV-associated RNAs (including miRNAs and tRNAs) and proteins were found to be correlated with AD pathology and APOE genotype. Some of the identified entities have been implicated previously in important AD-related pathways, including amyloid processing, neurodegeneration, and metabolic functions. Prominently, the AD hallmark Tau and Tau phosphorylated at threonine 231 (phosTau) were significantly increased in AD bdEVs, indicating the involvement of bdEVs in the spread of Tau pathology. These findings provide further evidence that bdEVs and their molecular cargo modulate development and progression of AD. | neurology |
10.1101/2020.12.12.20248005 | Application of concise machine learning to construct accurate and interpretable EHR computable phenotypes | ObjectiveElectronic health records (EHRs) can improve patient care by enabling systematic identification of patients for targeted decision support. However, this requires scalable learning of computable phenotypes. To this end, we developed the feature engineering automation tool (FEAT) and assessed it by targeting screening for primary aldosteronism, an underdiagnosed and under-treated disease.
Materials and MethodsWe selected 1,199 subjects receiving longitudinal care in a large health system and classified them for hypertension (N=608), hypertension with unexplained hypokalemia (N=172), and apparent treatment-resistant hypertension (N=176) by chart review. We derived 331 features from EHR encounters, diagnoses, laboratories, medications, vitals, and notes. We modified FEAT to encourage model parsimony and compared its models' performance and interpretability to those of expert-curated heuristics and conventional machine learning.
ResultsFEAT models trained to replicate expert-curated heuristics had higher area under the precision-recall curve (AUPRC) than all other models (p < 0.001) except random forests and were smaller than all other models (p < 1e-6) except decision trees. FEAT models trained to predict chart review phenotypes exhibited similar AUPRC to penalized logistic regression while being simpler than all other models (p < 1e-6). For treatment-resistant hypertension, FEAT learned a six-feature, clinically intuitive model that demonstrated a positive predictive value of 0.70 and sensitivity of 0.62 in held-out testing data.
DiscussionFEAT learns computable phenotypes that approach the performance of expert-curated heuristics and conventional machine learning without sacrificing interpretability.
ConclusionBy constructing accurate and interpretable computable phenotypes at scale, FEAT has the potential to facilitate systematic clinical decision support. | health informatics |
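The entry above compares models by AUPRC; the following hedged sketch shows one way to compute that comparison with scikit-learn. The function name and the dictionary-of-scores interface are illustrative, not FEAT's API.

```python
from sklearn.metrics import average_precision_score

def rank_by_auprc(y_true, scores_by_model):
    """Rank candidate computable-phenotype models by area under the
    precision-recall curve (average precision), highest first."""
    return sorted(
        ((name, average_precision_score(y_true, scores))
         for name, scores in scores_by_model.items()),
        key=lambda item: item[1],
        reverse=True,
    )
```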
10.1101/2020.12.13.20247254 | Predicting mortality in SARS-COV-2 (COVID-19) positive patients in the inpatient setting using a Novel Deep Neural Network | BackgroundThe second wave of the COVID-19 pandemic is anticipated to be worse than the initial one and will strain healthcare systems even more during the winter months. Our aim was to develop a machine learning-based model to predict mortality using the deep learning Neo-V framework. We hypothesized that this novel machine learning approach could be applied to COVID-19 patients to predict mortality successfully with high accuracy.
MethodsThe current Deep-Neo-V model is built on our previously statistically rigorous machine learning framework [Fahad-Liaqat-Ahmad Intensive Machine (FLAIM) framework] that evaluated statistically significant risk factors, generated new combined variables, and then supplied these risk factors to a deep neural network to predict mortality in RT-PCR-positive COVID-19 patients in the inpatient setting. We analyzed adult patients (≥18 years) admitted to the Aga Khan University Hospital, Pakistan, with a working diagnosis of COVID-19 infection (n=1,228). We excluded patients who were negative for COVID-19 on RT-PCR or had incomplete or missing health records. The first-phase selection of risk factors was done using Cox-regression univariate and multivariate analyses. In the second phase, we generated new variables and tested those statistically significant for mortality, and in the third and final phase we applied deep neural networks and other traditional machine learning models such as decision trees, k-nearest neighbor models, and others.
ResultsA total of 1,228 cases were diagnosed as COVID-19 infection; we excluded 14 patients per the exclusion criteria and analyzed the remaining 1,214 (n=1,214). We observed that several clinical and laboratory-based variables were statistically significant in both univariate and multivariate analyses, while others were not. The most significant were septic shock (hazard ratio [HR], 4.30; 95% confidence interval [CI], 2.91-6.37), supportive treatment (HR, 3.51; 95% CI, 2.01-6.14), abnormal international normalized ratio (INR) (HR, 3.24; 95% CI, 2.28-4.63), admission to the intensive care unit (ICU) (HR, 3.24; 95% CI, 2.22-4.74), treatment with invasive ventilation (HR, 3.21; 95% CI, 2.15-4.79) and laboratory lymphocytic derangement (HR, 2.79; 95% CI, 1.6-4.86). Machine learning results showed that our DNN (Neo-V) model outperformed all conventional machine learning models, with a test set accuracy of 99.53%, sensitivity of 89.87%, and specificity of 95.63%; positive predictive value, 50.00%; negative predictive value, 91.05%; and area under the receiver-operator curve of 88.5.
ConclusionOur novel Deep-Neo-V model outperformed all other machine learning models. The model is easy to implement, user-friendly, and highly accurate. | infectious diseases |
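For reference, the test-set metrics quoted above all derive from a 2x2 confusion matrix; a minimal sketch follows (the counts passed in would come from a held-out test set, and the function name is illustrative).

```python
def classification_metrics(tp, fp, tn, fn):
    """Derive the reported test-set metrics from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }
```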
10.1101/2020.12.12.20240143 | International Randomized Controlled Trial Evaluating Changes in Oral Health in Smokers After Switching to Combustion-Free Nicotine Delivery Systems: the SMILE protocol | Despite the well-known detrimental effects of conventional cigarette smoking on oral health, there is still a lack of evidence about the impact of less harmful alternatives (such as electronic cigarettes or heat-not-burn products), especially in young smokers with no clinical signs of periodontitis.
SMILE will be a prospective, multicenter, interventional, open-label, randomized, controlled, three-parallel-arm study of 18 months' duration assessing oral health parameters and teeth appearance.
This study aims to compare the short- and long-term impact on oral health between smokers continuing with conventional cigarette smoking, those switching to combustion-free nicotine delivery systems (C-F NDS), and never-smokers by objectively evaluating changes in gingival response, as a proxy for periodontal/gingival health.
The total planned number of participants in the trial is 606 (505 regular smokers and 101 never-smokers).
Regular smokers not intending to quit will be randomized in a 1:4 ratio either to continue smoking commercially manufactured conventional cigarettes (n = 101; Study Arm A) or to switch to C-F NDS (n = 404; Study Arm B); never-smokers will be assigned to Arm C (n = 101).
The primary outcome will be to assess and compare the percentage mean change in Modified Gingival Index (MGI) score from Baseline to 18 months' follow-up between Study Arms A and B.
Secondary outcomes include the assessment of within- and between-group (Arm A, Arm B and Arm C) variations from baseline to 18 months follow-up of several endpoints, such as MGI, Tooth Stains Assessment, Dental Discolorations, Plaque Score Imaging, Oral Health Quality of Life (OHQOL) assessment and EuroQoL Visual Analog Scale (EQ VAS - QoL) assessment.
Patient recruitment will start in January 2021 and enrolment is expected to be completed by June 2021.
This will be the first study determining the overall oral health impact of using C-F NDS in smokers without signs of periodontitis. Data from this study will provide valuable insights into the overall potential of C-F NDS to reduce the risk of periodontal diseases. | dentistry and oral medicine |
10.1101/2020.12.11.20247866 | Increased Burden of Familial-associated Early-onset Cancer Risk among Minority Americans Compared to non-Latino Whites. | BackgroundThe role of race/ethnicity in the genetic predisposition to early-onset cancers can be estimated by comparing family-based cancer concordance rates among ethnic groups.
MethodsWe used linked California health registries to evaluate the relative cancer risks for first-degree relatives of patients diagnosed between ages 0-26, and the relative risks of developing distinct second primary malignancies (SPMs). From 1989-2015, we identified 29,631 cancer patients and 62,863 healthy family members. We calculated the standardized incidence ratios (SIRs) of early-onset primary cancers diagnosed in proband siblings and mothers, as well as SPMs detected among early-onset patients. Analyses were stratified by self-identified race/ethnicity.
ResultsGiven probands with cancer, there were increased relative risks of any cancer for siblings and mothers (SIR=3.32; 95% confidence interval (CI): 2.85-3.85) and of SPMs (SIR=7.27; 95%CI: 6.56-8.03). Higher relative risk of any cancer in siblings and mothers given a proband with solid cancer (P<0.05) was observed both for Latinos (SIR=4.98; 95%CI: 3.82-6.39) and for non-Latino Blacks (SIR=7.35; 95%CI: 3.36-13.95) compared to non-Latino White subjects (SIR=3.02; 95%CI: 2.12-4.16). For hematologic cancers, higher familial risk was evident for Asian/Pacific Islanders (SIR=7.56; 95%CI: 3.26-14.90) compared to non-Latino whites (SIR=2.69; 95%CI: 1.62-4.20).
ConclusionsThe data support a need for increased attention to the genetics of early-onset cancer predisposition and environmental factors in race/ethnic minority families in the US.
FundingThis work was supported by the V Foundation for funding this work (Grant FP067172).
Key Messages:
- We identified 29,631 cancer patients and their 62,863 healthy family members in California from 1989 to 2015.
- The risk of early-onset cancer in siblings and mothers was elevated by having a proband with cancer in the same family.
- The relative risk of early-onset cancers given a proband with solid cancer was higher for Latinos and Blacks when compared to non-Latino Whites. | epidemiology |
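A minimal sketch of the standardized incidence ratio underlying the estimates in the entry above, assuming stratum-specific reference rates are available; the confidence-interval machinery (typically Poisson-based) is omitted, and all numbers in the example call are invented.

```python
def standardized_incidence_ratio(observed, person_years, reference_rates):
    """SIR = observed cases / expected cases, where expected cases come from
    stratum-specific person-years multiplied by reference incidence rates."""
    expected = sum(py * rate for py, rate in zip(person_years, reference_rates))
    return observed / expected

# Illustrative call with three made-up age strata:
# standardized_incidence_ratio(12, [1e4, 2e4, 1.5e4], [2e-4, 1e-4, 3e-4])
```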
10.1101/2020.12.11.20247924 | Responsiveness to risk explains large variation in COVID-19 mortality across countries | BackgroundHealth outcomes from the COVID-19 pandemic vary widely across countries - by 2022 New Zealand had suffered ~10 total deaths per million people, whereas the U.K. and U.S. have had over 2000. Differences in infection fatality rates are insufficient to explain such vastly divergent outcomes. We propose that endogenous behavioral responses to risk shape countries' epidemic trajectories, and that differences in responsiveness to risk are a primary driver of variation in epidemic scale and resultant mortality.
MethodsWe develop several testable predictions based on the proposed endogenous risk response mechanism. We test these using a simple modified SEIR model incorporating this mechanism, which we estimate for 131 countries (5.96 billion people) using data on daily reported SARS-CoV-2 infections and COVID-19 deaths. We further examine associations between COVID-19 deaths and several observed and model-estimated country characteristics using linear regression.
FindingsWe find empirical support for all predictions tested: 1) endogenous risk response substantially improves an SEIR model's fit to data (mean absolute errors normalized by mean=66% across countries, vs. 551% without endogenous risk response); 2) Re converges to ~1 across countries in both empirical data and model estimates with endogenous risk response (but not without it); 3) most cross-country variation in death rates cannot be explained by intuitively important factors like hospital capacity or policy response stringency; and 4) responsiveness to risk, which governs the sensitivity of the endogenous risk response, correlates strongly with death rates (R2=0.75) and is the strongest explanatory factor for cross-country variation therein.
InterpretationCountries converge to policy measures consistent with Re ~1 (or exponentially growing outbreaks will compel them to increase restrictions). Responsiveness to risk, i.e. how readily a country adopts the required measures, shapes long-term cases and deaths. With greater responsiveness, many countries could considerably improve pandemic outcomes without imposing more restrictive control policies.
FundingNone.
Research in context. Evidence before this study: While numerous studies have examined drivers of variation in COVID-19 infection fatality rates (e.g. age, comorbidities), relatively few have sought to explain cross-country variation in overall mortality outcomes. Mortality outcomes are shaped primarily by variation in infection rates and only secondarily by IFR variation, yet the drivers of cross-country variation in infection rates are poorly understood as well. There is ample evidence that policy responses such as mask mandates, quarantines, and other non-pharmaceutical interventions (NPIs) can influence transmission, yet across countries the association of policy responses with long-term infection rates remains weak.
Added value of this studyWe propose a simple and intuitive theoretical mechanism - endogenous behavioral response to risk - to explain variation in cases and deaths across countries. While simple, the implications of this mechanism are rarely examined; a recent review found only 1 of 61 models in the CDC COVID-19 Forecast Hub incorporates endogenous risk response. We demonstrate how it can explain several empirical regularities, including multiple outbreak waves and convergence in effective reproduction number, which existing models largely do not explain. Our empirical estimates of responsiveness to risk also show that it drives wide cross-country variation in mortality outcomes, which otherwise remains unexplained.
Implications of all the available evidenceOur results highlight the key role of responsiveness to risk in shaping COVID-19 mortality outcomes. They suggest that adopting similar policy responses but with greater responsiveness could improve outcomes and reduce mortality. The responsiveness mechanism also resolves the apparent paradox that despite clear evidence of proximal NPI effectiveness, their impact on infection and death outcomes in cross-country comparisons is rather weak. Our results highlight the need to understand better the determinants of differences in responsiveness across countries, and how to improve it. | epidemiology |
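To make the proposed mechanism concrete, here is a schematic Python sketch of an SEIR model whose contact rate is damped by perceived risk, proxied by recent deaths per capita. The functional form, the deaths-based risk proxy, and every parameter are illustrative assumptions, not the paper's estimated model.

```python
import numpy as np

def seir_risk_response(beta0, alpha, sigma, gamma, ifr, days, N, E0=10.0):
    """Discrete-time SEIR in which transmission falls as perceived risk
    (recent deaths per capita) rises. Illustrative form only."""
    S, E, I, R = N - E0, E0, 0.0, 0.0
    recent_deaths = 0.0
    infectious = []
    for _ in range(days):
        beta = beta0 / (1.0 + alpha * recent_deaths / N)  # endogenous response
        new_exposed = beta * S * I / N
        new_infectious = sigma * E
        new_removed = gamma * I
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_removed
        R += new_removed
        recent_deaths = ifr * new_removed  # crude proxy for perceived risk
        infectious.append(I)
    return np.array(infectious)
```

With a sufficiently strong response (large alpha), transmission self-regulates so that the effective reproduction number hovers near 1, which is the convergence behavior the paper reports.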
10.1101/2020.12.13.20248138 | Clinical Decision Support Using a Partially Instantiated Probabilistic Graphical Model Optimized through Quantum Annealing: Proof-of-Concept of a Computational Method Using a Clinical Data Set | An approach is described to building a clinical decision support tool which leverages a partially instantiated graphical model optimized through quantum annealing. Potential advantages of such a strategy include the practical, and potentially real-time, use of multidimensional patient data to make a host of intuitively understandable predictions and recommendations in complex cases which are informed by a data-driven probabilistic model. Preliminary proof-of-concept of the general approach is demonstrated using a large well-established anonymized patient data set, revealing the predictive capability of a specific model. Ideas for future research are discussed. | health informatics |
10.1101/2020.12.11.20247551 | Modeling of aerosol transmission of airborne pathogens in ICU rooms of COVID-19 patients with acute respiratory failure | The COVID-19 pandemic has generated many concerns about cross-contamination risks, particularly in hospital settings and Intensive Care Units (ICU). Virus-laden aerosols produced by infected patients can propagate throughout ventilated rooms and put medical personnel entering them at risk. Experimental results found with a schlieren optical method have shown that the air flows generated by a cough and normal breathing were modified by the oxygenation technique used, especially when using High Flow Nasal Cannulae, increasing the shedding of potentially infectious airborne particles. This study also uses a 3D Computational Fluid Dynamics model based on a Lattice Boltzmann Method to simulate the air flows as well as the movement of numerous airborne particles produced by a patient's cough within an ICU room under negative pressure. The effects of different mitigation scenarios on the amount of aerosols potentially containing SARS-CoV-2 that are extracted through the ventilation system are investigated. Numerical results indicate that adequate bed orientation and additional air treatment unit positioning can increase by 40% the number of particles extracted and decrease by 25% the amount of particles deposited on surfaces 45 s after shedding. This approach could help lay the grounds for a more comprehensive way to tackle contamination risks in hospitals, as the model can be seen as a proof of concept and be adapted to any room configuration. | infectious diseases |
10.1101/2020.12.11.20247916 | Towards achieving a vaccine-derived herd immunity threshold for COVID-19 in the U.S. | A novel coronavirus emerged in December of 2019 (COVID-19), causing a pandemic that continues to inflict unprecedented public health and economic burden in all nooks and corners of the world. Although the control of COVID-19 has largely focused on the use of basic public health measures (primarily based on using non-pharmaceutical interventions, such as quarantine, isolation, social-distancing, face mask usage and community lockdowns), three safe and highly-effective vaccines (by AstraZeneca Inc., Moderna Inc. and Pfizer Inc., with protective efficacy of 70%, 94.1% and 95%, respectively) have been approved for use in humans since December 2020. We present a new mathematical model for assessing the population-level impact of the three currently-available anti-COVID vaccines that are administered in humans. The model stratifies the total population into two subgroups, based on whether or not they habitually wear face masks in public. The resulting multigroup model, which takes the form of a deterministic system of nonlinear differential equations, is fitted and parametrized using COVID-19 cumulative mortality data for the third wave of the COVID-19 pandemic in the U.S. Conditions for the asymptotic stability of the associated disease-free equilibrium, as well as an expression for the vaccine-derived herd immunity threshold, are rigorously derived. Numerical simulations of the model show that the size of the initial proportion of individuals in the masks-wearing group, together with positive change in behaviour from the non-masks-wearing group (as well as individuals in the masks-wearing group not abandoning their masks-wearing habit), plays a crucial role in effectively curtailing the COVID-19 pandemic in the U.S. This study further shows that the prospect of achieving herd immunity (required for COVID-19 elimination) in the U.S., using any of the three currently-available vaccines, is quite promising. In particular, while the use of the AstraZeneca vaccine will lead to herd immunity in the U.S. if at least 80% of the populace is vaccinated, such herd immunity can be achieved using either the Moderna or Pfizer vaccine if about 60% of the U.S. population is vaccinated. Furthermore, the prospect of eliminating the pandemic in the U.S. in 2021 is significantly enhanced if the vaccination program is complemented with non-pharmaceutical interventions at moderately increased levels of compliance (relative to their baseline compliance). The study further suggests that, while the waning of natural and vaccine-derived immunity against COVID-19 induces only a marginal increase in the burden and projected time-to-elimination of the pandemic, adding the impacts of the therapeutic benefits of the vaccines into the model resulted in a dramatic reduction in the burden and time-to-elimination of the pandemic. | epidemiology |
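The coverage figures quoted above are consistent with the textbook vaccine-derived herd immunity relationship, coverage = (1 − 1/R0)/efficacy. A minimal sketch, assuming an illustrative R0 of 2.5 rather than the value fitted by the authors' multigroup model:

```python
# Minimal sketch of the standard vaccine-derived herd immunity threshold,
# coverage = (1 - 1/R0) / efficacy. R0 = 2.5 is an assumed illustrative
# value, not the value fitted by the authors' multigroup model.
def herd_immunity_coverage(r0: float, efficacy: float) -> float:
    """Fraction of the population to vaccinate to reach herd immunity."""
    return (1.0 - 1.0 / r0) / efficacy

R0 = 2.5
for name, eff in [("AstraZeneca", 0.70), ("Moderna", 0.941), ("Pfizer", 0.95)]:
    print(f"{name}: {herd_immunity_coverage(R0, eff):.0%} coverage needed")
# ~86% for AstraZeneca and ~63-64% for Moderna/Pfizer -- broadly in line with
# the 80% and 60% figures above, which come from the richer fitted model.
```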
10.1101/2020.12.11.20245357 | Prevalence of Occupation Associated with Increased Mobility During COVID-19 Pandemic | ObjectiveIdentifying the geographic-level prevalence of occupations associated with mobility during a local stay-at-home pandemic mandate.
MethodsA spatio-temporal ecological framework was applied to determine census-tracts that had significantly higher rates of occupations likely to be deemed essential: food-service, business and finance, healthcare support, and maintenance. Real-time mobility data was used to determine the average daily percent of residents not leaving their place of residence. Spatial regression models were constructed for each occupation proportion among census-tracts within a large urban area.
ResultsAfter adjusting for demographics, results indicate that census-tracts with a higher proportion of food-service workers, healthcare support employees, and office administration staff are likely to have increased mobility.
ConclusionsIncreased mobility among communities is likely to undermine COVID-19 mitigation efforts. This increase in mobility was also found to be associated with specific demographics, suggesting it may be occurring among underserved and vulnerable populations. We find that the prevalence of essential employment is a candidate driver of inequity in COVID-19 morbidity and mortality.
Three-question Summary: - Employees and workers deemed essential during the COVID-19 pandemic are likely to endure additional risk of infection due to community exposure. While preliminary reports are still quantifying this risk, we set out to examine whether the prevalence of specific occupations could be used to evaluate overall community-level risk based on stay-at-home mandate adherence.
- Study results suggest that certain occupations are not only geo-spatially associated with movement outside the home but are also associated with demographic characteristics likely to contribute to inequity in COVID-19 morbidity.
- Nuanced inequities are often lost in larger data samples; being able to identify possible inequities from other sources, such as the prevalence of occupations among communities, remains an important and applicable alternative. | epidemiology |
10.1101/2020.12.11.20247353 | Association of University Reopening Policies with New Confirmed COVID-19 Cases in the United States | ImportanceReopening of universities in the U.S. has been controversial in the setting of the coronavirus disease 2019 (COVID-19) pandemic.
ObjectiveTo investigate (1) the association between new COVID-19 cases since September 1st and the number of students returning to campus in each county across the U.S. and (2) how different reopening policies at universities correlated with new COVID-19 cases.
DesignObservational cohort study using publicly available data sources. Multivariable regression models estimated the effects of both university reopening and different reopening policies.
Settings and ParticipantsPopulations in U.S. counties reporting new confirmed COVID-19 cases from August 1st to October 22nd.
Exposures(1) total enrollment of students under the in-person or hybrid policies per county population and (2) proportion of online and hybrid enrollment within each county.
Main Outcomes and MeasuresMean number of daily new confirmed COVID-19 cases per 10,000 county population from September 1st to October 22nd.
ResultsFor 2,893 counties included in the study, the mean number of daily confirmed cases per 10,000 county population rose from 1.51 from August 1st to August 31st to 1.98 from September 1st to October 22nd. The mean number of students returning to universities was 2.1% (95% CI, 1.8% to 2.3%) of the county population. The number of students returning to campus was positively associated (β = 2.006, P < 0.001) with new confirmed COVID-19 cases within the local county region where the institution resided. For 1,069 U.S. counties with universities, the mean proportion of online enrollment within each county was 40.1% (95% CI, 37.4% to 42.8%), with most students enrolled in in-person or hybrid mode. In comparison to holding classes in-person, reopening universities online (β = -0.329, P < 0.001) or in a hybrid mode (β = -0.272, P = 0.012) was negatively associated with new confirmed COVID-19 cases.
Conclusions and RelevanceA higher number of students returning to campus in U.S. counties was associated with an increase in new confirmed COVID-19 cases; reopening online or partially online was associated with slower spread of the virus, in comparison to in-person reopening.
Key Points - Question: Are students returning to universities and specific reopening policies associated with new confirmed coronavirus cases in the United States?
FindingsIn this cohort study of 2,893 U.S. counties, the number of students returning to campus was significantly associated with a higher number of new confirmed COVID-19 cases. In 1,069 U.S. counties with universities, online or hybrid reopening was significantly associated with a lower risk of new cases compared with in-person reopening.
MeaningAn increased risk of coronavirus infection was seen in surrounding regions after universities reopened last fall, and this effect was largest in those holding in-person classes. | health policy |
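The β coefficients above come from county-level multivariable regressions; a hedged sketch of an analogous specification (the file, column names, and controls are hypothetical, not the authors' code):

```python
# Illustrative county-level regression in the spirit of the study above: mean
# daily new cases per 10,000 regressed on the share of returning students,
# with demographic controls. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

counties = pd.read_csv("county_covid_fall2020.csv")  # hypothetical dataset

model = smf.ols(
    "daily_cases_per_10k ~ returning_students_share"
    " + median_age + pop_density + pct_uninsured",
    data=counties,
).fit()
print(model.params["returning_students_share"])  # analogous to the reported beta
```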
10.1101/2020.12.14.20248201 | Exploratory comparison of Healthcare costs and benefits of the UK's Covid-19 response with four European countries | BackgroundIn responding to covid-19, governments have tried to balance protecting health while minimising Gross Domestic Product (GDP) losses. We compare the health-related net benefit (HRNB) and GDP losses associated with the government responses of the UK, Ireland, Germany, Spain, and Sweden from a UK healthcare payer perspective.
MethodsWe compared observed cases, hospitalisations, and deaths under "mitigation" to modelled events under "no mitigation" to 20th July 2020. We thus calculated healthcare costs, quality-adjusted life years (QALYs), and HRNB at £20,000/QALY saved by each country. On a per-population (i.e. per capita) basis, we compared HRNB with forecast reductions in 2020 GDP growth (overall or compared to Sweden as a minimal-mitigation country) and qualitatively and quantitatively described government responses.
ResultsThe UK saved 3.17 (0.32-3.65) million QALYs, £33 (8-38) billion in healthcare costs, and £1,416 (220-1,637) HRNB per capita at £20,000/QALY. Per capita, this is comparable to the £1,455 GDP loss using Sweden as the comparator and offsets 46.1 (7.1-53.2)% of the total £3,075 GDP loss.
Germany, Spain, and Sweden had greater HRNB per capita. These countries also offset a greater percentage of total GDP losses per capita. Ireland fared worst on both measures. Countries with more mask wearing, testing, and population susceptibility had better outcomes. The highest-stringency responses did not appear to have the best outcomes.
ConclusionsOur exploratory analysis indicates that the benefits of government covid-19 responses may outweigh their economic costs. The extent to which HRNB offsets economic losses appears to relate to population characteristics, testing levels, and mask wearing, rather than response stringency.
Research in Context - Evidence before this study: Our research question was how the health-related net benefits and economic impacts of the UK response to the first wave of the covid-19 epidemic compared to other European countries. We searched PubMed, MedRxiv, and Arxiv for terms related to cost-effectiveness, covid-19, and the UK.
Two studies compared predicted lives saved to predicted gross domestic product (GDP) losses. One found that the lives saved by a lockdown would outweigh GDP losses, while another found a lockdown to cost £10 million per life saved. A later modelling study used quality-adjusted life years (QALYs), going beyond lives saved, and found the cost per QALY was below £50,000. A fourth, comparing observed to modelled deaths and hospitalisations, found the cost per QALY was at least £220,000, and thus that the UK response was not cost-effective. None of these were international comparisons. One international study found good health and economic outcomes to be correlated. Another found global trade reductions and voluntary behavioural changes to have a greater impact on economic growth than government measures. Neither considered cost-effectiveness. However, they suggest that comparison to GDP loss is naive, as this is the total loss, not only the loss due to government restrictions.
Added value of this studyWe compare the UK to Ireland, Germany, Spain, and Sweden on the health-related net benefits and economic impacts of government responses from a UK National Health Service (NHS) perspective. We describe the countries' response measures. We compared model predictions of outcomes under "no mitigation" to observed outcomes under "mitigation" up to July 20th 2020. We estimated the healthcare costs, QALYs, and health-related net benefit (HRNB) saved. We compared HRNB to GDP losses using Sweden as a "minimal mitigation" comparator and calculated the percentage of total GDP loss it offset.
We found the UK saved 3.17 (0.32-3.65) million QALYs, £33 (£8-38) billion in healthcare costs, and £1,416 (220-1,637) HRNB per capita at the NHS threshold of £20,000/QALY. This is comparable to the £1,455 GDP loss per capita using Sweden as the comparator and offsets 46.1% (7.1-53.2) of the total estimated £3,075 GDP loss per capita. We found that Germany, Spain, and probably Sweden had greater HRNB per capita and offset greater percentages of GDP loss per capita. Ireland fared worst on both measures. We found countries with more mask wearing, testing, and population susceptibility (e.g. older populations and more interpersonal contact) had better outcomes. The highest-stringency responses did not appear to have the best outcomes.
Implications of all the available evidenceWe add to growing evidence that the total economic impact of covid-19 exceeds the HRNB of the UK's response. However, using Sweden as the comparator and comparing across countries, we argue that GDP loss is not purely due to government restrictions and that the loss attributable to restrictions may be outweighed by HRNB. We evaluated the extent to which countries have offset GDP losses; these offsets appear to be higher in countries with more at-risk populations, higher testing, and higher mask wearing, rather than in those with the most stringent restrictions. | health economics |
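The headline per-capita HRNB above follows directly from the stated inputs: net benefit = QALYs saved × the £20,000/QALY threshold plus healthcare costs saved, divided by population. A minimal sketch, assuming a UK population of about 66.7 million (not a figure given in the abstract):

```python
# Minimal sketch of the health-related net benefit (HRNB) arithmetic:
# HRNB = QALYs saved * willingness-to-pay threshold + healthcare costs saved.
# The UK population of ~66.7 million is an assumed figure, not from the paper.
qalys_saved = 3.17e6            # central estimate from the abstract
healthcare_costs_saved = 33e9   # GBP, central estimate
threshold = 20_000              # GBP per QALY (NHS threshold)
uk_population = 66.7e6          # assumed mid-2020 population

hrnb_per_capita = (qalys_saved * threshold + healthcare_costs_saved) / uk_population
print(f"HRNB per capita: £{hrnb_per_capita:,.0f}")  # ~£1,446, close to the reported £1,416
```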
10.1101/2020.12.14.20247874 | Rapid, point-of-care molecular diagnostics with Cas13 | Rapid nucleic acid testing is a critical component of a robust infrastructure for increased disease surveillance. Here, we report a microfluidic platform for point-of-care, CRISPR-based molecular diagnostics. We first developed a nucleic acid test that pairs distinct mechanisms of DNA and RNA amplification optimized for high sensitivity and rapid kinetics, linked to Cas13 detection for specificity. We combined this workflow with an extraction-free sample lysis protocol using shelf-stable reagents that are widely available at low cost, and a multiplexed human gene control for calling negative test results. As a proof-of-concept, we demonstrate sensitivity down to 40 copies/μL of SARS-CoV-2 in unextracted saliva within 35 minutes, and validated the test on total RNA extracted from patient nasal swabs with a range of qPCR Ct values from 13 to 35. To enable sample-to-answer testing, we integrated this diagnostic reaction with a single-use, gravity-driven microfluidic cartridge followed by real-time fluorescent detection in a compact companion instrument. We envision that this approach, Diagnostics with Coronavirus Enzymatic Reporting (DISCoVER), will incentivize frequent, fast, and easy testing. | infectious diseases |
10.1101/2020.12.15.20248173 | The Covid-19 containment effects of public health measures - A spatial difference-in-differences approach | Since mid-March 2020 the Federal and state governments in Germany agreed on comprehensive public health measures to curb the spread of SARS-CoV-2 infections leading to the Covid-19 disease. We study the containment effects of these policy interventions on the progression of the pandemic in the first containment phase in spring 2020, before the easing of restrictions could take effect at the end of April. To exploit both the temporal and spatial dimension in the dissemination of the virus, we conduct a spatial panel data analysis for German NUTS-3 regions. Specifically, we employ a spatial difference-in-differences approach to identify the effects of six compound sets of public health measures. We find that contact restrictions and the closure of schools substantially contributed to flattening the infection curve. Additionally, a strong treatment effect of mandatory wearing of face masks is established for the few treated regions during this containment phase. No incremental effect is evidenced for the closure of establishments, such as museums, theaters, cinemas and parks, or the shutdown of shopping malls and other non-essential retail stores. These findings prove to be robust to changes in model specification. By contrast, the dampening effect of restaurant closure is sensitive to model variation. | epidemiology |
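A difference-in-differences design of this kind compares infection growth in regions before and after a measure takes effect against regions not (yet) treated; a hedged, non-spatial two-way fixed-effects sketch (the panel file and column names are hypothetical, and the authors' model additionally includes spatial interaction terms):

```python
# Hedged sketch of a two-way fixed-effects difference-in-differences
# regression; the authors' specification additionally includes spatial lag
# terms. The panel file and its columns (region, date, log_cases,
# treated_post) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("nuts3_daily_cases.csv")  # hypothetical NUTS-3 panel

did = smf.ols(
    "log_cases ~ treated_post + C(region) + C(date)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["region"]})
print(did.params["treated_post"])  # DiD estimate of the measure's effect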
10.1101/2020.12.14.20248176 | Genome-wide analysis in 756,646 individuals provides first genetic evidence that ACE2 expression influences COVID-19 risk and yields genetic risk scores predictive of severe disease | SARS-CoV-2 enters host cells by binding angiotensin-converting enzyme 2 (ACE2). Through a genome-wide association study, we show that a rare variant (MAF = 0.3%, odds ratio 0.60, P=4.5x10-13) that down-regulates ACE2 expression reduces risk of COVID-19 disease, providing human genetics support for the hypothesis that ACE2 levels influence COVID-19 risk. Further, we show that common genetic variants define a risk score that predicts severe disease among COVID-19 cases. | genetic and genomic medicine |
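A genetic risk score of the kind described above is, at its core, a weighted sum of per-variant risk-allele dosages with GWAS effect sizes as weights; a minimal sketch (the variants, weights, and dosages are placeholders, not the study's loci):

```python
# Minimal sketch of a common-variant genetic risk score: a weighted sum of
# per-variant risk-allele dosages (0, 1, or 2), weighted by GWAS effect
# sizes. The variants, weights, and dosages below are placeholders.
import numpy as np

log_odds_weights = np.array([0.12, -0.51, 0.08])  # hypothetical per-allele betas
dosages = np.array([                               # one row per individual
    [2, 0, 1],
    [1, 1, 0],
])
risk_scores = dosages @ log_odds_weights
print(risk_scores)  # higher score -> higher predicted risk of severe disease
```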
10.1101/2020.12.15.20248264 | Development of a Rapid Point-Of-Care Test that Measures Neutralizing Antibodies to SARS-CoV-2 | BackgroundAfter receiving a COVID-19 vaccine, most recipients want to know if they are protected from infection and for how long. Since neutralizing antibodies are a correlate of protection, we developed a lateral flow assay (LFA) that measures levels of neutralizing antibodies from a drop of blood. The LFA is based on the principle that neutralizing antibodies block binding of the receptor-binding domain (RBD) to angiotensin-converting enzyme 2 (ACE2).
MethodsThe ability of the LFA to correctly measure neutralization of sera, plasma or whole blood from patients with COVID-19 was assessed using SARS-CoV-2 microneutralization assays. We also determined if the LFA distinguished patients with seasonal respiratory viruses from patients with COVID-19. To demonstrate the usefulness of the LFA, we tested previously infected and non-infected COVID-19 vaccine recipients at baseline and after first and second vaccine doses.
ResultsThe LFA compared favorably with SARS-CoV-2 microneutralization assays, with an area under the ROC curve of 98%. Sera obtained from patients with seasonal coronaviruses did not show neutralizing activity in the LFA. After a single mRNA vaccine dose, 87% of previously infected individuals demonstrated high levels of neutralizing antibodies. However, among individuals who were not previously infected, only 24% demonstrated high levels of neutralizing antibodies after one vaccine dose. A second dose boosted neutralizing antibody levels just 8% higher in previously infected individuals, but over 63% higher in non-infected individuals.
ConclusionsA rapid, semi-quantitative, highly portable and inexpensive neutralizing antibody test might be useful for monitoring rise and fall in vaccine-induced neutralizing antibodies to COVID-19. | infectious diseases |
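The 98% figure above is an area under the ROC curve, scoring the semi-quantitative LFA readout against the microneutralization assay as the reference; a hedged sketch of that computation on synthetic stand-in data:

```python
# Hedged sketch of the ROC-AUC comparison above: a semi-quantitative LFA
# readout scored against a binary microneutralization reference. The data
# here are synthetic, not the study's measurements.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
neutralizing = rng.integers(0, 2, size=200)                 # reference assay result
lfa_signal = neutralizing * 1.5 + rng.normal(0, 0.5, 200)   # LFA band intensity

print(f"AUC: {roc_auc_score(neutralizing, lfa_signal):.2f}")
```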
10.1101/2020.12.15.20248254 | Risk factors, symptom reporting, healthcare-seeking behaviour and adherence to public health guidance: protocol for Virus Watch, a prospective community cohort study | IntroductionThe coronavirus disease 2019 (COVID-19) pandemic has caused significant global mortality and impacted lives around the world. Virus Watch aims to provide evidence on which public health approaches are most likely to be effective in reducing the transmission and impact of the virus, and will investigate community incidence, symptom profiles, and transmission of COVID-19 in relation to population movement and behaviours.
Methods and analysisVirus Watch is a household community cohort study of acute respiratory infections in England & Wales and will run from June 2020 to August 2021. The study aims to recruit 50,000 people, including 12,500 from minority ethnic backgrounds, for an online survey cohort and monthly antibody testing using home finger prick kits. Nested within this larger study will be a sub-cohort of 10,000 individuals, including 3,000 people from minority ethnic backgrounds. This cohort of 10,000 people will have full blood serology taken between October 2020 and January 2021 and repeat serology between May 2021 and August 2021. Participants will also post self-administered nasal swabs for PCR assays of SARS-CoV-2 and will follow one of three different PCR testing schedules based upon symptoms.
Ethics and disseminationThis study has been approved by the Hampstead NHS Health Research Authority Ethics Committee. Ethics approval number - 20/HRA/2320. We are monitoring participant queries and using these to refine methodology where necessary, and are providing summaries and policy briefings of our preliminary findings to inform public health action by working through our partnerships with our study advisory group, Public Health England, NHS and Government Scientific Advisory panels.
Strengths and limitations of this study: - Virus Watch is a large national household community cohort study of the occurrence of, and risk factors for, COVID-19 infection that aims to recruit 50,000 people, including 12,500 from minority ethnic backgrounds.
- Virus Watch is designed to estimate the incidence of PCR-confirmed COVID-19 in those with respiratory and non-respiratory presentations and the incidence of hospitalisation among PCR-confirmed COVID-19 cases.
- Virus Watch will measure the effectiveness and impact of recommended COVID-19 control measures, including testing, isolation, social distancing, and respiratory and hand hygiene measures, on the risk of respiratory infection.
- Only households with a lead householder able to speak English were able to take part in the study up until March 2021.
- Only households of up to six people were eligible for inclusion, and they were also required to have access to an internet connection. These restrictions will limit generalisability to large or multigenerational households and those without access to the internet. | infectious diseases |
10.1101/2020.12.14.20247577 | The long-term success of mandatory vaccination laws at implementing the first vaccination campaign in 19th century rural Finland | In high-income countries, childhood infections are on the rise, a phenomenon in part attributed to persistent hesitancy towards vaccines. To combat vaccine hesitancy, several countries recently made vaccinating children mandatory, but the effect of such vaccination laws on vaccination coverage remains debated and the long-term consequences are unknown. Here we quantified the consequences of vaccination laws on vaccination coverage by monitoring, over a period of 63 years, rural Finland's first vaccination campaign against the highly lethal childhood infection smallpox. We found that annual vaccination campaigns focussed on children up to 1 year old, but that their vaccination coverage was low and declined over time until the vaccination law came into force, which stopped the declining trend and was associated with an abrupt coverage increase of 20% to cover >80% of all children. Our results indicate that vaccination laws had a long-term beneficial effect on vaccination coverage and will help public health practitioners make informed decisions on how to act against vaccine hesitancy and optimise the impact of vaccination programmes. | epidemiology |
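The before/after pattern described above (a declining pre-law trend interrupted by an abrupt jump) is what a segmented, interrupted time-series regression estimates; a hedged sketch on synthetic coverage data (the law year and all coefficients are illustrative):

```python
# Hedged sketch of a segmented (interrupted time-series) regression: coverage
# declines before the law, then jumps and may change slope after it. The
# series, law year, and coefficients are synthetic.
import numpy as np
import statsmodels.api as sm

years = np.arange(63)
law_year = 40                                    # hypothetical year the law starts
post = (years >= law_year).astype(float)
rng = np.random.default_rng(1)
coverage = 55 - 0.4 * years + 20 * post + rng.normal(0, 2, size=63)

X = sm.add_constant(np.column_stack([years, post, post * (years - law_year)]))
fit = sm.OLS(coverage, X).fit()
print(fit.params)  # [baseline, pre-law slope, level jump at law, slope change]
```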
10.1101/2020.12.15.20248130 | Determinants of SARS-CoV-2 transmission to guide vaccination strategy in an urban area | BackgroundTransmission chains within small urban areas (accommodating ~30% of the European population) greatly contribute to case burden and economic impact during the ongoing COVID-19 pandemic, and should be a focus for preventive measures to achieve containment. Here, at very high spatio-temporal resolution, we analysed determinants of SARS-CoV-2 transmission in a European urban area, Basel-City (Switzerland). Methodology. We combined detailed epidemiological, intra-city mobility, and socioeconomic datasets with whole-genome sequencing during the first SARS-CoV-2 wave. For this, we succeeded in sequencing 44% of all reported cases from Basel-City and performed phylogenetic clustering and compartmental modelling based on the dominating viral variant (B.1-C15324T; 60% of cases) to identify drivers and patterns of transmission. Based on these results we simulated vaccination scenarios and the corresponding healthcare-system burden (intensive-care-unit occupancy). Principal Findings. Transmission was driven by socioeconomically weaker and highly mobile population groups with mostly cryptic transmissions, whereas among the more senior population transmission was clustered. Simulated vaccination scenarios assuming 60-90% transmission reduction and 70-90% reduction of severe cases showed that prioritizing mobile, socioeconomically weaker populations for vaccination would effectively reduce case numbers. However, long-term intensive-care-unit occupancy would also be effectively reduced if senior population groups were prioritized, provided there were no changes in testing and prevention strategies. Conclusions. Reducing SARS-CoV-2 transmission through vaccination strongly depends on the efficacy of the deployed vaccine. A combined strategy of protecting risk groups by extensive testing coupled with vaccination of the drivers of transmission (i.e. highly mobile groups) would be most effective at reducing the spread of SARS-CoV-2 within an urban area.
Author summaryWe examined SARS-CoV-2 transmission patterns within a European city (Basel, Switzerland) to infer drivers of transmission during the first wave in spring 2020. The combination of diverse data (serological, genomic, transportation, socioeconomic) allowed us to combine phylogenetic analysis with mathematical modelling on related cases that were mapped to a residential address. As a result, we could identify the population groups driving SARS-CoV-2 transmission and quantify their effect on the transmission dynamics. We found traceable transmission chains in wealthier or more senior population groups and cryptic transmissions in the mobile, young, or socioeconomically weaker population groups - the latter were identified as the transmission drivers of the first wave. Based on this insight, we simulated vaccination scenarios for various vaccine efficacies to reflect different approaches undertaken to handle the epidemic. We conclude that vaccination of the mobile, inherently younger population group would be most effective at containing subsequent waves. | public and global health |
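The scenario logic above (vaccination scales down effective transmission, and coverage of a group changes the epidemic's size) can be caricatured with a minimal SIR-style simulation; a sketch under illustrative parameter values, not the compartmental model fitted in the study:

```python
# Minimal SIR-style sketch of a vaccination scenario: a fraction of the
# population receives all-or-nothing, transmission-blocking protection and is
# removed from the susceptible pool. All parameter values are illustrative.
def total_infections(beta, gamma, vax_coverage, vax_efficacy, days=300):
    """Fraction of a unit population ever infected, Euler-stepped daily."""
    protected = vax_coverage * vax_efficacy
    s, i = 1.0 - protected - 1e-4, 1e-4   # seed a small initial infection
    cumulative = i
    for _ in range(days):
        new_inf = beta * s * i
        s -= new_inf
        i += new_inf - gamma * i
        cumulative += new_inf
    return cumulative

base = total_infections(beta=0.3, gamma=0.1, vax_coverage=0.0, vax_efficacy=0.0)
vax = total_infections(beta=0.3, gamma=0.1, vax_coverage=0.6, vax_efficacy=0.8)
print(f"attack rate: {base:.0%} without vaccination vs {vax:.0%} with 60% coverage")
```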