id | title | abstract | category
---|---|---|---|
10.1101/2021.01.16.21249923 | Development and validation of a multiple-choice question-based delirium care knowledge quiz for critical care nurses | Aims: To develop and psychometrically test a multiple-choice question (MCQ)-based quiz of delirium care knowledge for critical care nurses.
Design: Instrument development and psychometric evaluation study.
Methods: The development and validation process included two phases. Phase I focused on quiz development, conducted in the following steps: (1) generating an initial 20-item pool; (2) examining content validity and (3) face validity; (4) conducting pilot testing, with data collected from 217 critical care nurses via an online survey from 1 October to 7 November 2020; (5) performing item analysis and eliminating items based on the item difficulty and discrimination indices. The MCQ quiz was finalised through this development process. Phase II then addressed quiz validation, estimating internal consistency, split-half and test-retest reliability, and construct validity using parallel analysis with exploratory factor analysis (EFA).
Results: A final 16-item MCQ quiz emerged from the item analysis. The Kuder-Richardson Formula 20 coefficient for the overall quiz indicated good internal consistency (0.85), and the intraclass correlation coefficient over a 30-day interval indicated satisfactory stability (0.96). The EFA confirmed appropriate construct validity for the quiz; four factors explained 60.87% of the total variance.
Conclusion: This study developed the first MCQ quiz for delirium care knowledge; it is a reliable and valid tool that can be used to assess the level of delirium care knowledge.
Impact: This study offers an evidence-based quiz for future research and education in delirium care, with significant implications for knowledge testing using MCQs in clinical practice. | nursing |
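The item-analysis and reliability statistics described in this abstract can be illustrated with a short sketch (toy responses, not the study's data): item difficulty is the proportion of examinees answering each item correctly, and the Kuder-Richardson Formula 20 (KR-20) summarizes internal consistency for dichotomously scored items.

```python
# Illustrative sketch, not the authors' code: item difficulty and KR-20
# computed from 0/1-scored quiz responses (rows = examinees, cols = items).

def item_difficulty(responses):
    """Proportion of examinees answering each item correctly."""
    n = len(responses)
    k = len(responses[0])
    return [sum(r[i] for r in responses) / n for i in range(k)]

def kr20(responses):
    """Kuder-Richardson Formula 20 for dichotomous items."""
    n = len(responses)
    k = len(responses[0])
    p = item_difficulty(responses)
    pq = sum(pi * (1 - pi) for pi in p)          # sum of item variances
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n  # population variance
    return (k / (k - 1)) * (1 - pq / var_total)

# Toy data: 5 examinees x 4 items (invented for illustration)
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print([round(p, 2) for p in item_difficulty(data)])  # per-item difficulty
print(round(kr20(data), 3))
```

Items with difficulty near 0 or 1, or with poor discrimination, would be candidates for elimination in the item-analysis step the abstract describes.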
10.1101/2021.01.16.21249861 | The management of Moderate Acute Malnutrition (MAM) in children aged 6-59 months: A systematic review and meta-analysis | Background: Malnutrition is a leading cause of morbidity and mortality in children aged under five years, especially in low- and middle-income countries (LMICs). Although severe acute malnutrition (SAM) is considered the most serious form of malnutrition, moderate acute malnutrition (MAM) affects greater numbers globally and, unlike SAM, its guidelines lack a robust evidence base. This systematic review and meta-analysis assessed the evidence for lipid-based nutrient supplements (LNS), fortified blended flours (FBF) and nutrition counselling in the treatment of MAM.
Methods: Five databases were systematically searched for studies conducted in LMICs that compared the effectiveness of food-based products versus any comparator group in promoting recovery from MAM in children aged 6-59 months. Where appropriate, pooled estimates of effect were obtained using random-effects meta-analyses.
Results: A total of thirteen trials were identified for inclusion. All used active controls rather than standard care, which is often minimal in most settings. There was evidence of an increased probability of recovery (as assessed by attaining normal weight-for-height and/or MUAC) among children treated with LNS compared to children treated with FBF (RR 1·05, 95% CI 1·01-1·09, p=0·009). Treatment with an LNS was also associated with a lower risk of persistent MAM at the end of treatment compared with an FBF (RR 0·82, 95% CI 0·71-0·95, p=0·007).
Conclusion: Based on a relatively small number of studies, mainly from Africa, LNS are superior to FBF in improving anthropometric recovery from MAM. The true benefit of MAM treatment may be underestimated because all studies used active controls rather than usual care, which is minimal.
More high-quality evidence is needed to evaluate nutrition education/counselling alone as a MAM intervention. Studies should also assess a wider range of outcomes, including body composition, morbidity and development, not weight gain alone. | nutrition |
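The pooling step behind results like "RR 1·05, 95% CI 1·01-1·09" can be sketched as a DerSimonian-Laird random-effects meta-analysis on the log risk-ratio scale. The trial inputs below are invented, not the review's data.

```python
# Hypothetical sketch of random-effects pooling (DerSimonian-Laird) of
# log risk ratios and their standard errors from individual trials.
import math

def pool_random_effects(log_rr, se):
    """Return pooled log-RR and its SE under a DerSimonian-Laird model."""
    w = [1 / s**2 for s in se]                              # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))  # Cochran's Q
    df = len(log_rr) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                           # between-study variance
    w_star = [1 / (s**2 + tau2) for s in se]
    pooled = sum(wi * y for wi, y in zip(w_star, log_rr)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    return pooled, se_pooled

# Toy trial summaries (log risk ratios and SEs)
log_rr = [math.log(1.04), math.log(1.10), math.log(1.02)]
se = [0.02, 0.05, 0.03]
est, se_p = pool_random_effects(log_rr, se)
lo, hi = math.exp(est - 1.96 * se_p), math.exp(est + 1.96 * se_p)
print(f"pooled RR {math.exp(est):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

The pooled estimate and CI are exponentiated back to the risk-ratio scale, which is how the figures in the abstract would be reported.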
10.1101/2021.01.19.21249177 | Impact of Retinopathy and Systemic Vascular Comorbidities on All-Cause Mortality | Aims/hypothesis: To investigate the joint effects of retinopathy and systemic vascular comorbidities on mortality.
Methods: This study included 5703 participants (≥40 years old) from the 2005-2008 National Health and Nutrition Examination Survey. The Early Treatment Diabetic Retinopathy Study grading scale was used to evaluate retinopathy status. Systemic vascular comorbidities included diabetes mellitus (DM), high blood pressure (HBP), chronic kidney disease (CKD) and cardiovascular disease (CVD). Time to death was calculated as the time from baseline to either the date of death or censoring (December 31st, 2015), whichever came first. Risks of mortality were estimated using Cox proportional hazards models.
Results: After adjusting for confounders, the presence of retinopathy predicted higher all-cause mortality (hazard ratio [HR], 1.40; 95% confidence interval [CI], 1.09-1.81). All-cause mortality among participants with both retinopathy and a systemic vascular comorbidity, including DM (HR, 1.63; 95% CI, 1.06-2.50), HBP (HR, 1.46; 95% CI, 1.03-2.08), CKD (HR, 1.71; 95% CI, 1.24-2.35) and CVD (HR, 1.88; 95% CI, 1.19-2.96), was significantly higher than among those without either condition.
Conclusions/interpretation: In this prospective study, individuals with retinopathy had increased all-cause mortality. The joint effects of retinopathy and major systemic vascular comorbidities increased all-cause mortality further, suggesting that more extensive vascular risk factor assessment and management are needed to detect the burden of vascular pathologies and improve long-term survival in individuals with retinopathy.
Research in context. What is already known about this subject? Retinopathy has been recognized as an independent risk factor for mortality.
What is the key question? What are the joint effects of retinopathy and systemic vascular comorbidities (including diabetes mellitus, hypertension, chronic kidney disease and cardiovascular disease) on mortality?
What are the new findings? Consistent evidence of an increased risk of mortality among individuals with retinopathy was noted in a large sample of middle-aged and older adults.
The co-occurrence of retinopathy and systemic vascular conditions (diabetes mellitus, hypertension, chronic kidney disease and cardiovascular disease) further increased all-cause mortality, independent of other covariates.
How might this impact on clinical practice in the foreseeable future? Individuals with retinopathy may benefit from a comprehensive vascular assessment.
Intensive vascular risk reduction is needed in the management of patients with retinopathy and micro- or macrovascular disorders.
The findings highlight the importance of retinopathy screening using retinal imaging for identifying individuals at high risk of mortality, particularly among those with systemic vascular comorbidities. | ophthalmology |
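The hazard ratios and confidence intervals quoted above come from Cox model coefficients; the arithmetic linking a log hazard ratio and its standard error to the reported HR and 95% CI is simple to show. The coefficient below is made up for illustration.

```python
# Illustrative arithmetic only: a Cox model reports a log hazard ratio (beta)
# and its standard error; the HR and 95% CI quoted in abstracts are recovered
# as exp(beta) and exp(beta +/- 1.96*SE). Numbers are hypothetical.
import math

def hr_ci(beta, se, z=1.96):
    """Hazard ratio and Wald 95% CI from a log-scale coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

hr, lo, hi = hr_ci(beta=0.336, se=0.13)  # hypothetical coefficient and SE
print(f"HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```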
10.1101/2021.01.19.21249483 | Leveraging fine-mapping and non-European training data to improve trans-ethnic polygenic risk scores | Polygenic risk scores (PRS) based on European training data suffer reduced accuracy in non-European target populations, exacerbating health disparities. This loss of accuracy predominantly stems from LD differences, MAF differences (including population-specific SNPs), and/or causal effect size differences. PRS based on training data from the non-European target population do not suffer from these limitations, but are currently limited by much smaller training sample sizes. Here, we propose PolyPred, a method that improves cross-population polygenic prediction by combining two complementary predictors: a new predictor that leverages functionally informed fine-mapping to estimate causal effects (instead of tagging effects), addressing LD differences; and BOLT-LMM, a published predictor. In the special case where a large training sample is available in the non-European target population (or a closely related population), we propose PolyPred+, which further incorporates the non-European training data, addressing MAF differences and causal effect size differences. PolyPred and PolyPred+ require individual-level training data (for their BOLT-LMM component), but we also propose analogous methods that replace the BOLT-LMM component with summary statistic-based components if only summary statistics are available. We applied PolyPred to 49 diseases and complex traits in 4 UK Biobank populations using UK Biobank British training data (average N=325K), and observed statistically significant average relative improvements in prediction accuracy vs. BOLT-LMM ranging from +7% in South Asians to +32% in Africans (and vs. LD-pruning + P-value thresholding (P+T) ranging from +77% to +164%), consistent with simulations. 
We applied PolyPred+ to 23 diseases and complex traits in UK Biobank East Asians using both UK Biobank British (average N=325K) and Biobank Japan (average N=124K) training data, and observed statistically significant average relative improvements in prediction accuracy of +24% vs. BOLT-LMM and +12% vs. PolyPred. The summary statistic-based analogues of PolyPred and PolyPred+ attained similar improvements. In conclusion, PolyPred and PolyPred+ improve cross-population polygenic prediction accuracy, ameliorating health disparities. | genetic and genomic medicine |
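The core idea of combining two complementary predictors can be sketched in a few lines (this is a minimal illustration of the linear-combination step, not the PolyPred software): per-individual scores from the two component predictors are mixed with weights fitted by least squares in a small tuning set from the target population.

```python
# Minimal sketch (assumed simplification): fit linear mixing weights for two
# per-individual predictors on a tuning set, via 2x2 normal equations.

def fit_mixing_weights(pred1, pred2, y):
    """Least-squares weights w1, w2 for y ~ w1*pred1 + w2*pred2 (no intercept)."""
    s11 = sum(a * a for a in pred1)
    s22 = sum(b * b for b in pred2)
    s12 = sum(a * b for a, b in zip(pred1, pred2))
    t1 = sum(a * yi for a, yi in zip(pred1, y))
    t2 = sum(b * yi for b, yi in zip(pred2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

# Toy tuning data: phenotype generated as 0.7*pred1 + 0.3*pred2 (no noise),
# so the fitted weights should recover 0.7 and 0.3.
pred1 = [0.1, 0.4, -0.2, 0.9, 0.3]
pred2 = [0.5, -0.1, 0.2, 0.4, -0.3]
y = [0.7 * a + 0.3 * b for a, b in zip(pred1, pred2)]
w1, w2 = fit_mixing_weights(pred1, pred2, y)
combined = [w1 * a + w2 * b for a, b in zip(pred1, pred2)]
print(round(w1, 3), round(w2, 3))
```

In practice the weights would be estimated with noise and an intercept, and the component predictors themselves (fine-mapping-based and BOLT-LMM) are far more involved; this only illustrates why a small target-population tuning set suffices for the mixing step.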
10.1101/2021.01.19.21249492 | RPE65-related retinal dystrophy: mutational and phenotypic spectrum in 45 affected patients. | Background: Biallelic pathogenic RPE65 variants are related to a spectrum of clinically overlapping inherited retinal dystrophies (IRD). Most affected individuals show a severe progression, with 50% of patients legally blind by 20 years of age. Better knowledge of the mutational spectrum and the phenotype-genotype correlation in RPE65-related IRD is needed.
Methods: Forty-five affected subjects from 27 unrelated families with a clinical diagnosis of RPE65-related IRD were included. Clinical evaluation consisted of self-reported ophthalmological history and objective ophthalmological examination. Patients' genotypes were classified according to variant class (truncating or missense) or to variant location in different protein domains. The main phenotypic outcome was age at onset (AAO) of symptomatic disease, and a Kaplan-Meier analysis of disease-symptom event-free survival was performed.
Results: Twenty-nine different RPE65 variants were identified in our cohort, 7 of them novel. The most frequent variants were p.(Ile98Hisfs*26), p.(Pro111Ser) and p.(Gly187Glu), accounting for 24% of the detected alleles. Patients carrying two missense alleles showed a later disease onset than those with 1 or 2 truncating variants (log-rank test p<0.05). While 60% of patients carrying a missense/missense genotype presented symptoms at or before the first year of life, almost all patients with at least 1 truncating allele (91%) had an AAO ≤1 year (p<0.05).
Conclusion: Our findings suggest an association between the type of RPE65 variant carried and the AAO. Thus, our results provide useful data on RPE65-associated IRD phenotypes, which may help improve the clinical and therapeutic management of these patients. | genetic and genomic medicine |
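The event-free survival analysis mentioned above can be illustrated with a from-scratch Kaplan-Meier estimator (an assumed implementation on toy data, not the authors' analysis), where an event marks symptom onset and a 0 marks censoring.

```python
# Sketch of the Kaplan-Meier product-limit estimator on (time, event) pairs;
# event=1 is symptom onset, event=0 is censoring. Data are invented.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        m = sum(1 for tt, e in data if tt == t)             # all leaving risk set at t
        if d > 0:
            s *= 1 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= m
        i += m
    return curve

times = [1, 1, 2, 3, 3, 4]    # toy follow-up times (years)
events = [1, 0, 1, 1, 0, 1]   # 1 = onset, 0 = censored
curve = kaplan_meier(times, events)
print(curve)
```

A log-rank test, as cited in the abstract, would then compare two such curves (e.g. missense/missense vs. genotypes with a truncating allele).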
10.1101/2021.01.17.21249972 | Validating quantitative PCR assays for cell-free DNA detection without DNA extraction: Exercise-induced kinetics in systemic lupus erythematosus patients | Circulating cell-free DNA (cfDNA) has been investigated as a screening tool for many diseases. To avoid expensive and time-consuming DNA isolation, direct quantification PCR assays can be established. However, rigorous validation is required to provide reliable data in clinical and non-clinical contexts. Considering International Organization for Standardization as well as bioanalytical method validation guidelines, we provide a comprehensive procedure for validating assays for cfDNA quantification from unpurified blood plasma. Assays of 90 and 222 bp were validated to study the kinetics of cfDNA after exercise in patients with systemic lupus erythematosus (SLE). The assays showed an ultra-low limit of quantification (LOQ) of 0.47 and 0.69 ng/ml, repeatability ≤11.6% (95% CI: 8.1-20.3), and intermediate precision ≤12.1% (95% CI: 9.2-17.7). Incurred sample reanalysis confirmed the precision of the procedure. Additional consideration of pre-analytical factors shows that centrifugation speed and temperature do not change cfDNA concentrations. In SLE patients, cfDNA increases ~2-fold after all-out walking exercise, normalizing after 60 min of rest. The established assays allow reliable and cost-efficient quantification of cfDNA in minute amounts of plasma in the clinical setting and can be used as a standard to control pre-analytical factors, including cfDNA losses during purification. | genetic and genomic medicine |
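The quantification arithmetic behind such qPCR assays can be sketched as a standard-curve read-back: a dilution series gives a fitted line Cq = slope·log10(conc) + intercept, and unknown samples are read off that line. The slope, intercept, and Cq below are hypothetical, not the study's values.

```python
# Hedged sketch of qPCR standard-curve quantification; all numbers invented.
import math

def quantify(cq, slope, intercept):
    """Concentration (ng/ml) from a quantification cycle via the standard curve."""
    return 10 ** ((cq - intercept) / slope)

def pcr_efficiency(slope):
    """Amplification efficiency implied by the curve slope (1.0 = 100%)."""
    return 10 ** (-1 / slope) - 1

# Hypothetical curve: slope -3.32 (about 100% efficiency), intercept 21.0
slope, intercept = -3.32, 21.0
conc = quantify(cq=24.32, slope=slope, intercept=intercept)
print(round(conc, 3), "ng/ml; efficiency", round(pcr_efficiency(slope), 3))
```

A measured concentration would then be compared against the validated LOQ before being reported; values below it are not quantifiable with the stated precision.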
10.1101/2021.01.18.21250050 | Analysis of communities of countries with similar dynamics of the COVID-19 pandemic evolution | This work addresses the spread of the coronavirus through a non-parametric approach, with the aim of identifying communities of countries based on how similarly the disease has evolved in them. The analysis focuses on the number of daily new COVID-19 cases per ten thousand people during a period covering at least 250 days after confirmation of the tenth case. Dynamic analysis is performed by constructing Minimum Spanning Trees (MST) and identifying groups of similar contagion evolution in 95 time windows of 150-day amplitude that move one day at a time. The number of times countries belonged to a similar-performance group across the constructed time windows was the intensity measure considered. Group composition is not stable, indicating that COVID-19 evolution needs to be treated as a dynamic problem in the context of complex systems. Three communities were identified by applying the Louvain algorithm. Analysis of the identified communities according to each country's socioeconomic characteristics and disease-related variables sheds light on whether there is any suggested course of action. Even though strong testing and contact-tracing policies may be associated with more stable disease dynamics, the results indicate that communities are composed of countries with diverse characteristics. The best option to counteract the harmful effects of a pandemic may be having strong health systems in place, with contingent capacity to deal with unforeseen events and available resources capable of rapid expansion. | health economics |
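The MST construction step in such analyses can be sketched with Kruskal's algorithm over a pairwise distance matrix between countries' case curves (the distances below are invented; the Louvain community step is omitted).

```python
# Sketch of MST construction (Kruskal's algorithm with union-find) over a
# symmetric distance matrix; illustrative stand-in for the paper's pipeline.

def mst_edges(dist):
    """Return minimum-spanning-tree edges (i, j, d) for a symmetric matrix."""
    n = len(dist)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = sorted((dist[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    tree = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:           # accept edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j, d))
    return tree

# 4 hypothetical "countries" with pairwise case-curve distances
dist = [
    [0.0, 0.2, 0.9, 0.8],
    [0.2, 0.0, 0.7, 0.6],
    [0.9, 0.7, 0.0, 0.1],
    [0.8, 0.6, 0.1, 0.0],
]
tree = mst_edges(dist)
print(tree)
```

Repeating this over 95 sliding windows and counting how often country pairs fall in the same MST neighborhood yields the intensity measure the abstract describes.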
10.1101/2021.01.19.21250076 | Enhancing the Human Health Status Prediction: the ATHLOS Project | Preventive healthcare is a crucial pillar of health, as it contributes to staying healthy and receiving immediate treatment when needed. Mining knowledge from longitudinal studies has the potential to significantly improve preventive healthcare. Unfortunately, data originating from such studies are characterized by high complexity, huge volume and a plethora of missing values. Machine learning, data mining and data imputation models are utilized to address these challenges. To this end, we focus on developing a complete methodology for the ATHLOS (Ageing Trajectories of Health: Longitudinal Opportunities and Synergies) Project, funded by the European Union's Horizon 2020 Research and Innovation Program, which aims to achieve a better interpretation of the impact of aging on health. The inherent complexity of the provided dataset lies in the fact that the project includes 15 independent European and international longitudinal studies of aging. In this work, we focus in particular on the HealthStatus (HS) score, an index that estimates human health status, and examine the effect of various data imputation models on the predictive power of classification and regression models. Our results are promising, indicating the critical importance of data imputation in enhancing preventive medicine's crucial role. | health informatics |
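As a toy illustration (not the ATHLOS pipeline), the simplest imputation baseline such a comparison would include is column-mean imputation of missing values in a feature matrix:

```python
# Toy sketch: replace missing entries (None) with the column mean of the
# observed values. Real pipelines would compare this against richer models.

def mean_impute(rows):
    """Replace None with the column mean of the observed values."""
    cols = list(zip(*rows))
    means = []
    for col in cols:
        observed = [v for v in col if v is not None]
        means.append(sum(observed) / len(observed))
    return [
        [means[j] if v is None else v for j, v in enumerate(row)]
        for row in rows
    ]

data = [
    [1.0, None, 3.0],
    [2.0, 4.0, None],
    [None, 6.0, 9.0],
]
print(mean_impute(data))
```

Downstream classifiers or regressors trained on the imputed matrix can then be scored to quantify how the imputation choice affects predictive power, which is the comparison the abstract describes.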
10.1101/2021.01.18.21250034 | State-of-the-Art Risk Models for Diabetes, Hypertension, Visual Diminution, and COVID-19 Severity in Mexico | BACKGROUND: Diabetes and hypertension are among the top public health priorities, particularly in low- and middle-income countries where their health and socioeconomic impact is exacerbated by the quality and accessibility of health care. Moreover, their connection with severe or deadly COVID-19 illness has further increased their societal relevance. Tools for early detection of these chronic diseases enable interventions to prevent high-impact complications, such as loss of sight and kidney failure. Similarly, prognostic tools for COVID-19 help stratify the population to prioritize protection and vaccination of high-risk groups, optimize medical resources and tests, and raise public awareness.
METHODS: We developed and validated state-of-the-art risk models for the presence of undiagnosed diabetes, hypertension, visual complications associated with diabetes and hypertension, and the risk of severe COVID-19 illness (if infected). The models were estimated using modern methods from the field of statistical learning (e.g., gradient boosting trees) and were trained on publicly available data containing health and socioeconomic information representative of the Mexican population. Lastly, we assembled a short integrated questionnaire and deployed a free online tool to broaden access to risk assessment at scale.
RESULTS: Our results show substantial improvements in accuracy and algorithmic equity (balance of accuracy across population subgroups) compared to established benchmarks. In particular, the models: i) reached state-of-the-art sensitivity and specificity rates of 90% and 56% (0.83 AUC) for diabetes, 80% and 64% (0.79 AUC) for hypertension, 90% and 56% (0.84 AUC) for visual diminution as a complication, and 90% and 60% (0.84 AUC) for development of severe COVID-19 disease; and ii) achieved substantially higher equity in sensitivity across gender, indigenous/non-indigenous, and regional populations. In addition, the most relevant features used by the models were in line with risk factors commonly identified in previous studies. Finally, the online platform was deployed and made accessible to the public on a massive scale.
CONCLUSIONS: The use of large databases representative of the Mexican population, coupled with modern statistical learning methods, allowed the development of risk models with state-of-the-art accuracy and equity for two of the most relevant chronic diseases, their eye complications, and COVID-19 severity. These tools can have a meaningful impact on democratizing early detection, enabling large-scale preventive strategies in low-resource health systems, increasing public awareness, and ultimately raising social well-being. | health informatics |
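The headline metrics above (sensitivity, specificity, AUC) are simple to compute from model scores; this toy sketch (invented scores, not the study's models) shows the arithmetic, with AUC obtained via the rank (Mann-Whitney) identity.

```python
# Illustrative metric arithmetic on toy predictions.

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity at a score threshold (label 1 = positive)."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """AUC as the probability a positive outranks a negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(sens_spec(scores, labels, 0.5), auc(scores, labels))
```

Reporting a sensitivity/specificity pair like "90% and 56%" amounts to picking one threshold on this curve; equity audits repeat the same computation within each population subgroup.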
10.1101/2021.01.18.21250047 | Impact of Public Health Education Program on the Novel Coronavirus Outbreak in the United States | The coronavirus outbreak in the United States continues to pose a serious threat to human lives. Public health measures to slow down the spread of the virus involve using a face mask, social distancing, and frequent hand washing. Since the beginning of the pandemic, there has been a global campaign on the use of non-pharmaceutical interventions (NPIs) to curtail the spread of the virus. However, the numbers of cases, deaths, and hospitalizations continue to rise globally, including in the United States. We developed a mathematical model to assess the impact of a public health education program on the coronavirus outbreak in the US. Our simulation showed the prospect of an effective public health education program in reducing both the cumulative and daily mortality of the novel coronavirus. Finally, our results suggest the need to adhere to public health measures, as loss of willingness to comply would increase the cumulative and daily mortality in the US. | health policy |
10.1101/2021.01.15.20248217 | Hidden Parameters impacting resurgence of SARS-CoV-2 Pandemic | The spread of the COVID-19 virus has had an enormous impact on the world's health and socioeconomic system. While lockdowns, which severely limit the movement of the population, were implemented in March 2020 and again recently, the psychological and economic costs are severe. Removal of these restrictions occurred with varying degrees of success.
To study the resurgence of the virus in some communities, we consider an epidemiological model, SIR-SD-L, that incorporates the introduction of new population due to the removal of lockdown, and we identify parameters that impact the spread of the virus. This compartmental model of the epidemic incorporates a social-distance metric based on the progression of infections; it models the dynamic propensity of infection spread based on the current infections relative to the susceptible population. The model is validated using data on the growth of infections, hospitalizations and deaths across 24 counties in multiple US states, together with a categorization of the lockdown-removal policies after the first lockdown. Model parameters, which include a compartment for the isolated population, are used to determine the rate at which the susceptible population increases so as to fit the rate of infections. Along with social distancing mandates, we identify active infections and the susceptible population as important factors in the resurgence of infections. We measure the efficacy of the lockdown-removal policy via a ratio, PIR, which evaluates to less than 1 for counties where social distancing measures were mandated and complete re-opening of closed spaces such as bars and restaurants was delayed. For other counties this ratio is greater than 1.
We also studied infection growth in the 24 US counties with respect to a release policy derived from CDC guidelines and compared it against strategies that delay the removal of lockdown.
Our results illustrate that guidelines for deciding when lockdown rules are to be relaxed should consider the current state of the infectious population and the remaining susceptible population, hidden parameters that are deducible from models such as SIR-SD-L, and should not limit policy considerations to the rate of new infections alone. This is especially true for counties where the growth rate of the virus is initially slow and misleading. Emphasis on social distancing is critical. | health policy |
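The compartmental backbone of models like SIR-SD-L can be illustrated with a bare-bones SIR integration (forward Euler). This is a simplified stand-in: it omits the social-distance term, the lockdown-removal population influx, and the isolated-population compartment, and its parameters are invented.

```python
# Minimal SIR sketch (forward Euler); beta, gamma, and population are invented.

def sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Integrate dS/dt=-beta*S*I/N, dI/dt=beta*S*I/N-gamma*I, dR/dt=gamma*I."""
    s, i, r = s0, i0, r0
    n = s0 + i0 + r0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # new infections this step
        new_rec = gamma * i * dt          # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# Hypothetical county of 10,000 with 10 initial cases, R0 = beta/gamma = 3
s, i, r = sir(beta=0.3, gamma=0.1, s0=9990, i0=10, r0=0, days=160)
print(round(s), round(i), round(r))
```

The "hidden parameters" the abstract emphasizes correspond to the state variables here: the current infectious count `i` and remaining susceptibles `s`, which jointly (not the new-infection rate alone) determine whether relaxing restrictions triggers a resurgence.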
10.1101/2021.01.17.21249970 | Optimizing SARS-CoV-2 vaccination strategies in France: Results from a stochastic agent-based model | The COVID-19 pandemic is a major global societal, economic and health threat. The availability of COVID-19 vaccines has raised hopes for a decline in the pandemic. We built upon a stochastic agent-based microsimulation model of the COVID-19 epidemic in France. We examined the potential impact of different vaccination strategies, defined according to the age, medical conditions, and expected vaccination acceptance of the target non-immunized adult population, on disease cumulative incidence, mortality, and number of hospital admissions. Specifically, we examined whether these vaccination strategies would allow lifting all non-pharmacological interventions (NPIs), based on a sufficiently low cumulative mortality and number of hospital admissions. While vaccinating the full adult non-immunized population, if performed immediately, would be highly effective in reducing incidence, mortality and hospital-bed occupancy, and would allow discontinuing all NPIs, this strategy would require a large number of vaccine doses. Vaccinating only adults at higher risk for severe SARS-CoV-2 infection, i.e. those aged over 65 years or with medical conditions, would be insufficient to lift NPIs. Immediately vaccinating only adults aged over 45 years, or only adults aged over 55 years with mandatory vaccination of those aged over 65 years, would enable lifting all NPIs with a substantially lower number of vaccine doses, particularly with the latter vaccination strategy. Benefits of these strategies would be markedly reduced if the vaccination was delayed, was less effective than expected on virus transmission or in preventing COVID-19 among older adults, or was not widely accepted. | health policy |
10.1101/2021.01.18.21250069 | An Evaluation of Methods for Monitoring Annual Quality Measures by Month to Predict Year-End Values | BackgroundAn increasing number of healthcare quality measures are designed for annual reporting. These measures require an entire year of data to accurately report the percentage of patients who met the measure. Annual measures give providers latitude to prioritize clinical workload and patient needs; however, they do not provide a direct means to monitor performance throughout the reporting year. Although there are many possible methods for measuring annual measures at finer-grained timescales, our applied work showed that the most obvious methods could give a misleading and inaccurate view of progress throughout the year. Neither the definitions of the annual measures, nor the research literature, provided any guidance on the best methods for interim monitoring of annual measures.
ObjectiveOur objective was to evaluate four different methods for monitoring annually reported quality measures monthly to best predict year-end performance throughout the reporting year.
MethodsWe developed four methods for monitoring annual measures by month: 1) Monthly Proportion: The proportion of patients with one or more encounters in the month who still needed to meet the measure at their first encounter of the month and met the measure by the end of the month; (2) Monthly Lookback Proportion: The proportion of patients seen in the month who met the measure by the end of the month, regardless of whether it was met in that month or previously in the reporting year; (3) Rolling 12 Month: The annual measure reported as if each month was the twelfth month of a twelve-month reporting period; and (4) YTD (Year-to-Date) Cumulative: The proportion of patients with one or more visits from the start of the reporting year through the month who satisfy the measure. We applied each method to two annual dental quality measures using data from two reporting years, and four different dental sites. We used mean squared error (MSE) to evaluate year-end predictive performance.
ResultsMethod 3 (Rolling 12 Month) had the lowest MSE in 11 out of 16 cases (2 measures X 2 years X 4 sites) and lowest total MSE (262.39) across all 16 cases. In 5 of the 16 cases, YTD Cumulative had the lowest MSE.
ConclusionsThe Rolling 12 Month method was best for predicting the year-end value across both measures and all four sites. | health systems and quality improvement |
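The YTD Cumulative method (method 4) and the MSE comparison against the year-end value can be sketched directly from the definitions above. The monthly counts below are hypothetical, and the Rolling 12 Month method is omitted because it requires the prior year's data.

```python
# Toy sketch of the YTD Cumulative interim-monitoring method and the MSE
# criterion described above. All monthly counts are hypothetical.

def ytd_cumulative(monthly_met, monthly_seen, month):
    """Percent of patients seen from the start of the reporting year
    through `month` (1-indexed) who met the measure."""
    met = sum(monthly_met[:month])
    seen = sum(monthly_seen[:month])
    return 100.0 * met / seen

def mse(predictions, year_end):
    """Mean squared error of the monthly predictions vs. the true value."""
    return sum((p - year_end) ** 2 for p in predictions) / len(predictions)

# Hypothetical counts for one site: patients seen / meeting the measure.
seen = [50, 60, 55, 70, 65, 60, 58, 62, 66, 59, 61, 64]
met  = [30, 40, 38, 50, 48, 45, 44, 48, 52, 47, 50, 54]

year_end = ytd_cumulative(met, seen, 12)  # true annual value
monthly_preds = [ytd_cumulative(met, seen, m) for m in range(1, 13)]
error = mse(monthly_preds, year_end)
```

By construction the December YTD value equals the annual measure, so the method converges to the truth; the study's question is how well each method tracks it in earlier months.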
10.1101/2021.01.11.21249636 | Issues of Random Sampling with Rapid Antigen Tests for COVID-19 Diagnosis: A Special Reference to Kalmunai RDHS Division | Random sampling from the community and performing rapid antigen tests has become a debated issue during the COVID-19 pandemic. This paper analyzes these concerns using COVID-19 data from the Kalmunai RDHS Division, Sri Lanka, and shows the problems with random sampling in a community where the incidence rate is low. A suitable sampling protocol should therefore be developed for performing rapid antigen tests for COVID-19 diagnosis. | health systems and quality improvement |
10.1101/2021.01.19.21250046 | City Reduced Probability of Infection (CityRPI) for Indoor Airborne Transmission of SARS-CoV-2 and Urban Building Energy Impacts | Airborne transmission of aerosols produced by asymptomatic individuals accounts for a large portion of SARS-CoV-2 spread indoors. Outdoor air ventilation rate, air filtration, room occupancy, exposure time, and mask-wearing are among the key parameters that affect its airborne transmission in indoor spaces. In this work, we developed a new web-based platform, City Reduced Probability of Infection (CityRPI), to calculate the indoor airborne transmission risk of COVID-19 across the buildings of a city. An archetype library of twenty-nine building types was developed based on several standards and references. Among the mitigation strategies recommended to reduce infection risk, some could have significant energy impacts on buildings. To study the combined effects of energy consumption and reduced infection probability, we integrated CityRPI with the City Building Energy Model. We applied the integrated model to Montreal City and studied the impact of six mitigation measures on infection risk and peak energy demand in winter. It shows that the same strategy can perform quite differently depending on building types and properties. In the winter season, increasing the outdoor air ventilation rate can substantially increase building energy consumption. All strategies are shown to reduce infection risk, but wearing a mask and reducing exposure time are the most effective strategies in many buildings, reducing risk by around 60%. Doubling the outdoor air ventilation rate is less effective, reducing risk by under 35%, and it significantly increases building peak heating demand, by 10-60%. | infectious diseases |
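CityRPI's exact formulation is not given in the abstract; indoor airborne-risk tools of this kind are commonly built on Wells-Riley-type models, so the sketch below uses that form. The quanta emission rate, breathing rate, and mask factor are illustrative assumptions, not the platform's values.

```python
import math

# Wells-Riley-style airborne infection probability sketch:
#   P = 1 - exp(-I * q * p * t / Q)
# with I infectors, q quanta/hr, p breathing rate (m^3/hr), t hours of
# exposure, Q outdoor-air ventilation (m^3/hr). Parameter values below are
# illustrative only; the abstract does not specify CityRPI's formulation.

def infection_probability(infectors, q_per_hr, breath_m3_hr, vent_m3_hr,
                          hours, mask_factor=1.0):
    dose = infectors * q_per_hr * breath_m3_hr * hours / vent_m3_hr
    return 1.0 - math.exp(-dose * mask_factor)

base = infection_probability(1, 25, 0.5, 500, 4)                   # no controls
masked = infection_probability(1, 25, 0.5, 500, 4, mask_factor=0.3)
shorter = infection_probability(1, 25, 0.5, 500, 2)                # half the time
doubled_vent = infection_probability(1, 25, 0.5, 1000, 4)
```

Consistent with the abstract's ranking, masks and shorter exposure cut the dose multiplicatively, while doubling ventilation only halves it.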
10.1101/2021.01.18.21249786 | Emergence of a novel SARS-CoV-2 strain in Southern California, USA | Since October 2020, novel strains of SARS-CoV-2, including B.1.1.7, have been identified to be of global significance from an infection and surveillance perspective. While this strain (B.1.1.7) may play an important role in increased COVID-19 rates in the UK, there are still no reported strains to account for the spike of cases in Los Angeles (LA) and California as a whole, which currently has some of the highest absolute and per-capita COVID-19 transmission rates in the country. From the early days of the pandemic, when LA had only a single viral genome uploaded onto GISAID, we have been at the forefront of generating and analyzing the SARS-CoV-2 sequencing data from the LA region. We report a novel strain emerging in Southern California. Most current cases in the catchment population in LA fall into two distinct subclades: 1) 20G (24% of total), the predominant subclade currently in the United States; and 2) CAL.20C (~36% of total), a relatively novel strain in clade 20C defined by five concurrent mutations. After an analysis of all of the publicly available data and a comparison to our recent sequences, we see a dramatic growth in the relative percentage of the CAL.20C strain beginning in November of 2020. The predominance of this strain coincides with the increased positivity rate seen in this region. Unlike 20G, the novel strain CAL.20C is defined by multiple mutations in the S protein, a characteristic it shares with both the UK and South African strains, both of which are of significant clinical and scientific interest. | infectious diseases |
10.1101/2021.01.15.21249894 | BAYESIAN GROUP TESTING WITH DILUTION EFFECTS | SummaryA Bayesian framework for group testing under dilution effects has been developed, using lattice-based models. This work has particular relevance given the pressing public health need to enhance testing capacity for COVID-19 and future pandemics, and the need for wide-scale and repeated testing for surveillance under constantly varying conditions. The proposed Bayesian approach allows for dilution effects in group testing and for general test response distributions beyond just binary outcomes. It is shown that even under strong dilution effects, an intuitive group testing selection rule that relies on the model order structure, referred to as the Bayesian halving algorithm, has attractive optimal convergence properties. Analogous look-ahead rules that can reduce the number of stages in classification by selecting several pooled tests at a time are proposed and evaluated as well. Group testing is demonstrated to provide great savings over individual testing in the number of tests needed, even for moderately high prevalence levels. However, there is a trade-off: a higher number of testing stages and increased variability. A web-based calculator is introduced to assist in weighing these factors and to guide decisions on when and how to pool under various conditions. High-performance distributed computing methods have also been implemented for considering larger pool sizes, when savings from group testing can be even more dramatic. | infectious diseases |
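As a point of comparison for the savings described above, classic two-stage (Dorfman) pooling already shows why group testing helps at low prevalence and why the advantage erodes as prevalence rises. This baseline ignores the paper's dilution effects and Bayesian lattice machinery entirely.

```python
# Expected tests per person under classic two-stage (Dorfman) pooling,
# assuming independent infections, a perfect test, and no dilution. This is
# a simple baseline, not the paper's Bayesian halving algorithm.

def dorfman_tests_per_person(prevalence, pool_size):
    """One pooled test per group of `pool_size`, plus individual retests
    for every member if the pool is positive."""
    p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
    return 1.0 / pool_size + p_pool_positive

low = dorfman_tests_per_person(0.02, 8)    # ~0.27 tests per person
high = dorfman_tests_per_person(0.30, 8)   # pooling no longer pays off
```

At 2% prevalence, pools of 8 need roughly a quarter of the tests of individual testing; at 30% prevalence, nearly every pool is positive and pooling costs more than one test per person.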
10.1101/2021.01.19.21249604 | Clinical utility of Corona Virus Disease-19 serum IgG, IgM, and neutralizing antibodies and inflammatory markers | Most deaths from severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) infection occur in older subjects. We assessed age effects and clinical utility of serum SARS-CoV-2 immunoglobulin G (IgG), immunoglobulin M (IgM), and neutralizing antibodies and serum inflammatory markers. Serum IgG, IgM, and neutralizing antibody levels were measured using chemiluminescence assays from Diazyme (Poway, CA), while serum interleukin-6 (IL-6), C reactive protein (CRP), and ferritin were measured with immunoassays obtained from Roche (Indianapolis, IN). In 79,005 subjects, IgG and IgM levels were positive (≥1.0 arbitrary units (AU)/mL) in 5.29% and 3.25% of subjects, respectively. In antibody positive subjects, median IgG levels were 3.93 AU/mL if <45 years of age, 10.18 AU/mL if 45-64 years of age, and 10.85 AU/mL if ≥65 years of age (p<0.0001). In SARS-CoV-2 RNA positive cases, family members and exposed subjects (n=1,111), antibody testing was found to be valuable for case finding, and persistent IgM levels were associated with chronic symptoms. In non-hospitalized and hospitalized subjects assessed for SARS-CoV-2 RNA (n=278), median IgG levels in AU/mL were 0.05 in negative subjects (n=100), 14.83 in positive outpatients (n=129), and 30.61 in positive hospitalized patients (n=49, p<0.0001). Neutralizing antibody levels correlated significantly with IgG (r=0.875; p<0.0001). Two or more of the criteria of IL-6 ≥10 pg/mL, CRP ≥10 mg/L, and/or IgM >1.0 AU/mL occurred in 97.7% of inpatients versus 1.8% of outpatients (>50-fold relative risk, C statistic 0.986, p<0.0001).
Our data indicate that: 1) IgG levels are significantly higher in positive older subjects, possibly to compensate for decreased cellular immunity with aging; 2) IgG levels are important for case finding in family clusters; 3) IgG levels are significantly correlated with neutralizing antibody levels; 4) persistently elevated IgM levels are associated with chronic disease; and 5) markedly elevated IL-6, hs-CRP, and/or positive IgM accurately identify SARS-CoV-2 RNA positive subjects requiring hospitalization. | infectious diseases |
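The two-or-more-criteria rule reported above is simple enough to state as code. The thresholds come directly from the abstract; the function and argument names are invented here for illustration.

```python
# The abstract's reported inpatient flag: two or more of IL-6 >= 10 pg/mL,
# CRP >= 10 mg/L, IgM > 1.0 AU/mL. Thresholds are from the abstract;
# function and argument names are hypothetical.

def flags_hospitalization(il6_pg_ml, crp_mg_l, igm_au_ml):
    criteria = [il6_pg_ml >= 10.0, crp_mg_l >= 10.0, igm_au_ml > 1.0]
    return sum(criteria) >= 2

inpatient_like = flags_hospitalization(15.0, 42.0, 0.4)   # IL-6 and CRP elevated
outpatient_like = flags_hospitalization(2.0, 3.0, 1.5)    # only IgM elevated
```

Requiring two of three criteria makes the flag robust to a single spuriously elevated marker, consistent with the high C statistic reported.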
10.1101/2021.01.19.21249678 | The Role of Disease Severity and Demographics in the Clinical Course of COVID-19 Patients Treated with Convalescent Plasma | Treatment of patients with COVID-19 using convalescent plasma from recently recovered patients has been shown to be safe, but the time course of change in clinical status following plasma transfusion in relation to baseline disease severity has not yet been described. We analyzed short, descriptive daily reports of patient status in 7,180 hospitalized recipients of COVID-19 convalescent plasma in the Mayo Clinic Expanded Access Program. We assessed, from the day following transfusion, whether the patient was categorized by his or her physician as better, worse or unchanged compared to the day before, and whether, on the reporting day, the patient received mechanical ventilation, was in the ICU, had died or had been discharged. Most patients improved following transfusion, but clinical improvement was most notable in mild to moderately ill patients. Patients classified as severely ill upon enrollment improved, but not as rapidly, while patients classified as critically ill/end-stage and patients on ventilators showed worsening of disease status even after treatment with convalescent plasma. Patients age 80 and over showed little or no clinical improvement following transfusion. Clinical status at enrollment and age appear to be the primary factors in determining the therapeutic effectiveness of COVID-19 convalescent plasma among hospitalized patients. | infectious diseases |
10.1101/2021.01.18.21250056 | Learning from pandemics: using extraordinary events can improve disease now-casting models | Online searches have been used to study different health-related behaviours, including monitoring disease outbreaks. An obvious caveat is that several reasons can motivate individuals to seek online information, and models that are blind to people's motivations are of limited use and can even mislead. This is particularly true during extraordinary public health crises, such as the ongoing pandemic, when fear, curiosity and many other reasons can lead individuals to search for health-related information, masking the disease-driven searches. However, health crises can also offer an opportunity to disentangle different drivers and learn about human behavior. Here, we focus on the two pandemics of the 21st century (2009-H1N1 flu and Covid-19) and propose a methodology to discriminate between search patterns linked to general information seeking (media driven) and search patterns more closely associated with actual infection (disease driven). We show that by learning from such pandemic periods, with high anxiety and media hype, it is possible to select online searches and improve model performance in both pandemic and seasonal settings. Moreover, and despite the common claim that more data is always better, our results indicate that a lower volume of the right data can be better than large volumes of apparently similar data, especially in the long run. Our work provides a general framework that can be applied beyond specific events and diseases, and argues that algorithms can be improved simply by using less (better) data. This has important consequences, for example, for solving the accuracy-explainability trade-off in machine learning. | infectious diseases |
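The core idea, scoring candidate query series against case counts during an extraordinary period and keeping only the well-correlated ("disease-driven") ones, can be sketched with synthetic data. The query strings, series, and the 0.8 cutoff below are illustrative assumptions, not the paper's.

```python
# Illustrative term-selection sketch: keep only query series that track case
# counts during the high-attention (pandemic) period. All data are synthetic.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

cases = [10, 30, 80, 120, 90, 40, 15]  # pandemic-period case counts (synthetic)
queries = {
    "fever symptoms": [12, 28, 75, 118, 95, 42, 18],  # tracks cases
    "pandemic news":  [90, 95, 99, 98, 97, 96, 99],   # media-driven, near-flat
}
selected = [q for q, series in queries.items()
            if abs(pearson(series, cases)) > 0.8]  # hypothetical cutoff
```

A now-casting model would then be trained only on the selected series, discarding the media-driven signal even though it has a far larger search volume.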
10.1101/2021.01.17.21249878 | NEWS2 and laboratory predictors correlated with clinical deterioration in hospitalised patients with COVID-19 | BackgroundWe aimed to determine prognostic values of NEWS2 and laboratory parameters during the first week of COVID-19.
MethodsAll adult patients who were hospitalized for confirmed COVID-19 between the 11th of March and the 11th of May 2020 were retrospectively included. To evaluate factors associated with the prognostic outcomes of intensive care unit (ICU) admission and in-hospital death, univariate logistic regression analyses were performed at admission (D0), day 3 (D3), day 5 (D5), and day 7 (D7). Additionally, receiver operating characteristic (ROC) analyses were performed.
ResultsOverall, 611 patients were included. Clinical deterioration was observed in 79 (12.9%) patients during hospitalisation: 36 (5.9%) during the first three days, 54 (8.8%) during the first five days, and 62 (10.1%) during the first week of hospitalisation. Our results showed that NEWS2, procalcitonin, the neutrophil/lymphocyte ratio (NLR), and albumin were the best predictors of clinical deterioration at D0, D3, D5, and D7. Procalcitonin had the highest odds ratio for clinical deterioration on all days in univariate analysis. ROC analyses showed that NEWS2 at D7, procalcitonin at D5, albumin at D7, and NLR at D5 had the highest AUC values. Additionally, we detected a strong correlation between NEWS2 and laboratory parameters including neutrophil count, lymphocyte count, NLR, platelet/lymphocyte ratio, CRP, procalcitonin, ferritin, and urea on all days.
ConclusionThis study provides a list of several laboratory parameters correlated with NEWS2 and potential predictors for ICU admission or in-hospital death during the clinical course of COVID-19. Dynamic monitoring of NEWS2 and laboratory parameters is vital for improving clinical outcomes. | infectious diseases |
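The ROC analyses mentioned above reduce, for a single continuous predictor such as procalcitonin, to the Mann-Whitney form of the AUC: the probability that a randomly chosen deteriorating patient scores higher than a randomly chosen stable one. The values below are synthetic, not the study's data.

```python
# Minimal ROC AUC via the Mann-Whitney form, for one laboratory predictor.
# Synthetic values; ties between a positive and a negative count as 0.5.

def auc(pos_values, neg_values):
    """P(random positive scores above random negative), ties counted 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_values for n in neg_values)
    return wins / (len(pos_values) * len(neg_values))

deteriorated = [2.1, 5.4, 0.9, 3.3]    # e.g. procalcitonin, hypothetical units
stable       = [0.1, 0.3, 0.9, 0.2, 0.5]
score = auc(deteriorated, stable)
```

An AUC near 1.0 means the marker almost perfectly ranks deteriorating patients above stable ones; 0.5 means it is no better than chance.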
10.1101/2021.01.18.21250041 | Endothelial cell-activating antibodies in COVID-19 | ObjectiveWhile endothelial dysfunction has been implicated in the widespread thrombo-inflammatory complications of coronavirus disease-19 (COVID-19), the upstream mediators of endotheliopathy remain for the most part cryptic. Our aim was to identify circulating factors contributing to endothelial cell activation and dysfunction in COVID-19.
MethodsHuman endothelial cells were cultured in the presence of serum or plasma from 244 patients hospitalized with COVID-19 and plasma from 100 patients with non-COVID sepsis. Cell adhesion molecules (E-selectin, VCAM-1, and ICAM-1) were quantified by in-cell ELISA.
ResultsSerum and plasma from patients with COVID-19 increased surface expression of cell adhesion molecules. Furthermore, levels of soluble ICAM-1 and E-selectin were elevated in patient serum and tracked with disease severity. The presence of circulating antiphospholipid antibodies was a strong marker of the ability of COVID-19 serum to activate endothelium. Depletion of total IgG from antiphospholipid antibody-positive serum markedly restrained upregulation of cell adhesion molecules. Conversely, supplementation of control serum with patient IgG was sufficient to trigger endothelial activation.
ConclusionThese data are the first to suggest that some patients with COVID-19 have potentially diverse antibodies that drive endotheliopathy, adding important context regarding thrombo-inflammatory effects of autoantibodies in severe COVID-19. | infectious diseases |
10.1101/2021.01.18.21250041 | Endothelial cell-activating antibodies in COVID-19 | ObjectiveWhile endothelial dysfunction has been implicated in the widespread thrombo-inflammatory complications of coronavirus disease-19 (COVID-19), the upstream mediators of endotheliopathy remain for the most part cryptic. Our aim was to identify circulating factors contributing to endothelial cell activation and dysfunction in COVID-19.
MethodsHuman endothelial cells were cultured in the presence of serum or plasma from 244 patients hospitalized with COVID-19 and plasma from 100 patients with non-COVID sepsis. Cell adhesion molecules (E-selectin, VCAM-1, and ICAM-1) were quantified by in-cell ELISA.
ResultsSerum and plasma from patients with COVID-19 increased surface expression of cell adhesion molecules. Furthermore, levels of soluble ICAM-1 and E-selectin were elevated in patient serum and tracked with disease severity. The presence of circulating antiphospholipid antibodies was a strong marker of the ability of COVID-19 serum to activate endothelium. Depletion of total IgG from antiphospholipid antibody-positive serum markedly restrained upregulation of cell adhesion molecules. Conversely, supplementation of control serum with patient IgG was sufficient to trigger endothelial activation.
ConclusionThese data are the first to suggest that some patients with COVID-19 have potentially diverse antibodies that drive endotheliopathy, adding important context regarding thrombo-inflammatory effects of autoantibodies in severe COVID-19. | infectious diseases |
10.1101/2021.01.18.21250041 | Endothelial cell-activating antibodies in COVID-19 | ObjectiveWhile endothelial dysfunction has been implicated in the widespread thrombo-inflammatory complications of coronavirus disease-19 (COVID-19), the upstream mediators of endotheliopathy remain for the most part cryptic. Our aim was to identify circulating factors contributing to endothelial cell activation and dysfunction in COVID-19.
MethodsHuman endothelial cells were cultured in the presence of serum or plasma from 244 patients hospitalized with COVID-19 and plasma from 100 patients with non-COVID sepsis. Cell adhesion molecules (E-selectin, VCAM-1, and ICAM-1) were quantified by in-cell ELISA.
ResultsSerum and plasma from patients with COVID-19 increased surface expression of cell adhesion molecules. Furthermore, levels of soluble ICAM-1 and E-selectin were elevated in patient serum and tracked with disease severity. The presence of circulating antiphospholipid antibodies was a strong marker of the ability of COVID-19 serum to activate endothelium. Depletion of total IgG from antiphospholipid antibody-positive serum markedly restrained upregulation of cell adhesion molecules. Conversely, supplementation of control serum with patient IgG was sufficient to trigger endothelial activation.
ConclusionThese data are the first to suggest that some patients with COVID-19 have potentially diverse antibodies that drive endotheliopathy, adding important context regarding thrombo-inflammatory effects of autoantibodies in severe COVID-19. | infectious diseases |
10.1101/2021.01.15.21249897 | Genomics of Aminoglycoside Resistance in Pseudomonas aeruginosa Bloodstream Infections at a United States Academic Hospital | Pseudomonas aeruginosa is a frequent cause of antibiotic-resistant infections. Although P. aeruginosa is intrinsically resistant to many antimicrobial agents, aminoglycosides are active against this organism in the absence of acquired resistance determinants and mutations. However, genes encoding aminoglycoside modifying enzymes (AMEs) are found in many strains that are resistant to these agents. We examined the prevalence of phenotypic resistance to the commonly used aminoglycosides gentamicin, tobramycin, and amikacin in a collection of 227 P. aeruginosa bloodstream isolates collected over two decades from a single U.S. academic medical institution. Resistance to these antibiotics was relatively stable over this time period. High-risk clones ST111 and ST298 were initially common but decreased in frequency over time. Whole genome sequencing identified relatively few AME genes in this collection compared to the published literature; only 14% of isolates contained an AME gene other than the ubiquitous aph(3)-IIb. Of those present, only ant(2")-Ia was associated with phenotypic resistance to gentamicin or to tobramycin. One extensively drug-resistant strain, PS1871, contained 5 AME genes, most of which were part of clusters of antibiotic resistance genes embedded within transposable elements. These findings suggest that AME genes play a relatively minor role in aminoglycoside resistance at our institution but that multidrug-resistant strains remain a problem. | infectious diseases |
10.1101/2021.01.18.21250044 | Blood transcriptional biomarkers of acute viral infection for detection of pre-symptomatic SARS-CoV-2 infection | We hypothesised that host-response biomarkers of viral infections may contribute to early identification of SARS-CoV-2 infected individuals, critical to breaking chains of transmission. We identified 20 candidate blood transcriptomic signatures of viral infection by systematic review and evaluated their ability to detect SARS-CoV-2 infection, compared to the gold-standard of virus PCR tests, among a prospective cohort of 400 hospital staff subjected to weekly testing when fit to attend work. The transcriptional signatures had limited overlap, but were mostly co-correlated as components of type 1 interferon responses. We reconstructed each signature score in blood RNA sequencing data from 41 individuals over sequential weeks spanning a first positive SARS-CoV-2 PCR, and after 6-month convalescence. A single blood transcript for IFI27 provided the highest accuracy for discriminating individuals at the time of their first positive viral PCR result from uninfected controls, with area under the receiver operating characteristic curve (AUROC) of 0.95 (95% confidence interval 0.91-0.99), sensitivity 0.84 (0.7-0.93) and specificity 0.95 (0.85-0.98) at a predefined test threshold. The test performed equally well in individuals with and without symptoms, correlated with viral load, and identified incident infections one week before the first positive viral PCR with sensitivity 0.4 (0.17-0.69) and specificity 0.95 (0.85-0.98). Our findings strongly support further urgent evaluation and development of blood IFI27 transcripts as a biomarker for early phase SARS-CoV-2 infection, for screening individuals such as contacts of index cases, in order to facilitate early case isolation and early antiviral treatments as they emerge. | infectious diseases |
10.1101/2021.01.15.21249756 | Factors associated with deaths due to COVID-19 versus other causes: population-based cohort analysis of UK primary care data and linked national death registrations within the OpenSAFELY platform | BackgroundMortality from COVID-19 shows a strong relationship with age and pre-existing medical conditions, as does mortality from other causes. However, it is unclear how specific factors are differentially associated with COVID-19 mortality as compared to mortality from other causes.
MethodsWorking on behalf of NHS England, we carried out a cohort study within the OpenSAFELY platform. Primary care data from England were linked to national death registrations. We included all adults (aged [≥]18 years) in the database on 1st February 2020 and with >1 year of continuous prior registration; the cut-off date for deaths was 9th November 2020. Associations between individual-level characteristics and COVID-19 and non-COVID deaths were estimated by fitting age- and sex-adjusted logistic models for these two outcomes.
Results17,456,515 individuals were included. 17,063 died from COVID-19 and 134,316 from other causes. Most factors associated with COVID-19 death were similarly associated with non-COVID death, but the magnitudes of association differed. Older age was more strongly associated with COVID-19 death than non-COVID death (e.g. ORs 40.7 [95% CI 37.7-43.8] and 29.6 [28.9-30.3] respectively for [≥]80 vs 50-59 years), as was male sex, deprivation, obesity, and some comorbidities. Smoking, history of cancer and chronic liver disease had stronger associations with non-COVID than COVID-19 death. All non-white ethnic groups had higher odds of COVID-19 death than the white group (OR for Black: 2.20 [1.96-2.47], South Asian: 2.33 [2.16-2.52]), but lower odds of non-COVID death (Black: 0.88 [0.83-0.94], South Asian: 0.78 [0.75-0.81]).
InterpretationSimilar associations of most individual-level factors with COVID-19 and non-COVID death suggest that COVID-19 largely multiplies existing risks faced by patients, with some notable exceptions. Identifying the unique factors contributing to the excess COVID-19 mortality risk among non-white groups is a priority to inform efforts to reduce deaths from COVID-19.
FundingWellcome, Royal Society, National Institute for Health Research, National Institute for Health Research Oxford Biomedical Research Centre, UK Medical Research Council, Health Data Research UK. | infectious diseases |
10.1101/2021.01.17.21249995 | A rapid pharmacogenomic assay to detect NAT2 polymorphisms and guide isoniazid dosing for tuberculosis treatment | RationaleStandardized weight-based dose of anti-tubercular drugs contributes to a substantial incidence of toxicities, inadequate treatment response, and relapse, in part due to variable drug levels achieved. Single nucleotide polymorphisms (SNPs) in the N-acetyltransferase-2 (NAT2) gene explain the majority of interindividual pharmacokinetic variability of isoniazid (INH). However, an obstacle to implementing pharmacogenomic-guided dosing is the lack of a point-of-care assay.
ObjectivesTo develop and test a NAT2 classification algorithm, validate its performance in predicting isoniazid clearance, and develop a prototype pharmacogenomic assay.
MethodsWe trained random forest models to predict NAT2 acetylation genotype from unphased SNP data using a global collection of 8,561 phased genomes. We enrolled 48 pulmonary TB patients, performed sparse pharmacokinetic sampling, and tested the acetylator prediction algorithm accuracy against estimated INH clearance. We then developed a cartridge-based multiplex qPCR assay on the GeneXpert platform and assessed its analytical sensitivity on whole blood samples from healthy individuals.
Measurements and Main ResultsWith a 5-SNP model trained on two-thirds of the data (n=5,738), out-of-sample acetylation genotype prediction accuracy on the remaining third (n=2,823) was 100%. Among the 48 TB patients, predicted acetylator types were: 27 (56.2%) slow, 16 (33.3%) intermediate and 5 (10.4%) rapid. INH clearance rates were lowest in predicted slow acetylators (median 19.3 L/hr), moderate in intermediate acetylators (median 41.0 L/hr) and highest in fast acetylators (median 46.7 L/hr). The cartridge-based assay accurately detected all allele patterns directly from 25 µl of whole blood.
ConclusionsAn automated pharmacogenomic assay on a platform widely used globally for tuberculosis diagnosis could enable personalized dosing of isoniazid.
SummaryThis manuscript describes the development and validation of point-of-care multiplex pharmacogenomic assay to guide personalized dosing of isoniazid for treatment or prevention of tuberculosis. | infectious diseases |
10.1101/2021.01.18.21249463 | Viral dispersion in open air | The SARS-CoV-2 pandemic has revived the debate about the routes of virus transmission and their likelihoods. It is of utmost importance to assess the risks of contamination of susceptible people by infectious individuals and to evaluate the level of viral transmission in the community. Most countries have imposed non-pharmaceutical measures to contain SARS-CoV-2 transmission, including social distancing and mask wearing. Here we evaluated the spreading of viruses in open air using harmless Escherichia coli bacteriophages as a surrogate. Phages were sprayed towards Petri dishes seeded with bacteria at different distances and angles. Median droplet size was 127 {micro}m, similar to that produced by a sneeze. Our results showed that the transmission rate decreased exponentially with distance. The highest recorded transmission rate was 9 x 10-6 PFU/plate when phages were sprayed from a 1 m distance, suggesting that the probability of transmission of a single virus at a 1 m distance is 1:100,000. These results agree with the WHO recommendation that face mask protection in an uncrowded well-ventilated space is not required. | infectious diseases |
10.1101/2021.01.16.21249937 | Meta-Analysis of Robustness of COVID-19 Diagnostic Kits During Early Pandemic | BackgroundAccurate detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is necessary to mitigate the coronavirus disease-19 (COVID-19) pandemic. However, the test reagents and assay platforms are varied and may not be sufficiently robust to diagnose COVID-19.
MethodsWe reviewed 85 studies (21,530 patients), published from five regions of the world, to highlight issues involved in the diagnosis of COVID-19 in the early phase of the pandemic, following the standards outlined in the PRISMA statement. All relevant articles, published up to May 31, 2020, in PubMed, bioRxiv, medRxiv, and Google Scholar, were included. We evaluated the qualitative (9,749 patients) and quantitative (10,355 patients) performance of RT-PCR and serologic diagnostic tests for real-world samples, and assessed the concordance (5,538 patients) between methods in meta-analyses.
ResultsThe RT-PCR tests exhibited heterogeneity in the primers and reagents used. Of 1,957 RT-PCR-positive COVID-19 participants, 1,585 had positive serum antibody (IgM +/- IgG) tests (sensitivity 0.81, 95%CI 0.66-0.90), while 3,509 of 3,581 participants who were RT-PCR negative for COVID-19 were also found negative by serology testing (specificity 0.98, 95%CI 0.94-0.99). The chemiluminescent immunoassay exhibited the highest sensitivity, followed by ELISA and lateral flow immunoassays. Serology tests had higher sensitivity and specificity for laboratory-approval than for real-world reporting data.
ConclusionsThe robustness of the assays/platforms is influenced by variability in sampling and reagents. Serological testing complements and may minimize false negative RT-PCR results. Lack of standardized assay protocols in the early phase of pandemic might have contributed to the spread of COVID-19. | infectious diseases |
10.1101/2021.01.15.21249891 | Inactivation of SARS-CoV-2 virus in saliva using a guanidium based transport medium suitable for RT-PCR diagnostic assays | BackgroundUpper respiratory samples used to test for SARS-CoV-2 virus may be infectious and present a hazard during transport and testing. A buffer with the ability to inactivate SARS-CoV-2 at the time of sample collection could simplify and expand testing for COVID-19 to non-conventional settings.
MethodsWe evaluated a guanidium thiocyanate-based buffer, eNAT (Copan) as a possible transport and inactivation medium for downstream RT-PCR testing to detect SARS-CoV-2. Inactivation of SARS-CoV-2 USA-WA1/2020 in eNAT and in diluted saliva was studied at different incubation times. The stability of viral RNA in eNAT was also evaluated for up to 7 days at room temperature (28{degrees}C), refrigerated conditions (4{degrees}C) and at 35{degrees}C.
ResultsSARS-CoV-2 virus spiked directly in eNAT could be inactivated at >5.6 log10 PFU/ml within a minute of incubation. When saliva was diluted 1:1 in eNAT, no cytopathic effect (CPE) on Vero E6 cell lines was observed, although SARS-CoV-2 RNA could be detected even after 30 min incubation and after two cell culture passages. A 1:2 (saliva:eNAT) dilution abrogated both CPE and detectable viral RNA after as little as 5 min incubation in eNAT. SARS-CoV-2 RNA from virus spiked at 5X the limit of detection remained positive up to 7 days of incubation in all tested conditions.
ConclusioneNAT and similar guanidinium thiocyanate-based media may be of value for transport, preservation, and processing of clinical samples for RT-PCR based SARS-CoV-2 detection. | infectious diseases |
10.1101/2021.01.19.21249222 | Performance of intensive care unit severity scoring systems across different ethnicities. | BackgroundDespite wide utilisation of severity scoring systems for case-mix determination and benchmarking in the intensive care unit, the possibility of scoring bias across ethnicities has not been examined. Recent guidelines on the use of illness severity scores to inform triage decisions for allocation of scarce resources such as mechanical ventilation during the current COVID-19 pandemic warrant examination for possible bias in these models. We investigated the performance of three severity scoring systems (APACHE IVa, OASIS, SOFA) across ethnic groups in two large ICU databases in order to identify possible ethnicity-based bias.
MethodData from the eICU Collaborative Research Database and the Medical Information Mart for Intensive Care were analysed for score performance in Asians, African Americans, Hispanics and Whites after appropriate exclusions. Discrimination and calibration were determined for all three scoring systems in all four groups.
FindingsWhile measurements of discrimination (area under the receiver operating characteristic curve, AUROC) were significantly different among the groups, they did not display any discernible systematic patterns of bias. In contrast, measurements of calibration (standardised mortality ratio, SMR) indicated persistent, and in some cases significant, patterns of difference between Hispanics and African Americans versus Asians and Whites. The differences between African Americans and Whites were consistently statistically significant. While calibrations were imperfect for all groups, the scores consistently demonstrated a pattern of over-predicting mortality for African Americans and Hispanics.
InterpretationThe systematic differences in calibration across ethnic groups suggest that illness severity scores reflect bias in their predictions of mortality.
FundingLAC is funded by the National Institute of Health through NIBIB R01 EB017205. There was no specific funding for this study. | intensive care and critical care medicine |
10.1101/2021.01.18.21250062 | Ranking videolaryngoscopes by orotracheal intubation performance: protocol of a systematic review and network meta-analysis of clinical trials at patient level. | BackgroundVideolaryngoscopes (VLs) are thought to improve glottic visualization compared with the Macintosh laryngoscope (ML). However, it is currently unknown which one is the best choice. We therefore designed this systematic review and network meta-analysis to rank the different VLs against the ML.
MethodsWe will conduct a search in PubMed, LILACS, Scielo, Embase, Web of Science, and Cochrane Central Register of Controlled Trials (CENTRAL; 2020, Issue 6) on 11/01/2021. We will include fully reported randomized clinical trials in patients aged [≥]16 years comparing VLs with ML on failed intubation with the device, failed first intubation attempts, number of intubation attempts, time for intubation, difficulty of intubation, and improved visualization of the larynx. Pooled effects will be estimated by both fixed and random-effects models and presented according to qualitative and quantitative heterogeneity assessment. Sensitivity analyses will be performed as well as a priori subgroup, meta-regression and multiple meta-regression analyses. Additionally, network meta-analyses will be applied to rank the different VLs as compared to ML. We will also assess the risk of selective publication by funnel plot asymmetry.
DiscussionThis systematic review and network meta-analysis aim at helping health services and clinicians involved in airway manipulation choose the best VLs for orotracheal intubation.
Systematic review registrationThe current protocol was submitted to PROSPERO on 07/01/2021. | anesthesia |
10.1101/2021.01.16.21249934 | Development of Predictive Risk Models for All-cause Mortality in Pulmonary Hypertension using Machine Learning | BackgroundPulmonary hypertension, a progressive lung disorder with symptoms such as breathlessness and loss of exercise capacity, is highly debilitating and has a negative impact on the quality of life. In this study, we examined whether a multi-parametric approach using machine learning can improve mortality prediction.
MethodsA population-based territory-wide cohort of pulmonary hypertension patients from January 1, 2000 to December 31, 2017 were retrospectively analyzed. Significant predictors of all-cause mortality were identified. Easy-to-use frailty indexes predicting primary and secondary pulmonary hypertension were derived and stratification performances of the derived scores were compared. A factorization machine model was used for the development of an accurate predictive risk model and the results were compared to multivariate logistic regression, support vector machine, random forests, and multilayer perceptron.
ResultsThe cohort consisted of 2,562 patients with either primary (n=1009) or secondary (n=1553) pulmonary hypertension. Multivariate Cox regression showed that age, prior cardiovascular, respiratory and kidney diseases, hypertension, and number of emergency readmissions within 28 days of discharge were all predictors of all-cause mortality. Easy-to-use frailty scores were developed from Cox regression. A factorization machine model demonstrated superior risk prediction improvements for both primary (precision: 0.90, recall: 0.89, F1-score: 0.91, AUC: 0.91) and secondary pulmonary hypertension (precision: 0.87, recall: 0.86, F1-score: 0.89, AUC: 0.88) patients.
ConclusionWe derived easy-to-use frailty scores predicting mortality in primary and secondary pulmonary hypertension. A machine learning model incorporating multi-modality clinical data significantly improves risk stratification performance. | cardiovascular medicine |
10.1101/2021.01.12.21249477 | Prognostic significance of BMI after PCI treatment in ST-elevation myocardial infarction A cohort study from the Swedish Coronary Angiography and Angioplasty Registry | BackgroundObesity along with clustering of cardiovascular risk factors is a promoter for coronary artery disease. On the other hand, a high BMI appears to exert a protective effect with respect to outcomes after a coronary artery event, termed the obesity paradox.
MethodsThe Swedish Coronary Angiography and Angioplasty Registry (SCAAR) collects information on all patients who undergo percutaneous coronary intervention (PCI) for ST-elevation myocardial infarction (STEMI) in Sweden along with demographic- and procedure-related data. We studied the predictability of four categories of BMI for 1-year all-cause mortality in people with STEMI undergoing PCI.
ResultsAmong 25,384 patients, mean (SD) age 67.7 (12.1) years and 71.1% male, who underwent PCI for STEMI, a total of 5,529 (21.8%) died within one year. Using normal weight (BMI 18.5-24.9 kg/m2) as a reference, subjects with obesity (BMI [≥]30 kg/m2) had a lower 1-year all-cause mortality risk in unadjusted analysis, HR 0.59 (95% CI 0.53-0.67). However, after adjustment for age, sex and other covariates the difference became non-significant, HR 0.88 (95% CI: 0.75-1.02). Patients with overweight (BMI 25.0-29.9 kg/m2) had the lowest 1-year mortality risk in analysis adjusted for age, sex and other covariates, HR 0.87 (95% CI 0.79-0.95), whereas those with underweight (BMI <18.5 kg/m2) had the highest mortality in both unadjusted, HR 2.22 (95% CI 1.69-2.92), and adjusted analysis, HR 1.72 (95% CI: 1.31-2.26).
ConclusionThe protective effect of obesity with respect to 1-year mortality after coronary intervention became non-significant after adjusting for age, sex and relevant co-variates. Instead, overweight people displayed the lowest risk and underweight individuals the highest risk for adjusted all-cause mortality | cardiovascular medicine |
10.1101/2021.01.19.21250092 | The Impact of U.S. County-Level Factors on COVID-19 Morbidity and Mortality | BackgroundThe effect of socioeconomic factors, ethnicity, and other variables on the frequency of COVID-19 cases [morbidity] and induced deaths [mortality] at sub-population, rather than individual, levels is only partially understood.
ObjectiveTo determine which county-level features best predict COVID-19 morbidity and mortality for a given county in the U.S.
DesignA Machine-Learning model that predicts COVID-19 mortality and morbidity using county-level features, followed by a SHAP-values-based importance analysis of the predictive features.
SettingPublicly available data from various American government and news websites.
Participants3,071 U.S. counties, from which 53 county-level features, as well as morbidity and mortality numbers, were collected.
MeasurementsFor each county: Ethnicity, socioeconomic factors, educational attainment, mask usage, population density, age distribution, COVID-19 morbidity and mortality, air quality indicators, presidential election results, ICU beds.
ResultsA Random Forest classifier produced an AUROC of 0.863 for morbidity prediction and an AUROC of 0.812 for mortality prediction. A SHAP-values-based analysis indicated that poverty rate, obesity rate, mean commute time to work, and the proportion of people who wear masks significantly affected morbidity rates, while ethnicity, median income, poverty rate, and education levels heavily influenced mortality rates. The correlations between several of these factors and COVID-19 morbidity and mortality shifted from 4/2020 to 11/2020, probably because COVID-19 was initially associated with more urbanized areas and later with less urbanized ones.
LimitationsData are still coming in.
ConclusionsEthnicity, education, and economic disparity measures are major factors in predicting the COVID-19 mortality rate in a county. Factors with low between-county variance (e.g., age) are not meaningful predictors.
Differing correlations can be explained by the COVID-19 spread from metropolitan to less metropolitan areas.
Primary Funding SourceNone. | epidemiology |
10.1101/2021.01.19.21249898 | Identifying high-risk groups for change in weight and body mass index: population cohort of 11 million measurements in 2.3 million adults | BackgroundAdult obesity prevention policies, which are largely untargeted, have met with limited success globally. Population groups with the highest risk of weight gain, if they could be reliably identified using readily available information, might benefit from targeted policy. The relative importance of age, sex, ethnicity, geographical region and social deprivation for weight gain is unknown.
MethodsWe calculated longitudinal changes in BMI over one, five and ten years and investigated transition between BMI categories using 11,187,383 clinically recorded, repeated measures of BMI from population-based electronic health records of 2,328,477 adults in England (1998-2016). The influence of risk factors was tested using logistic regression.
FindingsThe youngest adult age group (18-24 years) was more strongly associated with risk of weight gain than older age, male sex, socioeconomic deprivation, ethnicity or geographic region. Among the youngest adults, the top quartile gained 15.9 kg in men and 12 kg in women at 10 years. The odds of transitioning to a higher BMI category over 10 years were 4-6 times higher in the youngest (18-24 years) compared to the oldest (65-74 years) individuals: odds ratio (95% confidence interval) 4.22 (3.85-4.62) from normal weight to overweight or obesity, 4.60 (4.06-5.22) from overweight to obesity, and 5.87 (5.23-6.59) from obesity to severe obesity in multiple adjusted analyses. Among the youngest adults, socially deprived men were at greater risk of transitioning from normal weight to overweight (72%) and from overweight to obesity (68%) over 10 years. We provide an open-access online risk calculator (https://pasea.shinyapps.io/bmi_shiny_app/) and present high-resolution obesity risk charts over 1-, 5- and 10-year follow-up.
InterpretationA radical shift in policy is required to focus on those at highest risk of weight gain, namely young adults, for individual- and population-level prevention of obesity and its long-term consequences for health and health care.
FundingBHF, HDR-UK, MRC | epidemiology |
10.1101/2021.01.15.21249880 | Simulation of epidemic models with generally distributed sojourn times | Epidemic models are used to analyze the progression or outcome of an epidemic under different control policies like vaccinations, quarantines, lockdowns, use of face-masks, pharmaceutical interventions, etc. When these models accurately represent real-life situations, they may become an important tool in the decision-making process. Among these models, compartmental models are very popular and assume individuals move along a series of compartments that describe their current health status. Nevertheless, these models are mostly Markovian, that is, the time in each compartment follows an exponential distribution. Here, we introduce a novel approach to simulate general stochastic epidemic models that accepts any distribution for the sojourn times. | epidemiology |
10.1101/2021.01.17.21249997 | Cognitive function and vitamin B12 and D in elders from Ecuador | IntroductionCurrent evidence still does not support the role of vitamin B12 or vitamin D in age-associated cognitive impairment.
ObjectiveEvaluate the association between vitamin B12 and D and cognitive function in elders.
MethodologyThe sample included 1,733 individuals 60 years old and older who participated in the SABE study carried out in Ecuador. Cognitive function was measured using an abbreviated version of the Mini-Mental State Examination (MMSE). Vitamin B12 and vitamin D were measured in blood. Data were analyzed through linear regression models with restricted cubic splines (RCSs). Models were adjusted for sex, years of education, region (urban highland, urban coast, rural highlands, rural coast), socioeconomic status, and ethnicity.
ResultsIndependently of sex, age, years of education, ethnicity, socioeconomic status and geographical zone of residence, we found that vitamin B12, but not vitamin D, levels were associated with cognitive function in a representative group of Ecuadorian elders. Elders with lower levels of vitamin B12 showed lower MMSE scores in comparison to elders with higher levels of vitamin B12. Moreover, a statistically significant nonlinear interaction was found between vitamin B12 and age with respect to cognitive function. In particular, we observed that in elders 75 years old and older whose levels of vitamin B12 were 271 pg/ml or less, the decline in cognitive function was particularly steep in comparison to elders whose levels of vitamin B12 were 647 pg/ml or more.
ConclusionsLow levels of vitamin B12 but not of vitamin D are associated with low cognitive functioning. | epidemiology |
10.1101/2021.01.18.21250053 | Forecasting the Spread of the COVID-19 Epidemic in Lombardy: A Dynamic Model Averaging Approach | Accurately forecasting the evolution of COVID-19 daily incidence curves is one of the most important exercises in the field of epidemic modeling. We examine the forecastability of daily COVID-19 cases in the Italian region of Lombardy using Dynamic Model Averaging and Dynamic Model Selection methods. To investigate the predictive accuracy of this approach, we compute forecast performance metrics of sequential out-of-sample real-time forecasts in a back-testing exercise ranging from March 1 to December 10, 2020. We find that (i) Dynamic Model Averaging leads to consistent and substantial predictive improvements over alternative epidemiological models and machine learning approaches when producing short-run forecasts. Using estimated posterior inclusion probabilities, we also provide evidence on which predictors are relevant for forecasting in each period. Our findings also suggest that (ii) future incidence can be forecast by exploiting information on the epidemic dynamics of neighboring regions, human mobility patterns, and pollution and temperature levels. | epidemiology
10.1101/2021.01.19.21249816 | Modelling the COVID-19 Fatality Rate in England and its Regions | A model to account for the fatality rate in England and its regions is proposed. It follows the clear observation that, rather than two connected waves, there have been many waves of infections and fatalities in the regions of England of various magnitudes, usually overlapping. The waves are self-limiting, in that clear peaks are seen, particularly in reported positive test rates. The present model considers fatalities as the data reported are more reliable than positive test rates, particularly so during the first wave when so little testing was done.
The model considers the observed waves are essentially similar in form and can be modelled using a single wave form, whose final state is only dependent on its peak height and start date. The basic wave form was modelled using the observed fatality rates for London, which unlike the other regions, exhibited almost completely as a single wave in the "first wave". Its form matches rather well with the "Do Nothing" model reported by Imperial College on 16th March 2020, but reduced substantially from its expected peak.
There are, essentially, only two adjustable parameters used in the model, the start date of the relevant wave and its height. The modelled fatalities for each wave are summated per day and a cumulative curve is matched to that reported. The minimal number of adjustable parameters, alongside the fact that the waves invariably overlap, provides highly stringent conditions on the fitting process.
Results are presented for each region for both the "first" and "second" waves. High levels of accuracy are obtained, with R2 values approaching 100% against the ideal fit for both waves. It can also be seen that there are fundamental differences between the underlying behaviour of the "first" and "second" waves, and the reasons why those differences have arisen are briefly discussed. | epidemiology
10.1101/2021.01.18.21250012 | Studying the course of Covid-19 by a recursive delay approach | In an earlier paper we proposed a recursive model for epidemics; in the present paper we generalize this model to include the asymptomatic or unrecorded symptomatic people, which we call dark people (dark sector). We call this the SEPARd-model. A delay differential equation version of the model is added; it allows a better comparison to other models. We carry this out by a comparison with the classical SIR model and indicate why we believe that the SEPARd model may work better for Covid-19 than other approaches.
In the second part of the paper we explain how to deal with the data provided by the JHU, in particular we explain how to derive central model parameters from the data. Other parameters, like the size of the dark sector, are less accessible and have to be estimated more roughly, at best by results of representative serological studies which are accessible, however, only for a few countries. We start our country studies with Switzerland where such data are available. Then we apply the model to a collection of other countries, three European ones (Germany, France, Sweden), the three most stricken countries from three other continents (USA, Brazil, India). Finally we show that even the aggregated world data can be well represented by our approach.
At the end of the paper we discuss the use of the model. Perhaps the most striking application is that it allows a quantitative analysis of the influence of the time until people are sent to quarantine or hospital. This suggests that imposing means to shorten this time is a powerful tool to flatten the curves. | epidemiology |
10.1101/2021.01.18.21249998 | The Association between Early Country-level Testing Capacity and Later COVID-19 Mortality Outcomes | BackgroundThe COVID-19 pandemic has overrun hospital systems while exacerbating economic hardship and food insecurity on a global scale. In an effort to understand how early action to find and control the virus is associated with cumulative outcomes, we explored how country-level testing capacity affects later COVID-19 mortality.
MethodsWe used the Our World in Data database to explore testing and mortality records in 27 countries from December 31, 2019 to September 30, 2020; we applied ordinary-least squares regression with clustering on country to determine the association between early COVID-19 testing capacity (cumulative tests per case) and later COVID-19 mortality (time to specified mortality thresholds), adjusting for country-level confounders, including median age, GDP, hospital bed capacity, population density, and non-pharmaceutical interventions.
ResultsHigher early testing implementation, as indicated by more cumulative tests per case when mortality was still low, was associated with a longer time to accrue a given number of deaths per capita. For instance, a higher cumulative number of tests administered per case at the time of 6 deaths per million persons was positively predictive of a longer time to reach 15 deaths per million, after adjustment for all confounders (β=0.659; P=0.001).
ConclusionsCountries that developed stronger COVID-19 testing capacity at early timepoints, as measured by tests administered per case identified, experienced a slower increase of deaths per capita. Thus, this study operationalizes the value of testing and provides empirical evidence that stronger testing capacity at early timepoints is associated with reduced mortality and better pandemic control. | epidemiology |
10.1101/2021.01.17.21249837 | Modeling and Simulation: A study on predicting the outbreak of COVID-19 in Saudi Arabia | The novel coronavirus (COVID-19) infection has resulted in an ongoing pandemic affecting the health systems and economies of more than 200 countries around the world. Mathematical models are used to predict the biological and epidemiological trends of an epidemic and to develop methods for controlling it. In this work, we take a mathematical modelling perspective to study the role of behavior change in slowing the spread of COVID-19 in Saudi Arabia. Real-time updated data from 1 May 2020 to 8 January 2021 were collected from the Saudi Ministry of Health, aiming to capture the dynamic behavior of the pandemic in Saudi Arabia. During this period, the virus infected 297,205 people, resulting in 6,124 deaths (mortality rate 2.06%). There is a weak positive relationship between the spread of the infection and mortality (R2 = 0.412). We use the Susceptible-Exposed-Infected-Recovered (SEIR) model and the logistic growth model, with special focus on the exposed, infected and recovered individuals, to simulate the final phase of the outbreak. The results indicate that social distancing, good hygienic conditions, and travel limitation are the crucial measures to prevent further spread of the epidemic. | epidemiology
10.1101/2021.01.18.21250071 | The impacts of COVID-19 vaccine timing, number of doses, and risk prioritization on mortality in the US | As COVID-19 vaccination begins worldwide, policymakers face critical trade-offs. Using a mathematical model of COVID-19 transmission, we find that timing of the rollout is expected to have a substantially greater impact on mortality than risk-based prioritization and uptake and that prioritizing first doses over second doses may be life saving. | epidemiology |
10.1101/2021.01.18.21250031 | Cumulative effects of particulate matter pollution and meteorological variables on the risk of influenza-like illness in Bialystok, Poland | The cold season is usually accompanied by an increased incidence of respiratory infections and increased air pollution from combustion sources. As we are facing the growing numbers of COVID-19 cases caused by the novel SARS-CoV-2 coronavirus, an understanding of the impact of air pollutants and meteorological variables on the incidence of respiratory infections is crucial. The influenza-like illness (ILI) incidence might be used as a close proxy for the circulation of influenza viruses. Recently, SARS-CoV-2 has also been detected in patients with ILI. Using distributed lag nonlinear models, we analyzed the association between ILI, meteorological variables and particulate matter concentration in Bialystok, Poland, from 2013-2019. We found an exponential relation between cumulative PM2.5 pollution and the incidence of ILI that remained significant after adjusting for air temperatures and a long-term trend. Pollution had the greatest effect during the same week, but the risk of ILI was increased for the four following weeks. The risk of ILI was also increased by low air temperatures, low absolute humidity, and high wind speed. Altogether, our results show that all measures implemented to decrease PM2.5 concentrations would be beneficial to reduce the transmission of SARS-CoV-2 and other respiratory infections.
Capsule summaryLow to medium-high concentrations of particulate matter pollution increase the risk of influenza-like illness. The effect is independent of air temperature and lasts for the four following weeks. | epidemiology
10.1101/2021.01.19.21250059 | Systematic Review and Meta-analysis of Immunomodulator and Biologic Therapies for Treatment of Chronic Pouchitis | BackgroundPouchitis is a common complication after restorative proctocolectomy with ileal pouch-anal anastomosis (IPAA). Although antibiotics are the primary therapy for acute pouchitis, a proportion of patients develop chronic antibiotic-dependent pouchitis (CADP) or chronic antibiotic-refractory pouchitis (CARP). The efficacy of second-line immunomodulator and biologic therapies for chronic pouchitis remains undefined. We performed a systematic review and meta-analysis of published studies to assess their efficacy.
MethodThe online EMBASE database was searched for full-text articles describing the treatment of chronic pouchitis meeting our criteria. Post-induction clinical and endoscopic response and remission rates were extracted and combined for meta-analyses. The rate of treatment discontinuation and safety profiles were also assessed.
ResultsA total of 21 full-text articles were included in this meta-analysis, representing 491 patients. The overall clinical response rate was 49%, with a clinical remission rate of 34%. The overall endoscopic response and remission rates were 53% and 36%, respectively. The safety profile of the individual agents was reassuring, with vedolizumab appearing to have a more favorable safety profile.
ConclusionThis review and meta-analysis demonstrated the effectiveness of vedolizumab and ustekinumab in achieving clinical and endoscopic response in chronic pouchitis, with a reassuring safety profile. There are limited data regarding the use of immunomodulators, so no conclusion can be drawn for these agents. Further studies are required to define the comparative effectiveness of the available treatments for CADP and CARP. | gastroenterology
10.1101/2021.01.18.21250060 | Enhancing Cognitive Restructuring with Concurrent Repetitive Transcranial Magnetic Stimulation for Transdiagnostic Psychopathology: A Proof of Concept Randomized Controlled Trial | IntroductionEmotional dysregulation constitutes a serious public health problem in need of novel transdiagnostic treatments.
ObjectiveTo this aim, we developed and tested a one-time intervention that integrates behavioral skills training with concurrent repetitive transcranial magnetic stimulation (rTMS).
MethodsForty-six adults who met criteria for at least one DSM-5 disorder and self-reported low use of cognitive restructuring (CR) were enrolled in a randomized, double-blind, sham-controlled trial that used a between-subjects design. Participants were taught CR and underwent active rTMS applied at 10 Hz over the right (n= 17) or left (n= 14) dorsolateral prefrontal cortex (dlPFC) or sham rTMS (n= 15) while practicing reframing and emotional distancing in response to autobiographical stressors.
ResultsThose who received active left or active right rTMS, as opposed to sham rTMS, exhibited enhanced regulation (ds = 0.21-0.62) as measured by psychophysiological indices during the intervention (higher high-frequency heart rate variability, lower regulation duration). Those who received active rTMS over the left dlPFC also self-reported reduced distress throughout the intervention (d = 0.30), a higher likelihood of using CR, and lower daily distress during the week following the intervention. The procedures were acceptable and feasible, with few side effects.
ConclusionsThese findings show that engaging frontal circuits simultaneously with cognitive skills training and rTMS may be clinically feasible, well-tolerated and may show promise for the treatment of transdiagnostic emotional dysregulation. Larger follow up studies are needed to confirm the efficacy of this novel therapeutic approach. | psychiatry and clinical psychology |
10.1101/2021.01.18.21249750 | Association between polygenic propensity for a psychiatric disorder and nutrient intake | BackgroundDespite the observed associations between psychiatric disorders and nutrient intake, genetic studies are limited.
AimsWe examined whether polygenic scores for psychiatric disorders, including anorexia nervosa, major depressive disorder and schizophrenia, are associated with self-reported nutrient intake.
MethodsWe used data obtained by the UK Biobank Diet by 24-hour recall questionnaire (N=163,619). Association was assessed using linear mixed models for the analysis of data with repeated measures.
ResultsWe find polygenic scores for psychiatric disorders are differentially associated with nutrient intake, with attention-deficit/hyperactivity disorder, bipolar disorder and schizophrenia showing the strongest associations, whilst autism spectrum disorder showed no association. Expressed as the effect of a one standard deviation higher polygenic score, anorexia nervosa polygenic score was associated with higher intake of fibre (0.06 g), folate (0.93 g), iron (0.03 mg) and vitamin C (0.92 g). Similarly, a higher major depressive disorder polygenic score was associated with 0.04 mg lower iron and 1.13 g lower vitamin C intake per day, and a greater obsessive-compulsive disorder polygenic score with 0.06 g higher fibre intake. These associations were predominantly driven by socioeconomic status and educational attainment. However, a higher alcohol dependence polygenic score was associated with higher alcohol intake and individuals with higher persistent thinness polygenic scores reported their food to weigh 8.61 g less, both independent of socioeconomic status.
ConclusionsOur findings suggest that polygenic propensity for a psychiatric disorder is associated with dietary behaviour. Nutrient intake was based on self-reported data, and the findings must therefore be interpreted mindfully.
Declaration of interestNone. | psychiatry and clinical psychology |
10.1101/2021.01.18.21250032 | Remote care for mental health: qualitative study with service users, carers and staff during the COVID-19 pandemic | ObjectivesTo explore the experiences of service users, carers and staff seeking or providing secondary mental health services during the COVID-19 pandemic.
DesignQualitative interview study, co-designed with mental health service users and carers.
MethodsWe conducted semi-structured, telephone or online interviews with a purposively constructed sample; a peer researcher with lived experience conducted and analysed interviews with service users. Analysis was based on the constant comparison method.
SettingNHS secondary mental health services in England between June and August 2020.
ParticipantsOf 65 participants, 20 had either accessed or needed to access English secondary mental healthcare during the pandemic; 10 were carers of people with mental health difficulties; 35 were members of staff working in NHS secondary mental health services during the pandemic.
ResultsExperiences of remote care were mixed. Some service users valued the convenience of remote methods in the context of maintaining contact with familiar clinicians. Most participants commented that a lack of non-verbal cues and the loss of a therapeutic safe space challenged therapeutic relationship building, assessments, and identification of deteriorating mental wellbeing. Some carers felt excluded from remote meetings and concerned that assessments were incomplete without their input. As for service users, remote methods posed challenges for clinicians, who reported uncertainty about technical options and a lack of training. All groups expressed concern about intersectionality exacerbating inequalities and the exclusion of some service user groups if alternatives to remote care are lost.
ConclusionsWhilst remote mental healthcare is likely to become increasingly widespread in secondary mental health services, our findings highlight the continued importance of a tailored, personal approach to decisions about remote mental healthcare. Further research should focus on which types of consultations best suit face-to-face interaction, and for whom and why, and which can be provided remotely and by which medium.
ARTICLE SUMMARY Strengths and limitations of this study:
- Strengths include its qualitative approach in speaking to a large sample of participants with varied mental health difficulties, carers, and a diverse range of mental healthcare staff.
- Its novelty lies in a deep exploration of the views and experiences of remote mental healthcare during a pandemic.
- The methods are strengthened by the involvement of experts-by-experience and the use of peer research methods.
- We did not adopt a narrative method; the interviews were one-off conversations, so we could not explore change as the pandemic progressed and people may have become accustomed to remote care.
- The study used remote methods to comply with UK lockdown regulations; this will have excluded some groups without the ability to engage remotely. | psychiatry and clinical psychology
10.1101/2021.01.18.20173633 | Initial Clinical Outcomes from NOCD Digital Behavioral Health Treatment of Obsessive-Compulsive Disorder Using Exposure and Response Prevention | Effective first-line treatments for obsessive-compulsive disorder (OCD) include exposure and response prevention (ERP) therapy. Despite extensive evidence of its efficacy in clinical studies and real-world samples, ERP is still underutilized as a treatment, likely due to access-to-care barriers such as the availability of adequately trained ERP therapists, geographical location, time, and cost. NOCD has created a digital behavioral health treatment for OCD using ERP delivered via teletherapy and between-session support. We examined preliminary treatment outcomes in a large naturalistic sample of 2069 adults, children, and adolescents with a primary OCD diagnosis. Treatment consisted of twice-weekly live teletherapy ERP for three weeks, followed by six weeks of once-weekly brief teletherapy check-ins. Assessments were conducted at baseline, after completion of three weeks of twice-weekly sessions, and at the end of the six weeks of brief check-ins. Treatment resulted in significant improvements, with a 45% mean reduction in OCD symptoms and a 71% response rate (≥35% reduction in OCD symptoms). Treatment also resulted in a significant 43% reduction in depression, a 49% reduction in anxiety, and a 35% reduction in stress symptoms. Quality of life improved by a mean of 35%. The mean duration of treatment was approximately 11 weeks, and the total therapist time was approximately 11 hours, less than half the total time of standard once-weekly outpatient treatment. In sum, in this preliminary sample, NOCD's treatment model for OCD, delivered in a readily accessible format for patients, has been shown to be effective and efficient. | psychiatry and clinical psychology
10.1101/2021.01.18.21250049 | Examining the effect of information channel on COVID-19 vaccine acceptance | Hesitancy towards the COVID-19 vaccine remains high among the US population. Now that the vaccine is available to priority populations, it is critical to convince those that are hesitant to take the vaccine. Public health communication about the vaccine, as well as misinformation on the vaccine, occurs through a variety of different information channels. Some channels of information are more commonly found to spread misinformation. Given the expansive information environment, we sought to characterize the use of different media channels for COVID-19 vaccine information and determine the relationship between information channel and vaccine acceptance. We surveyed a convenience sample of vaccine priority groups (N=2,650) between December 13 and 23, 2020, and conducted bivariate chi-squared tests and multivariable multinomial logistic regression analyses to determine the relative impact of channels of information on vaccine acceptance. We found traditional channels of information, especially national TV, national newspapers, and local newspapers, increased the relative risk of vaccine acceptance. Individuals who received information from traditional media, compared to social media or both traditional and social media, were most likely to accept the vaccine. The implications of this study suggest social media channels have a role to play in educating the hesitant to accept the vaccine, while traditional media channels should continue to promote data-driven and informed vaccine content to their viewers. | public and global health
10.1101/2021.01.19.21249936 | The COVID-19 Infodemic: The complex task of elevating signal and eliminating noise. | In Situation Report #3, 39 days before declaring COVID-19 a pandemic, the WHO declared a COVID-19 infodemic. The volume of coronavirus tweets was far too great for one to find accurate or reliable information. Healthcare workers were flooded with noise, which drowned the signal of valuable COVID-19 information. To combat the infodemic, physicians created healthcare-specific micro-communities to share scientific information with other providers. We analyzed the content of eight physician-created communities and categorized each message into one of five domains. We coded 1) an application programming interface to download tweets and their metadata in JavaScript Object Notation and 2) a reading algorithm using Visual Basic for Applications in Excel to categorize the content. We superimposed the publication date of each tweet onto a timeline of key pandemic events. Finally, we created NephTwitterArchive.com to help healthcare workers find COVID-19-related signal tweets when treating patients. We collected 21071 tweets from the eight hashtags studied. Only 9051 tweets were considered signal: tweets categorized into both a domain and subdomain. There was a trend towards fewer signal tweets as the pandemic progressed, with a daily median of 22% (IQR 0-42%). The most popular subdomain in Prevention was PPE (2448 signal tweets). In Therapeutics, Hydroxychloroquine/chloroquine with/without Azithromycin and Mechanical Ventilation were the most popular subdomains. During the active Infodemic phase (Days 0 to 49), a total of 2021 searches were completed on NephTwitterArchive.com, a 26% increase from the same time period before the pandemic was declared (Days -50 to -1). The COVID-19 Infodemic indicates that future endeavors must be undertaken to eliminate noise and elevate signal in all aspects of scientific discourse on Twitter.
In the absence of any algorithm-based strategy, healthcare providers will be left with the nearly impossible task of manually finding high-quality tweets from amongst a tidal wave of noise. | medical education
10.1101/2021.01.15.21249916 | Simple Laboratory and Clinical Parameters as Predictor for Electrocardiographic Abnormalities Among Hospitalized Patients with Chronic Kidney Disease: A Cross-Sectional Study | BackgroundIn developing countries, electrocardiography (ECG) has not been used widely in most health-care centers. Physicians' ability to refer chronic kidney disease (CKD) patients for ECG often collides with several barriers, including cost. Therefore, we need to formulate the simplest and most efficient model to predict when CKD patients should be referred because of potential ECG abnormalities.
ObjectiveThe aim of this study was to develop several clinical and laboratory parameters as a predictor of any ECG abnormalities.
Materials and MethodsA retrospective cross-sectional study was conducted at Dr. Soetomo General Academic Hospital, Surabaya, Indonesia. Subjects were patients with CKD hospitalized between 1 January and 31 December 2019; 198 CKD patients (101 males) were enrolled. For all patients, demographic information, a detailed clinical profile, a resting 12-lead ECG recording, complete blood count, serum electrolytes and a renal function test profile were collected during admission, and ECG results were interpreted blindly by two cardiologists. Statistical analysis was done with SPSS 17.0.
ResultsA total of 198 patients were included in this study. Mean age was 52.2±11.8 years and fifty-one percent were male. Eighty-eight percent of the 198 patients had an ECG abnormality. The AUCs of hemoglobin level to discriminate poor R wave progression, pathological Q wave, non-specific ST-T changes, and frontal axis deviation were 0.532, 0.641, 0.556 and 0.693, respectively. In multivariate logistic regression analysis, only higher systolic blood pressure was an independent predictor of abnormal ECG findings in CKD patients: as systolic blood pressure increased by one unit, the odds of having an abnormal ECG increased 1.02 times (95% CI: 1.00-1.02, p=0.042).
ConclusionECG abnormalities can be found in hospitalized CKD patients. Fragmented QRS and long QTc were the most prevalent ECG abnormalities in our study. Serum creatinine and hemoglobin could predict peaked T waves and prolonged QTc among hospitalized CKD patients. Systolic blood pressure could predict prolonged QTc and fragmented QRS in CKD patients. | nephrology
10.1101/2021.01.18.21249414 | Temporal Trends in COVID-19 associated AKI from March to December 2020 in New York City | Acute Kidney Injury (AKI) is among the most common complications of Coronavirus Disease 2019 (COVID-19). Throughout 2020 pandemic, the clinical approach to COVID-19 has progressively improved, but it is unknown how these changes have affected AKI incidence and severity. In this retrospective analysis, we report the trend over time of COVID-19 associated AKI and need of renal replacement therapy in a large health system in New York City, the first COVID-19 epicenter in United States. | nephrology |
10.1101/2021.01.16.21249869 | Greek High Phenolic Early Harvest Extra Virgin Olive Oil Reduces the Over-Excitation of Information Flow-based on Dominant Coupling Model in patients with Mild Cognitive Impairment: An EEG Resting-State Validation Approach | ObjectiveThe balance of cross-frequency coupling (CFC) over within-frequency coupling (WFC) can build a nonlinearity index (NI) that encapsulates the over-excitation of information flow between brain areas and across experimental time. The present study investigated for the very first time how the Greek High Phenolic Early Harvest Extra Virgin Olive Oil (HP-EH-EVOO) versus Moderate Phenolic (MP-EVOO) and Mediterranean Diet (MeDi) intervention in people with Mild Cognitive Impairment (MCI) could affect their spontaneous EEG dynamic connectivity.
MethodsForty-three subjects (14 in MeDi, 16 in MP-EVOO and 13 in HP-EH-EVOO) underwent an EEG resting-state recording session (eyes open and eyes closed) before and after the treatment. Following our dominant coupling mode model (DoCM), we built an integrated dynamic functional connectivity graph (iDFCG) that tabulates both the functional strength and the DoCM of every pair of brain areas.
ResultsThe signal spectrum within 1-13 Hz and the theta/beta ratio decreased in the HP-EH-EVOO group in both conditions. FIDoCM improved after the intervention across groups and conditions, most prominently in the HP-EH-EVOO group (p < 0.001). Finally, we revealed a significantly higher post-intervention reduction of NI (ΔNITotal and ) for the HP-EH-EVOO group compared to the MP-EVOO and MeDi groups (p < 0.0001).
ConclusionsLong-term intervention with HP-EH-EVOO reduced the over-excitation of information flow in spontaneous brain activity.
SignificanceOur study confirms the alteration of signal spectrum of EEG rhythms and dominant coupling mode due to the intervention with HP-EH-EVOO nutrition protocol.
Highlights:
- Non-pharmaceutical intervention based on HP-EH-EVOO in MCI reduces the over-excitation of information flow
- Non-pharmaceutical intervention based on HP-EH-EVOO in MCI increases human brain flexibility
- Reconfiguration of dominant coupling modes in EEG resting-state due to the intervention is modulated by alpha frequency | neurology
10.1101/2021.01.18.21250068 | Immune Markers Are Associated with Cognitive Performance in a Multiethnic Cohort: the Northern Manhattan Study | OBJECTIVETo determine whether immune protein panels add significant information to correlates of cognition.
BACKGROUNDImmune mechanisms in vascular cognitive impairment and dementia are incompletely characterized.
DESIGN/METHODSA subsample of the prospective Northern Manhattan Study underwent detailed neuropsychological testing. Cognitive scores were converted into Z-scores and categorized into four domains (memory, language, processing speed, and executive function) based on factor analysis. Blood samples were analyzed using a 60-plex immunoassay. We used least absolute shrinkage and selection operator (LASSO) procedures to select markers and their interactions independently associated with cognitive scores. Linear regression models assessed cross-sectional associations of known correlates of cognition with cognitive scores, and assessed model fit before and after addition of LASSO-selected immune markers.
RESULTSAmong 1179 participants (mean age 70±8.9 years, 60% women, 68% Hispanic), inclusion of LASSO-selected immune markers improved model fit above age, education, and other risk factors (p for likelihood ratio test <0.005 for all domains). C-C Motif Chemokine Ligand 11 (CCL11, eotaxin), C-X-C Motif Chemokine Ligand 9 (CXCL9), hepatocyte growth factor (HGF), and serpin E1 (plasminogen activator inhibitor-1) were associated with each of the domains and with overall cognitive function. Immune marker effects were comparable to conventional risk factors: for executive function, each standard deviation (SD) increase in CCL11 was associated with an effect equivalent to aging three years; for memory, HGF had twice the effect of aging.
CONCLUSIONSImmune markers associate with cognitive function in a multi-ethnic cohort. Further work is needed to validate these findings and determine optimal treatment targets. | neurology |
10.1101/2021.01.18.21249976 | Covid-19 respiratory protection: the filtration efficiency assessment of decontaminated FFP2 masks responding to associated shortages | During the Covid-19 pandemic, healthcare workers were extremely vulnerable to infection with the virus and needed continuous protection. One of the most effective and widely used means of protection was the FFP2 respirator. Unfortunately, this crisis created a shortage of these masks, prompting hospitals to explore opportunities to reuse them after decontamination.
An approach for assessing the filtration efficiency of decontaminated FFP2 masks has been proposed and applied to evaluate the possibilities of their safe reuse. The decontamination processes adopted are those based on moist heat or hydrogen peroxide. The approach introduces efficiency measures that define the filtration and protection capacity of the masks, characterizing both chemical and structural changes, and encompasses many techniques including scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), and thermogravimetric analysis (TGA). The test protocol was applied to mask samples that had undergone different decontamination cycles, and the results of their efficiency measures were compared to the performance of brand-new masks.
The main result was that chemical and structural characterization of the decontaminated masks showed no substantial change or deformation of their filter media structures. Indeed, the respiratory resistance test showed that results for FFP2 masks that had undergone either a hydrogen peroxide disinfection cycle or a steam autoclave cycle remained constant, with a small variation of 10 Pa from the EN149 standard. The chemical characterization, on the other hand, showed that the filter media of the decontaminated masks remained unchanged, with no detectable chemical derivatives in its constituents. | occupational and environmental health
10.1101/2021.01.19.21250090 | Economic Burden of Periodontal Disease in Europe and the United States of America - An updated forecast | IntroductionPeriodontal disease is a pandemic condition and its severe form affects approximately 10% of the global population. Here, we present a forecast for the economic burden of periodontal disease in 32 European countries and in the United States of America (USA).
Material and methodsIn an aggregate population-based cost analysis, taking the most recent available data as reference, we estimated the cost of periodontal disease. Within this framework, global health, dental and periodontal expenditures were estimated. Additionally, indirect estimates accounted for Disability-Adjusted Life Years (DALYs) valued at per capita Gross Domestic Product (GDP) to estimate productivity losses, including those from periodontal disease, edentulism due to periodontal disease and caries due to periodontal disease.
ResultsIn 2018 the aggregate cost in Europe was estimated at €17.00B, and €2.35B more in the USA (€19.35B). Indirect costs due to periodontal disease amounted to €132.52B in European countries and €103.30B in the USA. The majority of the projected indirect costs were due to edentulism related to periodontal disease and to periodontal disease itself. Indirect costs were the major portion of the estimated economic impact, averaging 0.66% of GDP in Europe and 0.50% in the USA. For the overall costs (direct and indirect), the value amounted to 0.75% of GDP in Europe and 0.60% in the USA.
ConclusionPeriodontal disease caused a €149.52B loss in Europe and €122.65B in the USA in 2018. These results show that the economic burden of periodontal disease is increasing.
CLINICAL RELEVANCE Scientific rationale for the study: Considering the pandemic pattern of periodontal diseases, we present a forecast for the economic burden of periodontal disease in 32 European countries and in the United States of America (USA).
Principal findings: Periodontal disease caused a €149.52B loss in Europe and €122.65B in the USA in 2018. For the overall costs (direct and indirect), the value amounted to 0.75% of GDP in Europe and 0.60% in the USA.
Practical implications: These results show that the economic burden of periodontal disease is increasing. | health economics
10.1101/2021.01.19.21249356 | CoAI: Cost-Aware Artificial Intelligence for Health Care | The recent emergence of accurate artificial intelligence (AI) models for disease diagnosis raises the possibility that AI-based clinical decision support could substantially lower the workload of healthcare providers. However, for this to occur, the input data to an AI predictive model, i.e., the patient's features, must themselves be low-cost, that is, efficient, inexpensive, or low-effort to acquire. When time or financial resources for gathering data are limited, as in emergency or critical care medicine, modern high-accuracy AI models that use thousands of patient features are likely impractical. To address this problem, we developed the CoAI (Cost-aware AI) framework to enable any kind of AI predictive model (e.g., deep neural networks, tree ensemble models, etc.) to make accurate predictions given a small number of low-cost features. We show that CoAI dramatically reduces the cost of predicting prehospital acute traumatic coagulopathy, intensive care mortality, and outpatient mortality relative to existing risk scores, while improving prediction accuracy. It also outperforms existing state-of-the-art cost-sensitive prediction approaches in terms of predictive performance, model cost, and training time. Extrapolating these results to all trauma patients in the United States shows that, at a fixed false positive rate, CoAI could alert providers to tens of thousands more dangerous events than other risk scores while reducing providers' data-gathering time by about 90 percent, leading to a savings of 200,000 cumulative hours per year across all providers. We extrapolate similar increases in clinical utility for CoAI in intensive care. These benefits stem from several unique strengths: First, CoAI uses axiomatic feature attribution methods that enable precise estimation of feature importance.
Second, CoAI is model-agnostic, allowing users to choose the predictive model that performs the best for the prediction task and data at hand. Finally, unlike many existing methods, CoAI finds high-performance models within a given budget without any tuning of the cost-vs-performance tradeoff. We believe CoAI will dramatically improve patient care in the domains of medicine in which predictions need to be made with limited time and resources. | health informatics |
10.1101/2021.01.18.21249433 | The burden of nosocomial covid-19: results from the Wales multi-centre retrospective observational study of 2518 hospitalised adults. | ObjectivesTo define the burden of nosocomial (hospital-acquired) novel pandemic coronavirus (covid-19) infection among adults hospitalised across Wales.
DesignRetrospective observational study of adult patients with polymerase chain reaction (PCR)-confirmed SARS-CoV-2 infection between 1st March - 1st July 2020 with a recorded hospital admission within the subsequent 31 days. Outcomes were collected up to 20th November using a standardised online data collection tool.
SettingService evaluation performed across 18 secondary or tertiary care hospitals.
Participants4112 admissions with a positive SARS-CoV-2 PCR result between 1st March and 1st July 2020 were screened. Anonymised data from 2518 participants were returned, representing over 60% of adults hospitalised across the nation of Wales.
Main outcome measuresThe prevalence and outcomes (death, discharge) of nosocomial covid-19, assessed across a range of possible case definitions.
ResultsInpatient mortality rates for nosocomial covid-19 ranged from 38% to 42% and remained consistently higher than those for participants with community-acquired infection (31% to 35%) across a range of case definitions. Participants with nosocomial infection were an older, frailer, and more multi-morbid population than those with community-acquired infection. Based on the Public Health Wales case definition, 50% of participants had been admitted for 30 days prior to diagnostic testing.
ConclusionsThis represents the largest assessment of clinical outcomes for patients with nosocomial covid-19 in the UK to date. These findings suggest that inpatient mortality rates from nosocomial infection are likely higher than previously reported, emphasize the importance of infection control measures, and support prioritisation of vaccination for covid-19-negative admissions and trials of post-exposure prophylaxis in inpatient cohorts.
Trial registrationThis project was approved and sponsored by the Welsh Government, as part of a national audit and quality improvement scheme for patients hospitalised covid-19 across Wales.
Key Messages What is already known on this topic: We searched PubMed and ISI Web of Science up until 31 December 2020 for studies reporting on patient outcomes following hospital-acquired infection due to the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). We identified a range of case definitions for hospital-acquired infection, based on timing of diagnostic testing 5 to 15 days following admission. The largest, and only multi-centre, study concluded individuals with nosocomial infection are at a lower risk of death from SARS-CoV-2 than those infected in the community; however, it was performed early in the pandemic and utilised a conservative definition of nosocomial infection.
What this study adds: Our multi-centre observational study represents the largest assessment of clinical outcomes for patients with nosocomial covid-19 in the UK to date, and suggests the burden of nosocomial SARS-CoV-2 infection has been underestimated. Nosocomial infection occurred in older, frailer, and multi-morbid individuals, and was consistently associated with greater inpatient mortality than infection acquired in the community across a spectrum of case definitions. Our findings support implementation of enhanced infection control measures to reduce this burden during future waves, especially given the recent emergence of novel viral variants with enhanced transmissibility. Furthermore, roughly half of the patients meeting the Public Health Wales definition of definite nosocomial SARS-CoV-2 infection had been admitted for 30 days prior to diagnosis, highlighting a potential window of opportunity for inpatient pre-exposure and/or post-exposure prophylaxis. | health systems and quality improvement
10.1101/2021.01.19.20248560 | Modelling pooling strategies for SARS-CoV-2 testing in a university setting | Pre-symptomatic and asymptomatic transmission of SARS-CoV-2 are important elements in the Covid-19 pandemic, and until vaccines are made widely available there remains a reliance on testing to manage the spread of the disease, alongside non-pharmaceutical interventions such as measures to reduce close social interactions. In the UK, many universities opened for blended learning for the 2020-2021 academic year, with a mixture of face-to-face and online teaching. In this study we present a simulation framework to evaluate the effectiveness of different asymptomatic testing strategies within a university setting, across a range of transmission scenarios. We show that when positive cases are clustered by known social structures, such as student households, the pooling of samples by these social structures can substantially reduce the total cost of conducting RT-qPCR tests. We also note that routine recording of quantitative RT-qPCR results would facilitate future modelling studies. | infectious diseases
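The pooling record above reports cost savings but gives no formulas. As a hedged illustration only (classic Dorfman two-stage pooling with a perfect test and independent infection status, not necessarily the simulation framework the study used; clustering by household, as the study exploits, would make pooling perform even better than this independence assumption suggests), the expected number of RT-qPCR tests per person can be sketched as:

```python
def expected_tests_per_person(p, k):
    """Dorfman two-stage pooling: one pooled test per k people,
    plus k individual retests whenever the pool tests positive.
    Assumes a perfect test and independent infection status."""
    prob_pool_positive = 1.0 - (1.0 - p) ** k
    return 1.0 / k + prob_pool_positive

# At 1% prevalence, pools of 10 need ~0.20 tests per person,
# i.e. roughly an 80% reduction versus individual testing.
for k in (5, 10, 20):
    print(k, round(expected_tests_per_person(0.01, k), 3))
```

Note that at high prevalence nearly every pool is positive and pooling costs more than individual testing, which is why such strategies suit asymptomatic screening at low prevalence.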
10.1101/2021.01.19.21249840 | Impact of SARS-CoV-2 B.1.1.7 Spike variant on neutralisation potency of sera from individuals vaccinated with Pfizer vaccine BNT162b2 | Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) transmission is uncontrolled in many parts of the world, compounded in some areas by the higher transmission potential of the B.1.1.7 variant now seen in 50 countries. It is unclear whether responses to SARS-CoV-2 vaccines based on the prototypic strain will be impacted by mutations found in B.1.1.7. Here we assessed immune responses following vaccination with the mRNA-based vaccine BNT162b2. We measured neutralising antibody responses following a single immunization using pseudoviruses expressing the wild-type Spike protein or the 8 amino acid mutations found in the B.1.1.7 spike protein. The vaccine sera exhibited a broad range of neutralising titres against the wild-type pseudoviruses that were modestly reduced against the B.1.1.7 variant. This reduction was also evident in sera from some convalescent patients. Decreased B.1.1.7 neutralisation was also observed with monoclonal antibodies targeting the N-terminal domain (9 out of 10) and the Receptor Binding Motif (RBM) (5 out of 31), but not in neutralising mAbs binding outside the RBM. Introduction of the E484K mutation in a B.1.1.7 background, to reflect newly emerging viruses in the UK, led to a more substantial loss of neutralising activity by vaccine-elicited antibodies and mAbs (19 out of 31) over that conferred by the B.1.1.7 mutations alone. E484K emergence on a B.1.1.7 background represents a threat to the vaccine BNT162b2. | infectious diseases
10.1101/2021.01.19.20248611 | Safety and immunogenicity of SARS-CoV-2 recombinant protein vaccine formulations in healthy adults: a randomised, placebo-controlled, dose-ranging study | BackgroundEffective vaccines against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) are urgently needed. CoV2 preS dTM is a stabilised pre-fusion S protein vaccine produced in a baculovirus expression system. We present interim safety and immunogenicity results of the first-in-human study of the CoV2 preS dTM vaccine with two different adjuvant formulations.
MethodsThis Phase I/II, randomised, double-blind study (NCT04537208) is being conducted in healthy, SARS-CoV-2-seronegative adults in the USA. Participants were stratified by age (18-49 and ≥50 years) and randomised to receive one (on Day [D] 1) or two doses (D1, D22) of placebo or candidate vaccine, containing: low-dose (LD, effective dose 1.3 µg) or high-dose (HD, 2.6 µg) antigen with adjuvant AF03 (Sanofi Pasteur) or AS03 (GlaxoSmithKline); or unadjuvanted HD (18-49 years only). Safety was assessed up to D43. SARS-CoV-2 neutralising and binding antibody profiles were assessed in D1, D22 and D36 serum samples.
FindingsThe interim safety analyses included 439/441 randomised participants. There were no related unsolicited immediate AEs, serious AEs, medically attended AEs classified as severe, or AEs of special interest. More grade 3 solicited reactions than expected were reported after the second dose in the adjuvanted vaccine groups. Neutralising and binding antibody responses after two vaccine doses were higher in adjuvanted versus unadjuvanted groups, in AS03- versus AF03-adjuvanted groups, in HD versus LD groups, and in younger versus older age strata.
InterpretationThe lower than expected immune responses, especially in the older age stratum, and the higher than anticipated reactogenicity post dose 2 were likely due to a higher than anticipated host cell protein content and lower than planned antigen dose in the clinical material. Further development of the AS03-adjuvanted candidate vaccine will focus on identifying the optimal antigen formulation and dose. | infectious diseases |
10.1101/2021.01.19.21249921 | A comprehensive antigen production and characterization study for easy-to-implement, highly specific and quantitative SARS-CoV-2 antibody assays | Antibody tests are essential tools to investigate humoral immunity following SARS-CoV-2 infection. While first-generation antibody tests have primarily provided qualitative results with low specificity, accurate seroprevalence studies and tracking of antibody levels over time require highly specific, sensitive and quantitative test setups. Here, we describe two quantitative ELISA antibody tests based on the SARS-CoV-2 spike receptor-binding domain and the nucleocapsid protein. Comparative expression in bacterial, insect, mammalian and plant-based platforms enabled the identification of new antigen designs with superior quality and high suitability as diagnostic reagents. Both tests scored excellently in clinical validations with multi-centric specificity and sensitivity cohorts and showed unprecedented correlation with SARS-CoV-2 neutralization titers. Orthogonal testing increased assay specificity to 99.8%, thereby enabling robust serodiagnosis in low-prevalence settings. The inclusion of a calibrator permits accurate quantitative monitoring of antibody concentrations in samples collected at different time points during the acute and convalescent phase of COVID-19. | infectious diseases |
10.1101/2021.01.19.21250094 | Diagnosis of SARS-Cov-2 infection using specimens other than naso- and oropharyngeal swabs: a systematic review and meta-analysis | BackgroundThe rapid and accurate testing of SARS-CoV-2 infection is still crucial to mitigate, and eventually halt, the spread of this disease. Currently, nasopharyngeal swab (NPS) and oropharyngeal swab (OPS) are the recommended standard sampling methods, yet with some limitations. Several specimens that are easier to collect are being tested as alternatives to nasal/throat swabs in nucleic acid assays for SARS-CoV-2 detection. This study aims to critically appraise and compare the clinical performance of RT-PCR tests using oral saliva, deep-throat saliva/posterior oropharyngeal saliva (DTS/POS), sputum, urine, feces, and tears/conjunctival swab (CS) against standard specimens (NPS, OPS, or a combination of both).
MethodsIn this systematic review and meta-analysis, five databases (PubMed, Scopus, Web of Science, ClinicalTrials.gov and NIPH Clinical Trial) were searched up to the 30th of December 2020. Case-control and cohort studies on the detection of SARS-CoV-2 were included. Methodological quality was assessed through the Quality Assessment of Diagnostic Accuracy Studies 2 tool (QUADAS-2).
FindingsWe identified 3022 entries, 33 of which (1.1%) met all required criteria and were included in the quantitative data analysis. Saliva presented the highest accuracy, 92.1% (95% CI: 70.0-98.3), with an estimated sensitivity of 83.9% (95% CI: 77.4-88.8) and specificity of 96.4% (95% CI: 89.5-98.8). DTS/POS samples had an overall accuracy of 79.7% (95% CI: 43.3-95.3), with an estimated sensitivity of 90.1% (95% CI: 83.3-96.9) and specificity of 63.1% (95% CI: 36.8-89.3). The remaining index specimens presented uncertainty given the lack of studies available.
InterpretationOur meta-analysis shows that saliva samples from the oral region provide high sensitivity and specificity, being the best candidate as an alternative specimen to NPS/OPS for COVID-19 detection, with suitable protocols for swab-free sample collection to be determined and validated in the future. The distinction between oral and extra-oral salivary samples will be crucial, since DTS/POS samples may induce a higher rate of false positives. Urine, feces, tears/CS and sputum seem unreliable for diagnosis. Saliva testing may increase testing capacity, ultimately promoting the implementation of truly deployable COVID-19 tests, which could either work at the point-of-care (e.g. hospitals, clinics) or outbreak control spots (e.g. schools, airports, and nursing homes).
FundingNothing to declare.
Research in context. Evidence before this study: Systematized data on the accuracy performance of alternative specimens for the detection of SARS-CoV-2 (against the standard NPS/OPS) were lacking. The ever-growing number of studies available made this updated systematic review timely and of the utmost importance.
Added value of this studyOur meta-analysis shows that saliva samples from the oral region provide a high sensitivity and specificity, being the best candidate as an alternative specimen to NPS/OPS for COVID-19 detection, with suitable protocols for swab-free sample collection to be determined and validated in the future. The distinction between oral and extra-oral salivary samples will be crucial since DTS/POS samples may induce a higher rate of false positives.
Implications of all the available evidenceSaliva samples simply taken from the oral cavity are promising alternatives to the currently used nasal/throat swabs. Saliva specimens can be self-collected, mitigate the discomfort caused by sampling, reduce the transmission risk and increase testing capacity. Therefore, the validation of this alternative specimen will promote the implementation of truly deployable rapid tests for SARS-CoV-2 detection at the point-of-care or outbreak spots. | infectious diseases |
10.1101/2021.01.16.21249946 | Immunisation, asymptomatic infection, herd immunity and the new variants of COVID-19 | ObjectivesIs "herd immunity" to COVID-19 a realistic outcome of any immunisation programme with the two main vaccines currently licenced in the UK (Pfizer vaccine BNT162b2 and AstraZeneca/Oxford vaccine ChAdOx1-S)? More formally, can these vaccines achieve a sufficient level of population immunity to reduce R, the reproduction number of the infection, to below one in the absence of any non-pharmaceutical interventions?
DesignThe study uses simple mathematical models of the transmission of COVID-19 infection from primary to secondary cases parameterised using data on virus transmission and vaccine efficacy from the literature and the regulatory approval process for the vaccines.
ResultsIn the regulatory approval documents, the efficacy of the Pfizer vaccine is estimated at 0.948 (that for the Moderna vaccine is similar). Efficacy for the Oxford vaccine against primary symptomatic illness is estimated as 0.704, based on pooling of data from two dose regimes. For values of R0 similar to those reported during the first months of the pandemic, the simplest analysis implies that reducing the value of R below 1 would require 69% and 93% of the population to be vaccinated with the Pfizer and Oxford vaccine respectively (or achieve a comparable level of immunity through natural infection). However, the new variant of COVID-19 (Lineage B.1.1.7, named Variant of Concern VOC-202012/01) is reported to have an R-value 1.56 (0.92 to 2.28) times higher than the original strain. Vaccinating the entire population with the Oxford vaccine would only reduce the R value to 1.325 while the Pfizer vaccine would require 82% of the population to be vaccinated to control the spread of the new variant.
The Oxford vaccine reduces the incidence of serious illness to a greater extent than it reduces symptomatic illness. But its efficacy against the incidence of asymptomatic infections is lower, reducing its efficacy against all infection from 0.704 to 0.525 for the pooled data. Although asymptomatics are less infectious, including them in our calculations still raises R values by 20% or more, from 1.33 to 1.6 for the new variant with 100% vaccination. Neither vaccine is licenced for use in children, and when this is taken into account, this R value rises by a further 37% to 2.2 if the whole adult population is vaccinated. Even the more effective mRNA vaccines may allow the pandemic to persist via transmission amongst children, as current authorisations only allow their use on adults. In the absence of vaccination, R will reduce to 1 when 89% of the population has acquired immunity as a result of previous infection with COVID-19.
ConclusionsAll currently licensed vaccines provide substantial protection against serious illness to vaccinated individuals themselves. But the Oxford vaccine appears to have relatively low efficacy against asymptomatic infections. Although no comparable data from human trials are available for the mRNA vaccines, non-human primate studies suggest they are better at preventing nasal shedding and so transmission. Herd immunity to COVID-19 will be very difficult to achieve, especially so for the less effective vaccine. The possibility of transmission from vaccinated but infected individuals to vulnerable unvaccinated individuals is of serious concern. There is a strong case for preferring the more effective mRNA vaccines for health and social care workers and those who have contact with large numbers of vulnerable others. | infectious diseases |
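The vaccination thresholds in the record above follow from the standard critical-coverage formula p_c = (1 - 1/R0)/VE, where VE is vaccine efficacy. A minimal sketch, assuming R0 ≈ 2.89 (a value back-derived from the reported 69% figure for VE = 0.948; the abstract does not state R0 explicitly), reproduces the headline numbers:

```python
def critical_coverage(R0, VE):
    """Fraction of the population that must be vaccinated to push
    the reproduction number below 1: p_c = (1 - 1/R0) / VE."""
    return (1.0 - 1.0 / R0) / VE

R0 = 2.89               # assumption: back-derived from the reported 69% at VE = 0.948
pfizer, oxford = 0.948, 0.704

print(round(critical_coverage(R0, pfizer), 2))   # ~0.69 (abstract: 69%)
print(round(critical_coverage(R0, oxford), 2))   # ~0.93 (abstract: 93%)

# B.1.1.7: transmissibility multiplied by the reported central factor of 1.56
R0_variant = R0 * 1.56
print(round(critical_coverage(R0_variant, pfizer), 2))   # ~0.82 (abstract: 82%)
# Vaccinating everyone with the Oxford vaccine still leaves R above 1:
print(round(R0_variant * (1.0 - oxford), 2))             # ~1.33 (abstract: 1.325)
```

The small discrepancy in the last figure reflects rounding in the assumed R0; the qualitative conclusion that full Oxford-vaccine coverage cannot bring the variant's R below 1 is unchanged.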
10.1101/2021.01.18.20248911 | Seroepidemiological Study of Novel Corona Virus (CoVID-19) in Tehran, Iran | A novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has now spread to all countries of the world, including Iran. Although new anti-coronavirus antibodies in patients may be identified by immunological methods with sufficient sensitivity and specificity, conclusive diagnosis of the disease is by the molecular RT-PCR process. We used a population-based seroepidemiological survey to quantify the proportion of the exposed population with SARS-CoV-2 antibodies and evaluated whether the antibodies are a marker of total or partial immunity and what proportion of the population remains susceptible to the virus. This cross-sectional study was conducted to investigate the seroprevalence of CoVID-19 in Tehran, the capital of Iran, between April and the end of October 2020. Specimens of clotted and heparinized blood (2 ml) were collected from the patients. The serum and plasma were separated and stored at -80°C until use. We examined serum anti-SARS-CoV-2 IgG and IgM antibodies from 1375 in-patients admitted to our hospitals using ELISA kits. In total, 1375 participants were enrolled in this study, and SARS-CoV-2 antibodies were detected using IgM-IgG antibody assay in 291 patients. Among the seropositive patients studied, 187 were men (64.3%) and 104 were women (35.7%) (P<0.05). The mean age of the patients was 49±8.4 years; the majority (27%) were in the age group 31-40 years. Also, the lowest frequency of cases was reported in the age group of 1-10 years (P<0.05). We determined the seroprevalence of SARS-CoV-2 for IgM or IgG antibodies to be 21.2%. Diabetes mellitus was the most common underlying disease among SARS-CoV-2 patients [P=0.05; Odds Ratio=1.61 (0.90-2.91)].
Conventional serological assays in SARS-CoV-2 cases, such as the enzyme-linked immunoassay (ELISA) for specific IgM and IgG antibodies, have a high-throughput advantage and minimize false-negative rates that occur with the RT-PCR method. This study determined the seroprevalence of SARS-CoV-2 antibodies to be 21%. Control of diabetes, among other factors, will play an important role in the management and control of COVID-19. | epidemiology
10.1101/2021.01.19.20241844 | Factors associated with COVID-19 related hospitalisation, critical care admission and mortality using linked primary and secondary care data | BackgroundTo identify risk factors associated with increased risk of hospitalisation, intensive care unit (ICU) admission and mortality in inner North East London (NEL) during the first UK COVID-19 wave.
MethodsMultivariate logistic regression analysis on linked primary and secondary care data from people aged 16 or older with confirmed COVID-19 infection between 01/02/2020-30/06/2020 determined odds ratios (OR), 95% confidence intervals (CI) and p-values for the association between demographic, deprivation and clinical factors with COVID-19 hospitalisation, ICU admission and mortality.
ResultsOver the study period 1,781 people were diagnosed with COVID-19, of whom 1,195 (67%) were hospitalised, 152 (9%) admitted to ICU and 400 (23%) died. Results confirm previously identified risk factors: being male, of Black or Asian ethnicity, or aged over 50. Obesity, type 2 diabetes and chronic kidney disease (CKD) increased the risk of hospitalisation. Obesity increased the risk of being admitted to ICU. Underlying CKD, stroke and dementia increased the risk of death. Having learning disabilities was strongly associated with increased risk of death (OR=4.75, 95% CI=(1.91, 11.84), p=0.001). Having three or four co-morbidities increased the risk of hospitalisation (OR=2.34, 95% CI=(1.55, 3.54), p<0.001; OR=2.40, 95% CI=(1.55, 3.73), p<0.001 respectively) and death (OR=2.61, 95% CI=(1.59, 4.28), p<0.001; OR=4.07, 95% CI=(2.48, 6.69), p<0.001 respectively).
ConclusionsWe confirm that age, sex, ethnicity, obesity, CKD and diabetes are important determinants of risk of COVID-19 hospitalisation or death. For the first time, we also identify people with learning disabilities and multi-morbidity as additional patient cohorts that need to be actively protected during COVID-19 waves. | epidemiology |
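The odds ratios above come from multivariate logistic regression. As a generic hedged sketch (not the study's model or data; the coefficient and standard error below are hypothetical values chosen only to reproduce the reported learning-disabilities estimate), an OR and its 95% CI are recovered from a log-odds coefficient beta and its standard error as exp(beta) and exp(beta ± 1.96·SE):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient (log-odds scale)
    and its standard error into an odds ratio with a 95% CI."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical inputs: beta = 1.558, SE = 0.466 give roughly the
# OR = 4.75 (1.91, 11.84) reported for learning disabilities.
or_, lo, hi = odds_ratio_ci(1.558, 0.466)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # ~4.75 1.91 11.84
```

Note the CI is symmetric on the log scale, which is why the reported intervals look skewed on the OR scale.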
10.1101/2021.01.08.20248677 | Epidemic waves of COVID-19 in Scotland: a genomic perspective on the impact of the introduction and relaxation of lockdown on SARS-CoV-2 | The second SARS virus, SARS-CoV-2, emerged in December 2019, and within a month was globally distributed. It was first introduced into Scotland in February 2020, associated with returning travellers and visitors. By March it was circulating in communities across the UK, and to control COVID-19 cases and prevent overwhelming of the National Health Service (NHS), a lockdown was introduced on 23rd March 2020 with a restriction of people's movements. To augment the public health efforts, a large-scale genome epidemiology effort (as part of the COVID-19 Genomics UK (COG-UK) consortium) resulted in the sequencing of over 5000 SARS-CoV-2 genomes by 18th August 2020 from Scottish cases, about a quarter of the estimated number of cases at that time. Here we quantify the geographical origins of the first wave introductions into Scotland from abroad and other UK regions, the spread of these SARS-CoV-2 lineages to different regions within Scotland (defined at the level of NHS Health Board) and the effect of lockdown on virus success. We estimate that approximately 300 introductions seeded lineages in Scotland, with around 25% of these lineages composed of more than five viruses, but by June circulating lineages were reduced to low levels, in line with low numbers of recorded positive cases. Lockdown was thus associated with a dramatic reduction in infection numbers and the extinguishing of most virus lineages. Unfortunately, since the summer, cases have been rising in Scotland in a second wave, with >1000 people testing positive on a daily basis and hospitalisation of COVID-19 cases on the rise again.
Examining the available Scottish genome data from the second wave, and comparing it to the first wave, we find that while some UK lineages have persisted through the summer, the majority of lineages responsible for the second wave are new introductions from outside of Scotland, many from outside of the UK. This indicates that, while lockdown in Scotland is directly linked with the first wave case numbers being brought under control, travel-associated imports (mostly from Europe or other parts of the UK) following the easing of lockdown are responsible for seeding the current epidemic population. This demonstrates that the impact of stringent public health measures can be compromised if, following this, movements from regions of high to low prevalence are not minimised. | epidemiology
10.1101/2021.01.18.21250035 | Nationwide large-scale data of acute lower gastrointestinal bleeding in Japan uncover detailed etiologies and relevant outcomes: CODE BLUE J-Study | BackgroundThe value of endoscopy for acute lower GI bleeding (ALGIB) remains unclear, given the scarcity of large cohort studies. We aim to provide detailed clinical data for ALGIB management and to identify patients at risk of adverse outcomes based on endoscopic diagnosis.
MethodsWe conducted a multicenter, retrospective cohort study, named CODE BLUE J-Study, in 49 hospitals throughout Japan and studied 10,342 cases admitted for outpatient-onset of acute hematochezia.
ResultsCases were mostly elderly; 29.5% had hemodynamic instability and 60.1% had comorbidities. CT and colonoscopy were performed in 69.1% and 87.7% of cases, respectively. The diagnostic yield of colonoscopy reached 94.9%, revealing 48 etiologies, most frequently diverticular bleeding. During hospitalization, the endoscopic therapy rate was 32.7%, mostly using clipping and band ligation. IVR and surgery were infrequently performed, in 2.1% and 1.4% of cases, respectively. In-hospital rebleeding and death occurred in 15.2% and 0.9%. Diverticular bleeding cases had higher rates of hemodynamic instability, rebleeding, endoscopic therapy, IVR, and transfusion, but lower rates of death and surgery than other etiologies. Small bowel bleeding cases had significantly higher rates of surgery, IVR, and transfusion than other etiologies. Malignancy or upper GIB cases had significantly higher rates of thromboembolism and death than other etiologies. Etiologies with favorable outcomes were ischemic colitis, infectious colitis, and post-endoscopy bleeding.
ConclusionsLarge-scale data on patients with acute hematochezia revealed high proportions of colonoscopy and CT use, resulting in high endoscopic therapy rates. We highlight the importance of colonoscopy in accurately identifying bleeding etiologies, which stratifies patients at high or low risk of adverse outcomes. | gastroenterology
10.1101/2021.01.17.21249965 | Antipsychotics effects on network-level reconfiguration of cortical morphometry in first-episode schizophrenia | BackgroundCortical thickness reductions are evident in patients with schizophrenia. Associations between antipsychotic medications (APMs) and cortical morphometry have been explored in schizophrenia patients. This raises the question of whether the reconfiguration of morphological architecture by APM plays potential compensatory roles for abnormalities in the cerebral cortex.
MethodsStructural MRI scans were obtained from 127 medication-naive first-episode schizophrenia (FES) patients and 133 matched healthy controls. Patients received 12 weeks of APM and were categorized as responders (n=75) or nonresponders (n=52) at follow-up. Using surface-based morphometry and structural covariance analysis, this study investigated the short-term effects of antipsychotics on cortical thickness and cortico-cortical connectivity. Global efficiency was computed to characterize network integration of the large-scale structural connectome. The relationship between connectivity and cortical thinning was examined by structural covariance analysis among the top-n regions with thickness reduction.
ResultsWidespread cortical thickness reductions were observed in pre-APM patients. Post-APM patients showed more reductions in cortical thickness, even in the frontotemporal regions without baseline reductions. Covariance analysis revealed strong cortico-cortical connectivity and higher network integration in responders than in nonresponders. Notably, the nonresponders lacked key nodes of the prefrontal and temporal regions for the covariance network between top-n regions with cortical thickness reductions.
ConclusionsAntipsychotic effects are not restricted to a single brain region but rather exhibit a network-level covariance pattern. Neuroimaging connectomics highlights the positive effects of antipsychotics on the reconfiguration of brain architecture, suggesting that abnormalities in regional morphology may be compensated by increasing interregional covariance when symptoms are controlled by antipsychotics. | psychiatry and clinical psychology |
10.1101/2021.01.17.21249913 | Seroprevalence of anti-SARS-CoV-2 antibodies in Iquitos, Loreto, Peru. | BackgroundDetection of SARS-CoV-2 antibodies among people at risk is critical for understanding both the prior transmission of COVID-19 and vulnerability of the population to the continuing transmission and, when done serially, the intensity of ongoing transmission over an interval in a community. In this study, we estimated the seroprevalence of COVID-19 in a representative population-based cohort of Iquitos, one of the regions with the highest mortality rates from COVID-19 in Peru, where a devastating number of cases occurred in March 2020.
MethodsWe conducted a population-based study of transmission, testing each participant using the COVID-19 IgG/IgM Rapid Test from Orient Gene Biotech, and used survey analysis methods to estimate seroprevalence accounting for the sampling design effect and test performance characteristics. Here we report results from the baseline (13 to 18 July 2020) and first month of follow-up (13 to 18 August 2020) rounds of the study.
FindingsWe enrolled a total of 716 participants and estimated a seroprevalence of 70.0% (95% CI: 67.0%-73.4%), a test-retest positivity of 65% (95% CI: 61.0%-68.3%), and an incidence of new exposures of 1.8% (95% CI: 0.9%-3.2%), data that suggest transmission is ongoing but occurring at low levels. We observed significant differences in seroprevalence between age groups, with participants 18 to 29 years of age having lower seroprevalence than children <12 years of age (prevalence ratio [PR] = 0.85; 95% CI: 0.73-0.98), suggesting that children were not refractory to infection in this setting.
InterpretationIquitos demonstrates one of the highest rates of seroprevalence of COVID-19 worldwide. Current data shows a limited case burden in Iquitos for the past seven months and suggests that these levels are sufficient to provide significant but incomplete herd immunity.
FundingDireccion Regional de Salud de Loreto, DIRESA, Loreto, Peru | public and global health |
10.1101/2021.01.19.21250085 | Incidence and Relative Risk of infection with SARS-CoV-2 virus (Covid-19) in European Soccer Players. | The purpose of the present study was to investigate the incidence and relative risk of infection with the Covid-19 virus in soccer players. Data from five leagues were used and compared to data from the general population in each country. Our results revealed that the relative risk was higher in soccer players in three countries when correcting for the estimated true number of infected people in the populations. We discuss that the higher incidence in soccer players is likely caused by the virus entering a group of players who work closely together. | public and global health