id | title | abstract | category |
---|---|---|---|
10.1101/2022.03.28.22273059 | Engagement of health workers and peer educators from the National Adolescent Health Programme-Rashtriya Kishor Swasthya Karyakram during the COVID-19 pandemic: findings from a situational analysis | BackgroundTo understand the impact of COVID-19 on implementation of the peer education programme of the National Adolescent Health Programme-Rashtriya Kishor Swasthya Karyakram (RKSK); the repurposing of RKSK health workers and Peer Educators (PEs) in COVID-19 response activities; and adolescents' health and development issues.
MethodsVirtual in-depth interviews were conducted with stakeholders (n=31) (aged 15 to 54) engaged in the implementation of the peer education programme at state, district, block, and village levels in Madhya Pradesh and Maharashtra (India). These interviews were thematically coded and analysed to address the research objectives.
ResultsDespite most peer education programme activities being stopped, delayed, or disrupted during the pandemic and subsequent lockdown, some previously established communication networks helped facilitate public health communication regarding COVID-19 and RKSK between health workers, peer educators, and adolescents. There was significant repurposing of RKSK health workers' and Peer Educators' roles towards COVID-19 response-related activities. Peer educators, with support from health workers, were involved in disseminating COVID-19 information, maintaining migrant and quarantine records, conducting household surveys to identify active COVID-19 cases, and providing essential items (groceries, sanitary napkins, etc.) to communities and adolescents.
ConclusionPeer Educators, with support from community health workers, are able to play a crucial role in meeting the needs of communities during a pandemic. There is a need to further engage, involve and build the skills of Peer Educators to support the health system. PEs can be encouraged by granting them greater visibility and by formally incorporating and remunerating their role within the public health system in India. | public and global health |
10.1101/2022.03.28.22272903 | Ensuring a successful transition from Pap to HPV-based primary screening in Canada: a study protocol to investigate the psychosocial correlates of women's screening intentions | IntroductionThe Human Papillomavirus (HPV) test has emerged as a significant improvement over cytology for primary cervical cancer screening. In Canada, provinces and territories are moving towards implementing HPV testing in cervical cancer screening programs. While an abundance of research exists on the benefits of HPV-based screening, there is a dearth of research examining women's understanding of HPV testing. In other countries, failure to adequately address women's concerns about changes has disrupted implementation of HPV-based screening. This study protocol describes a multi-step approach to develop psychometrically valid measures and to investigate psychosocial correlates of women's intentions to participate in HPV-based cervical cancer screening.
Materials and MethodsWe conducted a web-based survey of Canadian women to assess the acceptability and feasibility of a questionnaire, including validation of scales examining: cervical cancer knowledge, HPV testing knowledge, HPV testing attitudes and beliefs, and HPV test self-sampling attitudes and beliefs. Preferences for cervical cancer screening were assessed using Best-Worst Scaling methodology.
A second web-based survey will be administered to a national sample of Canadian women in June-July of 2022 using the validated scales. Differences in the knowledge, attitudes, beliefs, and preferences of women who are currently either underscreened or adequately screened for cervical cancer will be examined through bivariate analyses. Multinomial logistic regression will be used to estimate the associations between psychosocial and sociodemographic factors and intentions to screen using HPV-based screening.
Study Impact and DisseminationFindings will provide direction for Canadian public health authorities to align guidelines to address women's concerns and optimize acceptability and uptake of HPV-based primary screening. Validated scales can be used by other researchers to improve and standardize measurement of psychosocial factors impacting HPV test acceptability. Study results will be disseminated through peer-reviewed journal articles, conference presentations, and direct communication with researchers, clinicians, policymakers, media, and specialty organizations. | public and global health |
10.1101/2022.03.31.22269942 | Tocilizumab, netakimab, and baricitinib in patients with mild-to-moderate COVID-19: a retrospective cohort study | BackgroundThe aim of the study was to assess inflammatory markers and clinical outcomes in adult patients admitted to hospital with mild-to-moderate COVID-19 and treated with targeted immunosuppressive therapy using anti-IL-17A (netakimab), anti-IL-6R (tocilizumab), or JAK1/JAK2 inhibitor (baricitinib) or with standard-of-care (SOC) therapy.
MethodsThe retrospective, observational cohort study included 154 adults hospitalized between February and August 2020 with RT-PCR-confirmed SARS-CoV-2, a National Early Warning Score 2 (NEWS2) < 7, and C-reactive protein (CRP) levels ≤ 140 mg/L on the day of the start of the therapy or observation. Patients were divided into the following groups: I) 4 mg baricitinib, 1 or 2 times a day for an average of 5 days (n = 38); II) 120 mg netakimab, one dose (n = 48); III) 400 mg tocilizumab, one dose (n = 34); IV) SOC: hydroxychloroquine, antiviral, antibacterial, anticoagulant, and dexamethasone (n = 34).
FindingsCRP levels significantly decreased after 72 h in the tocilizumab (p = 1 × 10^-5) and netakimab (p = 8 × 10^-4) groups and remained low after 120 h. The effect was stronger with tocilizumab compared to other groups (p = 0.028). A significant decrease in lactate dehydrogenase (LDH) levels was observed 72 h after netakimab therapy (p = 0.029). NEWS2 scores significantly improved 72 h after tocilizumab (p = 6.8 × 10^-5) and netakimab (p = 0.01) therapy, and 120 h after the start of tocilizumab (p = 8.6 × 10^-5), netakimab (p = 0.001), or baricitinib (p = 4.6 × 10^-4) therapy, but not in the SOC group. Blood neutrophil counts (p = 6.4 × 10^-4) and neutrophil-to-lymphocyte ratios (p = 0.006) significantly increased 72 h after netakimab therapy and remained high after 120 h. The percentage of patients discharged 5-7 days after the start of therapy was higher in the tocilizumab (44.1%) and netakimab (41.7%) groups than in the baricitinib (31.6%) and SOC (23.5%) groups. Compared to SOC (3/34, 8.8%), mortality was lower in the netakimab (0/48, 0%, RR = 0.1 (95% CI: 0.0054 to 1.91)), tocilizumab (0/34, 0%, RR = 0.14 (95% CI: 0.0077 to 2.67)), and baricitinib (1/38, 2.6%, RR = 0.3 (95% CI: 0.033 to 2.73)) groups.
InterpretationIn hospitalized patients with mild-to-moderate COVID-19, anti-IL-17A or anti-IL-6R therapy was superior or comparable to the JAK1/JAK2 inhibitor, and all three were superior to SOC. Whereas previous studies did not demonstrate significant benefit of anti-IL-17A therapy for severe COVID-19, our data suggest that such therapy could be a rational choice for mild-to-moderate disease, considering the generally high safety profile of IL-17A blockers. The significant increase in blood neutrophil counts in the netakimab group may reflect efflux of neutrophils from inflamed tissues. We therefore hypothesize that neutrophil count and neutrophil-to-lymphocyte ratio could serve as markers of therapeutic efficiency for IL-17A-blocking antibodies in the context of active inflammation. | allergy and immunology |
10.1101/2022.03.31.22273223 | Patterns of antibiotic cross-resistance by bacterial sample source: a retrospective cohort study | BackgroundAntimicrobial resistance is a major healthcare burden, exacerbated when it extends to multiple drugs. While cross-resistance across multiple drugs is well studied experimentally, it has received little study in clinical settings, especially with consideration of confounding. In addition, bacteria from different sample sources may have undergone different evolutionary trajectories, so examining cross-resistance across sources is desirable.
MethodsWe employed additive Bayesian network (ABN) modelling to examine antibiotic cross-resistance in five major bacterial species, obtained from different sources (urine, wound, blood, and sputum) in a clinical setting, collected in a large hospital in Israel over a 4-year period. ABN modelling allowed for examination of the relationship between resistance to different drugs while controlling for major confounding variables.
FindingsPatterns of cross-resistance differ across sample sources. Importantly, even when positive cross-resistance exists between two antibiotics in different sample sources, its magnitude may vary substantially.
InterpretationOur results highlight the importance of considering sample sources when assessing likelihood of antibiotic cross-resistance and determining antibiotic treatment regimens and policies. | pharmacology and therapeutics |
10.1101/2022.04.01.22273303 | Evidence for feasibility of mobile health and social media-based interventions for early psychosis and clinical high risk | BackgroundDigital technology, the internet and social media are increasingly investigated as a promising means for monitoring symptoms and delivering mental health treatment. These apps and interventions have demonstrated preliminary acceptability and feasibility, but previous reports suggest that access to technology may still be limited among individuals with psychotic disorders relative to the general population.
ObjectiveWe evaluated and compared access and use of technology and social media in young adults with psychotic disorders (PD), clinical risk for psychosis (CR), and psychosis-free youths (PF).
MethodsParticipants were recruited through a coordinated specialty care clinic dedicated to early psychosis as well as ongoing studies. We surveyed 21 PD, 23 CR, and 15 PF participants regarding access to technology and use of social media, specifically Facebook and Twitter. Statistical analyses were conducted in R. Categorical variables were compared among groups using Fisher's exact test, continuous variables were compared using one-way ANOVA, and multiple linear regressions were used to evaluate for covariates.
ResultsAccess to technology and social media were similar among PD, CR and PF. Individuals with PD, but not CR, were less likely to post at a weekly or higher frequency compared to psychosis-free individuals. We found that decreased active social media posting was unique to psychotic disorders and did not occur with other psychiatric diagnoses or demographic variables. Additionally, variation in age, sex, or Caucasian vs. non-Caucasian race did not affect posting frequency.
ConclusionsFor young people with psychosis spectrum disorders, there appears to be no "technology gap" limiting the implementation of digital and mobile health interventions. Active posting to social media was reduced for individuals with psychosis, which may be related to negative symptoms or impairment in social functioning. | psychiatry and clinical psychology |
10.1101/2022.03.30.22273189 | Behaviour change interventions improve maternal and child nutrition in sub-Saharan Africa: a systematic review | BackgroundEvidence that nutrition-specific and nutrition-sensitive interventions can improve maternal and child nutrition status in sub-Saharan Africa is inconclusive. Using behaviour change theory and techniques in intervention design may increase effectiveness and make outcomes more predictable. This systematic review aimed to determine whether interventions that included behaviour change functions were effective.
MethodsSix databases were searched systematically, using MeSH and free-text terms, for articles describing nutrition-specific and nutrition-sensitive behaviour change interventions published in English until January 2022. Titles, abstracts and full-text papers were double-screened. Data extraction and quality assessments followed Centre for Reviews and Dissemination guidelines. Behaviour change functions of interventions were mapped onto the COM-B model and Behaviour Change Wheel. PROSPERO registered (135054).
FindingsThe search yielded 1149 articles: 71 articles met inclusion criteria, ranging from low (n=30) to high (n=11) risk of bias. Many interventions that applied behaviour change theory, communication or counselling resulted in significant improvements in infant stunting and wasting, household dietary intake and maternal psychosocial measures. Interventions with >2 behaviour change functions (including persuasion, incentivisation, environmental restructuring) were the most effective.
InterpretationWe recommend incorporating behaviour change functions in nutrition interventions to improve maternal and child outcomes, specifically drawing on the Behaviour Change Wheel and COM-B model. To enhance the designs of these interventions, and ultimately improve the nutritional and psychosocial outcomes for mothers and infants in sub-Saharan Africa, collaborations are recommended between behaviour change and nutrition experts, intervention designers, policy makers and commissioners to fund and roll out multicomponent behaviour change interventions. | nutrition |
10.1101/2022.03.31.22273259 | Breast and Cervical Cancer Screening Rates in Student-Run Free Clinics: A Systematic Review | ObjectiveTo assess rates of breast and cervical cancer screening at student-run free clinics to better understand challenges and strategies for advancing the quality and accessibility of women's health screening at student-run free clinics.
Data sourcesWe performed a systematic search of Ovid MEDLINE, PubMed, Web of Science, and Google Scholar databases from database inception to 2020 using keywords related to student-run clinics, breast cancer screening, and cervical cancer screening.
Study eligibility criteriaWe included all English-language publications describing screening rates of breast and/or cervical cancer at student-run free clinics within the United States. Five authors screened abstracts and reviewed full texts for inclusion.
Study appraisal and synthesis methodsTwo reviewers extracted data independently for each publication using a structured data extraction table. Disagreements were resolved by group consensus. Two reviewers then assessed for risk of bias for each text using a modified Agency for Healthcare Research and Quality checklist for cross-sectional and prevalence studies. Results were synthesized qualitatively due to study heterogeneity.
ResultsOf 3634 references identified, 12 references met study inclusion criteria. The proportion of patients up-to-date on breast cancer screening per guidelines ranged from 45% to 94%. The proportion of patients up-to-date on cervical cancer screening per guidelines ranged from 40% to 88%.
ConclusionStudent-run free clinics can match breast and cervical cancer screening rates amongst uninsured populations nationally, though more work is required to bridge the gap in care that exists for the underinsured and uninsured. | obstetrics and gynecology |
10.1101/2022.03.31.22273236 | The impact of COVID-19 pandemic on influenza surveillance: a systematic review and meta-analysis | Influenza activity was reported to be below seasonal levels during the COVID-19 pandemic globally. However, during the SARS-CoV-2 outbreak, routine real-time surveillance of influenza-like illness (ILI) and acute respiratory infection (ARI) was adversely affected due to changes in priorities, economic constraints, repurposing of hospitals for COVID care and closure of outpatient services. A systematic review and meta-analysis were carried out to assess the pooled proportion of symptomatic cases tested for influenza virus before the COVID-19 pandemic in 2019 and during the pandemic in 2020/2021. The study was designed based on PRISMA guidelines and the meta-analysis was performed to synthesise the pooled proportion of patients sampled for influenza surveillance before the COVID-19 pandemic in 2019 and during the pandemic in 2020/21 with 95% confidence intervals (CI). The overall pooled proportion of symptomatic cases that underwent influenza surveillance before and during the pandemic was 2.38% (95% CI 2.08%-2.67%) and 4.18% (95% CI 3.8%-4.52%), respectively. However, the pooled proportion of samples tested for influenza before the pandemic was 0.69% (95% CI 0.45-0.92%) and during the pandemic was 0.48% (95% CI 0.28-0.68%) when studies from Canada were excluded. The meta-analysis concludes that globally there was a decline in influenza surveillance during the COVID-19 pandemic except in Canada. | infectious diseases |
10.1101/2022.03.31.22273272 | Interpretation of non-responders to SARS-CoV-2 vaccines using WHO International Standard | Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused a global pandemic with more than 485 million people infected. Questions about non-responders to SARS-CoV-2 vaccines remain unaddressed. Here, we report data from people who had received the complete dose of SARS-CoV-2 vaccines, using the World Health Organization International Standard for anti-SARS-CoV-2 immunoglobulin. Our study showed that immune cells such as CD4 cells, CD8 cells, and B cells and anti-spike immunoglobulin G levels were significantly reduced in the elderly. There were 7.5% non-responders in the 18-59 yr group and 11.7% in the ≥60 yr group. Participants with an anti-SARS-CoV-2 spike immunoglobulin G titer below 50 BAU/mL at intervals of 30 to 90 days after the last vaccine dose were considered non-responders. Booster vaccination may be recommended for non-responders to reduce disease severity and mortality. | allergy and immunology |
10.1101/2022.04.01.22273293 | Changes in Apolipoprotein A1 and B, Glucose Metabolism, and Skeletal Muscle Mass in Peripheral Artery Disease after Endovascular Treatment: A Pilot Study | Peripheral artery disease impairs walking and physical activity, resulting in further loss of skeletal muscle. However, peripheral artery disease can be treated with endovascular treatment. The thigh muscle has been shown to correlate with systemic skeletal muscle volume. We assessed the changes in the metabolism of apolipoproteins A1 and B, blood glucose, and thigh muscle mass after endovascular treatment in above-the-knee peripheral artery disease, where the vessels supply the thigh muscle. Mid-thigh muscle volume was measured with computed tomography before and at 6 months after endovascular treatment. Apolipoproteins A1 and B, fasting blood glucose, post-load (75 g oral glucose tolerance test) 2 h-blood glucose, and glycated hemoglobin A1c (HbA1c) levels were measured concomitantly. The relationships between changes in apolipoprotein A1, apolipoprotein B, blood glucose, post-oral glucose tolerance test 2 h-blood glucose, Rutherford classification, and gain or loss of thigh muscle were investigated. Thigh muscle mass did not correlate with changes in apolipoproteins A1 and B, fasting glucose, post-oral glucose tolerance test 2 h-blood glucose, HbA1c, or Rutherford classification. Among patients with muscle gain post-endovascular treatment, apolipoprotein A1 increased significantly, while apolipoprotein B levels were similar. Post-oral glucose tolerance test 2 h-blood glucose levels decreased. Favorable metabolic changes were observed in patients with skeletal muscle gain compared with those with muscle loss. | cardiovascular medicine |
10.1101/2022.04.01.22273300 | Recurrent aphthous stomatitis affects quality of life | Recurrent aphthous stomatitis is characterized by recurrent oral ulcers that can affect important daily activities, such as oral hygiene and eating. In this prospective case-control study (n=62), we show that, during ulcer episodes, patients report a poorer quality of life compared to ulcer-free periods, and that this impact is positively associated with the number and size of lesions. Our results suggest that, with local intervention, general relief of the condition could be achieved. | dentistry and oral medicine |
10.1101/2022.04.01.22273316 | Modeling and Global Sensitivity Analysis of Strategies to Mitigate Covid-19 Transmission on a Structured College Campus | In response to the COVID-19 pandemic, many higher educational institutions moved their courses on-line in hopes of slowing disease spread. The advent of multiple highly-effective vaccines offers the promise of a return to "normal" in-person operations, but it is not clear if - or for how long - campuses should employ non-pharmaceutical interventions such as requiring masks or capping the size of in-person courses. In this study, we develop and fine-tune a model of COVID-19 spread to UC Merced's student and faculty population. We perform a global sensitivity analysis to consider how both pharmaceutical and non-pharmaceutical interventions impact disease spread. Our work reveals that vaccines alone may not be sufficient to eradicate disease dynamics and that significant contact with an infected surrounding community will maintain cases on-campus. Our work provides a foundation for higher-education planning, allowing campuses to balance the benefits of in-person instruction with the ability to quarantine/isolate infected individuals. | epidemiology |
10.1101/2022.04.01.22273281 | Effectiveness of COVID-19 vaccines against Omicron and Delta hospitalisation: test negative case-control study | BackgroundThe omicron (B.1.1.529) variant has been associated with reduced vaccine effectiveness (VE) against infection and mild disease with rapid waning, even after a third dose; nevertheless, omicron has also been associated with milder disease than previous variants. With previous variants, protection against severe disease has been substantially higher than protection against infection.
MethodsWe used a test-negative case-control design to estimate VE against hospitalisation with the omicron and delta variants using community and in-hospital testing linked to hospital records. Because omicron causes milder disease, there may be an increasing proportion of hospitalised individuals in whom Omicron is an incidental finding. We therefore investigated the impact of using more specific and more severe hospitalisation indicators on VE.
ResultsAmong 18-64 year olds using all Covid-19 cases admitted via emergency care VE after a booster peaked at 82.4% and dropped to 53.6% by 15+ weeks after the booster; using all admissions for >= 2 days stay with a respiratory code in the primary diagnostic field VE ranged from 90.9% down to 67.4%; further restricting to those on oxygen/ventilated/on intensive care VE ranged from 97.1% down to 75.9%. Among 65+ year olds the equivalent VE estimates were 92.4% down to 76.9%; 91.3% down to 85.3% and 95.8% down to 86.8%.
ConclusionsWith generally milder disease seen with Omicron, in particular in younger adults, contamination of hospitalisations with incidental cases is likely to reduce VE estimates against hospitalisation. VE estimates improve and waning is more limited when definitions of hospitalisation that are more specific to severe respiratory disease are used. | epidemiology |
10.1101/2022.03.31.22273267 | Consistent Pattern of Epidemic Slowing Across Many Geographies Led to Longer, Flatter Initial Waves of the COVID-19 Pandemic | To define appropriate planning scenarios for future pandemics of respiratory pathogens, it is important to understand the initial transmission dynamics of COVID-19 during 2020. Here, we fit an age-stratified compartmental model with a flexible underlying transmission term to daily COVID-19 death data from states in the contiguous U.S. and to national and sub-national data from around the world. The daily death data of the first months of the COVID-19 pandemic was categorized into one of four main types: "spring single-peak profile", "summer single-peak profile", "spring/summer two-peak profile" and "broad with shoulder profile". We estimated a reproduction number R as a function of calendar time tc and as a function of time since the first death reported in that population (local pandemic time, tp). Contrary to the multiple categories and range of magnitudes in death incidence profiles, the R(tp) profiles were much more homogeneous. We find that in both the contiguous U.S. and globally, the initial value of both R(tc) and R(tp) was substantial: at or above two. However, during the early months, pandemic time R(tp) decreased exponentially to a value that hovered around one. This decrease was accompanied by a reduction in the variance of R(tp). For calendar time R(tc), the decrease in magnitude was slower and non-exponential, with a smaller reduction in variance. Intriguingly, similar trends of exponential decrease and reduced variance were not observed in raw death data. Our findings suggest that the combination of specific government responses and spontaneous changes in behaviour ensured that transmissibility dropped, rather than remaining constant, during the initial phases of a pandemic. Future pandemic planning scenarios should be based on models that assume similar decreases in transmissibility, which lead to longer epidemics with lower peaks when compared with models based on constant transmissibility.
Author summaryIn planning for a future novel respiratory pandemic, or the next variant of SARS-Cov-2, it is important to characterize and understand the observed epidemic patterns during the first months of the COVID-19 outbreak. Here, we describe COVID-19 epidemic patterns observed in the U.S. and globally in terms of patterns of the basic reproduction number, R(t), using an age-stratified compartmental model. We find that daily death data of the first months of the COVID-19 pandemic can be classified into one of four types: "spring single-peak profile", "summer single-peak profile", "spring/summer two-peak profile" and "broad with shoulder profile". Using the concept of local pandemic time, tp, we show a consistent pattern on four continents of an initial large magnitude and variance in reproductive number R(tp) that decreases monotonically and hovers around one for many days, regardless of specific intervention measures imposed by local authorities and without an accompanying decrease in daily death prevalence. We attribute this to significant behavior changes in populations in response to the perceived risk of COVID-19. | epidemiology |
10.1101/2022.03.31.22273263 | Latent Factors of Language Disturbance and Relationships to Quantitative Speech Features | Background and HypothesisQuantitative acoustic and textual measures derived from speech ("speech features") may provide valuable biomarkers for psychiatric disorders, particularly schizophrenia spectrum disorders (SSD). We sought to identify cross-diagnostic latent factors for speech disturbance with relevance for SSD and computational modeling.
Study DesignClinical ratings for speech disturbance were generated across 14 items for a cross-diagnostic sample (N=343), including SSD (n=97). Speech features were quantified using an automated pipeline for brief recorded samples of free-speech. Factor models for the clinical ratings were generated using exploratory factor analysis, then tested with confirmatory factor analysis in the cross-diagnostic and SSD groups. Relationships among factor scores, speech features and other clinical characteristics were examined using network analysis.
Study ResultsWe found a 3-factor model with good fit in the cross-diagnostic group and acceptable fit for the SSD subsample. The model identifies an impaired expressivity factor and two interrelated disorganized factors for inefficient and incoherent speech. Incoherent speech was specific to psychosis groups, while inefficient speech and impaired expressivity showed intermediate effects in people with nonpsychotic disorders. Network analysis showed that the factors had distinct relationships with speech features, and that the patterns were different in the cross-diagnostic versus SSD groups.
ConclusionsWe report a cross-diagnostic 3-factor model for speech disturbance which is supported by good statistical measures, is intuitive, is applicable to SSD, and is relatable to linguistic theories. It provides a valuable framework for understanding speech disturbance and appropriate targets for modeling with quantitative speech features. | psychiatry and clinical psychology |
10.1101/2022.03.30.22273215 | An investigation of age-related neuropathophysiology in autism spectrum disorder using fixel-based analysis of corpus callosum white matter micro- and macrostructure | BackgroundCorpus callosum anomalies are commonly noted in autism spectrum disorder (ASD). Given the complexity of its microstructural architecture, with crossing fibers projecting throughout, we applied fixel-based analysis to probe white matter micro- and macrostructure within this region. As ASD is a neurodevelopmental condition with noted abnormalities in brain growth, age was also investigated.
MethodsData for participants with (N=54) and without (N=50) ASD, aged 5-34 years, were obtained from the Autism Brain Imaging Data Exchange-II (ABIDE-II). Within each site, indices of fiber density (FD), fiber cross-section (FC), and combined fiber density and cross-section (FDC) were compared between those groups.
ResultsYoung adolescents with ASD (age = 11.19 ± 7.54) showed reduced macroscopic FC and FDC compared to age-matched neurotypical controls (age = 10.04 ± 4.40). Reduced FD and FDC were noted in a marginally older ASD cohort (age = 13.87 ± 3.15) compared to matched controls (age = 13.85 ± 2.90). Among the oldest cohorts, a non-significant trend indicated reduced FD in older adolescents/young adults with ASD (age = 17.07 ± 3.56) compared to controls (age = 16.55 ± 2.95). There was a positive correlation between age and callosal mean FC and FDC in the youngest cohort. When stratified by diagnosis, this finding remained only for the ASD sample.
ConclusionWhite matter aberration appears greatest among younger ASD cohorts. In older adolescents and young adults, less of the corpus callosum seems affected. This supports the suggestion that some early neuropathophysiological indicators in ASD may dissipate with age. | psychiatry and clinical psychology |
10.1101/2022.03.30.22273192 | Decontamination of Geobacillus Stearothermophilus Using the Arca Aerosolized Hydrogen Peroxide Decontamination System | IntroductionIn response to the limited supply of personal protective equipment during the pandemic caused by SARS-CoV-2, recent studies demonstrate that gaseous H2O2 is an effective decontaminant of N95 filtering facepiece respirators to enable reuse of these items in a clinical setting. This paper evaluates the efficacy of the Arca Aerosolized Hydrogen Peroxide Decontamination System (Arca), a novel aerosolized H2O2 decontamination system, using biologic indicator testing.
Materials and MethodsThe Arca produces and circulates H2O2 aerosol inside of a sealed stainless steel chamber. The Arca's decontamination efficacy was evaluated in 8 decontamination trials with 2 H2O2 concentrations (3% and 12%) and 4 decontamination cycle durations (45, 60, 90, and 120 minutes). Efficacy was evaluated by testing: 1) the concentration in parts per million (ppm) of H2O2 produced inside the chamber and the concentration in ppm of H2O2 vented from the chamber, and 2) the decontamination of Mesa Biologic Indicator filter strips (BI) inoculated with Geobacillus stearothermophilus. Control tests were conducted by submerging BI strips in 3 mL of 3% and 12% H2O2 for 120 minutes (negative controls) and by not exposing one BI strip to H2O2 (positive control).
ResultsGreater than 5000 ppm of H2O2 was detected on the concentration strips inside the chamber for each of the eight decontamination trials. No vented H2O2 was detected on the external concentration strips after any decontamination trial. No growth was observed for any of the negative controls after seven days. The positive control was positive for growth.
ConclusionThe Arca Aerosolized Hydrogen Peroxide Decontamination System is effective at decontaminating the bacterium G. stearothermophilus at a cycle time of 45 minutes utilizing 6 mL of 3% H2O2 solution. | public and global health |
10.1101/2022.04.01.22273299 | Internet and mental health during the COVID-19 pandemic: Evidence from the UK | With the COVID-19 pandemic, the Internet has become a key player in the daily lives of most people. We investigate the relationship between mental health and internet use frequency and purpose six months after the first lockdown in the UK, September 2020. Using data from the UK Household Longitudinal Study on the 12-item General Health Questionnaire and the Internet use module, and controlling for sociodemographic characteristics and personality traits, we find that older individuals (aged 59 or above) have a lower internet use frequency (twice a day or less). Younger women use the Internet for social purposes more than men do, while younger men use the Internet for leisure-and-learning purposes more than women and older men do. Both high frequency internet use and use for social purposes appear to be a protective factor for social dysfunction. Interestingly, high internet use is a protective factor for social dysfunction among younger women, but a risk factor for psychological distress among younger men. Finally, while leisure-and-learning purpose is a protective factor for social dysfunction among younger women, it is a risk factor for social dysfunction among younger men. | public and global health |
10.1101/2022.04.01.22273252 | What factors converged to create a COVID-19 hot-spot? Lessons from the South Asian community in Ontario. | BackgroundSouth Asians represent the largest non-white ethnic group in Canada. The Greater Toronto Area (GTA), home to a high proportion of South Asians, emerged as a COVID-19 hot spot. Early in the pandemic, the South Asian community was identified as having risk factors for exposure and specific barriers to accessing testing and reliable health information, rendering them uniquely vulnerable to SARS-CoV-2 infection.
ObjectivesTo investigate the burden of SARS-CoV-2 infection among South Asians in the GTA, and to determine which demographic characteristics were most closely aligned with seropositivity, in this cross-sectional analysis of a prospective cohort study.
MethodsParticipants from the GTA were enrolled between April and July 2021. Seropositivity for anti-spike and anti-nucleocapsid antibodies was determined from dried blood spots, and age and sex standardized to the Ontario South Asian population. Demographics, risk perceptions, and sources of COVID-19 information were collected via questionnaire in a subset.
ResultsAmong the 916 South Asians enrolled, mean age 41 years, the age and sex standardized seropositivity was 23.6% (95% CI: 20.8%-26.4%). Approximately one-third identified as essential workers, and 19% reported living in a multi-generational household. Over half perceived high COVID-19 risk due to their geographic location, and 36% due to their type of employment. The top three most trusted sources of COVID-related information included healthcare providers/public health, traditional media sources, and social media.
ConclusionBy the third wave of the COVID-19 pandemic, approximately one-quarter of a sample of South Asians in Ontario had serologic evidence of prior SARS-CoV-2 infection. Insight into factors that render certain populations at risk can help future pandemic planning and disease control efforts. | public and global health |
10.1101/2022.04.01.22273306 | Using genetic information to define idiopathic pulmonary fibrosis in UK Biobank | IntroductionIdiopathic pulmonary fibrosis (IPF) is a rare lung disease characterised by progressive scarring in the alveoli. IPF can be defined in population studies using electronic healthcare records (EHR) but recent genetic studies of IPF using EHR have shown an attenuation of effect size for known genetic risk factors when compared to clinically-derived datasets, suggesting misclassification of cases.
MethodsWe used EHR (ICD-10, Read (2 & 3)) and questionnaire data to define IPF cases in UK Biobank, and evaluated these definitions using association results for the largest genetic risk variant for IPF (rs35705950-T, MUC5B). We further evaluated the impact of exclusions based on co-occurring codes for non-IPF pulmonary fibrosis and restricting codes according to changes in diagnostic practice.
ResultsOdds ratio (OR) estimates for rs35705950-T associations with IPF defined using EHR and questionnaire data in UK Biobank were significant and ranged from 2.06 to 3.09, which was lower than those reported using clinically-derived IPF datasets (95% confidence interval: 3.74 to 6.66). Code-based exclusions of cases gave slightly closer effect estimates to those previously reported, but sample sizes were substantially reduced.
DiscussionWe show that none of the UK Biobank IPF codes replicate the effect size for the association of rs35705950-T on IPF risk when using clinically-derived IPF datasets. Further code-based exclusions also did not lead to effect estimates closer to those expected. Whilst the apparent increased sample sizes available for IPF from general population cohorts may be of benefit, future studies should take these limitations of the case definition into account.
Key MessagesWhat is already known on this topicUK Biobank is a very large prospective cohort that can be utilised to increase sample sizes for studies of rare diseases such as idiopathic pulmonary fibrosis (IPF). However, effect size estimates for genetic risk factors for IPF in UK Biobank and other general population cohorts, when defining cases using electronic healthcare records (EHR), are smaller than those estimated from clinically-derived IPF datasets.
What this study addsUsing Hospital Episode Statistics (HES) data, primary care data, death registry data and self-report data in UK Biobank, we used the association of rs35705950-T, the largest genetic risk factor for IPF, to evaluate code-based definitions of IPF. We show that none of the available IPF coding replicates the effect size for rs35705950-T on IPF risk that is observed in clinically-derived IPF datasets.
How this study might affect research, practice or policyResearch using large general population cohorts and datasets for observational studies of IPF should take these limitations of EHR definitions of IPF into consideration. | respiratory medicine |
10.1101/2022.03.31.22273254 | The HIV and STI syndemic following mass scale-up of combination HIV interventions in Uganda: a population-based cross-sectional study | BackgroundCombination HIV interventions (CHIs) have led to significant declines in HIV incidence in sub-Saharan Africa; however, population-level data on non-HIV sexually transmitted infection (STI) burden in the context of CHIs are rare. We aimed to assess STI burden in Uganda following mass scale-up of CHIs, including universal HIV treatment.
MethodsThe Sexually Transmitted Infection Prevalence Study (STIPS) was a cross-sectional study nested within the Rakai Community Cohort Study (RCCS), a population-based cohort among inland agrarian and Lake Victoria fishing populations in southern Uganda. STIPS enrolled consenting participants, 18-49 years, between May and October 2019 and measured prevalence of Chlamydia trachomatis (chlamydia), Neisseria gonorrhoeae (gonorrhea), Trichomonas vaginalis (trichomonas), syphilis, and herpes simplex virus type 2 (HSV-2).
FindingsSTIPS enrolled 1,825 participants, including 965 women (53%), of whom 9% (n=107) were pregnant. Overall, there was 9.8% prevalence of chlamydia (95% CI: 8.5-11%), 6.7% gonorrhea (95% CI: 5.7-8.0%), and 11% trichomonas (95% CI: 9.5-12%). In the fishing population, syphilis reactivity was 24% (95% CI: 22-27%), with 9.4% (95% CI: 7.7-11%) having high titer (RPR ≥ 1:8) infection, including 17% (95% CI: 12-24%) of HIV-positive men. Prevalence of ≥ 1 curable STI (chlamydia, gonorrhea, trichomonas, or high titer syphilis) was 44% higher among HIV-positive persons (adjusted prevalence risk ratio [adjPRR] = 1.44, 95% CI: 1.22-1.71), with no differences by HIV treatment status. HIV-positive pregnant women were more likely than HIV-negative pregnant women to have a curable STI (adjPRR = 1.87, 95% CI: 1.08-3.23).
InterpretationSTI burden remains extremely high in Uganda, particularly among HIV-positive persons. There is an urgent need to integrate STI diagnostic testing and treatment with HIV services in African settings.
FundingNational Institutes of Health | sexual and reproductive health |
10.1101/2022.03.31.22273275 | Cervicovaginal immune mediators increase when young women begin to have sexual intercourse: a prospective study and meta-analysis | BackgroundAdolescent girls and young women (AGYW) are at high risk of sexually transmitted infections (STIs). It is unknown whether beginning to have sexual intercourse causes changes to immune mediators in the cervicovaginal tract that contribute to this risk.
MethodsWe collected cervicovaginal lavages from Kenyan AGYW in the months before and after first penile-vaginal sexual intercourse and measured the concentrations of 20 immune mediators. We compared concentrations pre- and post-first sex using mixed effects models. Secondary analyses included adjustment for possible confounding factors. We additionally performed a systematic review to identify similar studies and combined them with our results by meta-analysis of individual participant data.
ResultsWe included 180 samples from 95 AGYW, with 44% providing only pre-first sex samples, 35% matched pre and post, and 21% only post. We consistently detected 19/20 immune mediators, all of which increased post-first sex (median increase 54%; p<0.05 for 13/19; Holm-Bonferroni-adjusted p<0.05 for IL-1β, IL-2 and CXCL8). Effects remained similar after adjusting for confounding factors including STIs and Nugent score.
Our systematic review identified two eligible studies, one of 93 Belgian participants and the other of 18 American participants. Nine immune mediators were measured in at least 2/3 studies. Meta-analysis confirmed higher levels post-first sex for 8/9 immune mediators (median increase 47%; p<0.05 for six mediators, most prominently IL-1, IL-1β and CXCL8).
ConclusionsCervicovaginal immune mediator concentrations increased after the beginning of sexual activity independently of confounding factors including STIs. Results were consistent across three studies conducted on three different continents. | sexual and reproductive health |
10.1101/2022.03.30.22273207 | Determinants of mobility decline in nephrology-referred patients with CKD: a longitudinal cohort study | Background and ObjectiveChronic kidney disease (CKD) is associated with loss of muscle quality leading to mobility limitation and decreased independence. Identifying predictors of gait speed decline may help target rehabilitative therapies to those at highest risk of mobility impairment.
Design, setting and participants, and measurementsThe current prospective cohort study recruited ambulatory patients with stage 1-4 CKD (eGFR 15-89 ml/min/1.73m2) from nephrology clinics. Predictors included demographic and clinical variables including GFR estimated using serum cystatin C. Outcomes were average change in gait speed (m/s) per year and inclusion in the top tertile of gait speed decline over 3 years. Linear mixed models and relative risk regression were used to estimate associations with annual gait speed changes and fastest tertile of decline.
ResultsAmong 213 participants, 81% were male, 22% were black and 43% had diabetes. Mean age was 57 ± 13 years, median follow-up 3.15 years, mean baseline eGFRcysc 47.9 ± 21 ml/min/1.73 m2, and median baseline gait speed 0.95 m/s [IQR 0.81, 1.10]. Lower baseline eGFRcysc was associated with more rapid loss of gait speed (-0.029 m/s/year [95% CI -0.042, -0.015] per 30 ml/min/1.73 m2 lower eGFR; p<0.001). Diabetes was associated with -0.024 m/s/year faster change (95% CI -0.042, -0.007; p=.007). Lower eGFRcysc was associated with a 49% greater risk of rapid gait speed decline (IRR 1.49; 95% CI 1.11, 2.00, p=.008) after adjustment.
Prevalent cardiovascular disease and African American race were associated with a 45% greater (IRR 1.45; 95% CI 1.04, 2.01, p=.03) and 58% greater rate of rapid gait speed decline (IRR 1.58; 95% CI 1.09, 2.29, p=.02), respectively.
ConclusionsAmong ambulatory, disability-free patients with CKD, lower eGFRcysc and diabetes status were associated with faster gait speed decline. Lower eGFRcysc, cardiovascular disease, and African American race were associated with rapid gait speed decline. | nephrology |
10.1101/2022.03.29.22273147 | Quickest Way to Less Headache Days: an operational research model and its implementation for Chronic Migraine | Structured AbstractBackgroundChoosing migraine prevention medications often involves trial and error. Operations research methodologies, however, allow us to derive a mathematically optimum way to conduct such trial and error processes.
ObjectiveGiven probability of success (defined as 50% reduction in headache days) and adverse events as a function of time, we seek to develop and solve an operations research model, applicable to any arbitrary patient, minimizing time until discovery of an effective migraine prevention medication. We then seek to apply our model to real life data for chronic migraine prevention.
MethodsWe develop a model as follows: Given a set of D many preventive medications, for drug i in D, we describe the likelihood of reaching 50% headache day reduction over the course of time (ti,1 ≤ ti,2 ≤ ...) by probabilities (pi,1 ≤ pi,2 ≤ ...). We additionally assume a probability of adverse event (qi,1 ≤ qi,2 ≤ ...). We then solve for a sequence of prescription trials that minimizes the expected time until an effective drug is identified.
Once we identify the optimal sequence for our model, we estimate p, t and q for topiramate and OnabotulinumtoxinA based on the FORWARD study by Rothrock et al., as well as erenumab data published by Barbanti et al. at IHC 2019.
ResultsThe solution for our model is to order the drugs by probability of efficacy per unit time. When the efficacy of each drug i is known only for one period ti and there are no adverse effects, then the optimum sequence is to administer drug i for ti periods in decreasing order of pi/ti. In general, the optimum sequence is to administer drug i for ti,k* periods in decreasing order of the Gittins index σi,k*.
Based on the above data, the optimum sequence of chronic migraine prevention medication is a trial of erenumab for 12 weeks, followed by a trial of OnabotulinumtoxinA for 32 weeks, followed by a trial of topiramate for 32 weeks.
ConclusionWe propose an optimal sequence for preventive medication trial for patients with chronic migraine. Since our model makes limited assumptions on the characteristics of disease, it can be readily applied also to episodic migraine, given the appropriate data as input.
Indeed, our model can be applied to other scenarios so long as probability of success/adverse event as a function of time can be estimated. As such, we believe our model may have implications beyond our sub-specialty.
Trial RegistrationNot applicable. | pain medicine |
10.1101/2022.03.31.22273274 | Effect of vaccination rates on the prevalence and mortality of COVID-19 | By looking at trends in global epidemic data, we evaluate the effectiveness of vaccines on the incidence and mortality from the delta variant of COVID-19. By comparing countries of varying vaccination levels, we find that more vaccinated countries have lower deaths while not having lower cases. This cannot be explained by testing rates or restrictions, but can be partly explained by the most susceptible countries also being the highest vaccinated countries. We also find that during the period when many countries have high vaccination rates, cases and deaths are both increasing in time. This seems to be caused by the waning of the protection vaccines grant against infection. | health informatics |
10.1101/2022.03.31.22273208 | COVID-19 in people with neurofibromatosis 1, neurofibromatosis 2, or schwannomatosis | PurposePeople with pre-existing conditions may be more susceptible to severe Coronavirus disease 2019 (COVID-19) when infected by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2). The relative risk and severity of SARS-CoV-2 infection in people with rare diseases like neurofibromatosis (NF) type 1 (NF1), neurofibromatosis type 2 (NF2), or schwannomatosis (SWN) is unknown.
MethodsWe investigated the proportions of SARS-CoV-2 positive or COVID-19 patients in people with NF1, NF2, or SWN in the National COVID Collaborative Cohort (N3C) electronic health record dataset.
ResultsThe cohort sizes in N3C were 2,501 (NF1), 665 (NF2), and 762 (SWN). We compared these to N3C cohorts of other rare disease patients (98 - 9844 individuals) and the general non-NF population of 5.6 million. The site- and age-adjusted proportion of people with NF1, NF2, or SWN who tested positive for SARS-CoV-2 or were COVID-19 patients (collectively termed positive cases) was not significantly higher than in individuals without NF or other selected rare diseases. There were no severe outcomes reported in the NF2 or SWN cohorts. The proportion of patients experiencing severe outcomes was no greater for people with NF1 than in cohorts with other rare diseases or the general population.
ConclusionHaving NF1, NF2, or SWN does not appear to increase the risk of being SARS-CoV-2 positive or of being a COVID-19 patient, or of developing severe complications from SARS-CoV-2. | health informatics |
10.1101/2022.03.31.22273262 | Emulation of epidemics via Bluetooth-based virtual safe virus spread: experimental setup, software, and data | We describe an experimental setup and a currently running experiment for evaluating how physical interactions over time and between individuals affect the spread of epidemics. Our experiment involves the voluntary use of the Safe Blues Android app by participants at The University of Auckland (UoA) City Campus in New Zealand. The app spreads multiple virtual safe virus strands via Bluetooth depending on the social and physical proximity of the subjects. The evolution of the virtual epidemics is recorded as they spread through the population. The data is presented as a real-time (and historical) dashboard. A simulation model is applied to calibrate strand parameters. Participants' locations are not recorded, but participants are rewarded based on the duration of participation within a geofenced area, and aggregate participation numbers serve as part of the data. Once the experiment is complete, the data will be made available as an open-source anonymized dataset.
This paper outlines the experimental setup, software, subject-recruitment practices, ethical considerations, and dataset description. The paper also highlights current experimental results in view of the lockdown that started in New Zealand at 23:59 on August 17, 2021. The experiment was initially planned in the New Zealand environment, expected to be free of COVID and lockdowns after 2020. However, a COVID Delta strain lockdown shuffled the cards and the experiment is currently extended into 2022.
Author summaryIn this paper, we describe the Safe Blues Android app experimental setup and a currently running experiment at the University of Auckland City Campus. This experiment is designed to evaluate how physical interactions over time and between individuals affect the spread of epidemics.
The Safe Blues app spreads multiple virtual safe virus strands via Bluetooth based on the subjects' unobserved social and physical proximity. The app does not record the participants' locations, but participants are rewarded based on the duration of participation within a geofenced area, and aggregate participation numbers serve as part of the data. When the experiment is finished, the data will be released as an open-source anonymized dataset.
The experimental setup, software, subject recruitment practices, ethical considerations, and dataset description are all described in this paper. In addition, we present our current experimental results in view of the lockdown that started in New Zealand at 23:59 on August 17, 2021. The information we provide here may be useful to other teams planning similar experiments in the future. | health informatics |
10.1101/2022.04.01.22273291 | Metabolic alkalosis and mortality in COVID-19 | BackgroundAs a new infectious disease, COVID-19 has had a huge impact on countries around the world. At present, its specific pathophysiological mechanism has not been fully clarified. In analyzing the arterial blood gas data of critically ill patients, we found that the incidence of metabolic alkalosis in such patients is high.
MethodWe retrospectively analyzed the arterial blood gas results of 16 critically ill patients in the ICU of Xiaogan Central Hospital and 42 severe patients in the intensive isolation ward, assessing metabolic acidosis, respiratory acidosis, metabolic alkalosis, and respiratory alkalosis, as well as the relationship between metabolic alkalosis and death.
ResultAmong the 16 critically ill patients, the incidence of metabolic alkalosis was 100%, compared with 50% among severe patients; mortality was 81.3% in critically ill patients and 21.4% in severe patients. Among all patients who died, 95.5% had metabolic alkalosis and 4.5% did not.
ConclusionThe incidence of metabolic alkalosis in critically ill COVID-19 patients is high, and it is associated with high mortality. | infectious diseases |
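A minimal arithmetic check of the proportions reported in the abstract above, under the assumption (ours, not stated explicitly by the authors) that the 95.5% figure refers to the share of all deaths that occurred in patients with metabolic alkalosis:

```python
# Back-of-the-envelope check of the reported proportions (illustrative only).
critically_ill, severe = 16, 42

deaths_critical = round(0.813 * critically_ill)   # 81.3% mortality -> ~13 deaths
deaths_severe = round(0.214 * severe)             # 21.4% mortality -> ~9 deaths
total_deaths = deaths_critical + deaths_severe    # ~22 deaths overall

# Assumption: 21 of the ~22 deaths occurred in patients with metabolic alkalosis.
deaths_with_alkalosis = 21
share_with = deaths_with_alkalosis / total_deaths
print(f"Deaths with metabolic alkalosis: {share_with:.1%}")        # ~95.5%
print(f"Deaths without metabolic alkalosis: {1 - share_with:.1%}")  # ~4.5%
```

The computed shares are consistent with the 95.5%/4.5% split quoted in the Results, which supports this reading of the original wording.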
10.1101/2022.03.31.22273242 | If you build it, will they use it? Use of a Digital Assistant for Self-Reporting of COVID-19 Rapid Antigen Test Results during Large Nationwide Community Testing Initiative | ImportanceWidespread distribution of rapid antigen tests is integral to the United States' strategy to address COVID-19; however, it is estimated that few rapid antigen test results are reported to local departments of health.
ObjectiveTo characterize how often individuals in six communities throughout the United States used a digital assistant to log rapid-antigen test results and report them to their local Department of Health.
DesignThis prospective cohort study is based on anonymously collected data from the beneficiaries of The Say Yes! Covid Test program, which distributed 3,000,000 rapid antigen tests at no cost to residents of six communities between April and October 2021. We provide a descriptive evaluation of beneficiaries' use of the digital assistant for logging and reporting their rapid antigen test results.
Main Outcome and MeasuresNumber and proportion of tests logged and reported to the Department of Health through the digital assistant
ResultsA total of 178,785 test kits were ordered through the digital assistant, and 14,398 households used the digital assistant to log 41,465 test results. Overall, a small proportion of beneficiaries used the digital assistant (8%), but over 75% of those who used it reported their rapid antigen test results to their state public health department. The reporting behavior varied between communities and was significantly different for communities that were incentivized for reporting test results (p < 0.001). In all communities, positive tests were reported less often than negative tests (60.4% vs 75.5%; p<0.001).
Conclusions and RelevanceThese results indicate that app-based reporting with incentives may be an effective way to increase reporting of rapid tests for COVID-19; however, increasing the adoption of the digital assistant is a critical first step. | infectious diseases |
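A minimal sketch of the kind of two-proportion comparison behind the reported "60.4% vs 75.5%; p<0.001" figure above. The counts below are hypothetical placeholders chosen only to match those reporting rates, since the abstract does not give the per-group denominators:

```python
# Hypothetical counts roughly matching the reported reporting rates
# (60.4% of positive vs 75.5% of negative results reported); not the study's data.
from statsmodels.stats.proportion import proportions_ztest

reported = [604, 7550]   # reported positive results, reported negative results (illustrative)
totals = [1000, 10000]   # all logged positive results, all logged negative results (illustrative)

stat, pval = proportions_ztest(count=reported, nobs=totals)
print(f"z = {stat:.2f}, p = {pval:.2g}")
```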
10.1101/2022.04.01.22273288 | Effects of Intravenous and Inhaled Anesthetics on the Postoperative Complications for the patients undergoing One Lung Ventilation | IntroductionAlthough One Lung Ventilation (OLV) is widely used in thoracic surgery, it is unclear whether inhaled or intravenous anesthetics are associated with postoperative complications. The purpose of the current study is to compare the effects of intravenous and inhaled anesthetics on postoperative complications in patients undergoing OLV.
MethodsWe searched for relevant randomized controlled trials in PubMed, EMBASE, Medline, and the Cochrane Library up to September 2021. Inclusion criteria were as follows: all randomized controlled trials comparing the effects of intravenous and inhaled anesthetics on postoperative complications [listed as: (a) major complications; (b) postoperative pulmonary complications (PPCs); (c) postoperative cognitive function (MMSE score); (d) length of hospital stay; (e) 30-day mortality] in patients undergoing one lung ventilation.
ResultsThirteen randomized controlled trials with 2522 patients were included for analysis. Overall, there were no significant differences in postoperative major complications between the inhaled and intravenous anesthetics groups (OR 0.78, 95%CI 0.54 to 1.13, p=0.19; I2=0%). However, more PPCs were detected in the intravenous groups when compared to the inhaled groups (OR 0.62, 95%CI 0.44 to 0.87, p=0.005; I2=37%). Both the postoperative MMSE scores (SMD -1.94, 95%CI -4.87 to 0.99, p=0.19; I2=100%) and the length of hospital stay (SMD 0.05, 95%CI -0.29 to 0.39, p=0.76; I2=73%) were comparable between the two groups. In addition, 30-day mortality did not differ significantly across groups (OR 0.79, 95%CI 0.03 to 18, p=0.88; I2=63%).
ConclusionsIn patients undergoing OLV, general anesthesia with inhaled anesthetics may reduce PPCs compared with intravenous anesthetics, but provided no evident advantage with respect to other major complications, cognitive function, hospital stay or mortality. | anesthesia
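A minimal sketch of inverse-variance random-effects pooling of log odds ratios (the DerSimonian-Laird approach typically used for dichotomous outcomes in tools such as RevMan). The per-study numbers below are hypothetical placeholders, not data from the included trials:

```python
import numpy as np

# Hypothetical per-study odds ratios and 95% CIs (placeholders, not the trial data).
or_est = np.array([0.55, 0.70, 0.62])
ci_low = np.array([0.30, 0.40, 0.35])
ci_high = np.array([1.00, 1.20, 1.10])

y = np.log(or_est)                                    # log odds ratios
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE recovered from CI width
w = 1 / se**2                                         # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of the between-study variance tau^2
q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

w_re = 1 / (se**2 + tau2)                             # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
pooled_se = np.sqrt(1 / np.sum(w_re))
print(f"Pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*pooled_se):.2f} to {np.exp(pooled + 1.96*pooled_se):.2f})")
```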
10.1101/2022.04.01.22273312 | Use of glycolysis enhancing drugs has less risk of Parkinson's disease than 5α-reductase inhibitors | BackgroundTerazosin and closely related α1-adrenergic receptor antagonists (doxazosin and alfuzosin; TZ/DZ/AZ) enhance glycolysis and reduce neurodegeneration in animal models. Observational evidence in humans from several databases supports this finding; however, a recent study has suggested that tamsulosin, the comparator medication, increases risk of Parkinson's disease. We consider a different comparison group of men taking 5α-reductase inhibitors as a new, independent comparison, allowing us both to obtain new estimates of the association between TZ/DZ/AZ and Parkinson's disease outcomes and to validate tamsulosin as an active comparator.
MethodsUsing the Truven Health Analytics Marketscan database, we identified men without Parkinson's disease, newly started on TZ/DZ/AZ, tamsulosin, or 5α-reductase inhibitors. We followed these matched cohorts to compare the hazard of developing Parkinson's disease.
ResultsWe found that men taking TZ/DZ/AZ had a lower hazard of Parkinson's disease than men taking tamsulosin (HR=0.72, 95% CI: 0.66-0.78, n=239,946) and lower than men taking 5α-reductase inhibitors (HR=0.81, 95% CI: 0.72-0.92, n=129,320). The hazard for men taking tamsulosin was not statistically significantly different from that for men taking 5α-reductase inhibitors (HR=1.10, 95% CI: 1.00-1.22, n=157,490).
ConclusionsThese data suggest that men using TZ/DZ/AZ have a lower risk of developing Parkinson's disease than those using tamsulosin or 5α-reductase inhibitors, while users of tamsulosin and 5α-reductase inhibitors have relatively similar survival functions. | epidemiology
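A minimal sketch of the kind of Cox proportional hazards comparison described above, using the lifelines package on a hypothetical matched cohort. Column names and the simulated data are illustrative assumptions, not the Marketscan data or the authors' exact model:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
# Hypothetical cohort: exposure = 1 for TZ/DZ/AZ, 0 for the comparator drug.
df = pd.DataFrame({
    "exposure": rng.integers(0, 2, n),
    "age": rng.normal(65, 8, n),
    "followup_years": rng.exponential(5, n),   # time to Parkinson's disease or censoring
    "parkinsons": rng.integers(0, 2, n),       # 1 = diagnosis observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="parkinsons")
cph.print_summary()   # exp(coef) for "exposure" is the hazard ratio of interest
```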
10.1101/2022.03.30.22273193 | Effectiveness of an Inactivated Covid-19 Vaccine with Homologous and Heterologous Boosters against the Omicron (B.1.1.529) Variant | BackgroundLarge outbreaks of the SARS-CoV-2 Omicron (B.1.1.529) variant have occurred in countries with high coverage of inactivated Covid-19 vaccines, raising urgent questions about effectiveness of these vaccines against disease and hospitalization with Omicron.
MethodsWe conducted a nationwide, test-negative, case-control study of adults who were tested for SARS-CoV-2 infection. We evaluated vaccine effectiveness against symptomatic Covid-19 and severe Covid-19 (hospital admission or deaths) for the primary series of CoronaVac and homologous and heterologous (BNT162b2) booster doses.
FindingsBetween September 6, 2021, and March 10, 2022, a total of 1,339,986 cases were matched to 1,339,986 test-negative controls. In the period of Omicron predominance, vaccine effectiveness ≥180 days after the second CoronaVac dose was 8.1% (95% CI, 7.0 to 9.1) and 57.0% (95% CI, 53.5 to 60.2) against symptomatic and severe Covid-19, respectively. Vaccine effectiveness against symptomatic disease was 15.0% (95% CI, 12.0 to 18.0) and 56.8% (95% CI, 56.3 to 57.4) in the period 8-59 days after receiving a homologous and heterologous booster, respectively. During the same interval, vaccine effectiveness against severe Covid-19 was 71.3% (95% CI, 60.3 to 79.2) and 85.5% (95% CI, 83.3 to 87.0) after receiving a homologous and heterologous booster, respectively. Whereas waning of vaccine effectiveness against symptomatic Covid-19 was observed ≥90 days after a homologous and heterologous booster, waning against severe Covid-19 was only observed after a homologous booster.
InterpretationA homologous CoronaVac booster dose provided limited additional protection, while a BNT162b2 booster dose afforded sustained protection against severe disease for at least three months. | epidemiology |
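In a test-negative design such as the one above, vaccine effectiveness is conventionally estimated from the odds ratio of vaccination among test-positive cases versus test-negative controls. A minimal sketch with illustrative counts (not the study's matched data):

```python
# Vaccine effectiveness from a test-negative design: VE = (1 - OR) x 100%.
# Counts are illustrative placeholders, not the study's data.
def vaccine_effectiveness(vacc_cases, unvacc_cases, vacc_controls, unvacc_controls):
    odds_ratio = (vacc_cases / unvacc_cases) / (vacc_controls / unvacc_controls)
    return (1 - odds_ratio) * 100

print(f"VE = {vaccine_effectiveness(200, 100, 400, 100):.1f}%")  # OR = 0.5 -> VE = 50%
```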
10.1101/2022.04.01.22273284 | Protocol for the British Paediatric Surveillance Study of Neonatal Stroke in the United Kingdom and the Republic of Ireland in babies in the first 90 days of life | BackgroundNeonatal stroke is a devastating condition that causes brain injury in babies and often leads to lifelong neurological impairment. Recent, prospective whole population studies of neonatal stroke are lacking. Neonatal strokes are different from those seen in older children and adults. A better understanding of the aetiology, current management and outcomes of neonatal stroke could reduce the burden of this rare condition. Most healthcare professionals see only a few cases of neonatal stroke in their careers, so population-based prospective studies are needed.
ObjectivesTo explore the incidence and two-year outcomes of neonatal stroke across an entire population in the UK and the Republic of Ireland.
PopulationAny infant presenting with neonatal stroke in the first 90 days of life.
DesignActive national surveillance study using a purpose-built integrated case notification-data collection online platform.
MethodsOver a 13-month period, British and Irish clinicians will notify any cases of neonatal stroke electronically via the online platform monthly. Clinicians will complete a primary questionnaire via the platform detailing clinical information, demographic details and investigations, including neuroimaging for detailed analysis and classification. An outcome questionnaire will be sent at two years of age via the platform. Appropriate ethical and regulatory approvals have been received from England, Wales, Scotland, Northern Ireland and the Republic of Ireland.
ConclusionThe neonatal stroke study represents the first multinational population surveillance study delivered via a purpose-built integrated case notification-data collection online platform and data safe haven, overcoming the challenges of setting up the study.
Synopsis. Study question: The neonatal stroke active surveillance study aims to explore the incidence and two-year outcome of neonatal stroke in the UK and Ireland.
What is already known?Neonatal stroke is a rare but often devastating condition with lifelong consequences including cerebral palsy, epilepsy and cognitive delay. There are no contemporary, prospective multinational population studies on the presentation and outcomes of neonatal stroke. Whilst the aetiology is often multifactorial, further information on underlying aetiology may help to identify potential future preventative treatments, leading to improved outcomes.
What does this study add?International collaboration is required to understand the epidemiology, management and outcomes of rare diseases or conditions. This is the first multinational surveillance study delivered via a purpose-built integrated case notification-data collection online platform and data safe haven, presenting practical and ethical challenges. The study will describe the burden of neonatal stroke while providing parents/carers and healthcare professionals with up-to-date information about the condition including the two-year outcomes. | pediatrics |
10.1101/2022.04.01.22273305 | Long-term psychological consequences of long Covid: a propensity score matching analysis comparing trajectories of depression and anxiety symptoms before and after contracting long Covid vs short Covid | BackgroundThere is a growing global awareness of the psychological consequences of long Covid, supported by emerging empirical evidence. However, the mergence and long-term trajectories of psychological symptoms following the infection are still unclear.
AimsTo examine when psychological symptoms first emerge following infection with SARS-CoV-2, and the long-term trajectories of psychological symptoms, comparing long and short Covid groups.
MethodsWe analysed longitudinal data from the UCL Covid-19 Social Study (March 2020-November 2021). We included data from adults living in England who reported contracting SARS-CoV-2 by November 2021 (N=3,115). Of these, 15.9% reported having had long Covid (N=495). They were matched to participants who had short Covid using propensity score matching on a variety of demographic, socioeconomic and health covariates (N=962, n=13,325) and data were further analysed using growth curve modelling.
ResultsDepressive and anxiety symptoms increased immediately following the onset of infection in both long and short Covid groups. However, the long Covid group had substantially greater initial increases in depressive symptoms and heightened levels over the 22 months of follow-up. Initial increases in anxiety were not significantly different between groups, but only the short Covid group experienced an improvement in anxiety over follow-up, leading to widening differences between groups.
ConclusionsThe findings shed light on the psychobiological pathways involved in the development of psychological symptoms relating to long Covid. The results highlight the need for monitoring of mental health and provision of adequate support to be interwoven with diagnosis and treatment of the physical consequences of long Covid. | psychiatry and clinical psychology |
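A minimal sketch of nearest-neighbour propensity score matching on covariates, in the spirit of the matching step described in the Methods above. Variable names, the 1:2 ratio and the simulated data are hypothetical assumptions, not the UCL Covid-19 Social Study pipeline:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(45, 12, n),
    "female": rng.integers(0, 2, n),
    "baseline_score": rng.normal(5, 3, n),
    "long_covid": rng.integers(0, 2, n),   # 1 = long Covid, 0 = short Covid (hypothetical)
})

covars = ["age", "female", "baseline_score"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["long_covid"])
df["pscore"] = ps_model.predict_proba(df[covars])[:, 1]

treated = df[df["long_covid"] == 1]
control = df[df["long_covid"] == 0]

# For each long Covid participant, take the 2 short Covid participants with the
# closest propensity scores (1:2 matching with replacement, no caliper, for brevity).
nn = NearestNeighbors(n_neighbors=2).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_controls = control.iloc[idx.ravel()]
print(len(treated), "treated matched to", len(matched_controls), "control observations")
```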
10.1101/2022.04.01.22273310 | Metabolic Disturbances, Hemoglobin A1c, and Social Cognition Impairment in Schizophrenia Spectrum Disorders | Social cognitive impairments are core features of schizophrenia spectrum disorders (SSD) and are associated with greater functional impairment and decreased quality of life. Metabolic disturbances have been related to greater impairment in general neurocognition, but their relationship to social cognition has not been previously reported. In this study, metabolic measures and social cognition were assessed in 245 participants with SSD and 165 healthy comparison subjects (HC), excluding those with hemoglobin A1c (HA1c)>6.5%. Tasks assessed emotion processing, theory of mind, and social perception. Functional connectivity within and between social cognitive networks was measured during a naturalistic social task. Among SSD, a significant inverse relationship was found between social cognition and cumulative metabolic burden ({beta}=-0.38, p<0.001) and HA1c ({beta}=-0.37, p<0.001). The relationship between social cognition and HA1c was robust across domains and measures of social cognition and after accounting for age, sex, race, non-social neurocognition, hospitalization, and treatment with different antipsychotic medications. Negative connectivity between affect sharing and motor resonance networks was a partial mediator of this relationship across SSD and HC groups ({beta}=-0.05, p=0.008). There was a group x HA1c effect indicating that SSD participants were more adversely affected by increasing HA1c. Thus, we provide the first report of a robust relationship in SSD between social cognition and abnormal glucose metabolism. If replicated and found to be causal, insulin sensitivity and blood glucose may present as promising targets for improving social cognition, functional outcomes, and quality of life in SSD. | psychiatry and clinical psychology |
10.1101/2022.04.01.22273296 | Uptake of COVID-19 vaccines among pregnant women: a systematic review and meta-analysis | BackgroundMass vaccination against COVID-19 is essential to control the pandemic. COVID-19 vaccines are now recommended during pregnancy to prevent adverse outcomes.
ObjectiveTo evaluate the evidence from the literature regarding the uptake of COVID-19 vaccination among pregnant women.
MethodsWe conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched PubMed, Medline, Scopus, ProQuest, Web of Science, CINAHL, and a pre-print service (medRxiv) from inception to March 23, 2022. We included quantitative studies reporting COVID-19 vaccination uptake among pregnant women, studies that examined predictors of COVID-19 vaccination uptake, and studies that examined reasons for declining vaccination. We performed a meta-analysis to estimate the overall proportion of pregnant women vaccinated against COVID-19.
ResultsWe found 11 studies including 703,004 pregnant women. The overall proportion of pregnant women vaccinated against COVID-19 was 27.5% (95% CI: 18.8-37.0%). The pooled proportion for studies conducted in Israel was higher than that for studies conducted in the USA and other countries. Predictors of COVID-19 vaccination uptake were older age, ethnicity, race, trust in COVID-19 vaccines, and fear of COVID-19 during pregnancy. On the other hand, mistrust in the government, diagnosis with COVID-19 during pregnancy, and worry about the safety and side effects of the COVID-19 vaccines were reasons for declining vaccination.
ConclusionsThe global COVID-19 vaccination prevalence in pregnant women is low. There is a large gap in the literature on the factors influencing pregnant women's decision to be vaccinated against COVID-19. Targeted information campaigns are essential to improve trust and build vaccine literacy among pregnant women. Given the ongoing high case rates and the known increased risks of COVID-19 in pregnant women, our findings could help policy makers to improve the acceptance rate of COVID-19 vaccines in pregnant women, especially in vulnerable subgroups. | public and global health
10.1101/2022.04.01.22273270 | A Randomized, Double-Blinded, Placebo-Controlled, Phase 2 Study of Safety, Tolerability and Efficacy of Pirfenidone in Patients with Rheumatoid Arthritis Interstitial Lung Disease | BackgroundInterstitial lung disease (ILD) is a known complication of rheumatoid arthritis (RA) with a lifetime risk in any individual of 7.7%. The TRAIL1 trial was a randomized, double-blinded, placebo-controlled, phase 2 study of safety, tolerability, and efficacy of pirfenidone for the treatment of patients with RA-ILD.
MethodsTRAIL1 was a phase 2 trial intended to enroll 270 adult patients (18 to 85 years) with established RA-ILD at 33 sites in 4 countries. Patients were randomly assigned (1:1) to 2,403 mg oral pirfenidone or placebo daily. The primary endpoint was the incidence of the composite endpoint of a decline from baseline in percent predicted forced vital capacity (FVC%) of 10% or greater or death during the 52-week treatment period. Key secondary endpoints included change in absolute FVC and FVC% over 52 weeks.
FindingsThe trial was stopped early due to slow recruitment, shortly after clinical trials were shut down as a consequence of the coronavirus disease 2019 (COVID-19) pandemic. Data from the 123 patients enrolled were analyzed. The primary endpoint was met by 11.1% on pirfenidone vs. 15% on placebo [OR=0.67 (0.22, 2.03), p=0.48]. Subjects receiving pirfenidone had a slower rate of decline in lung function as measured by estimated annual change in FVC(ml) (-66 vs. -146, p=0.0082) and FVC(%) (-1.02 vs. -3.21, p=0.0028). This effect on decline was also seen when analyzed within participants with a baseline usual interstitial pneumonia (UIP) pattern on HRCT (FVC(ml) (-43 vs. -169, p=0.0014) and FVC% (-0.2 vs. -3.81, p=0.0002)). There was no significant difference in the rate of treatment-emergent serious adverse events.
InterpretationDue to early termination of the study, results should be interpreted with caution. Despite being underpowered to evaluate the primary endpoint, pirfenidone slowed the rate of decline of FVC over time in subjects with RA-ILD. Safety in patients with RA-ILD was similar to that seen in other pirfenidone trials.
FundingFunding for this investigator initiated trial was provided by Genentech, Inc. to Ivan O. Rosas, MD, on behalf of the TRAIL1 Investigators. | respiratory medicine |
10.1101/2022.04.01.22273289 | Long-term performance assessment of fully automatic biomedical glottis segmentation at the point of care | Deep learning has a large impact on medical image analysis and has lately been adopted for clinical use at the point of care. However, there are only a few reports of long-term studies that show the performance of deep neural networks (DNNs) in such a clinical environment. In this study, we measured the long-term performance of a clinically optimized DNN for laryngeal glottis segmentation. We collected video footage for two years from an AI-powered laryngeal high-speed videoendoscopy imaging system and found that the footage image quality was stable across time. Next, we determined the DNN segmentation performance on lossy and lossless compressed data, revealing that only 9% of recordings contain segmentation artefacts. We found that lossy and lossless compression are on par for glottis segmentation; however, lossless compression provides significantly superior image quality. Lastly, we employed continual learning strategies to continuously incorporate new data into the DNN to remove the aforementioned segmentation artefacts. With modest manual intervention, we were able to largely alleviate these segmentation artefacts by up to 81%. We believe that our suggested deep-learning-enhanced laryngeal imaging platform consistently provides clinically sound results, and together with our proposed continual learning scheme will have a long-lasting impact on the future of laryngeal imaging. | otolaryngology
10.1101/2022.04.01.22273313 | Automated development of clinical prediction models enables real-time risk stratification with exemplar application to hypoxic-ischaemic encephalopathy | Real-time updated risk prediction of disease outcomes could lead to improvements in patient care and better resource management. Established monitoring during pregnancy at antenatal and intrapartum periods could be particularly amenable to benefits of this approach. This proof-of-concept study compared automated and manual prediction modelling approaches using data from the Collaborative Perinatal Project with exemplar application to hypoxic-ischaemic encephalopathy (HIE). Using manually selected predictors identified from previously published studies we obtained high HIE discrimination with logistic regression applied to antenatal only (0.71 AUC [95% CI 0.64-0.77]), antenatal and intrapartum (0.70 AUC [95% CI 0.64-0.77]), and antenatal, intrapartum and birthweight (0.73 AUC [95% CI 0.67-0.79]) data. In parallel, we applied a range of automated modelling methods and found penalised logistic regression had best discrimination and was equivalent to the manual approach but required little human input giving 0.75 AUC for antenatal only (95% CI 0.69, 0.81), 0.70 AUC for antenatal and intrapartum (95% CI 0.63, 0.78), and 0.74 AUC using antenatal, intrapartum, and infant birthweight (95% CI 0.65, 0.81). These results demonstrate the feasibility of developing automated prediction models which could be applied to produce disease risk estimates in real-time. This approach may be especially useful in pregnancy care but could be applied to any disease. | health informatics |
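A minimal sketch of a penalized (L2) logistic regression evaluated by cross-validated AUC, the kind of automated approach the abstract above reports as best-performing. The feature matrix, outcome prevalence, penalty strength and cross-validation setup are placeholders, not the Collaborative Perinatal Project data or the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n, p = 5000, 30                          # hypothetical antenatal/intrapartum predictors
X = rng.normal(size=(n, p))
y = (rng.random(n) < 0.02).astype(int)   # rare outcome such as HIE (illustrative)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l2", C=0.1, max_iter=5000),  # penalized logistic regression
)
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {aucs.mean():.2f} (+/- {aucs.std():.2f})")
```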
10.1101/2022.03.28.22272797 | Spatial-temporal and phylogenetic analyses of epidemiologic data to help understand the modes of transmission of endemic typhoid fever in Samoa | Salmonella enterica serovar Typhi (S. Typhi) is either widely distributed or proximally transmitted via fecally-contaminated food or water to cause typhoid fever. In Samoa, where endemic typhoid fever has persisted over decades despite water quality and sanitation improvements, the local patterns of S. Typhi circulation remain undistinguished. From April 2018-June 2020, epidemiologic data and GPS coordinates were collected during household investigations of 260 acute cases of typhoid fever, and 27 asymptomatic shedders of S. Typhi were detected among household contacts. Spatial and temporal distributions of cases were examined using Average Nearest Neighbor and space-time hotspot analyses. In rural regions, infections occurred in sporadic, focal clusters contrasting with persistent, less clustered cases in the Apia Urban Area. Restrictions to population movement during nationwide lockdowns in 2019-2020 were associated with marked reductions of cases. Phylogenetic analyses of isolates with whole genome sequences (n=186) revealed one dominant genotype 3.5.4 (n=181/186) that contains three Samoa-exclusive sub-lineages: 3.5.4.1, 3.5.4.2, and 3.5.4.3. Variables of patient sex, age, and geographic region were examined by phylogenetic groupings, and significant differences (p<0.05) associated genetically-similar isolates in urban areas with working ages (20-49 year olds), and in rural areas with age groups typically at home (<5, 50+). Isolates from asymptomatic shedders were among all three sub-lineages. Whole genome sequencing also corroborated bacterial genetic similarity in 10/12 putative epidemiologic linkages among cases and asymptomatic shedders as well as 3/3 repeat positives (presumed relapses), with a median of one single nucleotide polymorphism difference. These findings highlight various patterns of typhoid transmission in Samoa that differ between urban and rural regions as well as genomic subtypes. Asymptomatic shedders, detectable only through household investigations, are likely an important reservoir and mobile agent of infection. This study advances a "Samoan S. Typhi framework" that supports current and future typhoid surveillance and control efforts in Samoa.
AUTHOR SUMMARYMany typhoid endemic countries have evident transmission contributions from widely distributed contaminated water supplies and/or asymptomatic S. Typhi carriers who intermittently cause sporadic outbreaks. However, these patterns have not yet been examined in the island nation of Samoa, where typhoid has remained endemic for decades. In this study, we incorporated the discerning powers of spatial-temporal cluster analyses as well as phylogenetics of whole genome sequences (WGS) of S. Typhi isolates to examine detailed epidemiologic data collected through household investigations of culture-confirmed cases of typhoid fever occurring in Samoa from April 2018 through June 2020. We detected patterns consistent with both modes of transmission, varying between urban and rural regions, and we provided evidence of intra-household transmission of genetically similar isolates, thereby supporting a majority of putative epidemiologic linkages made during household investigations, and identifying important roles for asymptomatic shedders of S. Typhi. These findings advance our understanding of persistently endemic typhoid fever in Samoa and directly support the efforts of the Samoa Typhoid Fever Control Program of the Ministry of Health of Samoa. | genetic and genomic medicine |
10.1101/2022.04.01.22273292 | Characteristics of Kenyan Women Enrolled in a Trial on Doxycycline Post-exposure Prophylaxis for Sexually Transmitted Infection Prevention | IntroductionThe global incidence of sexually transmitted infections (STIs) has been rapidly increasing over the past decade, with more than one million curable STIs being acquired daily. Young women in sub-Saharan Africa have a high prevalence and incidence of both curable STIs and HIV. The use of doxycycline as a prophylaxis to prevent STI infections is promising; however, clinical trials, to date, have only been conducted among men who have sex with men (MSM) in high-income settings. We describe the characteristics of participants enrolled in the first trial to determine the efficacy of doxycycline post-exposure prophylaxis (PEP) to reduce STI incidence among women.
MethodsThis is an open-label 1:1 randomized clinical trial of doxycycline PEP efficacy to reduce incident bacterial STIs - Neisseria gonorrhoeae, Chlamydia trachomatis, and Treponema pallidum - among Kenyan women aged ≥18 and ≤30 years. All were also taking HIV pre-exposure prophylaxis (PrEP). We describe the baseline characteristics of participants.
ResultsBetween February 2020 and November 2021, 449 women were enrolled. The median age was 24 years (IQR 21-27), the majority were never married (66.1%), 370 women (82.4%) reported having a primary sex partner, and 33% had sex with new partners in the 3 months prior to enrolment. Two-thirds (67.5%, 268 women) did not use condoms, 36.7% reported transactional sex, and 43.2% suspected their male partners of having sex with other women. Slightly less than half (45.9%, 206 women) were recently concerned about being exposed to an STI. The prevalence of STIs was 17.9%, with C. trachomatis accounting for the majority of infections.
ConclusionYoung cisgender women using HIV PrEP in Kenya and enrolled in a trial of doxycycline postexposure prophylaxis had a high prevalence of curable STIs and represent a target population for an STI prevention intervention. | sexual and reproductive health |
10.1101/2022.04.01.22273317 | A comparative analysis of depression between pregnant and non-pregnant adolescents in a southwestern town in Nigeria | BackgroundAdolescence constitutes a risk factor for mental health problems, and this may be further complicated by pregnancy. The rate of adolescent pregnancy is still extremely high in sub-Saharan Africa, including Nigeria. Pregnancy and mental health problems during adolescence constitute a double vulnerability for negative outcomes for the adolescents and their offspring.
MethodologyThe study was cross-sectional in design and compared the prevalence of depression and associated factors among pregnant and non-pregnant adolescents. It was conducted in Osogbo metropolis, Osun State, Southwest Nigeria. The study population comprised pregnant adolescents (aged 15-19 years) attending antenatal care (ANC) in selected formal and informal health facilities. Non-pregnant adolescents who were also attending services at the facilities were recruited as the control group. Information was obtained from the adolescents with the use of a structured questionnaire and data were analysed with IBM-SPSS version 21 software.
ResultsThree hundred and thirty-four respondents (167 per group) were involved in the study; the pregnant adolescents had a mean age (±SD) of 17.92 (±1.13) years while the non-pregnant adolescents had a mean age of 17.70 (±1.23) years. The prevalence of depression among the pregnant adolescents was 8.4% while that of the non-pregnant adolescents was 3.0%. The result showed a statistically significant association between pregnancy status and depression among the adolescents (p= 0.033). Living arrangement was the only socio-demographic variable that had a significant relationship with depression among the pregnant adolescents, while living arrangement and employment status had significant relationships with depression among the non-pregnant adolescents. History of mental illness, childhood sexual abuse and anxiety symptoms showed significant relationships with depression among pregnant adolescents; however, only anxiety symptoms showed a significant relationship with depression among non-pregnant adolescents.
ConclusionThe study concluded that the prevalence of depression is significantly higher among pregnant adolescents with similarities and differences in the factors associated with depression in the two groups. | psychiatry and clinical psychology |
10.1101/2022.03.29.22273096 | VentRa. Validation study of the ventricle feature estimation and classification tool to differentiate behavioral variant frontotemporal dementia from psychiatric disorders and other degenerative diseases | IntroductionLateral ventricles are reliable and sensitive indicators of brain atrophy and disease progression in behavioral variant frontotemporal dementia (bvFTD). Here we validate our previously developed automated tool using ventricular features (known as VentRa) for the classification of bvFTD vs a mixed cohort of neurodegenerative, vascular, and psychiatric disorders from a clinically representative independent dataset.
MethodsLateral ventricles were segmented for 1110 subjects - 14 bvFTD, 30 other Frontotemporal Dementia (FTD), 70 Lewy Body Disease (LBD), 898 Alzheimer Disease (AD), 62 Vascular Brain Injury (VBI) and 36 Primary Psychiatric Disorder (PPD) - from the publicly accessible National Alzheimer's Coordinating Center dataset to assess the performance of VentRa.
ResultsUsing ventricular features to discriminate bvFTD subjects from PPD, VentRa achieved an accuracy of 84%, 71% sensitivity and 89% specificity. VentRa was able to identify bvFTD from a mixed age-matched cohort (i.e., Other FTD, LBD, AD, VBI and PPD) and to correctly classify other disorders as not compatible with bvFTD with a specificity of 83%. The specificity against each of the other individual cohorts was 80% for other FTD, 83% for LBD, 83% for AD and 84% for VBI.
DiscussionVentRa is a robust and generalizable tool with potential usefulness for improving the diagnostic certainty of bvFTD, particularly for the differential diagnosis with PPD. | neurology |
10.1101/2022.03.29.22273137 | Delineating the heterogeneity of preimplantation development via unsupervised clustering of embryo candidates for transfer using automated, accurate and standardized morphokinetic annotation | The majority of human embryos, whether naturally or in vitro fertilized (IVF), do not possess the capacity to implant within the uterus and reach live birth. Hence, selecting the embryos with the highest developmental potential to implant is imperative for improving pregnancy rates without prolonging time to pregnancy. The developmental potential of embryos can be assessed based on temporal profiling of the discrete morphokinetic events of preimplantation development. However, manual morphokinetic annotation introduces intra- and inter-observer variation and is time-consuming. Using a large clinically-labeled multicenter dataset of video recordings of preimplantation embryo development by time-lapse incubators, we trained a convolutional neural network and developed a classifier that performs fully automated, robust, and standardized annotation of the morphokinetic events with an R-squared accuracy of 0.994. To delineate the morphokinetic heterogeneity of preimplantation development, we performed unsupervised clustering of high-quality embryo candidates for transfer, which was independent of maternal age and blastulation rate. Retrospective comparative analysis of transfer versus implantation rates reveals differences between embryo clusters that are distinctively marked by poor synchronization of the third mitotic cell-cleavage cycle. We expect this work to advance the integration of morphokinetic-based decision support tools in IVF treatments and deepen our understanding of preimplantation heterogeneity. | obstetrics and gynecology
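A minimal sketch of unsupervised clustering of embryo morphokinetic timing profiles, in the spirit of the clustering step described above. The event names, simulated timings, scaling and choice of k-means with three clusters are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Hypothetical morphokinetic annotations (hours post insemination) for 500 embryos,
# e.g., times of pronuclei fading, 2-cell, 3-cell, 4-cell, 8-cell and start of blastulation.
X = rng.normal(loc=[24, 26, 36, 38, 56, 96], scale=[2, 2, 3, 3, 4, 6], size=(500, 6))

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
print(np.bincount(labels))   # number of embryos assigned to each cluster
```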
10.1101/2022.03.29.22273112 | Prediction models with survival data: a comparison between machine learning and the Cox proportional hazards model | Recent years have seen increased interest in using machine learning (ML) methods for survival prediction, chiefly using big datasets with mixed datatypes and/or many predictors. Model comparisons have frequently been limited to performance measure evaluation, with the chosen measure often suboptimal for assessing survival predictive performance. We investigated ML model performance in an application to osteosarcoma data from the EURAMOS-1 clinical trial (NCT00134030). We compared the performance of survival neural networks (SNN), random survival forests (RSF) and the Cox proportional hazards model. Three performance measures suitable for assessing survival model predictive performance were considered: the C-index, and the time-dependent Brier and Kullback-Leibler scores. Comparisons were also made on predictor importance and patient-specific survival predictions. Additionally, the effect of ML model hyperparameters on performance was investigated. All three models had comparable performance as assessed by the C-index and the Brier and Kullback-Leibler scores, with the Cox model and SNN also comparable in terms of relative predictor importance and patient-specific survival predictions. RSFs showed a tendency to accord less importance to predictors with uneven class distributions and to predict clustered survival curves, the latter a result of tuning hyperparameters that influence forest shape through restrictions on terminal node size and tree depth. SNNs were comparatively more sensitive to hyperparameter misspecification, with decreased regularization resulting in inconsistent predicted survival probabilities. We caution against using RSF for predicting patient-specific survival, as standard model tuning practices may result in aggregated predictions, which is not reflected in performance measure values, and recommend performing multiple reruns of SNNs to verify prediction consistency. | oncology
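A minimal sketch of how Harrell's C-index can be computed to compare the risk scores of two survival models, using lifelines utilities on hypothetical data (the risk scores below are random placeholders standing in for, e.g., a Cox linear predictor and an RSF or SNN score):

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(4)
n = 500
time = rng.exponential(10, n)        # observed follow-up times (hypothetical)
event = rng.integers(0, 2, n)        # 1 = event observed, 0 = censored
risk_model_a = rng.normal(size=n)    # placeholder risk score from model A
risk_model_b = rng.normal(size=n)    # placeholder risk score from model B

# concordance_index expects scores where larger means longer expected survival,
# so risk scores (larger = higher risk) are negated.
c_a = concordance_index(time, -risk_model_a, event)
c_b = concordance_index(time, -risk_model_b, event)
print(f"C-index model A: {c_a:.2f}, model B: {c_b:.2f}")
```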
10.1101/2022.03.30.22273187 | Psychosocial and psychological Interventions effectiveness among Nurses in Intensive Care Units caring for pediatric patients: A Systematic Review and Meta-Analysis | PurposeThis review aimed to evaluate the effectiveness of psychosocial and psychological interventions among nurses in intensive care units caring for pediatric patients.
MethodsThe review question was structured using the Participants, Intervention, Comparisons, Outcomes, Timing of Outcome Measurement, Settings and Study Design framework, and the PubMed, EMBASE, and CINAHL databases were searched. To estimate the effect size, a meta-analysis of the studies was performed using the RevMan 5.3 program. The effect size used was the standardized mean difference.
ResultsOf 1,630 studies identified, 4 met the inclusion criteria, and 3 studies were used to estimate the effect size of psychosocial and psychological interventions. The primary outcome variable of these studies was stress. The interventions showed no effect on stress in the individual studies, and the overall effect size was not statistically significant (SMD = -0.06; 95% CI: -0.33, 0.20; Z = 0.48, p = 0.630). However, according to the individual literature included in this study, when the stress management program was delivered in a group format, a significant stress reduction was shown in the experimental group (p = 0.021).
ConclusionsThese results suggest that psychosocial and psychological interventions were effective for stress management when delivered in a group format. Therefore, it is necessary to develop more diverse psychosocial support interventions for stress management of nurses in intensive care units caring for pediatric patients. | health systems and quality improvement
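A minimal sketch of how a standardized mean difference (Cohen's d) and its approximate sampling variance are computed from group summaries before pooling, as in the meta-analysis above. The group means, SDs and sample sizes are hypothetical placeholders:

```python
import numpy as np

def smd_with_variance(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d (standardized mean difference) and its approximate sampling variance."""
    sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    return d, var_d

# Hypothetical post-intervention stress scores (intervention vs control group).
d, var_d = smd_with_variance(mean_t=18.2, sd_t=5.1, n_t=40, mean_c=19.0, sd_c=5.4, n_c=38)
se = np.sqrt(var_d)
print(f"SMD = {d:.2f} (95% CI {d - 1.96*se:.2f} to {d + 1.96*se:.2f})")
```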
10.1101/2022.03.30.22273143 | Safety and immunogenicity of a SARS-CoV-2 recombinant protein nanoparticle vaccine (GBP510) adjuvanted with AS03: a phase 1/2, randomized, placebo-controlled, observer-blinded trial | BackgroundVaccination has helped to mitigate the COVID-19 pandemic. Ten traditional and novel vaccines have been listed by the World Health Organization for emergency use. Additional alternative approaches may better address ongoing vaccination globally, where there remains an inequity in vaccine distribution. GBP510 is a recombinant protein vaccine, which consists of self-assembling, two-component nanoparticles displaying the receptor-binding domain (RBD) in a highly immunogenic array.
MethodsWe conducted a randomized, placebo-controlled, observer-blinded, phase 1/2 trial to evaluate the safety and immunogenicity of GBP510 (2-doses at a 28-day interval) adjuvanted with or without AS03 in adults aged 19-85 years. The main outcomes included solicited and unsolicited adverse events; anti-SARS-CoV-2 RBD IgG antibody and neutralizing antibody responses; T-cell immune responses.
FindingsOf 328 participants who underwent randomization, 327 participants received at least 1 dose of vaccine. Each received either 10 μg GBP510 adjuvanted with AS03 (n = 101), 10 μg unadjuvanted GBP510 (n = 10), 25 μg GBP510 adjuvanted with AS03 (n = 104), 25 μg unadjuvanted GBP510 (n = 51), or placebo (n = 61). Most solicited adverse events were mild-to-moderate in severity and transient. Higher reactogenicity was observed in the GBP510 adjuvanted with AS03 groups compared to the non-adjuvanted and placebo groups. Reactogenicity was higher post-dose 2 compared to post-dose 1, particularly for systemic adverse events. The geometric mean concentrations of anti-SARS-CoV-2-RBD IgG antibody reached 2163.6/2599.2 BAU/mL in GBP510 adjuvanted with AS03 recipients (10 μg/25 μg) by 14 days after the second dose. Two-dose vaccination with 10 μg or 25 μg GBP510 adjuvanted with AS03 induced high titers of neutralizing antibody via pseudovirus (1369.0/1431.5 IU/mL) and wild-type virus (949.8/861.0 IU/mL) assays.
InterpretationGBP510 adjuvanted with AS03 was well tolerated and highly immunogenic. These results support further development of the vaccine candidate, which is currently being evaluated in Phase 3.
FundingCoalition for Epidemic Preparedness Innovations
RESEARCH IN CONTEXT. Evidence before this study: We searched PubMed for research articles published by December 31, 2021, using the terms "COVID-19" or "SARS-CoV-2," "vaccine," and "clinical trial." In previously reported randomized clinical trials, we found that mRNA vaccines were more immunogenic than adenovirus-vectored vaccines. Solicited adverse events were more frequent and more severe in intensity after the first dose compared to the second dose for adenovirus-vectored vaccines, whereas they increased after the second dose of mRNA or recombinant spike-protein nanoparticle vaccines.
Added value of this studyThis is the first human study evaluating the immunogenicity and safety of a recombinant SARS-CoV-2 protein nanoparticle with and without the adjuvant AS03, designed to elicit functional cross-protective responses via the receptor-binding domain (RBD). Both the 10 and 25 μg GBP510 with AS03 formulations were well tolerated with an acceptable safety profile. Potent humoral immune responses against the SARS-CoV-2 RBD were induced and peaked by day 42 (14 days after the second dose). In addition, GBP510 adjuvanted with AS03 elicited a noticeable Th1 response, with production of IFN-γ, TNF-α, and IL-2. IL-4 responses were inconsistent and IL-5 responses nearly absent across all groups.
Implications of the available evidenceThe results from this phase 1/2 trial indicate that GBP510 adjuvanted with AS03 has an acceptable safety profile with no vaccine-related serious adverse events. Two-dose immunization with GBP510 adjuvanted with AS03 induced potent humoral and cellular immune responses against SARS-CoV-2. | infectious diseases |
10.1101/2022.03.29.22272714 | Immunogenicity of Pfizer-BioNTech COVID-19 mRNA Primary Vaccination Series in Recovered Individuals Depends on Symptoms at Initial Infection. | ImportancePublic health vaccination recommendations for COVID-19 primary series and boosters in previously infected individuals differ worldwide. As infection with SARS-CoV-2 is often asymptomatic, it remains to be determined if vaccine immunogenicity is comparable in all previously infected subjects. We present detailed immunological evidence to clarify the requirements for one-or two-dose primary vaccination series for naturally primed individuals.
ObjectiveEvaluate the immune response to COVID-19 mRNA vaccines in healthcare workers (HCWs) who recovered from a SARS-CoV-2 infection.
DesignMulticentric observational prospective cohort study of HCWs with a PCR-confirmed SARS-CoV-2 infection designed to evaluate the dynamics of T and B cells immune responses to primary infection and COVID-19 mRNA vaccination over 12 months.
ParticipantsUnvaccinated HCWs with PCR-confirmed SARS-CoV-2 infection were selected based on the presence or absence of symptoms at infection and serostatus at enrollment. Age- and sex-matched adults not infected with SARS-CoV-2 prior to vaccination were included as naive controls.
ExposureVaccination with Pfizer BioNTech BNT162b2 mRNA vaccine.
Main Outcome(s) and Measure(s)Immunity score (zero to three), before and after vaccination, based on anti-RBD IgG ratio, serum capacity to neutralize live virus and IFN-γ secretion capacity in response to SARS-CoV-2 peptide pools above the positivity threshold for each of the three assays. We compared the immunity score between groups based on subjects' symptoms at diagnosis and/or serostatus prior to vaccination.
ResultsNone of the naive participants (n=14) showed a maximal immunity score of three following one dose of vaccine compared to 84% of the previously infected participants (n=55). All recovered individuals who did not have an immunity score of three were seronegative prior to vaccination, and 67% had not reported symptoms resulting from their initial infection. Following one dose of vaccine, their immune responses were comparable to naive individuals, with significantly weaker responses than those who were symptomatic during infection.
Conclusions and RelevanceIndividuals who did not develop symptoms during their initial SARS-CoV-2 infection and were seronegative prior to vaccination present immune responses comparable to that of naive individuals. These findings highlight the importance of administering the complete two-dose primary regimen and following boosters of mRNA vaccines to individuals who experienced asymptomatic SARS-CoV-2 infection.
KEY POINTS
QuestionIs a single dose of COVID-19 mRNA vaccine sufficient to induce robust immune responses in individuals with prior SARS-CoV-2 infection?
FindingsIn this cohort of 55 health care workers previously infected with SARS-CoV-2, we show that the absence of symptoms during initial infection and negative serostatus prior to vaccination predict the strength of immune responses to COVID-19 mRNA vaccine. Lack of symptoms and a negative serostatus prior to vaccination leads to immune responses comparable to naive individuals.
MeaningOur results support a two-dose primary series requirement for any individual with prior history of asymptomatic SARS-CoV-2 infection. | infectious diseases |
10.1101/2022.03.29.22272858 | Declining Course of Humoral Immune Response in Initially Responding Kidney Transplant Recipients after Repeated SARS-CoV-2 Vaccination | Immunogenicity of SARS-CoV-2 vaccines in kidney transplant recipients is limited, resulting in inadequately low serological response rates and low immunoglobulin (Ig) levels, correlating with reduced protection against death and hospitalization from COVID-19. We retrospectively examined the time course of anti-SARS-CoV-2 Ig antibody levels after up to five repeated vaccinations in 644 previously nonresponding kidney transplant recipients. Using anti-SARS-CoV-2 IgG/IgA ELISA and total Ig ECLIA assays, we compared antibody levels at 1 month with levels at 2 and 4 months, respectively. Additionally, we correlated the measurements of the assays used.
Between 1 and 2 months, and between 1 and 4 months, mean anti-SARS-CoV-2 Ig levels in responders decreased by 14% and 25%, respectively, depending on the assay. Absolute Ig values and the time course of antibody levels showed high interindividual variability. Ig levels decreased by at least 20% in 77 of 148 paired samples, with loss of sufficient serological protection over time occurring in 18 out of 148 (12.2%).
IgG ELISA and total Ig ECLIA assays showed a strong positive correlation (Kendall's tau=0.78), yet the two assays determined divergent results in 99 of 751 (13.2%) measurements. IgG and IgA assays showed overall strong correlation but divergent results in 270 of 1,173 (23.0%) cases and only weak correlation of antibody levels in positive samples.
Large interindividual variability and significant loss of serological protection after 4 months support repeated serological sampling and consideration of shorter vaccination intervals in kidney transplant recipients. | allergy and immunology
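A minimal sketch of the kind of rank correlation reported between assays above (Kendall's tau), using SciPy on hypothetical paired antibody measurements (the simulated values are placeholders, not study data):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(5)
elisa_igg = rng.lognormal(mean=1.0, sigma=0.8, size=200)                    # hypothetical IgG ELISA ratios
eclia_total_ig = elisa_igg * rng.lognormal(mean=0.0, sigma=0.4, size=200)   # correlated total Ig values

tau, pval = kendalltau(elisa_igg, eclia_total_ig)
print(f"Kendall's tau = {tau:.2f}, p = {pval:.2g}")
```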
10.1101/2022.03.29.22273013 | CAR+ and CAR- T cells differentiate into an NK-like subset that is associated with increased inflammatory cytokines following infusion | Chimeric antigen receptor (CAR) T cells have demonstrable efficacy in treating B-cell malignancies. Factors such as product composition, lymphodepletion and immune reconstitution are known to influence functional persistence of CAR+ T cells. However, little is known about the determinants of differentiation and phenotypic plasticity of CAR+ T and immune cells early post-infusion. We report single cell multi-omics analysis of molecular, clonal, and phenotypic profiles of CAR+ T and other immune cells circulating in patients receiving donor-derived products. We used these data to reconstruct a differentiation trajectory, which explained the observed phenotypic plasticity and identified the cell fate of CAR+ and CAR- T cells. Following lymphodepletion, endogenous CAR- CD8+ and γδ T cells clonally expand and differentiate across heterogeneous phenotypes, from a dominant resting or proliferating state into precursors of exhausted T cells, and notably into a terminal NK-like phenotype. In parallel, following infusion, CAR+ T cells undergo a similar differentiation trajectory, showing increased proliferation, metabolic activity and exhaustion when compared to circulating CAR- T cells. The subset of NK-like CAR+ T cells was associated with increasing levels of circulating proinflammatory cytokines, including innate-like IL-12 and IL-18. These results demonstrate that the differentiation and phenotype of CAR+ T cells are determined by non-CAR-induced signals that are shared with endogenous T cells, and condition the patient's immune recovery.
One Sentence SummaryCAR+ and CAR- CD8+ T cells share a differentiation trajectory terminating in an NK-like phenotype that is associated with increased inflammatory cytokine levels.
Graphical Abstract: [figure not reproduced here] | allergy and immunology
10.1101/2022.03.29.22272896 | Impact of Total Hip Replacements on the Incidence of Hip Fractures in Norway During 1999-2019. A NOREPOS Study | Knowledge about why hip fracture rates in Norway have declined is sparse. Concurrent with decreasing hip fracture rates, the rates of total hip replacements (THRs) have increased. We wanted to investigate whether hip fracture rates continued to decline, and whether the increase in THRs had any influence on this decline, assuming that living with a hip prosthesis precludes fracture of the operated hip. Information on hip fractures in Norway 1999-2019 was available from the Norwegian Epidemiologic Osteoporosis Studies (NOREPOS) hip fracture database, and population sizes were available in official population tables from Statistics Norway. Primary THRs (for any cause except hip fracture) 1989-2019 were obtained from the Norwegian Arthroplasty Register. We calculated the annual age-standardized incidence rates of hip fracture by sex for the period 1999-2019. The hip fracture rates in a scenario with no hip prostheses were calculated by subtracting 0.5 persons from the population at risk for each prevalent hip prosthesis, considering that each person has two hips at risk of fracture. We estimated how much of the decline could be attributed to the increased prevalence of hip prostheses. From 1999 to 2019, age-standardized incidence rates of hip fracture decreased by 27% in women and 20% in men. The rates remained stable in those under 70 years and decreased in those 70 years and above. Excluding replaced hips from the population at risk led to higher incidence rates, and this impact was considerably larger at higher ages. The increased prevalence of hip prostheses over the period accounted for approximately 18% (20% in women and 11% in men) of the observed decline in hip fracture rates. In conclusion, the incidence of hip fractures continued to decline, and the increasing number of people living with hip prostheses contributed significantly to the observed declining time trends. | epidemiology
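A minimal sketch of the "hips at risk" adjustment described above, subtracting 0.5 persons from the population at risk for each prevalent prosthesis. The counts are illustrative placeholders, not the Norwegian data:

```python
# Illustrative numbers for one age-sex stratum and year (not actual NOREPOS data).
hip_fractures = 900
population = 100_000
prevalent_prostheses = 12_000   # hips already replaced, hence not at risk of fracture

crude_rate = hip_fractures / population * 10_000
adjusted_population = population - 0.5 * prevalent_prostheses   # each person has two hips at risk
adjusted_rate = hip_fractures / adjusted_population * 10_000

print(f"Crude rate: {crude_rate:.1f} per 10,000")
print(f"Rate with replaced hips excluded from the denominator: {adjusted_rate:.1f} per 10,000")
```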
10.1101/2022.03.30.22273190 | Lessons from a pandemic | ObjectivesSeveral interventions have been used around the world in attempts to contain the SARS-CoV-2 pandemic, such as quarantine, prohibition of mass demonstrations, isolation of sick people, tracing of virus carriers, semi-containment, promotion of barrier gestures, and development of rapid self-tests and vaccines, among others. We propose a simple model to evaluate the potential impact of such interventions.
MethodsA model for the reproduction number of an infectious disease including three main contexts of infection (indoor mass events, public indoor activities and household) and seven parameters is considered. We illustrate how these parameters could be obtained from the literature or from expert assumptions, and we apply the model to describe 20 scenarios that can typically occur during the different phases of a pandemic.
ResultsThis model provides a useful framework for better understanding and communicating the effects of different (combinations of) possible interventions, while encouraging constant updating of expert assumptions to better match reality.
ConclusionThis simple approach will bring more transparency and public support to help governments to think, decide, evaluate and adjust what to do during a pandemic. | epidemiology |
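The abstract above does not give the model's functional form or its seven parameters, so the following is only a heavily hedged illustration of what a simple additive decomposition of the reproduction number across three infection contexts might look like; every parameter value and the functional form itself are assumptions for the sake of the sketch:

```python
# Purely illustrative decomposition of a reproduction number across three contexts
# (indoor mass events, public indoor activities, household). The authors' actual model
# and its seven parameters are not specified in the abstract, so everything here is assumed.
def reproduction_number(contacts, transmission_prob, infectious_days):
    """R as the sum over contexts of daily contacts x per-contact transmission probability,
    accumulated over the infectious period."""
    return infectious_days * sum(c * p for c, p in zip(contacts, transmission_prob))

baseline = reproduction_number(contacts=[0.5, 3.0, 2.0],
                               transmission_prob=[0.10, 0.05, 0.08],
                               infectious_days=5)
# Hypothetical scenario: mass events banned, masks halving transmission in public indoor settings.
scenario = reproduction_number(contacts=[0.0, 3.0, 2.0],
                               transmission_prob=[0.10, 0.025, 0.08],
                               infectious_days=5)
print(f"Baseline R = {baseline:.2f}, scenario R = {scenario:.2f}")
```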
10.1101/2022.03.30.22273113 | Lymphoid Enhancer-Binding Factor 1 (LEF1) immunostaining as a surrogate of β-Catenin (CTNNB1) mutations | Activating mutations affecting exon 3 of the β-Catenin (CTNNB1) gene result in constitutive activation of WNT signalling and are a diagnostic hallmark of several tumour entities including desmoid-type fibromatosis or define clinically relevant subtypes such as in endometrioid carcinoma. In a diagnostic setting, β-Catenin immunohistochemistry is widely used as a surrogate of CTNNB1 mutations, but is often difficult to assess in practice, given that the characteristic nuclear translocation may be focal or hard to distinguish from spillover of the normal membranous staining.
We therefore assessed Lymphoid Enhancer-Binding Factor 1 (LEF1) immunohistochemistry, a nuclear marker of WNT activation, as a potential surrogate of CTNNB1 mutations. Across a variety of entities characterised by CTNNB1 mutations as a putative driver, we found diffuse and strong expression of LEF1 in 77% of cases. In a cohort of endometrial carcinomas (n=255), LEF1 was accurate in predicting CTNNB1 mutations in 85% (p<0.001), while β-Catenin was accurate in 76% (p<0.001). Irrespective of tumour type, we found LEF1 immunostaining to be easier to interpret than β-Catenin immunostaining in 54% of cases, more difficult in 1%, and equally easy to interpret in the remainder.
We conclude that LEF1 immunostaining is a highly useful surrogate marker of CTNNB1 mutations in lesions which are driven by WNT signalling, favourably complementing β-Catenin immunohistochemistry and outperforming the latter as a single marker. | pathology
10.1101/2022.03.30.22273178 | Sleep problems effect on developmental trajectories in children with autism | The effect of sleep problems in 2- to 5-year-old children with ASD was investigated in the largest and longest observational study to date. Parents assessed the development of 7069 children quarterly for three years on five orthogonal subscales: receptive language, expressive language, sociability, sensory awareness, and health. Moderate and severe sleep problems were reported in 13% of children. Children with no sleep problems developed faster than matched children with sleep problems on all subscales. The greatest difference in trajectories was detected in the health subscale. When controlling for the health score (in addition to each subscale score at baseline as well as gender and severity), the effect of sleep problems decreased in all subscales except the combinatorial receptive language subscale (where the effect of sleep problems increased), suggesting that sleep problems affect combinatorial language acquisition irrespective of overall health. This study confirms a high prevalence of sleep problems in children with ASD and points to the need for more systematic research as an initial step in developing treatment strategies. | pediatrics
10.1101/2022.03.30.22273158 | Birth anthropometry among three Asian ethnic groups in Singapore -- new growth charts | ObjectiveWe analyse birth anthropometry of Asian babies and its socioeconomic exposures, develop gestational age and gender-specific birth anthropometry charts and compare to the widely used Fenton chart.
DesignRetrospective observational study.
SettingDepartment of Neonatology at the National University Hospital in Singapore.
Population or sampleWe report data from 52 220 Chinese, Indian and Malay infants, born from 1991-1997 and from 2010-2017 in Singapore.
MethodsThe BW, length and head circumference are each modelled against maternal exposures using a generalised additive model. Anthropometry charts are built using smoothed centile curves and compared with Fenton charts using a binomial test.
Main outcome measuresBW, head circumference, crown-heel length.
ResultsIn contrast to the marked differences in birth anthropometry among these ethnic populations, when exposed to a uniform socioeconomic environment, their intrauterine growth and birth anthropometry were almost identical. From the gestational age-specific anthropometric charts, until about late prematurity, Asian growth curves, as derived from our cohort, mirrored those of Fenton's; thereafter, Asian babies showed a marked reduction in growth velocity.
ConclusionsThese findings suggest comparative slowing of intrauterine growth among Asian babies towards term gestation. This phenomenon may be explained by two possible postulations: first, the restrictive effect of the smaller uterus of shorter Asian women towards term and, second, early maturation and senescence of the fetoplacental unit among Asians. In clinical practice, the new birth anthropometry charts will more accurately identify true fetal growth restriction as well as true postnatal growth failure in preterm infants when applied to the appropriate population.
FundingSingapore Population Health Improvement Centre (NMRC/CG/C026/2017_NUHS). | pediatrics |
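The chart-building step described in the Singapore birth anthropometry entry above (smoothed, gestational age-specific centile curves) can be approximated as below. This is a simplified stand-in on synthetic data, using weekly empirical percentiles smoothed with a low-order polynomial; the actual charts rely on GAM/LMS-type smoothing and sex stratification, and all numbers here are assumptions.

```python
import numpy as np

# Synthetic birth weights by gestational age (grams); values are illustrative only.
rng = np.random.default_rng(6)
ga_weeks = rng.integers(28, 42, size=5000)
bw_grams = 1000 + 180 * (ga_weeks - 28) + rng.normal(0, 250, size=5000)

weeks = np.arange(28, 42)
centiles = (3, 10, 50, 90, 97)
# Empirical percentile per gestational week, then a quadratic fit to smooth each curve.
raw = {c: [np.percentile(bw_grams[ga_weeks == w], c) for w in weeks] for c in centiles}
smooth = {c: np.poly1d(np.polyfit(weeks, raw[c], deg=2))(weeks) for c in centiles}

for c in centiles:
    print(f"P{c} at 34 weeks ~ {smooth[c][weeks == 34][0]:.0f} g")
```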
10.1101/2022.03.29.22273067 | Identification of novel nutrient-sensitive gene regulatory networks in amniotic fluid from fetuses with spina bifida using miRNA and transcription factor network analysis | BackgroundNeural tube defects (NTDs) remain among the most common congenital anomalies. Contributing risk factors include genetics and nutrient deficiencies, however, a comprehensive assessment of nutrient-gene interactions in NTDs is lacking. We hypothesised that multiple nutrient-gene interactions would be evident in NTD-associated gene signatures.
MethodsWe applied a novel, nutrient-focused gene expression analysis pipeline to identify nutrient-sensitive gene regulatory networks in amniocyte gene expression data (GSE4182) from fetuses with NTDs (cases; n=3) and fetuses with no congenital anomalies (controls; n=5). Differentially expressed genes (DEGs) were identified and screened for having nutrient cofactors. Transcription factors (TFs) with nutrient cofactors that regulated DEGs, and nutrient-sensitive miRNAs that had a previous link to NTDs, were identified and used to construct DEG regulatory networks.
ResultsOf the 880 DEGs in cases (vs. controls), 10% had at least one nutrient cofactor. DEG regulatory network analysis revealed that 39% and 52% of DEGs in cases were regulated by 22 nutrient-sensitive miRNAs and 10 nutrient-dependent TFs, respectively. Zinc- and B vitamin-dependent genes and gene regulatory networks (Zinc: 10 TFs targeting 50.6% of DEGs; B vitamins: 4 TFs targeting 37.7% of DEGs, 9 miRNAs targeting 17.6% of DEGs) were dysregulated in cases. Two nutrient-dependent TFs predicted to target DEGs in cases (Tumor Protein 63 and Churchill Domain Containing 1) have not been previously linked to NTDs.
ConclusionsWe identified multiple novel nutrient-sensitive gene regulatory networks associated with NTDs, which may relate to NTD pathogenesis, and indicate new targets to explore for NTD prevention or to optimise fetal development. | pediatrics |
10.1101/2022.03.29.22273110 | Effect of MAOA DNA methylation on human in vivo protein expression measured by harmine PET in healthy and depressed individuals | Epigenetic modifications, such as DNA methylation, are understood as an intermediary between environmental factors affecting disease risk and pathophysiologic changes to brain structure and function. Cerebral monoamine oxidase A (MAO-A) levels are altered in depression, as are DNA methylation levels within the MAOA gene, particularly in the promoter / exon I / intron I region. An effect of MAOA methylation on peripheral protein expression was shown, but the extent to which methylation affects brain MAO-A levels is not fully understood. Here, the influence of average and CpG site-specific MAOA promoter / exon I / intron I region DNA methylation on global MAO-A distribution volume (VT), an index of MAO-A density, was assessed via [11C]harmine positron emission tomography in 22 patients suffering from winter-type seasonal affective disorder and 30 healthy controls. No significant influence of MAOA DNA methylation on global MAO-A VT was found, despite correction for health status (patients vs. controls), sex, season (methylation analysis in spring / summer vs. fall / winter) and MAOA variable number of tandem repeat genotype (VNTR; high vs. low expression groups). However, in female subjects, season affected average DNA methylation, with higher levels in spring and summer (puncorr = 0.03). We thus did not find evidence for an effect of MAOA DNA methylation on brain MAO-A VT. In contrast to a previous study that demonstrated an effect of the methylation of a MAOA promoter region located further 5′ on brain MAO-A, in the present study MAOA methylation appears to affect brain protein levels to a limited extent. The observed effect of season on methylation levels is in accordance with extensive evidence for seasonal effects within the serotonergic system.
Clinicaltrials.gov IdentifierNCT02582398
EUDAMED NumberCIV-AT-13-01-009583 | psychiatry and clinical psychology |
10.1101/2022.03.29.22273119 | No woman should be left behind: A decomposition analysis of socioeconomic inequalities in unsafe abortion among women presenting for abortion care services in Lusaka and Copperbelt provinces of Zambia | This study measured socioeconomic-related unsafe abortion inequality among women presenting for abortion care services in Lusaka and the Copperbelt provinces of Zambia and decomposed its causes. We conducted a cross-sectional study between August and September 2021. Unsafe abortion inequality was assessed using the corrected concentration index, and an Erreygers-type decomposition analysis was conducted to assess its causes. Out of 362 women, 77 (21.3%, [95% CI: 17.8, 24.9]) had an unsafe abortion. The corrected concentration index was -0.231 (95% CI: -0.309, -0.154), implying pro-poor inequality in unsafe abortion among women. Decomposition analysis showed that the major contributors to the unsafe abortion inequality were socioeconomic status (66.6%), marital status (6.3%), education (10.2%) and employment (3.7%), together with history of unwanted pregnancy (5.1%), awareness of whether abortion is legal in Zambia (8.9%) and awareness that hospitals offered free abortion services (11.3%). The findings suggest that unsafe abortion is a problem in Zambia, with substantial inequality mainly due to socioeconomic factors. Stakeholders and policymakers should consider socioeconomic strategies to reduce unsafe abortion inequality, promoting advocacy for increased access to legal safe abortion and use of modern contraceptives, so that no woman is left behind in the prevention of unsafe abortion. | public and global health
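For readers unfamiliar with the corrected (Erreygers) concentration index used in the entry above, the sketch below shows one way to compute it for a binary outcome such as unsafe abortion. The data are synthetic and the simple rank handling is an assumption; it is not the authors' analysis code.

```python
import numpy as np

def erreygers_index(outcome, ses):
    """Erreygers-corrected concentration index for a binary outcome (e.g. unsafe
    abortion), ranked by a socioeconomic score (higher = better off). Negative
    values indicate the outcome is concentrated among the poor."""
    outcome = np.asarray(outcome, dtype=float)
    ses = np.asarray(ses, dtype=float)
    n = len(outcome)
    rank = (np.argsort(np.argsort(ses)) + 0.5) / n            # fractional SES rank
    cov = np.mean((outcome - outcome.mean()) * (rank - rank.mean()))
    # Standard concentration index C = 2*cov/mean; the Erreygers correction for a
    # variable bounded in [0, 1] multiplies by 4*mean, so E = 8*cov.
    return 8.0 * cov

# Synthetic data: poorer women are made more likely to report an unsafe abortion.
rng = np.random.default_rng(0)
ses = rng.normal(size=362)
unsafe = rng.binomial(1, 1.0 / (1.0 + np.exp(1.0 + 1.2 * ses)))
print(round(erreygers_index(unsafe, ses), 3))                 # expected to be negative
```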
10.1101/2022.03.30.22273191 | Diagnostic accuracy of three computer-aided detection systems for detecting pulmonary tuberculosis on chest radiography when used for screening: analysis of an international, multicenter migrants screening study | The aim of this study was to independently evaluate the diagnostic accuracy of three artificial intelligence (AI)-based computer-aided detection (CAD) systems for detecting pulmonary tuberculosis (TB) on chest x-ray (CXR) cases from a global migrant screening programme.
Retrospective clinical data and CXR images were collected from the International Organization for Migration (IOM) pre-migration health assessment TB screening global database for US-bound migrants. A total of 2,812 participants were included in the dataset, of which 1,769 (62.9%) had accompanying microbiological test results. All CXRs were interpreted by three CAD systems (CAD4TB v6, Lunit INSIGHT v4.9.0, and qXR v2) offline and re-interpreted by two expert radiologists in a blinded fashion. Performance was evaluated using receiver operating characteristic (ROC) curves and estimates of sensitivity and specificity at different CAD thresholds against both microbiological and radiological reference standards (MRS and RadRS, respectively).
The area under the curve against MRS was highest for Lunit (0.85; 95% CI 0.83-0.87), followed by qXR (0.75; 95% CI 0.72-0.77) and then CAD4TB (0.71; 95% CI 0.68-0.73). At a set specificity of 70%, Lunit had the highest sensitivity (54.5%; 95% CI 51.7-57.3); at a set sensitivity of 90%, specificity was also highest for Lunit (81.4%; 95% CI 77.9-84.6). The CAD systems performed comparably to the sensitivity (98.3%) and, except for CAD4TB, to the specificity (13.7%) of the expert radiologists. Similar trends were observed when using RadRS.
In conclusion, the study demonstrated that the three CAD systems had broadly similar diagnostic accuracy with regard to TB screening, and accuracy comparable to that of expert radiologists. Compared with different reference standards, Lunit performed better than both qXR and CAD4TB against MRS, and better than qXR against RadRS. Overall, these findings suggest that CAD systems could be a useful tool for TB screening programs in remote, high-TB-prevalence settings where access to expert radiologists may be limited. | radiology and imaging
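The accuracy metrics reported in the CAD entry above (AUC, and sensitivity at a pre-set specificity) can be reproduced on any score/label pair along the following lines; the scores and labels here are synthetic stand-ins rather than the study data, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic stand-ins: y_true = microbiological reference (1 = TB), score = CAD output.
rng = np.random.default_rng(1)
y_true = rng.binomial(1, 0.1, size=1769)
score = rng.normal(loc=y_true.astype(float), scale=1.0)

auc = roc_auc_score(y_true, score)
fpr, tpr, _ = roc_curve(y_true, score)

# Sensitivity achievable at a pre-set specificity of 70% (specificity = 1 - FPR).
target_spec = 0.70
sens_at_spec = tpr[fpr <= 1 - target_spec].max()
print(f"AUC = {auc:.2f}; sensitivity at {target_spec:.0%} specificity = {sens_at_spec:.1%}")
```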
10.1101/2022.03.29.22272994 | Single-cell RNA-sequencing reveals predictive features of response to pembrolizumab in Sezary syndrome | The PD-1 inhibitor pembrolizumab is effective in treating Sezary syndrome, a leukemic variant of cutaneous T-cell lymphoma. Our purpose was to investigate the effects of pembrolizumab on healthy and malignant T cells in Sezary syndrome and to discover characteristics that predict pembrolizumab response. Samples were analyzed before and after 3 weeks of pembrolizumab treatment by single-cell RNA-sequencing of 118,961 peripheral blood T cells isolated from six Sezary syndrome patients. T-cell receptor clonotyping, bulk RNA-seq signatures, and whole exome data were integrated to classify malignant T-cells and their underlying subclonal heterogeneity. We found that responses to pembrolizumab were associated with lower KIR3DL2 expression within Sezary T cells. Pembrolizumab modulated Sezary cell gene expression of T-cell activation associated genes. The CD8 effector populations included clonally expanded populations with a strong cytotoxic profile. Expansions of CD8 terminal effector and CD8 effector memory T-cell populations were observed in responding patients after treatment. We observed intrapatient Sezary cell heterogeneity including subclonal segregation of a coding mutation and copy number variation. Our study reveals differential effects of pembrolizumab in both malignant and healthy T cells. These data support further study of KIR3DL2 expression and CD8 immune populations as predictive biomarkers of pembrolizumab response in Sezary syndrome. | oncology |
10.1101/2022.03.29.22272913 | A distinct four-value blood signature of pyrexia under combination therapy of malignant melanoma with BRAF/MEK-inhibitors evidenced by an algorithm-defined pyrexia score | Pyrexia is a frequent adverse event of BRAF/MEK-inhibitor combination therapy in patients with metastasized malignant melanoma (MM). The study's objective was to identify laboratory changes which might correlate with the appearance of pyrexia. Initially, data of 38 MM patients (14 with pyrexia) treated with dabrafenib plus trametinib were analysed retrospectively. Graphical visualization of time series of laboratory values suggested that a rise in C-reactive protein, in parallel with a fall in leukocytes and thrombocytes, was indicative of pyrexia. Additionally, statistical analysis showed a significant correlation between lactate dehydrogenase (LDH) and pyrexia. An algorithm based on these observations was designed using a deductive approach in order to calculate a pyrexia score (PS) for each laboratory assessment in treated patients. A second independent data set with 28 MM patients (8 with pyrexia) was used for validation of the algorithm. PS values based on the four parameters CRP, LDH, leukocyte and thrombocyte counts were statistically significantly higher in pyrexia patients, differentiated between groups (F=20.8; p<0.0001) and showed a significant predictive value for the development of pyrexia (F=6.24; p=0.013). We provide first evidence that pyrexia in patients treated with BRAF/MEK-blockade can be identified and predicted by an algorithm that calculates a score.
SUMMARY STATEMENTPyrexia is a severe side effect of BRAF/MEK-inhibitor therapy of malignant melanoma. An algorithm based on the four laboratory values, leukocyte and thrombocyte counts, CRP and LDH, was designed for detecting pyrexia. The heuristic approach used might serve as a blueprint for analysing unevenly sampled or even incomplete clinical data. | oncology
10.1101/2022.03.25.22271678 | Androgen receptor polyQ alleles and COVID-19 severity in men: a replication study | Ample evidence indicates a sex-related difference in severity of COVID-19, with less favorable outcomes observed in men. Genetic factors have been proposed as candidates to explain this difference. The polyQ polymorphism in the androgen receptor gene has been recently described as a genetic biomarker of COVID-19 severity. In this study, we analyzed this association in a large cohort of 1136 men classified into three groups according to their degree of COVID-19 severity, finding a similar distribution of polyQ alleles among severity groups. Therefore, our results do not support the role of this polymorphism as a biomarker of COVID-19 severity. | genetic and genomic medicine |
10.1101/2022.03.29.22273138 | A proposed de-identification framework for a cohort of children presenting at a health facility in Uganda | Data sharing has enormous potential to accelerate and improve the accuracy of research, strengthen collaborations, and restore trust in the clinical research enterprise. Nevertheless, there remains reluctance to openly share raw datasets, in part due to concerns regarding research participant confidentiality and privacy. Statistical data de-identification is an approach that can be used to preserve privacy and facilitate open data sharing. We have proposed a standardized framework for the de-identification of data generated from cohort studies in children in a low- and middle-income country.
Variables were labeled as direct and quasi-identifiers based on conditions of replicability, distinguishability, and knowability with consensus from two independent evaluators. Direct identifiers were removed from the dataset, while a statistical risk-based de-identification approach using the k-anonymity model was applied to quasi-identifiers. Qualitative assessment of the level of privacy invasion associated with data set disclosure was used to determine an acceptable re-identification risk threshold and the corresponding k-anonymity requirement. A de-identification model using generalization followed by suppression was applied in a logical stepwise approach to achieve k-anonymity. The utility of the de-identified data was demonstrated using a typical clinical regression example. The de-identified dataset was published on the Pediatric Sepsis Data CoLaboratory Dataverse, which provides moderated data access.
Researchers are faced with many challenges when providing access to clinical data. We provide a standardized de-identification framework that can be adapted and refined based on specific context and risks. This process will be combined with moderated access to foster coordination and collaboration in the clinical research community.
AUTHOR SUMMARYOpen Data is data that anyone can access, use, and share. Open Data has the potential to facilitate collaboration, enrich research, and advance the analytic capacity to inform decisions. Importantly, Open Data plays a role in fulfilling obligations to research participants and honoring the nature of medical research as a public good. Leaders in industry, academia, and regulatory agencies recognize the value in increased transparency and are focusing on how to openly share data while minimizing the safety risks to research participants. For example, making data open can pose a privacy risk to research participants who have shared personal health information. This risk can be mitigated using data de-identification, a process of removing personal information from a dataset so that an individuals identity is no longer apparent or cannot be reasonably ascertained from the data. We introduce a simple, statistical risk-based framework for de-identification of clinical data that can be followed by any researcher. This framework will guide open data sharing while improving the protection of research participants. | health informatics |
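A minimal sketch of the k-anonymity check and generalisation step described in the de-identification entry above: the toy records, column names and age bands are hypothetical, and a real pipeline would add suppression and an explicit re-identification risk threshold on top of this.

```python
import pandas as pd

def smallest_group(df, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifiers; the data set is
    k-anonymous for any k no larger than this value."""
    return int(df.groupby(quasi_identifiers, dropna=False, observed=True).size().min())

# Toy records; the column names are hypothetical, not the study's variables.
records = pd.DataFrame({
    "age_months": [7, 9, 14, 15, 30, 31, 32, 33],
    "sex":        ["F", "F", "M", "M", "F", "F", "F", "F"],
    "district":   ["A", "A", "A", "A", "B", "B", "B", "B"],
})

qi = ["age_months", "sex", "district"]
print("k before generalisation:", smallest_group(records, qi))   # every record unique -> 1

# Generalisation step: coarsen exact age into bands, then re-check k.
records["age_band"] = pd.cut(records["age_months"], bins=[0, 12, 24, 36, 60],
                             labels=["0-1y", "1-2y", "2-3y", "3-5y"])
print("k after generalisation:", smallest_group(records, ["age_band", "sex", "district"]))
```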
10.1101/2022.03.29.22273101 | Estimating epidemiological quantities from repeated cross-sectional prevalence measurements | BackgroundRepeated measurements of cross-sectional prevalence of Polymerase Chain Reaction (PCR) positivity or seropositivity provide rich insight into the dynamics of an infection. The UK Office for National Statistics (ONS) Community Infection Survey publishes such measurements for SARS-CoV-2 on a weekly basis based on testing enrolled households, contributing to situational awareness in the country. Here we present estimates of time-varying and static epidemiological quantities that were derived from the estimates published by ONS.
MethodsWe used a Gaussian process to model incidence of infections and then estimated observed PCR prevalence by convolving our modelled incidence estimates with a previously published PCR detection curve describing the probability of a positive test as a function of the time since infection. We refined our incidence estimates using time-varying estimates of antibody prevalence combined with a model of antibody positivity and waning that moved individuals between compartments with or without antibodies based on estimates of new infections, vaccination, probability of seroconversion and waning.
ResultsWe produced incidence curves of infection describing the UK epidemic from late April 2020 until early 2022. We used these estimates of incidence to estimate the time-varying growth rate of infections, and combined them with estimates of the generation interval to estimate time-varying reproduction numbers. Biological parameters describing seroconversion and waning, while based on a simple model, were broadly in line with plausible ranges from individual-level studies.
ConclusionsBeyond informing situational awareness and allowing for estimates using individual-level data, repeated cross-sectional studies make it possible to estimate epidemiological parameters from population-level models. Studies or public health surveillance methods based on similar designs offer opportunities for further improving our understanding of the dynamics of SARS-CoV-2 or other pathogens and their interaction with population-level immunity. | epidemiology |
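The forward model in the prevalence entry above, convolving modelled incidence with a PCR detection curve to obtain expected PCR positivity, can be written compactly as below. The detection curve and incidence series are illustrative assumptions, and the Gaussian-process prior and antibody compartments of the actual model are omitted.

```python
import numpy as np

# Hypothetical PCR detection curve: probability of testing positive d days after
# infection (illustrative values, not the published curve the authors used).
days = np.arange(30)
p_detect = np.exp(-((days - 5) / 6.0) ** 2)
p_detect[:2] = 0.0                       # assume not detectable in the first two days

def expected_prevalence(incidence, p_detect):
    """Convolve daily infection incidence (fraction of the population infected each
    day) with the detection curve to get expected PCR-positive prevalence by day."""
    return np.convolve(incidence, p_detect)[: len(incidence)]

# Toy incidence wave over 120 days.
t = np.arange(120)
incidence = 0.002 * np.exp(-((t - 60) / 15.0) ** 2)
prevalence = expected_prevalence(incidence, p_detect)
print(f"peak incidence {incidence.max():.4f}, peak expected prevalence {prevalence.max():.4f}")
```

Inference then works in the opposite direction: the incidence curve is treated as unknown and adjusted until the implied prevalence matches the survey estimates.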
10.1101/2022.03.27.22272930 | Cancer diagnosis in primary care after second pandemic year in Catalonia: a time-series analysis of primary care electronic health records covering about five million people | BackgroundDuring the COVID-19 pandemic, the recorded incidence of chronic disease was drastically reduced due to health care interruptions. The aim of this study is to analyse cancer diagnoses during the two years of the COVID-19 pandemic.
MethodsTime-series study of malignant neoplasms, using data from the primary care electronic health records from January 2014 to December 2021. We obtained the expected monthly incidence using a time-series regression adjusted for trend and seasonality. We additionally compared cancer incidence in 2019 with that of 2020 and 2021 using the t-test. We performed analyses globally, by sex and by type of cancer.
ResultsDuring 2020, the incidence of cancer was reduced by 21% compared to 2019 (p-value <0.05). Greater reductions were observed during the lockdown in early 2020 (>40%) and for some types of cancer, especially prostate and skin cancers (-29.6% and -26.9%, respectively; p-value <0.05). Lung cancers presented statistically non-significant reductions in both years. Cancer diagnoses returned to expected levels around March 2021, and incidence in 2021 was similar to that of 2019 (overall difference of 0.21%, p=0.967). However, an 11% reduction was still found when comparing the pandemic months of 2020-2021 with the pre-pandemic months (2019-2020).
ConclusionsAlthough primary care cancer diagnostic capacity in 2021 has returned to pre-pandemic levels, missing diagnoses during the last two years have not been fully recovered.
Key messages:
- Cancer diagnoses dropped dramatically during 2020 worldwide.
- We observe a 21% decline in 2020, but a return to pre-pandemic diagnostic capacity in 2021.
- An outstanding 11% drop was still found when comparing pre-pandemic to pandemic months.
- Reductions were greater during the lockdown (>40%).
- Lung and breast cancers presented smaller reductions, while prostate and skin cancers had greater drops.
- Missing diagnoses during the last two years have not been fully recovered. | epidemiology
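The expected-incidence step in the Catalan entry above (a regression with trend and seasonality fitted to pre-pandemic months, then projected forward and compared with observed counts) might look roughly like the following. The counts are synthetic and the single harmonic seasonality term is an assumption, not the authors' exact specification.

```python
import numpy as np
import pandas as pd

# Synthetic monthly cancer diagnoses, Jan 2014 - Dec 2021 (stand-in data).
idx = pd.date_range("2014-01-01", "2021-12-01", freq="MS")
t = np.arange(len(idx))
month = idx.month.to_numpy()
rng = np.random.default_rng(2)
cases = 500 + 0.5 * t + 10 * np.sin(2 * np.pi * month / 12) + rng.normal(0, 15, len(idx))
cases[(idx >= "2020-03-01") & (idx <= "2020-06-30")] -= 200   # simulated pandemic drop

df = pd.DataFrame({"cases": cases, "t": t, "month": month}, index=idx)

def design(frame):
    """Intercept + linear trend + one annual harmonic (sin/cos of month)."""
    return np.column_stack([np.ones(len(frame)), frame["t"],
                            np.sin(2 * np.pi * frame["month"] / 12),
                            np.cos(2 * np.pi * frame["month"] / 12)])

# Fit on the pre-pandemic period only, then project expected incidence forward.
train = df[df.index < "2020-03-01"]
beta, *_ = np.linalg.lstsq(design(train), train["cases"].to_numpy(), rcond=None)
df["expected"] = design(df) @ beta
df["pct_diff"] = 100 * (df["cases"] - df["expected"]) / df["expected"]
print(df.loc["2020-03":"2020-06", ["cases", "expected", "pct_diff"]].round(1))
```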
10.1101/2022.03.26.22272727 | Association of Mass Distribution of Rapid Antigen Tests and SARS-CoV-2 Prevalence: Results from NIH-CDC funded Say Yes! Covid Test program in Michigan | ImportanceWide-spread distribution of diagnostics is an integral part of the United States COVID-19 strategy; however, few studies have assessed the effectiveness of this intervention at reducing transmission of community COVID-19.
ObjectiveTo assess the impact of the Say Yes! Covid Test (SYCT!) Michigan program, a population-based program that distributed 20,000 free rapid antigen test kits within Ann Arbor and Ypsilanti, Michigan in June-August 2021, on community prevalence of SARS-CoV-2.
DesignThis ecological study analyzed cases of SARS-CoV-2 from March to October 2021 reported to the Washtenaw County Health Department.
SettingWashtenaw County, Michigan
ParticipantsAll residents of Washtenaw County
InterventionsCommunity-wide distribution of 500,000 rapid antigen tests for SARS-CoV-2 to residents of Ann Arbor and Ypsilanti, Michigan. Each household was limited to one test kit containing 25 rapid antigen tests.
Main Outcome and MeasuresCommunity prevalence of SARS-CoV-2, as measured through 7-day average cases, in Ann Arbor and Ypsilanti was compared to the rest of Washtenaw County. A generalized additive model was fitted with non-parametric trends for the control area and for relative differences in trends across the pre-intervention, intervention, and post-intervention periods to compare the intervention municipalities of Ann Arbor and Ypsilanti to the rest of Washtenaw County. Model results were used to calculate average cases prevented in the post-intervention period.
ResultsIn the post-intervention period, there were significantly lower standardized average cases in the intervention communities of Ann Arbor/Ypsilanti compared to the rest of Washtenaw County (p<0.001). The estimated standardized relative difference between Ann Arbor/Ypsilanti and the rest of Washtenaw County was -0.016 cases per day (95% CI: -0.020 to -0.013), implying that the intervention prevented 40 average cases per day two months into the post-intervention period if trends were consistent.
Conclusions and RelevanceMass distribution of rapid antigen tests may be a useful mitigation strategy to combat community transmission of SARS-CoV-2, especially given the recent relaxation of social distancing and masking requirements. | public and global health |
10.1101/2022.03.30.22272088 | Using DeepLabCut for tracking body landmarks in videos of children with dyskinetic cerebral palsy: a working methodology | Markerless motion tracking is a promising technique to capture human movements and postures. It could be a clinically feasible tool to objectively assess movement disorders within severe dyskinetic cerebral palsy (CP). Here, we aim to evaluate tracking accuracy on clinically recorded video data.
Method94 video recordings of 33 participants (dyskinetic CP, 8-23 years; GMFCS IV-V, i.e. non-ambulatory) from a previous clinical trial were used. Twenty-second clips were cut while participants were lying down, as this is a position that allows this group of children and young adults to move freely. Video image resolution was 0.4 cm per pixel. Tracking was performed in DeepLabCut. We evaluated a model that was pre-trained on a data set of healthy human adults with an increasing number of manually labeled frames (0, 1, 2, 6, 10, 15 and 20 frames per video). To assess generalizability, we used 80% of the videos for model development and evaluated the model on the remaining 20%. For evaluation, the mean absolute error (MAE) between DeepLabCut's predictions of body point positions and the manual labels was calculated.
ResultsUsing just the pre-trained adult human model yielded an MAE of 121 pixels. An MAE of 4.5 pixels (about 1.5 cm) could be achieved by adding 15-20 manual labels. When applied to unseen video clips (i.e. the generalization set), the MAE was 33 pixels with a dedicated model trained on 20 frames per video.
ConclusionAccuracy of tracking with a standard pre-trained model is insufficient to automatically assess movement disorders in dyskinetic CP. However, manually adding labels improves the model performance substantially. In addition, the methodology proposed in our study is applicable for checking the accuracy of DeepLabCut applications on other clinical data sets. | rehabilitation medicine and physical therapy
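The evaluation metric in the DeepLabCut entry above, the mean absolute error between predicted and manually labelled landmarks converted from pixels to centimetres with the reported 0.4 cm/pixel resolution, reduces to a few lines. The toy arrays below are synthetic, and treating the per-point Euclidean distance as the error is an assumption about how the MAE was aggregated.

```python
import numpy as np

CM_PER_PIXEL = 0.4   # image resolution reported in the entry above

def mae_pixels(predicted_xy, labelled_xy):
    """Mean Euclidean error between predicted and manually labelled landmark
    positions, averaged over frames and body points (arrays [frames, points, 2])."""
    return float(np.linalg.norm(predicted_xy - labelled_xy, axis=-1).mean())

# Toy example: 20 labelled frames, 12 body points, a few pixels of jitter.
rng = np.random.default_rng(3)
labels = rng.uniform(0, 640, size=(20, 12, 2))
preds = labels + rng.normal(0, 4, size=labels.shape)
mae = mae_pixels(preds, labels)
print(f"MAE = {mae:.1f} px, i.e. about {mae * CM_PER_PIXEL:.1f} cm")
```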
10.1101/2022.03.25.22272822 | The relationship of major diseases with childlessness: a sibling matched case-control and population register study in Finland and Sweden | BackgroundApproximately 20% of men and 15% of women remain childless at the end of their reproductive lifespan, with childlessness increasing over time, yet we lack a comprehensive understanding of the role and relative importance of diseases associated with childlessness, particularly among men.
MethodsWe examined all individuals born in Finland (n=1,035,928) and Sweden (n=1,509,092) between 1956 and 1968 (men) or 1956 and 1973 (women) and followed them up until the end of 2018. Socio-demographic, health, and reproductive information was obtained from nationwide registers. We assessed the association of 414 diseases across 16 categories with having no children by age 45 (women) and 50 (men) using a matched pair case-control design based on 71,524 pairs of full-sisters and 77,622 full-brothers who were discordant for childlessness as well as a population-based approach.
FindingsMental-behavioral, congenital anomalies, and endocrine-nutritional-metabolic disorders had the strongest associations with childlessness. Novel associations were discovered with inflammatory (eg. myocarditis) and autoimmune diseases (eg. juvenile idiopathic arthritis). Mental-behavioral disorders had stronger associations amongst men, particularly for schizophrenia and acute alcohol intoxication, while congenital anomalies, obesity-related diseases such as diabetes, and inflammatory diseases had stronger associations amongst women. Associations were dependent on the age at onset of the disease, with the strongest association observed earlier in women (21-25 years old) than men (26-30 years old). For most diseases, the association with childlessness was mediated by singlehood, especially in men. Some diseases, however, remained associated with childlessness among partnered individuals, including some mood- and endocrine-nutritional-metabolic disorders. All results can be explored in an interactive online dashboard.
InterpretationWe provide evidence that disease burden across multiple domains is associated with childlessness, identifying modifiable mental-behavioral disorders and novel autoimmune and inflammatory diseases. Evidence can be used for targeted health interventions to counter decreasing fertility, reproductive health, involuntary childlessness, and shrinking populations.
FundingEuropean Research Council (835079, 945733) and The Leverhulme Trust.
Research in Context. Evidence before this study: The majority of research on infertility and childlessness has focused on socio-environmental factors, diseases related to reproduction, and examined predominantly women. Diseases are often considered separately, without a yardstick of their relative importance, and rarely examined within an entire population.
Added value of this studyThis is the first large-scale population study examining the association of 414 diseases across 16 broad categories with remaining childless, examining the entire reproductive and disease histories of 2.5 million men and women.
Implications of all the available evidenceOur study provides evidence that childlessness is associated with multiple diseases that are potentially modifiable with targeted public health interventions, particularly mental-behavioral disorders such as alcohol dependence in men or endocrine-nutritional-metabolic disorders linked to obesity and diabetes. Our broader approach revealed hitherto unknown links of childlessness with autoimmune (eg. juvenile idiopathic arthritis, multiple sclerosis, systemic lupus erythematosus) and inflammatory diseases (eg. myocarditis), warranting future studies examining the mechanisms underlying these associations. | sexual and reproductive health |
10.1101/2022.03.29.22273150 | The predictive significance of prognostic nutritional index and serum albumin/globulin ratio on the overall survival of penile cancer patients undergoing penectomy | ObjectiveTo assess the value of using the prognostic nutritional index (PNI) and serum albumin/globulin ratio (AGR) in predicting the overall survival (OS) of patients with penile cancer (PC) undergoing penectomy.
Materials and methodsA retrospective analysis was conducted of 123 patients who were admitted to our hospital for PC from April 2010 to September 2021 and underwent penectomy. The optimal cut-off values of PNI and AGR were determined by receiver operating characteristic curve analysis. Kaplan-Meier analysis and the Cox proportional hazards model were used to evaluate the correlation between PNI, AGR, and OS in patients with PC.
ResultsThe best cut-off values of PNI and AGR were set to 49.03 (95% confidence interval 0.705-0.888, Youden index=0.517, sensitivity=57.9%, specificity=93.7%, P<0.001) and 1.28 (95% confidence interval 0.610-0.860, Youden index=0.404, sensitivity=84.1%, specificity=56.2%, P=0.003). Kaplan-Meier analysis showed that the OS of the patients in the high PNI group and the high AGR group was significantly higher than that of the patients in the low PNI group and the low AGR group (P<0.001). Univariate analysis showed that patient age, clinical N stage, pathological stage, PNI, and SII are all predictors of OS in patients with PC (P<0.05). Multivariate analysis showed that pathological stage (P=0.005), PNI (P=0.021), and AGR (P=0.004) are independent prognostic factors for predicting OS in patients with PC undergoing penectomy.
ConclusionsBoth PNI score and serum AGR are independent prognostic factors for predicting OS in patients with PC undergoing penectomy. | urology |
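The ROC-derived optimal cut-offs with the Youden index reported in the penile cancer entry above can be computed as follows. The cohort below is synthetic, and negating PNI so that higher scores indicate higher risk is an implementation convenience, not the authors' code.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(event, marker):
    """Cut-off maximising Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(event, marker)
    j = tpr - fpr
    best = int(np.argmax(j))
    return thresholds[best], tpr[best], 1 - fpr[best], j[best]

# Synthetic stand-in cohort: 1 = died during follow-up; PNI is protective
# (higher values mean better nutritional status and prognosis).
rng = np.random.default_rng(4)
died = rng.binomial(1, 0.3, size=123)
pni = rng.normal(loc=np.where(died == 1, 44.0, 50.0), scale=4.0)

# Negate PNI so that higher scores indicate higher risk before building the ROC.
cut, sens, spec, j = youden_cutoff(died, -pni)
print(f"optimal PNI cut-off ~ {-cut:.1f}; sensitivity {sens:.1%}, "
      f"specificity {spec:.1%}, Youden J = {j:.2f}")
```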
10.1101/2022.03.28.22273015 | Mechanisms of central brain atrophy in multiple sclerosis | Background and objectivesThe measurement of longitudinal change in ventricular volume has been suggested as an accurate and reliable surrogate of central brain atrophy (CBA), potentially applicable to the everyday management of patients with multiple sclerosis (MS). To better understand the mechanisms underlying central brain atrophy in RRMS patients, we investigated the contribution of inflammatory activity in different lesion compartments to the enlargement of ventricular CSF volume. In addition, we investigated the role of the severity of lesional tissue damage in CBA progression.
MethodsPre- and post-gadolinium 3D-T1, 3D fluid-attenuated inversion recovery (FLAIR) and diffusion tensor images were acquired from 50 patients with relapsing MS. Lesional activity between baseline and 48 months was analysed on FLAIR images using custom-built software, which independently segmented the expanding part of chronic lesions, new confluent lesions and new free-standing lesions. The degree of lesional tissue damage was assessed by change in mean diffusivity (MD). Volumetric change of the lateral ventricles was used as a measure of central brain atrophy.
ResultsDuring follow-up, ventricles expanded on average by 12.6 ± 13.7%. There was a significant increase in total lesion volume, 69.3% of which was due to expansion of chronic lesions and 30.7% to new (confluent and free-standing) lesional activity. There was a high degree of correlation between the volume of combined lesional activity and CBA (r2=0.67), which became considerably stronger when lesion volume was adjusted for the degree of tissue damage severity (r2=0.81). Linear regression analysis explained 90% of CBA variability and revealed that chronic lesion expansion was by far the largest contributor to ventricular enlargement (standardized coefficient beta 0.68 (p<0.001) for expansion of chronic lesions compared to 0.29 (p<0.001) for confluent lesions and 0.23 (p=0.001) for free-standing new lesions). Age and baseline ventricular volume also provided significant input to the model.
DiscussionOur data suggest that central brain atrophy is almost entirely explained by the combination of the volume and severity of lesional tissue activity. Furthermore, the expansion of chronic lesions plays a central role in this process. | neurology |
10.1101/2022.03.30.22273204 | Risk models based on non-cognitive measures may identify presymptomatic Alzheimer's disease | ObjectiveExamine if non-cognitive metrics alone can be used to construct risk models to identify adults at risk for Alzheimers dementia and cognitive impairment, thus enabling early interventions.
MethodsClinical data from older adults without dementia from two harmonized cohort studies, the Memory and Aging Project (MAP, n=1179) and Religious Orders Study (ROS, n=1103), were analyzed using Cox proportional hazard models with backward variable selection to develop risk prediction models for Alzheimers dementia and cognitive impairment. Models using only non-cognitive covariates of physical function, psychosocial, health conditions and medications were compared to models that added cognitive covariates of Mini-Mental Status Examination (MMSE) and composite cognition score summarizing 17 cognitive tests. All models were trained in MAP and tested in ROS. Model performance was evaluated by the area under the curves (AUC) of receiver operating characteristic (ROC) curve.
ResultsModels based on non-cognitive covariates alone achieved AUC (0.800, 0.785) for predicting Alzheimers dementia (3, 5) years from baseline. Including additional cognitive covariates improved AUC to (0.916, 0.881). A model with a single covariate of composite cognition score achieved AUC (0.905, 0.863). Models based on non-cognitive covariates alone achieved AUC (0.717, 0.714) for predicting cognitive impairment (3, 5) years from baseline. Including additional cognitive covariates improved AUC to (0.783, 0.770). A model with a single covariate of composite cognition score achieved AUC (0.754, 0.730).
ConclusionsRisk models based on non-cognitive metrics predict both Alzheimers dementia and cognitive impairment. However, non-cognitive covariates do not provide incremental predictivity for models that include cognitive metrics in predicting Alzheimers dementia, but do in models predicting cognitive impairment. Risk models of Alzheimers dementia show more accurate predictions than the models of cognitive impairment. To ensure early interventions, it will be necessary to derive improved risk prediction models in particular for cognitive impairment. | neurology |
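A minimal sketch of the modelling strategy in the entry above: a Cox proportional hazards model on non-cognitive covariates whose discrimination is then summarised with an AUC at a fixed horizon. It assumes the lifelines and scikit-learn packages are available, uses synthetic data with hypothetical covariate names, and ignores censoring before the horizon rather than computing a proper time-dependent AUC.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_auc_score

# Synthetic cohort with hypothetical non-cognitive covariates (not the studies' variables).
rng = np.random.default_rng(5)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(78, 6, n),
    "gait_speed": rng.normal(1.0, 0.2, n),      # physical-function measure
    "depressive_sx": rng.poisson(1.5, n),       # psychosocial measure
})
risk = 0.05 * (df["age"] - 78) - 1.5 * (df["gait_speed"] - 1.0) + 0.2 * df["depressive_sx"]
t_event = rng.exponential(scale=8.0 / np.exp(risk))   # years to dementia
t_censor = rng.uniform(5, 12, n)                      # administrative censoring
df["time"] = np.minimum(t_event, t_censor)
df["event"] = (t_event <= t_censor).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")   # remaining columns are covariates

# Crude discrimination check at a 5-year horizon: keep only people whose 5-year
# status is known (followed past 5 years or with an earlier event).
horizon = 5
known = ((df["time"] >= horizon) | (df["event"] == 1)).to_numpy()
y5 = ((df["time"] <= horizon) & (df["event"] == 1)).to_numpy().astype(int)
score = np.asarray(cph.predict_partial_hazard(df)).ravel()
print("AUC at 5 years ~", round(roc_auc_score(y5[known], score[known]), 2))
```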
10.1101/2022.03.29.22273121 | Double burden of malnutrition and its social disparities among rural Sri Lankan adolescents | In Sri Lanka, the double burden of malnutrition is often neglected, and the increase in adolescent obesity is not well investigated. This study determines the double burden of malnutrition among adolescents in Anuradhapura district, exploring differences in prevalence based on different definitions. Students aged 13 to 16 years were selected from 74 schools using probability-proportionate-to-size sampling. Anthropometry was done according to WHO guidelines. Obesity was defined according to body mass index (BMI)-based definitions of the WHO, the International Obesity Task Force and Indian growth references. Central obesity was defined using Indian and British waist circumference cut-offs. Prevalence estimates from different definitions were compared using McNemar's test. Socio-demographic determinants of nutritional issues were assessed using the Chi-square test for independence. A total of 3105 students (47.7% boys) were studied (mean age 14.8 ± 0.8 years). According to WHO definitions, 73 (2.4%, 95% CI; 1.9-2.9) were obese, 222 (7.2%, 95% CI; 6.3-8.1) were overweight, 673 (21.7%, 95% CI; 20.2-23.1) were thin and 396 (12.8%, 95% CI; 11.6-14.0) were stunted. More boys than girls were obese (3.1% vs 1.7%) as well as thin (29.0% vs 15.0%). Prevalence of overweight/obesity was higher among students in larger, urban schools, and among those belonging to a high social class and with more educated parents. Prevalence estimates of overweight/obesity using IOTF-Asian and Indian thresholds were significantly higher than those from WHO and IOTF international thresholds. The double burden of malnutrition is affecting adolescents in rural Sri Lanka. Prevalence estimates of obesity largely depend on the definition used. | nutrition
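The comparison of prevalence estimates across obesity definitions in the entry above relies on McNemar's test for paired proportions; a small sketch with illustrative counts (not the study's data) is shown below, assuming statsmodels is available.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired overweight/obesity classifications of the same adolescents under two
# definitions (rows: WHO yes/no, columns: IOTF-Asian yes/no). Counts are
# illustrative only; the off-diagonal (discordant) cells drive the test.
table = np.array([[230,  15],
                  [ 95, 2765]])
result = mcnemar(table, exact=False, correction=True)
print(f"discordant pairs 15 vs 95; chi2 = {result.statistic:.1f}, p = {result.pvalue:.2g}")
```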
10.1101/2022.03.28.22272552 | Demonstration of antibodies against SARS-CoV-2, neutralizing or binding, in seroconversion panels after mRNA-1273, BNT-162b2 and Ad26.COV2.S vaccine administration. | Seroconversion panels were collected before and after vaccination with three COVID-19 vaccines: two mRNA vaccines (mRNA-1273 and BNT-162b2) and one adenovirus vector vaccine (Ad26.COV2.S). The panels were tested for antibody activity by chemiluminescent immunoassay, ELISA and one was tested in a pseudovirus neutralization assay. Participants positive for anti-SARS-CoV-2 antibodies before vaccination (18.6%) had a higher response to the first vaccine dose than participants who tested negative. For two-dose vaccines, older participants showed a lower response to the first dose than younger participants. All participants showed positive antibody responses after the second vaccine. For the adenovirus vector vaccine, two participants did not generate antibody responses two weeks and two months after vaccination. Three participants were negative at two weeks but positive at two months. Pseudovirus neutralization showed good correlation with antibody activity (correlation coefficient =0.78, p<0.0001). Antibody responses in participants over 45 years old tended to be less robust. | infectious diseases |
10.1101/2022.03.27.22271988 | Phase 3, multicentre, double-blind, randomised, parallel-group, placebo-controlled study of camostat mesilate (FOY-305) for the treatment of COVID-19 (CANDLE study) | BackgroundIn vitro drug-screening studies have indicated that camostat mesilate (FOY-305) may prevent SARS-CoV-2 infection into human airway epithelial cells. This study was conducted to investigate whether camostat mesilate is an effective treatment for SARS-CoV-2 infection (COVID-19).
MethodsThis was a phase 3, multicentre, double-blind, randomised, parallel-group, placebo-controlled study. Patients were enrolled if they were admitted to a hospital within 5 days of onset of COVID-19 symptoms or within 5 days of a positive test for asymptomatic patients. Severe cases (e.g., those requiring oxygenation/ventilation) were excluded. Patients were administered camostat mesilate (600 mg qid; four to eight times higher than the clinical doses in Japan) or placebo for up to 14 days. The primary efficacy endpoint was the time to the first two consecutive negative tests for SARS-CoV-2.
FindingsOne-hundred and fifty-five patients were randomised to receive camostat mesilate (n=78) or placebo (n=77). The median time to the first test was 11 days in both groups, and conversion to negative status was observed in 60.8% and 63.5% of patients in the camostat mesilate and placebo groups, respectively. The primary (Bayesian) and secondary (frequentist) analyses found no significant differences in the primary endpoint between the two groups. No additional safety concerns beyond those already known for camostat mesilate were identified.
InterpretationCamostat mesilate is no more effective, based on upper airway viral clearance, than placebo for treating patients with mild to moderate SARS-CoV-2 infection with or without symptoms.
FundingOno Pharmaceutical Co., Ltd.
RESEARCH IN CONTEXT PANEL. Evidence before this study: SARS-CoV-2 infection (COVID-19), as a significant global health threat, is characterised by broad symptoms and varying disease severity. At the time of planning this study, there were no specific treatments for COVID-19 beyond the use of antiviral drugs, steroids and, in severe cases, ventilation with oxygen. Pre-clinical screening studies revealed that the spike (S) protein of SARS-CoV-2 binds to angiotensin converting enzyme II (ACE2) on the host cell membrane. The S protein is then cleaved by a type II transmembrane serine protease (TMPRSS2), an essential enzyme for viral entry into host cells. In vitro drug-screening studies have shown that drugs that block binding of the S protein to ACE2 can prevent viral entry into a cell line derived from human airway epithelium. The studies identified 4-(4-guanidinobenzoyloxy)phenylacetic acid, the active metabolite of a serine protease inhibitor (camostat mesilate, FOY-305), as a candidate inhibitor of SARS-CoV-2 entry into humans. A retrospective study of critically ill COVID-19 patients with organ failure revealed a decline in disease activity within 8 days of admission among patients treated with camostat mesilate. In consideration of the preclinical and early clinical evidence, it was hypothesised that camostat mesilate is an effective treatment for patients with COVID-19. Therefore, we planned and executed a phase 3, randomised, double-blind, placebo-controlled study to investigate the efficacy and safety of camostat mesilate for the treatment of patients with mild to moderate COVID-19 infection with or without symptoms. The primary endpoint was the time to the first two consecutive negative tests for SARS-CoV-2. No controlled clinical studies of camostat mesilate had been conducted at the time of planning this study.
Added value of this studyThe results of this randomised controlled trial revealed that camostat mesilate, administered at a dose of 600 mg qid for up to 14 days, was no more effective than placebo, based on upper airway viral clearance in patients with mild to moderate SARS-CoV-2 infection with or without symptoms. Furthermore, there were no differences between the study groups in terms of other efficacy endpoints. This study used a dose that was four to eight times higher than the clinical doses of camostat mesilate used in Japan for the acute symptoms of chronic pancreatitis and postoperative reflux oesophagitis. The study identified no additional safety concerns beyond those already known for camostat mesilate.
Implications of all available evidenceAfter starting this study, another randomised, placebo-controlled study reported the efficacy and safety of camostat mesilate for the treatment of patients with COVID-19, albeit at a lower dose of 200 mg three times daily. That study also found no difference between camostat mesilate and placebo for the primary endpoint (the time to discharge or a clinical improvement in clinical severity of at least two points on a seven-point ordinal scale). Along with this evidence, our study did not support the use of camostat mesilate as a treatment option for COVID-19. However, since the administration of camostat mesilate was started after the onset of symptoms and presumably the peak viral load, we cannot exclude the possibility that camostat mesilate may be effective if administration is started earlier in the course of infection, or perhaps as prophylactic use in close contacts. | infectious diseases |
10.1101/2022.03.30.22273174 | Dupilumab use is associated with protection from COVID-19 mortality: A retrospective analysis. | We previously found that type 2 immunity promotes COVID-19 pathogenesis in a mouse model. To test relevance to human disease we used electronic health record databases and determined that patients on dupilumab (anti-IL-4R monoclonal antibody that blocks IL-13 and IL-4 signaling) at the time of COVID-19 infection had lower mortality. | infectious diseases |
10.1101/2022.03.27.22272902 | Acceptability and associated factors of indoor residual spraying for Malaria control by households in Luangwa district of Zambia: A multilevel analysis | BackgroundThe global burden of malaria has increased from 227 million cases in 2019 to 247 million cases in 2020. Indoor residual spraying (IRS) remains one of the most effective control strategies for malaria. The current study sought to measure the acceptability level and associated factors of indoor residual spraying.
MethodsA cross-sectional study was conducted from October to November 2020 in sixteen urban and rural communities of Luangwa district using a cluster sampling method. Multilevel analysis was used to account for the hierarchical structure of the data.
ResultsThe acceptability level of indoor residual spraying among household heads was relatively high at 87%. Individuals who felt the timing was not appropriate had decreased odds of accepting IRS (AOR = 0.55, 95% CI: 0.20 - 0.86). A positive attitude was associated with increased odds of accepting IRS (AOR = 29.34, 95% CI: 11.14 - 77.30). High acceptability was associated with unemployment (AOR = 1.92, 95% CI: 1.07 - 3.44). No associations were found between acceptability and community-level factors such as information, education and communication dissemination, awareness achieved through door-to-door sensitization, and public address systems.
ConclusionThe acceptability level of indoor residual spraying was relatively high among households of Luangwa District, suggesting that the intervention is widely acceptable, which is essential for reaching malaria elimination by 2030. The finding that community-level factors known to influence acceptability, such as information, education and communication as well as awareness, were not associated with acceptability suggests a need to reinforce messages related to indoor residual spraying and to redefine community sensitization approaches to make indoor residual spraying more acceptable. | infectious diseases
10.1101/2022.03.30.22273203 | COVID-19 vaccination coverage by company size and the effects of socioeconomic factors and workplace vaccination in Japan: a cohort study | BackgroundVaccination is considered the most effective control measure against COVID-19. Vaccine hesitancy and equitable vaccine allocation are important challenges to disseminating developed vaccines. To promote COVID-19 vaccination coverage, the government of Japan established the workplace vaccination program. However, while it appears that the program was effective in overcoming vaccine hesitancy, the program may have hindered the equitable allocation of vaccines because it mainly focused on employees of large companies. We investigated the relationship between company size and COVID-19 vaccination completion status of employees and the impact of the workplace vaccination program on this relationship.
MethodsWe conducted an internet-based prospective cohort study from December 2020 (baseline) to December 2021. The data were collected using a self-administered questionnaire survey. Briefly, 27,036 workers completed the questionnaire at baseline and 18,560 at follow-up. After excluding ineligible respondents, we finally analyzed the data from 15,829 participants. At baseline, the participants were asked about the size of the company they worked for, and at follow-up they were asked about the month in which they received their second COVID-19 vaccine dose and the availability of a company-arranged vaccination opportunity.
ResultsIn each month throughout the observation period, the odds of having received a second COVID-19 vaccine dose were significantly lower for small-company employees than for large-company employees in the sex- and age-adjusted model. This difference decreased after adjusting for socioeconomic factors, and there was no significant difference after adjusting for the availability of a company-arranged vaccination opportunity.
ConclusionsThe workplace vaccination program implemented in Japan to control the COVID-19 pandemic may have been effective in overcoming vaccine hesitancy in workers; however, it may have caused an inequitable allocation of vaccines between companies of different sizes. Because people who worked for small companies were less likely to be vaccinated, it will be necessary to enhance support of vaccination for this population in the event of future infectious disease outbreaks.
Trial registrationNot applicable. | infectious diseases |
10.1101/2022.03.29.22273126 | Adapted "Break The Cycle for Avant Garde" Intervention to Reduce Injection Assisting and Promoting Behaviours in People Who Inject Drugs in Tallinn, Estonia: A Pre-Post Trial. | In the context of established and emerging injection drug use epidemics, there is a need to prevent and avert injection drug use. We tested the hypothesis that an individual motivation and skills-building counselling intervention, adapted and enhanced from Hunt's Break the Cycle intervention targeting persons currently injecting drugs, would lead to a reduction in injection initiation-related behaviours among PWID in Tallinn, Estonia. For this quasi-experimental study, pre-post outcome measures included self-reported promoting behaviours (speaking positively about injecting to non-injectors, injecting in front of non-injectors, offering to give a first injection) and injection initiation behaviours (assisting with or giving a first injection) during the previous 6 months. Of 214 PWID recruited, 189 (88.3%) were retained for the follow-up at 6 months. The proportion of those who had injected in front of non-PWID significantly declined from 15.9% to 8.5%, and the proportion reporting assisting with a first injection declined from 6.4% to 1.06%. Of the current injectors retained in the study, 17.5% reported not injecting drugs at follow-up. The intervention, adapted for use in a setting with a high prevalence of HIV and a relatively low prevalence of injection assisting, proved to be effective and safe. | addiction medicine
10.1101/2022.03.30.22273211 | Visualising Emergency Department Wait Times; Rapid Iterative Testing to Determine Patient Preferences for Displays | Visualising patient wait times in emergency departments for patients and families is increasingly common, following the development of prediction models using routinely collected patient demographic, urgency and flow data. Consumers of an emergency department wait time display will have culturally and linguistically diverse backgrounds, are more likely to be from under-served populations and will have varied data literacy skills. The wait times are uncertain, the information is presented when people are emotionally and physically challenged, and the predictions may inform high stakes decisions. In such a stressful environment, simplicity is crucial and the visual language must cater to the diverse audience. When wait times are conveyed well, patient experience improves. Designers must ensure the visualisation is patient-centred and that data are consistently and correctly interpreted. In this article, we present the results of a design study at three hospitals in Melbourne, Australia, undertaken in 2021. We used rapid iterative testing and evaluation methodology, with patients and families from diverse backgrounds as participants, to develop and validate a wait time display. We present the design process and the results of this project. Patients, families and staff were eligible to participate if they were awaiting care in the emergency department, or worked in patient reception and waiting areas. The patient-centred approach taken in our design process varies greatly from past work led by hospital administrations, and the resulting visualisations are very distinct. Most currently displayed wait time visualisations could be adapted to better meet end-user needs. Also of note, we found that techniques developed by visualisation researchers for conveying temporal uncertainty tended to overwhelm the diverse audience rather than inform. There is a need to balance precise and comprehensive information presentation against the strong need for simplicity in such a stressful environment. | emergency medicine |
10.1101/2022.03.28.22272824 | Motor signatures in digitized cognitive and memory tests enhances characterization of Parkinson's disease | BackgroundAlthough there is a growing interest in using wearable sensors to characterize movement disorders, there is a lack of methodology for developing clinically interpretable kinematics biomarkers. Such digital biomarkers would provide a more objective diagnosis, capturing finer degrees of motor deficits, while retaining the information of traditional clinical tests.
ObjectivesWe aim to digitize traditional tests of cognitive and memory performance to derive motor biometrics of pen-strokes and voice, thereby complementing clinical tests with objective criteria, while enhancing the overall motor characterization of Parkinson's disease (PD).
Methods35 participants including patients with PD, healthy young and age-matched controls performed a series of drawing and memory tasks, while their pen movement and voice were digitized. We examined the moment-to-moment variability of time-series reflecting the pen speed and voice amplitude.
ResultsThe stochastic signatures of the fluctuations in pen drawing speed and voice amplitude of patients with PD show a lower noise-to-signal ratio compared to those derived from the younger and age-matched neurotypical controls. It appears that contact motions of the pen strokes on the tablet evoke sensory feedback for more immediate and predictable control in PD, compared to controls, while voice amplitude loses its neurotypical richness.
ConclusionsWe offer new standardized data types and analytics to help advance our understanding of hidden motor aspects of cognitive and memory clinical assays commonly used in Parkinsons disease. | neurology |
10.1101/2022.03.28.22272682 | Prediction of serious complications in patients with cancer and pulmonary thromboembolism: validation of the EPIPHANY Index in a prospective cohort of patients from the PERSEO Study | IntroductionThere is currently no validated score capable of classifying cancer-associated pulmonary embolism (PE) in its full spectrum of severity. This study has validated the EPIHANY Index, a new tool to predict serious complications in cancer patients with suspected or unsuspected PE.
MethodThe PERSEO Study prospectively recruited individuals with PE and cancer from 22 Spanish hospitals. The relative frequency θ of complications within each EPIPHANY Index category was estimated using the Bayesian alternative to the binomial test.
ResultsNine hundred patients diagnosed with PE between 2017/2020 were recruited. The rate of serious complications at 15 days was 11.8%, 95% highest density interval [HDI], 9.8-14.1%. Of the EPIPHANY low-risk patients, 2.4% (95% HDI, 0.8-4.6%) had serious complications, as did 5.5% (95% HDI, 2.9-8.7%) of the moderate-risk participants and 21.0% (95% HDI, 17.0-24.0%) of those with high-risk episodes. The EPIPHANY Index correlated with overall survival. Both the EPIPHANY Index and the Hestia criteria exhibited greater negative predictive value and a lower negative likelihood ratio than the remaining models. The incidence of bleeding at 6 months was 6.2% (95% HDI, 2.9-9.5%) in low/moderate-risk vs 12.7% (95% HDI, 10.1-15.4%) in high-risk (p-value=0.037) episodes. Of the outpatients, complications at 15 days were recorded in 2.1% (95% HDI, 0.7-4.0%) of the cases with EPIPHANY low/intermediate-risk vs 5.3% (95% HDI, 1.7-11.8%) in high-risk cases.
ConclusionWe have validated the EPIPHANY Index in patients with incidental or symptomatic cancer-related PE. This model can contribute to standardize decision-making in a scenario lacking quality evidence.
SummaryWe have validated the EPIPHANY Index in patients with acute, incidental, or symptomatic cancer-related PE. This predictive model of complications can contribute to standardize decision-making in a scenario lacking quality evidence. | oncology |
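The EPIPHANY validation above estimates the relative frequency θ of complications with a Bayesian alternative to the binomial test. A minimal sketch of that idea is a Beta-Binomial posterior with a highest density interval; the Jeffreys prior and the example counts below are assumptions for illustration, not study data.

```python
# Beta-Binomial posterior for a complication frequency theta with a 95% highest
# density interval (HDI). The Jeffreys prior Beta(0.5, 0.5) and the example counts
# (~11.8% serious complications among 900 episodes) are illustrative assumptions.
from scipy import stats
from scipy.optimize import minimize_scalar

def beta_hdi(a, b, mass=0.95):
    """Narrowest interval containing `mass` posterior probability for Beta(a, b)."""
    dist = stats.beta(a, b)
    width = lambda lo_tail: dist.ppf(lo_tail + mass) - dist.ppf(lo_tail)
    res = minimize_scalar(width, bounds=(0.0, 1.0 - mass), method="bounded")
    return dist.ppf(res.x), dist.ppf(res.x + mass)

events, n = 106, 900
a, b = 0.5 + events, 0.5 + (n - events)     # Jeffreys prior + data
lo, hi = beta_hdi(a, b)
print(f"theta: mean {a / (a + b):.3f}, 95% HDI [{lo:.3f}, {hi:.3f}]")
```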
10.1101/2022.03.27.22271628 | Spatial prediction of COVID-19 pandemic dynamics in the United States | BackgroundThe impact of COVID-19 across the United States has been heterogeneous, with some areas demonstrating more rapid spread and greater mortality than others. We used geographically-linked data to test the hypothesis that the risk for COVID-19 is spatially defined and sought to define which features are most closely associated with elevated COVID-19 spread and mortality.
MethodsLeveraging geographically-restricted social, economic, political, and demographic information from U.S. counties, we developed a computational framework using structured Gaussian processing to predict county-level case and death counts during both the initial and the nationwide phases of the pandemic. After identifying the most predictive spatial features, we applied an unsupervised clustering algorithm, topic modelling, to identify groups of features that are most closely associated with COVID-19 spread.
FindingsWe found that the inclusion of spatial features modeled case counts very well, with overall Pearsons correlation coefficient (PCC) and R2of 0.96 and 0.84 during the initial phase and 0.95 and 0.87, respectively, during the nationwide phase. The most frequently selected features were associated with urbanicity and 2020 presidential vote margins. When trained using death counts, models revealed similar performance metrics, with the addition of aging metrics to those most frequently selected. Topic modeling showed that counties with similar socioeconomic and demographic features tended to group together, and some feature sets were associated with COVID-19 dynamics. Unsupervised clustering of counties based on these topics revealed groups of counties that experienced markedly different COVID-19 spread.
InterpretationSpatial features explained most of the variability in COVID-19 dynamics between counties. Topic modeling can be used to group collinear features and identify counties with similar features in epidemiologic research. | health informatics |
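A rough, hypothetical stand-in for the two modelling steps described in the record above: regressing county-level case counts on geographically linked features and then grouping counties by feature profile with topic modelling. It uses generic scikit-learn components rather than the authors' structured Gaussian process framework, and the data file and column names are assumptions.

```python
# Rough stand-in for the two steps described above: (1) Gaussian process regression of
# county-level case counts on geographically linked features, (2) LDA topic modelling to
# group counties by feature profile. Generic scikit-learn components, hypothetical data.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

counties = pd.read_csv("county_features.csv")        # hypothetical: one row per county
X = counties.drop(columns=["fips", "cases"]).to_numpy()
y = np.log1p(counties["cases"].to_numpy())            # tame heavy-tailed counts

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_tr, y_tr)
print("held-out PCC:", pearsonr(gp.predict(X_te), y_te)[0])

# LDA needs non-negative, count-like input, so rank-transform each feature column first.
ranks = np.argsort(np.argsort(X, axis=0), axis=0)
topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(ranks)
counties["topic"] = topics.argmax(axis=1)              # dominant topic per county
```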
10.1101/2022.03.28.22273020 | Detection of a BA.1/BA.2 recombinant in travelers arriving in Hong Kong, February 2022 | We studied SARS-CoV-2 genomes from travelers arriving in Hong Kong from November 2021 to February 2022. Apart from detecting Omicron (BA.1, BA.1.1 and BA.2) and Delta variants, we detected a BA.1/BA.2 recombinant in two epidemiologically linked cases. This recombinant has a breakpoint near the 5' end of the Spike gene (nucleotide positions 20055-21618). | infectious diseases
10.1101/2022.03.30.22273164 | Socioeconomic inequality in the double burden of child malnutrition in the Eastern and Southern African Region | Socioeconomic inequalities in the double burden of child malnutrition threaten the Global Nutrition Targets 2025, especially in Eastern and Southern Africa. We aimed to quantify these inequalities from nationally representative household surveys in 13 Eastern and Southern African countries between 2000 and 2018. The 13 latest Demographic and Health Surveys, including 72,231 children under five years of age, were studied. Prevalence of stunting, wasting and overweight (including obesity) was disaggregated by wealth quintiles, maternal education categories and urban-rural residence for visual inspection of inequalities, and the slope index of inequality (SII) and the relative index of inequality (RII) were estimated for each country. Country-specific estimates were pooled using random-effects meta-analyses. Regional stunting and wasting prevalence was higher among children living in the poorest households, with mothers with the lowest educational level and in rural areas. In contrast, regional overweight (including obesity) prevalence was higher among children living in the richest households, with mothers with the highest educational level and in urban areas. Tackling social inequalities in the distribution of the double burden of malnutrition among children in the Eastern and Southern African region will require strategies that address the reasons socially disadvantaged children become more exposed to stunting or wasting. | nutrition
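The nutrition record above reports slope and relative indices of inequality (SII/RII) across wealth quintiles. Below is a minimal sketch of one common way to compute them, a weighted regression of subgroup prevalence on ridit scores; conventions for the direction of ranking and the RII ratio vary between studies, and the quintile values are made up.

```python
# Slope index of inequality (SII) and relative index of inequality (RII) across ordered
# wealth quintiles: regress subgroup prevalence on ridit scores (midpoints of cumulative
# population share), weighted by group size. Quintile values below are made up.
import numpy as np
import statsmodels.api as sm

prevalence = np.array([0.42, 0.38, 0.34, 0.29, 0.21])  # e.g. stunting, poorest -> richest
pop_share = np.array([0.20, 0.20, 0.20, 0.20, 0.20])

ridit = np.cumsum(pop_share) - pop_share / 2            # midpoint of cumulative share
X = sm.add_constant(ridit)
fit = sm.WLS(prevalence, X, weights=pop_share).fit()

sii = fit.params[1]                                      # absolute gap across the scale
ends = fit.predict(sm.add_constant(np.array([0.0, 1.0])))
rii = ends[0] / ends[1]                                  # predicted poorest / richest
print(f"SII = {sii:.3f}, RII = {rii:.2f}")
```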
10.1101/2022.03.31.22273257 | Acute respiratory distress syndrome after SARS-CoV-2 infection in a young adult population: international observational federated study based on electronic health records through the 4CE consortium | PurposeIn young adults (18 to 49 years old), investigation of the acute respiratory distress syndrome (ARDS) after severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection has been limited. We evaluated the risk factors and outcomes of ARDS following infection with SARS-CoV-2 in a young adult population.
MethodsA retrospective cohort study was conducted between January 1st, 2020 and February 28th, 2021 using patient-level electronic health records (EHR), across 241 United States hospitals and 43 European hospitals participating in the Consortium for Clinical Characterization of COVID-19 by EHR (4CE). To identify the risk factors associated with ARDS, we compared young patients with and without ARDS through a federated analysis. We further compared the outcomes between young and old patients with ARDS.
ResultsAmong the 75,377 hospitalized patients with positive SARS-CoV-2 PCR, 1001 young adults presented with ARDS (7.8% of young hospitalized adults). Their mortality rate at 90 days was 16.2%, and they presented with a rate of infectious complications similar to that of older adults with ARDS. Peptic ulcer disease, paralysis, obesity, congestive heart failure, valvular disease, diabetes, chronic pulmonary disease and liver disease were associated with a higher risk of ARDS. We described a high prevalence of obesity (53%), hypertension (38%; although not significantly associated with ARDS), and diabetes (32%).
ConclusionThrough an innovative federated method, a large international cohort of young adults developing ARDS after SARS-CoV-2 infection was assembled. It demonstrated the poor outcomes of this population and the associated risk factors. | intensive care and critical care medicine
10.1101/2022.04.02.22273346 | Multi-task deep autoencoder to predict Alzheimer's disease progression using temporal DNA methylation data in peripheral blood | MotivationTraditional approaches for diagnosing Alzheimer's disease (AD) such as brain imaging and cerebrospinal fluid are invasive and expensive. It is desirable to develop a useful diagnostic tool by exploiting biomarkers obtained from peripheral tissues due to their noninvasive and easily accessible characteristics. However, the capacity of DNA methylation data in peripheral blood to predict AD progression remains largely unknown. It is also challenging to develop an efficient prediction model considering the complex and high-dimensional DNA methylation data in a longitudinal study.
ResultsWe develop two multi-task deep autoencoders, based on a convolutional autoencoder and a long short-term memory autoencoder, to learn compressed feature representations by jointly minimizing the reconstruction error and maximizing the prediction accuracy. By benchmarking on longitudinal methylation data collected from peripheral blood in the Alzheimer's Disease Neuroimaging Initiative, we demonstrate that the multi-task deep autoencoders outperform state-of-the-art machine learning approaches for both predicting AD progression and reconstructing the temporal methylation profiles. In addition, the proposed multi-task deep autoencoders can predict AD progression accurately using only historical data and the performance is further improved by including all temporal data.
Availabilityhttps://github.com/lichen-lab/MTAE | genetic and genomic medicine |
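The record above describes a multi-task autoencoder trained to both reconstruct temporal methylation profiles and predict progression. The following compact PyTorch sketch illustrates that general idea; it is not the authors' MTAE implementation (see the repository linked above), and the layer sizes, input shapes and loss weight alpha are illustrative assumptions.

```python
# Compact PyTorch sketch of a multi-task LSTM autoencoder: reconstruct the temporal
# methylation profile while predicting progression from the latent code, trained on a
# weighted sum of both losses. Sizes, shapes and alpha are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskLSTMAutoencoder(nn.Module):
    def __init__(self, n_features, hidden=128, latent=32, n_classes=2):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent)
        self.decoder = nn.LSTM(latent, hidden, batch_first=True)
        self.reconstruct = nn.Linear(hidden, n_features)
        self.classify = nn.Linear(latent, n_classes)

    def forward(self, x):                               # x: (batch, time, n_features)
        _, (h, _) = self.encoder(x)
        z = self.to_latent(h[-1])                       # latent code per subject
        z_seq = z.unsqueeze(1).repeat(1, x.size(1), 1)  # feed code at every time step
        dec_out, _ = self.decoder(z_seq)
        return self.reconstruct(dec_out), self.classify(z)

model = MultiTaskLSTMAutoencoder(n_features=2000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse, ce, alpha = nn.MSELoss(), nn.CrossEntropyLoss(), 0.5

x = torch.randn(16, 4, 2000)                            # toy batch: 16 subjects, 4 visits
y = torch.randint(0, 2, (16,))                          # toy progression labels
recon, logits = model(x)
loss = alpha * mse(recon, x) + (1 - alpha) * ce(logits, y)
opt.zero_grad(); loss.backward(); opt.step()
```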
10.1101/2022.03.31.22273248 | Telehealth versus Self-Directed Lifestyle Intervention to Promote Healthy Blood Pressure: a Randomized Controlled Trial | IntroductionLifestyle behavior modification interventions, delivered using telehealth, have been shown to be effective in reducing weight. However, limited data exists on the benefits of lifestyle behavior change delivered using telehealth and web-based applications on blood pressure (BP).
MethodsWe conducted a 2-site randomized controlled trial in the Geisinger Health System (January 2019 to March 2021) to compare the efficacy of a self-guided vs. a dietitian telehealth approach using web-based applications in 187 participants with 24-hour systolic BP 120-160 mmHg and body mass index ≥ 25 kg/m2. Both arms received recommendations to improve diet based on a web-based food frequency questionnaire, and access to an online weight management program. The telehealth arm received weekly telephone calls with a dietitian who used motivational interviewing. The primary outcome was 12-week change in 24-hour systolic BP. Secondary outcomes included changes in sleep/awake systolic BP and diastolic BP, self-reported physical activity, healthy eating index (HEI)-2015 score, and weight.
ResultsA total of 187 participants (mean age 54.6 [SD 13.2] years, 52% female, 23% on BP medications, mean body mass index 34.5 [6.5] kg/m2, mean HEI-2015 score 60.8 [11.1] units) were randomized with 156 (83.4%) completing the trial. Mean 24-hour systolic BP improved from baseline to 12 weeks similarly in the dietitian (-6.73 mmHg, 95% CI: -8.64, -4.82) and the self-directed arm (-4.92, 95% CI: -7.01, -2.77; p comparing groups=0.2). The dietitian telehealth arm had greater 12-week improvements in sleep systolic BP (mean -6.92 vs. -1.45; p=0.004), sleep diastolic BP (-3.31 vs. 0.73; p=0.001), and self-reported physical activity (866 vs. -243 metabolic equivalent of task minutes/week; p=0.01). The dietitian telehealth arm also tended to have greater 12-week improvements in weight loss (-5.11 vs. -3.89 kg; p=0.1) and HEI-2015 score (9.23 vs. 6.43 units; p=0.09), though these differences were not statistically significant.
ConclusionsDietitian-led telehealth supported by web-based applications resulted in a similar reduction in 24-hour systolic BP as a self-directed approach, with secondary improvements in sleep BP and physical activity.
Trial registration numberClinicalTrials.gov Identifier NCT03700710 | cardiovascular medicine |
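A minimal sketch of the primary between-arm contrast in the trial above (12-week change in 24-hour systolic BP) using a Welch t-test on simulated change scores; the trial's actual analysis may additionally have adjusted for baseline covariates.

```python
# Welch t-test on simulated 12-week change scores in 24-hour systolic BP between the
# telehealth and self-directed arms. Change scores are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
delta_telehealth = rng.normal(-6.7, 9.0, size=78)   # hypothetical change scores, mmHg
delta_self = rng.normal(-4.9, 9.0, size=78)

t, p = stats.ttest_ind(delta_telehealth, delta_self, equal_var=False)
diff = delta_telehealth.mean() - delta_self.mean()
print(f"between-arm difference in mean change = {diff:.2f} mmHg, p = {p:.2f}")
```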
10.1101/2022.04.01.22273107 | Transmission of SARS-CoV-2 in standardised First Few X cases and household transmission investigations: a systematic review and meta-analysis | We aimed to estimate the household secondary infection attack rate (hSAR) of SARS-CoV-2 in investigations aligned with the WHO Unity Studies Household Transmission Investigations (HHTI) protocol. We conducted a systematic review and meta-analysis according to PRISMA 2020 guidelines.
We searched Medline, Embase, Web of Science, Scopus and medRxiv/bioRxiv for Unity-aligned First Few X cases (FFX) and HHTIs published between 1 December 2019 and 26 July 2021. Standardised early results were shared by WHO Unity Studies collaborators (to 1 October 2021). We used a bespoke tool to assess investigation methodological quality. Values for hSAR and 95% confidence intervals (CIs) were extracted or calculated from crude data. Heterogeneity was assessed by visually inspecting overlap of CIs on forest plots and quantified in meta-analyses.
Of 9988 records retrieved, 80 articles (64 from databases; 16 provided by Unity Studies collaborators) were retained in the systematic review and 62 were included in the primary meta-analysis. hSAR point estimates ranged from 2%-90% (95% prediction interval: 3%-71%; I2=99.7%); I2 values remained >99% in subgroup analyses, indicating high, unexplained heterogeneity and leading to a decision not to report pooled hSAR estimates.
FFX and HHTI remain critical epidemiological tools for early and ongoing characterisation of novel infectious pathogens. The large, unexplained variance in hSAR estimates emphasises the need to further support standardisation in planning, conduct and analysis, and for clear and comprehensive reporting of FFX and HHTIs in time and place, to guide evidence-based pandemic preparedness and response efforts for SARS-CoV-2, influenza and future novel respiratory viruses. | epidemiology |
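The review above pools household secondary attack rates and quantifies heterogeneity with I2. A minimal sketch of DerSimonian-Laird random-effects pooling of proportions on the logit scale follows; the per-study counts are invented for illustration, not data from the review.

```python
# DerSimonian-Laird random-effects pooling of household secondary attack rates on the
# logit scale, with Cochran's Q and I^2. Per-study counts below are invented examples.
import numpy as np

events = np.array([12, 45, 30, 80, 9])       # secondary infections per study
totals = np.array([100, 150, 60, 160, 90])   # exposed household contacts per study

p = events / totals
logit = np.log(p / (1 - p))
var = 1 / events + 1 / (totals - events)     # variance of the logit proportion

w = 1 / var
pooled_fe = np.sum(w * logit) / np.sum(w)    # inverse-variance (fixed-effect) mean
Q = np.sum(w * (logit - pooled_fe) ** 2)
df = len(p) - 1
I2 = max(0.0, (Q - df) / Q) * 100

tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance
w_re = 1 / (var + tau2)
pooled_re = np.sum(w_re * logit) / np.sum(w_re)
hsar = 1 / (1 + np.exp(-pooled_re))          # back-transform to a proportion
print(f"pooled hSAR = {hsar:.1%}, I^2 = {I2:.1f}%")
```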
10.1101/2022.04.02.22273342 | Chronotype and mortality - a 37-year follow-up study in Finnish adults | BackgroundThe only study on chronotype and mortality suggested small increases of all-cause and cardiovascular mortality in a 6.5-year follow-up. Our aim was to constructively replicate findings from it in a longer follow-up.
MethodsA questionnaire was administered to the population-based Finnish Twin Cohort in 1981 (response rate 84%, age 24-101 years). The study population included 23,854 participants who replied to the question: "Try to assess to what extent you are a morning person or an evening person", with four response alternatives (clearly a morning person, to some extent a morning person, to some extent an evening person, and clearly an evening person). Vital status and cause of death data were provided by nationwide registers up to the end of 2018. Hazard ratios and their 95% confidence intervals for mortality were computed; there were 8728 deaths over the follow-up period. Adjustments were made for education, alcohol, smoking, body mass index, and sleep length.
ResultsThe model adjusted for all covariates showed a 9% increase in the risk of all-cause mortality for the evening-type group (1.09, 1.01-1.18), with the attenuation mainly due to smoking and alcohol, as evening types had higher consumption than those with morningness chronotypes. We observed no increase in cardiovascular mortality by chronotype. No increase in mortality was seen among non-smokers who were at most light drinkers.
ConclusionsOur results from this 37-year follow-up study suggest that there is little or no independent contribution of chronotype to all-cause and cardiovascular mortality.
Key messages: There is little or no independent contribution of chronotype to mortality. The increased risk of mortality associated with eveningness appears to be mainly mediated by a larger consumption of tobacco and alcohol than in those with a morningness chronotype. It is important to be aware of the increased health risks of evening-type persons. | epidemiology
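A small sketch of the kind of covariate-adjusted Cox proportional hazards model described in the chronotype record above, using the lifelines package; the data file and column names are hypothetical, and covariates are assumed to be numerically encoded.

```python
# Covariate-adjusted Cox proportional hazards model for all-cause mortality by
# chronotype, using lifelines. The data file and column names are hypothetical and
# all covariates are assumed to be numerically encoded.
import pandas as pd
from lifelines import CoxPHFitter

cohort = pd.read_csv("twin_cohort.csv")   # hypothetical extract, one row per person
cols = ["follow_up_years", "died", "evening_type",
        "age", "sex", "education", "alcohol", "smoker", "bmi", "sleep_length"]

cph = CoxPHFitter()
cph.fit(cohort[cols], duration_col="follow_up_years", event_col="died")
cph.print_summary()                        # hazard ratios with 95% confidence intervals
```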
10.1101/2022.04.01.22273214 | Inferring the true number of SARS-CoV-2 infections in Japan | IntroductionIn Japan, as of December 31, 2021, more than 1.73 million laboratory-confirmed cases have been reported. However, the actual number of infections is likely to be under-ascertained due to the epidemiological characteristics such as mild and subclinical infections and limited testing availability in the early days of the pandemic. In this study, we infer the true number of infections in Japan between January 16, 2020, and December 31, 2021, using a statistical modelling framework that combines data on reported cases and fatalities.
MethodsWe used reported daily COVID-19 deaths stratified into 8 distinct age groups and age-specific infection fatality ratios (IFR) to impute the true number of infections. Estimates of IFR were informed by published studies as well as seroprevalence studies conducted in Japan. To account for the uncertainty in IFR estimates, we sampled values from relevant distributions.
ResultsWe estimated that as of December 31, 2021, 2.90 million (CrI: 1.77 to 4.27 million) people had been infected in Japan, 1.68 times the 1.73 million reported cases. Our meta-analysis confirmed that these findings were consistent with the intermittent seroprevalence studies conducted in Japan.
ConclusionsWe have estimated that a substantial number of COVID-19 infections in the country were unreported, particularly in adults. Our approach provides a more realistic assessment of the true underlying burden of COVID-19. The results of this study can be used as fundamental components to strengthen population health control and surveillance measures. | epidemiology |
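A simplified sketch of the imputation logic described above: age-stratified deaths divided by age-specific IFRs, with IFR uncertainty propagated by sampling. All numbers, including the assumed lognormal spread on the IFRs, are illustrative and not the study's inputs.

```python
# Divide age-stratified deaths by age-specific IFRs, sampling IFRs from an assumed
# lognormal uncertainty distribution to propagate uncertainty into the infection total.
import numpy as np

rng = np.random.default_rng(1)
deaths = np.array([5, 10, 40, 150, 600, 2500, 6000, 9000])            # deaths by age group
ifr_mean = np.array([2e-5, 2e-5, 1e-4, 3e-4, 1.2e-3, 4e-3, 1.4e-2, 4.7e-2])
cv = 0.25                                                              # assumed IFR uncertainty

draws = rng.lognormal(np.log(ifr_mean), cv, size=(10_000, len(deaths)))
infections = (deaths / draws).sum(axis=1)                              # total per draw

lo, med, hi = np.percentile(infections, [2.5, 50, 97.5])
print(f"estimated infections: {med/1e6:.2f}M (95% CrI {lo/1e6:.2f}-{hi/1e6:.2f}M)")
```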
10.1101/2022.04.03.22273209 | Association of history of metformin use with delirium and mortality: A retrospective cohort study | ObjectiveTo investigate the relationship between history of metformin use and delirium risk, as well as long-term mortality.
MethodsIn this retrospective cohort study, subjects recruited between January 2016 and March 2020 were analyzed. Logistic regression analysis was performed to investigate the relationship between metformin use and delirium. Log-rank analysis and Cox proportional hazards model were used to investigate the relationship between metformin use and 3-year mortality.
ResultsThe data from 1404 subjects were analyzed. 242 subjects were categorized into a DM-without-metformin group, and 264 subjects were categorized into a DM-with-metformin group. Prevalence of delirium was 36.0% in the DM-without-metformin group, and 29.2% in the DM-with-metformin group. A history of metformin use reduced the risk of delirium in patients with DM (OR, 0.50 [95% CI, 0.32 to 0.79]) after controlling for age, sex, and dementia status, body mass index (BMI), and insulin use. The 3-year mortality in the DM-without-metformin group (survival rate, 0.595 [95% CI, 0.512 to 0.669]) was higher than in the DM-with-metformin group (survival rate, 0.695 [95% CI, 0.604 to 0.770]) (p=0.035). A history of metformin use decreased the risk of 3-year mortality after adjustment for age, sex, Charlson Comorbidity Index, BMI, history of insulin use, and delirium status (HR, 0.69 [95% CI, 0.48 to 0.98]).
ConclusionsIt was found that metformin usage was associated with decreased delirium prevalence and lower 3-year mortality. A potential benefit of metformin on delirium risk and mortality was shown. | psychiatry and clinical psychology
10.1101/2022.04.03.22273349 | The spatial-temporal risk profiling of Clonorchis sinensis infection in South Korea: A geostatistical modeling study | Clonorchiasis is one of the major parasitic diseases in South Korea. Spatially explicit estimates of the infection risk are important for control and intervention. We conducted a systematic review of prevalence data on Clonorchis sinensis infection in South Korea. Data on potential influencing factors (e.g., environmental and socioeconomic factors) were obtained from open-access databases. Bivariate Bayesian geostatistical joint modeling approaches were applied to analyze the disease data, using a logit regression that combined potential influencing factors with spatial-temporal random effects. We identified surveys of C. sinensis infection conducted at 1362 unique locations and present the first spatial-temporal risk maps for South Korea at high spatial resolution (5x5 km). High-infection-risk areas shrank significantly from 1970 to 2017. The overall risk decreased after the start of the national deworming program in 1969, increased slightly after 1995 when the program was suspended, and remained stable after 2005 when the Clonorchiasis Eradication Program began. The population-weighted prevalence was estimated at 3.87% (95% BCI: 3.04-4.82%) in 2017, corresponding to 1.92 (95% BCI: 1.51-2.40) million infected people. Although the prevalence across the country has been low, C. sinensis infection remained endemic in parts of the eastern and southern regions, particularly the five major river basins. We also identified significantly associated factors, such as distance to the nearest open water bodies, annual precipitation, and land surface temperature at night. These findings provide important information for spatially targeted control and preventive strategies against C. sinensis infection in South Korea. | public and global health
10.1101/2022.03.31.22272398 | Blood flow restriction added to usual care exercise in patients with early weight bearing restrictions after cartilage or meniscus repair in the knee joint: A feasibility study | PurposeIn musculoskeletal rehabilitation, blood flow restriction - low load strength training (BFR-LLST) is theoretically indicated - as opposed to traditional heavy strength training - in patients who cannot or may not heavily load tissues healing from recent surgery. The main purpose was to examine the feasibility of BFR-LLST added to usual care exercise early after cartilage or meniscus repair in the knee joint.
MethodsWe included 42 patients with cartilage (n=21) or meniscus repair (n=21) in the knee joint. They attended 9 weeks of BFR-LLST added to a usual care exercise at an outpatient rehabilitation center. Outcome measures were assessed at different time points from 4 (baseline) to 26 weeks postoperatively. They included: Adherence, harms, knee joint and thigh pain, perceived exertion, thigh circumference (muscle size proxy), isometric knee-extension strength, self-reported disability and quality of life.
ResultsOn average, patients with cartilage and meniscus repair performed >84% of the total supervised BFR-LLST sessions. Thirty-eight patients reported 146 adverse events (e.g., dizziness) - none considered serious. A decrease in thigh circumference of the operated leg was not found in either group from baseline to the end of the intervention period, with no exacerbation of knee joint or quadriceps muscle pain.
ConclusionsBFR-LLST added to usual care exercise initiated early after cartilage or meniscus repair seems feasible and may prevent disuse thigh muscle atrophy during a period of weight bearing restrictions. Harms were reported, but no serious adverse events were found. Our findings are promising but need replication using RCT-design. | rehabilitation medicine and physical therapy |
10.1101/2022.04.01.22272847 | Pharmacokinetics/pharmacodynamics by Race: Analysis of a Peginterferon Beta-1a Phase I Study | BackgroundBlack/African-American participants are underrepresented in clinical trials but can experience a greater burden of disease, such as multiple sclerosis, than other racial groups in the United States. A phase 1, open-label, 2-period crossover study that demonstrated bioequivalence of subcutaneous (SC) and intramuscular (IM) injection of peginterferon beta-1a in healthy volunteers enrolled a similar proportion of Black/African-American and White participants, enabling a subgroup analysis comparing these groups.
MethodsPeginterferon beta-1a 125 μg was administered by SC or IM (1:1) injection, followed by a 28-day washout period before a second injection using the alternate method. Primary endpoints were maximum observed concentration (Cmax) and area under the concentration-time curve from Hour 0 to infinity (AUCinf). Secondary endpoints included safety, tolerability, and additional pharmacokinetic and pharmacodynamic parameters.
FindingsThis analysis included 70 (51.5%) Black and 59 (43.3%) White participants. Black participants exhibited a 29.8% higher geometric mean Cmax of peginterferon beta-1a than White participants following SC administration and demonstrated similar values following IM administration. Black participants displayed 31.0% versus 11.8% higher geometric mean AUCinf values than White participants with SC versus IM administration. Neopterin dynamics and safety signals were similar between groups, with numerically fewer adverse events reported among Black participants.
ConclusionsNo clinically meaningful differences were identified between Black and White participants in pharmacokinetics/pharmacodynamics or safety related to peginterferon beta-1a administration, indicating that no change in dosing regimen is warranted for Black patients with MS.
FundingFunding for medical writing support was provided by Biogen Inc. (Cambridge, MA, USA). | neurology |
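A brief sketch of how exposure can be compared between two subgroups, as in the record above, via the geometric mean ratio of AUCinf with a 90% confidence interval computed on the log scale, which is conventional for pharmacokinetic comparisons; the AUC values below are simulated, not trial data.

```python
# Geometric mean ratio of AUCinf between two subgroups with a 90% confidence interval,
# computed on the log scale. AUC values are simulated for illustration, not trial data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
auc_a = rng.lognormal(mean=np.log(120), sigma=0.35, size=70)  # hypothetical subgroup A
auc_b = rng.lognormal(mean=np.log(100), sigma=0.35, size=59)  # hypothetical subgroup B

la, lb = np.log(auc_a), np.log(auc_b)
diff = la.mean() - lb.mean()
se = np.sqrt(la.var(ddof=1) / len(la) + lb.var(ddof=1) / len(lb))
t_crit = stats.t.ppf(0.95, len(la) + len(lb) - 2)             # two-sided 90% CI

gmr = np.exp(diff)
ci = np.exp([diff - t_crit * se, diff + t_crit * se])
print(f"geometric mean ratio (A/B) = {gmr:.2f}, 90% CI {ci[0]:.2f}-{ci[1]:.2f}")
```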