doi | title | authors | author_corresponding | author_corresponding_institution | date | version | type | license | category | jatsxml | abstract | published | server
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10.1101/19001792 | Design of a computer model for the identification of adolescent swimmers with low BMD | Marin-Puyalto, J.; Gomez-Cabello, A.; Gomez-Bruton, A.; Matute-Llorente, A.; Gonzalez-Aguero, A.; Casajus, J. A.; Vicente-Rodriguez, G. | German Vicente-Rodriguez | Universidad de Zaragoza | 2019-07-13 | 1 | PUBLISHAHEADOFPRINT | cc_no | sports medicine | https://www.medrxiv.org/content/early/2019/07/13/19001792.source.xml | Objectives: This paper aims to develop a decision tree for the early detection of adolescent swimmers at risk of presenting low bone mineral density (BMD), based on easily measurable fitness and performance variables.
Methods: Bone mineral status of 78 adolescent swimmers was determined using DXA scans at the hip and subtotal body. Participants also underwent physical fitness (upper- and lower-body strength, running speed, and cardiovascular endurance) and performance (swimming history, speed, and ranking) assessments. A gradient boosting machine regression tree was built to predict the swimmers' BMD and to further develop a simpler individual decision tree, using a subtotal BMD height-adjusted Z-score of -1 as the threshold value.
Results: The predicted BMD using the gradient boosted model was strongly correlated with the actual BMD values obtained from DXA (r=0.960, p<0.0001), with a root mean squared error of 0.034 g/cm2. According to a simple decision tree, which showed 73.9% classification accuracy, swimmers with a body mass index (BMI) lower than 17 kg/m2 or a summed handgrip strength of both arms below 43 kg could be at higher risk of having low BMD.
Conclusion: Easily measurable fitness variables (BMI and handgrip strength) could be used for the early detection of adolescent swimmers at risk of suffering from low BMD. The presented decision tree could be used in training settings to determine the necessity of further BMD assessments.
Summary box: What are the new findings?
- Adolescent swimmers with a low BMI or handgrip strength seem to be at higher risk of having low BMD.
- Subtotal BMD values predicted from our regression model are strongly correlated with DXA measurements.
How might it impact on clinical practice in the future?
- Healthcare professionals could easily detect adolescent swimmers in need of a DXA scan.
- The computer-based regression tree could be included in low BMD management and screening strategies. | null | medrxiv |
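The decision rule in the record above reduces to two easily measured thresholds, which makes it simple to express in code. The following is a minimal sketch of that rule in Python; the function name and argument conventions (BMI in kg/m2, summed handgrip of both arms in kg) are illustrative assumptions, not the authors' published implementation.

```python
def low_bmd_risk(bmi: float, handgrip_sum_kg: float) -> bool:
    """Flag adolescent swimmers who may warrant a DXA referral.

    Encodes the simple decision tree reported above: BMI < 17 kg/m^2 or a
    summed handgrip strength of both arms < 43 kg suggests higher risk of
    low bone mineral density (reported classification accuracy: 73.9%).
    """
    return bmi < 17.0 or handgrip_sum_kg < 43.0

# Example: a swimmer with BMI 16.4 kg/m^2 and 45 kg combined grip strength
print(low_bmd_risk(16.4, 45.0))  # True -> consider further BMD assessment
```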
10.1101/19001859 | Quantification of Conflicts of Interest in an Online Point-of-Care Clinical Support Website | Chopra, A. C.; Tilberry, S.; Sternat, K. E.; Chung, D. Y.; Nichols, S. D.; Piper, B. J. | Brian J Piper | Geisinger Commonwealth School of Medicine | 2019-07-14 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | medical ethics | https://www.medrxiv.org/content/early/2019/07/14/19001859.source.xml | Online medical reference websites are utilized by health care providers to enhance their education and decision making. However, these resources may not adequately reveal pharmaceutical-author interactions and their potential conflicts of interest (CoIs). This investigation: 1) evaluates the correspondence of two well-utilized CoI databases: the Centers for Medicare and Medicaid Services Open Payments (CMSOP) and ProPublica's Dollars for Docs (PDD), and 2) quantifies CoIs among authors of a publicly available point-of-care clinical support website. Two data sources were used: the hundred most common drugs and the top fifty causes of death. These topics were entered into a freely available database. The authors (N = 139) were then input into CMSOP and PDD, and compensation and number of payments were determined for 2013-2015. The subset of highly compensated authors that also reported "Nothing to disclose" was further examined. There was a high degree of similarity between CMSOP and PDD for compensation (R2 ≥ 0.998) and payment number (R2 ≥ 0.992). The amount received was 1.4% higher in CMSOP ($4,059,194) than in PDD ($4,002,891). The articles where the authors had received the greatest compensation were in neurology (Parkinson's Disease = $1,810,032), oncology (Acute Lymphoblastic Leukemia = $616,727), and endocrinology (Type I Diabetes = $377,388). Two authors reporting "Nothing to disclose" received appreciable and potentially relevant compensation. CMSOP and PDD produced almost identical results. CoIs were common among authors, but self-reporting may be an inadequate reporting mechanism. Recommendations are offered for improving the CoI transparency of pharmaceutical-author interactions in point-of-care electronic resources. | 10.1007/s11948-019-00153-9 | medrxiv |
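The headline agreement statistic in the record above (R2 ≥ 0.998 between CMSOP and PDD) is just the squared correlation of per-author compensation across the two databases. A minimal sketch of that comparison, assuming toy per-author payment vectors rather than the study's actual data:

```python
import numpy as np

# Toy per-author compensation (USD) from the two CoI databases (invented values)
cmsop = np.array([1200.0, 54000.0, 310.0, 9800.0, 150000.0, 0.0])
pdd = np.array([1180.0, 53950.0, 305.0, 9700.0, 149500.0, 0.0])

r = np.corrcoef(cmsop, pdd)[0, 1]  # Pearson correlation between the two sources
print(f"R^2 = {r ** 2:.4f}")       # near-1 values mirror the reported R^2 >= 0.998
```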
10.1101/19001867 | Actigraphic Screening for Rapid Eye Movement Sleep Behavior Disorder | Sandala, K.; Dostalova, S.; Nepozitek, J.; Ibarburu, V.; Dusek, P.; Ruzicka, E.; Sonka, K.; Kemlink, D. | David Kemlink | First Faculty of Medicine, Charles University in Prague | 2019-07-14 | 1 | PUBLISHAHEADOFPRINT | cc_no | neurology | https://www.medrxiv.org/content/early/2019/07/14/19001867.source.xml | Background: Patients suffering from rapid eye movement sleep behavior disorder (RBD) are at high risk of developing a neurodegenerative disorder, most frequently from the group of alpha-synucleinopathies, such as Parkinson's disease (PD), Dementia with Lewy Bodies (DLB), or multiple system atrophy (MSA). The definitive diagnosis of RBD is based on polysomnographic investigation. Actigraphy is much easier to perform and reflects the patient's condition in the home environment.
Aims: The aim of this study was to find suitable biomarkers for RBD that are detectable by actigraphic recording.
Methods: High-resolution actigraphic recording (MotionWatch, CamNtech Ltd.) and confirmatory polysomnographic recording were performed on 45 RBD patients, 30 patients with other sleep-related motor disorders, and 20 healthy controls. Each recording was analysed by software for amount of sleep (MotionWare 1.1.20) and for periodic motor activity (PLMS analysis 1.0.16). The 13-item patient self-rating RBD screening questionnaire (RBD-SQ), translated into Czech, was also used for screening purposes. We used an RBD-SQ score of five points as a positive test result, as suggested by the original publication of the scale.
Results: When using actigraphic sleep detection, we encountered significant differences mostly on the non-dominant hand, related to sleep fragmentation: most notably an increased percentage of Short immobile bouts (47.0% vs. 28.0%, p<0.0001), an increased Fragmentation index (72.5 vs. 40.7, p<0.0001), and a decreased Sleep efficiency (72.1% vs. 86.8%, p<0.0001) in RBD subjects compared to other sleep disorders and controls. When analyzing periodic motor activity, we also, surprisingly, found more periodic hand movements (p=0.028, corrected for multiple testing), but differences on the lower extremities were not significant with either measurement. A discrimination function based on the RBD-SQ and the Short immobile bouts percentage correctly allocated RBD status in 87.6% of cases (Wilks' lambda 0.435, p<0.0001).
Conclusion: In our single-center study of patients from the Czech population, we found that actigraphic recording from the upper extremities shows consistently more prominent sleep fragmentation in RBD patients compared to those with other sleep diagnoses or healthy controls. Actigraphy may be useful in broader screening for RBD. | null | medrxiv |
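The two-variable discrimination function above (RBD-SQ score plus the percentage of short immobile bouts) behaves like a linear discriminant, so a small scikit-learn LDA sketch conveys the idea; the feature arrays below are invented toy values, and LDA is an assumed stand-in for the authors' exact discriminant procedure.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy features: [RBD-SQ score (0-13), short immobile bouts %] (invented values)
X = np.array([[7, 48.0], [9, 52.0], [3, 27.0], [2, 30.0], [8, 45.0], [4, 29.0]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = RBD, 0 = other sleep disorder / control

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[6, 50.0]]))  # predicted group for a new recording
print(lda.score(X, y))           # resubstitution accuracy (the paper reports 87.6%)
```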
10.1101/19001909 | Mediterranean Diet improves thrombosis biomarkers in high cardiovascular risk individuals: a randomized controlled trial | Hernaez, A.; Castaner, O.; Tresserra-Rimbau, A.; Pinto, X.; Fito, M.; Casas, R.; Martinez-Gonzalez, M. A.; Corella, D.; Salas-Salvado, J.; Lapetra, J.; Gomez-Gracia, E.; Aros, F.; Fiol, M.; Serra-Majem, L.; Ros, E.; Estruch, R. | Alvaro Hernaez | August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona | 2019-07-14 | 1 | PUBLISHAHEADOFPRINT | cc_no | nutrition | https://www.medrxiv.org/content/early/2019/07/14/19001909.source.xml | Scope: To assess whether following a Mediterranean diet (MedDiet) improves atherothrombosis biomarkers in high cardiovascular risk individuals.
Methods and results: In 358 randomly selected volunteers from the PREDIMED trial (Prevencion con Dieta Mediterranea), we assessed the 1-year effects on atherothrombosis markers of an intervention with MedDiet, enriched with virgin olive oil (MedDiet-VOO; N=120) or nuts (MedDiet-Nuts; N=119), versus a low-fat control diet (N=119). In a secondary, observational approach, we studied whether large increments in MedDiet adherence (>2 score points) were associated with 1-year improvements in biomarkers (relative to volunteers worsening their adherence). The MedDiet-VOO intervention increased platelet activating factor-acetylhydrolase activity in high-density lipoproteins (HDLs) by 7.5% [95% confidence interval: 0.17; 14.8] and decreased HDL-bound α1-antitrypsin levels by 6.1% [-11.8; -0.29]. The MedDiet-Nuts intervention reduced non-esterified fatty acid concentrations by 9.3% [-18.1; -0.53]. Only the low-fat diet was associated with increases in platelet factor-4 and prothrombin fragment 1+2 levels versus baseline (P=0.012 and P=0.003, respectively, according to Wilcoxon signed-rank tests). Finally, large MedDiet increments were associated with lower fibrinogen (-9.5% [-18.3; -0.60]) and non-esterified fatty acid concentrations (-16.7% [-31.7; -1.74]).
Conclusion: Following a MedDiet improves atherothrombosis biomarkers in high cardiovascular risk individuals. | 10.1002/mnfr.202000350 | medrxiv |
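The within-group comparisons in the record above (baseline versus one year in the low-fat arm) rely on Wilcoxon signed-rank tests. A minimal scipy sketch of that paired test follows; the biomarker values are invented placeholders.

```python
from scipy.stats import wilcoxon

# Toy paired biomarker values (e.g., platelet factor-4) at baseline and 1 year
baseline = [3.1, 2.8, 4.0, 3.5, 2.9, 3.7, 3.3, 4.1]
one_year = [3.6, 3.0, 4.4, 3.9, 3.1, 4.0, 3.8, 4.5]

stat, p = wilcoxon(baseline, one_year)  # two-sided paired signed-rank test
print(f"W = {stat}, p = {p:.3f}")
```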
10.1101/19001909 | Mediterranean Diet improves thrombosis biomarkers in high cardiovascular risk individuals: a randomized controlled trial | Hernaez, A.; Castaner, O.; Tresserra-Rimbau, A.; Pinto, X.; Fito, M.; Casas, R.; Martinez-Gonzalez, M. A.; Corella, D.; Salas-Salvado, J.; Lapetra, J.; Gomez-Gracia, E.; Aros, F.; Fiol, M.; Serra-Majem, L.; Ros, E.; Estruch, R. | Alvaro Hernaez | August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona | 2019-08-22 | 2 | PUBLISHAHEADOFPRINT | cc_no | nutrition | https://www.medrxiv.org/content/early/2019/08/22/19001909.source.xml | Scope: To assess whether following a Mediterranean diet (MedDiet) improves atherothrombosis biomarkers in high cardiovascular risk individuals.
Methods and results: In 358 randomly selected volunteers from the PREDIMED trial (Prevencion con Dieta Mediterranea), we assessed the 1-year effects on atherothrombosis markers of an intervention with MedDiet, enriched with virgin olive oil (MedDiet-VOO; N=120) or nuts (MedDiet-Nuts; N=119), versus a low-fat control diet (N=119). In a secondary, observational approach, we studied whether large increments in MedDiet adherence (>2 score points) were associated with 1-year improvements in biomarkers (relative to volunteers worsening their adherence). The MedDiet-VOO intervention increased platelet activating factor-acetylhydrolase activity in high-density lipoproteins (HDLs) by 7.5% [95% confidence interval: 0.17; 14.8] and decreased HDL-bound α1-antitrypsin levels by 6.1% [-11.8; -0.29]. The MedDiet-Nuts intervention reduced non-esterified fatty acid concentrations by 9.3% [-18.1; -0.53]. Only the low-fat diet was associated with increases in platelet factor-4 and prothrombin fragment 1+2 levels versus baseline (P=0.012 and P=0.003, respectively, according to Wilcoxon signed-rank tests). Finally, large MedDiet increments were associated with lower fibrinogen (-9.5% [-18.3; -0.60]) and non-esterified fatty acid concentrations (-16.7% [-31.7; -1.74]).
Conclusion: Following a MedDiet improves atherothrombosis biomarkers in high cardiovascular risk individuals. | 10.1002/mnfr.202000350 | medrxiv |
10.1101/19001909 | Effects of a Mediterranean Diet and physical activity on atherothrombosis biomarkers in high cardiovascular risk individuals | Hernaez, A.; Castaner, O.; Tresserra-Rimbau, A.; Pinto, X.; Fito, M.; Casas, R.; Martinez-Gonzalez, M. A.; Corella, D.; Salas-Salvado, J.; Lapetra, J.; Gomez-Gracia, E.; Aros, F.; Fiol, M.; Serra-Majem, L.; Ros, E.; Estruch, R. | Alvaro Hernaez | August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona | 2020-02-21 | 3 | PUBLISHAHEADOFPRINT | cc_no | nutrition | https://www.medrxiv.org/content/early/2020/02/21/19001909.source.xml | Scope: To assess whether following a Mediterranean diet (MedDiet) improves atherothrombosis biomarkers in high cardiovascular risk individuals.
Methods and results: In 358 randomly selected volunteers from the PREDIMED trial (Prevencion con Dieta Mediterranea), we assessed the 1-year effects on atherothrombosis markers of an intervention with MedDiet, enriched with virgin olive oil (MedDiet-VOO; N=120) or nuts (MedDiet-Nuts; N=119), versus a low-fat control diet (N=119). In a secondary, observational approach, we studied whether large increments in MedDiet adherence (>2 score points) were associated with 1-year improvements in biomarkers (relative to volunteers worsening their adherence). The MedDiet-VOO intervention increased platelet activating factor-acetylhydrolase activity in high-density lipoproteins (HDLs) by 7.5% [95% confidence interval: 0.17; 14.8] and decreased HDL-bound α1-antitrypsin levels by 6.1% [-11.8; -0.29]. The MedDiet-Nuts intervention reduced non-esterified fatty acid concentrations by 9.3% [-18.1; -0.53]. Only the low-fat diet was associated with increases in platelet factor-4 and prothrombin fragment 1+2 levels versus baseline (P=0.012 and P=0.003, respectively, according to Wilcoxon signed-rank tests). Finally, large MedDiet increments were associated with lower fibrinogen (-9.5% [-18.3; -0.60]) and non-esterified fatty acid concentrations (-16.7% [-31.7; -1.74]).
Conclusion: Following a MedDiet improves atherothrombosis biomarkers in high cardiovascular risk individuals. | 10.1002/mnfr.202000350 | medrxiv |
10.1101/19001909 | Mediterranean Diet and atherothrombosis biomarkers: a randomized controlled trial | Hernaez, A.; Castaner, O.; Tresserra-Rimbau, A.; Pinto, X.; Fito, M.; Casas, R.; Martinez-Gonzalez, M. A.; Corella, D.; Salas-Salvado, J.; Lapetra, J.; Gomez-Gracia, E.; Aros, F.; Fiol, M.; Serra-Majem, L.; Ros, E.; Estruch, R. | Alvaro Hernaez | August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona | 2020-04-15 | 4 | PUBLISHAHEADOFPRINT | cc_no | nutrition | https://www.medrxiv.org/content/early/2020/04/15/19001909.source.xml | Scope: To assess whether following a Mediterranean diet (MedDiet) improves atherothrombosis biomarkers in high cardiovascular risk individuals.
Methods and results: In 358 randomly selected volunteers from the PREDIMED trial (Prevencion con Dieta Mediterranea), we assessed the 1-year effects on atherothrombosis markers of an intervention with MedDiet, enriched with virgin olive oil (MedDiet-VOO; N=120) or nuts (MedDiet-Nuts; N=119), versus a low-fat control diet (N=119). In a secondary, observational approach, we studied whether large increments in MedDiet adherence (>2 score points) were associated with 1-year improvements in biomarkers (relative to volunteers worsening their adherence). The MedDiet-VOO intervention increased platelet activating factor-acetylhydrolase activity in high-density lipoproteins (HDLs) by 7.5% [95% confidence interval: 0.17; 14.8] and decreased HDL-bound α1-antitrypsin levels by 6.1% [-11.8; -0.29]. The MedDiet-Nuts intervention reduced non-esterified fatty acid concentrations by 9.3% [-18.1; -0.53]. Only the low-fat diet was associated with increases in platelet factor-4 and prothrombin fragment 1+2 levels versus baseline (P=0.012 and P=0.003, respectively, according to Wilcoxon signed-rank tests). Finally, large MedDiet increments were associated with lower fibrinogen (-9.5% [-18.3; -0.60]) and non-esterified fatty acid concentrations (-16.7% [-31.7; -1.74]).
Conclusion: Following a MedDiet improves atherothrombosis biomarkers in high cardiovascular risk individuals. | 10.1002/mnfr.202000350 | medrxiv |
10.1101/19001842 | Effects of vitamin D supplementation and seasonality on circulating cytokines in adolescents: analysis of data from a feasibility trial in Mongolia. | Yegorov, S.; Bromage, S.; Boldbaatar, N.; Ganmaa, D. | Sergey Yegorov | Suleyman Demirel University | 2019-07-14 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | nutrition | https://www.medrxiv.org/content/early/2019/07/14/19001842.source.xml | Vitamin D deficiency is prevalent in human populations and has been linked to immune dysfunction. Here we explored the effects of cholecalciferol supplementation on circulating cytokines in severely vitamin D deficient (blood 25(OH)D3 << 30 nmol/L) adolescents aged 12-15 from Mongolia. The study included 28 children receiving 800 IU daily cholecalciferol for 6 months spanning winter and spring, and 30 children receiving placebo during the same period. The levels of 25(OH)D3 were assessed at baseline, three and six months. Twenty-one cytokines were measured in serum at baseline and at six months. The median blood 25(OH)D3 concentration at baseline was 13.7 nmol/L (IQR=10.0-21.7). Supplementation tripled blood 25(OH)D3 levels (p<0.001) and reversed the direction of change for most cytokines (16/21, 86%). Supplementation was associated with elevated interleukin (IL)-6 (p=0.043). The placebo group had reduced MIP-1 (p=0.007) and IL-8 (p=0.034) at six months. These findings suggest that cholecalciferol supplementation and seasonality have a measurable impact on circulating cytokines in adolescents, identifying chemokines as potentially important biomarkers of vitamin D status in this population.
ClinicalTrials.gov ID: NCT01244204 | 10.3389/fnut.2019.00166 | medrxiv |
10.1101/19001917 | Reproducibility and transparency characteristics of oncology research evidence | Walters, C. G.; Harter, Z. J.; Wayant, C.; Vo, N.; Warren, M.; Chronister, J.; Tritz, D.; Vassar, M. | Corbin G Walters | Oklahoma State University Center for Health Sciences | 2019-07-14 | 1 | PUBLISHAHEADOFPRINT | cc_by | oncology | https://www.medrxiv.org/content/early/2019/07/14/19001917.source.xml | Introduction: As much as 50%-90% of research is estimated to be irreproducible, costing upwards of $28 billion in the United States alone. Reproducible research practices, such as pre-registering studies, publishing a protocol, making research data and metadata publicly available, and publishing in open access journals, are essential to improving the reproducibility and transparency of biomedical research. Here we report an investigation of key reproducible or transparent research practices in the published oncology literature.
Methods: We performed a cross-sectional analysis of a random sample of 300 oncology studies published from 2014-2018. We extracted key reproducibility and transparency characteristics in duplicate, with blinded investigators using a pilot-tested Google Form.
Results: Of the 300 studies randomly sampled, 296 were analyzed for study reproducibility characteristics. Of these 296 studies, 194 contained empirical data that could be analyzed for reproducible and transparent research practices. Raw data were available for 9 studies (4.6%). Five studies (2.6%) provided a protocol. Despite our sample including 15 clinical trials and 7 systematic reviews/meta-analyses, only 7 included a pre-registration statement. A minority of studies (65/194) provided an author conflict of interest statement.
Discussion: We found that key reproducibility and transparency characteristics were absent from a random sample of published oncology studies. We recommend required pre-registration for all eligible trials and systematic reviews, published protocols for all manuscripts, and deposition of raw data and metadata in public repositories. | 10.1136/bmjopen-2019-033962 | medrxiv |
10.1101/19001958 | Redefining typhoid diagnosis: what would an improved test need to look like? | Mather, R.; Hopkins, H.; Parry, C.; Dittrich, S. | Richard Mather | London School of Hygiene and Tropical Medicine | 2019-07-14 | 1 | PUBLISHAHEADOFPRINT | cc_by | infectious diseases | https://www.medrxiv.org/content/early/2019/07/14/19001958.source.xml | Introduction: Typhoid fever is one of the most common bacterial causes of acute febrile illness in the developing world, with an estimated 10.9 million new cases and 116,800 deaths in 2017. Typhoid point-of-care (POC) diagnostic tests are widely used but have poor sensitivity and specificity, resulting in antibiotic overuse that has led to the emergence and spread of multidrug-resistant strains. With recent advances in typhoid surveillance and detection, this is the ideal time to produce a target product profile (TPP) that guides product development and ensures that a next-generation test meets the needs of users in the resource-limited settings where typhoid is endemic.
Methods: A structured literature review was conducted to develop a draft TPP for a next-generation typhoid diagnostic test, with minimal and optimal desired characteristics for 36 test parameters. The TPP was refined using feedback collected from a Delphi survey of key stakeholders in clinical medicine, microbiology, diagnostics, and public and global health.
Results: A next-generation typhoid diagnostic test should improve patient management through the diagnosis and treatment of acute infection with Salmonella enterica serovars Typhi or Paratyphi, with a sensitivity ≥90% and specificity ≥95%. The test would ideally be used at the lowest level of the healthcare system in settings without a reliable power or water supply and provide results in less than 15 minutes at a cost of <$1.00 USD.
Conclusion: This report outlines the first comprehensive TPP for typhoid fever and is intended to guide the development of a next-generation typhoid diagnostic test. An accurate POC test will reduce the morbidity and mortality of typhoid fever through rapid diagnosis and treatment, and will have the greatest impact in reducing antimicrobial resistance if it is combined with diagnostics for other causes of acute febrile illness in a treatment algorithm. | 10.1136/bmjgh-2019-001831 | medrxiv |
10.1101/19001875 | Earliest infections predict the age distribution of seasonal influenza A cases | Arevalo, P.; McLean, H. Q.; Belongia, E. A.; Cobey, S. | Philip Arevalo | Department of Ecology and Evolution, University of Chicago | 2019-07-14 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/07/14/19001875.source.xml | Seasonal variation in the age distribution of influenza A cases suggests that factors other than age shape susceptibility to medically attended infection. We ask whether these differences can be partly explained by protection conferred by childhood influenza infection, which has lasting impacts on immune responses to influenza and protection against new influenza A subtypes (phenomena known as original antigenic sin and immune imprinting). Fitting a statistical model to data from studies of influenza vaccine effectiveness (VE), we find that primary infection appears to reduce the risk of medically attended infection with that subtype throughout life. This effect is stronger for H1N1 compared to H3N2. Additionally, we find evidence that VE varies with both age and birth year, suggesting that VE is sensitive to early exposures. Our findings may improve estimates of age-specific risk and VE in similarly vaccinated populations and thus improve forecasting and vaccination strategies to combat seasonal influenza. | 10.7554/eLife.50060 | medrxiv |
10.1101/19001875 | Earliest infections predict the age distribution of seasonal influenza A cases | Arevalo, P.; McLean, H. Q.; Belongia, E. A.; Cobey, S. | Philip Arevalo | Department of Ecology and Evolution, University of Chicago | 2019-09-08 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/09/08/19001875.source.xml | Seasonal variation in the age distribution of influenza A cases suggests that factors other than age shape susceptibility to medically attended infection. We ask whether these differences can be partly explained by protection conferred by childhood influenza infection, which has lasting impacts on immune responses to influenza and protection against new influenza A subtypes (phenomena known as original antigenic sin and immune imprinting). Fitting a statistical model to data from studies of influenza vaccine effectiveness (VE), we find that primary infection appears to reduce the risk of medically attended infection with that subtype throughout life. This effect is stronger for H1N1 compared to H3N2. Additionally, we find evidence that VE varies with both age and birth year, suggesting that VE is sensitive to early exposures. Our findings may improve estimates of age-specific risk and VE in similarly vaccinated populations and thus improve forecasting and vaccination strategies to combat seasonal influenza. | 10.7554/eLife.50060 | medrxiv |
10.1101/19001875 | Earliest infections predict the age distribution of seasonal influenza A cases | Arevalo, P.; McLean, H. Q.; Belongia, E. A.; Cobey, S. | Philip Arevalo | Department of Ecology and Evolution, University of Chicago | 2020-03-29 | 3 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2020/03/29/19001875.source.xml | Seasonal variation in the age distribution of influenza A cases suggests that factors other than age shape susceptibility to medically attended infection. We ask whether these differences can be partly explained by protection conferred by childhood influenza infection, which has lasting impacts on immune responses to influenza and protection against new influenza A subtypes (phenomena known as original antigenic sin and immune imprinting). Fitting a statistical model to data from studies of influenza vaccine effectiveness (VE), we find that primary infection appears to reduce the risk of medically attended infection with that subtype throughout life. This effect is stronger for H1N1 compared to H3N2. Additionally, we find evidence that VE varies with both age and birth year, suggesting that VE is sensitive to early exposures. Our findings may improve estimates of age-specific risk and VE in similarly vaccinated populations and thus improve forecasting and vaccination strategies to combat seasonal influenza. | 10.7554/eLife.50060 | medrxiv |
10.1101/19002048 | Ethnicity and Acculturation: Asian American Substance Use from Early Adolescence to Mature Adulthood | Ahmmad, Z.; Adkins, D. E. | Daniel E Adkins | University of Utah | 2019-07-14 | 1 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/07/14/19002048.source.xml | Research on Asian American substance use has, to date, been limited by monolithic conceptions of Asian identity, inadequate attention to acculturative process, and a dearth of longitudinal analyses spanning developmental periods. Using five waves of the National Longitudinal Study of Adolescent to Adult Health, this study addresses these limitations by longitudinally investigating disparities in substance use from early adolescence into mature adulthood among Asian American ethnic groups, including subjects identifying as multiple Asian ethnicities and multiracial Asians. The conditional effects of acculturation indicators (e.g., nativity generation, co-ethnic peer networks, co-ethnic neighborhood concentration) on the substance use outcomes were also examined. Results indicate significant variation across Asian ethnicities, with the lowest probabilities of substance use among Chinese and Vietnamese Americans, and the highest among multiracial Asian Americans. Acculturation indicators were also strongly, independently associated with increased substance use, and attenuated many of the observed ethnic disparities, particularly for multiracial, multiethnic, and Japanese Asian Americans. This study argues that ignoring the diversity of Asian ethnicities masks the presence of high-risk Asian American groups. Moreover, results indicate that, among Asian Americans, substance use is strongly positively associated with acculturation to U.S. cultural norms, and generally peaks at later ages than the U.S. average. | 10.1080/1369183X.2020.1788927 | medrxiv |
10.1101/19002048 | Ethnicity and Acculturation: Asian American Substance Use from Early Adolescence to Mature Adulthood | Ahmmad, Z.; Adkins, D. E. | Daniel E Adkins | University of Utah | 2020-06-05 | 2 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2020/06/05/19002048.source.xml | Research on Asian American substance use has, to date, been limited by monolithic conceptions of Asian identity, inadequate attention to acculturative process, and a dearth of longitudinal analyses spanning developmental periods. Using five waves of the National Longitudinal Study of Adolescent to Adult Health, this study addresses these limitations by longitudinally investigating disparities in substance use from early adolescence into mature adulthood among Asian American ethnic groups, including subjects identifying as multiple Asian ethnicities and multiracial Asians. The conditional effects of acculturation indicators (e.g., nativity generation, co-ethnic peer networks, co-ethnic neighborhood concentration) on the substance use outcomes were also examined. Results indicate significant variation across Asian ethnicities, with the lowest probabilities of substance use among Chinese and Vietnamese Americans, and the highest among multiracial Asian Americans. Acculturation indicators were also strongly, independently associated with increased substance use, and attenuated many of the observed ethnic disparities, particularly for multiracial, multiethnic, and Japanese Asian Americans. This study argues that ignoring the diversity of Asian ethnicities masks the presence of high-risk Asian American groups. Moreover, results indicate that, among Asian Americans, substance use is strongly positively associated with acculturation to U.S. cultural norms, and generally peaks at later ages than the U.S. average. | 10.1080/1369183X.2020.1788927 | medrxiv |
10.1101/19001834 | Childhood immune imprinting to influenza A shapes birth year-specific risk during seasonal H1N1 and H3N2 epidemics | Gostic, K. M.; Bridge, R.; Brady, S.; Viboud, C.; Worobey, M.; Lloyd-Smith, J. O. | Katelyn M Gostic | Dept. of Ecology and Evolutionary Biology, University of California, Los Angeles, Los Angeles, CA, USA | 2019-07-13 | 1 | PUBLISHAHEADOFPRINT | cc_by | epidemiology | https://www.medrxiv.org/content/early/2019/07/13/19001834.source.xml | Across decades of co-circulation in humans, influenza A subtypes H1N1 and H3N2 have caused seasonal epidemics characterized by different age distributions of infection and mortality. H3N2 causes the majority of cases in high-risk elderly cohorts, and the majority of overall deaths, whereas H1N1 causes incidence shifted towards young and middle-aged adults, and fewer deaths. These contrasting age profiles may result from differences in childhood exposure to H1N1 and H3N2 or from differences in evolutionary rate between subtypes. Here we analyze a large epidemiological surveillance dataset to test whether childhood immune imprinting shapes seasonal influenza epidemiology, and if so, whether it acts primarily via immune memory of a particular influenza subtype or via broader immune memory that protects across subtypes. We also test the impact of evolutionary differences between influenza subtypes on age distributions of infection. Likelihood-based model comparison shows that narrow, within-subtype imprinting is the strongest driver of seasonal influenza risk. The data do not support a strong effect of evolutionary rate, or of broadly protective imprinting that acts across subtypes. Our findings emphasize that childhood exposures can imprint a lifelong immunological bias toward particular influenza subtypes, and that these cohort-specific biases shape epidemic age distributions. As a result, newer and less "senior" antibody responses acquired later in life do not provide the same strength of protection as responses imprinted in childhood. Finally, we project that the relatively low mortality burden of H1N1 may increase in the coming decades, as cohorts that lack H1N1-specific imprinting eventually reach old age. | 10.1371/journal.ppat.1008109 | medrxiv |
10.1101/19001834 | Childhood immune imprinting to influenza A shapes birth year-specific risk during seasonal H1N1 and H3N2 epidemics | Gostic, K. M.; Bridge, R.; Brady, S.; Viboud, C.; Worobey, M.; Lloyd-Smith, J. O. | Katelyn M Gostic | Dept. of Ecology and Evolutionary Biology, University of California, Los Angeles, Los Angeles, CA, USA | 2019-09-26 | 2 | PUBLISHAHEADOFPRINT | cc_by | epidemiology | https://www.medrxiv.org/content/early/2019/09/26/19001834.source.xml | Across decades of co-circulation in humans, influenza A subtypes H1N1 and H3N2 have caused seasonal epidemics characterized by different age distributions of infection and mortality. H3N2 causes the majority of cases in high-risk elderly cohorts, and the majority of overall deaths, whereas H1N1 causes incidence shifted towards young and middle-aged adults, and fewer deaths. These contrasting age profiles may result from differences in childhood exposure to H1N1 and H3N2 or from differences in evolutionary rate between subtypes. Here we analyze a large epidemiological surveillance dataset to test whether childhood immune imprinting shapes seasonal influenza epidemiology, and if so, whether it acts primarily via immune memory of a particular influenza subtype or via broader immune memory that protects across subtypes. We also test the impact of evolutionary differences between influenza subtypes on age distributions of infection. Likelihood-based model comparison shows that narrow, within-subtype imprinting is the strongest driver of seasonal influenza risk. The data do not support a strong effect of evolutionary rate, or of broadly protective imprinting that acts across subtypes. Our findings emphasize that childhood exposures can imprint a lifelong immunological bias toward particular influenza subtypes, and that these cohort-specific biases shape epidemic age distributions. As a result, newer and less "senior" antibody responses acquired later in life do not provide the same strength of protection as responses imprinted in childhood. Finally, we project that the relatively low mortality burden of H1N1 may increase in the coming decades, as cohorts that lack H1N1-specific imprinting eventually reach old age. | 10.1371/journal.ppat.1008109 | medrxiv |
10.1101/19002139 | Increased functional connectivity of thalamic subdivisions in patients with Parkinson's disease | Owens-Walton, C.; Jakabek, D.; Power, B. D.; Walterfang, M.; Velakoulis, D.; van Westen, D.; Looi, J. C.; Shaw, M.; Hansson, O. | Conor Owens-Walton | Australian National University | 2019-07-15 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | radiology and imaging | https://www.medrxiv.org/content/early/2019/07/15/19002139.source.xml | Parkinson's disease (PD) affects 2-3% of the population over the age of 65, with loss of dopaminergic neurons in the substantia nigra impacting the functioning of basal ganglia-thalamocortical circuits. The precise role played by the thalamus is unknown, despite its critical role in the functioning of the cerebral cortex and the abnormal neuronal activity of the structure in PD. Our objective was to more clearly elucidate how functional connectivity and morphology of the thalamus are impacted in PD (n = 32) compared to controls (n = 20). To investigate functional connectivity of the thalamus we subdivided the structure into two important regions of interest, the first with putative connections to the motor cortices and the second with putative connections to prefrontal cortices. We then investigated potential differences in the size and shape of the thalamus in PD, and how morphology and functional connectivity relate to clinical variables. Our data demonstrate that PD is associated with increases in functional connectivity between motor subdivisions of the thalamus and the supplementary motor area, and between prefrontal thalamic subdivisions and nuclei of the basal ganglia, anterior and dorsolateral prefrontal cortices, as well as the anterior and paracingulate gyri. These results suggest that PD is associated with increased functional connectivity of subdivisions of the thalamus, which may be indicative of alterations to basal ganglia-thalamocortical circuitry. | 10.1371/journal.pone.0222002 | medrxiv |
10.1101/19002139 | Increased functional connectivity of thalamic subdivisions in patients with Parkinson's disease | Owens-Walton, C.; Jakabek, D.; Power, B. D.; Walterfang, M.; Velakoulis, D.; van Westen, D.; Looi, J. C.; Shaw, M.; Hansson, O. | Conor Owens-Walton | Australian National University | 2019-08-26 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | radiology and imaging | https://www.medrxiv.org/content/early/2019/08/26/19002139.source.xml | Parkinson's disease (PD) affects 2-3% of the population over the age of 65, with loss of dopaminergic neurons in the substantia nigra impacting the functioning of basal ganglia-thalamocortical circuits. The precise role played by the thalamus is unknown, despite its critical role in the functioning of the cerebral cortex and the abnormal neuronal activity of the structure in PD. Our objective was to more clearly elucidate how functional connectivity and morphology of the thalamus are impacted in PD (n = 32) compared to controls (n = 20). To investigate functional connectivity of the thalamus we subdivided the structure into two important regions of interest, the first with putative connections to the motor cortices and the second with putative connections to prefrontal cortices. We then investigated potential differences in the size and shape of the thalamus in PD, and how morphology and functional connectivity relate to clinical variables. Our data demonstrate that PD is associated with increases in functional connectivity between motor subdivisions of the thalamus and the supplementary motor area, and between prefrontal thalamic subdivisions and nuclei of the basal ganglia, anterior and dorsolateral prefrontal cortices, as well as the anterior and paracingulate gyri. These results suggest that PD is associated with increased functional connectivity of subdivisions of the thalamus, which may be indicative of alterations to basal ganglia-thalamocortical circuitry. | 10.1371/journal.pone.0222002 | medrxiv |
10.1101/19002154 | Expert-validated estimation of diagnostic uncertainty for deep neural networks in diabetic retinopathy detection | Ayhan, M. S.; Kuehlewein, L.; Aliyeva, G.; Inhoffen, W.; Ziemssen, F.; Berens, P. | Philipp Berens | University of Tuebingen | 2019-07-15 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | ophthalmology | https://www.medrxiv.org/content/early/2019/07/15/19002154.source.xml | Deep learning-based systems can achieve a diagnostic performance comparable to physicians in a variety of medical use cases including the diagnosis of diabetic retinopathy. To be useful in clinical practice, it is necessary to have well calibrated measures of the uncertainty with which these systems report their decisions. However, deep neural networks (DNNs) are often overconfident in their predictions, and are not amenable to a straightforward probabilistic treatment. Here, we describe an intuitive framework based on test-time data augmentation for quantifying the diagnostic uncertainty of a state-of-the-art DNN for diagnosing diabetic retinopathy. We show that the derived measure of uncertainty is well-calibrated and that experienced physicians likewise find cases with uncertain diagnosis difficult to evaluate. This paves the way for an integrated treatment of uncertainty in DNN-based diagnostic systems. | 10.1016/j.media.2020.101724 | medrxiv |
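The uncertainty framework in the record above is test-time data augmentation: run the trained network on several randomly augmented copies of the same fundus image and treat the spread of the predictions as a measure of diagnostic uncertainty. The sketch below is schematic; `model` and `augment` are assumed stand-in callables, not the authors' implementation.

```python
import numpy as np

def tta_uncertainty(model, augment, image, n_aug=32):
    """Test-time augmentation: predict on n_aug augmented copies of one image.

    Assumes model(x) returns P(disease) for an input image and augment(image)
    returns a randomly transformed copy (flips, crops, color jitter, ...).
    """
    preds = np.array([model(augment(image)) for _ in range(n_aug)])
    return preds.mean(), preds.std()  # mean diagnosis, spread as uncertainty

# Usage with real callables:
#   p_mean, p_std = tta_uncertainty(model, augment, image)
#   cases with high p_std can be flagged for review by an ophthalmologist
```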
10.1101/19002238 | An Assessment of Transparency and Reproducibility-related Research Practices in Otolaryngology | Johnson, A. L.; Torgerson, T.; Skinner, M.; Hamilton, T.; Tritz, D.; Vassar, M. | Austin L Johnson | Oklahoma State University Center for Health Sciences | 2019-07-15 | 1 | PUBLISHAHEADOFPRINT | cc_by | otolaryngology | https://www.medrxiv.org/content/early/2019/07/15/19002238.source.xml | Introduction: Clinical research serves as the foundation for evidence-based patient care, and reproducibility of results is consequently critical. We sought to assess the transparency and reproducibility of research studies in otolaryngology by evaluating a random sample of publications in otolaryngology journals between 2014 and 2018.
Methods: We used the National Library of Medicine catalog to identify otolaryngology journals that met the inclusion criteria (available in the English language and indexed in MEDLINE). From these journals, we extracted a random sample of 300 publications using a PubMed search for records published between January 1, 2014, and December 31, 2018. Specific indicators of reproducible and transparent research practices were evaluated in a blinded, independent, and duplicate manner using a pilot-tested Google form.
Results: Our initial search returned 26,498 records, from which 300 were randomly selected for analysis. Of these 300 records, 286 met inclusion criteria and 14 did not. Among the empirical studies, 2% (95% CI, 0.4%-3.5%) of publications indicated that raw data were available, 0.6% (95% CI, 0.3%-1.6%) reported an analysis script, 5.3% (95% CI, 2.7%-7.8%) were linked to an accessible research protocol, and 3.9% (95% CI, 1.7%-6.1%) were preregistered. None of the publications had a clear statement claiming to replicate, or to be a replication of, another study.
Conclusions: Inadequate reproducibility practices exist in otolaryngology. Nearly all studies in our analysis lacked a data or material availability statement, did not link to an accessible protocol, and were not preregistered. Most studies were not available as open access. Taking steps to improve reproducibility would likely also improve patient care. | 10.1002/lary.28322 | medrxiv |
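The indicator estimates in records like the one above are proportions with 95% confidence intervals. Below is a minimal sketch of the normal-approximation (Wald) interval, assuming a count of 6/286 to roughly match the reported 2% raw-data figure; the abstract does not state which interval method the authors actually used.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    """95% normal-approximation (Wald) CI for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(p - half, 0.0), min(p + half, 1.0)

lo, hi = wald_ci(6, 286)  # assumed count behind the ~2% raw-data figure
print(f"{6 / 286:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```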
10.1101/19001784 | Pain in clients attending a South African voluntary counselling and testing centre was frequent and extensive but did not depend on HIV status. | Wadley, A. L.; Lazarus, E.; Gray, G. E.; Mitchell, D.; Kamerman, P. R. | Peter R Kamerman | University of the Witwatersrand | 2019-07-15 | 1 | PUBLISHAHEADOFPRINT | cc_by | hiv aids | https://www.medrxiv.org/content/early/2019/07/15/19001784.source.xml | Background: The frequency of pain is reported to be high in people living with HIV (PLWH), but valid comparisons between PLWH and HIV-negative cohorts are rare. We investigated whether HIV infection influenced the frequency and characteristics of pain in adults undergoing voluntary testing for HIV.
Methods: Participants were recruited from an HIV voluntary counselling and testing (VCT) centre at the Chris Hani Baragwanath Academic Hospital, Soweto, South Africa. Pain was assessed using the Wisconsin Brief Pain Questionnaire. Depressive and anxiety symptomatology was determined using the Hopkins Symptom Checklist-25 (HSCL-25). We then stratified by HIV status.
Results: Data from 535 black South Africans were analysed: HIV-infected n=70, HIV-uninfected n=465. Overall, the frequency of pain was high, with 59% (95% CI: 55; 63, n: 316/535) of participants reporting pain, with no difference related to HIV status: HIV-infected 50% (95% CI: 37; 61, n: 35/70), HIV-uninfected 60% (95% CI: 56; 65, n: 281/465). Pain intensity and number of pain sites were similar between the groups, as were symptoms of anxiety and depression: mean HSCL-25 1.72 (95% CI: 1.57; 1.87) in HIV-infected participants and 1.68 (95% CI: 1.63; 1.73) in HIV-uninfected participants. Univariate analysis showed that female sex and greater depressive and anxiety symptomatology were associated with having pain. In a conservative multivariable model, only depressive and anxiety symptomatology was retained.
Conclusion: The high frequency of pain found in both HIV-infected and HIV-uninfected individuals presenting at a VCT centre was more likely to be associated with depression and anxiety than with the presence or absence of HIV. | 10.1097/qai.0000000000002248 | medrxiv |
10.1101/19002121 | Evaluation of Indicators of Reproducibility and Transparency in Published Cardiology Literature | Anderson, J. M.; Wright, B.; Tritz, D.; Horn, J.; Parker, I.; Bergeron, D.; Cook, S.; Vassar, M. | Jon Michael Anderson | Oklahoma State University Center for Health Sciences | 2019-07-15 | 1 | PUBLISHAHEADOFPRINT | cc_no | cardiovascular medicine | https://www.medrxiv.org/content/early/2019/07/15/19002121.source.xml | Background: The extent of reproducibility in cardiology research remains unclear. Therefore, our main objective was to determine the quality of research published in cardiology journals using eight indicators of reproducibility.
Methods: Using a cross-sectional study design, we conducted an advanced search of the National Library of Medicine (NLM) catalog for publications from 2014-2018 in journals pertaining to cardiology. Journals must have been published in the English language and indexed in MEDLINE. Once the initial list of publications from all cardiology journals was obtained, we searched for full-text PDF versions using Open Access, Google Scholar, and PubMed. Studies were analyzed using a pilot-tested Google Form to evaluate the presence of information deemed necessary to reproduce the study in its entirety.
Results: After exclusions, we included 132 studies containing empirical data. Of these studies, the majority (126/132, 95.5%) did not provide the raw data collected while conducting the study, 0/132 (0%) provided step-by-step analysis scripts, and 117/132 (88.6%) failed to provide sufficient materials needed to reproduce the study.
Conclusions: The presentation of studies published in cardiology journals does not appear to facilitate reproducible research. Considerable improvements to the framework of biomedical science, specifically in the field of cardiology, are necessary. Solutions to increase the reproducibility and transparency of published works in cardiology journals are warranted, including addressing inadequate sharing of materials, raw data, and key methodological details. | 10.1136/heartjnl-2020-316519 | medrxiv |
10.1101/19002402 | Population Structure Drives Differential Methicillin-resistant Staphylococcus aureus Colonization Dynamics | Mietchen, M. S.; Short, C. T.; Samore, M.; Lofgren, E. T. | Eric T Lofgren | Washington State University | 2019-07-15 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/07/15/19002402.source.xml | Background: Using a model of methicillin-resistant Staphylococcus aureus (MRSA) within an intensive care unit (ICU), we explore how differing hospital population structures impact infection dynamics.
Methods: Using a stochastic compartmental model of an 18-bed ICU, we compared the rates of MRSA acquisition across three potential population structures: a Single Staff Type (SST) model with nurses and physicians as a single staff type, a model with separate staff types for nurses and physicians (Nurse-MD model), and a Metapopulation model where each nurse was assigned a group of patients. By varying the proportion of time spent with the assigned patient group (γ) within the Metapopulation model, we explored whether simpler models may be acceptable approximations to more realistic patient-healthcare staff contact patterns.
Results: The SST, Nurse-MD, and Metapopulation models had mean annual numbers of cumulative MRSA acquisitions of 40.6, 32.2, and 19.6, respectively. All models were sensitive to the same parameters in the same direction, although the Metapopulation model was less sensitive. The number of acquisitions varied non-linearly with the value of γ, with values below 0.40 resembling the Nurse-MD model, while values above that converged toward the Metapopulation structure.
Discussion: The population structure of a modeled hospital has considerable impact on model results, with the SST model having more than double the acquisition rate of the more structured Metapopulation model. While the direction of parameter sensitivity remained the same, the magnitude of these differences varied, producing different infection rates across relatively similar populations. The non-linearity of the model's response to differing values of γ suggests only a narrow space of relatively dispersed nursing assignments where simple model approximations are appropriate.
Conclusion: Simplifying assumptions about how a hospital population is modeled, especially assuming random mixing, may overestimate infection rates and the impact of interventions. | 10.1371/journal.pcbi.1010352 | medrxiv |
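The model class in the record above is a stochastic compartmental simulation of MRSA colonization in a fixed-size ICU. The sketch below is a heavily simplified discrete-time toy with invented rates: uncolonized patients acquire MRSA at a hazard that scales with the colonized fraction, and the staff structure that is the paper's actual focus is omitted.

```python
import random

def simulate_icu(days=365, beds=18, beta=0.005, clear=0.01, seed=1):
    """Toy stochastic ICU: count MRSA acquisitions over one simulated year."""
    rng = random.Random(seed)
    colonized = [False] * beds
    acquisitions = 0
    for _ in range(days):
        pressure = sum(colonized) / beds + 0.01  # small constant importation term
        for i in range(beds):
            if not colonized[i] and rng.random() < beta * pressure * beds:
                colonized[i] = True   # acquisition event
                acquisitions += 1
            elif colonized[i] and rng.random() < clear:
                colonized[i] = False  # decolonization / discharge-replacement
    return acquisitions

print(simulate_icu())  # annual acquisitions under these toy rates
```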
10.1101/19002402 | Population Structure Drives Differential Methicillin-resistant Staphylococcus aureus Colonization Dynamics | Mietchen, M. S.; Short, C. T.; Samore, M.; Lofgren, E. T. | Eric T Lofgren | Washington State University | 2019-08-02 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/08/02/19002402.source.xml | Background: Using a model of methicillin-resistant Staphylococcus aureus (MRSA) within an intensive care unit (ICU), we explore how differing hospital population structures impact infection dynamics.
Methods: Using a stochastic compartmental model of an 18-bed ICU, we compared the rates of MRSA acquisition across three potential population structures: a Single Staff Type (SST) model with nurses and physicians as a single staff type, a model with separate staff types for nurses and physicians (Nurse-MD model), and a Metapopulation model where each nurse was assigned a group of patients. By varying the proportion of time spent with the assigned patient group (γ) within the Metapopulation model, we explored whether simpler models may be acceptable approximations to more realistic patient-healthcare staff contact patterns.
Results: The SST, Nurse-MD, and Metapopulation models had mean annual numbers of cumulative MRSA acquisitions of 40.6, 32.2, and 19.6, respectively. All models were sensitive to the same parameters in the same direction, although the Metapopulation model was less sensitive. The number of acquisitions varied non-linearly with the value of γ, with values below 0.40 resembling the Nurse-MD model, while values above that converged toward the Metapopulation structure.
Discussion: The population structure of a modeled hospital has considerable impact on model results, with the SST model having more than double the acquisition rate of the more structured Metapopulation model. While the direction of parameter sensitivity remained the same, the magnitude of these differences varied, producing different infection rates across relatively similar populations. The non-linearity of the model's response to differing values of γ suggests only a narrow space of relatively dispersed nursing assignments where simple model approximations are appropriate.
Conclusion: Simplifying assumptions about how a hospital population is modeled, especially assuming random mixing, may overestimate infection rates and the impact of interventions. | 10.1371/journal.pcbi.1010352 | medrxiv |
10.1101/19002204 | Reliability and validity of the UK Biobank cognitive tests | Fawns-Ritchie, C.; Deary, I. J. | Ian J Deary | University of Edinburgh | 2019-07-15 | 1 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/07/15/19002204.source.xml | UK Biobank is a health resource with data from over 500,000 adults. The participants have been assessed on cognitive function since baseline. The cognitive tests in UK Biobank are brief and bespoke, and are administered without supervision on a touchscreen computer. Psychometric information on the tests is limited. The present study examined their concurrent validity and short-term test-retest reliability. A sample of 160 participants (mean age=62.59, SD=10.24) completed the UK Biobank cognitive assessment and a range of well-validated cognitive tests (reference tests). Fifty-two participants returned 4 weeks later to repeat the UK Biobank tests. Correlations were calculated between UK Biobank tests and the reference tests. Four-week test-retest correlations were calculated for UK Biobank tests. UK Biobank cognitive tests showed a range of correlations with their respective reference tests, i.e. those tests that are thought to assess the same underlying cognitive ability (mean Pearson r=0.53, range=0.22 to 0.83, p≤.005). Four-week test-retest reliability of the UK Biobank tests was moderate to high (mean Pearson r=0.55, range=0.40 to 0.89, p≤.003). Despite the brief, non-standard nature of the UK Biobank cognitive tests, some showed substantial concurrent validity and test-retest reliability. These psychometric results provide currently lacking information on the validity of the UK Biobank cognitive tests. | 10.1371/journal.pone.0231627 | medrxiv |
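Both the validity and the reliability figures in the record above are Pearson correlations between paired scores, which take two lines with scipy; the score vectors here are invented.

```python
from scipy.stats import pearsonr

visit1 = [12, 15, 9, 20, 14, 11, 17, 13]   # toy cognitive scores, baseline session
visit2 = [13, 14, 10, 19, 15, 10, 18, 12]  # same participants, 4 weeks later

r, p = pearsonr(visit1, visit2)  # test-retest reliability as a correlation
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
```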
10.1101/19002105 | Educational differentials in domain specific physical activity by ethnicity, age, and gender: findings from over 44,000 participants in The UK Household Longitudinal Study (2013-2015). | Fluharty, M. E.; Pinto Pereira, S.; Benzeval, M.; Hamer, M.; Jefferis, B.; Griffiths, L.; Cooper, R.; Bann, D. | Meg E Fluharty | Centre for Longitudinal Studies, Institute of Education, University College London | 2019-07-15 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/07/15/19002105.source.xml | Background: The prevalence of overall physical inactivity remains high, particularly amongst socioeconomically disadvantaged groups. It is unclear, however, whether such inequalities vary systematically by age, sex, or ethnicity, and whether there are differing effects across physical activity (PA) domains.
Methods: We used data from a nationally representative survey of the UK, Understanding Society, with information on educational attainment (our indicator of socioeconomic position), PA, and demographics collected in 2013-2015 (N=44,903). Logistic regression analyses were conducted to test associations of education with three different PA domains (active travel, occupational, and leisure time). To examine modification of the associations between education and physical activity in each domain by sex, age, and ethnicity, we tested two-way interaction terms (education × ethnicity; education × sex; education × age).
Results: Lower educational attainment was associated with higher active transportation and occupational physical activity, but lower weekly leisure-time activity. These associations were modified by sex, ethnicity, and age. For example, education-related differences in active travel were larger for females (difference in predicted probability of activity between highest and lowest educational groups: -10% in females (95% CI: -11.9, -7.9) versus -3% in males (-4.8, -0.4)). The education-related differences in occupational activity were larger among males (-35%; -36.9, -32.4) than females (-17%; -19.4, -15.0). Finally, education-related differences in moderate to vigorous leisure-time activity varied most substantially by ethnicity; for example, differences were 17% (16.2, 18.7) for White individuals compared with 6% (0.6, 11.6) for Black individuals.
Conclusions: Educational differences in PA vary by domain and are modified by age, sex, and ethnicity. A better understanding of physically inactive sub-groups may aid the development of tailored interventions to increase activity levels and reduce health inequalities. | 10.1136/bmjopen-2019-033318 | medrxiv |
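The effect-modification tests in the record above are two-way interaction terms in logistic regressions of each activity domain on education. Below is a minimal statsmodels sketch of one such model (education × sex on active travel); the tiny DataFrame and its column names are invented stand-ins for the Understanding Society variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in for the survey data (real analysis: Understanding Society, N=44,903)
df = pd.DataFrame({
    "active_travel": [1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1],
    "education":     [2, 2, 1, 0, 1, 2, 0, 1, 2, 0, 1, 0],  # 0 = low .. 2 = high
    "female":        [1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
})

# Logistic regression with an education x sex interaction term
model = smf.logit("active_travel ~ education * female", data=df).fit(disp=0)
print(model.params)  # the education:female coefficient tests effect modification
```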
10.1101/19000943 | Confounding adjustment performance of ordinal analysis methods in stroke studies | Zonneveld, T. P.; Aigner, A.; Groenwold, R. H. H.; Algra, A.; Nederkoorn, P. J.; Grittner, U.; Kruyt, N. D.; Siegerink, B. | Bob Siegerink | Center for stroke research, Charité Universitätsmedizin, Berlin, Germany | 2019-07-15 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc | epidemiology | https://www.medrxiv.org/content/early/2019/07/15/19000943.source.xml | BackgroundIn acute stroke studies, ordinal logistic regression (OLR) is often used to analyze outcome on the modified Rankin Scale (mRS), whereas the non-parametric Mann-Whitney measure of superiority (MWS) has also been suggested. It is unclear how these perform comparatively when confounding adjustment is warranted. Our aim is to quantify the performance of OLR and MWS in different confounding variable settings.
MethodsWe set up a simulation study with three different scenarios; (1) dichotomous confounding variables, (2) continuous confounding variables, and (3) confounding variable settings mimicking a study on functional outcome after stroke. We compared adjusted ordinal logistic regression (aOLR) and stratified Mann-Whitney measure of superiority (sMWS), and also used propensity scores to stratify the MWS (psMWS). For comparability, OLR estimates were transformed to a MWS. We report bias, the percentage of runs that produced a point estimate deviating by more than 0.05 points (point estimate variation), and the coverage probability.
ResultsIn scenario 1, there was no bias in both sMWS and aOLR, with similar point estimate variation and coverage probabilities. In scenario 2, sMWS resulted in more bias (0.04 versus 0.00), and higher point estimate variation (41.6% versus 3.3%), whereas coverage probabilities were similar. In scenario 3, there was no bias in both methods, point estimate variation was higher in the sMWS (6.7%) versus aOLR (1.1%), and coverage probabilities were 0.98 (sMWS) versus 0.95 (aOLR). With psMWS, bias remained 0.00, with less point estimate variation (1.5%) and a coverage probability of 0.95.
ConclusionsThe bias of both adjustment methods was similar in our stroke simulation scenario, and the higher point estimate variation in the MWS improved with propensity score based stratification. The stratified MWS is a valid alternative to adjusted OLR only when the ratio of the number of strata to the number of observations is relatively low, but propensity score based stratification extends the application range of the MWS. | 10.1371/journal.pone.0231670 | medrxiv |
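The (unstratified) Mann-Whitney measure of superiority at the core of the comparison above is the probability that a randomly drawn treated patient has a better outcome than a randomly drawn control, counting ties as half. A minimal sketch on simulated mRS-like scores; group sizes and distributions are illustrative:

```python
# Minimal sketch of the Mann-Whitney measure of superiority on simulated
# mRS-like ordinal outcomes (0-6); lower mRS means a better outcome here.
import numpy as np

def mw_superiority(treated, control):
    """P(treated better than control) + 0.5 * P(tie), over all pairs."""
    t = np.asarray(treated)[:, None]
    c = np.asarray(control)[None, :]
    return (t < c).mean() + 0.5 * (t == c).mean()

rng = np.random.default_rng(2)
treated = rng.choice(7, 300, p=[.15, .2, .2, .15, .15, .1, .05])
control = rng.choice(7, 300, p=[.10, .15, .2, .15, .2, .12, .08])
print(f"MWS = {mw_superiority(treated, control):.3f} (0.5 = no superiority)")
```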
10.1101/19002063 | Early clinical markers of aggressive multiple sclerosis | Malpas, C. B.; Ali Manouchehrinia, A.; Sharmin, S.; Roos, I.; Horakova, D.; Havrdova, E. K.; Trojano, M.; Izquierdo, G.; Eichau, S.; Bergamaschi, R.; Sola, P.; Ferraro, D.; Lugaresi, A.; Prat, A.; Girard, M.; Duquette, P.; Grammond, P.; Grand'Maison, F.; Ozakbas, S.; Van Pesch, V.; Granella, F.; Hupperts, R.; Pucci, E.; Boz, C.; Iuliano, G.; Sidhom, Y.; Gouider, R.; Spitaleri, D.; Butzkueven, H.; Soysal, A.; Petersen, T.; Verheul, F.; Karabudak, R.; Turkoglu, R.; Ramo-Tello, C.; Terzi, M.; Cristiano, E.; Slee, M.; McCombe, P.; Macdonell, R.; Fragoso, Y.; Olascoaga, J.; Altintas, A.; Olsson, T | Tomas Kalincik | CORe Unit, Department of Medicine, University of Melbourne, Melbourne, Australia; Department of Neurology, Royal Melbourne Hospital, Melbourne, Australia | 2019-07-16 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | neurology | https://www.medrxiv.org/content/early/2019/07/16/19002063.source.xml | Patients with the aggressive form of MS accrue disability at an accelerated rate, typically reaching EDSS >= 6 within 10 years of symptom onset. Several clinicodemographic factors have been associated with aggressive MS, but less research has focused on clinical markers that are present in the first year of disease. The development of early predictive models of aggressive MS is essential to optimise treatment in this MS subtype. We evaluated whether patients who will develop aggressive MS can be identified based on early clinical markers, and to replicate this analysis in an independent cohort. Patient data were obtained from MSBase. Inclusion criteria were (a) first recorded disability score (EDSS) within 12 months of symptom onset, (b) at least 2 recorded EDSS scores, and (c) at least 10 years of observation time. Patients were classified as having aggressive MS if they: (a) reached EDSS >= 6 within 10 years of symptom onset, (b) EDSS >=6 was confirmed and sustained over >=6 months, and (c) EDSS >=6 was sustained until the end of follow-up. Clinical predictors included patient variables (sex, age at onset, baseline EDSS, disease duration at first visit) and recorded relapses in the first 12 months since disease onset (count, pyramidal signs, bowel-bladder symptoms, cerebellar signs, incomplete relapse recovery, steroid administration, hospitalisation). Predictors were evaluated using Bayesian Model Averaging (BMA). Independent validation was performed using data from the Swedish MS Registry. Of the 2,403 patients identified, 145 were classified as having aggressive MS (6%). BMA identified three statistical predictors: age > 35 at symptom onset, EDSS >= 3 in the first year, and the presence of pyramidal signs in the first year. This model significantly predicted aggressive MS (AUC = .80, 95% CIs = .75, .84). The presence of all three signs was strongly predictive, with 32% of such patients meeting aggressive disease criteria. The absence of all three signs was associated with a 1.4% risk. Of the 556 eligible patients in the Swedish MS Registry cohort, 34 (6%) met criteria for aggressive MS. The combination of all three signs was also predictive in this cohort (AUC = .75, 95% CIs = .66, .84). Taken together, these findings suggest that older age at symptom onset, greater disability during the first year, and pyramidal signs in the first year are early indicators of aggressive MS. | 10.1093/brain/awaa081 | medrxiv |
10.1101/19002162 | Post-discharge Acute Care and Outcomes in the Era of Readmission Reduction | Khera, R.; Wang, Y.; Bernheim, S. M.; Lin, Z.; Krumholz, H. | Rohan Khera | UT Southwestern Medical Center | 2019-07-16 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | health policy | https://www.medrxiv.org/content/early/2019/07/16/19002162.source.xml | BackgroundWith incentives to reduce readmission rates, there are concerns that patients who need hospitalization after a recent hospital discharge may be denied access, which would increase their risk of mortality.
ObjectiveWe determined whether patients with hospitalizations for conditions covered by national readmission programs who received care in the emergency department (ED) or observation units but were not hospitalized within 30 days had an increased risk of death. We also evaluated temporal trends in post-discharge acute care utilization in inpatient units, the ED, and observation units for these patients.
Design, Setting, and ParticipantsIn this observational study of national Medicare claims data for 2008-2016, we identified patients ≥65 years hospitalized with heart failure (HF), acute myocardial infarction (AMI), or pneumonia, conditions included in the HRRP.
Main Outcomes and MeasuresPost-discharge 30-day mortality according to patients' 30-day acute care utilization. Acute care utilization in inpatient and observation units, and the ED, during the 30-day and 31-90-day post-discharge periods.
ResultsThere were 3,772,924 hospitalizations for HF, 1,570,113 for AMI, and 3,131,162 for pneumonia. The overall post-discharge 30-day mortality was 8.7% for HF, 7.3% for AMI, and 8.4% for pneumonia. Post-discharge mortality increased annually by 0.16% (95% CI, 0.11%, 0.22%) for HF, decreased by 0.15% (95% CI, -0.18%, -0.12%) for AMI, and did not significantly change for pneumonia. Specifically, mortality only increased for HF patients who did not utilize any post-discharge acute care, increasing at a rate of 0.16% per year (95% CI, 0.11%, 0.22%), accounting for 99% of the increase in post-discharge mortality in heart failure. Concurrent with a reduction in 30-day readmission rates, 30-day observation stays and visits to the ED increased across all 3 conditions during and beyond the post-discharge 30-day period. There was no significant change in overall 30-day post-acute care utilization (P-trend >0.05 for all).
Conclusions and RelevanceThe only condition with increasing mortality through the study period was HF; the increase preceded the policy and was not present among those who received ED or observation unit care without hospitalization. Overall, during this period, there was no significant change in overall 30-day post-discharge acute care utilization. | 10.1136/bmj.l6831 | medrxiv |
10.1101/19002162 | Post-discharge Acute Care and Outcomes in the Era of Readmission Reduction: A National Retrospective Cohort Study of Medicare Beneficiaries in the United States | Khera, R.; Wang, Y.; Bernheim, S. M.; Lin, Z.; Krumholz, H. | Rohan Khera | UT Southwestern Medical Center | 2019-11-15 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | health policy | https://www.medrxiv.org/content/early/2019/11/15/19002162.source.xml | BackgroundWith incentives to reduce readmission rates, there are concerns that patients who need hospitalization after a recent hospital discharge may be denied access, which would increase their risk of mortality.
ObjectiveWe determined whether patients with hospitalizations for conditions covered by national readmission programs who received care in the emergency department (ED) or observation units but were not hospitalized within 30 days had an increased risk of death. We also evaluated temporal trends in post-discharge acute care utilization in inpatient units, the ED, and observation units for these patients.
Design, Setting, and ParticipantsIn this observational study of national Medicare claims data for 2008-2016, we identified patients ≥65 years hospitalized with heart failure (HF), acute myocardial infarction (AMI), or pneumonia, conditions included in the HRRP.
Main Outcomes and MeasuresPost-discharge 30-day mortality according to patients' 30-day acute care utilization. Acute care utilization in inpatient and observation units, and the ED, during the 30-day and 31-90-day post-discharge periods.
ResultsThere were 3,772,924 hospitalizations for HF, 1,570,113 for AMI, and 3,131,162 for pneumonia. The overall post-discharge 30-day mortality was 8.7% for HF, 7.3% for AMI, and 8.4% for pneumonia. Post-discharge mortality increased annually by 0.16% (95% CI, 0.11%, 0.22%) for HF, decreased by 0.15% (95% CI, -0.18%, -0.12%) for AMI, and did not significantly change for pneumonia. Specifically, mortality only increased for HF patients who did not utilize any post-discharge acute care, increasing at a rate of 0.16% per year (95% CI, 0.11%, 0.22%), accounting for 99% of the increase in post-discharge mortality in heart failure. Concurrent with a reduction in 30-day readmission rates, 30-day observation stays and visits to the ED increased across all 3 conditions during and beyond the post-discharge 30-day period. There was no significant change in overall 30-day post-acute care utilization (P-trend >0.05 for all).
Conclusions and RelevanceThe only condition with increasing mortality through the study period was HF; the increase preceded the policy and was not present among those who received ED or observation unit care without hospitalization. Overall, during this period, there was no significant change in overall 30-day post-discharge acute care utilization. | 10.1136/bmj.l6831 | medrxiv |
10.1101/19002519 | Childhood Trauma and Trajectories of Depressive Symptoms Across Adolescence | Kwong, A. S. F.; Maddalena, J. M.; Croft, J.; Heron, J.; Leckie, G. | Alex Siu Fung Kwong | University of Bristol | 2019-07-16 | 1 | PUBLISHAHEADOFPRINT | cc_by_nd | epidemiology | https://www.medrxiv.org/content/early/2019/07/16/19002519.source.xml | BackgroundGrowth curve modelling such as trajectory analysis is useful for examining the longitudinal nature of depressive symptoms, their antecedents and later consequences. However, issues in interpretation associated with this methodology could hinder the translation from results to policy changes and interventions. The aim of this article is to provide a "model interpretation framework" for highlighting growth curve results in a more interpretable manner. Here we demonstrate the association between childhood trauma and trajectories of depressive symptoms. Childhood trauma has been shown to be a strong predictor of later depression, but less is known about how childhood trauma has an effect throughout adolescence and young adulthood. Identifying when childhood trauma (and its severity) is likely to have its greatest impact on depression is important for determining the timing of interventions for depression.
MethodsWe used data on over 6,500 individuals from the Avon Longitudinal Study of Parents and Children (ALSPAC) to estimate trajectories of depressive symptoms between the ages of 11 and 24. Depressive symptoms were measured using the short mood and feelings questionnaire (SMFQ) across 9 occasions. Childhood trauma was assessed between the ages of 5 and 10 years old, and we estimated population-averaged multilevel growth curves of depressive symptoms for exposure to trauma (yes vs no) and then, in a separate model, the number of trauma types reported, such as inter-personal violence or neglect (coded as 0, 1, 2, 3+). We then calculated what the depressive symptom scores would be at ages 12, 14, 16, 18, 20, 22 and 24 for these varying trajectories.
ResultsReported exposure to childhood trauma was associated with less favourable trajectories of depressive symptoms across adolescence, mainly characterised by exposed individuals having worse depressive symptoms at age 16. There was an exposure-response relationship between the number of childhood traumas and trajectories of depressive symptoms.
Individuals exposed to 3 or more types of trauma had substantially steeper and less favourable trajectories of depressive symptoms, worsening at a more rapid rate until the age of 18. By age 18, individuals who reported the greatest exposure to trauma (3+ types of trauma) had 14% more depressive symptoms compared to non-exposed participants.
LimitationsThis study was subject to attrition, particularly at the later ages of SMFQ assessment.
ConclusionChildhood trauma is strongly associated with less favourable trajectories of depressive symptoms across adolescence. Individuals exposed to multiple types of inter-personal violence or neglect are at the greatest risk of worsening depressive symptoms throughout adolescence and young adulthood. Individuals exposed to traumatic experiences in childhood should be identified as at high risk of depression and other adverse outcomes as early trauma may disrupt social development and have lasting consequences on mental health outcomes.
The model interpretation framework presented here may make growth curve results more interpretable for researchers, clinicians and policy makers, as it allows comparisons of depression across multiple stages of development to highlight when the effects of depression are greatest. | null | medrxiv |
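The analysis in the record above rests on a multilevel growth curve with a trauma-by-age interaction, followed by predicted symptom scores at selected ages. A rough sketch on simulated data; the cohort size, effects and variable names are illustrative assumptions, not the ALSPAC data or the authors' exact specification:

```python
# Minimal sketch of a multilevel growth-curve model with a trauma-by-age
# interaction; everything below is simulated and illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_people, n_waves = 500, 9
ids = np.repeat(np.arange(n_people), n_waves)
age11 = np.tile(np.linspace(0, 13, n_waves), n_people)   # age minus 11
trauma = np.repeat(rng.integers(0, 2, n_people), n_waves)
u = np.repeat(rng.normal(0, 1.5, n_people), n_waves)     # person intercepts
smfq = 4 + 0.2 * age11 + 0.35 * trauma * age11 + u \
       + rng.normal(0, 2, ids.size)
df = pd.DataFrame({"id": ids, "age11": age11, "trauma": trauma, "smfq": smfq})

# Random intercept and random age slope per person
m = smf.mixedlm("smfq ~ age11 * trauma", df, groups=df["id"],
                re_formula="~age11").fit()
fe = m.fe_params

# Population-averaged predicted scores at ages 12, 16 and 24, by exposure
for t in (0, 1):
    preds = [fe["Intercept"] + fe["age11"] * a + fe["trauma"] * t
             + fe["age11:trauma"] * a * t for a in (1, 5, 13)]
    print("trauma" if t else "no trauma", np.round(preds, 2))
```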
10.1101/19001313 | The global scientific research response to the public health emergency of Zika virus infection | Oliveira, J. F.; Pescarini, J. M.; Rodrigues, M. S.; Almeida, B. A.; Henriques, C. M. P.; Gouveia, F. C.; Rabello, E. T.; Matta, G. C.; Barreto, M. L.; Sampaio, R. B. | Ricardo Barros Sampaio | Centro de Integracao de Dados e Conhecimentos para Saude, Fiocruz, Salvador, Bahia, Brazil; Gerencia Regional de Brasilia, Fiocruz, Brasilia, Brazil | 2019-07-16 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/07/16/19001313.source.xml | BackgroundScience studies have been a field of research for different knowledge areas and they have been successfully used to analyse the construction of scientific knowledge, practice and dissemination. In this study, we aimed to verify how the Zika epidemic has moulded scientific production worldwide, analysing international collaboration and the knowledge landscape through time, research topics and country involvement.
MethodologyWe searched the Web of Science (WoS) for studies published up to 31st December 2018 on Zika using the search terms "zika", "zkv" or "zikv". We analysed the scientific production regarding which countries have published the most and on which topics, as well as country-level collaboration. We performed a scientometric analysis of research on Zika focusing on knowledge mapping and the scientific research path over time and space.
FindingsWe found two well defined research areas divided into three subtopics accounting for six clusters. With regard to country analysis, the USA followed by Brazil were the leading countries in publications on Zika. China entered as a new player focusing on specific research areas. When we took into consideration the epidemics and reported cases, Brazil and France were the leading research countries on related topics. As for international collaboration, the USA followed by England and France stand out as the main hubs. The research areas most published included public health-related topics from 2015 until the very beginning of 2016, followed by an increase in topics related to the clinical aspects of the disease in 2016 and the beginnings of laboratory research in 2017/2018.
ConclusionsMapping the response to Zika, a public health emergency, demonstrated a clear pattern of the participation of countries in the scientific advances. The pattern of knowledge production found in this study represented the different perspectives and interests of countries based firstly on their level of exposure to the epidemic and secondly on their financial positions with regard to science. | 10.1371/journal.pone.0229790 | medrxiv |
10.1101/19002436 | Disruptive Mood Dysregulation Disorder: symptomatic and syndromic thresholds and diagnostic operationalization | Laporte, P. P.; Matijasevich, A.; Munhoz, T. N.; Santos, I. S.; Barros, A. J. D.; Pine, D. S.; Rohde, L. A.; Leibenluft, E.; Salum, G. A. | Paola Paganella Laporte | Universidade Federal do Rio Grande do Sul | 2019-07-16 | 1 | PUBLISHAHEADOFPRINT | cc_no | psychiatry and clinical psychology | https://www.medrxiv.org/content/early/2019/07/16/19002436.source.xml | ObjectiveThe aim of this study is to identify the most appropriate threshold for Disruptive Mood Dysregulation Disorder (DMDD) diagnosis and the impact of potential changes in diagnostic rules on prevalence levels in the community.
MethodTrained psychologists evaluated 3,562 pre-adolescents/early adolescents from the 2004 Pelotas Birth Cohort with the Development and Well-Being Assessment (DAWBA). The clinical threshold was assessed in three stages: symptomatic, syndromic and clinical operationalization. The symptomatic threshold identified the response category in each DAWBA item which separates normative misbehavior from a clinical indicator. The syndromic threshold identified the number of irritable mood and outbursts needed to capture pre-adolescents/early adolescents with high symptom levels. Clinical operationalization compared the impact of AND/OR rules for combining irritable mood and outbursts on impairment and levels of psychopathology.
ResultsAt the symptomatic threshold, most irritable mood items were normative in their lowest response categories and clinically significant in their highest response categories. For outbursts, some indicated a symptom even when present at only a mild level, while others did not indicate symptoms at any level. At the syndromic level, a combination of 2 out of 7 irritable mood and 3 out of 8 outburst indicators accurately captured a cluster of individuals with a high level of symptoms. Analysis combining irritable mood and outbursts delineated non-overlapping aspects of DMDD, providing support for the OR rule in clinical operationalization. The best DMDD criteria resulted in a prevalence of 3%.
ConclusionThese results inform initiatives aiming to provide data-driven and clinically oriented operationalized criteria for DMDD. | 10.1016/j.jaac.2019.12.008 | medrxiv |
10.1101/19002436 | Disruptive Mood Dysregulation Disorder: symptomatic and syndromic thresholds and diagnostic operationalization | Laporte, P. P.; Matijasevich, A.; Munhoz, T. N.; Santos, I. S.; Barros, A. J. D.; Pine, D. S.; Rohde, L. A.; Leibenluft, E.; Salum, G. A. | Paola Paganella Laporte | Universidade Federal do Rio Grande do Sul | 2019-10-10 | 2 | PUBLISHAHEADOFPRINT | cc_no | psychiatry and clinical psychology | https://www.medrxiv.org/content/early/2019/10/10/19002436.source.xml | ObjectiveThe aim of this study is to identify the most appropriate threshold for Disruptive Mood Dysregulation Disorder (DMDD) diagnosis and the impact of potential changes in diagnostic rules on prevalence levels in the community.
MethodTrained psychologists evaluated 3,562 pre-adolescents/early adolescents from the 2004 Pelotas Birth Cohort with the Development and Well-Being Assessment (DAWBA). The clinical threshold was assessed in three stages: symptomatic, syndromic and clinical operationalization. The symptomatic threshold identified the response category in each DAWBA item which separates normative misbehavior from a clinical indicator. The syndromic threshold identified the number of irritable mood and outbursts needed to capture pre-adolescents/early adolescents with high symptom levels. Clinical operationalization compared the impact of AND/OR rules for combining irritable mood and outbursts on impairment and levels of psychopathology.
ResultsAt the symptomatic threshold, most irritable mood items were normative in their lowest response categories and clinically significant in their highest response categories. For outbursts, some indicated a symptom even when present at only a mild level, while others did not indicate symptoms at any level. At the syndromic level, a combination of 2 out of 7 irritable mood and 3 out of 8 outburst indicators accurately captured a cluster of individuals with a high level of symptoms. Analysis combining irritable mood and outbursts delineated non-overlapping aspects of DMDD, providing support for the OR rule in clinical operationalization. The best DMDD criteria resulted in a prevalence of 3%.
ConclusionThese results inform initiatives aiming to provide data-driven and clinically oriented operationalized criteria for DMDD. | 10.1016/j.jaac.2019.12.008 | medrxiv |
10.1101/19002436 | Disruptive Mood Dysregulation Disorder: symptomatic and syndromic thresholds and diagnostic operationalization | Laporte, P. P.; Matijasevich, A.; Munhoz, T. N.; Santos, I. S.; Barros, A. J. D.; Pine, D. S.; Rohde, L. A.; Leibenluft, E.; Salum, G. A. | Paola Paganella Laporte | Universidade Federal do Rio Grande do Sul | 2019-10-29 | 3 | PUBLISHAHEADOFPRINT | cc_no | psychiatry and clinical psychology | https://www.medrxiv.org/content/early/2019/10/29/19002436.source.xml | ObjectiveThe aim of this study is to identify the most appropriate threshold for Disruptive Mood Dysregulation Disorder (DMDD) diagnosis and the impact of potential changes in diagnostic rules on prevalence levels in the community.
MethodTrained psychologists evaluated 3,562 pre-adolescents/early adolescents from the 2004 Pelotas Birth Cohort with the Development and Well-Being Assessment (DAWBA). The clinical threshold was assessed in three stages: symptomatic, syndromic and clinical operationalization. The symptomatic threshold identified the response category in each DAWBA item which separates normative misbehavior from a clinical indicator. The syndromic threshold identified the number of irritable mood and outbursts needed to capture pre-adolescents/early adolescents with high symptom levels. Clinical operationalization compared the impact of AND/OR rules for combining irritable mood and outbursts on impairment and levels of psychopathology.
ResultsAt the symptomatic threshold, most irritable mood items were normative in their lowest response categories and clinically significant in their highest response categories. For outbursts, some indicated a symptom even when present at only a mild level, while others did not indicate symptoms at any level. At the syndromic level, a combination of 2 out of 7 irritable mood and 3 out of 8 outburst indicators accurately captured a cluster of individuals with a high level of symptoms. Analysis combining irritable mood and outbursts delineated non-overlapping aspects of DMDD, providing support for the OR rule in clinical operationalization. The best DMDD criteria resulted in a prevalence of 3%.
ConclusionThese results inform initiatives aiming to provide data-driven and clinically oriented operationalized criteria for DMDD. | 10.1016/j.jaac.2019.12.008 | medrxiv |
10.1101/19002436 | Disruptive Mood Dysregulation Disorder: symptomatic and syndromic thresholds and diagnostic operationalization | Laporte, P. P.; Matijasevich, A.; Munhoz, T. N.; Santos, I. S.; Barros, A. J. D.; Pine, D. S.; Rohde, L. A.; Leibenluft, E.; Salum, G. A. | Paola Paganella Laporte | Universidade Federal do Rio Grande do Sul | 2020-01-20 | 4 | PUBLISHAHEADOFPRINT | cc_no | psychiatry and clinical psychology | https://www.medrxiv.org/content/early/2020/01/20/19002436.source.xml | ObjectiveThe aim of this study is to identify the most appropriate threshold for Disruptive Mood Dysregulation Disorder (DMDD) diagnosis and the impact of potential changes in diagnostic rules on prevalence levels in the community.
MethodTrained psychologists evaluated 3,562 pre-adolescents/early adolescents from the 2004 Pelotas Birth Cohort with the Development and Well-Being Assessment (DAWBA). The clinical threshold was assessed in three stages: symptomatic, syndromic and clinical operationalization. The symptomatic threshold identified the response category in each DAWBA item which separates normative misbehavior from a clinical indicator. The syndromic threshold identified the number of irritable mood and outbursts needed to capture pre-adolescents/early adolescents with high symptom levels. Clinical operationalization compared the impact of AND/OR rules for combining irritable mood and outbursts on impairment and levels of psychopathology.
ResultsAt the symptomatic threshold, most irritable mood items were normative in their lowest response categories and clinically significant in their highest response categories. For outbursts, some indicated a symptom even when present at only a mild level, while others did not indicate symptoms at any level. At the syndromic level, a combination of 2 out of 7 irritable mood and 3 out of 8 outburst indicators accurately captured a cluster of individuals with a high level of symptoms. Analysis combining irritable mood and outbursts delineated non-overlapping aspects of DMDD, providing support for the OR rule in clinical operationalization. The best DMDD criteria resulted in a prevalence of 3%.
ConclusionThese results inform initiatives aiming to provide data-driven and clinically oriented operationalized criteria for DMDD. | 10.1016/j.jaac.2019.12.008 | medrxiv |
10.1101/19002147 | Neuroimaging biomarkers differentiate Parkinson disease with and without cognitive impairment and dementia | Owens-Walton, C.; Jakabek, D.; Power, B. D.; Walterfang, M.; Hall, S.; van Westen, D.; Looi, J. C.; Shaw, M.; Hansson, O. | Conor Owens-Walton | Australian National University | 2019-07-16 | 1 | PUBLISHAHEADOFPRINT | cc_by_nd | radiology and imaging | https://www.medrxiv.org/content/early/2019/07/16/19002147.source.xml | Mild cognitive impairment in Parkinson disease places a high burden on patients and is likely a precursor to Parkinson disease-related dementia. Studying the functional connectivity and morphology of subcortical structures within basal ganglia-thalamocortical circuits may uncover neuroimaging biomarkers of cognitive dysfunction in PD. We used an atlas-based seed region-of-interest approach to investigate resting-state functional connectivity of important subdivisions of the caudate nucleus, putamen and thalamus, between controls (n = 33), cognitively unimpaired Parkinson disease subjects (n = 33), Parkinson disease subjects with mild cognitive impairment (n = 22) and Parkinson disease subjects with dementia (n = 17). We then investigated how the morphology of the caudate, putamen and thalamus differed between groups. Results indicate that cognitively unimpaired Parkinson disease subjects, compared to controls, display increased functional connectivity of the dorsal caudate, anterior putamen and mediodorsal thalamic subdivisions with areas across the frontal lobe, as well as reduced functional connectivity of the dorsal caudate with posterior cortical and cerebellar regions. Compared to controls, Parkinson disease subjects with mild cognitive impairment demonstrated reduced functional connectivity of the mediodorsal thalamus with midline nodes within the executive-control network. Compared to subjects with mild cognitive impairment, subjects with dementia demonstrated reduced functional connectivity of the mediodorsal thalamus with the posterior cingulate cortex, a key node within the default-mode network. Extensive volumetric and surface-based contraction was found in Parkinson disease subjects with dementia. Our research demonstrates how functional connectivity of the caudate, putamen and thalamus is implicated in the pathophysiology of cognitive impairment and dementia in Parkinson disease, with mild cognitive impairment and dementia in Parkinson disease associated with a breakdown in functional connectivity of the mediodorsal thalamus with para- and posterior cingulate regions of the brain. | 10.1016/j.pscychresns.2021.111273 | medrxiv |
10.1101/19002147 | Neuroimaging biomarkers differentiate Parkinson disease with and without cognitive impairment and dementia | Owens-Walton, C.; Jakabek, D.; Power, B. D.; Walterfang, M.; Hall, S.; van Westen, D.; Looi, J. C.; Shaw, M.; Hansson, O. | Conor Owens-Walton | Australian National University | 2019-08-26 | 2 | PUBLISHAHEADOFPRINT | cc_by_nd | radiology and imaging | https://www.medrxiv.org/content/early/2019/08/26/19002147.source.xml | Mild cognitive impairment in Parkinson disease places a high burden on patients and is likely a precursor to Parkinson disease-related dementia. Studying the functional connectivity and morphology of subcortical structures within basal ganglia-thalamocortical circuits may uncover neuroimaging biomarkers of cognitive dysfunction in PD. We used an atlas-based seed region-of-interest approach to investigate resting-state functional connectivity of important subdivisions of the caudate nucleus, putamen and thalamus, between controls (n = 33), cognitively unimpaired Parkinson disease subjects (n = 33), Parkinson disease subjects with mild cognitive impairment (n = 22) and Parkinson disease subjects with dementia (n = 17). We then investigated how the morphology of the caudate, putamen and thalamus differed between groups. Results indicate that cognitively unimpaired Parkinson disease subjects, compared to controls, display increased functional connectivity of the dorsal caudate, anterior putamen and mediodorsal thalamic subdivisions with areas across the frontal lobe, as well as reduced functional connectivity of the dorsal caudate with posterior cortical and cerebellar regions. Compared to controls, Parkinson disease subjects with mild cognitive impairment demonstrated reduced functional connectivity of the mediodorsal thalamus with midline nodes within the executive-control network. Compared to subjects with mild cognitive impairment, subjects with dementia demonstrated reduced functional connectivity of the mediodorsal thalamus with the posterior cingulate cortex, a key node within the default-mode network. Extensive volumetric and surface-based contraction was found in Parkinson disease subjects with dementia. Our research demonstrates how functional connectivity of the caudate, putamen and thalamus is implicated in the pathophysiology of cognitive impairment and dementia in Parkinson disease, with mild cognitive impairment and dementia in Parkinson disease associated with a breakdown in functional connectivity of the mediodorsal thalamus with para- and posterior cingulate regions of the brain. | 10.1016/j.pscychresns.2021.111273 | medrxiv |
10.1101/19002147 | Structural and functional MRI changes associated with cognitive impairment and dementia in Parkinson disease | Owens-Walton, C.; Jakabek, D.; Power, B. D.; Walterfang, M.; Hall, S.; van Westen, D.; Looi, J. C.; Shaw, M.; Hansson, O. | Conor Owens-Walton | Australian National University | 2019-11-27 | 3 | PUBLISHAHEADOFPRINT | cc_by_nd | radiology and imaging | https://www.medrxiv.org/content/early/2019/11/27/19002147.source.xml | Mild cognitive impairment in Parkinson disease places a high burden on patients and is likely a precursor to Parkinson disease-related dementia. Studying the functional connectivity and morphology of subcortical structures within basal ganglia-thalamocortical circuits may uncover neuroimaging biomarkers of cognitive dysfunction in PD. We used an atlas-based seed region-of-interest approach to investigate resting-state functional connectivity of important subdivisions of the caudate nucleus, putamen and thalamus, between controls (n = 33), cognitively unimpaired Parkinson disease subjects (n = 33), Parkinson disease subjects with mild cognitive impairment (n = 22) and Parkinson disease subjects with dementia (n = 17). We then investigated how the morphology of the caudate, putamen and thalamus differed between groups. Results indicate that cognitively unimpaired Parkinson disease subjects, compared to controls, display increased functional connectivity of the dorsal caudate, anterior putamen and mediodorsal thalamic subdivisions with areas across the frontal lobe, as well as reduced functional connectivity of the dorsal caudate with posterior cortical and cerebellar regions. Compared to controls, Parkinson disease subjects with mild cognitive impairment demonstrated reduced functional connectivity of the mediodorsal thalamus with midline nodes within the executive-control network. Compared to subjects with mild cognitive impairment, subjects with dementia demonstrated reduced functional connectivity of the mediodorsal thalamus with the posterior cingulate cortex, a key node within the default-mode network. Extensive volumetric and surface-based contraction was found in Parkinson disease subjects with dementia. Our research demonstrates how functional connectivity of the caudate, putamen and thalamus is implicated in the pathophysiology of cognitive impairment and dementia in Parkinson disease, with mild cognitive impairment and dementia in Parkinson disease associated with a breakdown in functional connectivity of the mediodorsal thalamus with para- and posterior cingulate regions of the brain. | 10.1016/j.pscychresns.2021.111273 | medrxiv |
10.1101/19002378 | Nonlinear biomarker interactions in conversion from Mild Cognitive Impairment to Alzheimer's disease | Popescu, S.; Whittington, A.; Gunn, R. N.; Matthews, P. M.; Glocker, B.; Sharp, D. J.; Cole, J. H. | James H Cole | King's College London | 2019-07-16 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | neurology | https://www.medrxiv.org/content/early/2019/07/16/19002378.source.xml | The multi-faceted nature of Alzheimer's disease means that multiple biomarkers (e.g., amyloid-β, tau, brain atrophy) can contribute to the prediction of clinical outcomes. Machine learning methods are a powerful way to identify the best approach to this prediction. However, it has previously been difficult to model nonlinear interactions between biomarkers in the context of predictive models. This is important as the mechanisms relating these biomarkers to the disease are inter-related and nonlinear interactions occur. Here, we used Gaussian Processes to model nonlinear interactions when combining biomarkers to predict Alzheimer's disease conversion in 48 mild cognitive impairment participants who progressed to Alzheimer's disease and 158 people with mild cognitive impairment who remained stable over three years. Measures included: demographics, APOE4 genotype, CSF (amyloid-β42, total tau, phosphorylated tau), and neuroimaging markers of amyloid-β deposition ([18F]florbetapir) or neurodegeneration (hippocampal volume, brain-age). We examined: (i) the independent value each biomarker has in predicting conversion; and (ii) whether modelling nonlinear interactions between biomarkers improved prediction performance.
Despite relatively high correlations between different biomarkers, our results showed that each measure added complementary information when predicting conversion to Alzheimer's disease. A linear model predicting MCI group (stable versus progressive) explained over half the variance (R2 = 0.51, P < 0.001); the strongest independently-contributing biomarker was hippocampal volume (R2 = 0.13). Next, we compared the sensitivity of different models to progressive MCI: independent biomarker models, additive models (with no interaction terms), and nonlinear interaction models. We observed a significant improvement (P < 0.001) for various two-way interaction models, with the best performing model including an interaction between amyloid-β-PET and P-tau, while accounting for hippocampal volume (sensitivity = 0.77).
Our results showed that closely-related biomarkers still contribute uniquely to the prediction of conversion, supporting the continued use of comprehensive biological assessments. A number of interactions between biomarkers were implicated in the prediction of Alzheimer's disease conversion. For example, the interaction between hippocampal atrophy and amyloid deposition influences progression to Alzheimer's disease over and above their independent contributions. Importantly, nonlinear interaction modelling shows that for some patients (i.e., with high hippocampal volume) adding further biomarkers may add little value, whereas for others (i.e., with low hippocampal volume) further invasive and expensive testing is warranted. Our Gaussian Processes framework enables visual examination of these nonlinear interactions, allowing projection of individual patients into biomarker space, providing a way to make personalised healthcare decisions or stratify subsets of patients for recruitment into trials of neuroprotective interventions. | 10.1002/hbm.25133 | medrxiv |
10.1101/19002378 | Nonlinear biomarker interactions in conversion from Mild Cognitive Impairment to Alzheimer's disease | Popescu, S.; Whittington, A.; Gunn, R. N.; Matthews, P. M.; Glocker, B.; Sharp, D. J.; Cole, J. H. | James H Cole | King's College London | 2019-08-02 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | neurology | https://www.medrxiv.org/content/early/2019/08/02/19002378.source.xml | The multi-faceted nature of Alzheimer's disease means that multiple biomarkers (e.g., amyloid-β, tau, brain atrophy) can contribute to the prediction of clinical outcomes. Machine learning methods are a powerful way to identify the best approach to this prediction. However, it has previously been difficult to model nonlinear interactions between biomarkers in the context of predictive models. This is important as the mechanisms relating these biomarkers to the disease are inter-related and nonlinear interactions occur. Here, we used Gaussian Processes to model nonlinear interactions when combining biomarkers to predict Alzheimer's disease conversion in 48 mild cognitive impairment participants who progressed to Alzheimer's disease and 158 people with mild cognitive impairment who remained stable over three years. Measures included: demographics, APOE4 genotype, CSF (amyloid-β42, total tau, phosphorylated tau), and neuroimaging markers of amyloid-β deposition ([18F]florbetapir) or neurodegeneration (hippocampal volume, brain-age). We examined: (i) the independent value each biomarker has in predicting conversion; and (ii) whether modelling nonlinear interactions between biomarkers improved prediction performance.
Despite relatively high correlations between different biomarkers, our results showed that each measure added complementary information when predicting conversion to Alzheimer's disease. A linear model predicting MCI group (stable versus progressive) explained over half the variance (R2 = 0.51, P < 0.001); the strongest independently-contributing biomarker was hippocampal volume (R2 = 0.13). Next, we compared the sensitivity of different models to progressive MCI: independent biomarker models, additive models (with no interaction terms), and nonlinear interaction models. We observed a significant improvement (P < 0.001) for various two-way interaction models, with the best performing model including an interaction between amyloid-β-PET and P-tau, while accounting for hippocampal volume (sensitivity = 0.77).
Our results showed that closely-related biomarkers still contribute uniquely to the prediction of conversion, supporting the continued use of comprehensive biological assessments. A number of interactions between biomarkers were implicated in the prediction of Alzheimer's disease conversion. For example, the interaction between hippocampal atrophy and amyloid deposition influences progression to Alzheimer's disease over and above their independent contributions. Importantly, nonlinear interaction modelling shows that for some patients (i.e., with high hippocampal volume) adding further biomarkers may add little value, whereas for others (i.e., with low hippocampal volume) further invasive and expensive testing is warranted. Our Gaussian Processes framework enables visual examination of these nonlinear interactions, allowing projection of individual patients into biomarker space, providing a way to make personalised healthcare decisions or stratify subsets of patients for recruitment into trials of neuroprotective interventions. | 10.1002/hbm.25133 | medrxiv |
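A Gaussian Process classifier is one way to let a model capture nonlinear biomarker interactions like those described in the record above without specifying interaction terms by hand. A sketch on simulated data, assuming three standardized biomarker features with an illustrative interaction in the generating process; this is not the authors' exact pipeline:

```python
# Sketch of nonlinear interaction modelling with a Gaussian Process
# classifier; features and the generating interaction are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 206                                 # 48 progressive + 158 stable MCI
X = rng.normal(size=(n, 3))             # amyloid-PET, p-tau, hippocampal vol.
# Conversion risk rises when high amyloid and high p-tau co-occur and
# hippocampal volume is low: a nonlinear, non-additive effect.
risk = 1 / (1 + np.exp(-(X[:, 0] * X[:, 1] - X[:, 2])))
y = rng.binomial(1, risk)

Xs = StandardScaler().fit_transform(X)
kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0, 1.0])
gp = GaussianProcessClassifier(kernel=kernel, random_state=0).fit(Xs, y)

# Fitted per-feature length-scales hint at each biomarker's relevance;
# predict_proba projects any patient into the learned risk surface.
print(gp.kernel_)
print("in-sample accuracy:", round(gp.score(Xs, y), 2))
```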
10.1101/19002410 | Analysis of the outcome of patients with stage IV uterine serous carcinoma mimicking ovarian cancer | Al-AKer, M.; Nicklin, J.; Sanday, K. | Murad Al-AKer | Liverpool hospital, NSW2170, Australia | 2019-07-16 | 1 | PUBLISHAHEADOFPRINT | cc_no | obstetrics and gynecology | https://www.medrxiv.org/content/early/2019/07/16/19002410.source.xml | ObjectivesTo identify clinicopathological factors that might influence survival in patients with stage IV uterine serous carcinoma, and to compare survival outcomes in patients with stage IV uterine serous carcinoma managed with neoadjuvant chemotherapy followed by interval cytoreduction (with or without adjuvant chemotherapy) versus primary cytoreductive surgery followed by adjuvant chemotherapy.
MethodsA retrospective cohort study of all patients with stage IV uterine serous carcinoma treated between 2005 and 2015 within a regional cancer centre. Progression-free and overall survival rates were calculated using the Kaplan-Meier method.
ResultsOf 50 women with stage IV uterine serous carcinoma who met inclusion criteria, 37 underwent primary cytoreductive surgery, nine received neoadjuvant chemotherapy with planned interval cytoreductive surgery and four received palliative care only. A pre-treatment diagnosis of stage IV uterine serous carcinoma was made for only 45.9% of the primary cytoreductive surgery group and 56.6% of the neoadjuvant chemotherapy group, with advanced ovarian cancer the most common preoperative misdiagnosis. Median follow up was 19 months. Median overall survival was 27 months for the primary cytoreductive surgery group, 20 months for the neoadjuvant chemotherapy group and two months for the palliative care group. Optimal cytoreduction was achieved in 67.6% of the primary cytoreductive surgery group and 87.5% of the neoadjuvant chemotherapy group who underwent interval cytoreduction. Optimal cytoreduction was associated with improvement in overall survival, compared with suboptimal cytoreduction (36 versus 15 months; P=0.16). Adjuvant chemotherapy was associated with significantly higher overall survival compared with no adjuvant chemotherapy (36 versus four months; P<0.05). Median overall survival was 16 months for those with pure uterine serous carcinoma (n=40), compared with 32 months for those with mixed histopathology (n=10).
ConclusionStage IV uterine serous carcinoma can mimic advanced ovarian cancer. It carries a poor prognosis, which is worse for pure uterine serous carcinoma than for mixed-type endometrial adenocarcinoma. Neoadjuvant chemotherapy followed by interval cytoreduction and adjuvant chemotherapy seems to be a safe option, with an increased rate of optimal cytoreduction and comparable overall survival, compared with primary cytoreductive surgery. Adjuvant chemotherapy significantly improves survival in all groups.
Primary objectiveTo analyse the clinicopathological factors that might influence the progression-free survival and overall survival in patients with stage IV uterine serous carcinoma treated at Queensland Centre for Gynecological cancer.
Secondary objectiveTo compare the survival outcomes of patients with stage IV uterine serous carcinoma treated with neoadjuvant chemotherapy and interval cytoreduction, with those treated with primary cytoreductive surgery followed by adjuvant chemotherapy and patients who received palliative care only.
PRECISOptimal cytoreduction and adjuvant chemotherapy improved survival in stage IV uterine serous carcinoma. Neoadjuvant chemotherapy was feasible and safe. Patients with microscopic disease have similar poor prognosis.
HIGHLIGHTS: (1) Pure uterine serous carcinoma carries a worse prognosis compared to mixed uterine serous carcinoma. (2) Optimal cytoreduction and adjuvant chemotherapy improve survival in stage IV uterine serous carcinoma. (3) Neoadjuvant chemotherapy is feasible and a safe option in the management of stage IV uterine serous carcinoma. | null | medrxiv |
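The survival comparison in the record above rests on Kaplan-Meier curves per treatment group plus a between-group test. A minimal sketch with the lifelines library on simulated survival times; the group sizes mirror the abstract (37 primary cytoreduction, 9 neoadjuvant), but the times, the complete observation of events, and the effect size are illustrative assumptions:

```python
# Minimal Kaplan-Meier sketch on simulated survival times (months).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "months": np.r_[rng.exponential(27, 37), rng.exponential(20, 9)],
    "event": 1,                      # 1 = death observed (no censoring here)
    "group": ["PCS"] * 37 + ["NACT"] * 9,
})

kmf = KaplanMeierFitter()
for name, g in df.groupby("group"):
    kmf.fit(g["months"], g["event"], label=name)
    print(name, "median OS:", round(kmf.median_survival_time_, 1), "months")

res = logrank_test(df.loc[df.group == "PCS", "months"],
                   df.loc[df.group == "NACT", "months"])
print("log-rank p =", round(res.p_value, 3))
```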
10.1101/19002345 | Periodic testing and estimation of STD-HIV association | Masse, B.; Guibord, P.; Boily, M.-C.; Alary, M. | Benoît Mâsse | Université Laval | 2019-07-16 | 1 | PUBLISHAHEADOFPRINT | cc_by | hiv aids | https://www.medrxiv.org/content/early/2019/07/16/19002345.source.xml | BackgroundThe validity of measures used in follow-up studies to estimate the magnitude of the HIV-STD association will be the focus of this paper. A recent simulation study by Boily et al [1], based on a model of HIV and STD transmission, showed that the relative risk (RR), estimated by the hazard rate ratio (HRR) obtained by the Cox model, had poor validity, either in absence or in presence of a real association between HIV and STD. The HRR tends to underestimate the true magnitude of a non-null association. These results were obtained from simulated follow-up studies where HIV was tested every three months and the STD every month.
Aims and MethodsThis paper extends the above results by investigating the impact of using different periodic testing intervals on the validity of HRR estimates. Issues regarding the definition of exposure to STDs in this context are explored. A stochastic model for the transmission of HIV and other STDs is used to simulate follow-up studies with different periodic testing intervals. HRR estimates obtained with the Cox model with a time-dependent STD exposure covariate are compared to the true magnitude of the HIV-STD association. In addition, real data are reanalysed using the STD exposure definition described in this paper. The data from Laga et al [2] are used for this purpose.
Results(1) Simulated data: independently of the magnitude of the true association, we observed a greater reduction of the bias when increasing the frequency of HIV testing than that of the STD testing. (2) Real data: The STD exposure definition can create substantial differences in the estimation of the HIV-STD association. Laga et al [2] have found a HRR of 2.5 (1.1 - 6.4) for the association between HIV and genital ulcer disease compared to an estimate of 3.5 (1.5 - 8.3) with our improved definition of exposure.
ConclusionsResults on the simulated data have an important impact on the design of field studies. For instance, when choosing between two designs: one where both HIV and STD are screened every 3 months, versus one where HIV and STD are screened every 3 months and monthly, respectively. The latter design is more expensive and involves more complicated logistics. Furthermore, this increment in cost may not be justified considering the relatively small gain in terms of validity and variability. | null | medrxiv |
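The HRR in the record above comes from a Cox model with a time-dependent STD-exposure covariate, which requires follow-up split into start-stop intervals at each change in exposure status. A rough sketch with lifelines' CoxTimeVaryingFitter on a simulated cohort; the monthly visit schedule, hazards and true HRR of 3 are illustrative assumptions, not the paper's simulation model:

```python
# Cox model with a time-dependent exposure in long (start-stop) format;
# the simulated cohort and effect size below are illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(6)
rows = []
for pid in range(300):
    exposed_from = rng.integers(2, 18)        # month STD is first detected
    for m in range(24):                       # monthly intervals, 2-year study
        exposed = int(m >= exposed_from)
        hazard = 0.004 * (3.0 if exposed else 1.0)   # true HRR = 3
        event = rng.random() < hazard
        rows.append((pid, m, m + 1, exposed, int(event)))
        if event:
            break

long_df = pd.DataFrame(rows, columns=["id", "start", "stop",
                                      "std_exposed", "hiv_event"])
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="hiv_event",
        start_col="start", stop_col="stop")
print(ctv.summary[["coef", "exp(coef)"]])     # exp(coef) ~ hazard-rate ratio
```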
10.1101/19002246 | Interdependence between confirmed and discarded cases of dengue, chikungunya and Zika viruses in Brazil: A multivariate time-series analysis | Oliveira, J. F.; Rodrigues, M. S.; Skalinski, L. M.; Santos, A. E.; Costa, L. C.; Cardim, L. L.; Paixao, E. S.; Costa, M. C. N.; Oliveira, W. K.; Barreto, M. L.; Teixeira, M. G.; Andrade, R. F. S. | Juliane Fonseca Oliveira | Center of Data and Knowledge Integration for Health, Instituto Goncalo Moniz, Fundacao Oswaldo Cruz | 2019-07-16 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/07/16/19002246.source.xml | The co-circulation of different arboviruses in the same time and space poses a significant threat to public health given their rapid geographic dispersion and serious health, social, and economic impact. Therefore, it is crucial to have high-quality case registration to estimate the real impact of each arbovirus on the population. In this work, a Vector Autoregressive (VAR) model was developed to investigate the interrelationships between discarded and confirmed cases of dengue, chikungunya, and Zika in Brazil. We used data from the Brazilian National Notifiable Diseases Information System (SINAN) from 2010 to 2017. There were two waves in the series of dengue notifications in this period, one occurring in 2013 and the second in 2015. The series of reported cases of both Zika and chikungunya reached their peak in late 2015 and early 2016. The VAR model shows that the Zika series has a significant impact on the dengue series and vice versa, suggesting that several discarded and confirmed cases of dengue could actually have been cases of Zika. The model also suggests that the series of confirmed chikungunya cases is almost independent of the cases of dengue and Zika. In conclusion, co-circulation of arboviruses with similar symptoms could lead to misdiagnosed diseases in the surveillance system. We argue that the routine use of mathematical and statistical models in association with traditional symptom-based surveillance could help to decrease such errors and to provide early indication of possible future outbreaks. These findings address the challenges regarding notification biases and shed new light on how to handle reported cases based only on clinical-epidemiological criteria when multiple arboviruses co-circulate in the same population.
Author summaryArthropod-borne virus (arbovirus) transmission is a growing health problem worldwide. The co-circulation of different arboviruses in the same urban spaces is a recent phenomenon, and its real epidemiological impact raises many issues to explore. One of these issues is misclassification due to the scarce availability of confirmatory laboratory tests. This establishes a challenge to identify, distinguish and estimate the number of infections when different arboviruses co-circulate. We propose the use of multivariate time series analysis to understand how the weekly notifications of suspected cases of dengue, chikungunya and Zika in Brazil affected each other. Our results suggest that the series of Zika significantly impacts the series of dengue and vice versa, indicating that several discarded and confirmed cases of dengue might actually have been Zika cases. The results also suggest that the series of confirmed cases of chikungunya are almost independent of those of dengue and Zika. Our findings shed light on yet hidden aspects of the co-circulation of these three viruses based on reported cases. We believe the present work provides a new perspective on the longitudinal analysis of arbovirus transmission and calls attention to the challenge in dealing with biases in the notification of multiple arboviruses that circulate in the same urban environment. | 10.1371/journal.pone.0228347 | medrxiv |
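A small sketch of the VAR approach described in the record above, fit to simulated weekly series in which dengue and Zika notifications feed into each other while chikungunya evolves independently; the coupling coefficients are illustrative assumptions, not the SINAN estimates. A Granger-type causality test then asks whether lagged Zika counts help predict dengue:

```python
# VAR on simulated weekly case series; coupling coefficients are illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(7)
n = 300
dengue, zika, chik = np.zeros(n), np.zeros(n), np.zeros(n)
for t in range(1, n):
    dengue[t] = 0.7 * dengue[t-1] + 0.2 * zika[t-1] + rng.normal(0, 1)
    zika[t] = 0.2 * dengue[t-1] + 0.7 * zika[t-1] + rng.normal(0, 1)
    chik[t] = 0.8 * chik[t-1] + rng.normal(0, 1)    # ~independent series

df = pd.DataFrame({"dengue": dengue, "zika": zika, "chik": chik})
res = VAR(df).fit(maxlags=4, ic="aic")
print(res.coefs[0].round(2))        # lag-1 coupling matrix between series

# Granger-type check: do lagged Zika notifications help predict dengue?
print(res.test_causality("dengue", ["zika"], kind="f").summary())
```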
10.1101/19001883 | Maternal and child gluten intake and risk of type 1 diabetes: The Norwegian Mother and Child Cohort Study | Lund-Blix, N. A.; Tapia, G.; Marild, K.; Brantsaeter, A. L.; Njolstad, P. R.; Joner, G.; Skrivarhaug, T.; Stordal, K.; Stene, L. C. | Nicolai A Lund-Blix | Norwegian Institute of Public Health, and Oslo University Hospital | 2019-07-19 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | nutrition | https://www.medrxiv.org/content/early/2019/07/19/19001883.source.xml | OBJECTIVETo examine the association between maternal and child gluten intake and risk of type 1 diabetes in children.
DESIGNPregnancy cohort
SETTINGPopulation-based, nation-wide study in Norway
PARTICIPANTS86,306 children in The Norwegian Mother and Child Cohort Study born from 1999 through 2009, followed to April 15, 2018.
MAIN OUTCOME MEASURESClinical type 1 diabetes, ascertained in a nation-wide childhood diabetes registry. Hazard ratios were estimated using Cox regression for two exposures: maternal gluten intake up to week 22 of pregnancy, and the child's gluten intake when the child was 18 months old.
RESULTSDuring a mean follow-up of 12.3 years (range 0.7-16.0), 346 children (0.4%) developed type 1 diabetes (incidence rate 32.6 per 100,000 person-years). The average gluten intake was 13.6 grams/day for mothers during pregnancy, and 8.8 grams/day for the child at 18 months of age. Maternal gluten intake in mid-pregnancy was not associated with the development of type 1 diabetes in the child (adjusted hazard ratio 1.02 (95% confidence interval 0.73 to 1.43) per 10 grams/day increase in gluten intake). However, the child's gluten intake at 18 months of age was associated with an increased risk of later developing type 1 diabetes (adjusted hazard ratio 1.46 (95% confidence interval 1.06 to 2.01) per 10 grams/day increase in gluten intake).
CONCLUSIONSThis study suggests that the child's gluten intake at 18 months of age, and not the maternal intake during pregnancy, could increase the risk of type 1 diabetes in the child.
WHAT IS ALREADY KNOWN ON THIS TOPICA national prospective cohort study from Denmark found that a high maternal gluten intake during pregnancy could increase the risk of type 1 diabetes in the offspring (adjusted hazard ratio 1.31 (95% confidence interval 1.001 to 1.72) per 10 grams/day increase in gluten intake). No studies have investigated the relation between the amount of gluten intake by both the mother during pregnancy and the child in early life, and the risk of developing type 1 diabetes in childhood.
WHAT THIS STUDY ADDSIn this prospective population-based pregnancy cohort with 86,306 children, of whom 346 developed type 1 diabetes, we found that the child's gluten intake at 18 months of age was associated with the risk of type 1 diabetes (adjusted hazard ratio 1.46 (95% confidence interval 1.06 to 2.01) per 10 grams/day increase in gluten intake). This study suggests that the child's gluten intake at 18 months of age, and not the maternal intake during pregnancy, could increase the child's risk of type 1 diabetes. | 10.1371/journal.pmed.1003032 | medrxiv |
10.1101/19001180 | Association between myopia and peripapillary hyperreflective ovoid mass-like structures in children | LYU, I. J.; Park, K.-A.; Oh, S. Y. | Sei Yeul Oh | Samsung Medical Center, SungKyunkwan University, School of Medicine | 2019-07-19 | 1 | PUBLISHAHEADOFPRINT | cc_no | ophthalmology | https://www.medrxiv.org/content/early/2019/07/19/19001180.source.xml | PurposeTo investigate the characteristics of children with peripapillary hyperreflective ovoid mass-like structures (PHOMS) and evaluate the risk factors associated with PHOMS.
MethodsThis study included 132 eyes of 66 children with PHOMS and 92 eyes of 46 children without PHOMS (controls), all assessed with enhanced depth imaging spectral-domain optical coherence tomography (OCT) of the optic disc. Univariable and multivariable logistic analyses were performed to evaluate risk factors associated with the presence of PHOMS.
ResultsAmong the 66 children with PHOMS, 53 patients (80.3%) had bilateral and 13 patients (19.7%) had unilateral PHOMS. The mean age was 11.7 ± 2.6 years in the PHOMS group and 11.4 ± 3.1 years in the control group. Mean spherical equivalent (SE) by cycloplegic refraction was -3.13 ± 1.87 diopters (D) in the PHOMS group and -0.95 ± 2.65 D in the control group. Mean astigmatism was 0.67 ± 0.89 D and 0.88 ± 1.02 D in the PHOMS group and the control group, respectively. Mean disc size was 1735 ± 153 µm in the PHOMS group and 1741 ± 190 µm in the control group. All eyes in the PHOMS group had myopia of -0.50 D or less, except for an eye with +1.00 D. According to the univariable (odds ratio [OR] 1.59, P < 0.001) and multivariable (OR 2.00, P < 0.001) logistic regression analyses, a 1 D decrease in SE was significantly associated with PHOMS.
ConclusionsPHOMS is associated with myopic shift in children. Optic disc tilt may be a mediator between myopia and PHOMS. | 10.1038/s41598-020-58829-3 | medrxiv |
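The odds ratios above are from logistic regression of PHOMS status on spherical equivalent. A minimal sketch with synthetic data follows (statsmodels); note that the per-eye clustering within children in the actual study is ignored here for simplicity, and all numbers are simulated.

```python
# Minimal sketch (synthetic data): OR for PHOMS per 1-D decrease in SE.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 224                                   # 132 PHOMS eyes + 92 control eyes
se = rng.normal(-2.0, 2.5, n)             # cycloplegic SE in diopters
# Simulate PHOMS so that myopia (more negative SE) raises the odds
logit_p = -0.5 - 0.6 * se
phoms = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Regress on -SE so the coefficient reads "per 1-D decrease in SE"
X = sm.add_constant(-se)
fit = sm.Logit(phoms, X).fit(disp=0)
or_per_d = np.exp(fit.params[1])
lo, hi = np.exp(fit.conf_int()[1])        # 95% CI on the odds-ratio scale
print(f"OR per 1-D decrease in SE: {or_per_d:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```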
10.1101/19002030 | Specific diagnostic method for St. Louis Encephalitis Virus using a non-structural protein as antigen. | Simari, M. B.; Goni, S. E.; Luppo, V. C.; Fabbri, C. M.; Arguelles, M. H.; Lozano, M. E.; Morales, M. A.; Iglesias, N. G. | Nestor Gabriel Iglesias | Universidad Nacional de Quilmes | 2019-07-19 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/07/19/19002030.source.xml | St. Louis encephalitis virus (SLEV) is a mosquito-borne reemerging flavivirus in Argentina. It is currently necessary to develop specific serological tests that can efficiently discriminate the flaviviruses that circulate in our country. The immunoassays to diagnose SLEV lack specificity because they are based on the detection of structural viral proteins and the human immunoglobulins produced during infection against these proteins cross-react with other flaviviruses. Here, we describe an enzyme-immunoassay designed to detect human IgG antibodies specific to the viral nonstructural protein NS5. The results indicate that NS5 is a promising antigen useful to discriminate SLEV from other circulating flaviviruses. | 10.1099/jgv.0.001359 | medrxiv |
10.1101/19002097 | Acute hepatitis C infection among adults with HIV in the Netherlands: a capture-recapture analysis | Boender, T. S.; Op de Coul, E.; Arends, J.; Prins, M.; van der Valk, M.; van der Meer, J. T. M.; van Benthem, B.; Reiss, P.; Smit, C. | T Sonia Boender | Stichting HIV Monitoring | 2019-07-19 | 1 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/07/19/19002097.source.xml | Background: Reliable surveillance systems are essential to assess the national response to eliminating hepatitis C virus (HCV), in the context of the global strategy towards eliminating viral hepatitis.
Aim: We aimed to assess the completeness of the two national registries of acute HCV infection in people with HIV, and estimated the number of acute HCV infections among adults with HIV in the Netherlands.
Methods: For 2003-2016, cases of HCV infection and reinfection among adults with a positive or unknown HIV-serostatus were identified in two national registries: the ATHENA cohort, and the National Registry for Notifiable Diseases. For 2013-2016, cases were linked, and two-way capture-recapture analysis was carried out.
Results: During 2013-2016, there were an estimated 282 (95% CI: 264-301) acute HCV infections among adults with HIV. The addition of cases with an unknown HIV-serostatus increased the matches (from N=104 to N=129), and subsequently increased the estimated total: 330 (95% CI: 309-351). Underreporting was estimated at 14-20%.
Conclusion: In 2013-2016, up to 330 cases of acute HCV infection were estimated to have occurred among adults with HIV. National surveillance of acute HCV can be improved by increased notification of infections. Surveillance data should ideally include both acute and chronic HCV infections, and be able to distinguish between acute and chronic infections and between initial infections and reinfections.
Classifications: The Netherlands; sexually transmitted infections; hepatitis C; HIV infection; surveillance; epidemiology | 10.2807/1560-7917.ES.2020.25.7.1900450 | medrxiv |
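Two-source capture-recapture of the kind used above is often computed with the Chapman estimator. The sketch below illustrates the calculation; the two registry list sizes are hypothetical placeholders (the record reports only the number of matched cases), so the printed numbers will not reproduce the study's estimates.

```python
# Minimal sketch: Chapman two-source capture-recapture estimate with 95% CI.
import math

def chapman(n1: int, n2: int, m: int):
    """Estimate total cases from two lists of sizes n1, n2 with m matches."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    half = 1.96 * math.sqrt(var)
    return n_hat, n_hat - half, n_hat + half

n_cohort, n_notified, matched = 260, 190, 129   # hypothetical list sizes
estimate, lo, hi = chapman(n_cohort, n_notified, matched)
print(f"Estimated total: {estimate:.0f} (95% CI {lo:.0f}-{hi:.0f})")
# Completeness of one list relative to the estimated total:
print(f"Notification completeness: {n_notified / estimate:.0%}")
```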
10.1101/19002527 | Characterizing co-occurring conditions by age at diagnosis in autism spectrum disorders | Failla, M. D.; Schwartz, K. L.; Chaganti, S.; Cutting, L. E.; Landman, B. A.; Cascio, C. J. | Carissa J Cascio | Vanderbilt University Medical Center | 2019-07-19 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc | psychiatry and clinical psychology | https://www.medrxiv.org/content/early/2019/07/19/19002527.source.xml | Individuals with autism spectrum disorders (ASD) experience a significant number of co-occurring medical conditions, yet little is known about these conditions beyond prevalence. We hypothesized that individuals with ASD experienced an increased burden of co-occurring conditions as measured by presence, frequency, and duration of visits related to co-occurring conditions. We expected that age of ASD diagnosis (early, <7; late, >7) would be associated with different co-occurring conditions. Medical record data were extracted from a large anonymized medical center database for 3097 individuals with ASD and 3097 matched controls. Co-occurring conditions were characterized using a novel tool (pyPheWAS) to examine presence, frequency, and duration of each condition. We identified several categories of co-occurring conditions in ASD: neurological (epilepsy, sleep disorders); psychiatric (mood disorders, adjustment/conduct disorders, suicidal ideation), and developmental. Early ASD diagnosis was associated with epilepsy-related conditions, whereas a later diagnosis was associated with psychiatric conditions. The early ASD diagnosis group had later first diagnosis of co-occurring psychiatric conditions compared to the late ASD diagnosis group. Our work confirms individuals with ASD are under a significant medical burden, with increased duration and frequency of visits associated with co-occurring conditions. Adequate management of these conditions could reduce burden on individuals with ASD. | 10.1177/1362361320934561 | medrxiv |
10.1101/19002220 | HIV-associated sensory neuropathy continues to be a problem in individuals starting tenofovir-based antiretroviral treatment | Pillay, P.; Wadley, A. L.; Cherry, C. L.; Karstaedt, A. S.; Kamerman, P. R. | Peter R Kamerman | University of the Witwatersrand | 2019-07-19 | 1 | PUBLISHAHEADOFPRINT | cc_by | neurology | https://www.medrxiv.org/content/early/2019/07/19/19002220.source.xml | HIV-associated sensory neuropathy (HIV-SN) is a common and often painful neurological condition associated with HIV-infection and its treatment. However, data on the incidence of HIV-SN in neuropathy-free individuals initiating combination antiretroviral therapies (cART) that do not contain the neurotoxic agent stavudine are lacking. We investigated the six-month incidence of HIV-SN in ART-naive individuals initiating tenofovir (TDF)-based cART, and the clinical factors associated with the development of HIV-SN. 120 neuropathy-free and ART-naive individuals initiating cART at a single centre in Johannesburg, South Africa were enrolled. Participants were screened for HIV-SN at study enrolment and then approximately every two months for a period of approximately six months. Symptomatic HIV-SN was defined by the presence of at least one symptom (pain/burning, numbness, paraesthesias) and at least two clinical signs (reduced vibration sense, absent ankle reflexes or pin-prick hypoaesthesia). Asymptomatic HIV-SN required at least two clinical signs only. A total of 88% of the cohort completed three visits within the six-month period. Eleven individuals developed asymptomatic HIV-SN and nine developed symptomatic HIV-SN, giving a six-month cumulative incidence of neuropathy of 140 cases per 1000 patients (95% CI: 80-210), at an incidence rate of 0.37 (95% CI: 0.2-0.5) per person-year. Increasing height and active tuberculosis (TB) disease were independently associated with the risk of developing HIV-SN (p < 0.05). We found that within the first six months of starting cART, incident SN persists in the post-stavudine era, but may be asymptomatic. | null | medrxiv |
10.1101/19001685 | Sitagliptin decreases visceral fat and blood glucose in women with polycystic ovarian syndrome | Devin, J. K.; Wright, P.; Celedonio, J. E.; Nian, H.; Brown, N. J. | Jessica K. Devin | Vanderbilt University Medical Center | 2019-07-13 | 1 | PUBLISHAHEADOFPRINT | cc_no | endocrinology | https://www.medrxiv.org/content/early/2019/07/13/19001685.source.xml | Context: Women with polycystic ovarian syndrome (PCOS) have decreased growth hormone (GH), which can increase visceral adiposity (VAT) and impair vascular function. GH-releasing hormone, a dipeptidyl peptidase-4 (DPP4) substrate, stimulates GH secretion.
Objective: We tested the hypothesis that DPP4 inhibition increases GH and improves glucose levels and vascular function in women with PCOS.
Methods: Eighteen women with PCOS participated in a double-blinded, cross-over study. They received sitagliptin 100 mg vs. placebo daily for one month, separated by an eight-week washout. During each treatment, women underwent a 75-gram oral glucose tolerance test (OGTT) and assessment of vascular function and body composition. Overnight GH secretion was assessed via venous sampling every 10 minutes for 12 hours and analyzed using an automated deconvolution algorithm.
Results: During OGTT, sitagliptin increased GLP-1 (p<0.001) and early insulin secretion (from mean insulinogenic index 1.9±1.2 (SD) to 3.2±3.1; p=0.02) and decreased peak glucose (mean -17.2 mg/dL [95% CI -27.7, -6.6]; p<0.01). At one month, sitagliptin decreased VAT (from 1141.9±700.7 to 1055.1±710.1 g; p=0.02) but did not affect vascular function. Sitagliptin increased GH half-life (from 13.9±3.6 to 17.0±6.8 min, N=16; p=0.04) and interpulse interval (from 53.2±20.0 to 77.3±38.2 min, N=16; p<0.05) but did not increase mean overnight GH (p=0.92 vs. placebo).
Conclusions: Sitagliptin decreased the maximal glucose response to OGTT and VAT. Sitagliptin did not increase overnight GH but increased GH half-life and the interpulse interval.
Precis: Sitagliptin improved body composition and blood glucose following an oral glucose load in women with PCOS. Sitagliptin potentiated GH half-life but did not increase overnight GH levels. | 10.1210/clinem/dgz028 | medrxiv |
10.1101/19002501 | Obstructive sleep apnea, positive airway pressure treatment, and postoperative delirium: protocol for a retrospective observational study | King, C. R.; Escallier, K.; Ju, Y.-E. S.; Lin, N.; Palanca, B. J.; McKinnon, S.; Avidan, M. S. | Michael S Avidan | Washington University | 2019-07-19 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | anesthesia | https://www.medrxiv.org/content/early/2019/07/19/19002501.source.xml | Introduction: Obstructive sleep apnea (OSA) is common among older surgical patients, and delirium is a frequent and serious postoperative complication. Emerging evidence suggests that OSA increases the risk for postoperative delirium. We hypothesize that OSA is an independent risk factor for postoperative delirium, and that in patients with OSA, perioperative adherence to positive airway pressure (PAP) therapy decreases the incidence of postoperative delirium and its sequelae. The proposed retrospective cohort analysis study will use existing datasets to: (i) describe and compare the incidence of postoperative delirium in surgical patients based on OSA diagnosis and treatment with PAP; (ii) assess whether preoperatively untreated OSA is independently associated with postoperative delirium; and (iii) explore whether preoperatively untreated OSA is independently associated with worse postoperative quality of life. The findings of this study will inform on the potential utility and approach of an interventional trial aimed at preventing postoperative delirium in patients with diagnosed and undiagnosed OSA.
Methods and Analysis: Observational data from existing electronic databases will be used, including over 100,000 surgical patients and ~10,000 intensive care unit (ICU) admissions. We will obtain the incidence of postoperative delirium in adults admitted postoperatively to the ICU who underwent structured preoperative assessment, including OSA diagnosis and screening. We will use doubly robust propensity score methods to assess whether untreated OSA independently predicts postoperative delirium. Using similar methodology, we will assess if untreated OSA independently predicts worse postoperative quality of life.
Ethics and dissemination: This study has been approved by the Human Research Protection Office at Washington University School of Medicine. We will publish the results in a peer-reviewed venue. Because the data are secondary and carry a high risk of re-identification, we will not publicly share the data. Data will be destroyed after 1 year of completion of active IRB-approved projects.
Article summary: Strengths and limitations of this study.
- Our granular database includes routine structured preoperative screening for OSA, processed laboratory results, and verified comorbid diagnoses.
- We have limited information on the severity of most comorbidities, creating the possibility for substantial residual confounding.
- Our database includes near-universal and standardized nurse-driven delirium evaluations at multiple time-points as well as clinician diagnoses.
- Compared to prior studies, the large sample size will allow for more aggressive confounder adjustment utilizing linked structured medical histories, intraoperative records, and administrative data.
- Selection bias and confounding by indication are important limitations, which we will address using advanced statistical methods.
| 10.1136/bmjopen-2018-026649 | medrxiv |
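The "doubly robust propensity score methods" named in the protocol can be illustrated with an augmented inverse-probability-weighted (AIPW) estimator. The sketch below uses simulated data and scikit-learn logistic models; it is one common doubly robust construction, not necessarily the authors' exact specification, and all variable names and effect sizes are illustrative.

```python
# Minimal sketch (synthetic data): AIPW estimate of the effect of untreated
# OSA on postoperative delirium, combining a propensity model and an outcome model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20000
X = rng.normal(size=(n, 3))                       # confounders (age, BMI, ...)
p_treat = 1 / (1 + np.exp(-(X @ np.array([0.5, -0.3, 0.2]))))
a = rng.binomial(1, p_treat)                      # 1 = untreated OSA
p_out = 1 / (1 + np.exp(-(-2 + 0.6 * a + X @ np.array([0.4, 0.2, -0.1]))))
y = rng.binomial(1, p_out)                        # 1 = postoperative delirium

ps = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]
out = LogisticRegression().fit(np.c_[a, X], y)
mu1 = out.predict_proba(np.c_[np.ones(n), X])[:, 1]   # predicted risk if exposed
mu0 = out.predict_proba(np.c_[np.zeros(n), X])[:, 1]  # predicted risk if unexposed

# AIPW: outcome-model prediction plus an inverse-probability-weighted residual
aipw1 = mu1 + a * (y - mu1) / ps
aipw0 = mu0 + (1 - a) * (y - mu0) / (1 - ps)
print(f"Risk difference (untreated OSA vs treated): {np.mean(aipw1 - aipw0):.3f}")
```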
10.1101/19002584 | Is therapeutic inertia present in primary diabetes care in Malaysia? A prospective study | CHEW, B. H.; Hussain, H.; Akthar Supian, Z. | BOON HOW CHEW | UNIVERSITI PUTRA MALAYSIA | 2019-07-19 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | endocrinology | https://www.medrxiv.org/content/early/2019/07/19/19002584.source.xml | Aims: This prospective study aimed to determine the proportions of therapeutic inertia when treatment targets were not achieved in adults with T2D at three public health clinics in Malaysia.
Methods: The index prescriptions were those issued when the annual blood test results were reviewed. Prescriptions were verified and classified as 1) no change, 2) stepping up and 3) stepping down. Multivariable logistic regression and sensitivity analyses were conducted.
Results: At follow-up, 552 participants were available for the assessment of therapeutic inertia (78.9% response rate). The mean (SD) age and diabetes duration were 60.0 (9.9) years and 5.0 (6.0) years, respectively. High therapeutic inertia was observed in oral anti-diabetic (61-72%), anti-hypertensive (34-65%) and lipid-lowering therapies (56-77%), and less so in insulin (34-52%). Insulin therapeutic inertia was more likely among those with shorter diabetes duration (adjusted OR 0.9, 95% CI 0.87, 0.98). Those who did not achieve treatment targets were less likely to experience therapeutic inertia: HbA1c ≥ 7.0%: adjusted OR 0.10 (0.04, 0.24); BP ≥ 140/90 mmHg: 0.28 (0.16, 0.50); LDL-cholesterol ≥ 2.6 mmol/L: 0.37 (0.22, 0.64).
Conclusions: Although therapeutic intensification was more likely in the presence of non-achieved treatment targets, the proportions of therapeutic inertia were high.
Trial registration: NCT02730754 https://clinicaltrials.gov/ct2/show/NCT02730754
Highlights:
- Probably the first study in Malaysia or Asian countries
- Assessing three main types of therapeutic inertia among T2D patients in primary diabetes care
- Proportions of therapeutic inertia were high
- Therapeutic inertia was less likely when treatment targets were not achieved
- Possible causes of therapeutic inertia require identification and rectification
| 10.1186/s12875-021-01472-2 | medrxiv |
10.1101/19002584 | Is therapeutic inertia present in primary diabetes care in Malaysia? A prospective study | CHEW, B. H.; Hussain, H.; Akthar Supian, Z. | BOON HOW CHEW | UNIVERSITI PUTRA MALAYSIA | 2019-11-25 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | endocrinology | https://www.medrxiv.org/content/early/2019/11/25/19002584.source.xml | Aims: This prospective study aimed to determine the proportions of therapeutic inertia when treatment targets were not achieved in adults with T2D at three public health clinics in Malaysia.
Methods: The index prescriptions were those issued when the annual blood test results were reviewed. Prescriptions were verified and classified as 1) no change, 2) stepping up and 3) stepping down. Multivariable logistic regression and sensitivity analyses were conducted.
Results: At follow-up, 552 participants were available for the assessment of therapeutic inertia (78.9% response rate). The mean (SD) age and diabetes duration were 60.0 (9.9) years and 5.0 (6.0) years, respectively. High therapeutic inertia was observed in oral anti-diabetic (61-72%), anti-hypertensive (34-65%) and lipid-lowering therapies (56-77%), and less so in insulin (34-52%). Insulin therapeutic inertia was more likely among those with shorter diabetes duration (adjusted OR 0.9, 95% CI 0.87, 0.98). Those who did not achieve treatment targets were less likely to experience therapeutic inertia: HbA1c ≥ 7.0%: adjusted OR 0.10 (0.04, 0.24); BP ≥ 140/90 mmHg: 0.28 (0.16, 0.50); LDL-cholesterol ≥ 2.6 mmol/L: 0.37 (0.22, 0.64).
Conclusions: Although therapeutic intensification was more likely in the presence of non-achieved treatment targets, the proportions of therapeutic inertia were high.
Trial registration: NCT02730754 https://clinicaltrials.gov/ct2/show/NCT02730754
Highlights:
- Probably the first study in Malaysia or Asian countries
- Assessing three main types of therapeutic inertia among T2D patients in primary diabetes care
- Proportions of therapeutic inertia were high
- Therapeutic inertia was less likely when treatment targets were not achieved
- Possible causes of therapeutic inertia require identification and rectification
| 10.1186/s12875-021-01472-2 | medrxiv |
10.1101/19002691 | Bias correction methods for test-negative designs in the presence of misclassification | Endo, A.; Funk, S.; Kucharski, A. J. | Akira Endo | London School of Hygiene & Tropical Medicine | 2019-07-20 | 1 | PUBLISHAHEADOFPRINT | cc_by | epidemiology | https://www.medrxiv.org/content/early/2019/07/20/19002691.source.xml | Abstract: The test-negative design has become a standard approach for vaccine effectiveness studies. However, previous studies suggested that it may be more sensitive than other designs to misclassification of disease outcome caused by imperfect diagnostic tests. This could be a particular limitation in vaccine effectiveness studies where simple tests (e.g. rapid influenza diagnostic tests) are used for logistical convenience. To address this issue, we derived a mathematical representation of the test-negative design with imperfect tests, then developed a bias correction framework for possible misclassification. Test-negative design studies usually include multiple covariates other than vaccine history to adjust for potential confounders; our methods can also address multivariate analyses and be easily coupled with existing estimation tools. We validated the performance of these methods using simulations of common scenarios for vaccine efficacy and were able to obtain unbiased estimates in a variety of parameter settings. | 10.1017/S0950268820002058 | medrxiv |
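A simplified flavor of such misclassification correction: if test sensitivity and specificity are known, the observed test-positive/test-negative counts can be corrected by inverting the misclassification matrix before computing the odds-ratio-based vaccine effectiveness. The sketch below illustrates this general idea with hypothetical counts and assumed test characteristics; it is not the specific framework derived in the paper.

```python
# Minimal sketch: misclassification-corrected counts in a test-negative 2x2 table.
import numpy as np

se, sp = 0.85, 0.95          # assumed test sensitivity and specificity

def corrected_counts(test_pos: float, test_neg: float) -> np.ndarray:
    """Invert the misclassification matrix to recover true pos/neg counts."""
    M = np.array([[se, 1 - sp],
                  [1 - se, sp]])          # maps true counts -> observed counts
    return np.linalg.solve(M, np.array([test_pos, test_neg]))

# Hypothetical observed counts: vaccinated and unvaccinated strata
vac_pos, vac_neg = corrected_counts(40, 460)
unv_pos, unv_neg = corrected_counts(120, 380)
or_corr = (vac_pos / vac_neg) / (unv_pos / unv_neg)
print(f"Misclassification-corrected VE: {1 - or_corr:.1%}")
```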
10.1101/19002634 | Automated Localization and Segmentation of Mononuclear Cell Aggregates in Kidney Histological Images Using Deep Learning | Lituiev, D. S.; Cha, S. J.; Chin, A.; Glicksberg, B. S.; Bishara, A.; Dobi, D.; Cheng, R.; Sohn, J. H.; Laszik, Z.; Hadley, D. | Dexter Hadley | Bakar Computational Health Sciences Institute & Department of Pediatrics, University of California, San Francisco | 2019-07-20 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | pathology | https://www.medrxiv.org/content/early/2019/07/20/19002634.source.xml | Allograft rejection is a major concern in kidney transplantation. Inflammatory processes in patients with kidney allografts involve various patterns of immune cell recruitment and distributions. Lymphoid aggregates (LAs) are commonly observed in patients with kidney allografts, and their presence and localization may correlate with severity of acute rejection. Alongside other markers of inflammation, LA assessment is currently performed manually by pathologists in a qualitative way, which is both time consuming and far from precise. Here we present the first automated method of identifying LAs and measuring their densities in whole slide images of transplant kidney biopsies. We trained a deep convolutional neural network based on U-Net on 44 core needle kidney biopsy slides, monitoring loss on a validation set (n=7 slides). The model was subsequently tested on a hold-out set (n=10 slides). We found that the coarse pattern of LA localization agrees between the annotations and predictions, which is reflected by a high correlation between the annotated and predicted fraction of LA area per slide (Pearson R of 0.9756). Furthermore, the network achieves an auROC of 97.78 ± 0.93% and an IoU score of 69.72 ± 6.24% per LA-containing slide in the test set. Our study demonstrates that a deep convolutional neural network can accurately identify lymphoid aggregates in digitized histological slides of kidney. This study presents a first automatic DL-based approach for quantifying inflammation marks in allograft kidney, which can greatly improve the precision and speed of assessment of allograft kidney biopsies when implemented as part of a computer-aided diagnosis system. | 10.23880/cprj-16000140 | medrxiv |
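The IoU (Jaccard) score reported above measures the overlap between predicted and annotated segmentation masks. A minimal sketch with toy boolean masks (not the study's data) follows.

```python
# Minimal sketch: intersection-over-union for binary segmentation masks.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU (Jaccard index) of two boolean masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0  # two empty masks: perfect

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool); gt[15:45, 15:45] = True
print(f"IoU = {iou(pred, gt):.3f}")
```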
10.1101/19001586 | Modified Long-Axis In-Plane Ultrasound Technique Versus Conventional Palpation Technique For Radial Arterial Cannulation: A Prospective Randomized Controlled Trial | Wang, J.; Lai, Z.; Weng, X.; Lin, Y.; Wu, G.; Huang, Q.; Su, J.; Zeng, J.; Liu, J.; Zhao, Z.; Yan, T.; Zhang, L.; Zhou, L. | Liangcheng Zhang | Department of Anesthesiology, Fujian Medical University Union Hospital, No.29 Xin-Quan Road, Fuzhou, 350001, China | 2019-07-20 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | anesthesia | https://www.medrxiv.org/content/early/2019/07/20/19001586.source.xml | Background: A low first-pass success rate of radial artery cannulation has been reported with the conventional palpation technique (C-PT) and with ultrasound-guided techniques; we therefore evaluated the effect of a modified long-axis in-plane ultrasound technique (M-LAINUT) in guiding radial artery cannulation in adults.
Methods: We conducted a prospective, randomized and controlled clinical trial of 288 patients undergoing radial artery cannulation. Patients were randomized 1:1 to the M-LAINUT or C-PT group at Fujian Medical University Union Hospital between 2017 and 2018. Radial artery cannulation was performed by three anesthesiologists with different levels of experience. The outcomes were the first-attempt and total radial artery cannulation success rates, the number of attempts and the cannulation time.
Results: 285 patients were statistically analyzed. The first-attempt success rate was 91.6% in the M-LAINUT group (n=143) and 57.7% in the C-PT group (n=142; P<0.001) (odds ratio, 7.9; 95% confidence interval, 4.0-15.7). The total success rate (≤5 min and ≤3 attempts) in the M-LAINUT group was 97.9%, compared to 84.5% in the palpation group (p<0.001) (odds ratio, 8.5; 95% confidence interval, 2.5-29.2). The total cannulation time was shorter and the number of attempts fewer in the M-LAINUT group than in the C-PT group (p<0.05).
Conclusion: Modified long-axis in-plane ultrasound-guided radial artery cannulation can increase the first-attempt and total radial artery cannulation success rates, reduce the number of attempts and shorten the total cannulation time in adults. | 10.1097/MD.0000000000018747 | medrxiv |
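The reported first-attempt odds ratio can be checked from the 2x2 table implied by the percentages. The counts below are back-calculated (91.6% of 143 and 57.7% of 142), so treat them as approximate; the Wald interval nonetheless lands close to the reported 7.9 (4.0-15.7).

```python
# Minimal sketch: odds ratio from a 2x2 table with a Wald 95% CI.
import math

a, b = 131, 12    # M-LAINUT: first-attempt successes / failures (approximate)
c, d = 82, 60     # C-PT:     first-attempt successes / failures (approximate)

or_hat = (a * d) / (b * c)
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_hat) - 1.96 * se_log)
hi = math.exp(math.log(or_hat) + 1.96 * se_log)
print(f"OR = {or_hat:.1f} (95% CI {lo:.1f}-{hi:.1f})")   # ~8.0 (4.0-15.7)
```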
10.1101/19002717 | Personality profiles differ between patients with epileptic seizures and patients with psychogenic non-epileptic seizures | Leong, M.; Wang, A. D.; Trainor, D.; Johnstone, B.; Rayner, G.; Kalincik, T.; Kwan, P.; O'Brien, T. J.; Velakoulis, D.; Malpas, C. | Charles Malpas | The University of Melbourne, The Royal Melbourne Hospital, Monash University, The Alfred Hospital | 2019-07-20 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | psychiatry and clinical psychology | https://www.medrxiv.org/content/early/2019/07/20/19002717.source.xml | Aim: The primary aim of the study was to determine whether patients with psychogenic non-epileptic seizures (PNES) have different personality profiles compared to patients with epileptic seizures (ES). The secondary aim was to determine whether any such personality differences could be used to efficiently screen for PNES in clinical settings.
Background: PNES and ES are often difficult to differentiate, leading to incorrect or delayed diagnosis. While the current gold-standard investigation is video-EEG monitoring (VEM), it is resource intensive and not universally available. Although some research has investigated the differential psychological profiles of PNES and ES patients, most studies have focused on symptoms of psychopathology. The lack of research using modern personality models in PNES and ES presents a gap in knowledge that this study aimed to address.
Methods: A retrospective collection of data was conducted on patients who completed the NEO-Five Factor Inventory questionnaire during a VEM admission to the Royal Melbourne Hospital between 2002-2017. Patients were classified as either ES or PNES based on clinical consensus diagnosis. For patients with ES, type of epilepsy and laterality of seizure focus were also recorded. Personality differences were investigated using Bayesian linear mixed effects models. Receiver operating characteristic curve analysis was also performed to generate sensitivities and specificities of individual personality scores.
Results: 305 patients were included in the study. The openness to experience domain was the only personality factor demonstrating strong evidence for a group difference (BF10 = 21.55, d = -0.43 [95% CI -0.71, -0.17]), with patients in the PNES group having higher scores compared to the ES group. Within the openness to experience domain, only the aesthetic interest facet showed evidence for a group difference (BF10 = 7.98, d = -0.39 [95% CI -0.66, -0.12]). ES patients had lower scores on these measures compared to the normal population, while PNES patients did not. Both openness to experience and aesthetic interest, however, showed poor sensitivities (53%, 46% respectively) and specificities (69%, 46% respectively) for classifying PNES and ES patients. There were no differences between personality profiles in Temporal Lobe Epilepsy (TLE) and non-TLE patients, or in laterality in TLE.
Conclusion: Patients with ES exhibit lower openness to experience and aesthetic interest compared to patients with PNES and compared to the general population. Despite these differences, the relatively low sensitivity and specificity of these instruments suggests their use is limited in a clinical setting. Nevertheless, these findings open up new avenues of research using modern personality models to further understand patients with epilepsy and related presentations. | 10.1016/j.seizure.2019.10.011 | medrxiv |
10.1101/19002717 | Personality profiles differ between patients with epileptic seizures and patients with psychogenic non-epileptic seizures | Leong, M.; Wang, A. D.; Trainor, D.; Johnstone, B.; Rayner, G.; Kalincik, T.; Kwan, P.; O'Brien, T. J.; Velakoulis, D.; Malpas, C. | Charles Malpas | The University of Melbourne, The Royal Melbourne Hospital, Monash University, The Alfred Hospital | 2019-08-26 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | psychiatry and clinical psychology | https://www.medrxiv.org/content/early/2019/08/26/19002717.source.xml | Aim: The primary aim of the study was to determine whether patients with psychogenic non-epileptic seizures (PNES) have different personality profiles compared to patients with epileptic seizures (ES). The secondary aim was to determine whether any such personality differences could be used to efficiently screen for PNES in clinical settings.
Background: PNES and ES are often difficult to differentiate, leading to incorrect or delayed diagnosis. While the current gold-standard investigation is video-EEG monitoring (VEM), it is resource intensive and not universally available. Although some research has investigated the differential psychological profiles of PNES and ES patients, most studies have focused on symptoms of psychopathology. The lack of research using modern personality models in PNES and ES presents a gap in knowledge that this study aimed to address.
Methods: A retrospective collection of data was conducted on patients who completed the NEO-Five Factor Inventory questionnaire during a VEM admission to the Royal Melbourne Hospital between 2002-2017. Patients were classified as either ES or PNES based on clinical consensus diagnosis. For patients with ES, type of epilepsy and laterality of seizure focus were also recorded. Personality differences were investigated using Bayesian linear mixed effects models. Receiver operating characteristic curve analysis was also performed to generate sensitivities and specificities of individual personality scores.
Results: 305 patients were included in the study. The openness to experience domain was the only personality factor demonstrating strong evidence for a group difference (BF10 = 21.55, d = -0.43 [95% CI -0.71, -0.17]), with patients in the PNES group having higher scores compared to the ES group. Within the openness to experience domain, only the aesthetic interest facet showed evidence for a group difference (BF10 = 7.98, d = -0.39 [95% CI -0.66, -0.12]). ES patients had lower scores on these measures compared to the normal population, while PNES patients did not. Both openness to experience and aesthetic interest, however, showed poor sensitivities (53%, 46% respectively) and specificities (69%, 46% respectively) for classifying PNES and ES patients. There were no differences between personality profiles in Temporal Lobe Epilepsy (TLE) and non-TLE patients, or in laterality in TLE.
Conclusion: Patients with ES exhibit lower openness to experience and aesthetic interest compared to patients with PNES and compared to the general population. Despite these differences, the relatively low sensitivity and specificity of these instruments suggests their use is limited in a clinical setting. Nevertheless, these findings open up new avenues of research using modern personality models to further understand patients with epilepsy and related presentations. | 10.1016/j.seizure.2019.10.011 | medrxiv |
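The sensitivities and specificities above come from ROC analysis of a single personality score as a classifier. A minimal sketch follows with simulated scores (scikit-learn); the group sizes and effect size are invented to roughly mirror the reported d of about 0.4, and the Youden-optimal threshold shown is just one way to pick a cut-point.

```python
# Minimal sketch (synthetic scores): ROC analysis of one score for PNES vs ES.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
y = np.r_[np.ones(60), np.zeros(245)]                          # 1 = PNES, 0 = ES
scores = np.r_[rng.normal(0.4, 1, 60), rng.normal(0, 1, 245)]  # d ~ 0.4

fpr, tpr, thresholds = roc_curve(y, scores)
j = np.argmax(tpr - fpr)                                       # Youden cut-point
print(f"AUC = {roc_auc_score(y, scores):.2f}")
print(f"At threshold {thresholds[j]:.2f}: "
      f"sensitivity {tpr[j]:.0%}, specificity {1 - fpr[j]:.0%}")
```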
10.1101/19002741 | A pooled analysis of the duration of chemoprophylaxis against malaria after treatment with artesunate-amodiaquine and artemether-lumefantrine | Bretscher, M. T.; Dahal, P.; Griffin, J. T.; Stepniewska, K.; Bassat, Q.; Baudin, E.; D'Alessandro, U.; Djimde, A. A.; Dorsey, G.; Espie, E.; Fofana, B.; Gonzalez, R.; Juma, E.; Karema, C.; Lasry, E.; Lell, B.; Lima, N.; Menendez, C.; Mombo-Ngoma, G.; Moreira, C.; Nikiema, F.; Ouedraogo, J. B.; Staedke, S. G.; Tinto, H.; Valea, I.; Yeka, A.; Ghani, A. C.; Guerin, P. J.; Okell, L. C. | Lucy C Okell | Imperial College London | 2019-07-20 | 1 | PUBLISHAHEADOFPRINT | cc_by | public and global health | https://www.medrxiv.org/content/early/2019/07/20/19002741.source.xml | Artemether-lumefantrine (AL) and artesunate-amodiaquine (AS-AQ) are the most commonly-used treatments against Plasmodium falciparum malaria in Africa. The lumefantrine and amodiaquine partner drugs may provide differing durations of post-treatment prophylaxis, an important additional benefit to patients. Analyzing 4214 individuals from clinical trials in 12 sites, we estimated a mean duration of post-treatment protection of 13.0 days (95% CI 10.7-15.7) for AL and 15.2 days (95% CI 12.8-18.4) for AS-AQ after allowing for transmission intensity. However, the duration varied substantially between sites: where wild type pfmdr1 86 and pfcrt 76 parasite genotypes predominated, AS-AQ provided ~2-fold longer protection than AL. Conversely, AL provided up to 1.5-fold longer protection than AS-AQ where mutants were common. We estimate that choosing AL or AS-AQ as first-line treatment according to local drug sensitivity could alter population-level clinical incidence of malaria by up to 14% in under-five year olds where malaria transmission is high. | 10.1186/s12916-020-1494-3 | medrxiv |
10.1101/19002725 | Differential expression of selected microRNA and putative target genes in peripheral blood cells as early markers of severe forms of dengue | Hapugaswatta, H.; Amarasena, P.; Premaratna, R.; Seneviratne, K. N.; Jayathilaka, N. | Nimanthi Jayathilaka | Department of Chemistry, Faculty of Science, University of Kelaniya, Kelaniya, Sri Lanka | 2019-07-20 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | infectious diseases | https://www.medrxiv.org/content/early/2019/07/20/19002725.source.xml | Background: Dengue presents a wide clinical spectrum including asymptomatic dengue fever (DF) or severe forms, such as dengue hemorrhagic fever (DHF) and dengue shock syndrome (DSS). Early symptoms of DHF are similar to those of non-life-threatening DF. Severe symptoms manifest after 3-5 days of fever, which can be life threatening due to the lack of proper medications and the inability to distinguish severe cases during the early stages. Early prediction of severe dengue in patients with no warning signs who may later develop severe infection is very important for proper disease management to alleviate DHF-related complications and mortality. Due to their role in post-transcriptional regulation of gene expression and their remarkable stability, altered expression of microRNA was evaluated to explore clinically relevant biomarkers.
Methodology/Principal findings: The relative expression of microRNA hsa-let-7e, hsa-miR-30b-5p, hsa-miR-30e-3p, hsa-miR-33a, and hsa-miR-150-5p and several putative target genes in peripheral blood cells (PBC) collected from 20 DF and 20 DHF positive patients within four days of fever onset was evaluated by qRT-PCR. hsa-miR-150-5p showed significant (P<0.05) upregulation in PBC of DHF patients compared to DF patients during the acute phase of infection. Expression of enhancer of zeste homolog 2 (EZH2) was significantly (P<0.05) downregulated, indicating that genes involved in epigenetic regulation are also differentially expressed in DHF patients during the early stage of infection.
Conclusions/Significance: Differential expression of microRNA miR-150-5p and the putative target gene EZH2 may serve as reliable biomarkers of disease severity during early stages of dengue infection.
Author summary: Severe dengue cannot be distinguished from dengue fever during the early stages of infection based on the clinical symptoms. A diagnosis is only made after the patient presents with severe manifestations such as plasma leakage and hemorrhage. During a dengue outbreak, this leads to high occupancy of hospital beds. However, only a small percentage of patients present with severe symptoms and the others do not require medical care at a hospital. Therefore, early prognosis of severe manifestations could reduce dengue-related mortality by identifying the patients who will benefit from hospitalization and early intervention. We demonstrate that severe dengue in Sri Lankan patients is associated with increased expression of miRNA miR-150 and decreased expression of EZH2 during the early stages of infection, when none of the patients showed symptoms of developing severe manifestations at later stages of infection. | 10.1186/s12929-020-0620-z | medrxiv |
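Relative expression by qRT-PCR is conventionally computed with the comparative-Ct (2^-ΔΔCt) method. The sketch below shows that calculation with invented Ct values and a hypothetical small-RNA reference gene; the record does not state which normalizer or exact workflow the authors used.

```python
# Minimal sketch: 2^-ΔΔCt relative expression (toy Ct values, not study data).
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    d_case = ct_target_case - ct_ref_case   # ΔCt in the DHF sample
    d_ctrl = ct_target_ctrl - ct_ref_ctrl   # ΔCt in the DF comparator
    return 2 ** -(d_case - d_ctrl)          # 2^-ΔΔCt

# e.g. miR-150-5p vs a hypothetical small-RNA reference, DHF vs DF
print(f"Fold change: {fold_change(22.1, 18.0, 24.0, 18.2):.2f}x")  # ~3.25x up
```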
10.1101/19002386 | Research transparency promotion by surgical journals publishing randomised controlled trials: a survey | Lombard, N.; Gasmi, A.; Sulpice, L.; Boudjema, K.; Naudet, F.; BERGEAT, D. | Damien BERGEAT | CHU Rennes | 2019-07-22 | 1 | PUBLISHAHEADOFPRINT | cc_no | surgery | https://www.medrxiv.org/content/early/2019/07/22/19002386.source.xml | Objective: To describe surgical journals' position statements on data-sharing policies (primary objective) and to describe other features of their research transparency promotion.
Methods: Only "SURGICAL" journals with an impact factor greater than 2 (Web of Science) were eligible for the study. They were not included if there were no explicit instructions for clinical trial publication in the instructions for authors and if no RCTs had been published between January 2016 and January 2019. The primary outcome was the existence of a data-sharing policy in the instructions for authors. Details on research transparency promotion were also collected, namely the existence of a "prospective registration of clinical trials requirement" policy, a "COIs" disclosure requirement and a specific reference to reporting guidelines such as CONSORT for RCTs.
Results: Among the 87 eligible surgical journals, 82 (94%) were included in the analysis: 67 (77%) had explicit instructions for RCTs and, of the remaining, 15 (17.2%) had published at least one RCT between 2016-2019. The median impact factor was 2.98 [IQR=2.48-3.77], and in 2016 and 2017 the journals published a median of 11.5 RCTs [IQR=5-20.75]. Data-sharing statement instructions (primary outcome) were ICMJE-compliant in four cases (4.88%), weaker in 45.12% (n=37) and nonexistent in 50% (n=41) of the journals. No association was found between journal characteristics and the existence of data-sharing policies (ICMJE-compliant or weaker). A "prospective registration of clinical trials requirement" was associated with ICMJE allusion or affiliation and higher impact factors. Journals with specific RCT instructions in their OIA and journals referenced on the ICMJE website more frequently mandated the use of CONSORT guidelines.
Conclusion: Research transparency promotion is still limited in surgical journals. Standardization of journal requirements vis-à-vis ICMJE guidelines could be a first step forward for research transparency promotion in surgery. | 10.1186/s13063-020-04756-7 | medrxiv |
10.1101/19002782 | Mechanisms of Arrhythmogenicity in Hypertrophic Cardiomyopathy: Insight from Noninvasive Electrocardiographic Imaging | Perez-Alday, E.; Haq, K.; German, D.; Hamilton, C.; Johnson, K.; Phan, F.; Rogovoy, N. M.; Yang, K.; Wirth, A.; Thomas, J.; Dalouk, K.; Fuss, C.; Ferencik, M.; Heitner, S.; Tereshchenko, L. G. | Larisa G Tereshchenko | Oregon Health & Science University | 2019-07-22 | 1 | PUBLISHAHEADOFPRINT | cc_no | cardiovascular medicine | https://www.medrxiv.org/content/early/2019/07/22/19002782.source.xml | Background: Mechanisms of arrhythmogenicity in hypertrophic cardiomyopathy (HCM) are not well understood.
Objective: To characterize an electrophysiological substrate of HCM in comparison to ischemic cardiomyopathy (ICM), or healthy individuals.
Methods: We conducted a prospective case-control study. The study enrolled HCM patients at high risk for ventricular tachyarrhythmia (VT) (n=10; age 61±9 y; left ventricular ejection fraction (LVEF) 60±9%), and three comparison groups: healthy individuals (n=10; age 28±6 y; LVEF>70%), ICM patients with LV hypertrophy (LVH) and known VT (n=10; age 64±9 y; LVEF 31±15%), and ICM patients with LVH and no known VT (n=10; age 70±7 y; LVEF 46±16%). All participants underwent 12-lead ECG, cardiac CT or MRI, and 128-electrode body surface mapping (BioSemi ActiveTwo, Netherlands). Non-invasive voltage and activation maps were reconstructed using the open-source SCIRun (University of Utah) inverse problem-solving environment.
Results: In the epicardial basal anterior segment, HCM patients had the greatest ventricular activation dispersion [16.4±5.5 vs. 13.1±2.7 (ICM with VT) vs. 13.8±4.3 (ICM no VT) vs. 8.1±2.4 ms (Healthy); P=0.0007], the largest unipolar voltage [1094±211 vs. 934±189 (ICM with VT) vs. 898±358 (ICM no VT) vs. 842±90 µV (Healthy); P=0.023], and the greatest voltage dispersion [median (interquartile range) 215 (161-281) vs. 189 (143-208) (ICM with VT) vs. 158 (109-236) (ICM no VT) vs. 110 (106-168) µV (Healthy); P=0.041]. Differences were also observed in other endo- and epicardial basal and apical segments.
Conclusion: HCM is characterized by a greater activation dispersion in basal segments, a larger voltage, and a larger voltage dispersion through LV. | 10.3389/fphys.2020.00344 | medrxiv |
10.1101/19002782 | Mechanisms of Arrhythmogenicity in Hypertrophic Cardiomyopathy: Insight from Noninvasive Electrocardiographic Imaging | Perez-Alday, E.; Haq, K.; German, D.; Hamilton, C.; Johnson, K.; Phan, F.; Rogovoy, N. M.; Yang, K.; Wirth, A.; Thomas, J.; Dalouk, K.; Fuss, C.; Ferencik, M.; Heitner, S.; Tereshchenko, L. G. | Larisa G Tereshchenko | Oregon Health & Science University | 2020-02-28 | 2 | PUBLISHAHEADOFPRINT | cc_no | cardiovascular medicine | https://www.medrxiv.org/content/early/2020/02/28/19002782.source.xml | Background: Mechanisms of arrhythmogenicity in hypertrophic cardiomyopathy (HCM) are not well understood.
Objective: To characterize an electrophysiological substrate of HCM in comparison to ischemic cardiomyopathy (ICM), or healthy individuals.
Methods: We conducted a prospective case-control study. The study enrolled HCM patients at high risk for ventricular tachyarrhythmia (VT) (n=10; age 61±9 y; left ventricular ejection fraction (LVEF) 60±9%), and three comparison groups: healthy individuals (n=10; age 28±6 y; LVEF>70%), ICM patients with LV hypertrophy (LVH) and known VT (n=10; age 64±9 y; LVEF 31±15%), and ICM patients with LVH and no known VT (n=10; age 70±7 y; LVEF 46±16%). All participants underwent 12-lead ECG, cardiac CT or MRI, and 128-electrode body surface mapping (BioSemi ActiveTwo, Netherlands). Non-invasive voltage and activation maps were reconstructed using the open-source SCIRun (University of Utah) inverse problem-solving environment.
Results: In the epicardial basal anterior segment, HCM patients had the greatest ventricular activation dispersion [16.4±5.5 vs. 13.1±2.7 (ICM with VT) vs. 13.8±4.3 (ICM no VT) vs. 8.1±2.4 ms (Healthy); P=0.0007], the largest unipolar voltage [1094±211 vs. 934±189 (ICM with VT) vs. 898±358 (ICM no VT) vs. 842±90 µV (Healthy); P=0.023], and the greatest voltage dispersion [median (interquartile range) 215 (161-281) vs. 189 (143-208) (ICM with VT) vs. 158 (109-236) (ICM no VT) vs. 110 (106-168) µV (Healthy); P=0.041]. Differences were also observed in other endo- and epicardial basal and apical segments.
Conclusion: HCM is characterized by a greater activation dispersion in basal segments, a larger voltage, and a larger voltage dispersion through LV. | 10.3389/fphys.2020.00344 | medrxiv |
10.1101/19002808 | Associations between respiratory health outcomes and coal mine fire PM2.5 smoke exposure: a cross-sectional study | Johnson, A. L.; Gao, C. X.; Dennekamp, M.; Williamson, G. J.; Brown, D.; Carroll, M. T.; Del Monaco, A.; Ikin, J. F.; Abramson, M. J.; Guo, Y. | Amanda L Johnson | Monash University | 2019-07-23 | 1 | PUBLISHAHEADOFPRINT | cc_no | occupational and environmental health | https://www.medrxiv.org/content/early/2019/07/23/19002808.source.xml | Rationale: In 2014, local wildfires ignited a fire in the Morwell open cut coal mine, in south-eastern Australia, which burned for six weeks. Limited research was available regarding the respiratory health effects of coal mine fire-related PM2.5 smoke exposure.
Objective: This study examined associations between self-reported respiratory outcomes in adults and mine fire-related PM2.5 smoke exposure.
Participants: Eligible participants were adult residents of Morwell, identified using the Victorian electoral roll.
Main outcome measures: Self-reported data were collected as part of the Hazelwood Health Study Adult Survey.
Mine fire-related PM2.5 concentrations were retrospectively modelled by the Commonwealth Scientific and Industrial Research Organisation Oceans & Atmosphere Flagship. Personalised mean 24-h and peak 12-h mine fire-related PM2.5 exposures were estimated for each participant. Data were analysed by multivariate logistic regression.
Results: There was some evidence of a dose-response relationship between respiratory outcomes and mine fire PM2.5 concentrations. Chronic cough was associated with an odds ratio (OR) of 1.13 (95% confidence interval 1.03 to 1.23; p-value 0.007) per 10 µg/m3 increment in mean PM2.5 and 1.07 (1.02 to 1.12; 0.004) per 100 µg/m3 increment in peak PM2.5. Current wheeze was associated with peak PM2.5, OR=1.06 (1.02 to 1.11; 0.004), and chronic phlegm with mean PM2.5, OR=1.10 (1.00 to 1.20; 0.052). Males, participants aged 18-64 years and those residing in homes constructed from non-brick/concrete materials or homes with tin/metal roofs had higher estimated ORs.
Conclusions: These findings contribute to the formation of public health policy responses in the event of future major pollution episodes.
Key Messages. What is the key question? Was there an association between mine fire-related PM2.5 smoke exposure and self-reported respiratory health outcomes for adult residents of Morwell, approximately 2.5 years after the mine fire?
What is the bottom line? There was some evidence of a dose-response relationship between respiratory outcomes and mine fire-related PM2.5 concentrations.
Why read on? There is limited research regarding the health effects of coal mine fire-related PM2.5 smoke exposure and, to the best of our knowledge, this is the first study to examine self-reported respiratory symptoms associated with smoke exposure from a coal mine fire. | 10.3390/ijerph16214262 | medrxiv |
10.1101/19002576 | Teacher-rated aggression and co-occurring problems and behaviors among schoolchildren: A comparison of four population-based European cohorts | Whipp, A. M.; Vuoksimaa, E.; Bolhuis, K.; De Zeeuw, E. L.; Korhonen, T.; Mauri, M.; Pulkkinen, L.; Rimfeld, K.; Rose, R. J.; van Beijsterveldt, C. E.; Bartels, M.; Plomin, R.; Tiemeier, H.; Kaprio, J. A.; Boomsma, D. | Alyce M Whipp | University of Helsinki | 2019-07-23 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/07/23/19002576.source.xml | Aggressive behavior in school is an ongoing concern, with the current focus mostly on specific manifestations such as bullying and extreme violence. Children spend a substantial amount of time in school, but their behaviors in the school setting tend to be less well characterized than in the home setting. Since aggression may index multiple behavioral problems, we assessed associations of teacher-rated aggressive behavior with co-occurring externalizing/internalizing problems and social behavior in 39,936 schoolchildren from 4 population-based cohorts from Finland, the Netherlands, and the UK. Mean levels of aggressive behavior differed significantly by gender. Correlations of aggressive behavior were high with all other externalizing problems (0.47-0.80) and lower with internalizing problems (0.02-0.39). A negative association was seen with prosocial behavior (-0.33 to -0.54). Despite the higher mean levels of aggressive behavior in boys, the correlations were notably similar for boys and girls (e.g., aggressive-hyperactivity correlations: 0.51-0.75 boys, 0.47-0.70 girls) and did not vary greatly with respect to age, instrument or cohort. Thus, aggressive behavior at school rarely occurs in isolation and children with problems of aggressive behavior likely require help with other behavioral and emotional problems. It is important to note that greater aggressive behavior is not only associated with greater amount of other externalizing and internalizing problems but also with lower levels of prosocial behavior. | 10.1371/journal.pone.0238667 | medrxiv |
10.1101/19002576 | Teacher-rated aggression and co-occurring behaviors and problems among schoolchildren: A comparison of four population-based European cohorts | Whipp, A. M.; Vuoksimaa, E.; Bolhuis, K.; De Zeeuw, E. L.; Korhonen, T.; Mauri, M.; Pulkkinen, L.; Rimfeld, K.; Rose, R. J.; van Beijsterveldt, C. E.; Bartels, M.; Plomin, R.; Tiemeier, H.; Kaprio, J. A.; Boomsma, D. | Alyce M Whipp | University of Helsinki | 2020-05-24 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2020/05/24/19002576.source.xml | Aggressive behavior in school is an ongoing concern, with the current focus mostly on specific manifestations such as bullying and extreme violence. Children spend a substantial amount of time in school, but their behaviors in the school setting tend to be less well characterized than in the home setting. Since aggression may index multiple behavioral problems, we assessed associations of teacher-rated aggressive behavior with co-occurring externalizing/internalizing problems and social behavior in 39,936 schoolchildren from 4 population-based cohorts from Finland, the Netherlands, and the UK. Mean levels of aggressive behavior differed significantly by gender. Correlations of aggressive behavior were high with all other externalizing problems (0.47-0.80) and lower with internalizing problems (0.02-0.39). A negative association was seen with prosocial behavior (-0.33 to -0.54). Despite the higher mean levels of aggressive behavior in boys, the correlations were notably similar for boys and girls (e.g., aggressive-hyperactivity correlations: 0.51-0.75 boys, 0.47-0.70 girls) and did not vary greatly with respect to age, instrument or cohort. Thus, aggressive behavior at school rarely occurs in isolation and children with problems of aggressive behavior likely require help with other behavioral and emotional problems. It is important to note that greater aggressive behavior is not only associated with greater amount of other externalizing and internalizing problems but also with lower levels of prosocial behavior. | 10.1371/journal.pone.0238667 | medrxiv |
10.1101/19002576 | Teacher-rated aggression and co-occurring behaviors and problems among schoolchildren: A comparison of four population-based European cohorts | Whipp, A. M.; Vuoksimaa, E.; Bolhuis, K.; De Zeeuw, E. L.; Korhonen, T.; Mauri, M.; Pulkkinen, L.; Rimfeld, K.; Rose, R. J.; van Beijsterveldt, C. E.; Bartels, M.; Plomin, R.; Tiemeier, H.; Kaprio, J. A.; Boomsma, D. | Alyce M Whipp | University of Helsinki | 2020-08-19 | 3 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2020/08/19/19002576.source.xml | Aggressive behavior in school is an ongoing concern, with the current focus mostly on specific manifestations such as bullying and extreme violence. Children spend a substantial amount of time in school, but their behaviors in the school setting tend to be less well characterized than in the home setting. Since aggression may index multiple behavioral problems, we assessed associations of teacher-rated aggressive behavior with co-occurring externalizing/internalizing problems and social behavior in 39,936 schoolchildren from 4 population-based cohorts from Finland, the Netherlands, and the UK. Mean levels of aggressive behavior differed significantly by gender. Correlations of aggressive behavior were high with all other externalizing problems (0.47-0.80) and lower with internalizing problems (0.02-0.39). A negative association was seen with prosocial behavior (-0.33 to -0.54). Despite the higher mean levels of aggressive behavior in boys, the correlations were notably similar for boys and girls (e.g., aggressive-hyperactivity correlations: 0.51-0.75 boys, 0.47-0.70 girls) and did not vary greatly with respect to age, instrument or cohort. Thus, aggressive behavior at school rarely occurs in isolation and children with problems of aggressive behavior likely require help with other behavioral and emotional problems. It is important to note that greater aggressive behavior is not only associated with greater amount of other externalizing and internalizing problems but also with lower levels of prosocial behavior. | 10.1371/journal.pone.0238667 | medrxiv |
10.1101/19002774 | Metagenomic identification of severe pneumonia pathogens with rapid Nanopore sequencing in mechanically-ventilated patients. | Yang, L.; Haidar, G.; Zia, H.; Nettles, R.; Qin, S.; Wang, X.; Shah, F.; Rapport, S. F.; Charalampous, T.; Methe, B.; Fitch, A.; Morris, A.; McVerry, B. J.; O'Grady, J.; Kitsios, G. D. | Georgios D. Kitsios | Division of Pulmonary, Allergy and Critical Care Medicine, Department of Medicine, University of Pittsburgh School of Medicine and University of Pittsburgh Medi | 2019-07-24 | 1 | PUBLISHAHEADOFPRINT | cc_no | respiratory medicine | https://www.medrxiv.org/content/early/2019/07/24/19002774.source.xml | Background: Metagenomic sequencing of respiratory microbial communities for etiologic pathogen identification in pneumonia may help overcome the limitations of current culture-based methods. We examined the feasibility and clinical validity of rapid-turnaround metagenomics with Nanopore sequencing of respiratory samples for severe pneumonia diagnosis.
Methods and Findings: We conducted a case-control study of mechanically-ventilated patients with pneumonia (nine culture-positive and five culture-negative) and without pneumonia (eight controls). We collected endotracheal aspirate samples (ETAs) and applied a microbial DNA enrichment method prior to performing metagenomic sequencing with the Oxford Nanopore MinION device. We compared Nanopore results against clinical microbiologic cultures and bacterial 16S rRNA gene sequencing. In nine culture-positive cases, Nanopore revealed communities with low alpha diversity and high abundance of the bacterial (n=8) or fungal (n=1) species isolated by clinical cultures. In four culture-positive cases with resistant organisms, Nanopore detected antibiotic resistance genes corresponding to the phenotypic resistance identified by clinical antibiograms. In culture-negative pneumonia, Nanopore revealed probable bacterial pathogens in 1/5 cases and airway colonization by Candida species in 3/5 cases. In controls, Nanopore showed high abundance of oral bacteria in 5/8 subjects, and identified colonizing respiratory pathogens in the three other subjects. Nanopore and 16S sequencing showed excellent concordance for the most abundant bacterial taxa.
Conclusion: We demonstrated technical feasibility and proof-of-concept clinical validity of Nanopore metagenomics for severe pneumonia diagnosis, with striking concordance with positive microbiologic cultures and clinically actionable information offered from the sequencing profiles of culture-negative samples. Prospective studies with real-time metagenomics are warranted to examine the impact on antimicrobial decision-making and clinical outcomes. | 10.1186/s12931-019-1218-4 | medrxiv |
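The "low alpha diversity" observation above is typically summarized with an index such as Shannon entropy over the taxon abundance profile. A minimal sketch with toy read counts (not sequencing data) follows; a pathogen-dominated community scores lower than a balanced oral-flora-like one.

```python
# Minimal sketch: Shannon alpha diversity of a metagenomic count profile.
import numpy as np

def shannon(counts) -> float:
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()                  # relative abundances
    return float(-(p * np.log(p)).sum())

dominated = [950, 20, 15, 10, 5]   # one pathogen dominates (culture-positive-like)
mixed = [220, 210, 200, 190, 180]  # balanced community (control-like)
print(f"Dominated community H = {shannon(dominated):.2f}")
print(f"Mixed community    H = {shannon(mixed):.2f}")
```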
10.1101/19002857 | Multidimensional factors predicting exclusive breastfeeding in Ethiopia: evidence from a meta-analysis of studies in the past 10 years | Habtewold, T. D.; Endalamaw, A.; Mohammed, S. H.; Mulugeta, H.; Dessie, G.; Kassa, G. M.; Asmare, Y.; Tadesse, M.; Alemu, Y. M.; Sharew, N. T.; Tura, A. K.; Tegegne, B. S.; Alemu, S. M. | Tesfa Dejenie Habtewold | University of Groningen | 2019-07-24 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | pediatrics | https://www.medrxiv.org/content/early/2019/07/24/19002857.source.xml | BackgroundIn Ethiopia, the prevalence of exclusive breastfeeding (EBF) is 60.1%, which is lower than the national Health Sector Transformation Plan 2016-2020, National Nutrition Program 2016-2020 and WHO global target. This may be attributed to multidimensional factors.
ObjectiveThe aim of this meta-analysis was to investigate the association between EBF and educational status, household income, marital status, media exposure, and parity in Ethiopia.
MethodsDatabases used were PubMed, EMBASE, Web of Science, SCOPUS, CINAHL and WHO Global health library, and key terms were searched using interactive searching syntax. It was also supplemented by manual searching. Observational studies published between September 2000 and March 2019 were included. The methodological quality of studies was examined using the Newcastle-Ottawa Scale (NOS) for cross-sectional studies. Data were extracted using the Joanna Briggs Institute (JBI) data extraction tool. To obtain the pooled odds ratio (OR), extracted data were fitted in a random-effects meta-analysis model. Statistical heterogeneity was quantified using Cochrans Q test, {tau}2, and I2 statistics. Additional analysis conducted includes Jackknife sensitivity analysis, cumulative meta-analysis, and meta-regression analysis.
ResultsOut of 553 studies retrieved, 31 studies fulfilled our inclusion criteria. Almost all studies were conducted on mothers with newborn less than 23 months. Maternal educational status (OR = 1.39; p = 0.03; 95% CI = 1.03 - 1.89; I2 = 86.11%), household income (OR = 1.27; p = 0.02; 95% CI = 1.05 - 1.55; I2 = 60.9%) and marital status (OR = 1.39; p = 0.02; 95% CI = 1.05 - 1.83; I2 = 76.96%) were found to be significantly associated with EBF. We also observed an inverse dose-response relationship of EBF with educational status and income. Significant association was not observed between EBF and parity, media exposure and paternal educational status.
ConclusionsIn this meta-analysis, we demonstrated relevant effects of maternal education, income, and marital status on EBF. Therefore, multifaceted, effective, and evidence-based efforts are needed to increase national breastfeeding rates in Ethiopia. | 10.1007/s10995-020-03059-2 | medrxiv |
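The random-effects pooling described in this record's Methods (pooled OR with Cochran's Q, τ² and I²) can be sketched as below. This is a minimal DerSimonian-Laird implementation under the assumption that study-level ORs and 95% CIs are available; the input numbers are hypothetical and are not the meta-analysis data.

```python
import numpy as np

def random_effects_pooled_or(or_values, ci_lower, ci_upper):
    """DerSimonian-Laird random-effects pooling of odds ratios on the log scale."""
    y = np.log(or_values)                                     # log-OR per study
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)   # SE from 95% CI width
    w = 1 / se**2                                             # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed)**2)                          # Cochran's Q
    df = len(y) - 1
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)                             # between-study variance
    w_star = 1 / (se**2 + tau2)                               # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se_re = np.sqrt(1 / np.sum(w_star))
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re), tau2, I2

# Hypothetical study-level ORs with 95% CIs (illustration only).
or_, lo, hi, tau2, I2 = random_effects_pooled_or(
    np.array([1.2, 1.5, 1.1, 1.8]),
    np.array([0.9, 1.0, 0.7, 1.2]),
    np.array([1.6, 2.2, 1.7, 2.7]),
)
print(f"pooled OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), tau^2={tau2:.3f}, I^2={I2:.1f}%")
```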
10.1101/19002972 | fMRI as an outcome measure in clinical trials: A systematic review in clinicaltrials.gov | Sadraee, A.; Paulus, M.; Ekhtiari, H. | Alaleh Sadraee | Institute for Cognitive Science Studies, Tehran, Iran | 2019-07-25 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | radiology and imaging | https://www.medrxiv.org/content/early/2019/07/25/19002972.source.xml | BackgroundFunctional magnetic resonance imaging (fMRI) is quickly becoming a significant outcome measure for clinical trials: more than one thousand trials with fMRI as an outcome measure were registered in clinicaltrials.gov at the time of writing this article. However, 93% of these registered trials are still not completed with published results, and no picture is available of the methodological dimensions of these ongoing trials with fMRI as an outcome measure.
MethodsWe collected trials that use fMRI as an outcome measure by searching "fMRI" in the ClinicalTrials.gov registry on October 13 2018 and reviewing each trial's record entry. Eligible trials' characteristics were extracted and summarized.
ResultsIn total, 1386 clinical trials were identified that reported fMRI in their outcome measures, with fMRI as the only primary outcome in 33% of them. 82% of fMRI trials were started after 2011. The most frequent intervention was drug (29%). 57% of trials had a parallel assignment design and 20% were designed for crossover assignment. For task-based fMRI, cognitive systems (46%) based on RDoC was the most frequent domain of tasks, followed by positive valence systems (19%), systems for social processing (10%) and sensorimotor systems (5%). Less than one-third of trials (28%) registered at least one region of interest for their analysis. The food cue reactivity task, pain perception task, n-back task and monetary incentive delay task were each used in more than 25 registered trials.
ConclusionThe number of fMRI trials (fMRI as an outcome measure) with both task and rest protocols is growing rapidly. Different RDoC domains are covered by various tasks in fMRI trials. However, our study suggests the need for greater harmony and better standardization in the registration of fMRI details on both methods and analysis, which would allow for more effective comparison across studies in systematic reviews and also help the validation of results towards having fMRI as a biomarker in the future. | 10.1002/brb3.2089 | medrxiv |
10.1101/19002766 | Hospital Resource Utilization in Epilepsy: Disparities in Rural vs Urban Care | Bensken, W. P.; Norato, G.; Khan, O. I. | Wyatt P Bensken | National Institute of Neurological Disorders and Stroke, National Institutes of Health | 2019-07-24 | 1 | PUBLISHAHEADOFPRINT | cc0 | neurology | https://www.medrxiv.org/content/early/2019/07/24/19002766.source.xml | ObjectiveThis study assessed differences in resource utilization during epilepsy and seizure related hospitalizations in urban versus rural environments, to identify potential disparities in care for people living with epilepsy in rural communities.
MethodsA 10 year (2001 to 2011) state-wide hospital discharge database was used. Cost, length of stay, illness severity, and procedure use were compared between urban facilities, rural facilities, and epilepsy centers. Comparison across three separate years helped to identify differences between facility types as well as to assess practice changes within facilities over time.
ResultsAverage total charges differed significantly between the three types of facilities, with epilepsy centers having the highest average charge per patient, followed by urban and rural facilities (corrected p-values < 0.001). Illness severity was similar between the three groups, with the exception that epilepsy centers had a higher proportion of major severity cases. Length of stay remained fairly consistent across all three years for epilepsy centers (mean: 4.75 days), while decreasing for urban facilities (means: 4.39 to 3.72) and ultimately decreasing for rural facilities (mean: 3.31 to 3.09). Rates of procedure utilization revealed that CT scan use persisted longer at rural facilities compared to urban facilities, while EEG and vEEG use was low at rural facilities. vEEG was relatively restricted to epilepsy centers, where an initial substantial rise in its use over the years was associated with correspondingly greater surgical procedure rates.
SignificanceThis study indicates a measurable difference between epilepsy centers, urban facilities, and rural facilities in cost and procedure utilization while caring for patients during epilepsy- and seizure-related hospitalizations. If these conclusions are valid, they suggest that persons living with epilepsy who live in rural communities and utilize these services receive care that is suboptimal when compared to urban facilities or epilepsy centers. | null | medrxiv |
10.1101/19002550 | Early detection of molecular disease progression by whole-genome circulating tumor DNA in advanced solid tumors | Davis, A. A.; Iams, W. T.; Chan, D.; Lentz, R. W.; Oh, M. S.; Peterman, N.; Robertson, A.; Shah, A.; Srivas, R.; Lambert, N.; George, P.; Wong, B.; Wood, H.; Close, J.; Tezcan, A.; Nesmith, K.; Tezcan, H.; Chae, Y. K. | Young Kwang Chae | Feinberg School of Medicine, Northwestern University; Robert H. Lurie Comprehensive Cancer Center of Northwestern University | 2019-07-24 | 1 | PUBLISHAHEADOFPRINT | cc_by_nd | oncology | https://www.medrxiv.org/content/early/2019/07/24/19002550.source.xml | PurposeTreatment response assessment for patients with advanced solid tumors is complex and existing methods of assessment require greater precision for early disease assessment. Current guidelines rely on imaging, which has limitations such as the long time required before treatment effectiveness can be determined. Serial changes in whole-genome (WG) circulating tumor DNA (ctDNA) were used to detect disease progression early in the treatment course.
Methods97 patients with advanced cancer were enrolled, and blood was collected before and after initiation of a new treatment. Plasma cell-free DNA libraries were prepared for either WG or WG bisulfite sequencing. Longitudinal changes in the fraction of ctDNA were quantified to identify molecular progression or response in a binary manner. Study endpoints were agreement with first follow-up imaging (FUI) and stratification of progression-free survival (PFS).
ResultsPatients with early molecular progression had shorter PFS (n=14; median 62d) compared to others (n=78; median 263d, HR 12.6 [95% confidence interval 5.8-27.3], log-rank P<10^-10, 5 excluded from analysis). All cases with molecular progression were confirmed by FUI, and molecular progression preceded FUI by a median of 40d. Sensitivity of the assay for identifying clinical progression was 54% at a median of 24d into treatment, and specificity was 100%.
ConclusionsMolecular progression, based on ctDNA data, detected disease progression for cases on treatment with high specificity approximately 6 weeks before follow-up imaging. This technology may enable early course change to a potentially effective therapy, avoiding side effects and cost associated with cycles of ineffective treatment.
Translational RelevanceTools for early assessment of treatment response in advanced solid tumors require refinement. We performed baseline and early serial assessments of WG ctDNA to predict treatment response prior to standard of care clinical and radiographic assessments. Our results demonstrated that the blood-based prediction reliably identified molecular progression, approximately 6 weeks before imaging, with very high specificity and positive predictive value across multiple tumor and treatment types. Patients with molecular progression had significantly shorter progression-free survival compared with non-progressors. In addition, a large quantitative decrease in tumor fraction ratio was associated with significant durable benefit. Collectively, these findings demonstrate that cancer-related changes in the blood precede clinical or imaging changes and may inform changes in management earlier in the treatment course to improve long-term patient outcomes and limit cost. | 10.1158/1535-7163.MCT-19-1060 | medrxiv |
10.1101/19002550 | Early detection of molecular disease progression by whole-genome circulating tumor DNA in advanced solid tumors | Davis, A. A.; Iams, W. T.; Chan, D.; Lentz, R. W.; Oh, M. S.; Peterman, N.; Robertson, A.; Shah, A.; Srivas, R.; Lambert, N.; George, P.; Wong, B.; Wood, H.; Close, J.; Tezcan, A.; Nesmith, K.; Tezcan, H.; Chae, Y. K. | Young Kwang Chae | Feinberg School of Medicine, Northwestern University; Robert H. Lurie Comprehensive Cancer Center of Northwestern University | 2019-08-02 | 2 | PUBLISHAHEADOFPRINT | cc_by_nd | oncology | https://www.medrxiv.org/content/early/2019/08/02/19002550.source.xml | PurposeTreatment response assessment for patients with advanced solid tumors is complex and existing methods of assessment require greater precision for early disease assessment. Current guidelines rely on imaging, which has limitations such as the long time required before treatment effectiveness can be determined. Serial changes in whole-genome (WG) circulating tumor DNA (ctDNA) were used to detect disease progression early in the treatment course.
Methods97 patients with advanced cancer were enrolled, and blood was collected before and after initiation of a new treatment. Plasma cell-free DNA libraries were prepared for either WG or WG bisulfite sequencing. Longitudinal changes in the fraction of ctDNA were quantified to identify molecular progression or response in a binary manner. Study endpoints were agreement with first follow-up imaging (FUI) and stratification of progression-free survival (PFS).
ResultsPatients with early molecular progression had shorter PFS (n=14; median 62d) compared to others (n=78; median 263d, HR 12.6 [95% confidence interval 5.8-27.3], log-rank P<10^-10, 5 excluded from analysis). All cases with molecular progression were confirmed by FUI, and molecular progression preceded FUI by a median of 40d. Sensitivity of the assay for identifying clinical progression was 54% at a median of 24d into treatment, and specificity was 100%.
ConclusionsMolecular progression, based on ctDNA data, detected disease progression for cases on treatment with high specificity approximately 6 weeks before follow-up imaging. This technology may enable early course change to a potentially effective therapy, avoiding side effects and cost associated with cycles of ineffective treatment.
Translational RelevanceTools for early assessment of treatment response in advanced solid tumors require refinement. We performed baseline and early serial assessments of WG ctDNA to predict treatment response prior to standard of care clinical and radiographic assessments. Our results demonstrated that the blood-based prediction reliably identified molecular progression, approximately 6 weeks before imaging, with very high specificity and positive predictive value across multiple tumor and treatment types. Patients with molecular progression had significantly shorter progression-free survival compared with non-progressors. In addition, a large quantitative decrease in tumor fraction ratio was associated with significant durable benefit. Collectively, these findings demonstrate that cancer-related changes in the blood precede clinical or imaging changes and may inform changes in management earlier in the treatment course to improve long-term patient outcomes and limit cost. | 10.1158/1535-7163.MCT-19-1060 | medrxiv |
10.1101/19002550 | Early assessment of molecular progression and response by whole-genome circulating tumor DNA in advanced solid tumors | Davis, A. A.; Iams, W. T.; Chan, D.; Lentz, R. W.; Oh, M. S.; Peterman, N.; Robertson, A.; Shah, A.; Srivas, R.; Wilson, T.; Lambert, N.; George, P.; Wong, B.; Wood, H.; Close, J.; Tezcan, A.; Nesmith, K.; Tezcan, H.; Chae, Y. K. | Young Kwang Chae | Feinberg School of Medicine, Northwestern University; Robert H. Lurie Comprehensive Cancer Center of Northwestern University | 2019-11-25 | 3 | PUBLISHAHEADOFPRINT | cc_by_nd | oncology | https://www.medrxiv.org/content/early/2019/11/25/19002550.source.xml | PurposeTreatment response assessment for patients with advanced solid tumors is complex and existing methods of assessment require greater precision for early disease assessment. Current guidelines rely on imaging, which has limitations such as the long time required before treatment effectiveness can be determined. Serial changes in whole-genome (WG) circulating tumor DNA (ctDNA) were used to detect disease progression early in the treatment course.
Methods97 patients with advanced cancer were enrolled, and blood was collected before and after initiation of a new treatment. Plasma cell-free DNA libraries were prepared for either WG or WG bisulfite sequencing. Longitudinal changes in the fraction of ctDNA were quantified to identify molecular progression or response in a binary manner. Study endpoints were agreement with first follow-up imaging (FUI) and stratification of progression-free survival (PFS).
ResultsPatients with early molecular progression had shorter PFS (n=14; median 62d) compared to others (n=78; median 263d, HR 12.6 [95% confidence interval 5.8-27.3], log-rank P<10^-10, 5 excluded from analysis). All cases with molecular progression were confirmed by FUI, and molecular progression preceded FUI by a median of 40d. Sensitivity of the assay for identifying clinical progression was 54% at a median of 24d into treatment, and specificity was 100%.
ConclusionsMolecular progression, based on ctDNA data, detected disease progression for cases on treatment with high specificity approximately 6 weeks before follow-up imaging. This technology may enable early course change to a potentially effective therapy, avoiding side effects and cost associated with cycles of ineffective treatment.
Translational RelevanceTools for early assessment of treatment response in advanced solid tumors require refinement. We performed baseline and early serial assessments of WG ctDNA to predict treatment response prior to standard of care clinical and radiographic assessments. Our results demonstrated that the blood-based prediction reliably identified molecular progression, approximately 6 weeks before imaging, with very high specificity and positive predictive value across multiple tumor and treatment types. Patients with molecular progression had significantly shorter progression-free survival compared with non-progressors. In addition, a large quantitative decrease in tumor fraction ratio was associated with significant durable benefit. Collectively, these findings demonstrate that cancer-related changes in the blood precede clinical or imaging changes and may inform changes in management earlier in the treatment course to improve long-term patient outcomes and limit cost. | 10.1158/1535-7163.MCT-19-1060 | medrxiv |
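The PFS stratification reported across the three versions of this record (median PFS per group, hazard ratio, log-rank test) can be illustrated with the lifelines library. The sketch below uses hypothetical survival data and is not the study's analysis code.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical PFS data (days): 1 = progression observed, 0 = censored.
t_prog = np.array([30, 45, 62, 62, 70, 90])          # molecular-progression group
e_prog = np.array([1, 1, 1, 1, 1, 0])
t_other = np.array([120, 200, 263, 263, 300, 400])   # all other patients
e_other = np.array([1, 0, 1, 1, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(t_prog, event_observed=e_prog, label="molecular progression")
print(kmf.median_survival_time_)  # Kaplan-Meier median PFS for the progression group

# Log-rank test comparing the two groups' survival curves.
result = logrank_test(t_prog, t_other, event_observed_A=e_prog, event_observed_B=e_other)
print(f"log-rank p = {result.p_value:.4f}")
```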
10.1101/19003053 | Quantification of Optical Coherence Tomography Angiography in Age and Age-related Macular Degeneration using Vessel Density analysis | Vaghefi, E.; Hill, S.; Kresten, H.; Squirrell, D. | Ehsan Vaghefi | University of Auckland | 2019-07-25 | 1 | PUBLISHAHEADOFPRINT | cc_no | ophthalmology | https://www.medrxiv.org/content/early/2019/07/25/19003053.source.xml | PurposeTo determine whether vessel density (VD) as measured by optical coherence tomography angiography (OCT-A) provides insights into retinal and choriocapillaris vascular changes with ageing and intermediate dry age-related macular degeneration (AMD).
MethodsSeventy-five participants were recruited into three cohorts: a young healthy (YH) group, an old healthy (OH) group, and those at high risk for exudative AMD. Raw OCT and OCT-A data from the TOPCON DRI OCT Triton were exported using Topcon IMAGENET 6.0 software, and 3D datasets were analysed to determine retinal thickness and vessel density.
ResultsCentral macular thickness measurements revealed a trend of overall retinal thinning with increasing age. VD through the full thickness of the retina was highest in ETDRS sector 4 (the inferior macula) in all the cohorts. Mean VD was significantly higher in the deep capillary plexus than the superficial capillary plexus in all ETDRS sectors in all cohorts, but there was no significant difference noted between groups. Choriocapillaris VD was significantly lower in all ETDRS sectors in the AMD group compared with the YH and the OH groups.
ConclusionsRetinal vessel density maps, derived from the retinal plexi, are not reliable biomarkers for assessing the ageing macula. Our non-proprietary analysis of the vascular density of the choriocapillaris revealed a significant drop-off of VD with age and disease, but further work is required to corroborate this finding. If repeatable, choriocapillaris VD may provide a non-invasive biomarker of healthy ageing and disease.
Brief SummaryIn this manuscript, we have studied the potential of retinal vessel density as measured by optical coherence tomography angiography (OCT-A) as a biomarker for detection of high risk of developing exudative age-related macular degeneration (AMD). | 10.1097/APO.0000000000000278 | medrxiv |
10.1101/19002592 | Use of undetectable viral load to improve population-based survey estimates of known HIV-positive status and antiretroviral treatment coverage in Kenya | Young, P. W.; Zielinski-Gutierrez, E.; Wamicwe, J.; Mukui, I.; Kim, A. A.; Waruru, A.; Zeh, C.; Kretzschmar, M. E.; De Cock, K. M. | Peter W Young | US Centers for Disease Control and Prevention, Kenya | 2019-07-24 | 1 | PUBLISHAHEADOFPRINT | cc0 | hiv aids | https://www.medrxiv.org/content/early/2019/07/24/19002592.source.xml | ObjectiveTo compare alternative methods of adjusting self-reported knowledge of HIV-positive status and antiretroviral (ARV) therapy use based on undetectable viral load (UVL) and ARV detection in blood.
DesignPost hoc analysis of a nationally representative household survey to compare alternative biomarker-based adjustments to population HIV indicators.
MethodsWe reclassified HIV-positive participants aged 15-64 years in the 2012 Kenya AIDS Indicator Survey (KAIS) who were unaware of their HIV-positive status by self-report as aware and on antiretroviral treatment if either ARVs were detected or viral load was undetectable (<550 copies/mL) on dried blood spots. We compared self-report to adjustments for ARV measurement, UVL, or both.
ResultsTreatment coverage among all HIV-positive respondents increased from 31.8% for self-report to 42.5% [95% confidence interval (CI) 37.4-47.8] based on ARV detection alone, to 42.8% (95% CI 37.9-47.8) when ARV-adjusted, 46.2% (95% CI 41.3-51.1) when UVL-adjusted and 48.8% (95% CI 43.9-53.8) when adjusted for either ARV or UVL. Awareness of positive status increased from 46.9% for self-report to 56.2% (95% CI 50.7- 61.6) when ARV-adjusted, 57.5% (95% CI 51.9-63.0) when UVL-adjusted, and 59.8% (95% CI 54.2-65.1) when adjusted for either ARV or UVL.
ConclusionsUndetectable viral load, which is routinely measured in surveys, may be a useful adjunct or alternative to ARV detection for adjusting survey estimates of knowledge of HIV status and antiretroviral treatment coverage. | 10.1097/QAD.0000000000002453 | medrxiv |
10.1101/19002592 | Use of undetectable viral load to improve survey estimates of known HIV-positive status and antiretroviral treatment coverage | Young, P. W.; Zielinski-Gutierrez, E.; Wamicwe, J.; Mukui, I.; Kim, A. A.; Waruru, A.; Zeh, C.; Kretzschmar, M. E.; De Cock, K. M. | Peter W Young | US Centers for Disease Control and Prevention, Kenya | 2019-11-26 | 2 | PUBLISHAHEADOFPRINT | cc0 | hiv aids | https://www.medrxiv.org/content/early/2019/11/26/19002592.source.xml | ObjectiveTo compare alternative methods of adjusting self-reported knowledge of HIV-positive status and antiretroviral (ARV) therapy use based on undetectable viral load (UVL) and ARV detection in blood.
DesignPost hoc analysis of a nationally representative household survey to compare alternative biomarker-based adjustments to population HIV indicators.
MethodsWe reclassified HIV-positive participants aged 15-64 years in the 2012 Kenya AIDS Indicator Survey (KAIS) who were unaware of their HIV-positive status by self-report as aware and on antiretroviral treatment if either ARVs were detected or viral load was undetectable (<550 copies/mL) on dried blood spots. We compared self-report to adjustments for ARV measurement, UVL, or both.
ResultsTreatment coverage among all HIV-positive respondents increased from 31.8% for self-report to 42.5% [95% confidence interval (CI) 37.4-47.8] based on ARV detection alone, to 42.8% (95% CI 37.9-47.8) when ARV-adjusted, 46.2% (95% CI 41.3-51.1) when UVL-adjusted and 48.8% (95% CI 43.9-53.8) when adjusted for either ARV or UVL. Awareness of positive status increased from 46.9% for self-report to 56.2% (95% CI 50.7- 61.6) when ARV-adjusted, 57.5% (95% CI 51.9-63.0) when UVL-adjusted, and 59.8% (95% CI 54.2-65.1) when adjusted for either ARV or UVL.
ConclusionsUndetectable viral load, which is routinely measured in surveys, may be a useful adjunct or alternative to ARV detection for adjusting survey estimates of knowledge of HIV status and antiretroviral treatment coverage. | 10.1097/QAD.0000000000002453 | medrxiv |
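The reclassification rule in this record's Methods (count a participant as aware/on-ART if ARVs are detected or viral load is below 550 copies/mL) reduces to a simple boolean adjustment. The sketch below is illustrative only; the function name and example values are hypothetical.

```python
def adjusted_art_status(self_reported_on_art, arv_detected, viral_load_copies_per_ml,
                        uvl_threshold=550):
    """Reclassify a participant as aware/on-ART if ARVs are detected in blood
    or viral load is below the undetectable threshold (<550 copies/mL here)."""
    undetectable = viral_load_copies_per_ml < uvl_threshold
    return self_reported_on_art or arv_detected or undetectable

# A participant who self-reports as unaware but has an undetectable viral load
# is counted as on treatment under the ARV-or-UVL adjustment.
print(adjusted_art_status(False, False, 120))   # True
print(adjusted_art_status(False, False, 5000))  # False
```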
10.1101/19001479 | Carriage duration of carbapenemase-producing Enterobacteriaceae in a hospital cohort - implications for infection control measures | Mo, Y.; Hernandez-Koutoucheva, A.; Musicha, P.; Bertrand, D.; Lye, D.; Ng, O. T.; Fenlon, S. N.; Chen, S. L.; Ling, M. L.; Tang, W. Y.; Barkham, T.; Nagarajan, N.; Cooper, B. S.; Marimuthu, K. | Yin Mo | National University Hospital, Mahidol-Oxford Tropical Medicine Research Unit, University of Oxford | 2019-07-24 | 1 | PUBLISHAHEADOFPRINT | cc_no | infectious diseases | https://www.medrxiv.org/content/early/2019/07/24/19001479.source.xml | Carriage duration of carbapenemase-producing Enterobacteriaceae (CPE) is uncertain. We followed 21 CPE carriers over one year. Mean carriage duration was 86 (95%CrI= [60, 122]) days, with 98.5% (95%CrI= [95.0, 99.8]) probability of decolonization in one year. Antibiotic consumption was associated with prolonged carriage. CPE-carrier status should be reviewed yearly. | 10.3201/eid2609.190592 | medrxiv |
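The figures in this record are internally consistent with a constant-hazard (exponential) model of decolonization: with a mean carriage of 86 days, the one-year clearance probability is 1 - exp(-365/86) ≈ 98.6%, close to the reported 98.5%. A minimal check, offered as a sketch rather than the authors' actual model:

```python
import math

mean_carriage_days = 86  # reported mean CPE carriage duration

# Under a constant-hazard (exponential) model, the decolonization rate is
# 1/mean, so the probability of clearing carriage within t days is 1 - exp(-t/mean).
p_decolonized_1yr = 1 - math.exp(-365 / mean_carriage_days)
print(f"{p_decolonized_1yr:.3f}")  # ~0.986, matching the reported 98.5%
```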
10.1101/19002899 | Excess weight mediates changes in HDL pool that reduce cholesterol efflux capacity and increase antioxidant activity | de Lima-Junior, J. C.; Virginio, V. W.; Moura, F. A.; Bertolami, A.; Bertolami, M.; Coelho-Filho, O. R.; Zanotti, I.; Nadruz, W.; Faria, E. C.; Carvalho, L. S. F.; Sposito, A. C. | Andrei C Sposito | Laboratory of Atherosclerosis and Vascular Biology, Faculty of Medical Sciences, State University of Campinas | 2019-07-25 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | cardiovascular medicine | https://www.medrxiv.org/content/early/2019/07/25/19002899.source.xml | ObjectiveObesity-related decline in high-density lipoprotein (HDL) functions such as cholesterol efflux capacity (CEC) has supported the notion that this lipoprotein dysfunction may contribute for atherogenesis among obese patients. Besides, potentially other HDL protective actions may be affected with weight gain and these changes may occur even before the obesity range.
MethodsLipid profile, body mass index (BMI), biochemical measurements, and carotid intima-media thickness (cIMT) were obtained in this cross-sectional study with 899 asymptomatic individuals. HDL functions were measured in a subgroup (n=101).
ResultsIndividuals with increased HDL-C had an attenuated increase in cIMT with elevation of BMI. CEC, HDL-C, HDL size and HDL-antioxidant activity were negatively associated with cIMT. BMI was inversely associated with HDL-mediated inhibition of platelet aggregation and CEC, but surprisingly it was directly associated with the antioxidant activity. Thus, even in non-obese, non-diabetic individuals, increased BMI is associated with wide-ranging changes in the protective functions of HDL, reducing CEC and increasing antioxidant activity. In these subjects, decreased HDL concentration, size or function is related to increased atherosclerotic burden.
ConclusionOur findings demonstrate that in non-obese, non-diabetic individuals, increasing BMI values are associated with impaired protective functions of HDL and a concomitant increase in atherosclerotic burden. | 10.1016/j.numecd.2019.09.017 | medrxiv |
10.1101/19003004 | The Paradox of Female Obesity in Low and Lower-Middle Income Countries | Tang, C. Y.; Woldu, H. G.; Sheets, L. R. | Cynthia Y Tang | University of Missouri School of Medicine | 2019-07-25 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc | epidemiology | https://www.medrxiv.org/content/early/2019/07/25/19003004.source.xml | SettingObesity, once considered an epidemic of the developed world, is now becoming an even more prominent problem than underweight in low and lower middle income countries (LLMICs). Ample literature has shown that as a country's income increases, the burden of obesity shifts from the rich to the poor. This is known as the "Reversal Hypothesis." Many studies have explored the effects of various social determinants of health on obesity, but few have studied education as an independent predictor of female obesity across LLMICs.
ObjectiveGlobally, adult females have a higher prevalence of obesity and the obesity shift occurs more quickly for women than for men. We aim to address this disparity and contribute towards the reversal hypothesis by exploring the association of education and obesity in women in LLMICs.
DesignIn this cross-sectional study, we used a multi-national and multi-year database from the publicly available Demographic and Health Surveys program with data from 34 LLMICs. Education levels are standardized across countries during survey collection.
ResultsOur age-adjusted prevalence ratio (AA-PR) analysis shows that women in LLMICs with higher education have a significantly greater prevalence of obesity than women with no education. We analyzed this phenomenon by individual nations, continents, and income classifications. Educated women living in low income countries have 5.12 times the prevalence of obesity of uneducated women (AA-PR, 95% CI=4.75, 5.53) and 3.42 times the prevalence in lower middle income countries (AA-PR, 95% CI=3.31, 3.54).
ConclusionThese findings highlight a need for more studies and policy attention focusing on female education levels, among other factors, to understand, predict, and prevent obesity in LLMICs.
ARTICLE SUMMARY
Strengths and limitations of this study: A rigorous sample size of 943,947 adult females in 34 LLMIC countries was utilized to study the association between adult female obesity and education level.
Age-adjusted and age-and-wealth-adjusted prevalence ratios of obesity were analyzed based on 34 individual nations, three continents, and two major income categories.
This study includes the most recent data available through the Demographic and Health Surveys program, which standardizes education levels during data collection, allowing for comparison between all surveyed countries.
This study is limited by the relatively small number of countries for which data is available through the DHS dataset, and thus, further research will be needed to show whether these results are generalizable to other LLMICs. | null | medrxiv |
10.1101/19003004 | The Paradox of Female Obesity in Low and Lower-Middle Income Countries | Tang, C. Y.; Woldu, H. G.; Sheets, L. R. | Cynthia Y Tang | University of Missouri School of Medicine | 2021-02-27 | 2 | PUBLISHAHEADOFPRINT | cc_by_nd | epidemiology | https://www.medrxiv.org/content/early/2021/02/27/19003004.source.xml | SettingObesity, once considered an epidemic of the developed world, is now becoming an even more prominent problem than underweight in low and lower middle income countries (LLMICs). Ample literature has shown that as a country's income increases, the burden of obesity shifts from the rich to the poor. This is known as the "Reversal Hypothesis." Many studies have explored the effects of various social determinants of health on obesity, but few have studied education as an independent predictor of female obesity across LLMICs.
ObjectiveGlobally, adult females have a higher prevalence of obesity and the obesity shift occurs more quickly for women than for men. We aim to address this disparity and contribute towards the reversal hypothesis by exploring the association of education and obesity in women in LLMICs.
DesignIn this cross-sectional study, we used a multi-national and multi-year database from the publicly available Demographic and Health Surveys program with data from 34 LLMICs. Education levels are standardized across countries during survey collection.
ResultsOur age-adjusted prevalence ratio (AA-PR) analysis shows that women in LLMICs with higher education have a significantly greater prevalence of obesity than women with no education. We analyzed this phenomenon by individual nations, continents, and income classifications. Educated women living in low income countries have 5.12 times the prevalence of obesity of uneducated women (AA-PR, 95% CI=4.75, 5.53) and 3.42 times the prevalence in lower middle income countries (AA-PR, 95% CI=3.31, 3.54).
ConclusionThese findings highlight a need for more studies and policy attention focusing on female education levels, among other factors, to understand, predict, and prevent obesity in LLMICs.
ARTICLE SUMMARY
Strengths and limitations of this study: A rigorous sample size of 943,947 adult females in 34 LLMIC countries was utilized to study the association between adult female obesity and education level.
Age-adjusted and age-and-wealth-adjusted prevalence ratios of obesity were analyzed based on 34 individual nations, three continents, and two major income categories.
This study includes the most recent data available through the Demographic and Health Surveys program, which standardizes education levels during data collection, allowing for comparison between all surveyed countries.
This study is limited by the relatively small number of countries for which data is available through the DHS dataset, and thus, further research will be needed to show whether these results are generalizable to other LLMICs. | null | medrxiv |
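The age-adjusted prevalence ratio (AA-PR) used in both versions of this record can be computed by direct standardization: stratum-specific prevalences are weighted by a common standard age distribution, and the ratio of the standardized prevalences is taken. The sketch below uses hypothetical counts and age strata, not the DHS data.

```python
import numpy as np

def age_adjusted_prevalence(cases, totals, std_weights):
    """Directly standardized prevalence: stratum prevalences weighted by a
    standard age distribution (weights sum to 1)."""
    strata_prev = np.asarray(cases) / np.asarray(totals)
    return float(np.sum(strata_prev * std_weights))

# Hypothetical obesity counts by age stratum (15-24, 25-34, 35-49) per education group.
std_weights = np.array([0.40, 0.35, 0.25])  # shared standard age distribution
prev_higher_ed = age_adjusted_prevalence([120, 300, 350], [2000, 2500, 2000], std_weights)
prev_no_ed = age_adjusted_prevalence([40, 90, 160], [3000, 3000, 2500], std_weights)

aa_pr = prev_higher_ed / prev_no_ed  # age-adjusted prevalence ratio
print(f"AA-PR = {aa_pr:.2f}")  # ~3.45 with these illustrative counts
```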
10.1101/19002865 | Leveraging multiple data types to estimate the true size of the Zika epidemic in the Americas | Moore, S. M.; Oidtman, R. J.; Soda, J.; Siraj, A. S.; Reiner, R. C.; Barker, C. M.; Perkins, A. | Alex Perkins | University of Notre Dame | 2019-07-24 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/07/24/19002865.source.xml | Since the first Zika virus (ZIKV) infection was confirmed in Brazil in May 2015, several hundred thousand cases have been reported across the Americas. This figure gives an incomplete picture of the epidemic, however, given that asymptomatic infections, imperfect surveillance, and variability in reporting rates complicate the interpretation of case report data. The infection attack rate (IAR)--defined as the proportion of the population that was infected over the course of the epidemic--has important implications for the longer-term epidemiology of Zika in the region, such as the timing, location, and likelihood of future outbreaks. To estimate the IAR and the total number of people infected, we leveraged multiple types of Zika case data from 15 countries and territories where subnational data were publicly available. Datasets included confirmed and suspected Zika cases in pregnant women and in the total population, Zika-associated Guillain-Barré syndrome cases, and cases of congenital Zika syndrome. We used a hierarchical Bayesian model with empirically-informed priors that leveraged the different case report types to simultaneously estimate national and subnational reporting rates, the fraction of symptomatic infections, and subnational IARs. In these 15 countries and territories, estimates of Zika IAR ranged from 0.084 (95% CrI: 0.067 - 0.096) in Peru to 0.361 (95% CrI: 0.214 - 0.514) in Ecuador, with significant subnational variability in IAR for every country. Totaling these infection estimates across these and 33 other countries and territories in the region, our results suggest that 132.3 million (95% CrI: 111.3-170.2 million) people in the Americas have been infected by ZIKV since 2015. These estimates represent the most extensive attempt to date to determine the size of the Zika epidemic in the Americas, and they offer an important baseline for assessing the risk of future Zika epidemics in this region. | 10.1371/journal.pntd.0008640 | medrxiv |
10.1101/19002865 | Leveraging multiple data types to estimate the true size of the Zika epidemic in the Americas | Moore, S. M.; Oidtman, R. J.; Soda, J.; Siraj, A. S.; Reiner, R. C.; Barker, C. M.; Perkins, A. | Alex Perkins | University of Notre Dame | 2019-08-22 | 2 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | epidemiology | https://www.medrxiv.org/content/early/2019/08/22/19002865.source.xml | Since the first Zika virus (ZIKV) infection was confirmed in Brazil in May 2015, several hundred thousand cases have been reported across the Americas. This figure gives an incomplete picture of the epidemic, however, given that asymptomatic infections, imperfect surveillance, and variability in reporting rates complicate the interpretation of case report data. The infection attack rate (IAR)--defined as the proportion of the population that was infected over the course of the epidemic--has important implications for the longer-term epidemiology of Zika in the region, such as the timing, location, and likelihood of future outbreaks. To estimate the IAR and the total number of people infected, we leveraged multiple types of Zika case data from 15 countries and territories where subnational data were publicly available. Datasets included confirmed and suspected Zika cases in pregnant women and in the total population, Zika-associated Guillain-Barré syndrome cases, and cases of congenital Zika syndrome. We used a hierarchical Bayesian model with empirically-informed priors that leveraged the different case report types to simultaneously estimate national and subnational reporting rates, the fraction of symptomatic infections, and subnational IARs. In these 15 countries and territories, estimates of Zika IAR ranged from 0.084 (95% CrI: 0.067 - 0.096) in Peru to 0.361 (95% CrI: 0.214 - 0.514) in Ecuador, with significant subnational variability in IAR for every country. Totaling these infection estimates across these and 33 other countries and territories in the region, our results suggest that 132.3 million (95% CrI: 111.3-170.2 million) people in the Americas have been infected by ZIKV since 2015. These estimates represent the most extensive attempt to date to determine the size of the Zika epidemic in the Americas, and they offer an important baseline for assessing the risk of future Zika epidemics in this region. | 10.1371/journal.pntd.0008640 | medrxiv |
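The Bayesian estimation of infection attack rates described in this record can be illustrated, in a much-simplified single-location form, as a binomial model in which reported cases arise from infections that are both symptomatic and reported. The PyMC sketch below is a toy version under those assumptions, not the paper's multi-country hierarchical model; all priors and counts are illustrative.

```python
import pymc as pm

population = 1_000_000
reported_cases = 12_000  # hypothetical cumulative reported Zika cases

with pm.Model():
    iar = pm.Beta("iar", alpha=1, beta=1)                    # infection attack rate
    symptomatic = pm.Beta("symptomatic", alpha=20, beta=80)  # informative prior ~0.2
    reporting = pm.Beta("reporting", alpha=2, beta=18)       # informative prior ~0.1
    # Reported cases arise from infections that are symptomatic AND reported.
    pm.Binomial("cases", n=population, p=iar * symptomatic * reporting,
                observed=reported_cases)
    idata = pm.sample(1000, tune=1000, chains=2)

print(idata.posterior["iar"].mean())
```

In the paper's full model, multiple case-report types jointly constrain the reporting and symptomatic fractions, which is what makes the IAR identifiable; in this toy version that identification comes only from the informative priors.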
10.1101/19002832 | Alcohol causes an increased risk of head and neck but not breast cancer in individuals from the UK Biobank study: A Mendelian randomisation analysis. | Ingold, N.; Amin, H. A.; Drenos, F. | Fotios Drenos | Brunel University London | 2019-07-26 | 1 | PUBLISHAHEADOFPRINT | cc_no | epidemiology | https://www.medrxiv.org/content/early/2019/07/26/19002832.source.xml | Alcohol intake and the risk of various types of cancers have been previously correlated. Correlation though does not always mean that a causal relationship between the two is present. Excessive alcohol consumption is also correlated with other lifestyle factors and behaviours, such as smoking and increased adiposity, that also affect the risk of cancer and make the identification and estimation of the causal effect of alcohol on cancer difficult. Here, using individual level data for 322,193 individuals from the UK Biobank, we report the observational and causal effects of alcohol consumption on types of cancer previously suggested as correlated to alcohol. Alcohol was observationally associated with cancers of the lower digestive system, head and neck, and breast cancer. No associations were observed when we considered those keeping alcohol consumption below the recommended threshold of 14 units/week. When Mendelian randomisation was used to assess the causal effect of alcohol on cancer, we found that increasing alcohol consumption, especially above the recommended level, was causal to head and neck cancers but not breast cancer. Our results were replicated using a two-sample MR method and data from the much larger COGS genome-wide analysis of breast cancer. We conclude that alcohol is causally related to head and neck cancers, especially cancer of the larynx, but the observed association with breast cancer is likely due to confounding. The suggested threshold of 14 units/week appears suitable to manage the risk of cancer due to alcohol. | null | medrxiv |
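A standard summary-level Mendelian randomisation estimator of the kind referenced in this record combines per-variant Wald ratios by inverse-variance weighting (IVW). The sketch below shows that calculation on hypothetical summary statistics; it is not the study's analysis, which used individual-level UK Biobank data.

```python
import numpy as np

def ivw_mr_estimate(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance weighted MR: combine per-SNP Wald ratios
    (beta_outcome / beta_exposure) weighted by the precision of each ratio."""
    beta_exposure = np.asarray(beta_exposure, float)
    ratio = np.asarray(beta_outcome, float) / beta_exposure
    se_ratio = np.asarray(se_outcome, float) / np.abs(beta_exposure)  # first-order SE
    w = 1 / se_ratio**2
    estimate = np.sum(w * ratio) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    return estimate, se

# Hypothetical per-SNP effects on alcohol intake (exposure) and cancer risk (outcome).
est, se = ivw_mr_estimate(
    beta_exposure=[0.10, 0.08, 0.12],
    beta_outcome=[0.05, 0.03, 0.07],
    se_outcome=[0.02, 0.02, 0.03],
)
print(f"IVW causal estimate = {est:.2f} (SE {se:.2f})")
```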
10.1101/19002683 | Designing a text messaging program to increase adherence to medication for the secondary prevention of cardiovascular disease | Uribe-Rodriguez, A. F.; Perez-Rivero, P. F.; Free, C.; Perel, P.; Murray, E.; Serrano, N.; Horne, R.; Atkins, L.; Casas, J. P.; Bermon, A. | Ana Fernanda Uribe-Rodriguez | Pontifical Bolivarian University | 2019-07-26 | 1 | PUBLISHAHEADOFPRINT | cc_no | public and global health | https://www.medrxiv.org/content/early/2019/07/26/19002683.source.xml | BackgroundCardiovascular medication for secondary prevention has been shown to be effective. However, cardiovascular patients have poor medication adherence, the consequences of which include premature death, recurrence risk, hospitalization, and high financial cost for the healthcare system. Behavioral interventions based on text messaging technology are a promising strategy for improving medication adherence. In low-middle income settings there is no high-quality evidence of a behavioral program delivered by SMS; hence we describe the development, message content, and program design of the intervention for improving adherence to cardiovascular medication.
MethodsWe used the model reported by Abroms and colleagues for developing and evaluating text message-based interventions. This model describes a process in which the intervention created is based on theory and evidence, the target audience is involved to ensure the intervention is engaging and useful, and there is a focus on implementation from the outset.
ResultsOur main result was the design of the program, which consisted of a twelve-month structured intervention based on the Transtheoretical Model of Behavior Change. We wrote and validated clusters of text messages targeting each stage of the model. Each message went through an examination process including evaluation by former cardiovascular patients, experts, and the research team personnel. Another important result was an understanding of patients' perceptions of their experience of cardiovascular disease, barriers to accessing healthcare in Colombia, and the use of mobile technology for health.
ConclusionsAn SMS intervention has the potential to be an acceptable and effective way of improving adherence to medication in patients with cardiovascular disease. This paper describes the development and content of one such intervention. | null | medrxiv |
10.1101/19001768 | A single-blinded randomized crossover trial comparing peer-to-peer and standard instruction on airway management skill training | Surabenjawong, U.; Phrampus, P. E.; Lutz, J.; Farkas, D.; Gopalakrishna, A.; Monsomboon, A.; Limsuwat, C.; O'Donnell, J. M. | Usapan Surabenjawong | Department of Emergency Medicine, Faculty of Medicine, Mahidol University | 2019-07-26 | 1 | PUBLISHAHEADOFPRINT | cc_no | medical education | https://www.medrxiv.org/content/early/2019/07/26/19001768.source.xml | BackgroundPeer-to-peer teaching, which is an alternative to standard teaching (by expert instructors), has the potential to emphasize student self-learning and reduce the cost and workload of the instructor. Self-instruction videos with peer feedback are highlighted in many medical and nursing school curricula.
ObjectiveTo evaluate whether peer-to-peer instruction supported by a structured curriculum and video exemplars is not inferior to standard instructor-led teaching in basic airway management skill, knowledge, and confidence attainment.
MethodThis single-blinded randomized crossover trial was conducted with a sample of novice nursing students. Data were collected through pre- to post-knowledge and confidence assessments. The students were randomly assigned to two crossover groups. Each student learned basic airway management skills through both methods. The students' performances were recorded in every session, with recordings reviewed by blinded expert instructors.
ResultsThe study included 48 participants, who were assigned into both the expert instruction group and peer-to-peer group through computer-generated randomization. The skill rating scores of the peer-to-peer group were not inferior to those of standard teaching. With further analysis, we noted that the peer-to-peer group had significantly higher scores, demonstrating a large effect size (Cohen's d of 1.07 (p-value 0.002) for oropharyngeal airway insertion, 1.14 (p-value <0.001) for nasopharyngeal airway insertion and 0.81 (p-value 0.003) for bag mask ventilation). There was no significant difference between pre- and post-knowledge scores across groups (p-value of 0.13 and 0.22 respectively). Participants in both groups reported higher confidence after learning. However, the difference was not statistically significant.
ConclusionsUndergraduate nursing students trained in basic airway management skills by peer-to-peer instruction and a structured curriculum did not show inferior scores compared to the students who were trained by expert instructors. There was no significant difference in the knowledge and confidence levels between the groups. | 10.1016/j.ecns.2020.06.009 | medrxiv |
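The effect sizes quoted in this record are Cohen's d values. For reference, a minimal pooled-standard-deviation implementation is sketched below on hypothetical score data; the example scores are not the trial's data.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation (Bessel-corrected variances)."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical skill-rating scores for the two teaching arms.
peer = [8.5, 9.0, 7.5, 8.0, 9.5, 8.5]
expert = [7.0, 7.5, 8.0, 6.5, 7.0, 7.5]
print(f"d = {cohens_d(peer, expert):.2f}")
```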
10.1101/19003160 | The Human Leukocyte Antigen Locus and Susceptibility to Rheumatic Heart Disease in South Asians and Europeans | Auckland, K.; Mittal, B.; Cairns, B. J.; Garg, N.; Kumar, S.; Mentzer, A. J.; Kado, J.; Perman, M. L.; Steer, A. C.; Hill, A. V.; Parks, T. | Tom Parks | University of Oxford | 2019-07-26 | 1 | PUBLISHAHEADOFPRINT | cc_by | infectious diseases | https://www.medrxiv.org/content/early/2019/07/26/19003160.source.xml | BackgroundRheumatic heart disease (RHD) remains an important cause of morbidity and mortality globally. Several reports have linked the disease to the human leukocyte antigen (HLA) locus but with negligible consistency.
MethodsWe undertook a genome-wide association study (GWAS) of susceptibility to RHD in 1163 South Asians (672 cases; 491 controls) recruited in India and Fiji. We analysed directly obtained and imputed genotypes, and followed up associated loci in 1459 Europeans (150 cases; 1309 controls) from the UK Biobank study. For fine-mapping, we used HLA imputation to define classical alleles and amino acid polymorphisms.
ResultsA single signal situated in the HLA class III region reached genome-wide significance in the South Asians and replicated in the Europeans (rs201026476; combined odds ratio 1.81, 95% confidence interval 1.51-2.18, P=3.48x10^-10). While the signal fine-mapped to specific amino acid polymorphisms within HLA-DQB1 and HLA-B, the lead class III variant remained associated with susceptibility after conditioning (P=3.34x10^-4), suggesting an independent effect.
ConclusionsA complex HLA signal, likely comprising at least two underlying causal variants, strongly associates with susceptibility to RHD in South Asians and Europeans. Crucially, the involvement of the class III region may partly explain the previous inconsistency, while offering important new insight into pathogenesis. | 10.1038/s41598-020-65855-8 | medrxiv |
10.1101/19003228 | Esophageal Cooling For Protection During Left Atrial Ablation: A Systematic Review And Meta-Analysis | Leung, L.; Gallagher, M.; Santangeli, P.; Tschabrunn, C.; Guerra, J. M.; Campos, B.; Hayat, J.; Atem, F.; Mickelsen, S.; Kulstad, E. | Erik Kulstad | UT Southwestern Medical Center | 2019-07-26 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | cardiovascular medicine | https://www.medrxiv.org/content/early/2019/07/26/19003228.source.xml | BackgroundThermal damage to the esophagus is a risk from radiofrequency (RF) ablation of the left atrium for the treatment of atrial fibrillation (AF), with the most extreme type of thermal injury resulting in atrio-esophageal fistula (AEF), with a correspondingly high mortality rate. Various approaches have been developed to reduce esophageal injury, including power reduction, avoidance of greater contact-force, esophageal deviation, and esophageal cooling. One method of esophageal cooling involves direct instillation of cold water or saline into the esophagus during RF ablation. Although this method provides limited heat-extraction capacity, studies of it have suggested potential benefit.
ObjectiveWe sought to perform a meta-analysis of existing studies evaluating esophageal cooling via direct liquid instillation for the reduction of thermal injury.
MethodsWe reviewed Medline for existing studies involving esophageal cooling for protection from thermal injury during RF ablation. A meta-analysis was then performed using a random-effects model to calculate estimated effect size with 95% confidence intervals, with the outcome of esophageal lesions, stratified by severity, as determined by post-procedure endoscopy.
ResultsA total of 9 studies were identified and reviewed. After excluding pre-clinical and mathematical model studies, 3 were included in the meta-analysis, totaling 494 patients. Esophageal cooling showed a tendency to shift lesion severity downward, such that total lesions did not show a statistically significant change (OR 0.6, 95% CI 0.15 to 2.38). For high grade lesions, a significant OR of 0.39 (95% CI 0.17 to 0.89) in favor of esophageal cooling was found, suggesting that esophageal cooling, even utilizing a low-capacity thermal extraction technique, reduces lesion severity from RF ablation.
ConclusionsEsophageal cooling reduces lesion severity encountered during RF ablation, even when using relatively low heat extraction methods such as direct instillation of cold liquid. Further investigation of this approach is warranted. | 10.1007/s10840-019-00661-5 | medrxiv |
10.1101/19002352 | The ansa subthalamica: a neglected fiber tract | Alho, E. J. L.; Alho, A. T. D. L.; Horn, A.; Martin, M. G.; Edlow, B.; Fischl, B.; Nagy, J.; Fonoff, E. T.; Hamani, C.; Heinsen, H. | Eduardo Joaquim Lopes Alho | Faculdade de Medicina, University of Sao Paulo, Brazil | 2019-07-29 | 1 | PUBLISHAHEADOFPRINT | cc_no | neurology | https://www.medrxiv.org/content/early/2019/07/29/19002352.source.xml | BackgroundThe pallidofugal pathways are classically subdivided into ansa lenticularis, lenticular fasciculus, and subthalamic fasciculus. In addition to these three subsystems, we characterize an anatomical structure that connects the antero-medial pole of the subthalamic nucleus to the ventral portions of the pallidum, both related to limbic processing of information. This bundle has been previously considered to form a part of the ansa lenticularis, however, it shows striking differences on histology and MRI features compared to the ansa lenticularis, and therefore we suggest to denominate it ansa subthalamica.
ObjectivesTo describe the ansa subthalamica as a structure distinct from the ansa lenticularis, one that can be recognized by different methods (histology, high-field MRI and connectome tractography), including current 3T clinical imaging.
MethodsA complete human brain was histologically processed and submitted to registration procedures to correct for tissue deformations and normalization to MNI space. Coordinates of histological structures were then comparable to high-field (7T) post-mortem and in vivo MRIs, pre-operative 3T imaging of 13 parkinsonian patients, and normative connectome tractography. Mean intensity gray values for different structures were measured in Susceptibility-Weighted Images.
ResultsIt was possible to characterize this structure with different methods, and there was a significant difference in signal intensity in the ansa subthalamica (hypointense) compared to the ansa lenticularis (hyperintense).
ConclusionsThe ansa subthalamica may represent the anatomical pathway that connects limbic regions of the STN and pallidum, and should be investigated as a possible substrate for limbic effects of stereotactic surgery of the subthalamic region. | 10.1002/mds.27901 | medrxiv |
10.1101/19003236 | Toward Automated Classification of Pathological Transcranial Doppler Waveform Morphology via Spectral Clustering | Thorpe, S. G.; Thibeault, C. M.; Canac, N.; Jalaleddini, K.; Dorn, A.; Wilk, S. J.; Devlin, T.; Scalzo, F.; Hamilton, R. B. | Samuel Garrett Thorpe | Neural Analytics, Inc. | 2019-07-29 | 1 | PUBLISHAHEADOFPRINT | cc_by_nd | neurology | https://www.medrxiv.org/content/early/2019/07/29/19003236.source.xml | Cerebral Blood Flow Velocity waveforms acquired via Transcranial Doppler (TCD) can provide evidence for cerebrovascular occlusion and stenosis. Thrombolysis in Brain Ischemia (TIBI) flow grades are widely used for this purpose, but require subjective assessment by expert evaluators to be reliable. In this work we seek to determine whether TCD morphology can be objectively assessed using an unsupervised machine learning approach to waveform categorization. TCD beat waveforms were recorded at multiple depths from the Middle Cerebral Arteries of 106 subjects; 33 with CTA-confirmed Large Vessel Occlusion (LVO). From each waveform, three morphological variables were extracted, quantifying absolute peak onset, number/prominence of auxiliary peaks, and systolic canopy length. Spectral clustering identified groups implicit in the resultant three-dimensional feature space, with gap-statistic criteria establishing the optimal cluster number. We found that gap-statistic disparity was maximized at four clusters, referred to as flow types I, II, III, and IV. Types I and II were primarily composed of control subject waveforms, whereas types III and IV derived mainly from LVO patients. Cluster morphologies for types I and IV aligned clearly with Normal and Blunted TIBI flows, respectively. Types II and III represented commonly observed flow-types not delineated by TIBI, which nonetheless deviate quantifiably from normal and blunted flows. We conclude that important morphological variability exists beyond that currently quantified by TIBI in populations experiencing or at-risk for acute ischemic stroke, and posit that the observed flow-types provide the foundation for objective methods of real-time automated flow type classification. | 10.1371/journal.pone.0228642 | medrxiv |
10.1101/19003236 | Toward Automated Classification of Pathological Transcranial Doppler Waveform Morphology via Spectral Clustering | Thorpe, S. G.; Thibeault, C. M.; Canac, N.; Jalaleddini, K.; Dorn, A.; Wilk, S. J.; Devlin, T.; Scalzo, F.; Hamilton, R. B. | Samuel Garrett Thorpe | Neural Analytics, Inc. | 2019-08-22 | 2 | PUBLISHAHEADOFPRINT | cc_by_nd | neurology | https://www.medrxiv.org/content/early/2019/08/22/19003236.source.xml | Cerebral Blood Flow Velocity waveforms acquired via Transcranial Doppler (TCD) can provide evidence for cerebrovascular occlusion and stenosis. Thrombolysis in Brain Ischemia (TIBI) flow grades are widely used for this purpose, but require subjective assessment by expert evaluators to be reliable. In this work we seek to determine whether TCD morphology can be objectively assessed using an unsupervised machine learning approach to waveform categorization. TCD beat waveforms were recorded at multiple depths from the Middle Cerebral Arteries of 106 subjects; 33 with CTA-confirmed Large Vessel Occlusion (LVO). From each waveform, three morphological variables were extracted, quantifying absolute peak onset, number/prominence of auxiliary peaks, and systolic canopy length. Spectral clustering identified groups implicit in the resultant three-dimensional feature space, with gap-statistic criteria establishing the optimal cluster number. We found that gap-statistic disparity was maximized at four clusters, referred to as flow types I, II, III, and IV. Types I and II were primarily composed of control subject waveforms, whereas types III and IV derived mainly from LVO patients. Cluster morphologies for types I and IV aligned clearly with Normal and Blunted TIBI flows, respectively. Types II and III represented commonly observed flow-types not delineated by TIBI, which nonetheless deviate quantifiably from normal and blunted flows. We conclude that important morphological variability exists beyond that currently quantified by TIBI in populations experiencing or at-risk for acute ischemic stroke, and posit that the observed flow-types provide the foundation for objective methods of real-time automated flow type classification. | 10.1371/journal.pone.0228642 | medrxiv |
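The clustering step described in both versions of this record, spectral clustering of a three-dimensional waveform-feature space, can be sketched with scikit-learn as below. Cluster-number selection via the gap statistic is omitted, and the feature matrix is synthetic, so this is an illustration of the technique rather than the study's pipeline.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import StandardScaler

# Hypothetical 3-feature morphology matrix: one row per beat waveform,
# columns = [peak onset, auxiliary-peak prominence, systolic canopy length].
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0.10, 0.2, 0.5], scale=0.05, size=(50, 3)),  # normal-like flows
    rng.normal(loc=[0.40, 0.8, 0.2], scale=0.05, size=(50, 3)),  # blunted-like flows
])
X = StandardScaler().fit_transform(X)

# Spectral clustering with an RBF affinity; the study fixed the cluster number
# at the gap-statistic optimum (four flow types), which we take as given here.
labels = SpectralClustering(n_clusters=4, affinity="rbf", random_state=0).fit_predict(X)
print(np.bincount(labels))  # waveforms assigned to each flow-type cluster
```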
10.1101/19002485 | Considerations in the deployment of novel universal vaccines against epidemic and pandemic influenza | Arinaminpathy, N.; Riley, S.; Barclay, W.; Saad-Roy, C. M.; Grenfell, B. T. | Nimalan Arinaminpathy | Imperial College London | 2019-07-29 | 1 | PUBLISHAHEADOFPRINT | cc_by | infectious diseases | https://www.medrxiv.org/content/early/2019/07/29/19002485.source.xml | There is increasing interest in the development of new, universal influenza vaccines (UIV) that - unlike current vaccines - are effective against a broad range of seasonal influenza strains, as well as against novel pandemic viruses. Even where these vaccines do not block infection, they can moderate clinical severity, reducing morbidity and mortality while potentially also reducing opportunities for transmission. Previous modelling studies have illustrated the potential epidemiological benefits of UIVs, including their potential to mitigate pandemic burden. However, these new vaccines could shape population immunity in complex ways. Here, using mathematical models of influenza transmission, we illustrate two types of unintended consequences that could arise from their future deployment. First, by reducing the amount of infection-induced immunity in a population without fully replacing it, a seasonal UIV programme may permit larger pandemics than in the absence of vaccination. Second, the more successful a future UIV programme is in reducing transmission of seasonal influenza, the more vulnerable the population could become to the emergence of a vaccine-escape variant. These risks could be mitigated by optimal deployment of any future UIV vaccine: namely, the use of a combined vaccine formulation (incorporating conventional as well as multiple universal antigenic targets), and by achieving sufficient population coverage to compensate for reductions in infection-induced immunity. As early candidates of UIVs approach advanced clinical trials, there is a need to monitor their characteristics in such a way that is focused on their potential impact. This work offers a first step in this direction. | 10.1098/rsif.2019.0879 | medrxiv |
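The qualitative mechanism in this abstract, a transmission-reducing vaccine lowering the epidemic attack rate, can be illustrated with a minimal single-strain SIR model. The paper's models are richer (infection-induced immunity dynamics, vaccine-escape variants), so the ODE sketch below, with arbitrary parameters, only shows the attack-rate effect.

```python
import numpy as np
from scipy.integrate import odeint

def sir_with_vaccination(y, t, beta, gamma, v_eff):
    """SIR model in which vaccination scales down the effective transmission rate.
    y = [S, I, R] as population fractions; v_eff in [0, 1]."""
    S, I, R = y
    beta_eff = beta * (1 - v_eff)
    dS = -beta_eff * S * I
    dI = beta_eff * S * I - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

t = np.linspace(0, 365, 366)
y0 = [0.99, 0.01, 0.0]  # initial fractions susceptible, infectious, recovered
unvacc = odeint(sir_with_vaccination, y0, t, args=(0.3, 0.2, 0.0))
vacc = odeint(sir_with_vaccination, y0, t, args=(0.3, 0.2, 0.4))

print(f"attack rate without UIV: {unvacc[-1, 2]:.2f}")
print(f"attack rate with a transmission-reducing UIV: {vacc[-1, 2]:.2f}")
```

In this toy model the reduced attack rate also leaves more of the population susceptible afterwards, which is the seed of the paper's first unintended consequence: less infection-induced immunity going into a future pandemic.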
10.1101/19000505 | Microbiota and Health Study: a prospective cohort of respiratory and diarrheal infections and associated risk factors in Bangladeshi infants under two years | Vidal, K.; Sultana, S.; Prieto Patron, A.; Binia, A.; Rahman, M.; Deeba, I. M.; Bruessow, H.; Sakwinska, O.; Sarker, S. A. | Olga Sakwinska | Nestle Research | 2019-07-29 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc | pediatrics | https://www.medrxiv.org/content/early/2019/07/29/19000505.source.xml | BackgroundEarly childhood respiratory and diarrheal infections are major causes of morbidity and mortality worldwide. There is a need to further assess the epidemiology through prospective and community-based studies to gain key insights that could inform preventative measures to reduce the risk of infectious disease in this vulnerable population. We aimed to analyze the burden and determinants of acute respiratory infection (ARI) and diarrhea episodes affecting infants during their first 2 years of life with state-of-the-art molecular technologies.
MethodsARI and diarrhea episodes were prospectively collected in a community-based, longitudinal cohort of infants (n=267) followed from birth to 2 years of life in Bangladesh. Women were recruited during the third trimester of pregnancy. Demographic, socioeconomic, and environmental information on the households was recorded. Nasopharyngeal and fecal samples were collected from mother-infant pairs during regularly scheduled visits, and also from the infants during unscheduled visits for reported illnesses. Next-generation sequencing methods will be used to determine microbiota composition and function, supplemented by state-of-the-art multiplex molecular detection technology for a wide range of bacterial and viral pathogens.
DiscussionThis study sought to assess the epidemiology of both respiratory and gastrointestinal illnesses during the first 2 years of life in children from a peri-urban community of Dhaka, Bangladesh. Characteristics of the mothers, as well as birth characteristics of the infants enrolled in the Microbiota and Health Study, are presented here. We will determine any potential association between microbiota composition and the abovementioned illnesses, and also examine the influence of known and hypothesized risk factors on the occurrence of infections. Such putative factors include environmental, socioeconomic, maternal, clinical, and selected genetic factors, namely variation in the fucosyltransferase genes (FUT2 and FUT3) of mothers and infants. This study will add to current knowledge about these early childhood infectious diseases, and will provide data to generate hypotheses for the development of nutritional approaches to be used as prevention strategies.
Trial registrationThe study was retrospectively registered at clinicaltrials.gov as NCT02361164 (February 11, 2015). | null | medrxiv |
10.1101/19003038 | Distinct psychopathology profiles in patients with epileptic seizures compared to non-epileptic psychogenic seizures | Wang, A. D.; Leong, M.; Johnstone, B.; Rayner, G.; Kwan, P.; O'Brien, T. J.; Velakoulis, D.; Malpas, C. B. | Charles B Malpas | Department of Medicine, Royal Melbourne Hospital, The University of Melbourne, Australia; Department of Neurosciences, Monash University, Australia; Department | 2019-07-29 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | psychiatry and clinical psychology | https://www.medrxiv.org/content/early/2019/07/29/19003038.source.xml | ObjectiveSimilarities in clinical presentation between epileptic seizures (ES) and psychogenic non-epileptic seizures (PNES) produce a risk of misdiagnosis. Video-EEG monitoring (VEM) is the diagnostic gold standard, but involves significant cost and time commitment, suggesting a need for efficient screening tools.
Methods628 patients were recruited from an inpatient VEM unit: 293 with ES, 158 with PNES, 31 with both ES and PNES, and 146 non-diagnostic. Patients completed the SCL-90-R, a standardised 90-item psychopathology instrument. Bayesian linear models were computed to investigate whether SCL-90-R domain scores or the overall psychopathology factor p differed between groups. Receiver operating characteristic (ROC) curves were computed to investigate the PNES classification accuracy of each domain score and of p. A genetic machine learning algorithm was also used to determine which subset of SCL-90-R items produced the greatest classification accuracy.
ResultsEvidence was found for elevated scores in the PNES group compared to the ES group in the symptom domains of anxiety (b = 0.47, 95%HDI = [0.10, 0.80]), phobic anxiety (b = 1.32, 95%HDI = [0.98, 1.69]), somatisation (b = 0.84, 95%HDI = [0.49, 1.20]), and the general psychopathology factor p (b = 1.35, 95%HDI = [0.86, 1.82]). Of the SCL-90-R domain scores, somatisation produced the highest classification accuracy (AUC = 0.74, 95%CI = [0.69, 0.79]). The genetic algorithm identified a 6-item subset of the SCL-90-R that produced classification accuracy comparable to the somatisation score (AUC = 0.73, 95%CI = [0.64, 0.82]).
SignificanceCompared to patients with ES, patients with PNES report greater symptoms of somatisation, general anxiety, and phobic anxiety against a background of generally elevated psychopathology. While self-reported psychopathology scores are not accurate enough for diagnosis in isolation, elevated psychopathology in these domains should raise the suspicion of PNES in clinical settings. | 10.1016/j.eplepsyres.2019.106234 | medrxiv |
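The ROC analysis in the Methods and Results above can be sketched with scikit-learn. The score distributions below are simulated stand-ins (the patient-level SCL-90-R data are not public), so the printed AUC merely illustrates the mechanics of evaluating a single domain score as a PNES screening marker.

```python
# Hypothetical sketch: ROC analysis of one SCL-90-R domain score (e.g.
# somatisation) for PNES vs ES classification, on simulated scores.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n_es, n_pnes = 293, 158  # group sizes from the abstract
somatisation = np.concatenate([
    rng.normal(0.8, 0.6, n_es),    # assumed ES score distribution
    rng.normal(1.5, 0.7, n_pnes),  # assumed (higher) PNES distribution
])
is_pnes = np.concatenate([np.zeros(n_es), np.ones(n_pnes)])

auc = roc_auc_score(is_pnes, somatisation)
fpr, tpr, thresholds = roc_curve(is_pnes, somatisation)
cutoff = thresholds[np.argmax(tpr - fpr)]  # Youden-optimal screening cut-off
print(f"AUC={auc:.2f}, cut-off={cutoff:.2f}")
```

The genetic-algorithm step reported above would wrap a similar AUC evaluation inside a search over candidate item subsets, retaining the subset with the best classification accuracy.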
10.1101/19003152 | Contrasting Evidence to Reimbursement Reality for Off-label use (OLU) of Drug Treatments in Cancer Care - Rationale and Design of the CEIT-OLU-project | Herbrand, A. K.; Schmitt, A. M.; Briel, M.; Diem, S.; Ewald, H.; Hoogkamer, A.; Joerger, M.; Mc Cord, K. A.; Novak, U.; Sricharoenchai, S.; Hemkens, L. G.; Kasenda, B. | Amanda Katherina Herbrand | Department of Medical Oncology, University Hospital Basel and University of Basel, Basel, Switzerland. | 2019-07-29 | 1 | PUBLISHAHEADOFPRINT | cc_no | public and global health | https://www.medrxiv.org/content/early/2019/07/29/19003152.source.xml | BackgroundOff-label drug use (OLU) reflects a perceived unmet medical need, which is common in oncology. Cancer drugs are often highly expensive, and their reimbursement is a challenge for many health care systems. OLU is frequently regulated by reimbursement restrictions. For evidence-based health care, a treatment ought to be reimbursed if there is sufficient clinical evidence of treatment benefit, independent of patient factors unrelated to the treatment indication. However, little is known about the reality of OLU reimbursement and its association with the underlying clinical evidence. Here we aim to investigate the relationship between reimbursement decisions and the underlying clinical evidence.
Methods/DesignWe extract patient characteristics and details on treatment and reimbursement of cancer drugs from over 3000 patients treated in three Swiss hospitals. We systematically search for clinical trial evidence on benefits associated with OLU in the most common indications. We will describe the prevalence of OLU in Switzerland and its reimbursement in cancer care, and use multivariable logistic regression techniques to investigate the association of approval or rejection of reimbursement requests with the evidence on treatment effects and with further factors, including type of drug, molecular predictive markers, and the health insurer.
DiscussionOur study will provide a systematic overview and assessment of OLU and its reimbursement reality in Switzerland. We may provide a better understanding of access to cancer care as regulated by health insurers, and we hope to identify factors that determine the level of evidence-based cancer care in a highly diverse Western health care system. | 10.1136/esmoopen-2019-000596 | medrxiv
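A minimal sketch of the planned multivariable logistic regression (modelling approval of a reimbursement request as a function of the strength of supporting evidence and other covariates) might look as follows. Variable names and data are hypothetical placeholders; the study's actual covariates and coding may differ.

```python
# Hypothetical sketch: logistic regression of reimbursement approval on
# evidence level, biomarker status, and insurer, using simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "approved": rng.integers(0, 2, n),        # reimbursement decision
    "evidence_level": rng.integers(1, 4, n),  # e.g. 1=case series .. 3=RCT
    "has_biomarker": rng.integers(0, 2, n),   # molecular predictive marker
    "insurer": rng.choice(["A", "B", "C"], n),
})

model = smf.logit("approved ~ evidence_level + has_biomarker + C(insurer)",
                  data=df).fit(disp=0)
print(np.exp(model.params).round(2))  # odds ratios per covariate
```

With real data, an odds ratio near 1 for `evidence_level` would indicate that reimbursement decisions are largely decoupled from the underlying clinical evidence, the central question of the project.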
10.1101/19003129 | A study of knowledge, experience and beliefs about hepatitis B virus (HBV) infection in south western Uganda | Mugisha, J.; Mokaya, J.; Bukenya, D.; Ssembajja, F.; Mayambala, D.; Newton, R.; Matthews, P. C.; Seeley, J. | Philippa C Matthews | Nuffield Department of Medicine, University of Oxford, Medawar Building for Pathogen Research, South Parks Road, Oxford OX1 3SY, UK; Department of Infectious Di | 2019-07-29 | 1 | PUBLISHAHEADOFPRINT | cc_by | public and global health | https://www.medrxiv.org/content/early/2019/07/29/19003129.source.xml | IntroductionThe United Nations Sustainable Development Goals aim for the elimination of viral hepatitis as a public health threat by 2030, leading to efforts to scale up the availability and accessibility of hepatitis B virus (HBV) vaccination, diagnosis, and treatment globally. However, a variety of societal factors, including beliefs, traditions, and stigma, can be major obstacles to all of these interventions. We therefore set out to investigate how HBV is understood and described in communities in Uganda, and whether there is evidence of potential stigma.
MethodWe carried out a qualitative formative study in two sites in South Western Uganda: a village in Kalungu district (site A) and an area on the outskirts of Masaka town (site B). We undertook a rapid assessment to investigate how adults describe HBV infection and to explore their perceptions of the infection. We collected data by conducting a transect walk, observations, community group discussions, and in-depth interviews, sampling a total of 131 individuals. We used inductive content analysis to extract key themes associated with HBV.
ResultsThere is no specific word for HBV infection in the local languages, and knowledge about the infection is varied. Some individuals were completely unfamiliar with HBV infection, while others had heard of it; radio was a common source of information. There was awareness of HBV as a cause of liver disease, but limited knowledge regarding its cause, mode of transmission, and treatment. Stigma around HBV may be rare in this community, owing to limited understanding and experience of the infection.
ConclusionThere is an ongoing need to improve awareness and understanding of HBV in this community. Careful dissemination of accurate information is required to promote acceptance of interventions for prevention, diagnosis and treatment. | 10.3389/fpubh.2019.00304 | medrxiv |
10.1101/19003210 | Association of Kras mutation with tumor deposit status and overall survival of colorectal cancer | Zhang, M.; Hu, W.; Hu, K.; Lin, Y.; Feng, Z.; Yun, J.-P.; Gao, N.; Zhang, L. | Lanjing Zhang | Princeton Medical Center/Rutgers University | 2019-07-30 | 1 | PUBLISHAHEADOFPRINT | cc_by_nc_nd | pathology | https://www.medrxiv.org/content/early/2019/07/30/19003210.source.xml | BackgroundThe recent staging manual upstages node-negative, tumor-deposit-positive colorectal cancer (CRC) from the N0 to the N1c category, but the development of tumor deposits is poorly understood. Meanwhile, Kras mutation is associated with progression of CRC, but its link to tumor-deposit status is unclear.
MethodThis retrospective cohort study included patients with incident CRC diagnosed during 2010-2014 in the National Cancer Database who had recorded Kras and tumor-deposit statuses. We conducted multivariable logistic regression and Cox regression analyses to investigate the factors associated with tumor-deposit status and overall survival, respectively.
ResultsA total of 48,200 CRC patients with Kras status were included in the study (25,407 [52.7%] men; 25,648 [46.8%] <65 years old; 18,381 [38.1%] with Kras mutation). Adjusted for microsatellite instability, age, pathologic stage, and tumor grade, Kras mutation (versus wild-type) was associated with tumor-deposit presence (n=15,229; odds ratio=1.11, 95% CI 1.02 to 1.20). Kras mutation was also independently linked to worse overall survival regardless of tumor-deposit status (n=8,110; adjusted hazard ratio [HR]=1.40, 95% CI 1.09 to 1.79 for CRC with tumor deposits; n=2,618; adjusted HR=1.63, 95% CI 1.16 to 2.28 for CRC without), but to better survival in CRC with no known/applicable tumor-deposit status (n=457; adjusted HR=0.32, 95% CI 0.11 to 0.95).
ConclusionKras mutation is independently associated with tumor-deposit presence and with worse overall survival in CRC with or without tumor deposits. It may therefore play a role in the development of tumor deposits and serve as a target for CRC treatment. | 10.1007/s10552-020-01313-0 | medrxiv
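The two adjusted effect estimates reported above (an odds ratio from multivariable logistic regression and hazard ratios from Cox models) can be reproduced in outline as follows. This is a hedged sketch on simulated data with assumed variable names, not the National Cancer Database analysis itself.

```python
# Hypothetical sketch: adjusted OR for the Kras/tumor-deposit association
# and adjusted HR for overall survival, on simulated records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "kras_mut": rng.integers(0, 2, n),
    "tumor_deposit": rng.integers(0, 2, n),
    "age_ge65": rng.integers(0, 2, n),
    "months": rng.exponential(30, n),  # follow-up time
    "died": rng.integers(0, 2, n),     # event indicator
})

# Adjusted odds ratio for tumor-deposit presence.
logit = smf.logit("tumor_deposit ~ kras_mut + age_ge65", data=df).fit(disp=0)
print("OR:", round(float(np.exp(logit.params["kras_mut"])), 2))

# Adjusted hazard ratio for overall survival.
cph = CoxPHFitter()
cph.fit(df[["kras_mut", "age_ge65", "months", "died"]],
        duration_col="months", event_col="died")
print("HR:", round(float(cph.hazard_ratios_["kras_mut"]), 2))
```

The stratified HRs in the abstract would come from fitting the Cox model separately within tumor-deposit-positive, tumor-deposit-negative, and unknown-status subgroups.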