field            dtype            lengths / values
journal-title    stringclasses    191 values
pmid             stringlengths    8-8
pmc              stringlengths    10-11
doi              stringlengths    12-31
article-title    stringlengths    11-423
abstract         stringlengths    18-3.69k
related-work     stringlengths    12-84k
references       sequencelengths  0-206
reference_info   listlengths      0-192
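The schema above describes one record per article: string fields for metadata and text, plus list fields for the cited PMIDs and their per-reference details. As a minimal sketch of how a dataset with this schema could be loaded and inspected with the Hugging Face datasets library, consider the following; the dataset identifier is a hypothetical placeholder, and the field accesses simply mirror the schema above.

```python
# Minimal sketch of loading and inspecting a dataset with the schema above.
# "user/pubmed-related-work" is a hypothetical placeholder identifier;
# substitute the actual Hub name or a local path.
from datasets import load_dataset

ds = load_dataset("user/pubmed-related-work", split="train")  # hypothetical ID

record = ds[0]
print(record["journal-title"])                # e.g., "JMIR mHealth and uHealth"
print(record["pmid"], record["pmc"], record["doi"])
print(record["article-title"])
print(record["abstract"][:200])               # abstracts run from 18 to ~3.69k characters
print(len(record["references"]))              # list of cited PMIDs (0 to 206 entries)
if record["reference_info"]:
    print(record["reference_info"][0]["title"])  # per-reference pmid/title/abstract dicts
```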
JMIR mHealth and uHealth
29487045
PMC5849795
10.2196/mhealth.7531
Feasibility of Virtual Tablet-Based Group Exercise Among Older Adults in Siberia: Findings From Two Pilot Trials
Background: Regular physical activity has a positive effect on the physical health, well-being, and life satisfaction of older adults. However, engaging in regular physical activity can be challenging for the elderly population because of reduced mobility, low motivation, or a lack of proper infrastructure in their communities. Objective: The objective of this paper was to study the feasibility of home-based online group training, under different group cohesion settings, and its effects on adherence and well-being among Russian older adults. We focused particularly on technology usability and usage and on adherence to the training (in light of premeasures of social support, enjoyment of physical activity, and leg muscle strength). As a secondary objective, we also explored the effects of the technology-supported intervention on subjective well-being and loneliness. Methods: Two pilot trials were carried out exploring two different group cohesion settings (weak cohesion and strong cohesion) from 2015 to 2016 in Tomsk, Russian Federation. A total of 44 older adults (59-83 years) participated in the two pilots and followed a strength and balance training program (Otago) for 8 weeks with the help of a tablet-based virtual gym app. Participants in each pilot were assigned to either an interaction condition, representing online group exercising, or an individual condition, representing home-based individual training. Both conditions featured persuasion strategies but differed in the ability to socialize and train together. Results: Both the interaction and individual groups reported high usability of the technology. Trainees showed a high level of technology acceptance and, in particular, a high score in intention of future use (4.2-5.0 on a 5-point Likert scale). Private texting (short message service [SMS]) was used more than public texting, and the strong cohesion condition resulted in more messages per user. Joint participation in training sessions (copresence) was higher for the social group with higher cohesion. Overall adherence to the training was 74% (SD 27%). Higher levels of social support at baseline were associated with higher adherence in the low cohesion condition (F1,18=5.23, P=.03), whereas no such association was found in the high cohesion condition. An overall improvement in the satisfaction with life score was observed between pre- and postmeasures (F1,31=5.85, P=.02), but no decrease in loneliness was found. Conclusions: Online group exercising proved feasible among healthy, independently living older adults in Russia. The pilots suggest that physical training performed in a virtual environment positively affects the life satisfaction of trainees, but they do not provide support for a decrease in loneliness. High cohesion groups are preferable for group exercising, especially to mitigate the effects of low social support on adherence. Further research on motivating group interactions in training settings is needed.
Related Work: Recent research has demonstrated the effectiveness of technology-supported exercise interventions for older adults in terms of physical fitness [18]. However, although there is an ongoing discussion on whether group exercising or home-based individual exercising is more effective in increasing individuals' adherence to training programs (eg, [19,20]), and despite calls for analyses aimed at understanding group-based exercising in terms of cohesiveness (frequency of contact and group dynamics) [21], no intervention has compared the effectiveness of individual and (different types of) group settings in a technology-supported intervention. Research has also shown a preference among older adults for group training [11,12]. However, implementing group exercising can be challenging, especially in a heterogeneous elderly population, where individual differences lead to motivational issues and problems in tailoring the training [11]. Fitness apps for home-based training have been widely explored in technology-supported interventions (see [22] for a review); however, we are not aware of interventions supporting online group exercising for individuals of different levels of fitness. Consequently, there is very limited research on the effects of level of fitness, social support, and subjective well-being in online group settings. The exception is a recent study on an Internet-based group training intervention [23] that relied on general-purpose teleconference software to deliver real-time exercises to older adults in rural areas. Although it targeted homogeneous groups, focused on physical fitness outcomes, and was limited to a small sample of 10 older adults, the study highlights some interesting challenges in deploying this type of technology. In our previous work [12,24], we took first steps toward testing the feasibility of a tool for online group exercising, namely Gymcentral, which allows individuals of different levels of fitness to follow exercises in the remote company of others. We conducted an 8-week pilot study exploring the effects of online group exercise training in Trento, Italy, with 37 adults aged 65 years and above, who followed the Otago exercise program [25] aimed at strength and balance improvement in older age. The specific focus of the study was on technology acceptance, attitude, and preference toward group training and its effects on physical and social well-being, in comparison with a traditional tablet-based individual training program implementing no persuasion strategies. Still, despite this prior work and the extensive existing literature, open questions remain: How does online group exercising translate to other cultural and environmental settings? How effective is online training with groups of different levels of cohesion? How does online group exercising compare with individual training featuring persuasion strategies?
[ "16813655", "20057213", "20697813", "18063026", "3920714", "24317382", "24169944", "20545467", "24761536", "22585931", "25744109", "24612748", "11818183", "28392983", "11322678", "16978493", "16367493", "2035047", "22818947", "7431205", "18504506", "11124735", "10380242", "16472044", "24554274" ]
[ { "pmid": "16813655", "title": "Physical activity is related to quality of life in older adults.", "abstract": "BACKGROUND\nPhysical activity is associated with health-related quality of life (HRQL) in clinical populations, but less is known whether this relationship exists in older men and women who are healthy. Thus, this study determined if physical activity was related to HRQL in apparently healthy, older subjects.\n\n\nMETHODS\nMeasures were obtained from 112 male and female volunteers (70 +/- 8 years, mean +/- SD) recruited from media advertisements and flyers around the Norman, Oklahoma area. Data was collected using a medical history questionnaire, HRQL from the Medical Outcomes Survey short form-36 questionnaire, and physical activity level from the Johnson Space Center physical activity scale. Subjects were separated into either a higher physically active group (n = 62) or a lower physically active group (n = 50) according to the physical activity scale.\n\n\nRESULTS\nThe HRQL scores in all eight domains were significantly higher (p < 0.05) in the group reporting higher physical activity. Additionally, the more active group had fewer females (44% vs. 72%, p = 0.033), and lower prevalence of hypertension (39% vs. 60%, p = 0.041) than the low active group. After adjusting for gender and hypertension, the more active group had higher values in the following five HRQL domains: physical function (82 +/- 20 vs. 68 +/- 21, p = 0.029), role-physical (83 +/- 34 vs. 61 +/- 36, p = 0.022), bodily pain (83 +/- 22 vs. 66 +/- 23, p = 0.001), vitality (74 +/- 15 vs. 59 +/- 16, p = 0.001), and social functioning (92 +/- 18 vs. 83 +/- 19, p = 0.040). General health, role-emotional, and mental health were not significantly different (p > 0.05) between the two groups.\n\n\nCONCLUSION\nHealthy older adults who regularly participated in physical activity of at least moderate intensity for more than one hour per week had higher HRQL measures in both physical and mental domains than those who were less physically active. Therefore, incorporating more physical activity into the lifestyles of sedentary or slightly active older individuals may improve their HRQL." }, { "pmid": "20057213", "title": "Community exercise: a vital component to healthy aging.", "abstract": "Exercise plays a critical role in promoting healthy aging and in the management of chronic illness. In this paper, we provide an overview of the leading research regarding exercise and chronic illness, and the variables influencing exercise participation among persons with a chronic illness. We then examine the Empoli Adaptive Physical Activity (APA) program as a model program that has overcome many of the obstacles to exercise adherence. Piloted by Local Health Authority 11 in Tuscany, Italy, APA has over 2,000 participants, and it provides tailored exercise opportunities for persons with stroke, back pain, Parkinson's disease or multiple sclerosis, among others illnesses. The Empoli APA program serves as a model community exercise program and is now being replicated throughout Tuscany and in the United States." }, { "pmid": "20697813", "title": "Moving against frailty: does physical activity matter?", "abstract": "Frailty is a common condition in older persons and has been described as a geriatric syndrome resulting from age-related cumulative declines across multiple physiologic systems, with impaired homeostatic reserve and a reduced capacity of the organism to resist stress. 
Therefore, frailty is considered as a state of high vulnerability for adverse health outcomes, such as disability, falls, hospitalization, institutionalization, and mortality. Regular physical activity has been shown to protect against diverse components of the frailty syndrome in men and women of all ages and frailty is not a contra-indication to physical activity, rather it may be one of the most important reasons to prescribe physical exercise. It has been recognized that physical activity can have an impact on different components of the frailty syndrome. This review will address the role of physical activity on the most relevant components of frailty syndrome, with specific reference to: (i) sarcopenia, as a condition which frequently overlaps with frailty; (ii) functional impairment, considering the role of physical inactivity as one of the strongest predictors of physical disability in elders; (iii) cognitive performance, including evidence on how exercise and physical activity decrease the risk of early cognitive decline and poor cognition in late life; and (iv) depression by reviewing the effect of exercise on improving mood and increasing positive well-being." }, { "pmid": "18063026", "title": "Prevention of chronic diseases: a call to action.", "abstract": "Chronic (non-communicable) diseases--principally cardiovascular diseases, cancer, chronic respiratory diseases, and diabetes--are leading causes of death and disability but are surprisingly neglected elements of the global-health agenda. They are underappreciated as development issues and underestimated as diseases with profound economic effects. Achievement of the global goal for prevention and control of chronic diseases would avert 36 million deaths by 2015 and would have major economic benefits. The main challenge for achievement of the global goal is to show that it can be reached in a cost-effective manner with existing interventions. This series of papers in The Lancet provides evidence that this goal is not only possible but also realistic with a small set of interventions directed towards whole populations and individuals who are at high risk. The total yearly cost of the interventions in 23 low-income and middle-income countries is about US$5.8 billion (as of 2005). In this final paper in the Series we call for a serious and sustained worldwide effort to prevent and control chronic diseases in the context of a general strengthening of health systems. Urgent action is needed by WHO, the World Bank, regional banks and development agencies, foundations, national governments, civil society, non-governmental organisations, the private sector including the pharmaceutical industry, and academics. We have established the Chronic Disease Action Group to encourage, support, and monitor action on the implementation of evidence-based efforts to promote global, regional, and national action to prevent and control chronic diseases." }, { "pmid": "3920714", "title": "The determinants of physical activity and exercise.", "abstract": "Evaluation and delivery of physical activity and exercise programs appear impeded by the substantial numbers of Americans who are unwilling or unable to participate regularly in physical activity. As a step toward identifying effective interventions, we reviewed available research on determinants relating to the adoption and maintenance of physical activity. We categorized determinants as personal, environmental, or characteristic of the exercise. 
We have considered supervised participation separately from spontaneous activity in the general population. A wide variety of determinants, populations, and settings have been studied within diverse research traditions and disciplines. This diversity and the varied interpretation of the data hinder our clearly summarizing the existing knowledge. Although we provide some directions for future study and program evaluation, there is a need for research that tests hypotheses derived from theoretical models and that has clear implications for intervention programs. We still need to explore whether general theories of health behavior or approaches relating to specific exercises or activities can be used to predict adoption and maintenance of physical activity." }, { "pmid": "24317382", "title": "Prevalence of sedentary behavior in older adults: a systematic review.", "abstract": "Sedentary behavior is a cluster of behaviors adopted in a sitting or lying posture where little energy is being expended. Sedentary behavior is a risk factor for health independent to inactivity. Currently, there are no published systematic reviews on the prevalence of sedentary behavior objectively measured in, or subjectively reported by, older adults. The aim of this systematic review was to collect and analyze published literature relating to reported prevalence of sedentary behavior, written in English, on human adults, where subjects aged 60 years and over were represented in the study. 23 reports covered data from 18 surveys sourced from seven countries. It was noted that sedentary behavior is defined in different ways by each survey. The majority of surveys included used self-report as a measurement of sedentary behavior. Objective measurements were also captured with the use of body worn accelerometers. Whether measurements are subjective or objective, the majority of older adults are sedentary. Almost 60% of older adult's reported sitting for more than 4 h per day, 65% sit in front of a screen for more than 3 h daily and over 55% report watching more than 2 h of TV. However, when measured objectively in a small survey, it was found that 67% of the older population were sedentary for more than 8.5 h daily." }, { "pmid": "24169944", "title": "The effect of fall prevention exercise programmes on fall induced injuries in community dwelling older adults: systematic review and meta-analysis of randomised controlled trials.", "abstract": "OBJECTIVE\nTo determine whether, and to what extent, fall prevention exercise interventions for older community dwelling people are effective in preventing different types of fall related injuries.\n\n\nDATA SOURCES\nElectronic databases (PubMed, the Cochrane Library, Embase, and CINAHL) and reference lists of included studies and relevant reviews from inception to July 2013.\n\n\nSTUDY SELECTION\nRandomised controlled trials of fall prevention exercise interventions, targeting older (>60 years) community dwelling people and providing quantitative data on injurious falls, serious falls, or fall related fractures.\n\n\nDATA SYNTHESIS\nBased on a systematic review of the case definitions used in the selected studies, we grouped the definitions of injurious falls into more homogeneous categories to allow comparisons of results across studies and the pooling of data. For each study we extracted or calculated the rate ratio of injurious falls. Depending on the available data, a given study could contribute data relevant to one or more categories of injurious falls. 
A pooled rate ratio was estimated for each category of injurious falls based on random effects models.\n\n\nRESULTS\n17 trials involving 4305 participants were eligible for meta-analysis. Four categories of falls were identified: all injurious falls, falls resulting in medical care, severe injurious falls, and falls resulting in fractures. Exercise had a significant effect in all categories, with pooled estimates of the rate ratios of 0.63 (95% confidence interval 0.51 to 0.77, 10 trials) for all injurious falls, 0.70 (0.54 to 0.92, 8 trials) for falls resulting in medical care, 0.57 (0.36 to 0.90, 7 trials) for severe injurious falls, and 0.39 (0.22 to 0.66, 6 trials) for falls resulting in fractures, but significant heterogeneity was observed between studies of all injurious falls (I(2)=50%, P=0.04).\n\n\nCONCLUSIONS\nExercise programmes designed to prevent falls in older adults also seem to prevent injuries caused by falls, including the most severe ones. Such programmes also reduce the rate of falls leading to medical care." }, { "pmid": "20545467", "title": "Older adults' motivating factors and barriers to exercise to prevent falls.", "abstract": "The aim of this study was to describe motivating factors and barriers for older adults to adhere to group exercise in the local community aiming to prevent falls, and thereby gain knowledge about how health professionals can stimulate adherence. The motivation equation was used as a theoretical framework. Data were collected from individual semi-structured interviews (n = 10). The interviews were taped, transcribed, and thereafter analysed by using a descriptive content analysis consisting of four steps. The results showed that motivating factors to adhere to recommended exercise were perceived prospects of staying independent, maintaining current health status, and improving physical balance and the ability to walk. Barriers were reduced health status, lack of motivation, unpleasant experience during previous exercise group sessions, and environmental factors. All participants wanted information from health professionals on the benefit of exercise. Many considered individual variations in functional skills within each group as a disadvantage. The knowledge gained from this study suggests a greater involvement from all health professionals in motivating older adults to attend exercise groups. The results also suggest that physical therapists should be more aware of the importance of comparative levels of physical function when including participants in exercise groups." }, { "pmid": "24761536", "title": "Seasonal variation and homes: understanding the social experiences of older adults.", "abstract": "There has been limited research on the importance of seasons in the lives of older adults. Previous research has highlighted seasonal fluctuations in physical functioning--including limb strength, range of motion, and cardiac death--the spread of influenza in seasonal migration patterns. In addition, older adults experience isolation for various reasons, such as decline of physical and cognitive ability, lack of transportation, and lack of opportunities for social interaction. There has been much attention paid to the social isolation of older adults, yet little analysis about how the isolation changes throughout the year. 
Based on findings from an ethnographic study of older adults (n = 81), their family members (n = 49), and supportive professionals (n = 46) as they embark on relocation from their homes, this study analyzes the processes of moving for older adults. It examines the seasonal fluctuations of social isolation because of the effect of the environment on the social experiences of older adults. Isolation occurs because of the difficulty inclement weather causes on social interactions and mobility. The article concludes with discussion of the ways that research and practice can be designed and implemented to account for seasonal variation." }, { "pmid": "22585931", "title": "Serum [25(OH)D] status, ankle strength and activity show seasonal variation in older adults: relevance for winter falls in higher latitudes.", "abstract": "BACKGROUND\nseasonal variation exists in serum [25(OH)D] and physical activity, especially at higher latitudes, and these factors impact lower limb strength. This study investigates seasonal variation in leg strength in a longitudinal repeated measures design concurrently with serum vitamin D and physical activity.\n\n\nMETHODS\neighty-eight community-dwelling independently mobile older adults (69.2 ± 6.5 years) were evaluated five times over a year, at the end of five consecutive seasons at latitude 41.1°S, recruited in two cohorts. Leg strength, serum [25(OH)D] and physical activity levels were measured. Time spent outside was recorded. Monthly falls diaries recorded falls. Data were analysed to determine annual means and percentage changes.\n\n\nRESULTS\nsignificant variation in [25(OH)D] (±15%), physical activity (±13%), ankle dorsiflexion strength (±8%) and hours spent outside (±20%) (all P < 0.001) was demonstrated over the year, with maximums in January and February (mid-summer). Low mean ankle strength was associated with increased incidence of falling (P = 0.047). Quadriceps strength did not change (±2%; P = 0.53).\n\n\nCONCLUSION\nankle dorsiflexor strength varied seasonally. Increased ankle strength in summer may be influenced by increased levels of outdoors activity over the summer months. Reduced winter-time dorsiflexor strength may predispose older people to increased risk of tripping-related falls, and warrants investigation in a multi-faceted falls prevention programme." }, { "pmid": "25744109", "title": "Loneliness and health in Eastern Europe: findings from Moscow, Russia.", "abstract": "OBJECTIVES\nTo examine which factors are associated with feeling lonely in Moscow, Russia, and to determine whether loneliness is associated with worse health.\n\n\nSTUDY DESIGN\nCross-sectional study.\n\n\nMETHODS\nData from 1190 participants were drawn from the Moscow Health Survey. Logistic regression analysis was used to examine which factors were associated with feeling lonely and whether loneliness was linked to poor health.\n\n\nRESULTS\nAlmost 10% of the participants reported that they often felt lonely. Divorced and widowed individuals were significantly more likely to feel lonely, while not living alone and having greater social support reduced the risk of loneliness. Participants who felt lonely were more likely to have poor self-rated health (odds ratio [OR]: 2.28; 95% confidence interval [CI]: 1.38-3.76), and have suffered from insomnia (OR: 2.43; CI: 1.56-3.77) and mental ill health (OR: 2.93; CI: 1.88-4.56).\n\n\nCONCLUSIONS\nFeeling lonely is linked to poorer health in Moscow. 
More research is now needed on loneliness and the way it affects health in Eastern Europe, so that appropriate interventions can be designed and implemented to reduce loneliness and its harmful impact on population well-being in this setting." }, { "pmid": "24612748", "title": "Non-face-to-face physical activity interventions in older adults: a systematic review.", "abstract": "Physical activity is effective in preventing chronic diseases, increasing quality of life and promoting general health in older adults, but most older adults are not sufficiently active to gain those benefits. A novel and economically viable way to promote physical activity in older adults is through non-face-to-face interventions. These are conducted with reduced or no in-person interaction between intervention provider and program participants. The aim of this review was to summarize the scientific literature on non-face-to-face physical activity interventions targeting healthy, community dwelling older adults (≥ 50 years). A systematic search in six databases was conducted by combining multiple key words of the three main search categories \"physical activity\", \"media\" and \"older adults\". The search was restricted to English language articles published between 1st January 2000 and 31st May 2013. Reference lists of relevant articles were screened for additional publications. Seventeen articles describing sixteen non-face-to-face physical activity interventions were included in the review. All studies were conducted in developed countries, and eleven were randomized controlled trials. Sample size ranged from 31 to 2503 participants, and 13 studies included 60% or more women. Interventions were most frequently delivered via print materials and phone (n=11), compared to internet (n=3) and other media (n=2). Every intervention was theoretically framed with the Social Cognitive Theory (n=10) and the Transtheoretical Model of Behavior Change (n=6) applied mostly. Individual tailoring was reported in 15 studies. Physical activity levels were self-assessed in all studies. Fourteen studies reported significant increase in physical activity. Eight out of nine studies conducted post-intervention follow-up analysis found that physical activity was maintained over a longer time. In the six studies where intervention dose was assessed the results varied considerably. One study reported that 98% of the sample read the respective intervention newsletters, whereas another study found that only 4% of its participants visited the intervention website more than once. From this review, non-face-to-face physical activity interventions effectively promote physical activity in older adults. Future research should target diverse older adult populations in multiple regions while also exploring the potential of emerging technologies." }, { "pmid": "11818183", "title": "Effectiveness of physical activity interventions for older adults: a review.", "abstract": "OBJECTIVE\nThis review evaluates the effectiveness of physical activity interventions among older adults.\n\n\nMETHODS\nComputerized searches were performed to identify randomized controlled trials. 
Studies were included if: (1) the study population consisted of older adults (average sample population age of > or =50 years and minimum age of 40 years); (2) the intervention consisted of an exercise program or was aimed at promoting physical activity; and (3) reported on participation (i.e., adherence/compliance) or changes in level of physical activity (e.g., pre-post test measures and group comparisons).\n\n\nRESULTS\nThe 38 studies included 57 physical activity interventions. Three types of interventions were identified: home-based, group-based, and educational. In the short-term, both home-based interventions and group-based interventions achieved high rates of participation (means of 90% and 84%, respectively). Participation declined the longer the duration of the intervention. Participation in education interventions varied widely (range of 35% to 96%). Both group-based interventions and education interventions were effective in increasing physical activity levels in the short-term. Information on long-term effectiveness was either absent or showed no difference of physical activity level between the study groups.\n\n\nCONCLUSIONS\nHome-based, group-based, and educational physical activity interventions can result in increased physical activity, but changes are small and short-lived. Participation rates of home-based and group-based interventions were comparable, and both seemed to be unrelated to type or frequency of physical activity. The beneficial effect of behavioral reinforcement strategies was not evident. Comparative studies evaluating the effectiveness of diverse interventions are needed to identify the interventions most likely to succeed in the initiation and maintenance of physical activity." }, { "pmid": "28392983", "title": "Effects of online group exercises for older adults on physical, psychological and social wellbeing: a randomized pilot trial.", "abstract": "BACKGROUND\nIntervention programs to promote physical activity in older adults, either in group or home settings, have shown equivalent health outcomes but different results when considering adherence. Group-based interventions seem to achieve higher participation in the long-term. However, there are many factors that can make of group exercises a challenging setting for older adults. A major one, due to the heterogeneity of this particular population, is the difference in the level of skills. In this paper we report on the physical, psychological and social wellbeing outcomes of a technology-based intervention that enable online group exercises in older adults with different levels of skills.\n\n\nMETHODS\nA total of 37 older adults between 65 and 87 years old followed a personalized exercise program based on the OTAGO program for fall prevention, for a period of eight weeks. Participants could join online group exercises using a tablet-based application. Participants were assigned either to the Control group, representing the traditional individual home-based training program, or the Social group, representing the online group exercising. Pre- and post- measurements were taken to analyze the physical, psychological and social wellbeing outcomes.\n\n\nRESULTS\nAfter the eight-weeks training program there were improvements in both the Social and Control groups in terms of physical outcomes, given the high level of adherence of both groups. 
Considering the baseline measures, however, the results suggest that while in the Control group fitter individuals tended to adhere more to the training, this was not the case for the Social group, where the initial level had no effect on adherence. For psychological outcomes there were improvements on both groups, regardless of the application used. There was no significant difference between groups in social wellbeing outcomes, both groups seeing a decrease in loneliness despite the presence of social features in the Social group. However, online social interactions have shown to be correlated to the decrease in loneliness in the Social group.\n\n\nCONCLUSION\nThe results indicate that technology-supported online group-exercising which conceals individual differences in physical skills is effective in motivating and enabling individuals who are less fit to train as much as fitter individuals. This not only indicates the feasibility of training together despite differences in physical skills but also suggests that online exercise might reduce the effect of skills on adherence in a social context. However, results from this pilot are limited to a small sample size and therefore are not conclusive. Longer term interventions with more participants are instead recommended to assess impacts on wellbeing and behavior change." }, { "pmid": "11322678", "title": "Practical implementation of an exercise-based falls prevention programme.", "abstract": "Muscle weakness and impaired balance are risk factors underlying many falls and fall injuries experienced by older people. Fall prevention strategies have included exercise programmes that lower the risk of falling by improving strength and balance. We have developed an individually tailored, home-based, strength and balance retraining programme, which has proven successful in reducing falls and moderate fall injuries in people aged 80 years and older. Here we describe a simple assessment of strength and balance and the content and delivery of a falls prevention exercise programme." }, { "pmid": "16978493", "title": "The Rapid Assessment of Physical Activity (RAPA) among older adults.", "abstract": "INTRODUCTION\nThe Rapid Assessment of Physical Activity (RAPA) was developed to provide an easily administered and interpreted means of assessing levels of physical activity among adults older than 50 years.\n\n\nMETHODS\nA systematic review of the literature, a survey of geriatricians, focus groups, and cognitive debriefings with older adults were conducted, and an expert panel was convened. From these procedures, a nine-item questionnaire assessing strength, flexibility, and level and intensity of physical activity was developed. Among a cohort of 115 older adults (mean age, 73.3 years; age range, 51-92 years), half of whom were regular exercisers (55%), the screening performance of three short self-report physical activity questionnaires--the RAPA, the Behavioral Risk Factor Surveillance System (BRFSS) physical activity questions, and the Patient-centered Assessment and Counseling for Exercise (PACE)--was compared with the Community Healthy Activities Model Program for Seniors (CHAMPS) as the criterion.\n\n\nRESULTS\nCompared with the BRFSS and the PACE, the RAPA was more positively correlated with the CHAMPS moderate caloric expenditure (r = 0.54 for RAPA, r = 0.40 for BRFSS, and r = 0.44 for PACE) and showed as good or better sensitivity (81%), positive predictive value (77%), and negative predictive value (75%) as the other tools. 
Specificity, sensitivity, and positive predictive value of the questions on flexibility and strength training were in the 80% range, except for specificity of flexibility questions (62%). Mean caloric expenditure per week calculated from the CHAMPS was compared between those who did and those who did not meet minimum recommendations for moderate or vigorous physical activity based on these self-report questionnaires. The RAPA outperformed the PACE and the BRFSS.\n\n\nCONCLUSION\nThe RAPA is an easy-to-use, valid measure of physical activity for use in clinical practice with older adults." }, { "pmid": "16367493", "title": "The Satisfaction With Life Scale.", "abstract": "This article reports the development and validation of a scale to measure global life satisfaction, the Satisfaction With Life Scale (SWLS). Among the various components of subjective well-being, the SWLS is narrowly focused to assess global life satisfaction and does not tap related constructs such as positive affect or loneliness. The SWLS is shown to have favorable psychometric properties, including high internal consistency and high temporal reliability. Scores on the SWLS correlate moderately to highly with other measures of subjective well-being, and correlate predictably with specific personality characteristics. It is noted that the SWLS is Suited for use with different age groups, and other potential uses of the scale are discussed." }, { "pmid": "2035047", "title": "The MOS social support survey.", "abstract": "This paper describes the development and evaluation of a brief, multidimensional, self-administered, social support survey that was developed for patients in the Medical Outcomes Study (MOS), a two-year study of patients with chronic conditions. This survey was designed to be comprehensive in terms of recent thinking about the various dimensions of social support. In addition, it was designed to be distinct from other related measures. We present a summary of the major conceptual issues considered when choosing items for the social support battery, describe the items, and present findings based on data from 2987 patients (ages 18 and older). Multitrait scaling analyses supported the dimensionality of four functional support scales (emotional/informational, tangible, affectionate, and positive social interaction) and the construction of an overall functional social support index. These support measures are distinct from structural measures of social support and from related health measures. They are reliable (all Alphas greater than 0.91), and are fairly stable over time. Selected construct validity hypotheses were supported." }, { "pmid": "22818947", "title": "The eight-item modified Medical Outcomes Study Social Support Survey: psychometric evaluation showed excellent performance.", "abstract": "OBJECTIVE\nEvaluation and validation of the psychometric properties of the eight-item modified Medical Outcomes Study Social Support Survey (mMOS-SS).\n\n\nSTUDY DESIGN AND SETTING\nSecondary analyses of data from three populations: Boston breast cancer study (N=660), Los Angeles breast cancer study (N=864), and Medical Outcomes Study (N=1,717). The psychometric evaluation of the eight-item mMOS-SS compared performance across populations and with the original 19-item Medical Outcomes Study Social Support Survey (MOS-SS). 
Internal reliability, factor structure, construct validity, and discriminant validity were evaluated using Cronbach's alpha, principal factor analysis (PFA), and confirmatory factor analysis (CFA), Spearman and Pearson correlation, t-test and Wilcoxon rank sum tests.\n\n\nRESULTS\nmMOS-SS internal reliability was excellent in all three populations. PFA factor loadings were similar across populations; one factor >0.6, well-discriminated two factor (instrumental/emotional social support four items each) >0.5. CFA with a priori two-factor structure yielded consistently adequate model fit (root mean squared errors of approximation 0.054-0.074). mMOS-SS construct and discriminant validity were similar across populations and comparable to MOS-SS. Psychometric properties held when restricted to women aged ≥ 65 years.\n\n\nCONCLUSION\nThe psychometric properties of the eight-item mMOS-SS were excellent and similar to those of the original 19-item instrument. Results support the use of briefer mMOS-SS instrument; better suited to multidimensional geriatric assessments and specifically in older women with breast cancer." }, { "pmid": "7431205", "title": "The revised UCLA Loneliness Scale: concurrent and discriminant validity evidence.", "abstract": "The development of an adequate assessment instrument is a necessary prerequisite for social psychological research on loneliness. Two studies provide methodological refinement in the measurement of loneliness. Study 1 presents a revised version of the self-report UCLA (University of California, Los Angeles) Loneliness Scale, designed to counter the possible effects of response bias in the original scale, and reports concurrent validity evidence for the revised measure. Study 2 demonstrates that although loneliness is correlated with measures of negative affect, social risk taking, and affiliative tendencies, it is nonetheless a distinct psychological experience." }, { "pmid": "18504506", "title": "A Short Scale for Measuring Loneliness in Large Surveys: Results From Two Population-Based Studies.", "abstract": "Most studies of social relationships in later life focus on the amount of social contact, not on individuals' perceptions of social isolation. However, loneliness is likely to be an important aspect of aging. A major limiting factor in studying loneliness has been the lack of a measure suitable for large-scale social surveys. This article describes a short loneliness scale developed specifically for use on a telephone survey. The scale has three items and a simplified set of response categories but appears to measure overall loneliness quite well. The authors also document the relationship between loneliness and several commonly used measures of objective social isolation. As expected, they find that objective and subjective isolation are related. However, the relationship is relatively modest, indicating that the quantitative and qualitative aspects of social relationships are distinct. This result suggests the importance of studying both dimensions of social relationships in the aging process." }, { "pmid": "10380242", "title": "A 30-s chair-stand test as a measure of lower body strength in community-residing older adults.", "abstract": "Measuring lower body strength is critical in evaluating the functional performance of older adults. The purpose of this study was to assess the test-retest reliability and the criterion-related and construct validity of a 30-s chair stand as a measure of lower body strength in adults over the age of 60 years. 
Seventy-six community-dwelling older adults (M age = 70.5 years) volunteered to participate in the study, which involved performing two 30-s chair-stand tests and two maximum leg-press tests, each conducted on separate days 2-5 days apart. Test-retest intraclass correlations of .84 for men and .92 for women, utilizing one-way analysis of variance procedures appropriate for a single trial, together with a nonsignificant change in scores from Day 1 testing to Day 2, indicate that the 30-s chair stand has good stability reliability. A moderately high correlation between chair-stand performance and maximum weight-adjusted leg-press performance for both men and women (r = .78 and .71, respectively) supports the criterion-related validity of the chair stand as a measure of lower body strength. Construct (or discriminant) validity of the chair stand was demonstrated by the test's ability to detect differences between various age and physical activity level groups. As expected, chair-stand performance decreased significantly across age groups in decades--from the 60s to the 70s to the 80s (p < .01) and was significantly lower for low-active participants than for high-active participants (p < .0001). It was concluded that the 30-s chair stand provides a reasonably reliable and valid indicator of lower body strength in generally active, community-dwelling older adults." }, { "pmid": "16472044", "title": "Physical activity and quality of life in older adults: influence of health status and self-efficacy.", "abstract": "BACKGROUND\nPhysical activity has been positively linked to quality of life (QOL) in older adults. Measures of health status and global well-being represent common methods of assessing QOL outcomes, yet little has been done to determine the nature of the relationship of these outcomes with physical activity.\n\n\nPURPOSE\nWe examined the roles played by physical activity, health status, and self-efficacy in global QOL (satisfaction with life) in a sample of older Black and White women.\n\n\nMETHOD\nParticipants (N = 249, M age = 68.12 years) completed multiple indicators of physical activity, self-efficacy, health status, and QOL at baseline of a 24-month prospective trial. Structural equation modeling examined the fit of 3 models of the physical activity and QOL relationship.\n\n\nRESULTS\nAnalyses indicated that relationships between physical activity and QOL, self-efficacy and QOL were all indirect. Specifically, physical activity influenced self-efficacy and QOL through physical and mental health status, which in turn influenced global QOL.\n\n\nCONCLUSIONS\nOur findings support a social cognitive model of physical activity's relationship with QOL. Subsequent tests of hypothesized relationships across time are recommended." }, { "pmid": "24554274", "title": "Association between physical activity and quality of life in the elderly: a systematic review, 2000-2012.", "abstract": "OBJECTIVE\nTo review information regarding the association of physical activity (PA) with quality of life (QoL) in the elderly and to identify the study designs and measurement instruments most commonly used in its assessment, in the period 2000-2012.\n\n\nMETHODS\nRelevant articles were identified by a search of four electronic databases and cross-reference lists and by contact with the authors of the included manuscripts. Original studies on the association between PA and QoL in individuals aged 60 years or older were examined. 
The quality of studies as well as the direction and the consistency of the association between PA and QoL were evaluated.\n\n\nRESULTS\nA total of 10,019 articles were identified as potentially relevant, but only 42 (0.42%) met the inclusion criteria and were retrieved and examined. Most studies demonstrated a positive association between PA and QoL in the elderly. PA had a consistent association with the following QoL domains: functional capacity; general QoL; autonomy; past, present and future activities; death and dying; intimacy; mental health; vitality; and psychological.\n\n\nCONCLUSION\nPA was positively and consistently associated with some QoL domains among older individuals, supporting the notion that promoting PA in the elderly may have an impact beyond physical health. However, the associations between PA and other QoL domains were moderate to inconsistent and require further investigation." } ]
Frontiers in Neurorobotics
29593521
PMC5859180
10.3389/fnbot.2018.00011
Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots
In this paper, we propose a hierarchical spatial concept formation method based on a Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., “I am in my home” and “I am in front of the table,” a hierarchical structure of spatial concepts is necessary for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results from a convolutional neural network (CNN), the hierarchical k-means clustering result of the self-position estimated by Monte Carlo localization (MCL), and a set of location names are used as features for the vision, position, and word modalities, respectively. Experiments on forming hierarchical spatial concepts and on evaluating how well the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. The results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to the predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved using the formed hierarchical spatial concepts.
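As a rough illustration of the multimodal features described in this abstract, the sketch below (not the authors' implementation) assembles hypothetical per-observation "documents" from CNN object labels, a two-level k-means index over estimated positions standing in for the hierarchical clustering of MCL poses, and spoken location names; such documents are the kind of input a hierarchical topic model like hMLDA could then categorize. All observation data and cluster counts are invented.

```python
# Illustrative sketch: turning the three modalities into a shared bag-of-features
# representation per observation. Data and cluster counts are made up.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical observations: (CNN labels, (x, y) self-position, spoken names)
observations = [
    (["tv", "sofa"],    (1.0, 2.0), ["living", "room"]),
    (["sofa", "table"], (1.2, 2.1), ["living", "room"]),
    (["sink", "cup"],   (5.0, 0.5), ["kitchen"]),
    (["fridge", "cup"], (5.2, 0.4), ["kitchen"]),
]

positions = np.array([pos for _, pos, _ in observations])

# Two-level ("hierarchical") k-means over positions: coarse regions first,
# then finer places inside each region.
coarse = KMeans(n_clusters=2, n_init=10, random_state=0).fit(positions)
fine_labels = np.empty(len(positions), dtype=int)
for region in range(2):
    idx = np.where(coarse.labels_ == region)[0]
    sub = KMeans(n_clusters=min(2, len(idx)), n_init=10, random_state=0).fit(positions[idx])
    fine_labels[idx] = region * 10 + sub.labels_  # encode region and sub-cluster

# Each observation becomes a multimodal "document" of discrete features,
# which a hierarchical topic model such as hMLDA could then categorize.
for (objects, _, words), region, place in zip(observations, coarse.labels_, fine_labels):
    doc = Counter(f"vision:{o}" for o in objects)
    doc.update({f"position:region{region}": 1, f"position:place{place}": 1})
    doc.update(f"word:{w}" for w in words)
    print(dict(doc))
```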
2. Related works: In order for a robot to move within a space, a metric map consisting of occupancy grids that encode whether or not an area is navigable is generally used. Simultaneous localization and mapping (SLAM) (Durrant-Whyte and Bailey, 2006) is a well-known localization method for mobile robots. However, tasks that are coordinated with a user cannot be performed using only a metric map, since semantic information is required for interaction with the user. Nielsen et al. (2004) proposed a method of expanding a metric map into a semantic map by attaching single-frame snapshots in order to share spatial information between a user and a robot. As a bridge between metric maps and human-robot interaction, research on semantic maps that attach semantic attributes (such as object recognition results) to metric maps has been performed (Pronobis et al., 2006; Ranganathan and Dellaert, 2007). Studies have also reported giving semantic object annotations to 3D point cloud data (Rusu et al., 2008, 2009). Moreover, among studies based on multiple cues, Espinace et al. (2013) proposed a method of characterizing places according to low-level visual features associated with objects. Although these approaches can categorize spaces based on semantic information, they do not deal with linguistic information about the names that represent spaces. In the field of navigation tasks with human-robot interaction, methods of classifying corridors and rooms using a predefined ontology based on shape and image features have been proposed (Zender et al., 2008; Pronobis and Jensfelt, 2012). In studies on semantic space categorization, Kostavelis and Gasteratos (2013) proposed a method of generating a 3D metric map that is semantically categorized by recognizing places using bag-of-features and support vector machines. Granda et al. (2010) performed spatial labeling and region segmentation by applying a Gaussian model to the SLAM module of the Robot Operating System (ROS). Mozos and Burgard (2006) proposed a method of classifying metric maps into semantic classes using AdaBoost as a supervised learning method. Galindo et al. (2008) utilized semantic maps and predefined hierarchical spatial information for robot task planning. Although these approaches were able to ground several predefined names to spaces, learning location names through human-robot communication in a bottom-up manner has not been achieved. Many studies have been conducted on spatial concept formation based on multimodal information observed in individual environments (Hagiwara et al., 2016; Heath et al., 2016; Rangel et al., 2017). Spatial concepts are formed in a bottom-up manner from multimodal observations and allow predictions across modalities. This makes it possible to estimate, in a probabilistic way, the linguistic information representing a space from position and image information. Gu et al. (2016) proposed a method of learning relative space categories from ambiguous instructions. Taniguchi et al. (2014, 2016) proposed computational models that allow a mobile robot to acquire spatial concepts based on information from recognized speech and estimated self-location. Here, a spatial concept was defined as the distributions of names and positions at each place. The method enables a robot to predict a positional distribution from recognized human speech through the formed spatial concepts. Ishibushi et al. (2015) proposed a method of learning the spatial regions of each place by stochastically integrating image recognition results and estimated self-positions. In these studies, it was possible to form spatial concepts conforming to human perception, such as an entrance or a corridor, by inferring the parameters of the model. However, these studies did not focus on the hierarchical structure of spatial concepts. In particular, the features of the higher layer, such as the living space, are included in the features of the lower layer, such as the front of the television, and it was difficult to form spatial concepts at the abstract layer. Furthermore, the ability to understand and describe a place linguistically at different layers is an important function for robots that provide services through linguistic communication with humans. Despite the importance of the hierarchical structure of spatial concepts, a method that enables such concept formation has not been proposed in previous studies. We propose a method that forms hierarchical spatial concepts in a bottom-up manner from multimodal information and demonstrate the effectiveness of the formed spatial concepts in predicting location names and positions.
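To make the cross-modal prediction mentioned above concrete, here is a simplified, hypothetical sketch of predicting a location name from a position when a spatial concept is modeled as a name distribution paired with a Gaussian position distribution. The parameters are invented and this is not the model of any of the cited papers; it only illustrates the Bayes-rule style of inference that such spatial concepts support.

```python
# Simplified illustration (made-up parameters) of p(name | position) when each
# spatial concept holds a name distribution and a Gaussian position distribution.
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical learned concepts: prior, name distribution, position Gaussian.
concepts = [
    {"prior": 0.5,
     "names": {"kitchen": 0.8, "home": 0.2},
     "mean": np.array([5.0, 0.5]), "cov": np.eye(2) * 0.3},
    {"prior": 0.5,
     "names": {"living room": 0.7, "home": 0.3},
     "mean": np.array([1.0, 2.0]), "cov": np.eye(2) * 0.5},
]

def name_given_position(position, concepts):
    """Return p(name | position), marginalizing over the spatial concepts."""
    scores = {}
    for c in concepts:
        # p(position | concept) * p(concept)
        weight = multivariate_normal.pdf(position, mean=c["mean"], cov=c["cov"]) * c["prior"]
        for name, p_name in c["names"].items():
            scores[name] = scores.get(name, 0.0) + weight * p_name
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

print(name_given_position(np.array([5.1, 0.6]), concepts))
# A position near the first concept's mean yields a high probability for "kitchen".
```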
[]
[]
Frontiers in Neurorobotics
29615888
PMC5868127
10.3389/fnbot.2018.00007
Learning Semantics of Gestural Instructions for Human-Robot Collaboration
Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities such as detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context, we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is able to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically driven approach. As a proof of concept, we focus on a table assembly task in which the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users, comparing a proactive robot with a reactive one that waits for instructions.
1.2. Related work: In the field of human-robot collaboration, numerous studies (Bandera et al., 2012; Rozo et al., 2016) have demonstrated the advantage of active, human-in-the-loop interaction. The framework proposed by Lenz et al. (2008) allows the joint action of humans and robots in an assembly task. Their system can anticipate human behavior based on a learnt sequence, ensuring smooth collaboration. An active-learning architecture proposed by Myagmarjav and Sridharan (2015) can be trained with limited knowledge of the task. It enables the robot to ask task-relevant questions to acquire information from the user. The work by Chao et al. (2010) belongs to the same category. Nevertheless, a pertinent difference from these methods is that PIL does not require an explicit training phase. Based on Q-learning (Watkins and Dayan, 1992), Thomaz and Breazeal (2006) introduced the Interactive Reinforcement Learning (IRL) method. In this approach, the user is able to provide positive and negative rewards during training in response to the robot's manipulation actions. The authors demonstrated that learning from human-generated rewards can be fast compared with classical reinforcement learning. Along these lines, in the work by Suay and Chernova (2011), the user provides guidance signals to constrain the robot's exploration toward a limited set of actions. Here, the user provides feedback for every action. Najar et al. (2016) proposed a similar IRL method that learns the meaning of the guidance signals by using evaluative feedback instead of task rewards. Recent work by Rozo et al. (2016) builds on the same concepts. In contrast to our approach, these IRL methods do not incorporate proactive robot behavior. Our framework is along the lines of the work discussed below in the sense that it provides the robot with proactive behavior in a collaboration task. Huang and Mutlu (2016) presented an anticipatory control method that enables the robot to proactively perform a pick-and-place task based on the anticipated actions of its human partner. Their anticipation module is trained using eye-tracking glasses that track the gaze of the user. The authors showed that anticipatory control responded to the user significantly faster than a reactive control method that does not anticipate the user's intent. Hawkins et al. (2014) constructed a probabilistic graphical model to anticipate human actions. In their work, users wore brightly colored surgical gloves while giving instructions to the robot. The attentional behavior-based system of Caccavale and Finzi (2017) uses a hierarchical architecture; it recognizes human activities and intentions in a simulated environment to pick and place objects. All three approaches require prior learning of the task to model the anticipatory behavior. Contrary to these approaches, we use hand gestures, which are a natural, unencumbered, non-contact, and prop-free mode of interaction in a real robot environment. In our previous implementation of the framework (Shukla et al., 2017a), the robot randomly performed an action if the association between the state of the system and the action was unknown. After each action, the robot received feedback (positive or negative) from the user; if the feedback was negative, it randomly chose another action. One contribution of this work is a probabilistic action anticipation module that ranks candidate actions. The rank of an action is decided based on the probability of the action given the three attributes of the state of the system. Details of the action anticipation module are discussed in section 3.2. The action anticipation module sorts the candidate manipulation actions instead of choosing them randomly, thereby speeding up the task.
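As a rough sketch of how such probability-based ranking with incremental updates might look, consider the following; this is an illustration under assumed state attributes and action names, not the authors' exact PIL formulation.

```python
# Illustrative sketch (not the authors' exact formulation): rank candidate
# actions by an empirical estimate of p(action | state), where the state is a
# tuple of three attributes (e.g., detected gesture, object in reach, task step).
# Counts are updated incrementally from user feedback, so the ranking improves
# as the collaboration proceeds; all attribute and action names are made up.
from collections import defaultdict

class ActionAnticipator:
    def __init__(self, actions):
        self.actions = list(actions)
        # counts[state][action] = times `action` was positively confirmed in `state`
        self.counts = defaultdict(lambda: defaultdict(int))

    def rank(self, state):
        """Return actions sorted by estimated p(action | state), with add-one smoothing."""
        state_counts = self.counts[state]
        total = sum(state_counts.values()) + len(self.actions)
        scored = [(a, (state_counts[a] + 1) / total) for a in self.actions]
        return sorted(scored, key=lambda x: x[1], reverse=True)

    def update(self, state, action, positive_feedback):
        """Incrementally learn from the user's feedback on a performed action."""
        if positive_feedback:
            self.counts[state][action] += 1

anticipator = ActionAnticipator(["hand_over", "hold", "rotate"])
state = ("point_gesture", "plate_in_reach", "assembly_step_2")  # hypothetical attributes
anticipator.update(state, "hand_over", positive_feedback=True)
print(anticipator.rank(state))  # "hand_over" is now ranked first for this state
```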
[ "22563315", "26257694" ]
[ { "pmid": "22563315", "title": "I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.", "abstract": "Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times." }, { "pmid": "26257694", "title": "Using gaze patterns to predict task intent in collaboration.", "abstract": "In everyday interactions, humans naturally exhibit behavioral cues, such as gaze and head movements, that signal their intentions while interpreting the behavioral cues of others to predict their intentions. Such intention prediction enables each partner to adapt their behaviors to the intent of others, serving a critical role in joint action where parties work together to achieve a common goal. Among behavioral cues, eye gaze is particularly important in understanding a person's attention and intention. In this work, we seek to quantify how gaze patterns may indicate a person's intention. Our investigation was contextualized in a dyadic sandwich-making scenario in which a \"worker\" prepared a sandwich by adding ingredients requested by a \"customer.\" In this context, we investigated the extent to which the customers' gaze cues serve as predictors of which ingredients they intend to request. Predictive features were derived to represent characteristics of the customers' gaze patterns. We developed a support vector machine-based (SVM-based) model that achieved 76% accuracy in predicting the customers' intended requests based solely on gaze features. Moreover, the predictor made correct predictions approximately 1.8 s before the spoken request from the customer. 
We further analyzed several episodes of interactions from our data to develop a deeper understanding of the scenarios where our predictor succeeded and failed in making correct predictions. These analyses revealed additional gaze patterns that may be leveraged to improve intention prediction. This work highlights gaze cues as a significant resource for understanding human intentions and informs the design of real-time recognizers of user intention for intelligent systems, such as assistive robots and ubiquitous devices, that may enable more complex capabilities and improved user experience." } ]
BMC Medical Informatics and Decision Making
29589562
PMC5872377
10.1186/s12911-018-0593-y
Qcorp: an annotated classification corpus of Chinese health questions
BackgroundHealth question-answering (QA) systems have become a typical application scenario of Artificial Intelligence (AI). An annotated question corpus is a prerequisite for training machines to understand the health information needs of users. Thus, we aimed to develop an annotated classification corpus of Chinese health questions (Qcorp) and make it openly accessible.MethodsWe developed a two-layered classification schema and corresponding annotation rules on the basis of our previous work. Using the schema, we annotated 5000 questions that were randomly selected from 5 Chinese health websites within 6 broad sections. Eight annotators participated in the annotation task, and the inter-annotator agreement was evaluated to ensure the corpus quality. Furthermore, the distribution and relationships of the annotated tags were measured by descriptive statistics and a social network map.ResultsThe questions were annotated using 7101 tags that cover 29 topic categories in the two-layered schema. In our released corpus, the distribution of questions over the top-layer categories was: treatment, 64.22%; diagnosis, 37.14%; epidemiology, 14.96%; healthy lifestyle, 10.38%; and health provider choice, 4.54%. Both the annotated health questions and the annotation schema are openly accessible on the Qcorp website. Users can download the annotated Chinese questions in CSV, XML, and HTML formats.ConclusionsWe developed a Chinese health question corpus including 5000 manually annotated questions. It is openly accessible and would contribute to the development of intelligent health QA systems.Electronic supplementary materialThe online version of this article (10.1186/s12911-018-0593-y) contains supplementary material, which is available to authorized users.
Comparison with other related works

Compared with other work on the annotation and corpus building of health and medical questions (Table 2), this study has three main distinguishing features. First, the annotated corpus in our Qcorp database is the largest: Qcorp currently contains 5,000 annotated Chinese health questions, surpassing the 4,654 annotated English clinical questions maintained by NLM [19] and the 4,465 annotated Chinese health questions built by Zhang N [38], not to mention other small-scale corpora. Second, the sample questions in the Qcorp database were randomly selected from multiple sources. Unlike corpora drawn mainly from a single health website [31, 32, 37, 38], our corpus was randomly sampled from 5 Chinese health websites so as to improve its representativeness. Third, the corpus covers a relatively wide diversity of diseases. Other similar corpora, especially the Chinese ones, mainly focus on one specific kind of disease, such as genetic and rare diseases [32], cancer [34], maternal and infant diseases [37], or skin diseases [38]. Our corpus, in contrast, was selected from 6 broad sections (internal medicine, surgery, obstetrics & gynecology, pediatrics, infectious diseases, and traditional Chinese medicine) so as to cover as many diseases as possible. In summary, the Qcorp database is currently the largest annotated classification corpus of Chinese health questions drawn from multiple sources and covering a relatively wide range of diseases. A further strength is that the classification schema modified and applied in this study proved reliable, with an appropriate number of layers and categories.

Table 2. A comparison of works on the corpus building of health and medical questions

Corpus or author name                 | Language | Asker | Corpus scale | Question sources                      | Disease covering           | Annotated categories | Layers
NLM collected clinical questions [19] | En       | P     | 4,654        | Clinical settings (5 studies [20–25]) | Not limited                | 64                   | 4
Patrick J [30]                        | En       | P     | 595          | Clinical settings                     | Not limited                | 11                   | 4
Zhang Y [31]                          | En       | C     | 600          | 1 website                             | 23 subcategories           | >50                  | 5
Roberts K [32]                        | En       | C     | 1,467        | 1 website                             | Genetic and rare diseases  | 13                   | 1
Maroy S [34]                          | En       | C     | 1,279        | 6 websites                            | Cancer                     | 10                   | 2
Yin JW [37]                           | Cn       | C     | 1,600        | 1 health APP                          | Maternal and infant health | 8                    | 1
Zhang N [38]                          | Cn       | C     | 4,465        | 1 website, books, self-composed       | Skin disease               | 52                   | 2
Tang GY [39]                          | Cn       | C     | 1,688        | 4 websites                            | Hyperlipidemia             | 241                  | 1
Our Qcorp                             | Cn       | C     | 5,000        | 5 websites                            | 6 broad sections           | 29                   | 2

P refers to physician, and C refers to consumer.
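Since the quality claims above rest on the inter-annotator agreement reported in the abstract, here is a minimal sketch of how pairwise agreement between two annotators could be checked with Cohen's kappa, assuming scikit-learn is available; the category labels and annotator assignments are invented for illustration and do not come from the Qcorp data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical top-layer category labels assigned by two annotators
# to the same ten questions (values invented for illustration).
annotator_a = ["treatment", "diagnosis", "treatment", "epidemiology", "treatment",
               "healthy lifestyle", "diagnosis", "treatment", "diagnosis", "treatment"]
annotator_b = ["treatment", "diagnosis", "treatment", "treatment", "treatment",
               "healthy lifestyle", "diagnosis", "diagnosis", "diagnosis", "treatment"]

# Cohen's kappa corrects raw percent agreement for agreement expected by chance.
kappa = cohen_kappa_score(annotator_a, annotator_b)
raw_agreement = sum(a == b for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)
print(f"raw agreement = {raw_agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```

With eight annotators, pairwise kappas could be averaged, or a multi-rater statistic such as Fleiss' kappa could be used instead.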
[ "22874311", "28550995", "21256977", "22874183", "25592675", "26185615", "21705457", "23244628", "22874315", "7772099", "10435959", "10938054", "11711012", "14702450", "20670693", "21856442", "22142949", "25954411", "25759063", "23092060", "27147494" ]
[ { "pmid": "22874311", "title": "Online health information search: what struggles and empowers the users? Results of an online survey.", "abstract": "The most popular mean of searching for online health content is a general search engine for all domains of interest. Being general implies on one hand that the search engine is not tailored to the needs which are particular to the medical and on another hand that health domain and health-specific queries may not always return adequate and adapted results. The aim of our study was to identify difficulties and preferences in online health information search encountered by members of the general public. The survey in four languages was online from the 9th of March until the 27th of April, 2011. 385 answers were collected, representing mostly the opinions of highly educated users, mostly from France and Spain. The most important characteristics of a search engine are relevance and trustworthiness of results. The results currently retrieved do not fulfil these requirements. The ideal representation of the information will be a categorization of the results into different groups. Medical dictionaries/thesauruses, suggested relevant topics, image searches and spelling corrections are regarded as helpful tools. There is a need to work towards better customized solutions which provide users with the trustworthy information of high quality specific to his/her case in a user-friendly environment which would eventually lead to making appropriate health decisions." }, { "pmid": "28550995", "title": "Experiences, practices and barriers to accessing health information: A qualitative study.", "abstract": "BACKGROUND\nWith technology advancements making vast amounts of health information available whenever and wherever it is required, there is a growing need to understand how this information is being accessed and used.\n\n\nOBJECTIVE\nOur aim was to explore patients/public and health professionals' experiences, practices and preferences for accessing health information.\n\n\nMETHODS\nFocus groups were conducted with 35 healthcare professionals (31 nurses and 4 allied health professionals) and 14 patients/members of the public. Semi-structured interviews were conducted with 5 consultants, who were unable to attend the focus groups. Data collection took place between March and May 2013 and all data were analysed thematically.\n\n\nRESULTS\nHealth professionals and patients/members of the public reported primarily accessing health information to inform their decision making for providing and seeking treatment respectively. For all participants the internet was the primary mechanism for accessing health information, with health professionals' access affected by open access charges; time constraints and access to computers. Variation in how patients/members of the public and health professionals appraise the quality of information also emerged, with a range of techniques for assessing quality reported.\n\n\nCONCLUSIONS\nThere was a clear preference for accessing health information online within our sample. Given that this information is central to both patient and health professionals' decision making, it is essential that these individuals are basing their decisions on high quality information. Findings from this study have implications for educationalists, health professionals, policymakers and the public." 
}, { "pmid": "21256977", "title": "AskHERMES: An online question answering system for complex clinical questions.", "abstract": "OBJECTIVE\nClinical questions are often long and complex and take many forms. We have built a clinical question answering system named AskHERMES to perform robust semantic analysis on complex clinical questions and output question-focused extractive summaries as answers.\n\n\nDESIGN\nThis paper describes the system architecture and a preliminary evaluation of AskHERMES, which implements innovative approaches in question analysis, summarization, and answer presentation. Five types of resources were indexed in this system: MEDLINE abstracts, PubMed Central full-text articles, eMedicine documents, clinical guidelines and Wikipedia articles.\n\n\nMEASUREMENT\nWe compared the AskHERMES system with Google (Google and Google Scholar) and UpToDate and asked physicians to score the three systems by ease of use, quality of answer, time spent, and overall performance.\n\n\nRESULTS\nAskHERMES allows physicians to enter a question in a natural way with minimal query formulation and allows physicians to efficiently navigate among all the answer sentences to quickly meet their information needs. In contrast, physicians need to formulate queries to search for information in Google and UpToDate. The development of the AskHERMES system is still at an early stage, and the knowledge resource is limited compared with Google or UpToDate. Nevertheless, the evaluation results show that AskHERMES' performance is comparable to the other systems. In particular, when answering complex clinical questions, it demonstrates the potential to outperform both Google and UpToDate systems.\n\n\nCONCLUSIONS\nAskHERMES, available at http://www.AskHERMES.org, has the potential to help physicians practice evidence-based medicine and improve the quality of patient care." }, { "pmid": "22874183", "title": "CliniQA : highly reliable clinical question answering system.", "abstract": "Evidence-based medicine (EBM) aims to apply the best available evidences gained from scientific method to clinical decision making. From the computer science point of view, the current bottleneck of applying EBM by a decision maker (either a patient or a physician) is the time-consuming manual retrieval, appraisal, and interpretation of scientific evidences from large volume of and rapidly increasing medical research reports. Patients do not have the expertise to do it. For physicians, study has shown that they usually have insufficient time to conduct the task. CliniQA tries to shift the burden of time and expertise from the decision maker to the computer system. Given a single clinical foreground question, the CliniQA will return a highly reliable answer based on existing medical research reports. Besides this, the CliniQA will also return the analyzed information from the research report to help users appraise the medical evidences more efficiently." }, { "pmid": "25592675", "title": "Biomedical question answering using semantic relations.", "abstract": "BACKGROUND\nThe proliferation of the scientific literature in the field of biomedicine makes it difficult to keep abreast of current knowledge, even for domain experts. While general Web search engines and specialized information retrieval (IR) systems have made important strides in recent decades, the problem of accurate knowledge extraction from the biomedical literature is far from solved. 
Classical IR systems usually return a list of documents that have to be read by the user to extract relevant information. This tedious and time-consuming work can be lessened with automatic Question Answering (QA) systems, which aim to provide users with direct and precise answers to their questions. In this work we propose a novel methodology for QA based on semantic relations extracted from the biomedical literature.\n\n\nRESULTS\nWe extracted semantic relations with the SemRep natural language processing system from 122,421,765 sentences, which came from 21,014,382 MEDLINE citations (i.e., the complete MEDLINE distribution up to the end of 2012). A total of 58,879,300 semantic relation instances were extracted and organized in a relational database. The QA process is implemented as a search in this database, which is accessed through a Web-based application, called SemBT (available at http://sembt.mf.uni-lj.si ). We conducted an extensive evaluation of the proposed methodology in order to estimate the accuracy of extracting a particular semantic relation from a particular sentence. Evaluation was performed by 80 domain experts. In total 7,510 semantic relation instances belonging to 2,675 distinct relations were evaluated 12,083 times. The instances were evaluated as correct 8,228 times (68%).\n\n\nCONCLUSIONS\nIn this work we propose an innovative methodology for biomedical QA. The system is implemented as a Web-based application that is able to provide precise answers to a wide range of questions. A typical question is answered within a few seconds. The tool has some extensions that make it especially useful for interpretation of DNA microarray results." }, { "pmid": "26185615", "title": "A framework for ontology-based question answering with application to parasite immunology.", "abstract": "BACKGROUND\nLarge quantities of biomedical data are being produced at a rapid pace for a variety of organisms. With ontologies proliferating, data is increasingly being stored using the RDF data model and queried using RDF based querying languages. While existing systems facilitate the querying in various ways, the scientist must map the question in his or her mind to the interface used by the systems. The field of natural language processing has long investigated the challenges of designing natural language based retrieval systems. Recent efforts seek to bring the ability to pose natural language questions to RDF data querying systems while leveraging the associated ontologies. These analyze the input question and extract triples (subject, relationship, object), if possible, mapping them to RDF triples in the data. However, in the biomedical context, relationships between entities are not always explicit in the question and these are often complex involving many intermediate concepts.\n\n\nRESULTS\nWe present a new framework, OntoNLQA, for querying RDF data annotated using ontologies which allows posing questions in natural language. OntoNLQA offers five steps in order to answer natural language questions. In comparison to previous systems, OntoNLQA differs in how some of the methods are realized. In particular, it introduces a novel approach for discovering the sophisticated semantic associations that may exist between the key terms of a natural language question, in order to build an intuitive query and retrieve precise answers. 
We apply this framework to the context of parasite immunology data, leading to a system called AskCuebee that allows parasitologists to pose genomic, proteomic and pathway questions in natural language related to the parasite, Trypanosoma cruzi. We separately evaluate the accuracy of each component of OntoNLQA as implemented in AskCuebee and the accuracy of the whole system. AskCuebee answers 68 % of the questions in a corpus of 125 questions, and 60 % of the questions in a new previously unseen corpus. If we allow simple corrections by the scientists, this proportion increases to 92 %.\n\n\nCONCLUSIONS\nWe introduce a novel framework for question answering and apply it to parasite immunology data. Evaluations of translating the questions to RDF triple queries by combining machine learning, lexical similarity matching with ontology classes, properties and instances for specificity, and discovering associations between them demonstrate that the approach performs well and improves on previous systems. Subsequently, OntoNLQA offers a viable framework for building question answering systems in other biomedical domains." }, { "pmid": "21705457", "title": "Towards spoken clinical-question answering: evaluating and adapting automatic speech-recognition systems for spoken clinical questions.", "abstract": "OBJECTIVE\nTo evaluate existing automatic speech-recognition (ASR) systems to measure their performance in interpreting spoken clinical questions and to adapt one ASR system to improve its performance on this task.\n\n\nDESIGN AND MEASUREMENTS\nThe authors evaluated two well-known ASR systems on spoken clinical questions: Nuance Dragon (both generic and medical versions: Nuance Gen and Nuance Med) and the SRI Decipher (the generic version SRI Gen). The authors also explored language model adaptation using more than 4000 clinical questions to improve the SRI system's performance, and profile training to improve the performance of the Nuance Med system. The authors reported the results with the NIST standard word error rate (WER) and further analyzed error patterns at the semantic level.\n\n\nRESULTS\nNuance Gen and Med systems resulted in a WER of 68.1% and 67.4% respectively. The SRI Gen system performed better, attaining a WER of 41.5%. After domain adaptation with a language model, the performance of the SRI system improved 36% to a final WER of 26.7%.\n\n\nCONCLUSION\nWithout modification, two well-known ASR systems do not perform well in interpreting spoken clinical questions. With a simple domain adaptation, one of the ASR systems improved significantly on the clinical question task, indicating the importance of developing domain/genre-specific ASR systems." }, { "pmid": "23244628", "title": "Usability survey of biomedical question answering systems.", "abstract": "We live in an age of access to more information than ever before. This can be a double-edged sword. Increased access to information allows for more informed and empowered researchers, while information overload becomes an increasingly serious risk. Thus, there is a need for intelligent information retrieval systems that can summarize relevant and reliable textual sources to satisfy a user's query. Question answering is a specialized type of information retrieval with the aim of returning precise short answers to queries posed as natural language questions. 
We present a review and comparison of three biomedical question answering systems: askHERMES (http://www.askhermes.org/), EAGLi (http://eagl.unige.ch/EAGLi/), and HONQA (http://services.hon.ch/cgi-bin/QA10/qa.pl)." }, { "pmid": "22874315", "title": "Trustworthiness and relevance in web-based clinical question answering.", "abstract": "Question answering systems try to give precise answers to a user's question posed in natural language. It is of utmost importance that the answers returned are relevant to the user's question. For clinical QA, the trustworthiness of answers is another important issue. Limiting the document collection to certified websites helps to improve the trustworthiness of answers. On the other hand, limited document collections are known to harm the relevancy of answers. We show, however, in a comparative evaluation, that promoting trustworthiness has no negative effect on the relevance of the retrieved answers in our clinical QA system. On the contrary, the answers found are in general more relevant." }, { "pmid": "7772099", "title": "Can primary care physicians' questions be answered using the medical journal literature?", "abstract": "Medical librarians and informatics professionals believe the medical journal literature can be useful in clinical practice, but evidence suggests that practicing physicians do not share this belief. The authors designed a study to determine whether a random sample of \"native\" questions asked by primary care practitioners could be answered using the journal literature. Participants included forty-nine active, nonacademic primary care physicians providing ambulatory care in rural and nonrural Oregon, and seven medical librarians. The study was conducted in three stages: (1) office interviews with physicians to record clinical questions; (2) online searches to locate answers to selected questions; and (3) clinician feedback regarding the relevance and usefulness of the information retrieved. Of 295 questions recorded during forty-nine interviews, 60 questions were selected at random for searches. The average total time spent searching for and selecting articles for each question was forty-three minutes. The average cost per question searched was $27.37. Clinician feedback was received for 48 of 56 questions (four physicians could not be located, so their questions were not used in tabulating the results). For 28 questions (56%), clinicians judged the material relevant; for 22 questions (46%) the information provided a \"clear answer\" to their question. They expected the information would have had an impact on their patient in nineteen (40%) cases, and an impact on themselves or their practice in twenty-four (51%) cases. If the results can be generalized, and if the time and cost of performing searches can be reduced, increased use of the journal literature could significantly improve the extent to which primary care physicians' information needs are met." }, { "pmid": "10435959", "title": "Analysis of questions asked by family doctors regarding patient care.", "abstract": "OBJECTIVES\nTo characterise the information needs of family doctors by collecting the questions they asked about patient care during consultations and to classify these in ways that would be useful to developers of knowledge bases.\n\n\nDESIGN\nObservational study in which investigators visited doctors for two half days and collected their questions. 
Taxonomies were developed to characterise the clinical topic and generic type of information sought for each question.\n\n\nSETTING\nEastern Iowa.\n\n\nPARTICIPANTS\nRandom sample of 103 family doctors.\n\n\nMAIN OUTCOME MEASURES\nNumber of questions posed, pursued, and answered; topic and generic type of information sought for each question; time spent pursuing answers; information resources used.\n\n\nRESULTS\nParticipants asked a total of 1101 questions. Questions about drug prescribing, obstetrics and gynaecology, and adult infectious disease were most common and comprised 36% of all questions. The taxonomy of generic questions included 69 categories; the three most common types, comprising 24% of all questions, were \"What is the cause of symptom X?\" \"What is the dose of drug X?\" and \"How should I manage disease or finding X?\" Answers to most questions (702, 64%) were not immediately pursued, but, of those pursued, most (318, 80%) were answered. Doctors spent an average of less than 2 minutes pursuing an answer, and they used readily available print and human resources. Only two questions led to a formal literature search.\n\n\nCONCLUSIONS\nFamily doctors in this study did not pursue answers to most of their questions. Questions about patient care can be organised into a limited number of generic types, which could help guide the efforts of knowledge base developers." }, { "pmid": "10938054", "title": "A taxonomy of generic clinical questions: classification study.", "abstract": "OBJECTIVE\nTo develop a taxonomy of doctors' questions about patient care that could be used to help answer such questions.\n\n\nDESIGN\nUse of 295 questions asked by Oregon primary care doctors to modify previously developed taxonomy of 1101 clinical questions asked by Iowa family doctors.\n\n\nSETTING\nPrimary care practices in Iowa and Oregon.\n\n\nPARTICIPANTS\nRandom samples of 103 Iowa family doctors and 49 Oregon primary care doctors.\n\n\nMAIN OUTCOME MEASURES\nConsensus among seven investigators on a meaningful taxonomy of generic questions; interrater reliability among 11 individuals who used the taxonomy to classify a random sample of 100 questions: 50 from Iowa and 50 from Oregon.\n\n\nRESULTS\nThe revised taxonomy, which comprised 64 generic question types, was used to classify 1396 clinical questions. The three commonest generic types were \"What is the drug of choice for condition x?\" (150 questions, 11%); \"What is the cause of symptom x?\" (115 questions, 8%); and \"What test is indicated in situation x?\" (112 questions, 8%). The mean interrater reliability among 11 coders was moderate (kappa=0.53, agreement 55%).\n\n\nCONCLUSIONS\nClinical questions in primary care can be categorised into a limited number of generic types. A moderate degree of interrater reliability was achieved with the taxonomy developed in this study. The taxonomy may enhance our understanding of doctors' information needs and improve our ability to meet those needs." }, { "pmid": "11711012", "title": "Answering family physicians' clinical questions using electronic medical databases.", "abstract": "OBJECTIVE\nWe studied the ability of electronic medical databases to provide adequate answers to the clinical questions of family physicians.\n\n\nSTUDY DESIGN\nTwo family physicians attempted to answer 20 questions with each of the databases evaluated. 
The adequacy of the answers was determined by the 2 physician searchers, and an arbitration panel of 3 family physicians was used if there was disagreement.\n\n\nDATA SOURCE\nWe identified 38 databases through nominations from national groups of family physicians, medical informaticians, and medical librarians; 14 met predetermined eligibility criteria.\n\n\nOUTCOMES MEASURED\nThe primary outcome was the proportion of questions adequately answered by each database and by combinations of databases. We also measured mean and median times to obtain adequate answers for individual databases.\n\n\nRESULTS\nThe agreement between family physician searchers regarding the adequacy of answers was excellent (k=0.94). Five individual databases (STAT!Ref, MDConsult, DynaMed, MAXX, and MDChoice.com) answered at least half of the clinical questions. Some combinations of databases answered 75% or more. The average time to obtain an adequate answer ranged from 2.4 to 6.5 minutes.\n\n\nCONCLUSION\nSeveral current electronic medical databases could answer most of a group of 20 clinical questions derived from family physicians during office practice. However, point-of-care searching is not yet fast enough to address most clinical questions identified during routine clinical practice." }, { "pmid": "14702450", "title": "An evaluation of information-seeking behaviors of general pediatricians.", "abstract": "OBJECTIVE\nUsage of computer resources at the point of care has a positive effect on physician decision making. Pediatricians' information-seeking behaviors are not well characterized. The goal of this study was to characterize quantitatively the information-seeking behaviors of general pediatricians and specifically compare their use of computers, including digital libraries, before and after an educational intervention.\n\n\nMETHODS\nGeneral pediatric residents and faculty at a US Midwest children's hospital participated. A control (year 1) versus intervention group (year 2) research design was implemented. Eligible pediatrician pools overlapped, such that some participated first in the control group and later as part of the intervention. The intervention group received a 10-minute individual training session and handout on how to use a pediatric digital library to answer professional questions. A general medical digital library was also available. Pediatricians in both the control and the intervention groups were surveyed using the critical incident technique during 2 6-month time periods. Both groups were telephoned for 1- to 2-minute interviews and were asked, \"What pediatric question(s) did you have that you needed additional information to answer?\" The main outcome measures were the differences between the proportion of pediatricians who use computers and digital libraries and a comparison of the number of times that pediatricians use these resources before and after intervention.\n\n\nRESULTS\nA total of 58 pediatricians were eligible, and 52 participated (89.6%). Participant demographics between control (N = 41; 89.1%) and intervention (N = 31; 70.4%) were not statistically different. Twenty pediatricians were in both groups. Pediatricians were slightly less likely to pursue answers after the intervention (94.7% vs 89.2%); the primary reason cited for both groups was a lack of time. The pediatricians were as successful in finding answers in each group (95.7% vs 92.7%), but the intervention group took significantly less time (8.3 minutes vs 19.6 minutes). 
After the intervention, pediatricians used computers and digital libraries more to answer their questions and spent less time using them.\n\n\nCONCLUSION\nThis study showed higher rates of physician questions pursued and answered and higher rates of computer use at baseline and after intervention compared with previous studies. Pediatricians who seek answers at the point of care therefore should begin to shift their information-seeking behaviors toward computer resources, as they are as effective but more time-efficient." }, { "pmid": "20670693", "title": "Automatically extracting information needs from complex clinical questions.", "abstract": "OBJECTIVE\nClinicians pose complex clinical questions when seeing patients, and identifying the answers to those questions in a timely manner helps improve the quality of patient care. We report here on two natural language processing models, namely, automatic topic assignment and keyword identification, that together automatically and effectively extract information needs from ad hoc clinical questions. Our study is motivated in the context of developing the larger clinical question answering system AskHERMES (Help clinicians to Extract and aRrticulate Multimedia information for answering clinical quEstionS).\n\n\nDESIGN AND MEASUREMENTS\nWe developed supervised machine-learning systems to automatically assign predefined general categories (e.g. etiology, procedure, and diagnosis) to a question. We also explored both supervised and unsupervised systems to automatically identify keywords that capture the main content of the question.\n\n\nRESULTS\nWe evaluated our systems on 4654 annotated clinical questions that were collected in practice. We achieved an F1 score of 76.0% for the task of general topic classification and 58.0% for keyword extraction. Our systems have been implemented into the larger question answering system AskHERMES. Our error analyses suggested that inconsistent annotation in our training data have hurt both question analysis tasks.\n\n\nCONCLUSION\nOur systems, available at http://www.askhermes.org, can automatically extract information needs from both short (the number of word tokens <20) and long questions (the number of word tokens >20), and from both well-structured and ill-formed questions. We speculate that the performance of general topic classification and keyword extraction can be further improved if consistently annotated data are made available." }, { "pmid": "21856442", "title": "Toward automated consumer question answering: automatically separating consumer questions from professional questions in the healthcare domain.", "abstract": "OBJECTIVE\nBoth healthcare professionals and healthcare consumers have information needs that can be met through the use of computers, specifically via medical question answering systems. However, the information needs of both groups are different in terms of literacy levels and technical expertise, and an effective question answering system must be able to account for these differences if it is to formulate the most relevant responses for users from each group. In this paper, we propose that a first step toward answering the queries of different users is automatically classifying questions according to whether they were asked by healthcare professionals or consumers.\n\n\nDESIGN\nWe obtained two sets of consumer questions (~10,000 questions in total) from Yahoo answers. 
The professional questions consist of two question collections: 4654 point-of-care questions (denoted as PointCare) obtained from interviews of a group of family doctors following patient visits and 5378 questions from physician practices through professional online services (denoted as OnlinePractice). With more than 20,000 questions combined, we developed supervised machine-learning models for automatic classification between consumer questions and professional questions. To evaluate the robustness of our models, we tested the model that was trained on the Consumer-PointCare dataset on the Consumer-OnlinePractice dataset. We evaluated both linguistic features and statistical features and examined how the characteristics in two different types of professional questions (PointCare vs. OnlinePractice) may affect the classification performance. We explored information gain for feature reduction and the back-off linguistic category features.\n\n\nRESULTS\nThe 10-fold cross-validation results showed the best F1-measure of 0.936 and 0.946 on Consumer-PointCare and Consumer-OnlinePractice respectively, and the best F1-measure of 0.891 when testing the Consumer-PointCare model on the Consumer-OnlinePractice dataset.\n\n\nCONCLUSION\nHealthcare consumer questions posted at Yahoo online communities can be reliably classified from professional questions posted by point-of-care clinicians and online physicians. The supervised machine-learning models are robust for this task. Our study will significantly benefit further development in automated consumer question answering." }, { "pmid": "22142949", "title": "An ontology for clinical questions about the contents of patient notes.", "abstract": "OBJECTIVE\nMany studies have been completed on question classification in the open domain, however only limited work focuses on the medical domain. As well, to the best of our knowledge, most of these medical question classifications were designed for literature based question and answering systems. This paper focuses on a new direction, which is to design a novel question processing and classification model for answering clinical questions applied to electronic patient notes.\n\n\nMETHODS\nThere are four main steps in the work. Firstly, a relatively large set of clinical questions was collected from staff in an Intensive Care Unit. Then, a clinical question taxonomy was designed for question and answering purposes. Subsequently an annotation guideline was created and used to annotate the question set. Finally, a multilayer classification model was built to classify the clinical questions.\n\n\nRESULTS\nThrough the initial classification experiments, we realized that the general features cannot contribute to high performance of a minimum classifier (a small data set with multiple classes). Thus, an automatic knowledge discovery and knowledge reuse process was designed to boost the performance by extracting and expanding the specific features of the questions. In the evaluation, the results show around 90% accuracy can be achieved in the answerable subclass classification and generic question templates classification. On the other hand, the machine learning method does not perform well at identifying the category of unanswerable questions, due to the asymmetric distribution.\n\n\nCONCLUSIONS\nIn this paper, a comprehensive study on clinical questions has been completed. A major outcome of this work is the multilayer classification model. 
It serves as a major component of a patient records based clinical question and answering system as our studies continue. As well, the question collections can be reused by the research community to improve the efficiency of their own question and answering systems." }, { "pmid": "25954411", "title": "Automatically classifying question types for consumer health questions.", "abstract": "We present a method for automatically classifying consumer health questions. Our thirteen question types are designed to aid in the automatic retrieval of medical answers from consumer health resources. To our knowledge, this is the first machine learning-based method specifically for classifying consumer health questions. We demonstrate how previous approaches to medical question classification are insufficient to achieve high accuracy on this task. Additionally, we describe, manually annotate, and automatically classify three important question elements that improve question classification over previous techniques. Our results and analysis illustrate the difficulty of the task and the future directions that are necessary to achieve high-performing consumer health question classification." }, { "pmid": "25759063", "title": "Toward automated classification of consumers' cancer-related questions with a new taxonomy of expected answer types.", "abstract": "This article examines methods for automated question classification applied to cancer-related questions that people have asked on the web. This work is part of a broader effort to provide automated question answering for health education. We created a new corpus of consumer-health questions related to cancer and a new taxonomy for those questions. We then compared the effectiveness of different statistical methods for developing classifiers, including weighted classification and resampling. Basic methods for building classifiers were limited by the high variability in the natural distribution of questions and typical refinement approaches of feature selection and merging categories achieved only small improvements to classifier accuracy. Best performance was achieved using weighted classification and resampling methods, the latter yielding an accuracy of F1 = 0.963. Thus, it would appear that statistical classifiers can be trained on natural data, but only if natural distributions of classes are smoothed. Such classifiers would be useful for automated question answering, for enriching web-based content, or assisting clinical professionals to answer questions." }, { "pmid": "23092060", "title": "Interrater reliability: the kappa statistic.", "abstract": "The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued use of percent agreement due to its inability to account for chance agreement. He introduced the Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. 
Like most correlation statistics, the kappa can range from -1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested." }, { "pmid": "27147494", "title": "Interactive use of online health resources: a comparison of consumer and professional questions.", "abstract": "OBJECTIVE\nTo understand how consumer questions on online resources differ from questions asked by professionals, and how such consumer questions differ across resources.\n\n\nMATERIALS AND METHODS\nTen online question corpora, 5 consumer and 5 professional, with a combined total of over 40 000 questions, were analyzed using a variety of natural language processing techniques. These techniques analyze questions at the lexical, syntactic, and semantic levels, exposing differences in both form and content.\n\n\nRESULTS\nConsumer questions tend to be longer than professional questions, more closely resemble open-domain language, and focus far more on medical problems. Consumers ask more sub-questions, provide far more background information, and ask different types of questions than professionals. Furthermore, there is substantial variance of these factors between the different consumer corpora.\n\n\nDISCUSSION\nThe form of consumer questions is highly dependent upon the individual online resource, especially in the amount of background information provided. Professionals, on the other hand, provide very little background information and often ask much shorter questions. The content of consumer questions is also highly dependent upon the resource. While professional questions commonly discuss treatments and tests, consumer questions focus disproportionately on symptoms and diseases. Further, consumers place far more emphasis on certain types of health problems (eg, sexual health).\n\n\nCONCLUSION\nWebsites for consumers to submit health questions are a popular online resource filling important gaps in consumer health information. By analyzing how consumers write questions on these resources, we can better understand these gaps and create solutions for improving information access.This article is part of the Special Focus on Person-Generated Health and Wellness Data, which published in the May 2016 issue, Volume 23, Issue 3." } ]
BMC Medical Informatics and Decision Making
29589569
PMC5872501
10.1186/s12911-018-0594-x
A bibliometric analysis of natural language processing in medical research
BackgroundNatural language processing (NLP) has come to play an increasingly significant role in advancing medicine, and a rich body of research on NLP methods and applications for medical information processing is available. It is of great significance to conduct a deep analysis to understand the recent development of the NLP-empowered medical research field. However, few studies examining the research status of this field could be found. Therefore, this study aims to quantitatively assess the academic output of NLP in the medical research field.MethodsWe conducted a bibliometric analysis of NLP-empowered medical research publications retrieved from PubMed for the period 2007–2016. The analysis focused on three aspects. Firstly, the literature distribution characteristics were obtained with a statistical analysis method. Secondly, a network analysis method was used to reveal scientific collaboration relations. Finally, thematic discovery and evolution were examined using an affinity propagation clustering method.ResultsThere were 1405 NLP-empowered medical research publications published during the 10 years, with an average annual growth rate of 18.39%. The 10 most productive publication sources together contributed more than 50% of the total publications. The USA had the highest number of publications. A moderately significant correlation between a country's publications and its GDP per capita was revealed. Denny, Joshua C was the most productive author. Mayo Clinic was the most productive affiliation. The annual co-affiliation and co-country rates reached 64.04% and 15.79% in 2016, respectively. 10 main thematic areas were identified, including Computational biology, Terminology mining, Information extraction, Text classification, Social media as data source, Information retrieval, etc.ConclusionsA bibliometric analysis of NLP-empowered medical research publications uncovering the recent research status is presented. The results can assist relevant researchers, especially newcomers, in understanding the research development systematically, seeking scientific cooperation partners, optimizing research topic choices, and monitoring new scientific or technological activities.
Related work

Applications of bibliometrics are numerous. Many studies have focused on evaluating the statistical characteristics of publications, considering elements such as publication counts, influential journals, productive authors, affiliations, and countries. Based on two separate databases, Web of Science (WoS) and Google Scholar, Diem and Stefan [28] investigated the fitness-for-purpose of bibliometric indicators for measuring the research performance of individual researchers in the education sciences in Switzerland. Their results indicated that indicators of research performance such as publication quantity and citation impact measures were highly positively correlated. Fan et al. [29] conducted a bibliometric study evaluating the quantity and quality of Chinese publications on burns at both the international and domestic levels on the basis of PubMed records from 1985 to 2014. Similar work has also been conducted on medical research output. Venable et al. [30] examined whether a correlation existed between bibliometric indices and National Institutes of Health (NIH) funding data among academic neurosurgeons; their work revealed that bibliometric indices were higher among neurosurgeons with NIH funding, but only the contemporary h-index was shown to be predictive of NIH funding. By examining the growth of the published literature on diabetes in three countries (Nigeria, Argentina, and Thailand), Harande and Alhaji [31] showed that the literature on the disease grew and spread very widely. Through a bibliometric analysis of tuberculosis research output, Ramos [32] found that countries with more estimated cases of tuberculosis produced less research than industrialized countries. In addition, bibliometric analyses of research publications related to cancer [33], eye disease [34], obesity [35], dental traumatology [36], etc., can also be found. Bibliometric analysis of publication statistical characteristics is also available for specific journals, e.g., the Journal of Intellectual Property Rights [37] and The Electronic Library [38].

Studies on collaboration relationships among authors, affiliations, or countries are also common. Based on research covering biomedicine, physics, and mathematics, Newman [39] compared scientific co-authorship patterns using network analysis. Radev et al. [40] investigated the publications of the Association for Computational Linguistics using citation and collaboration network analysis to identify the most central papers and authors. A bibliometric and visual study of consumer behavior research publications from 1966 to 2008 was presented by Muñoz-Leiva et al. [41]. Geaney et al. [42] provided a detailed evaluation of type 2 diabetes mellitus research output during 1951–2012 using large-scale data analysis, bibliometric indicators, and density-equalizing mapping; they concluded that the volume of research was rising in step with the increasing global burden of the disease. With a chord diagram of the 20 most productive countries, Li et al. [43] confirmed the predominance of the USA in international geo-ontology research collaboration. They also found that the international cooperation rates of countries such as Sweden, Switzerland, and New Zealand were relatively high despite their having fewer publications.
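As an illustration of the co-authorship network analysis described above, the following minimal sketch builds a weighted co-authorship graph from publication author lists and ranks authors by degree centrality, assuming the networkx library is available; the author names and publications are invented for illustration, and the cited studies do not necessarily use this library.

```python
from itertools import combinations
import networkx as nx

# Hypothetical author lists for a handful of publications (illustration only).
publications = [
    ["Smith J", "Liu H", "Denny JC"],
    ["Denny JC", "Xu H"],
    ["Liu H", "Xu H", "Smith J"],
]

G = nx.Graph()
for authors in publications:
    # Each pair of co-authors on a paper adds (or strengthens) an edge.
    for a, b in combinations(sorted(set(authors)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Simple centrality measures help spot the most collaborative authors.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))
print("co-authorship ties:", G.number_of_edges())
```

The same construction applies at the affiliation or country level by replacing author names with affiliation or country labels extracted from the publication records.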
There have also been a few studies centering on research topic detection in a given field using bibliometrics. For example, Heo et al. [44] analyzed the field of bioinformatics using a multi-faceted topic modeling method. By combining performance analysis and science mapping, some studies have conducted thematic evolution detection and visualization for a given research field, e.g., hydropower [45], neuroscience [46], and social work [47]. Similar work has also been conducted for specific journals such as Knowledge-Based Systems [22]. Based on co-word analysis, Cobo et al. [48] proposed an automatic approach combining performance analysis and science mapping to show the conceptual evolution of the intelligent transportation systems research field over three consecutive periods; six main thematic areas were identified. With the purpose of mapping and analyzing the structure and evolution of the scientific literature on gender differences in higher education and science, Dehdarirad et al. [49] applied co-word analysis to identify the main concepts, used hierarchical cluster analysis to cluster the keywords, and created a strategic diagram to analyze trends.

Most relevant studies chose WoS as the publication retrieval data source, and therefore author-defined keywords and ISI Keywords Plus were usually used as topic candidates [22, 23, 46]. This may lead to information loss, since the title and abstract fields are not considered. In the study of Yeung et al. [46], key terms in the title and abstract fields were extracted and analyzed using VOSviewer with equal importance; however, it is more reasonable to assign different weights to terms from different fields.

To our knowledge, no study has applied bibliometrics to assess the research output of the NLP-empowered medical research field. Therefore, given the deficiencies in existing research, this study uses PubMed as the data source. With 1405 NLP-empowered medical research publications retrieved, literature distribution characteristics and scientific collaboration are analyzed using a descriptive statistics method and a social network analysis method, respectively. In addition to author-defined keywords and PubMed medical subject headings (MeSH), key terms extracted from the title and abstract fields using a purpose-built Python program are also included in the affinity propagation (AP) clustering analysis for thematic discovery and evolution.
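Below is a minimal sketch of the kind of affinity propagation (AP) clustering used for thematic discovery, assuming scikit-learn and a simple TF-IDF representation of candidate terms; the terms and their context strings are invented, and the sketch omits the field-specific term weighting advocated above, so it should be read as an illustration rather than the authors' actual pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AffinityPropagation

# Hypothetical "context documents", one per candidate term, built from the titles and
# abstracts in which each term occurs; real input would come from the retrieved records.
term_contexts = {
    "information extraction": "extracting entities and relations from clinical notes",
    "named entity recognition": "recognizing entities in clinical notes and reports",
    "text classification": "assigning categories to medical documents and reports",
    "document classification": "assigning labels to biomedical documents",
    "terminology mining": "mining terms and concepts from biomedical vocabularies",
}

terms = list(term_contexts)
X = TfidfVectorizer().fit_transform(term_contexts.values())

# Affinity propagation selects exemplars automatically, so the number of
# thematic clusters does not have to be fixed in advance.
ap = AffinityPropagation(random_state=0).fit(X.toarray())

for label in np.unique(ap.labels_):
    members = [t for t, l in zip(terms, ap.labels_) if l == label]
    exemplar = terms[ap.cluster_centers_indices_[label]]
    print(f"theme '{exemplar}': {members}")
```

In a full analysis, the term-by-feature matrix would be derived from the retrieved PubMed records, with author-defined keywords and MeSH terms weighted more heavily than terms extracted from titles and abstracts.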
[ "20837160", "16135244", "26630392", "26063745", "24441986", "25755127", "25868462", "24933368", "26521301", "25943550", "27836816", "24496068", "26940748", "25327613", "22413016", "25612654", "27608527", "24239737", "19017458", "26104927", "26819840", "26864566", "26846974", "14745042", "26208117", "28617229", "28377687", "17218491", "21737437", "27026618", "26766600" ]
[ { "pmid": "20837160", "title": "An ontology-based measure to compute semantic similarity in biomedicine.", "abstract": "Proper understanding of textual data requires the exploitation and integration of unstructured and heterogeneous clinical sources, healthcare records or scientific literature, which are fundamental aspects in clinical and translational research. The determination of semantic similarity between word pairs is an important component of text understanding that enables the processing, classification and structuring of textual resources. In the past, several approaches for assessing word similarity by exploiting different knowledge sources (ontologies, thesauri, domain corpora, etc.) have been proposed. Some of these measures have been adapted to the biomedical field by incorporating domain information extracted from clinical data or from medical ontologies (such as MeSH or SNOMED CT). In this paper, these approaches are introduced and analyzed in order to determine their advantages and limitations with respect to the considered knowledge bases. After that, a new measure based on the exploitation of the taxonomical structure of a biomedical ontology is proposed. Using SNOMED CT as the input ontology, the accuracy of our proposal is evaluated and compared against other approaches according to a standard benchmark of manually ranked medical terms. The correlation between the results of the evaluated measures and the human experts' ratings shows that our proposal outperforms most of the previous measures avoiding, at the same time, some of their limitations." }, { "pmid": "16135244", "title": "Automation of a problem list using natural language processing.", "abstract": "BACKGROUND\nThe medical problem list is an important part of the electronic medical record in development in our institution. To serve the functions it is designed for, the problem list has to be as accurate and timely as possible. However, the current problem list is usually incomplete and inaccurate, and is often totally unused. To alleviate this issue, we are building an environment where the problem list can be easily and effectively maintained.\n\n\nMETHODS\nFor this project, 80 medical problems were selected for their frequency of use in our future clinical field of evaluation (cardiovascular). We have developed an Automated Problem List system composed of two main components: a background and a foreground application. The background application uses Natural Language Processing (NLP) to harvest potential problem list entries from the list of 80 targeted problems detected in the multiple free-text electronic documents available in our electronic medical record. These proposed medical problems drive the foreground application designed for management of the problem list. Within this application, the extracted problems are proposed to the physicians for addition to the official problem list.\n\n\nRESULTS\nThe set of 80 targeted medical problems selected for this project covered about 5% of all possible diagnoses coded in ICD-9-CM in our study population (cardiovascular adult inpatients), but about 64% of all instances of these coded diagnoses. The system contains algorithms to detect first document sections, then sentences within these sections, and finally potential problems within the sentences. 
The initial evaluation of the section and sentence detection algorithms demonstrated a sensitivity and positive predictive value of 100% when detecting sections, and a sensitivity of 89% and a positive predictive value of 94% when detecting sentences.\n\n\nCONCLUSION\nThe global aim of our project is to automate the process of creating and maintaining a problem list for hospitalized patients and thereby help to guarantee the timeliness, accuracy and completeness of this information." }, { "pmid": "26630392", "title": "\"Rate My Therapist\": Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing.", "abstract": "The technology for evaluating patient-provider interactions in psychotherapy-observational coding-has not changed in 70 years. It is labor-intensive, error prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally-derived empathy ratings were evaluated against human ratings for each provider. Computationally-derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracies, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies." }, { "pmid": "26063745", "title": "Domain adaptation for semantic role labeling of clinical text.", "abstract": "OBJECTIVE\nSemantic role labeling (SRL), which extracts a shallow semantic relation representation from different surface textual forms of free text sentences, is important for understanding natural language. Few studies in SRL have been conducted in the medical domain, primarily due to lack of annotated clinical SRL corpora, which are time-consuming and costly to build. The goal of this study is to investigate domain adaptation techniques for clinical SRL leveraging resources built from newswire and biomedical literature to improve performance and save annotation costs.\n\n\nMATERIALS AND METHODS\nMultisource Integrated Platform for Answering Clinical Questions (MiPACQ), a manually annotated SRL clinical corpus, was used as the target domain dataset. PropBank and NomBank from newswire and BioProp from biomedical literature were used as source domain datasets. Three state-of-the-art domain adaptation algorithms were employed: instance pruning, transfer self-training, and feature augmentation. 
The SRL performance using different domain adaptation algorithms was evaluated by using 10-fold cross-validation on the MiPACQ corpus. Learning curves for the different methods were generated to assess the effect of sample size.\n\n\nRESULTS AND CONCLUSION\nWhen all three source domain corpora were used, the feature augmentation algorithm achieved statistically significant higher F-measure (83.18%), compared to the baseline with MiPACQ dataset alone (F-measure, 81.53%), indicating that domain adaptation algorithms may improve SRL performance on clinical text. To achieve a comparable performance to the baseline method that used 90% of MiPACQ training samples, the feature augmentation algorithm required <50% of training samples in MiPACQ, demonstrating that annotation costs of clinical SRL can be reduced significantly by leveraging existing SRL resources from other domains." }, { "pmid": "24441986", "title": "Word sense disambiguation in the clinical domain: a comparison of knowledge-rich and knowledge-poor unsupervised methods.", "abstract": "OBJECTIVE\nTo evaluate state-of-the-art unsupervised methods on the word sense disambiguation (WSD) task in the clinical domain. In particular, to compare graph-based approaches relying on a clinical knowledge base with bottom-up topic-modeling-based approaches. We investigate several enhancements to the topic-modeling techniques that use domain-specific knowledge sources.\n\n\nMATERIALS AND METHODS\nThe graph-based methods use variations of PageRank and distance-based similarity metrics, operating over the Unified Medical Language System (UMLS). Topic-modeling methods use unlabeled data from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC II) database to derive models for each ambiguous word. We investigate the impact of using different linguistic features for topic models, including UMLS-based and syntactic features. We use a sense-tagged clinical dataset from the Mayo Clinic for evaluation.\n\n\nRESULTS\nThe topic-modeling methods achieve 66.9% accuracy on a subset of the Mayo Clinic's data, while the graph-based methods only reach the 40-50% range, with a most-frequent-sense baseline of 56.5%. Features derived from the UMLS semantic type and concept hierarchies do not produce a gain over bag-of-words features in the topic models, but identifying phrases from UMLS and using syntax does help.\n\n\nDISCUSSION\nAlthough topic models outperform graph-based methods, semantic features derived from the UMLS prove too noisy to improve performance beyond bag-of-words.\n\n\nCONCLUSIONS\nTopic modeling for WSD provides superior results in the clinical domain; however, integration of knowledge remains to be effectively exploited." }, { "pmid": "25755127", "title": "Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features.", "abstract": "OBJECTIVE\nSocial media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, particularly for pharmacovigilance, via the use of natural language processing (NLP) techniques. However, the language in social media is highly informal, and user-expressed medical concepts are often nontechnical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and thus far, advanced machine learning-based NLP techniques have been underutilized. 
Our objective is to design a machine learning-based approach to extract mentions of adverse drug reactions (ADRs) from highly informal text in social media.\n\n\nMETHODS\nWe introduce ADRMine, a machine learning-based concept extraction system that uses conditional random fields (CRFs). ADRMine utilizes a variety of features, including a novel feature for modeling words' semantic similarities. The similarities are modeled by clustering words based on unsupervised, pretrained word representation vectors (embeddings) generated from unlabeled user posts in social media using a deep learning technique.\n\n\nRESULTS\nADRMine outperforms several strong baseline systems in the ADR extraction task by achieving an F-measure of 0.82. Feature analysis demonstrates that the proposed word cluster features significantly improve extraction performance.\n\n\nCONCLUSION\nIt is possible to extract complex medical concepts, with relatively high performance, from informal, user-generated content. Our approach is particularly scalable, suitable for social media mining, as it relies on large volumes of unlabeled data, thus diminishing the need for large, annotated training data sets." }, { "pmid": "25868462", "title": "Normalization of relative and incomplete temporal expressions in clinical narratives.", "abstract": "OBJECTIVE\nTo improve the normalization of relative and incomplete temporal expressions (RI-TIMEXes) in clinical narratives.\n\n\nMETHODS\nWe analyzed the RI-TIMEXes in temporally annotated corpora and propose two hypotheses regarding the normalization of RI-TIMEXes in the clinical narrative domain: the anchor point hypothesis and the anchor relation hypothesis. We annotated the RI-TIMEXes in three corpora to study the characteristics of RI-TMEXes in different domains. This informed the design of our RI-TIMEX normalization system for the clinical domain, which consists of an anchor point classifier, an anchor relation classifier, and a rule-based RI-TIMEX text span parser. We experimented with different feature sets and performed an error analysis for each system component.\n\n\nRESULTS\nThe annotation confirmed the hypotheses that we can simplify the RI-TIMEXes normalization task using two multi-label classifiers. Our system achieves anchor point classification, anchor relation classification, and rule-based parsing accuracy of 74.68%, 87.71%, and 57.2% (82.09% under relaxed matching criteria), respectively, on the held-out test set of the 2012 i2b2 temporal relation challenge.\n\n\nDISCUSSION\nExperiments with feature sets reveal some interesting findings, such as: the verbal tense feature does not inform the anchor relation classification in clinical narratives as much as the tokens near the RI-TIMEX. Error analysis showed that underrepresented anchor point and anchor relation classes are difficult to detect.\n\n\nCONCLUSIONS\nWe formulate the RI-TIMEX normalization problem as a pair of multi-label classification problems. Considering only RI-TIMEX extraction and normalization, the system achieves statistically significant improvement over the RI-TIMEX results of the best systems in the 2012 i2b2 challenge." 
}, { "pmid": "24933368", "title": "Dynamical phenotyping: using temporal analysis of clinically collected physiologic data to stratify populations.", "abstract": "Using glucose time series data from a well measured population drawn from an electronic health record (EHR) repository, the variation in predictability of glucose values quantified by the time-delayed mutual information (TDMI) was explained using a mechanistic endocrine model and manual and automated review of written patient records. The results suggest that predictability of glucose varies with health state where the relationship (e.g., linear or inverse) depends on the source of the acuity. It was found that on a fine scale in parameter variation, the less insulin required to process glucose, a condition that correlates with good health, the more predictable glucose values were. Nevertheless, the most powerful effect on predictability in the EHR subpopulation was the presence or absence of variation in health state, specifically, in- and out-of-control glucose versus in-control glucose. Both of these results are clinically and scientifically relevant because the magnitude of glucose is the most commonly used indicator of health as opposed to glucose dynamics, thus providing for a connection between a mechanistic endocrine model and direct insight to human health via clinically collected data." }, { "pmid": "26521301", "title": "Multilayered temporal modeling for the clinical domain.", "abstract": "OBJECTIVE\nTo develop an open-source temporal relation discovery system for the clinical domain. The system is capable of automatically inferring temporal relations between events and time expressions using a multilayered modeling strategy. It can operate at different levels of granularity--from rough temporality expressed as event relations to the document creation time (DCT) to temporal containment to fine-grained classic Allen-style relations.\n\n\nMATERIALS AND METHODS\nWe evaluated our systems on 2 clinical corpora. One is a subset of the Temporal Histories of Your Medical Events (THYME) corpus, which was used in SemEval 2015 Task 6: Clinical TempEval. The other is the 2012 Informatics for Integrating Biology and the Bedside (i2b2) challenge corpus. We designed multiple supervised machine learning models to compute the DCT relation and within-sentence temporal relations. For the i2b2 data, we also developed models and rule-based methods to recognize cross-sentence temporal relations. We used the official evaluation scripts of both challenges to make our results comparable with results of other participating systems. In addition, we conducted a feature ablation study to find out the contribution of various features to the system's performance.\n\n\nRESULTS\nOur system achieved state-of-the-art performance on the Clinical TempEval corpus and was on par with the best systems on the i2b2 2012 corpus. Particularly, on the Clinical TempEval corpus, our system established a new F1 score benchmark, statistically significant as compared to the baseline and the best participating system.\n\n\nCONCLUSION\nPresented here is the first open-source clinical temporal relation discovery system. It was built using a multilayered temporal modeling strategy and achieved top performance in 2 major shared tasks." 
}, { "pmid": "25943550", "title": "An end-to-end hybrid algorithm for automated medication discrepancy detection.", "abstract": "BACKGROUND\nIn this study we implemented and developed state-of-the-art machine learning (ML) and natural language processing (NLP) technologies and built a computerized algorithm for medication reconciliation. Our specific aims are: (1) to develop a computerized algorithm for medication discrepancy detection between patients' discharge prescriptions (structured data) and medications documented in free-text clinical notes (unstructured data); and (2) to assess the performance of the algorithm on real-world medication reconciliation data.\n\n\nMETHODS\nWe collected clinical notes and discharge prescription lists for all 271 patients enrolled in the Complex Care Medical Home Program at Cincinnati Children's Hospital Medical Center between 1/1/2010 and 12/31/2013. A double-annotated, gold-standard set of medication reconciliation data was created for this collection. We then developed a hybrid algorithm consisting of three processes: (1) a ML algorithm to identify medication entities from clinical notes, (2) a rule-based method to link medication names with their attributes, and (3) a NLP-based, hybrid approach to match medications with structured prescriptions in order to detect medication discrepancies. The performance was validated on the gold-standard medication reconciliation data, where precision (P), recall (R), F-value (F) and workload were assessed.\n\n\nRESULTS\nThe hybrid algorithm achieved 95.0%/91.6%/93.3% of P/R/F on medication entity detection and 98.7%/99.4%/99.1% of P/R/F on attribute linkage. The medication matching achieved 92.4%/90.7%/91.5% (P/R/F) on identifying matched medications in the gold-standard and 88.6%/82.5%/85.5% (P/R/F) on discrepant medications. By combining all processes, the algorithm achieved 92.4%/90.7%/91.5% (P/R/F) and 71.5%/65.2%/68.2% (P/R/F) on identifying the matched and the discrepant medications, respectively. The error analysis on algorithm outputs identified challenges to be addressed in order to improve medication discrepancy detection.\n\n\nCONCLUSION\nBy leveraging ML and NLP technologies, an end-to-end, computerized algorithm achieves promising outcome in reconciling medications between clinical notes and discharge prescriptions." }, { "pmid": "27836816", "title": "Web-based Real-Time Case Finding for the Population Health Management of Patients With Diabetes Mellitus: A Prospective Validation of the Natural Language Processing-Based Algorithm With Statewide Electronic Medical Records.", "abstract": "BACKGROUND\nDiabetes case finding based on structured medical records does not fully identify diabetic patients whose medical histories related to diabetes are available in the form of free text. Manual chart reviews have been used but involve high labor costs and long latency.\n\n\nOBJECTIVE\nThis study developed and tested a Web-based diabetes case finding algorithm using both structured and unstructured electronic medical records (EMRs).\n\n\nMETHODS\nThis study was based on the health information exchange (HIE) EMR database that covers almost all health facilities in the state of Maine, United States. Using narrative clinical notes, a Web-based natural language processing (NLP) case finding algorithm was retrospectively (July 1, 2012, to June 30, 2013) developed with a random subset of HIE-associated facilities, which was then blind tested with the remaining facilities. 
The NLP-based algorithm was subsequently integrated into the HIE database and validated prospectively (July 1, 2013, to June 30, 2014).\n\n\nRESULTS\nOf the 935,891 patients in the prospective cohort, 64,168 diabetes cases were identified using diagnosis codes alone. Our NLP-based case finding algorithm prospectively found an additional 5756 uncodified cases (5756/64,168, 8.97% increase) with a positive predictive value of .90. Of the 21,720 diabetic patients identified by both methods, 6616 patients (6616/21,720, 30.46%) were identified by the NLP-based algorithm before a diabetes diagnosis was noted in the structured EMR (mean time difference = 48 days).\n\n\nCONCLUSIONS\nThe online NLP algorithm was effective in identifying uncodified diabetes cases in real time, leading to a significant improvement in diabetes case finding. The successful integration of the NLP-based case finding algorithm into the Maine HIE database indicates a strong potential for application of this novel method to achieve a more complete ascertainment of diagnoses of diabetes mellitus." }, { "pmid": "24496068", "title": "Clustering clinical trials with similar eligibility criteria features.", "abstract": "OBJECTIVES\nTo automatically identify and cluster clinical trials with similar eligibility features.\n\n\nMETHODS\nUsing the public repository ClinicalTrials.gov as the data source, we extracted semantic features from the eligibility criteria text of all clinical trials and constructed a trial-feature matrix. We calculated the pairwise similarities for all clinical trials based on their eligibility features. For all trials, by selecting one trial as the center each time, we identified trials whose similarities to the central trial were greater than or equal to a predefined threshold and constructed center-based clusters. Then we identified unique trial sets with distinctive trial membership compositions from center-based clusters by disregarding their structural information.\n\n\nRESULTS\nFrom the 145,745 clinical trials on ClinicalTrials.gov, we extracted 5,508,491 semantic features. Of these, 459,936 were unique and 160,951 were shared by at least one pair of trials. Crowdsourcing the cluster evaluation using Amazon Mechanical Turk (MTurk), we identified the optimal similarity threshold, 0.9. Using this threshold, we generated 8806 center-based clusters. Evaluation of a sample of the clusters by MTurk resulted in a mean score 4.331±0.796 on a scale of 1-5 (5 indicating \"strongly agree that the trials in the cluster are similar\").\n\n\nCONCLUSIONS\nWe contribute an automated approach to clustering clinical trials with similar eligibility features. This approach can be potentially useful for investigating knowledge reuse patterns in clinical trial eligibility criteria designs and for improving clinical trial recruitment. We also contribute an effective crowdsourcing method for evaluating informatics interventions." 
}, { "pmid": "26940748", "title": "Valx: A System for Extracting and Structuring Numeric Lab Test Comparison Statements from Text.", "abstract": "OBJECTIVES\nTo develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text.\n\n\nMETHODS\nLeveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes seven steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov.\n\n\nRESULTS\nThe precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively.\n\n\nCONCLUSIONS\nValx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community." }, { "pmid": "25327613", "title": "Adaptive semantic tag mining from heterogeneous clinical research texts.", "abstract": "OBJECTIVES\nTo develop an adaptive approach to mine frequent semantic tags (FSTs) from heterogeneous clinical research texts.\n\n\nMETHODS\nWe develop a \"plug-n-play\" framework that integrates replaceable unsupervised kernel algorithms with formatting, functional, and utility wrappers for FST mining. Temporal information identification and semantic equivalence detection were two example functional wrappers. We first compared this approach's recall and efficiency for mining FSTs from ClinicalTrials.gov to that of a recently published tag-mining algorithm. Then we assessed this approach's adaptability to two other types of clinical research texts: clinical data requests and clinical trial protocols, by comparing the prevalence trends of FSTs across three texts.\n\n\nRESULTS\nOur approach increased the average recall and speed by 12.8% and 47.02% respectively upon the baseline when mining FSTs from ClinicalTrials.gov, and maintained an overlap in relevant FSTs with the base- line ranging between 76.9% and 100% for varying FST frequency thresholds. The FSTs saturated when the data size reached 200 documents. Consistent trends in the prevalence of FST were observed across the three texts as the data size or frequency threshold changed.\n\n\nCONCLUSIONS\nThis paper contributes an adaptive tag-mining framework that is scalable and adaptable without sacrificing its recall. This component-based architectural design can be potentially generalizable to improve the adaptability of other clinical text mining methods." 
}, { "pmid": "22413016", "title": "A small world of citations? The influence of collaboration networks on citation practices.", "abstract": "This paper examines the proximity of authors to those they cite using degrees of separation in a co-author network, essentially using collaboration networks to expand on the notion of self-citations. While the proportion of direct self-citations (including co-authors of both citing and cited papers) is relatively constant in time and across specialties in the natural sciences (10% of references) and the social sciences (20%), the same cannot be said for citations to authors who are members of the co-author network. Differences between fields and trends over time lie not only in the degree of co-authorship which defines the large-scale topology of the collaboration network, but also in the referencing practices within a given discipline, computed by defining a propensity to cite at a given distance within the collaboration network. Overall, there is little tendency to cite those nearby in the collaboration network, excluding direct self-citations. These results are interpreted in terms of small-scale structure, field-specific citation practices, and the value of local co-author networks for the production of knowledge and for the accumulation of symbolic capital. Given the various levels of integration between co-authors, our findings shed light on the question of the availability of 'arm's length' expert reviewers of grant applications and manuscripts." }, { "pmid": "25612654", "title": "Eye neoplasms research: a bibliometric analysis from 1966 to 2012.", "abstract": "PURPOSE\nTo calculate the growth rate of the biomedical literature on eye neoplasms and to assess which journals, countries, and continents are the most productive.\n\n\nMETHODS\nPubMed was used to search for articles published from 1966 to 2012. Total number of articles per year was fitted to a linear equation as well as an exponential curve. To identify the core journals and predict the number of journals containing articles related to eye neoplasms, Bradford's law was applied. For each country and each continent, the gross domestic product (GDP) index (publications per $1 billion USD of GDP) and the population index (publications per million inhabitants) were calculated.\n\n\nRESULTS\nA total of 27,943 references were retrieved. The growth in the number of publications showed a linear increase with a yearly average growth rate of 2.08%, which was lower than for the whole PubMed database (3.59%). Using Bradford's law, 17 core journals were identified, among which 2 journals produced more than 1000 articles (JAMA Ophthalmology and American Journal of Ophthalmology). Europe was the most productive continent, followed by North America and Asia. The United States was by far the predominant country in number of publications, followed by Germany and the United Kingdom. However, population and GDP indexes showed that absolute production did not reflect the production per capita or economic efficiency.\n\n\nCONCLUSIONS\nThis bibliometric study provides data contributing to a better understanding of the eye neoplasm research field." 
}, { "pmid": "27608527", "title": "Chinese academic contribution to burns: A comprehensive bibliometrics analysis from 1985 to 2014.", "abstract": "OBJECTIVE\nThe objective of this study was to conduct a survey of the academic contribution and influence of Chinese scholars in the field of burns.\n\n\nMETHOD\nThe PubMed database was searched to obtain literature items originating from various countries and Chinese provinces from 1985 to 2014. The citation data were collected through the Google Scholar engine.\n\n\nRESULTS\nA total of 1037 papers published in 256 journals were included in this survey. China was second only to the USA in the number of publications on burns since 2010. In addition, the annual number of papers has increased significantly since 2001. The journal Burns published the most number of articles, but its proportion has been decreasing. Of the papers included in the survey, 58.34% were published in journals with a 5-year impact factor between 1 and 2, whereas only 3.66% were published in journals with an impact factor >5. Both total citations and citations per paper have decreased in the past decade. Randomized controlled trials or systematic reviews merely accounted for a small proportion. Twenty-nine provinces including 64 cities contributed one paper at least. The publications from Taiwan, Beijing, Chongqing, Shanghai, and Guangdong were high in both quantity and quality.\n\n\nCONCLUSION\nThe Chinese academic contribution to the field of burns is now on a rise. Although the quality of papers is lagging behind quantity, scholars and academies are dedicated to improving China's academic level." }, { "pmid": "24239737", "title": "A correlation between National Institutes of Health funding and bibliometrics in neurosurgery.", "abstract": "OBJECTIVE\nThe relationship between metrics, such as the h-index, and the ability of researchers to generate funding has not been previously investigated in neurosurgery. This study was performed to determine whether a correlation exists between bibliometrics and National Institutes of Health (NIH) funding data among academic neurosurgeons.\n\n\nMETHODS\nThe h-index, m-quotient, g-index, and contemporary h-index were determined for 1225 academic neurosurgeons from 99 (of 101) departments. Two databases were used to create the citation profiles, Google Scholar and Scopus. The NIH Research Portfolio Online Reporting Tools Expenditures and Reports tool was accessed to obtain career grant funding amount, grant number, year of first grant award, and calendar year of grant funding.\n\n\nRESULTS\nOf the 1225 academic neurosurgeons, 182 (15%) had at least 1 grant with a fully reported NIH award profile. Bibliometric indices were all significantly higher for those with NIH funding compared to those without NIH funding (P < .001). The contemporary h-index was found to be significantly predictive of NIH funding (P < .001). All bibliometric indices were significantly associated with the total number of grants, total award amount, year of first grant, and duration of grants in calendar years (bivariate correlation, P < .001) except for the association of m-quotient with year of first grant (P = .184).\n\n\nCONCLUSIONS\nBibliometric indices are higher for those with NIH funding compared to those without, but only the contemporary h-index was shown to be predictive of NIH funding. 
Among neurosurgeons with NIH funding, higher bibliometric scores were associated with greater total amount of funding, number of grants, duration of grants, and earlier acquisition of their first grant." }, { "pmid": "19017458", "title": "A bibliometric analysis of tuberculosis research indexed in PubMed, 1997-2006.", "abstract": "OBJECTIVE\nTo describe a bibliometric review of the literature on tuberculosis (TB) research indexed in PubMed over a 10-year period.\n\n\nMETHODS\nMedline was used via the PubMed online service of the US National Library of Medicine from 1997 to 2006. The search strategy was: [(tuberculosis) OR (tuberculous) in all fields].\n\n\nRESULTS\nA total of 35 735 references were located. The average annual growth rate was +4.7%. The articles were published in 2874 scientific journals. Sixteen journals contained 25% of the TB journal literature. The main journal was the International Journal of Tuberculosis and Lung Disease. Western Europe was the most productive region, with 31.1% of the articles. The USA ranked second (21%) and Asia third (19.9%). The USA is the predominant country, followed by India, Japan and the United Kingdom. When normalised by population, the order of prominence is Switzerland, New Zealand and Denmark. Normalised by GDP, Gambia, Malawi and Guinea-Bissau were the most productive countries. Normalised by estimated number of TB cases, Iceland, Switzerland and Norway were in leading positions.\n\n\nCONCLUSIONS\nThere was increasing research activity in the field of TB during the period 1997-2006. The countries with more estimated cases of TB produced less research in TB than industrialised countries." }, { "pmid": "26104927", "title": "Does Cancer Literature Reflect Multidisciplinary Practice? A Systematic Review of Oncology Studies in the Medical Literature Over a 20-Year Period.", "abstract": "PURPOSE\nQuality cancer care is best delivered through a multidisciplinary approach requiring awareness of current evidence for all oncologic specialties. The highest impact journals often disseminate such information, so the distribution and characteristics of oncology studies by primary intervention (local therapies, systemic therapies, and targeted agents) were evaluated in 10 high-impact journals over a 20-year period.\n\n\nMETHODS AND MATERIALS\nArticles published in 1994, 2004, and 2014 in New England Journal of Medicine, Lancet, Journal of the American Medical Association, Lancet Oncology, Journal of Clinical Oncology, Annals of Oncology, Radiotherapy and Oncology, International Journal of Radiation Oncology, Biology, Physics, Annals of Surgical Oncology, and European Journal of Surgical Oncology were identified. Included studies were prospectively conducted and evaluated a therapeutic intervention.\n\n\nRESULTS\nA total of 960 studies were included: 240 (25%) investigated local therapies, 551 (57.4%) investigated systemic therapies, and 169 (17.6%) investigated targeted therapies. More local therapy trials (n=185 [77.1%]) evaluated definitive, primary treatment than systemic (n=178 [32.3%]) or targeted therapy trials (n=38 [22.5%]; P<.001). Local therapy trials (n=16 [6.7%]) also had significantly lower rates of industry funding than systemic (n=207 [37.6%]) and targeted therapy trials (n=129 [76.3%]; P<.001). Targeted therapy trials represented 5 (2%), 38 (10.2%), and 126 (38%) of those published in 1994, 2004, and 2014, respectively (P<.001), and industry-funded 48 (18.9%), 122 (32.6%), and 182 (54.8%) trials, respectively (P<.001). 
Compared to publication of systemic therapy trial articles, articles investigating local therapy (odds ratio: 0.025 [95% confidence interval: 0.012-0.048]; P<.001) were less likely to be found in high-impact general medical journals.\n\n\nCONCLUSIONS\nFewer studies evaluating local therapies, such as surgery and radiation, are published in high-impact oncology and medicine literature. Further research and attention are necessary to guide efforts promoting appropriate representation of all oncology studies in high-impact, broad-readership journals." }, { "pmid": "26819840", "title": "Trends and topics in eye disease research in PubMed from 2010 to 2014.", "abstract": "BACKGROUND\nThe purpose of this study is to provide a report on scientific production during the period 2010-2014 in order to identify the major topics as well as the predominant actors (journals, countries, continents) involved in the field of eye disease.\n\n\nMETHODS\nA PubMed search was carried out to extract articles related to eye diseases during the period 2010-2014. Data were downloaded and processed through developed PHP scripts for further analysis.\n\n\nRESULTS\nA total of 62,123 articles were retrieved. A total of 3,368 different journals were found, and 19 journals were identified as \"core journals\" according to Braford's law. English was by far the predominant language. A total of 853,182 MeSH terms were found, representing an average of 13.73 (SD = 4.98) MeSH terms per article. Among these 853,182 MeSH terms, 14,689 different MeSH terms were identified. Vision Disorders, Glaucoma, Diabetic Retinopathy, Macular Degeneration, and Cataract were the most frequent five MeSH terms related to eye diseases. The analysis of the total number of publications showed that Europe and Asia were the most productive continents, and the USA and China the most productive countries. Interestingly, using the mean Five-Year Impact Factor, the two most productive continents were North America and Oceania. After adjustment for population, the overall ranking positions changed in favor of smaller countries (i.e. Iceland, Switzerland, Denmark, and New Zealand), while after adjustment for Gross Domestic Product (GDP), the overall ranking positions changed in favor of some developing countries (Malawi, Guatemala, Singapore).\n\n\nCONCLUSIONS\nDue to the large number of articles included and the numerous parameters analyzed, this study provides a wide view of scientific productivity related to eye diseases during the period 2010-2014 and allows us to better understand this field." }, { "pmid": "26864566", "title": "Longitudinal trends in global obesity research and collaboration: a review using bibliometric metadata.", "abstract": "The goal of this study was to understand research trends and collaboration patterns together with scholarly impact within the domain of global obesity research. We developed and analysed bibliographic affiliation data collected from 117,340 research articles indexed in Scopus database on the topic of obesity and published from 1993-2012. We found steady growth and an exponential increase of publication numbers. Research output in global obesity research roughly doubled each 5 years, with almost 80% of the publications and authors from the second decade (2003-2012). The highest publication output was from the USA - 42% of publications had at least one author from the USA. Many US institutions also ranked highly in terms of research output and collaboration. 
Fifteen of the top-20 institutions in terms of publication output were from the USA; however, several European and Japanese research institutions ranked more highly in terms of average citations per paper. The majority of obesity research and collaboration has been confined to developed countries although developing countries have showed higher growth in recent times, e.g. the publication ratio between 2003-2012 and 1993-2002 for developing regions was much higher than that of developed regions (9:1 vs. 4:1). We also identified around 42 broad disciplines from authors' affiliation data, and these showed strong collaboration between them. Overall, this study provides one of the most comprehensive longitudinal bibliometric analyses of obesity research. This should help in understanding research trends, spatial density, collaboration patterns and the complex multi-disciplinary nature of research in the obesity domain." }, { "pmid": "26846974", "title": "Traumatic Dental Injuries in the primary dentition: a 15-year bibliometric analysis of Dental Traumatology.", "abstract": "AIM\nTo explore the profile of articles on traumatic dental injuries (TDI) in the primary dentition published in Dental Traumatology in the last 15 years using bibliometric analysis.\n\n\nMETHODS\nThree researchers read all titles and abstracts of articles published in Dental Traumatology between 2000 and 2014 (excluding editorials and letters) and selected all articles on TDI in the primary dentition. The articles were categorized according to year of publication, country in which the study was conducted, study design, and topics addressed. Divergences were resolved by consensus between the researchers.\n\n\nRESULTS\nAmong a total of 1257 articles published, 98 were initially excluded. Among the remaining 1159 articles, 152 (13.1%) focused on TDI in the primary dentition. The articles were conducted in 29 countries, with Brazil (38.8%) and Turkey (11.8%) accounting for the largest numbers. Cross-sectional studies (36.2%) and case report/case series (33.6%) were the most frequent study designs. Only two systematic reviews were published. The most commonly addressed topics were frequency/etiology/associated factors (36.8%), treatment (30.9%), and prognosis (19.7%). Among the articles addressing treatment, two-thirds were case reports or case series. The effects of TDI in primary teeth on their permanent successors were addressed in 20.4% of the articles (31/152).\n\n\nCONCLUSIONS\nThe number of articles on TDI in the primary dentition has increased, but remains low. The evaluation of study designs and topics addressed identified gaps that could contribute to the development of new studies on TDI in the primary dentition, especially cohort studies that evaluate risk factors, prognosis, and treatment." }, { "pmid": "14745042", "title": "Coauthorship networks and patterns of scientific collaboration.", "abstract": "By using data from three bibliographic databases in biology, physics, and mathematics, respectively, networks are constructed in which the nodes are scientists, and two scientists are connected if they have coauthored a paper. We use these networks to answer a broad variety of questions about collaboration patterns, such as the numbers of papers authors write, how many people they write them with, what the typical distance between scientists is through the network, and how patterns of collaboration vary between subjects and over time. 
We also summarize a number of recent results by other authors on coauthorship patterns." }, { "pmid": "26208117", "title": "Type 2 Diabetes Research Yield, 1951-2012: Bibliometrics Analysis and Density-Equalizing Mapping.", "abstract": "The objective of this paper is to provide a detailed evaluation of type 2 diabetes mellitus research output from 1951-2012, using large-scale data analysis, bibliometric indicators and density-equalizing mapping. Data were retrieved from the Science Citation Index Expanded database, one of the seven curated databases within Web of Science. Using Boolean operators \"OR\", \"AND\" and \"NOT\", a search strategy was developed to estimate the total number of published items. Only studies with an English abstract were eligible. Type 1 diabetes and gestational diabetes items were excluded. Specific software developed for the database analysed the data. Information including titles, authors' affiliations and publication years were extracted from all files and exported to excel. Density-equalizing mapping was conducted as described by Groenberg-Kloft et al, 2008. A total of 24,783 items were published and cited 476,002 times. The greatest number of outputs were published in 2010 (n=2,139). The United States contributed 28.8% to the overall output, followed by the United Kingdom (8.2%) and Japan (7.7%). Bilateral cooperation was most common between the United States and United Kingdom (n=237). Harvard University produced 2% of all publications, followed by the University of California (1.1%). The leading journals were Diabetes, Diabetologia and Diabetes Care and they contributed 9.3%, 7.3% and 4.0% of the research yield, respectively. In conclusion, the volume of research is rising in parallel with the increasing global burden of disease due to type 2 diabetes mellitus. Bibliometrics analysis provides useful information to scientists and funding agencies involved in the development and implementation of research strategies to address global health issues." }, { "pmid": "28617229", "title": "Analyzing the field of bioinformatics with the multi-faceted topic modeling technique.", "abstract": "BACKGROUND\nBioinformatics is an interdisciplinary field at the intersection of molecular biology and computing technology. To characterize the field as convergent domain, researchers have used bibliometrics, augmented with text-mining techniques for content analysis. In previous studies, Latent Dirichlet Allocation (LDA) was the most representative topic modeling technique for identifying topic structure of subject areas. However, as opposed to revealing the topic structure in relation to metadata such as authors, publication date, and journals, LDA only displays the simple topic structure.\n\n\nMETHODS\nIn this paper, we adopt the Tang et al.'s Author-Conference-Topic (ACT) model to study the field of bioinformatics from the perspective of keyphrases, authors, and journals. The ACT model is capable of incorporating the paper, author, and conference into the topic distribution simultaneously. To obtain more meaningful results, we use journals and keyphrases instead of conferences and bag-of-words.. For analysis, we use PubMed to collected forty-six bioinformatics journals from the MEDLINE database. We conducted time series topic analysis over four periods from 1996 to 2015 to further examine the interdisciplinary nature of bioinformatics.\n\n\nRESULTS\nWe analyze the ACT Model results in each period. 
Additionally, for further integrated analysis, we conduct a time series analysis among the top-ranked keyphrases, journals, and authors according to their frequency. We also examine the patterns in the top journals by simultaneously identifying the topical probability in each period, as well as the top authors and keyphrases. The results indicate that in recent years diversified topics have become more prevalent and convergent topics have become more clearly represented.\n\n\nCONCLUSION\nThe results of our analysis implies that overtime the field of bioinformatics becomes more interdisciplinary where there is a steady increase in peripheral fields such as conceptual, mathematical, and system biology. These results are confirmed by integrated analysis of topic distribution as well as top ranked keyphrases, authors, and journals." }, { "pmid": "28377687", "title": "The Changing Landscape of Neuroscience Research, 2006-2015: A Bibliometric Study.", "abstract": "Background: It is beneficial to evaluate changes in neuroscience research field regarding research directions and topics over a defined period. Such information enables stakeholders to quickly identify the most influential research and incorporate latest evidence into research-informed education. To our knowledge, no study reported changes in neuroscience literature over the last decade. Therefore, the current study determined research terms with highest citation scores, compared publication shares of research areas and contributing countries in this field from 2006 to 2015 and identified the most productive journals. Methods: Data were extracted from Web of Science and Journal Citation Reports (JCR). Only articles and reviews published in journals classified under the JCR \"Neurosciences\" category over the period of interest were included. Title and abstract fields of each included publication were extracted and analyzed via VOSviewer to identify recurring terms with high relative citation scores. Two term maps were produced for publications over the study period to illustrate the extent of co-occurrence, and the impact of terms was evaluated based on their relative citation scores. To further describe the recent research priority or \"hot spots,\" 10 terms with the highest relative citation scores were identified annually. In addition, by applying Bradford's law, we identified 10 journals being the most productive journals per annum over the survey period and evaluated their bilbiometric performances. Results: From 2006 to 2015, there were 47 terms involved in the annual lists of top 10 terms with highest relative citation scores. The most frequently recurring terms were autism (8), meta-analysis (7), functional connectivity (6), default mode network (4) and neuroimaging (4). Neuroscience research related to psychology and behavioral sciences showed an increase in publication share over the survey period, and China has become one of the major contributors to neuroscience research. Ten journals were frequently identified (≥8 years) as core journals within the survey period. Discussion: The landscape of neuroscience research has changed recently, and this paper provides contemporary overview for researchers and health care workers interested in this field's research and developments. Brain imaging and brain connectivity terms had high relative citation scores." 
}, { "pmid": "17218491", "title": "Clustering by passing messages between data points.", "abstract": "Clustering data by identifying a subset of representative examples is important for processing sensory signals and detecting patterns in data. Such \"exemplars\" can be found by randomly choosing an initial subset of data points and then iteratively refining it, but this works well only if that initial choice is close to a good solution. We devised a method called \"affinity propagation,\" which takes as input measures of similarity between pairs of data points. Real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges. We used affinity propagation to cluster images of faces, detect genes in microarray data, identify representative sentences in this manuscript, and identify cities that are efficiently accessed by airline travel. Affinity propagation found clusters with much lower error than other methods, and it did so in less than one-hundredth the amount of time." }, { "pmid": "21737437", "title": "APCluster: an R package for affinity propagation clustering.", "abstract": "SUMMARY\nAffinity propagation (AP) clustering has recently gained increasing popularity in bioinformatics. AP clustering has the advantage that it allows for determining typical cluster members, the so-called exemplars. We provide an R implementation of this promising new clustering technique to account for the ubiquity of R in bioinformatics. This article introduces the package and presents an application from structural biology.\n\n\nAVAILABILITY\nThe R package apcluster is available via CRAN-The Comprehensive R Archive Network: http://cran.r-project.org/web/packages/apcluster\n\n\nCONTACT\[email protected]; [email protected]." }, { "pmid": "27026618", "title": "Efficient identification of nationally mandated reportable cancer cases using natural language processing and machine learning.", "abstract": "OBJECTIVE\nTo help cancer registrars efficiently and accurately identify reportable cancer cases.\n\n\nMATERIAL AND METHODS\nThe Cancer Registry Control Panel (CRCP) was developed to detect mentions of reportable cancer cases using a pipeline built on the Unstructured Information Management Architecture - Asynchronous Scaleout (UIMA-AS) architecture containing the National Library of Medicine's UIMA MetaMap annotator as well as a variety of rule-based UIMA annotators that primarily act to filter out concepts referring to nonreportable cancers. CRCP inspects pathology reports nightly to identify pathology records containing relevant cancer concepts and combines this with diagnosis codes from the Clinical Electronic Data Warehouse to identify candidate cancer patients using supervised machine learning. Cancer mentions are highlighted in all candidate clinical notes and then sorted in CRCP's web interface for faster validation by cancer registrars.\n\n\nRESULTS\nCRCP achieved an accuracy of 0.872 and detected reportable cancer cases with a precision of 0.843 and a recall of 0.848. CRCP increases throughput by 22.6% over a baseline (manual review) pathology report inspection system while achieving a higher precision and recall. 
Depending on registrar time constraints, CRCP can increase recall to 0.939 at the expense of precision by incorporating a data source information feature.\n\n\nCONCLUSION\nCRCP demonstrates accurate results when applying natural language processing features to the problem of detecting patients with cases of reportable cancer from clinical notes. We show that implementing only a portion of cancer reporting rules in the form of regular expressions is sufficient to increase the precision, recall, and speed of the detection of reportable cancer cases when combined with off-the-shelf information extraction software and machine learning." }, { "pmid": "26766600", "title": "Automated Outcome Classification of Computed Tomography Imaging Reports for Pediatric Traumatic Brain Injury.", "abstract": "BACKGROUND\nThe authors have previously demonstrated highly reliable automated classification of free-text computed tomography (CT) imaging reports using a hybrid system that pairs linguistic (natural language processing) and statistical (machine learning) techniques. Previously performed for identifying the outcome of orbital fracture in unprocessed radiology reports from a clinical data repository, the performance has not been replicated for more complex outcomes.\n\n\nOBJECTIVES\nTo validate automated outcome classification performance of a hybrid natural language processing (NLP) and machine learning system for brain CT imaging reports. The hypothesis was that our system has performance characteristics for identifying pediatric traumatic brain injury (TBI).\n\n\nMETHODS\nThis was a secondary analysis of a subset of 2,121 CT reports from the Pediatric Emergency Care Applied Research Network (PECARN) TBI study. For that project, radiologists dictated CT reports as free text, which were then deidentified and scanned as PDF documents. Trained data abstractors manually coded each report for TBI outcome. Text was extracted from the PDF files using optical character recognition. The data set was randomly split evenly for training and testing. Training patient reports were used as input to the Medical Language Extraction and Encoding (MedLEE) NLP tool to create structured output containing standardized medical terms and modifiers for negation, certainty, and temporal status. A random subset stratified by site was analyzed using descriptive quantitative content analysis to confirm identification of TBI findings based on the National Institute of Neurological Disorders and Stroke (NINDS) Common Data Elements project. Findings were coded for presence or absence, weighted by frequency of mentions, and past/future/indication modifiers were filtered. After combining with the manual reference standard, a decision tree classifier was created using data mining tools WEKA 3.7.5 and Salford Predictive Miner 7.0. Performance of the decision tree classifier was evaluated on the test patient reports.\n\n\nRESULTS\nThe prevalence of TBI in the sampled population was 159 of 2,217 (7.2%). The automated classification for pediatric TBI is comparable to our prior results, with the notable exception of lower positive predictive value. 
Manual review of misclassified reports, 95.5% of which were false-positives, revealed that a sizable number of false-positive errors were due to differing outcome definitions between NINDS TBI findings and PECARN clinical important TBI findings and report ambiguity not meeting definition criteria.\n\n\nCONCLUSIONS\nA hybrid NLP and machine learning automated classification system continues to show promise in coding free-text electronic clinical data. For complex outcomes, it can reliably identify negative reports, but manual review of positive reports may be required. As such, it can still streamline data collection for clinical research and performance improvement." } ]
BMC Medical Informatics and Decision Making
29589563
PMC5872502
10.1186/s12911-018-0595-9
A pattern learning-based method for temporal expression extraction and normalization from multi-lingual heterogeneous clinical texts
BackgroundTemporal expression extraction and normalization is a fundamental and essential step in clinical text processing and analysis. Though a variety of commonly used NLP tools are available for medical temporal information extraction, few of them perform satisfactorily on multi-lingual heterogeneous clinical texts.MethodsA novel method called TEER is proposed for both multi-lingual temporal expression extraction and normalization from various types of narrative clinical texts, including clinical data requests, clinical notes, and clinical trial summaries. TEER is characterized by temporal feature summarization, heuristic rule generation, and automatic pattern learning. By representing a temporal expression as a triple <M, A, N>, TEER identifies temporal mentions M, assigns type attributes A to M, and normalizes the values of M into formal representations N.ResultsTEER was compared with six state-of-the-art baselines on two heterogeneous clinical text datasets: 400 actual clinical requests in English and 1459 clinical discharge summaries in Chinese. The results showed that TEER achieved a precision of 0.948 and a recall of 0.877 on the English clinical requests, and a precision of 0.941 and a recall of 0.932 on the Chinese discharge summaries.ConclusionsAn automated method, TEER, for multi-lingual temporal expression extraction was presented. The comparison results on the two datasets demonstrated the effectiveness of TEER in multi-lingual temporal expression extraction from heterogeneous narrative clinical texts.
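The triple representation described in the abstract can be made concrete with a short sketch. The following Python fragment is only an illustration of the general idea, written under assumed names (TemporalTriple, extract_triples) and with deliberately simplified normalization values; it is not the TEER implementation, and its rules are hypothetical.

```python
import re
from dataclasses import dataclass
from typing import Callable, List, Pattern, Tuple

@dataclass
class TemporalTriple:
    mention: str     # M: the temporal mention as it appears in the text
    attribute: str   # A: a TIMEX3-style type attribute (DATE, DURATION, FREQUENCY, ...)
    normalized: str  # N: a formal representation (simplified here for illustration)

# A toy pattern table: each entry pairs a regex with a type attribute and a
# normalization function. Real systems use far richer patterns and learned rules.
PATTERNS: List[Tuple[Pattern, str, Callable]] = [
    (re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b"), "DATE",
     lambda m: m.group(0)),                         # already in ISO 8601 form
    (re.compile(r"\b(\d+)\s+days?\b", re.IGNORECASE), "DURATION",
     lambda m: f"P{m.group(1)}D"),                  # ISO 8601 duration
    (re.compile(r"\btwice\s+daily\b", re.IGNORECASE), "FREQUENCY",
     lambda m: "2/day"),                            # simplified illustrative value
]

def extract_triples(text: str) -> List[TemporalTriple]:
    """Return <M, A, N> triples for every pattern match found in the text."""
    triples = []
    for regex, attribute, normalize in PATTERNS:
        for match in regex.finditer(text):
            triples.append(TemporalTriple(match.group(0), attribute, normalize(match)))
    return triples

if __name__ == "__main__":
    note = "Admitted on 2014-03-02, treated for 5 days, aspirin twice daily."
    for triple in extract_triples(note):
        print(triple)
```

TEER itself summarizes temporal features, generates heuristic rules, and learns such patterns automatically rather than relying on a fixed hand-written table like the one above.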
Related workIn response to the need of temporal expression extraction, an open evaluation challenge-TempEval for temporal expression identification, was held in 2007 [8], 2010 [11], 2013 [12], 2015 [13], 2017 [14], resulting in the wide adoption of a number of systems. HeidelTime [7], an instance of the systems, outperformed the other systems the English temporal expression identification and normalization task of the TempEval 2 challenge. TempEval challenge released official guidelines for annotating temporal expressions in the challenges. For example, the guideline for English text annotation in TempEval 2010 consisted of nouns, proper nouns, noun phrases, adjectives, adjective phrases, adverbs, and adverb phrases.Targeting at temporal expression extraction, TempEx recognized temporal expressions and normalized them using the TIMEX2 standard. Both absolute time (e.g., May 7, 2017) and relative time (e.g., last weekend) could be identified by TempEx through the way of local context. GUTime further enhanced the capabilities. Based on the idea of utilizing a reference time, the method identified and annotated lexical triggers such as yesterday and phrase triggers such as last year. Temporal extraction gained more attention since 2010. As the result, more progressive methods about temporal expression extraction were developed, e.g., HeidelTime. Nevertheless, all the methods focused on newspaper and narrative texts primarily, without testing on medical texts [1].For clinical temporal expression extraction, Informatics for Integrating Biology & the Bedside (i2b2) NLP Challenge devoted on temporal relation identification in medical narratives for EMR data records. The challenge also offered a corpus containing clinical discharge summaries with human annotations of events and temporal expressions for research communities. The corpus was widely applied to the development and evaluation of temporal expression and event identification methods [15]. The challenge tried to evaluate different submitted methods on: 1) temporal expressions including date, time, duration, or frequency types, 2) clinical events containing medical concepts such as treatments, and events related to the clinical timeline of patients, e.g., admissions, transfers among departments, and 3) temporal relations between temporal expressions and clinical events.Clinical TempEval 2015 concentrated on the method competition for timeline extraction and annotation for the medical domain. The challenge included six different tasks. The Task 12 (clinical TempEval) of SemEval-2017 succeeded Clinical TempEval [16] and the past i2b2 temporal challenge [17] directly. The Clinical TempEval focused on clinical timeline extraction and understanding for clinical narratives, basing on the THYME corpus with temporal annotations [18]. 16 teams participated in TempEval 2017 [19].There are a number of research and systems for English temporal expression extraction from clinical texts. Sohn et al. reported a hybrid method to detect temporal information using regular expression matching and matching learning [15]. A comprehensive system for extracting temporal information from clinical texts was proposed by Tang et al. [20]. Tao et al. presented a method for identifying temporal representations of vaccine adverse events using ontology for temporal analysis [21]. Li & Patrick [22] addressed a statistical model using linguistic, contextual and semantic features for extracting temporal expressions from an extremely noisy clinical corpus. Xu et al. 
[23] introduced an end-to-end temporal relation system that includes a temporal extraction sub-system based on conditional random fields (CRF) for named entity extraction and context-free grammar-based normalization. Luo et al. [24] extracted temporal constraints from the eligibility criteria texts of clinical trials using CRF. Chang et al. [25] proposed a hybrid method, TEMPTING, to identify temporal links among entities by combining a rule-based method and a maximum entropy model. Nevertheless, comparing and evaluating the performance of these systems can be difficult because open-source code is rarely available. Moreover, only a few of these systems processed complex clinical texts, such as user-generated clinical notes; most of them still focused on processing relatively formal clinical texts.There has also been research on Chinese temporal expression extraction. Li et al. [26] proposed a Chinese temporal tagging (extraction and normalization) method by developing Chinese HeidelTime resources. Shen et al. [27] constructed a temporal expression extraction model based on the Tsinghua Chinese Treebank. Zhou et al. [28] proposed a method for the recognition of Chinese temporal expressions using regular expression matching, together with a temporal relationship extraction approach based on CRF. Yan and Ji [29] proposed a Chinese temporal information identification method using CRF and semi-supervised learning. Wu et al. [30] built a Chinese temporal parser for the extraction and normalization of temporal information using grammar rules and constraint rules. Zhu et al. [31] presented a CRF-based approach for temporal phrase recognition. Liu et al. [32] proposed a Chinese time expression recognition method that combines common features with semantic role features, tailored to the characteristics of Chinese time expressions, within a CRF framework.Regarding the standardization of temporal annotation, TimeML is a robust specification markup language for annotating temporal expressions in texts [33]. It addresses four different issues in labelling temporal and event expressions, including time stamping of events and reasoning with contextually underspecified temporal expressions. Following the TimeML specifications [34], we use TIMEX3 for annotating temporal expressions throughout the paper. All the attributes of TIMEX3 are inherited from the THYME annotations [18].
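Two recurring building blocks in the systems surveyed above are (a) sequence labelling with CRF over token-level features and (b) TIMEX3-style inline annotation of the detected mentions. The sketch below is a hedged illustration only: the feature names, the example sentence, and the rendering helper are invented, and no specific CRF toolkit is invoked; the feature dictionaries are simply the kind of input such toolkits typically consume.

```python
# Hedged illustration (not taken from any of the cited systems): per-token
# feature dictionaries of the sort fed to CRF-based temporal taggers, plus a
# toy TIMEX3-style rendering of a detected date mention. The TIMEX3 attribute
# names (tid, type, value) follow the TimeML conventions mentioned above.
import re

def token_features(tokens: list[str], i: int) -> dict:
    """Simple per-token features commonly used for sequence labelling."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "has_digit": any(c.isdigit() for c in tok),
        "shape": re.sub(r"[A-Z]", "X", re.sub(r"[a-z]", "x", re.sub(r"\d", "d", tok))),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "<EOS>",
    }

def to_timex3(mention: str, value: str, tid: int = 1) -> str:
    """Render a detected date mention in TIMEX3-style inline markup."""
    return f'<TIMEX3 tid="t{tid}" type="DATE" value="{value}">{mention}</TIMEX3>'

tokens = "Admitted on 2012-06-01 with chest pain .".split()
print(token_features(tokens, 2))               # features for the date token
print(to_timex3("2012-06-01", "2012-06-01"))   # normalized inline annotation
```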
[ "21846787", "26940748", "23558168", "23564629", "23571849", "23256916", "23304326", "23467472", "24060600" ]
[ { "pmid": "21846787", "title": "Automatic extraction of relations between medical concepts in clinical texts.", "abstract": "OBJECTIVE\nA supervised machine learning approach to discover relations between medical problems, treatments, and tests mentioned in electronic medical records.\n\n\nMATERIALS AND METHODS\nA single support vector machine classifier was used to identify relations between concepts and to assign their semantic type. Several resources such as Wikipedia, WordNet, General Inquirer, and a relation similarity metric inform the classifier.\n\n\nRESULTS\nThe techniques reported in this paper were evaluated in the 2010 i2b2 Challenge and obtained the highest F1 score for the relation extraction task. When gold standard data for concepts and assertions were available, F1 was 73.7, precision was 72.0, and recall was 75.3. F1 is defined as 2*Precision*Recall/(Precision+Recall). Alternatively, when concepts and assertions were discovered automatically, F1 was 48.4, precision was 57.6, and recall was 41.7.\n\n\nDISCUSSION\nAlthough a rich set of features was developed for the classifiers presented in this paper, little knowledge mining was performed from medical ontologies such as those found in UMLS. Future studies should incorporate features extracted from such knowledge sources, which we expect to further improve the results. Moreover, each relation discovery was treated independently. Joint classification of relations may further improve the quality of results. Also, joint learning of the discovery of concepts, assertions, and relations may also improve the results of automatic relation extraction.\n\n\nCONCLUSION\nLexical and contextual features proved to be very important in relation extraction from medical texts. When they are not available to the classifier, the F1 score decreases by 3.7%. In addition, features based on similarity contribute to a decrease of 1.1% when they are not available." }, { "pmid": "26940748", "title": "Valx: A System for Extracting and Structuring Numeric Lab Test Comparison Statements from Text.", "abstract": "OBJECTIVES\nTo develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text.\n\n\nMETHODS\nLeveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes seven steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov.\n\n\nRESULTS\nThe precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 diabetes trials, respectively. 
The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively.\n\n\nCONCLUSIONS\nValx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community." }, { "pmid": "23558168", "title": "Comprehensive temporal information detection from clinical text: medical events, time, and TLINK identification.", "abstract": "BACKGROUND\nTemporal information detection systems have been developed by the Mayo Clinic for the 2012 i2b2 Natural Language Processing Challenge.\n\n\nOBJECTIVE\nTo construct automated systems for EVENT/TIMEX3 extraction and temporal link (TLINK) identification from clinical text.\n\n\nMATERIALS AND METHODS\nThe i2b2 organizers provided 190 annotated discharge summaries as the training set and 120 discharge summaries as the test set. Our Event system used a conditional random field classifier with a variety of features including lexical information, natural language elements, and medical ontology. The TIMEX3 system employed a rule-based method using regular expression pattern match and systematic reasoning to determine normalized values. The TLINK system employed both rule-based reasoning and machine learning. All three systems were built in an Apache Unstructured Information Management Architecture framework.\n\n\nRESULTS\nOur TIMEX3 system performed the best (F-measure of 0.900, value accuracy 0.731) among the challenge teams. The Event system produced an F-measure of 0.870, and the TLINK system an F-measure of 0.537.\n\n\nCONCLUSIONS\nOur TIMEX3 system demonstrated good capability of regular expression rules to extract and normalize time information. Event and TLINK machine learning systems required well-defined feature sets to perform well. We could also leverage expert knowledge as part of the machine learning features to further improve TLINK identification performance." }, { "pmid": "23564629", "title": "Evaluating temporal relations in clinical text: 2012 i2b2 Challenge.", "abstract": "BACKGROUND\nThe Sixth Informatics for Integrating Biology and the Bedside (i2b2) Natural Language Processing Challenge for Clinical Records focused on the temporal relations in clinical narratives. The organizers provided the research community with a corpus of discharge summaries annotated with temporal information, to be used for the development and evaluation of temporal reasoning systems. 18 teams from around the world participated in the challenge. During the workshop, participating teams presented comprehensive reviews and analysis of their systems, and outlined future research directions suggested by the challenge contributions.\n\n\nMETHODS\nThe challenge evaluated systems on the information extraction tasks that targeted: (1) clinically significant events, including both clinical concepts such as problems, tests, treatments, and clinical departments, and events relevant to the patient's clinical timeline, such as admissions, transfers between departments, etc; (2) temporal expressions, referring to the dates, times, durations, or frequencies phrases in the clinical text. 
The values of the extracted temporal expressions had to be normalized to an ISO specification standard; and (3) temporal relations, between the clinical events and temporal expressions. Participants determined pairs of events and temporal expressions that exhibited a temporal relation, and identified the temporal relation between them.\n\n\nRESULTS\nFor event detection, statistical machine learning (ML) methods consistently showed superior performance. While ML and rule based methods seemed to detect temporal expressions equally well, the best systems overwhelmingly adopted a rule based approach for value normalization. For temporal relation classification, the systems using hybrid approaches that combined ML and heuristics based methods produced the best results." }, { "pmid": "23571849", "title": "A hybrid system for temporal information extraction from clinical text.", "abstract": "OBJECTIVE\nTo develop a comprehensive temporal information extraction system that can identify events, temporal expressions, and their temporal relations in clinical text. This project was part of the 2012 i2b2 clinical natural language processing (NLP) challenge on temporal information extraction.\n\n\nMATERIALS AND METHODS\nThe 2012 i2b2 NLP challenge organizers manually annotated 310 clinic notes according to a defined annotation guideline: a training set of 190 notes and a test set of 120 notes. All participating systems were developed on the training set and evaluated on the test set. Our system consists of three modules: event extraction, temporal expression extraction, and temporal relation (also called Temporal Link, or 'TLink') extraction. The TLink extraction module contains three individual classifiers for TLinks: (1) between events and section times, (2) within a sentence, and (3) across different sentences. The performance of our system was evaluated using scripts provided by the i2b2 organizers. Primary measures were micro-averaged Precision, Recall, and F-measure.\n\n\nRESULTS\nOur system was among the top ranked. It achieved F-measures of 0.8659 for temporal expression extraction (ranked fourth), 0.6278 for end-to-end TLink track (ranked first), and 0.6932 for TLink-only track (ranked first) in the challenge. We subsequently investigated different strategies for TLink extraction, and were able to marginally improve performance with an F-measure of 0.6943 for TLink-only track." }, { "pmid": "23256916", "title": "Ontology-based time information representation of vaccine adverse events in VAERS for temporal analysis.", "abstract": "UNLABELLED\n\n\n\nBACKGROUND\nThe U.S. FDA/CDC Vaccine Adverse Event Reporting System (VAERS) provides a valuable data source for post-vaccination adverse event analyses. The structured data in the system has been widely used, but the information in the write-up narratives is rarely included in these kinds of analyses. In fact, the unstructured nature of the narratives makes the data embedded in them difficult to be used for any further studies.\n\n\nRESULTS\nWe developed an ontology-based approach to represent the data in the narratives in a \"machine-understandable\" way, so that it can be easily queried and further analyzed. Our focus is the time aspect in the data for time trending analysis. The Time Event Ontology (TEO), Ontology of Adverse Events (OAE), and Vaccine Ontology (VO) are leveraged for the semantic representation of this purpose. A VAERS case report is presented as a use case for the ontological representations. 
The advantages of using our ontology-based Semantic web representation and data analysis are emphasized.\n\n\nCONCLUSIONS\nWe believe that representing both the structured data and the data from write-up narratives in an integrated, unified, and \"machine-understandable\" way can improve research for vaccine safety analyses, causality assessments, and retrospective studies." }, { "pmid": "23304326", "title": "Extracting temporal information from electronic patient records.", "abstract": "A method for automatic extraction of clinical temporal information would be of significant practical importance for deep medical language understanding, and a key to creating many successful applications, such as medical decision making, medical question and answering, etc. This paper proposes a rich statistical model for extracting temporal information from an extremely noisy clinical corpus. Besides the common linguistic, contextual and semantic features, the highly restricted training sample expansion and the structure distance between the temporal expression & related event expressions are also integrated into a supervised machine-learning approach. The learning method produces almost 80% F- score in the extraction of five temporal classes, and nearly 75% F-score in identifying temporally related events. This process has been integrated into the document-processing component of an implemented clinical question answering system that focuses on answering patient-specific questions (See demonstration at http://hitrl.cs.usyd.edu.au/ICNS/)." }, { "pmid": "23467472", "title": "An end-to-end system to identify temporal relation in discharge summaries: 2012 i2b2 challenge.", "abstract": "OBJECTIVE\nTo create an end-to-end system to identify temporal relation in discharge summaries for the 2012 i2b2 challenge. The challenge includes event extraction, timex extraction, and temporal relation identification.\n\n\nDESIGN\nAn end-to-end temporal relation system was developed. It includes three subsystems: an event extraction system (conditional random fields (CRF) name entity extraction and their corresponding attribute classifiers), a temporal extraction system (CRF name entity extraction, their corresponding attribute classifiers, and context-free grammar based normalization system), and a temporal relation system (10 multi-support vector machine (SVM) classifiers and a Markov logic networks inference system) using labeled sequential pattern mining, syntactic structures based on parse trees, and results from a coordination classifier. Micro-averaged precision (P), recall (R), averaged P&R (P&R), and F measure (F) were used to evaluate results.\n\n\nRESULTS\nFor event extraction, the system achieved 0.9415 (P), 0.8930 (R), 0.9166 (P&R), and 0.9166 (F). The accuracies of their type, polarity, and modality were 0.8574, 0.8585, and 0.8560, respectively. For timex extraction, the system achieved 0.8818, 0.9489, 0.9141, and 0.9141, respectively. The accuracies of their type, value, and modifier were 0.8929, 0.7170, and 0.8907, respectively. For temporal relation, the system achieved 0.6589, 0.7129, 0.6767, and 0.6849, respectively. For end-to-end temporal relation, it achieved 0.5904, 0.5944, 0.5921, and 0.5924, respectively. 
With the F measure used for evaluation, we were ranked first out of 14 competing teams (event extraction), first out of 14 teams (timex extraction), third out of 12 teams (temporal relation), and second out of seven teams (end-to-end temporal relation).\n\n\nCONCLUSIONS\nThe system achieved encouraging results, demonstrating the feasibility of the tasks defined by the i2b2 organizers. The experiment result demonstrates that both global and local information is useful in the 2012 challenge." }, { "pmid": "24060600", "title": "TEMPTING system: a hybrid method of rule and machine learning for temporal relation extraction in patient discharge summaries.", "abstract": "Patient discharge summaries provide detailed medical information about individuals who have been hospitalized. To make a precise and legitimate assessment of the abundant data, a proper time layout of the sequence of relevant events should be compiled and used to drive a patient-specific timeline, which could further assist medical personnel in making clinical decisions. The process of identifying the chronological order of entities is called temporal relation extraction. In this paper, we propose a hybrid method to identify appropriate temporal links between a pair of entities. The method combines two approaches: one is rule-based and the other is based on the maximum entropy model. We develop an integration algorithm to fuse the results of the two approaches. All rules and the integration algorithm are formally stated so that one can easily reproduce the system and results. To optimize the system's configuration, we used the 2012 i2b2 challenge TLINK track dataset and applied threefold cross validation to the training set. Then, we evaluated its performance on the training and test datasets. The experiment results show that the proposed TEMPTING (TEMPoral relaTion extractING) system (ranked seventh) achieved an F-score of 0.563, which was at least 30% better than that of the baseline system, which randomly selects TLINK candidates from all pairs and assigns the TLINK types. The TEMPTING system using the hybrid method also outperformed the stage-based TEMPTING system. Its F-scores were 3.51% and 0.97% better than those of the stage-based system on the training set and test set, respectively." } ]
BioData Mining
29610579
PMC5872503
10.1186/s13040-018-0165-9
Pairwise gene GO-based measures for biclustering of high-dimensional expression data
BackgroundBiclustering algorithms search for groups of genes that share the same behavior under a subset of samples in gene expression data. Nowadays, the biological knowledge available in public repositories can be used to drive these algorithms toward biclusters composed of functionally coherent groups of genes. In addition, a distance among genes can be defined according to the information stored about them in the Gene Ontology (GO). Gene pairwise GO semantic similarity measures report a value for each pair of genes that establishes their functional similarity. A scatter search-based algorithm that optimizes a merit function integrating GO information is studied in this paper. This merit function includes a term that incorporates this information through a GO measure.ResultsThe effect of two different gene pairwise GO measures on the performance of the algorithm is analyzed. First, three well-known yeast datasets with approximately one thousand genes are studied. Second, a group of human datasets related to clinical cancer data is also explored by the algorithm. Most of these are high-dimensional datasets composed of a huge number of genes. The resulting biclusters reveal groups of genes linked by the same functionality when the search procedure is driven by one of the proposed GO measures. Furthermore, a qualitative biological study of a group of biclusters shows their relevance from a cancer disease perspective.ConclusionsIt can be concluded that the integration of biological information improves the performance of the biclustering process. The two GO measures studied both improve the results obtained for the yeast datasets. However, when datasets are composed of a huge number of genes, only one of them really improves the algorithm performance. This second case constitutes a clear option for exploring interesting datasets from a clinical point of view.
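As a rough illustration of how a gene pairwise GO measure can be folded into a biclustering merit function, the sketch below uses a simple Jaccard index over each gene's GO term set and blends its bicluster-wide average with an expression-coherence term (mean absolute Pearson correlation). This is not the paper's scatter search algorithm or its actual GO measures; the annotations, expression values, gene names, and the weighting parameter alpha are invented for the example.

```python
# Hedged sketch, not the paper's merit function: it only illustrates blending a
# pairwise gene GO similarity (here a Jaccard index over annotated GO terms)
# with an expression-coherence term over a candidate bicluster.
from itertools import combinations
from statistics import mean
import math

GO_ANNOTATIONS = {               # gene -> set of GO term identifiers (invented)
    "GENE_A": {"GO:0006355", "GO:0006351"},
    "GENE_B": {"GO:0006355", "GO:0045893"},
    "GENE_C": {"GO:0008152"},
}

def go_similarity(g1: str, g2: str) -> float:
    """Jaccard similarity between the GO term sets of two genes."""
    a, b = GO_ANNOTATIONS.get(g1, set()), GO_ANNOTATIONS.get(g2, set())
    return len(a & b) / len(a | b) if a | b else 0.0

def pearson(x: list[float], y: list[float]) -> float:
    """Plain Pearson correlation between two expression profiles."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def merit(bicluster_genes: list[str], expression: dict, alpha: float = 0.5) -> float:
    """Blend mean |Pearson correlation| with mean pairwise GO similarity."""
    pairs = list(combinations(bicluster_genes, 2))
    corr = mean(abs(pearson(expression[g1], expression[g2])) for g1, g2 in pairs)
    go = mean(go_similarity(g1, g2) for g1, g2 in pairs)
    return alpha * corr + (1 - alpha) * go

expr = {"GENE_A": [1.0, 2.1, 3.0], "GENE_B": [0.9, 2.0, 3.2], "GENE_C": [3.0, 1.0, 2.0]}
print(merit(["GENE_A", "GENE_B", "GENE_C"], expr))
```

A scatter search procedure would then evaluate candidate biclusters with such a merit function, favoring groups of genes that are both co-expressed and functionally coherent.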
Related workThe main idea of biclustering is to discover local patterns rather than global patterns in datasets. In recent years, many biclustering algorithms have been proposed in the context of gene expression data [11, 12]. These algorithms differ in their search criteria and their heuristic strategies [1]. They can be classified according to whether or not they are based on a particular evaluation measure [13]. It is important to note that comparing these kinds of techniques is a hard task because the best algorithm generally depends on the type of patterns to discover and the nature of the studied dataset [14].Several frequently referenced algorithms can be highlighted; they can be considered classic biclustering algorithms [15]. The Cheng and Church [16] and FLOC [17] algorithms find biclusters whose score, called the Mean Squared Residue (MSR), is under a threshold (a minimal computation of this measure is sketched after this survey). The first was the foundational algorithm, and FLOC improved upon it. Although the MSR measure has been used in many measure-based algorithms, it cannot capture some relevant patterns [18]. The xMotifs algorithm [19] iteratively searches for subsets of genes whose expression is simultaneously conserved across a subset of conditions. The binary inclusion-maximal biclustering algorithm (BIMAX) was presented in [20], where it was used as a reference method for comparison with other algorithms. The Plaid Model [21] is an additive biclustering algorithm that captures biclusters through additive layers. Spectral Biclustering [22] uses a checkerboard structure to find biclusters and applies a singular value decomposition (SVD) to the matrix representing the dataset. Factor analysis for bicluster acquisition (FABIA) [23] is based on a statistical method that studies the variability among variables (genes) according to a potentially lower number of unobserved variables called factors. The order-preserving submatrix algorithm (OPSM) [24] sequentially searches for biclusters based on a linear ordering among rows. The iterative signature algorithm (ISA) [20] finds up-regulated and down-regulated patterns using a nondeterministic greedy search as its heuristic; blocks of values coherent with respect to rows and columns are found by reordering the input matrix. Finally, a family of measure-based algorithms that use evolutionary computation techniques can also be highlighted [25–28]. Moreover, several algorithms of this group use correlations among genes as a measure for bicluster evaluation [7, 29–35].In recent years, the use of biological information as a mechanism for knowledge-driven search has been studied. Concretely, some algorithms have recently used GO functional annotation files to improve their performance in traditional clustering of gene expression data [36]. GO was also used in an unsupervised scenario based on a Principal Component Analysis (PCA) method to explore gene expression datasets [37].In the field of biclustering, the AID-ISA algorithm [38] is a modified version of the ISA algorithm that uses a procedure to incorporate additional sources of information. GenMiner [39] is an algorithm based on association rules that also handles biological annotation files. It integrates gene expression and annotation data in a single framework in order to select relevant rules during the search process. The algorithm presented in [40] works with self-organizing maps and combines an ontology-based clustering using GO with an expression-based clustering. 
Moreover, in this field but specialized in microRNA and target gene data, the algorithm presented in [41] used GO to establish a ranking of its results.Due to the NP-hard nature of biclustering [42], most algorithms have difficulty finding relevant information in high-dimensional datasets. Recently, some authors have included constraints during the search process to cope with the size of the dataset, so that only the most relevant part of the dataset is explored [43, 44]. The BiC2PAM algorithm [45] uses pattern mining-based ideas to prune the search process. It also considers the biological context through the fulfilment of several constraints related to biologically interesting properties and to annotations from domain knowledge. That paper also establishes a classification of the new biclustering algorithms based on knowledge integration: constraints with nice properties, parametric constraints, and biclustering with annotations.The authors of this paper presented a preliminary biclustering algorithm that integrates biological knowledge in [9]. Namely, a scatter search metaheuristic algorithm [46] was adapted to optimize a merit function that handles gene expression and gene annotation data. As a consequence of this first study of biological information integration in biclustering, a gene pairwise GO measure was also studied in [10]. The current work extends this last work by analyzing how to improve the algorithm's performance using these ideas in the context of high-dimensional gene expression datasets. This work can be classified as a constraint-based biclustering algorithm with knowledge integration through the use of annotations from knowledge-based repositories [45].
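For reference, the Mean Squared Residue mentioned in the survey above can be computed directly from a candidate submatrix. The sketch below is a plain restatement of the classic definition, MSR = mean over (i, j) of (a_ij - a_iJ - a_Ij + a_IJ)^2, with a_iJ the row mean, a_Ij the column mean, and a_IJ the overall mean; the toy matrices are invented and only illustrate that a pure additive (shifting) pattern yields MSR = 0.

```python
# Hedged sketch: a direct computation of the Mean Squared Residue (MSR) used by
# Cheng and Church-style biclustering, applied to two tiny invented matrices.
from statistics import mean

def msr(bicluster: list[list[float]]) -> float:
    """MSR = average of (a_ij - row_mean_i - col_mean_j + overall_mean)^2."""
    n_rows, n_cols = len(bicluster), len(bicluster[0])
    row_means = [mean(row) for row in bicluster]
    col_means = [mean(bicluster[i][j] for i in range(n_rows)) for j in range(n_cols)]
    overall = mean(row_means)  # grand mean (rows have equal length)
    return mean(
        (bicluster[i][j] - row_means[i] - col_means[j] + overall) ** 2
        for i in range(n_rows) for j in range(n_cols)
    )

shifted = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]  # additive shift between rows
print(msr(shifted))   # 0.0 -- a pure shifting pattern is captured perfectly
noisy = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 9.0, 8.0]]
print(msr(noisy))     # > 0 -- deviation from the shifting pattern raises the score
```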
[ "10802651", "16500941", "22772837", "18179696", "21261986", "26160444", "28114903", "16144809", "12671006", "20418340", "12935334", "19128508", "20015398", "19734153", "23496895", "19843612", "23387364", "16824804", "23323856", "25413436", "27651825", "14668247", "7833759", "24334380", "11752295", "26487634", "21751369", "19744993" ]
[ { "pmid": "16500941", "title": "A systematic comparison and evaluation of biclustering methods for gene expression data.", "abstract": "MOTIVATION\nIn recent years, there have been various efforts to overcome the limitations of standard clustering approaches for the analysis of gene expression data by grouping genes and samples simultaneously. The underlying concept, which is often referred to as biclustering, allows to identify sets of genes sharing compatible expression patterns across subsets of samples, and its usefulness has been demonstrated for different organisms and datasets. Several biclustering methods have been proposed in the literature; however, it is not clear how the different techniques compare with each other with respect to the biological relevance of the clusters as well as with other characteristics such as robustness and sensitivity to noise. Accordingly, no guidelines concerning the choice of the biclustering method are currently available.\n\n\nRESULTS\nFirst, this paper provides a methodology for comparing and validating biclustering methods that includes a simple binary reference model. Although this model captures the essential features of most biclustering approaches, it is still simple enough to exactly determine all optimal groupings; to this end, we propose a fast divide-and-conquer algorithm (Bimax). Second, we evaluate the performance of five salient biclustering algorithms together with the reference model and a hierarchical clustering method on various synthetic and real datasets for Saccharomyces cerevisiae and Arabidopsis thaliana. The comparison reveals that (1) biclustering in general has advantages over a conventional hierarchical clustering approach, (2) there are considerable performance differences between the tested methods and (3) already the simple reference model delivers relevant patterns within all considered settings." }, { "pmid": "22772837", "title": "A comparative analysis of biclustering algorithms for gene expression data.", "abstract": "The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. 
Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters." }, { "pmid": "18179696", "title": "An open-source representation for 2-DE-centric proteomics and support infrastructure for data storage and analysis.", "abstract": "BACKGROUND\nIn spite of two-dimensional gel electrophoresis (2-DE) being an effective and widely used method to screen the proteome, its data standardization has still not matured to the level of microarray genomics data or mass spectrometry approaches. The trend toward identifying encompassing data standards has been expanding from genomics to transcriptomics, and more recently to proteomics. The relative success of genomic and transcriptomic data standardization has enabled the development of central repositories such as GenBank and Gene Expression Omnibus. An equivalent 2-DE-centric data structure would similarly have to include a balance among raw data, basic feature detection results, sufficiency in the description of the experimental context and methods, and an overall structure that facilitates a diversity of usages, from central reposition to local data representation in LIMs systems.\n\n\nRESULTS & CONCLUSION\nAchieving such a balance can only be accomplished through several iterations involving bioinformaticians, bench molecular biologists, and the manufacturers of the equipment and commercial software from which the data is primarily generated. Such an encompassing data structure is described here, developed as the mature successor to the well established and broadly used earlier version. A public repository, AGML Central, is configured with a suite of tools for the conversion from a variety of popular formats, web-based visualization, and interoperation with other tools and repositories, and is particularly mass-spectrometry oriented with I/O for annotation and data analysis." }, { "pmid": "21261986", "title": "Biclustering of gene expression data by correlation-based scatter search.", "abstract": "BACKGROUND\nThe analysis of data generated by microarray technology is very useful to understand how the genetic information becomes functional gene products. Biclustering algorithms can determine a group of genes which are co-expressed under a set of experimental conditions. Recently, new biclustering methods based on metaheuristics have been proposed. Most of them use the Mean Squared Residue as merit function but interesting and relevant patterns from a biological point of view such as shifting and scaling patterns may not be detected using this measure. However, it is important to discover this type of patterns since commonly the genes can present a similar behavior although their expression levels vary in different ranges or magnitudes.\n\n\nMETHODS\nScatter Search is an evolutionary technique that is based on the evolution of a small set of solutions which are chosen according to quality and diversity criteria. This paper presents a Scatter Search with the aim of finding biclusters from gene expression data. 
In this algorithm the proposed fitness function is based on the linear correlation among genes to detect shifting and scaling patterns from genes and an improvement method is included in order to select just positively correlated genes.\n\n\nRESULTS\nThe proposed algorithm has been tested with three real data sets such as Yeast Cell Cycle dataset, human B-cells lymphoma dataset and Yeast Stress dataset, finding a remarkable number of biclusters with shifting and scaling patterns. In addition, the performance of the proposed method and fitness function are compared to that of CC, OPSM, ISA, BiMax, xMotifs and Samba using Gene the Ontology Database." }, { "pmid": "26160444", "title": "Biclustering on expression data: A review.", "abstract": "Biclustering has become a popular technique for the study of gene expression data, especially for discovering functionally related gene sets under different subsets of experimental conditions. Most of biclustering approaches use a measure or cost function that determines the quality of biclusters. In such cases, the development of both a suitable heuristics and a good measure for guiding the search are essential for discovering interesting biclusters in an expression matrix. Nevertheless, not all existing biclustering approaches base their search on evaluation measures for biclusters. There exists a diverse set of biclustering tools that follow different strategies and algorithmic concepts which guide the search towards meaningful results. In this paper we present a extensive survey of biclustering approaches, classifying them into two categories according to whether or not use evaluation metrics within the search method: biclustering algorithms based on evaluation measures and non metric-based biclustering algorithms. In both cases, they have been classified according to the type of meta-heuristics which they are based on." }, { "pmid": "28114903", "title": "A systematic comparative evaluation of biclustering techniques.", "abstract": "BACKGROUND\nBiclustering techniques are capable of simultaneously clustering rows and columns of a data matrix. These techniques became very popular for the analysis of gene expression data, since a gene can take part of multiple biological pathways which in turn can be active only under specific experimental conditions. Several biclustering algorithms have been developed in the past recent years. In order to provide guidance regarding their choice, a few comparative studies were conducted and reported in the literature. In these studies, however, the performances of the methods were evaluated through external measures that have more recently been shown to have undesirable properties. Furthermore, they considered a limited number of algorithms and datasets.\n\n\nRESULTS\nWe conducted a broader comparative study involving seventeen algorithms, which were run on three synthetic data collections and two real data collections with a more representative number of datasets. For the experiments with synthetic data, five different experimental scenarios were studied: different levels of noise, different numbers of implanted biclusters, different levels of symmetric bicluster overlap, different levels of asymmetric bicluster overlap and different bicluster sizes, for which the results were assessed with more suitable external measures. 
For the experiments with real datasets, the results were assessed by gene set enrichment and clustering accuracy.\n\n\nCONCLUSIONS\nWe observed that each algorithm achieved satisfactory results in part of the biclustering tasks in which they were investigated. The choice of the best algorithm for some application thus depends on the task at hand and the types of patterns that one wants to detect." }, { "pmid": "16144809", "title": "Shifting and scaling patterns from gene expression data.", "abstract": "MOTIVATION\nDuring the last years, the discovering of biclusters in data is becoming more and more popular. Biclustering aims at extracting a set of clusters, each of which might use a different subset of attributes. Therefore, it is clear that the usefulness of biclustering techniques is beyond the traditional clustering techniques, especially when datasets present high or very high dimensionality. Also, biclustering considers overlapping, which is an interesting aspect, algorithmically and from the point of view of the result interpretation. Since the Cheng and Church's works, the mean squared residue has turned into one of the most popular measures to search for biclusters, which ideally should discover shifting and scaling patterns.\n\n\nRESULTS\nIn this work, we identify both types of patterns (shifting and scaling) and demonstrate that the mean squared residue is very useful to search for shifting patterns, but it is not appropriate to find scaling patterns because even when we find a perfect scaling pattern the mean squared residue is not zero. In addition, we provide an interesting result: the mean squared residue is highly dependent on the variance of the scaling factor, which makes possible that any algorithm based on this measure might not find these patterns in data when the variance of gene values is high. The main contribution of this paper is to prove that the mean squared residue is not precise enough from the mathematical point of view in order to discover shifting and scaling patterns at the same time.\n\n\nCONTACT\[email protected]." }, { "pmid": "12671006", "title": "Spectral biclustering of microarray data: coclustering genes and conditions.", "abstract": "Global analyses of RNA expression levels are useful for classifying genes and overall phenotypes. Often these classification problems are linked, and one wants to find \"marker genes\" that are differentially expressed in particular sets of \"conditions.\" We have developed a method that simultaneously clusters genes and conditions, finding distinctive \"checkerboard\" patterns in matrices of gene expression data, if they exist. In a cancer context, these checkerboards correspond to genes that are markedly up- or downregulated in patients with particular types of tumors. Our method, spectral biclustering, is based on the observation that checkerboard structures in matrices of expression data can be found in eigenvectors corresponding to characteristic expression patterns across genes or conditions. In addition, these eigenvectors can be readily identified by commonly used linear algebra approaches, in particular the singular value decomposition (SVD), coupled with closely integrated normalization steps. We present a number of variants of the approach, depending on whether the normalization over genes and conditions is done independently or in a coupled fashion. 
We then apply spectral biclustering to a selection of publicly available cancer expression data sets, and examine the degree to which the approach is able to identify checkerboard structures. Furthermore, we compare the performance of our biclustering methods against a number of reasonable benchmarks (e.g., direct application of SVD or normalized cuts to raw data)." }, { "pmid": "20418340", "title": "FABIA: factor analysis for bicluster acquisition.", "abstract": "MOTIVATION\nBiclustering of transcriptomic data groups genes and samples simultaneously. It is emerging as a standard tool for extracting knowledge from gene expression measurements. We propose a novel generative approach for biclustering called 'FABIA: Factor Analysis for Bicluster Acquisition'. FABIA is based on a multiplicative model, which accounts for linear dependencies between gene expression and conditions, and also captures heavy-tailed distributions as observed in real-world transcriptomic data. The generative framework allows to utilize well-founded model selection methods and to apply Bayesian techniques.\n\n\nRESULTS\nOn 100 simulated datasets with known true, artificially implanted biclusters, FABIA clearly outperformed all 11 competitors. On these datasets, FABIA was able to separate spurious biclusters from true biclusters by ranking biclusters according to their information content. FABIA was tested on three microarray datasets with known subclusters, where it was two times the best and once the second best method among the compared biclustering approaches.\n\n\nAVAILABILITY\nFABIA is available as an R package on Bioconductor (http://www.bioconductor.org). All datasets, results and software are available at http://www.bioinf.jku.at/software/fabia/fabia.html.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "12935334", "title": "Discovering local structure in gene expression data: the order-preserving submatrix problem.", "abstract": "This paper concerns the discovery of patterns in gene expression matrices, in which each element gives the expression level of a given gene in a given experiment. Most existing methods for pattern discovery in such matrices are based on clustering genes by comparing their expression levels in all experiments, or clustering experiments by comparing their expression levels for all genes. Our work goes beyond such global approaches by looking for local patterns that manifest themselves when we focus simultaneously on a subset G of the genes and a subset T of the experiments. Specifically, we look for order-preserving submatrices (OPSMs), in which the expression levels of all genes induce the same linear ordering of the experiments (we show that the OPSM search problem is NP-hard in the worst case). Such a pattern might arise, for example, if the experiments in T represent distinct stages in the progress of a disease or in a cellular process and the expression levels of all genes in G vary across the stages in the same way. We define a probabilistic model in which an OPSM is hidden within an otherwise random matrix. Guided by this model, we develop an efficient algorithm for finding the hidden OPSM in the random matrix. In data generated according to the model, the algorithm recovers the hidden OPSM with a very high success rate. Application of the methods to breast cancer data seem to reveal significant local patterns." 
}, { "pmid": "19128508", "title": "TCP: a tool for designing chimera proteins based on the tertiary structure information.", "abstract": "BACKGROUND\nChimera proteins are widely used for the analysis of the protein-protein interaction region. One of the major issues is the epitope analysis of the monoclonal antibody. In the analysis, a continuous portion of an antigen is sequentially substituted into a different sequence. This method works well for an antibody recognizing a linear epitope, but not for that recognizing a discontinuous epitope. Although the designing the chimera proteins based on the tertiary structure information is required in such situations, there is no appropriate tool so far.\n\n\nRESULTS\nIn light of the problem, we developed a tool named TCP (standing for a Tool for designing Chimera Proteins), which extracts some sets of mutually orthogonal cutting surfaces for designing chimera proteins using a genetic algorithm. TCP can also incorporate and consider the solvent accessible surface area information calculated by a DSSP program. The test results of our method indicate that the TCP is robust and applicable to various shapes of proteins.\n\n\nCONCLUSION\nWe developed TCP, a tool for designing chimera proteins based on the tertiary structure information. TCP is robust and possesses several favourable features, and we believe it is a useful tool for designing chimera proteins. TCP is freely available as an additional file of this manuscript for academic and non-profit organization." }, { "pmid": "20015398", "title": "A biclustering algorithm based on a bicluster enumeration tree: application to DNA microarray data.", "abstract": "BACKGROUND\nIn a number of domains, like in DNA microarray data analysis, we need to cluster simultaneously rows (genes) and columns (conditions) of a data matrix to identify groups of rows coherent with groups of columns. This kind of clustering is called biclustering. Biclustering algorithms are extensively used in DNA microarray data analysis. More effective biclustering algorithms are highly desirable and needed.\n\n\nMETHODS\nWe introduce BiMine, a new enumeration algorithm for biclustering of DNA microarray data. The proposed algorithm is based on three original features. First, BiMine relies on a new evaluation function called Average Spearman's rho (ASR). Second, BiMine uses a new tree structure, called Bicluster Enumeration Tree (BET), to represent the different biclusters discovered during the enumeration process. Third, to avoid the combinatorial explosion of the search tree, BiMine introduces a parametric rule that allows the enumeration process to cut tree branches that cannot lead to good biclusters.\n\n\nRESULTS\nThe performance of the proposed algorithm is assessed using both synthetic and real DNA microarray data. The experimental results show that BiMine competes well with several other biclustering methods. Moreover, we test the biological significance using a gene annotation web-tool to show that our proposed method is able to produce biologically relevant biclusters. The software is available upon request from the authors to academic users." }, { "pmid": "19734153", "title": "Bi-correlation clustering algorithm for determining a set of co-regulated genes.", "abstract": "MOTIVATION\nBiclustering has been emerged as a powerful tool for identification of a group of co-expressed genes under a subset of experimental conditions (measurements) present in a gene expression dataset. 
Several biclustering algorithms have been proposed till date. In this article, we address some of the important shortcomings of these existing biclustering algorithms and propose a new correlation-based biclustering algorithm called bi-correlation clustering algorithm (BCCA).\n\n\nRESULTS\nBCCA has been able to produce a diverse set of biclusters of co-regulated genes over a subset of samples where all the genes in a bicluster have a similar change of expression pattern over the subset of samples. Moreover, the genes in a bicluster have common transcription factor binding sites in the corresponding promoter sequences. The presence of common transcription factors binding sites, in the corresponding promoter sequences, is an evidence that a group of genes in a bicluster are co-regulated. Biclusters determined by BCCA also show highly enriched functional categories. Using different gene expression datasets, we demonstrate strength and superiority of BCCA over some existing biclustering algorithms.\n\n\nAVAILABILITY\nThe software for BCCA has been developed using C and Visual Basic languages, and can be executed on the Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/ approximately rajat. Then it needs to be installed. Two word files (included in the zip file) need to be consulted before installation and execution of the software.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "23496895", "title": "Biclustering for the comprehensive search of correlated gene expression patterns using clustered seed expansion.", "abstract": "BACKGROUND\nIn a functional analysis of gene expression data, biclustering method can give crucial information by showing correlated gene expression patterns under a subset of conditions. However, conventional biclustering algorithms still have some limitations to show comprehensive and stable outputs.\n\n\nRESULTS\nWe propose a novel biclustering approach called \"BIclustering by Correlated and Large number of Individual Clustered seeds (BICLIC)\" to find comprehensive sets of correlated expression patterns in biclusters using clustered seeds and their expansion with correlation of gene expression. BICLIC outperformed competing biclustering algorithms by completely recovering implanted biclusters in simulated datasets with various types of correlated patterns: shifting, scaling, and shifting-scaling. Furthermore, in a real yeast microarray dataset and a lung cancer microarray dataset, BICLIC found more comprehensive sets of biclusters that are significantly enriched to more diverse sets of biological terms than those of other competing biclustering algorithms.\n\n\nCONCLUSIONS\nBICLIC provides significant benefits in finding comprehensive sets of correlated patterns and their functional implications from a gene expression dataset." }, { "pmid": "19843612", "title": "Sequence-non-specific effects of RNA interference triggers and microRNA regulators.", "abstract": "RNA reagents of diverse lengths and structures, unmodified or containing various chemical modifications are powerful tools of RNA interference and microRNA technologies. These reagents which are either delivered to cells using appropriate carriers or are expressed in cells from suitable vectors often cause unintended sequence-non-specific immune responses besides triggering intended sequence-specific silencing effects. 
This article reviews the present state of knowledge regarding the cellular sensors of foreign RNA, the signaling pathways these sensors mobilize and shows which specific features of the RNA reagents set the responsive systems on alert. The representative examples of toxic effects caused in the investigated cell lines and tissues by the RNAs of specific types and structures are collected and may be instructive for further studies of sequence-non-specific responses to foreign RNA in human cells." }, { "pmid": "23387364", "title": "A new unsupervised gene clustering algorithm based on the integration of biological knowledge into expression data.", "abstract": "BACKGROUND\nGene clustering algorithms are massively used by biologists when analysing omics data. Classical gene clustering strategies are based on the use of expression data only, directly as in Heatmaps, or indirectly as in clustering based on coexpression networks for instance. However, the classical strategies may not be sufficient to bring out all potential relationships amongst genes.\n\n\nRESULTS\nWe propose a new unsupervised gene clustering algorithm based on the integration of external biological knowledge, such as Gene Ontology annotations, into expression data. We introduce a new distance between genes which consists in integrating biological knowledge into the analysis of expression data. Therefore, two genes are close if they have both similar expression profiles and similar functional profiles at once. Then a classical algorithm (e.g. K-means) is used to obtain gene clusters. In addition, we propose an automatic evaluation procedure of gene clusters. This procedure is based on two indicators which measure the global coexpression and biological homogeneity of gene clusters. They are associated with hypothesis testing which allows to complement each indicator with a p-value.Our clustering algorithm is compared to the Heatmap clustering and the clustering based on gene coexpression network, both on simulated and real data. In both cases, it outperforms the other methodologies as it provides the highest proportion of significantly coexpressed and biologically homogeneous gene clusters, which are good candidates for interpretation.\n\n\nCONCLUSION\nOur new clustering algorithm provides a higher proportion of good candidates for interpretation. Therefore, we expect the interpretation of these clusters to help biologists to formulate new hypothesis on the relationships amongst genes." }, { "pmid": "16824804", "title": "Co-clustering and visualization of gene expression data and gene ontology terms for Saccharomyces cerevisiae using self-organizing maps.", "abstract": "We propose a novel co-clustering algorithm that is based on self-organizing maps (SOMs). The method is applied to group yeast (Saccharomyces cerevisiae) genes according to both expression profiles and Gene Ontology (GO) annotations. The combination of multiple databases is supposed to provide a better biological definition and separation of gene clusters. We compare different levels of genome-wide co-clustering by weighting the involved sources of information differently. Clustering quality is determined by both general and SOM-specific validation measures. Co-clustering relies on a sufficient correlation between the different datasets. We investigate in various experiments how much GO information is contained in the applied gene expression dataset and vice versa. 
The second major contribution is a visualization technique that applies the cluster structure of SOMs for a better biological interpretation of gene (expression) clusterings. Our GO term maps reveal functional neighborhoods between clusters forming biologically meaningful functional SOM regions. To cope with the high variety and specificity of GO terms, gene and cluster annotations are mapped to a reduced vocabulary of more general GO terms. In particular, this advances the ability of SOMs to act as gene function predictors." }, { "pmid": "23323856", "title": "XperimentR: painless annotation of a biological experiment for the laboratory scientist.", "abstract": "BACKGROUND\nToday's biological experiments often involve the collaboration of multidisciplinary researchers utilising several high throughput 'omics platforms. There is a requirement for the details of the experiment to be adequately described using standardised ontologies to enable data preservation, the analysis of the data and to facilitate the export of the data to public repositories. However there are a bewildering number of ontologies, controlled vocabularies, and minimum standards available for use to describe experiments. There is a need for user-friendly software tools to aid laboratory scientists in capturing the experimental information.\n\n\nRESULTS\nA web application called XperimentR has been developed for use by laboratory scientists, consisting of a browser-based interface and server-side components which provide an intuitive platform for capturing and sharing experimental metadata. Information recorded includes details about the biological samples, procedures, protocols, and experimental technologies, all of which can be easily annotated using the appropriate ontologies. Files and raw data can be imported and associated with the biological samples via the interface, from either users' computers, or commonly used open-source data repositories. Experiments can be shared with other users, and experiments can be exported in the standard ISA-Tab format for deposition in public databases. XperimentR is freely available and can be installed natively or by using a provided pre-configured Virtual Machine. A guest system is also available for trial purposes.\n\n\nCONCLUSION\nWe present a web based software application to aid the laboratory scientist to capture, describe and share details about their experiments." }, { "pmid": "25413436", "title": "A framework for generalized subspace pattern mining in high-dimensional datasets.", "abstract": "BACKGROUND\nA generalized notion of biclustering involves the identification of patterns across subspaces within a data matrix. This approach is particularly well-suited to analysis of heterogeneous molecular biology datasets, such as those collected from populations of cancer patients. Different definitions of biclusters will offer different opportunities to discover information from datasets, making it pertinent to tailor the desired patterns to the intended application. This paper introduces 'GABi', a customizable framework for subspace pattern mining suited to large heterogeneous datasets. Most existing biclustering algorithms discover biclusters of only a few distinct structures. 
However, by enabling definition of arbitrary bicluster models, the GABi framework enables the application of biclustering to tasks for which no existing algorithm could be used.\n\n\nRESULTS\nFirst, a series of artificial datasets were constructed to represent three clearly distinct scenarios for applying biclustering. With a bicluster model created for each distinct scenario, GABi is shown to recover the correct solutions more effectively than a panel of alternative approaches, where the bicluster model may not reflect the structure of the desired solution. Secondly, the GABi framework is used to integrate clinical outcome data with an ovarian cancer DNA methylation dataset, leading to the discovery that widespread dysregulation of DNA methylation associates with poor patient prognosis, a result that has not previously been reported. This illustrates a further benefit of the flexible bicluster definition of GABi, which is that it enables incorporation of multiple sources of data, with each data source treated in a specific manner, leading to a means of intelligent integrated subspace pattern mining across multiple datasets.\n\n\nCONCLUSIONS\nThe GABi framework enables discovery of biologically relevant patterns of any specified structure from large collections of genomic data. An R implementation of the GABi framework is available through CRAN (http://cran.r-project.org/web/packages/GABi/index.html)." }, { "pmid": "27651825", "title": "BiC2PAM: constraint-guided biclustering for biological data analysis with domain knowledge.", "abstract": "BACKGROUND\nBiclustering has been largely used in biological data analysis, enabling the discovery of putative functional modules from omic and network data. Despite the recognized importance of incorporating domain knowledge to guide biclustering and guarantee a focus on relevant and non-trivial biclusters, this possibility has not yet been comprehensively addressed. This results from the fact that the majority of existing algorithms are only able to deliver sub-optimal solutions with restrictive assumptions on the structure, coherency and quality of biclustering solutions, thus preventing the up-front satisfaction of knowledge-driven constraints. Interestingly, in recent years, a clearer understanding of the synergies between pattern mining and biclustering gave rise to a new class of algorithms, termed as pattern-based biclustering algorithms. These algorithms, able to efficiently discover flexible biclustering solutions with optimality guarantees, are thus positioned as good candidates for knowledge incorporation. In this context, this work aims to bridge the current lack of solid views on the use of background knowledge to guide (pattern-based) biclustering tasks.\n\n\nMETHODS\nThis work extends (pattern-based) biclustering algorithms to guarantee the satisfiability of constraints derived from background knowledge and to effectively explore efficiency gains from their incorporation. In this context, we first show the relevance of constraints with succinct, (anti-)monotone and convertible properties for the analysis of expression data and biological networks. We further show how pattern-based biclustering algorithms can be adapted to effectively prune of the search space in the presence of such constraints, as well as be guided in the presence of biological annotations. 
Relying on these contributions, we propose BiClustering with Constraints using PAttern Mining (BiC2PAM), an extension of BicPAM and BicNET biclustering algorithms.\n\n\nRESULTS\nExperimental results on biological data demonstrate the importance of incorporating knowledge within biclustering to foster efficiency and enable the discovery of non-trivial biclusters with heightened biological relevance.\n\n\nCONCLUSIONS\nThis work provides the first comprehensive view and sound algorithm for biclustering biological data with constraints derived from user expectations, knowledge repositories and/or literature." }, { "pmid": "14668247", "title": "Characterizing gene sets with FuncAssociate.", "abstract": "SUMMARY\nFuncAssociate is a web-based tool to help researchers use Gene Ontology attributes to characterize large sets of genes derived from experiment. Distinguishing features of FuncAssociate include the ability to handle ranked input lists, and a Monte Carlo simulation approach that is more appropriate to determine significance than other methods, such as Bonferroni or Šidák p-value correction. FuncAssociate currently supports 10 organisms (Vibrio cholerae, Shewanella oneidensis, Saccharomyces cerevisiae, Schizosaccharomyces pombe, Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Mus musculus, Rattus norvegicus and Homo sapiens).\n\n\nAVAILABILITY\nFuncAssociate is freely accessible at http://llama.med.harvard.edu/Software.html. Source code (in Perl and C) is freely available to academic users 'as is'." }, { "pmid": "24334380", "title": "Proximity measures for clustering gene expression microarray data: a validation methodology and a comparative analysis.", "abstract": "Cluster analysis is usually the first step adopted to unveil information from gene expression microarray data. Besides selecting a clustering algorithm, choosing an appropriate proximity measure (similarity or distance) is of great importance to achieve satisfactory clustering results. Nevertheless, up to date, there are no comprehensive guidelines concerning how to choose proximity measures for clustering microarray data. Pearson is the most used proximity measure, whereas characteristics of other ones remain unexplored. In this paper, we investigate the choice of proximity measures for the clustering of microarray data by evaluating the performance of 16 proximity measures in 52 data sets from time course and cancer experiments. Our results support that measures rarely employed in the gene expression literature can provide better results than commonly employed ones, such as Pearson, Spearman, and euclidean distance. Given that different measures stood out for time course and cancer data evaluations, their choice should be specific to each scenario. To evaluate measures on time-course data, we preprocessed and compiled 17 data sets from the microarray literature in a benchmark along with a new methodology, called Intrinsic Biological Separation Ability (IBSA). Both can be employed in future research to assess the effectiveness of new measures for gene time-course data." }, { "pmid": "11752295", "title": "Gene Expression Omnibus: NCBI gene expression and hybridization array data repository.", "abstract": "The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. 
GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo." }, { "pmid": "21751369", "title": "Reactome pathway analysis to enrich biological discovery in proteomics data sets.", "abstract": "Reactome (http://www.reactome.org) is an open-source, expert-authored, peer-reviewed, manually curated database of reactions, pathways and biological processes. We provide an intuitive web-based user interface to pathway knowledge and a suite of data analysis tools. The Pathway Browser is a Systems Biology Graphical Notation-like visualization system that supports manual navigation of pathways by zooming, scrolling and event highlighting, and that exploits PSI Common Query Interface web services to overlay pathways with molecular interaction data from the Reactome Functional Interaction Network and interaction databases such as IntAct, ChEMBL and BioGRID. Pathway and expression analysis tools employ web services to provide ID mapping, pathway assignment and over-representation analysis of user-supplied data sets. By applying Ensembl Compara to curated human proteins and reactions, Reactome generates pathway inferences for 20 other species. The Species Comparison tool provides a summary of results for each of these species as a table showing numbers of orthologous proteins found by pathway from which users can navigate to inferred details for specific proteins and reactions. Reactome's diverse pathway knowledge and suite of data analysis tools provide a platform for data mining, modeling and analysis of large-scale proteomics data sets. This Tutorial is part of the International Proteomics Tutorial Programme (IPTP 8)." }, { "pmid": "19744993", "title": "QuickGO: a web-based tool for Gene Ontology searching.", "abstract": "UNLABELLED\nQuickGO is a web-based tool that allows easy browsing of the Gene Ontology (GO) and all associated electronic and manual GO annotations provided by the GO Consortium annotation groups QuickGO has been a popular GO browser for many years, but after a recent redevelopment it is now able to offer a greater range of facilities including bulk downloads of GO annotation data which can be extensively filtered by a range of different parameters and GO slim set generation.\n\n\nAVAILABILITY AND IMPLEMENTATION\nQuickGO has implemented in JavaScript, Ajax and HTML, with all major browsers supported. It can be queried online at http://www.ebi.ac.uk/QuickGO. The software for QuickGO is freely available under the Apache 2 licence and can be downloaded from http://www.ebi.ac.uk/QuickGO/installation.html" } ]
Proteomes
29419792
PMC5874769
10.3390/proteomes6010010
Revealing Subtle Functional Subgroups in Class A Scavenger Receptors by Pattern Discovery and Disentanglement of Aligned Pattern Clusters
The members of a protein family share similar and diverse functions that are locally conserved as aligned sequence segments. Discovering the association patterns among these segments can reveal subtle family subgroup characteristics. Since aligned residue associations (ARAs) in Aligned Pattern Clusters (APCs) are complex and intertwined, due to entangled functions, factors, and variance in the source environment, we have recently developed a novel method, Aligned Residue Association Discovery and Disentanglement (ARADD), to solve this problem. ARADD first obtains an ARA Frequency Matrix from an APC and converts it to an adjusted statistical residual vector space (SRV). It then disentangles the SRV into Principal Components (PCs) and re-projects their vectors onto an SRV to reveal succinct orthogonal AR groups. In this study, we applied ARADD to class A scavenger receptors (SR-A), a subclass of a diverse protein family binding to modified lipoproteins with diverse biological functionalities that are not explicitly known. Our experimental results demonstrated that ARADD can unveil subtle subgroups in sequence segments with diverse functionality and highly variable sequence lengths. We also demonstrated that the ARAs captured in a Position Weight Matrix or an APC were entangled in biological function and domain location but were disentangled by ARADD to reveal different subclasses without knowing their actual occurrence positions.
2. Related Work

Traditionally, computational sequence analysis methods have been developed to identify conserved sequence patterns within a protein family. Multiple Sequence Alignment (MSA) [8] is only suitable for globally homologous sequences with a high level of sequence similarity [9]. Motif discovery [10], another approach, is based on probabilistic models (such as the position weight matrix [6]) that assume independence between residue columns when representing conserved sequence patterns. This independence assumption is unrealistic in many cases, since correlation of residues along the sequence is commonly observed [11,12].

Pattern discovery is an essential element of predictive analytics [13,14] for knowledge discovery and analysis. Its essence is to discover patterns (motifs) occurring in the data and thereby reveal association patterns for interpretation and classification [15]. Hence, we developed an algorithm to obtain APCs [16,17], which capture functional residue associations and site conservation. Since APCs contain aligned residues forming statistically strong association patterns, this representation is more knowledge-rich [16,17] than MSA and probabilistic models, and APCs can thus reveal locally conserved yet functionally diverse regions of protein families.

Association rule mining [18] is the best-known methodology for mining item sets in relational datasets. Algorithms such as Apriori [19] and FP-growth [20] can be applied to capture associations from relational data. However, the frequent patterns discovered by these algorithms are extremely sensitive to threshold settings. Our new method, Aligned Residue Association Discovery and Disentanglement (ARADD), which evolved from our AVADD method [5], is proposed to solve this problem. ARADD reveals residue association patterns in different orthogonal PCs and Re-projected SRVs (RSRVs) and relates them to different functionalities based only on confidence intervals. As demonstrated in the results reported in this paper, ARADD achieves stable and succinct results in a simple fashion.

As observed in our recent paper [5], a challenging problem in discovering association patterns is that the associations can be masked or obscured in the data because unknown factors in the source environment are entangled. To resolve this problem for general relational datasets, we developed AVADD in our previous work [5]. In this paper, we transfer that methodology to the discovery and disentanglement of ARAs from APCs, for the following reasons: (1) the aligned columns (sites) in an APC can be treated as the attributes of a relational dataset; (2) the residues residing on these sites can be treated as attribute values; and (3) the residue associations in an APC can be treated as attribute value associations. ARADD, extended from AVADD [5], therefore discovers and disentangles ARAs from APCs in the same way that AVADD does for attribute value associations (AVAs) in a relational dataset. This is the most distinctive capability of ARADD in comparison with existing methods: it allows subtle, entangled subgroup characteristics that are masked or not readily apparent in APCs to be revealed.
To the best of our knowledge, ARADD is the only reported method able to disentangle such ARA patterns in APCs.

In summary, compared to the above-mentioned algorithms, ARADD solves the most difficult problems in discovering and analyzing the subgroup characteristics of APCs that contain entangled associations and variation in their aligned sequence patterns. We note that: (1) local associations may occur in different sequence locations or functional domains; (2) subgroups with similar patterns (motifs) may differ slightly in functionality; (3) similar functionality may occur in different functional groups and domains; and (4) multiple functionalities may occur within a functional group dominated by a key function. We refer to such entwined phenomena as the result of entangled ARA patterns.
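To make the pipeline sketched above more concrete (frequency matrix, adjusted statistical residuals, then PCA-based disentanglement), the following Python fragment walks through the general idea on a toy APC. It is only a rough sketch under stated assumptions rather than the authors' ARADD implementation: the toy patterns, the pairwise counting scheme, the adjusted-residual formula, and the plain SVD-based PCA are simplifications introduced here for illustration.

```python
import numpy as np
from itertools import product

# Toy APC: each row is one aligned pattern over the same aligned columns.
apc = ["ACDF", "ACDY", "GCEF", "GCEY", "ACDF", "GCEY"]

# Enumerate the (column, residue) items observed in the APC.
items = sorted({(j, s[j]) for s in apc for j in range(len(s))})
index = {it: k for k, it in enumerate(items)}

# ARA-style frequency matrix: co-occurrence counts between items across patterns.
F = np.zeros((len(items), len(items)))
for s in apc:
    present = [index[(j, s[j])] for j in range(len(s))]
    for a, b in product(present, present):
        F[a, b] += 1

# Adjusted standardized residuals: observed counts vs. expectation under independence.
N = F.sum()
row, col = F.sum(1, keepdims=True), F.sum(0, keepdims=True)
E = row @ col / N
R = (F - E) / np.sqrt(E * (1 - row / N) * (1 - col / N))

# Disentangle with PCA: items loading on the same component tend to vary together.
R_centered = R - R.mean(axis=0)
_, _, Vt = np.linalg.svd(R_centered, full_matrices=False)
for pc in range(2):
    top = np.argsort(-np.abs(Vt[pc]))[:4]
    print(f"PC{pc + 1}:", [(items[k], round(float(Vt[pc][k]), 2)) for k in top])
```

In this simplified view, items that load strongly on the same principal component form one candidate group of associated aligned residues; ARADD additionally re-projects the principal components back into residual spaces (RSRVs) to obtain the final disentangled groups.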
[ "23181696", "26010753", "20375573", "24278755", "19458158", "16679011", "21483869", "16900144", "3612789", "12211028", "24564874", "26356022", "24162173", "27153647" ]
[ { "pmid": "23181696", "title": "The evolution of the class A scavenger receptors.", "abstract": "BACKGROUND\nThe class A scavenger receptors are a subclass of a diverse family of proteins defined based on their ability to bind modified lipoproteins. The 5 members of this family are strikingly variable in their protein structure and function, raising the question as to whether it is appropriate to group them as a family based on their ligand binding abilities.\n\n\nRESULTS\nTo investigate these relationships, we defined the domain architecture of each of the 5 members followed by collecting and annotating class A scavenger receptor mRNA and amino acid sequences from publicly available databases. Phylogenetic analyses, sequence alignments, and permutation tests revealed a common evolutionary ancestry of these proteins, indicating that they form a protein family. We postulate that 4 distinct gene duplication events and subsequent domain fusions, internal repeats, and deletions are responsible for the diverse protein structures and functions of this family. Despite variation in domain structure, there are highly conserved regions across all 5 members, indicating the possibility that these regions may represent key conserved functional motifs.\n\n\nCONCLUSIONS\nWe have shown with significant evidence that the 5 members of the class A scavenger receptors form a protein family. We have indicated that these receptors have a common origin which may provide insight into future functional work with these proteins." }, { "pmid": "26010753", "title": "Scavenger receptor structure and function in health and disease.", "abstract": "Scavenger receptors (SRs) are a 'superfamily' of membrane-bound receptors that were initially thought to bind and internalize modified low-density lipoprotein (LDL), though it is currently known to bind to a variety of ligands including endogenous proteins and pathogens. New family of SRs and their properties have been identified in recent years, and have now been classified into 10 eukaryote families, defined as Classes A-J. These receptors are classified according to their sequences, although in each class they are further classified based in the variations of the sequence. Their ability to bind a range of ligands is reflected on the biological functions such as clearance of modified lipoproteins and pathogens. SR members regulate pathophysiological states including atherosclerosis, pathogen infections, immune surveillance, and cancer. Here, we review our current understanding of SR structure and function implicated in health and disease." }, { "pmid": "20375573", "title": "SR-A, MARCO and TLRs differentially recognise selected surface proteins from Neisseria meningitidis: an example of fine specificity in microbial ligand recognition by innate immune receptors.", "abstract": "Macrophages express various classes of pattern recognition receptors involved in innate immune recognition of artificial, microbial and host-derived ligands. These include the scavenger receptors (SRs), which are important for phagocytosis, and the Toll-like receptors (TLRs) involved in microbe sensing. The class A macrophage scavenger receptor (SR-A) and macrophage receptor with a collagenous structure (MARCO) display similar domain structures and ligand-binding specificity, which has led to the assumption that these two receptors may be functionally redundant. 
In this study we show that SR-A and MARCO differentially recognise artificial polyanionic ligands as well as surface proteins from the pathogenic bacterium Neisseria meningitidis. We show that, while acetylated low-density lipoprotein (AcLDL) is a strong ligand for SR-A, it is not a ligand for MARCO. Of the neisserial proteins that were SR ligands, some were ligands for both receptors, while other proteins were only recognised by either SR-A or MARCO. We also analysed the potential of these ligands to act as TLR agonists and assessed the requirement for SR-A and MARCO in pro-inflammatory cytokine induction. SR ligation alone did not induce cytokine production; however, for proteins that were both SR and TLR ligands, the SRs were required for full activation of TLR pathways." }, { "pmid": "24278755", "title": "Position weight matrix, gibbs sampler, and the associated significance tests in motif characterization and prediction.", "abstract": "Position weight matrix (PWM) is not only one of the most widely used bioinformatic methods, but also a key component in more advanced computational algorithms (e.g., Gibbs sampler) for characterizing and discovering motifs in nucleotide or amino acid sequences. However, few generally applicable statistical tests are available for evaluating the significance of site patterns, PWM, and PWM scores (PWMS) of putative motifs. Statistical significance tests of the PWM output, that is, site-specific frequencies, PWM itself, and PWMS, are in disparate sources and have never been collected in a single paper, with the consequence that many implementations of PWM do not include any significance test. Here I review PWM-based methods used in motif characterization and prediction (including a detailed illustration of the Gibbs sampler for de novo motif discovery), present statistical and probabilistic rationales behind statistical significance tests relevant to PWM, and illustrate their application with real data. The multiple comparison problem associated with the test of site-specific frequencies is best handled by false discovery rate methods. The test of PWM, due to the use of pseudocounts, is best done by resampling methods. The test of individual PWMS for each sequence segment should be based on the extreme value distribution." }, { "pmid": "19458158", "title": "MEME SUITE: tools for motif discovery and searching.", "abstract": "The MEME Suite web server provides a unified portal for online discovery and analysis of sequence motifs representing features such as DNA binding sites and protein interaction domains. The popular MEME motif discovery algorithm is now complemented by the GLAM2 algorithm which allows discovery of motifs containing gaps. Three sequence scanning algorithms--MAST, FIMO and GLAM2SCAN--allow scanning numerous DNA and protein sequence databases for motifs discovered by MEME and GLAM2. Transcription factor motifs (including those discovered using MEME) can be compared with motifs in many popular motif databases using the motif database scanning algorithm TOMTOM. Transcription factor motifs can be further analyzed for putative function by association with Gene Ontology (GO) terms using the motif-GO term association tool GOMO. MEME output now contains sequence LOGOS for each discovered motif, as well as buttons to allow motifs to be conveniently submitted to the sequence and motif database scanning algorithms (MAST, FIMO and TOMTOM), or to GOMO, for further analysis. 
GLAM2 output similarly contains buttons for further analysis using GLAM2SCAN and for rerunning GLAM2 with different parameters. All of the motif-based tools are now implemented as web services via Opal. Source code, binaries and a web server are freely available for noncommercial use at http://meme.nbcr.net." }, { "pmid": "16679011", "title": "Multiple sequence alignment.", "abstract": "Multiple sequence alignments are an essential tool for protein structure and function prediction, phylogeny inference and other common tasks in sequence analysis. Recently developed systems have advanced the state of the art with respect to accuracy, ability to scale to thousands of proteins and flexibility in comparing proteins that do not share the same domain architecture. New multiple alignment benchmark databases include PREFAB, SABMARK, OXBENCH and IRMBASE. Although CLUSTALW is still the most popular alignment tool to date, recent methods offer significantly better alignment quality and, in some cases, reduced computational cost." }, { "pmid": "21483869", "title": "A comprehensive benchmark study of multiple sequence alignment methods: current challenges and future perspectives.", "abstract": "Multiple comparison or alignment of protein sequences has become a fundamental tool in many different domains in modern molecular biology, from evolutionary studies to prediction of 2D/3D structure, molecular function and inter-molecular interactions etc. By placing the sequence in the framework of the overall family, multiple alignments can be used to identify conserved features and to highlight differences or specificities. In this paper, we describe a comprehensive evaluation of many of the most popular methods for multiple sequence alignment (MSA), based on a new benchmark test set. The benchmark is designed to represent typical problems encountered when aligning the large protein sequence sets that result from today's high throughput biotechnologies. We show that alignment methods have significantly progressed and can now identify most of the shared sequence features that determine the broad molecular function(s) of a protein family, even for divergent sequences. However, we have identified a number of important challenges. First, the locally conserved regions, that reflect functional specificities or that modulate a protein's function in a given cellular context, are less well aligned. Second, motifs in natively disordered regions are often misaligned. Third, the badly predicted or fragmentary protein sequences, which make up a large proportion of today's databases, lead to a significant number of alignment errors. Based on this study, we demonstrate that the existing MSA methods can be exploited in combination to improve alignment accuracy, although novel approaches will still be needed to fully explore the most difficult regions. We then propose knowledge-enabled, dynamic solutions that will hopefully pave the way to enhanced alignment construction and exploitation in future evolutionary systems biology studies." }, { "pmid": "3612789", "title": "Correlation of co-ordinated amino acid substitutions with function in viruses related to tobacco mosaic virus.", "abstract": "Sequence data are available for the coat proteins of seven tobamoviruses, with homologies ranging from at least 26% to 82%, and atomic co-ordinates are known for tobacco mosaic virus (TMV) vulgare. A significant spatial relationship has been found between groups of residues with identical amino acid substitution patterns. 
This strongly suggest that their location is linked to a particular function, at least in viruses identical with the wild-type for these residues. The most conserved feature of TMV is the RNA binding region. Core residues are conserved in all viruses or show mutations complementary in volume. The specificity of inter-subunit contacts is achieved in different ways in the three more distantly related viruses." }, { "pmid": "12211028", "title": "Mapping pathways of allosteric communication in GroEL by analysis of correlated mutations.", "abstract": "An interesting example of an allosteric protein is the chaperonin GroEL. It undergoes adenosine 5'-triphosphate-induced conformational changes that are reflected in binding of adenosine 5'-triphosphate with positive cooperativity within rings and negative cooperativity between rings. Herein, correlated mutations in chaperonins are analyzed to unravel routes of allosteric communication in GroEL and in its complex with its co-chaperonin GroES. It is shown that analysis of correlated mutations in the chaperonin family can provide information about pathways of allosteric communication within GroEL and between GroEL and GroES. The results are discussed in the context of available structural, genetic, and biochemical data concerning short- and long-range interactions in the GroE system." }, { "pmid": "24564874", "title": "Ranking and compacting binding segments of protein families using aligned pattern clusters.", "abstract": "BACKGROUND\nDiscovering sequence patterns with variation can unveil functions of a protein family that are important for drug discovery. Exploring protein families using existing methods such as multiple sequence alignment is computationally expensive, thus pattern search, called motif finding in Bioinformatics, is used. However, at present, combinatorial algorithms result in large sets of solutions, and probabilistic models require a richer representation of the amino acid associations. To overcome these shortcomings, we present a method for ranking and compacting these solutions in a new representation referred to as Aligned Pattern Clusters (APCs). To tackle the problem of a large solution set, our method reveals a reduced set of candidate solutions without losing any information. To address the problem of representation, our method captures the amino acid associations and conservations of the aligned patterns. Our algorithm renders a set of APCs in which a set of patterns is discovered, pruned, aligned, and synthesized from the input sequences of a protein family.\n\n\nRESULTS\nOur algorithm identifies the binding or other functional segments and their embedded residues which are important drug targets from the cytochrome c and the ubiquitin protein families taken from Unitprot. The results are independently confirmed by pFam's multiple sequence alignment. For cytochrome c protein the number of resulting patterns with variations are reduced by 76.62% from the number of original patterns without variations. Furthermore, all of the top four candidate APCs correspond to the binding segments with one of each of their conserved amino acid as the binding residue. The discovered proximal APCs agree with pFam and PROSITE results. Surprisingly, the distal binding site discovered by our algorithm is not discovered by pFam nor PROSITE, but confirmed by the three-dimensional cytochrome c structure. 
When applied to the ubiquitin protein family, our results agree with pFam and reveals six of the seven Lysine binding residues as conserved aligned columns with entropy redundancy measure of 1.0.\n\n\nCONCLUSION\nThe discovery, ranking, reduction, and representation of a set of patterns is important to avert time-consuming and expensive simulations and experimentations during proteomic study and drug discovery." }, { "pmid": "26356022", "title": "Aligning and Clustering Patterns to Reveal the Protein Functionality of Sequences.", "abstract": "Discovering sequence patterns with variations unveils significant functions of a protein family. Existing combinatorial methods of discovering patterns with variations are computationally expensive, and probabilistic methods require more elaborate probabilistic representation of the amino acid associations. To overcome these shortcomings, this paper presents a new computationally efficient method for representing patterns with variations in a compact representation called Aligned Pattern Cluster (AP Cluster). To tackle the runtime, our method discovers a shortened list of non-redundant statistically significant sequence associations based on our previous work. To address the representation of protein functional regions, our pattern alignment and clustering step, presented in this paper captures the conservations and variations of the aligned patterns. We further refine our solution to allow more coverage of sequences via extending the AP Clusters containing only statistically significant patterns to Weak and Conserved AP Clusters. When applied to the cytochrome c, the ubiquitin, and the triosephosphate isomerase protein families, our algorithm identifies the binding segments as well as the binding residues. When compared to other methods, ours discovers all binding sites in the AP Clusters with superior entropy and coverage. The identification of patterns with variations help biologists to avoid time-consuming simulations and experimentations. (Software available upon request)." }, { "pmid": "24162173", "title": "A primer to frequent itemset mining for bioinformatics.", "abstract": "Over the past two decades, pattern mining techniques have become an integral part of many bioinformatics solutions. Frequent itemset mining is a popular group of pattern mining techniques designed to identify elements that frequently co-occur. An archetypical example is the identification of products that often end up together in the same shopping basket in supermarket transactions. A number of algorithms have been developed to address variations of this computationally non-trivial problem. Frequent itemset mining techniques are able to efficiently capture the characteristics of (complex) data and succinctly summarize it. Owing to these and other interesting properties, these techniques have proven their value in biological data analysis. Nevertheless, information about the bioinformatics applications of these techniques remains scattered. In this primer, we introduce frequent itemset mining and their derived association rules for life scientists. We give an overview of various algorithms, and illustrate how they can be used in several real-life bioinformatics application domains. We end with a discussion of the future potential and open challenges for frequent itemset mining in the life sciences." 
}, { "pmid": "27153647", "title": "Partitioning and correlating subgroup characteristics from Aligned Pattern Clusters.", "abstract": "MOTIVATION\nEvolutionarily conserved amino acids within proteins characterize functional or structural regions. Conversely, less conserved amino acids within these regions are generally areas of evolutionary divergence. A priori knowledge of biological function and species can help interpret the amino acid differences between sequences. However, this information is often erroneous or unavailable, hampering discovery with supervised algorithms. Also, most of the current unsupervised methods depend on full sequence similarity, which become inaccurate when proteins diverge (e.g. inversions, deletions, insertions). Due to these and other shortcomings, we developed a novel unsupervised algorithm which discovers highly conserved regions and uses two types of information measures: (i) data measures computed from input sequences; and (ii) class measures computed using a priori class groupings in order to reveal subgroups (i.e. classes) or functional characteristics.\n\n\nRESULTS\nUsing known and putative sequences of two proteins belonging to a relatively uncharacterized protein family we were able to group evolutionarily related sequences and identify conserved regions, which are strong homologous association patterns called Aligned Pattern Clusters, within individual proteins and across the members of this family. An initial synthetic demonstration and in silico results reveal that (i) the data measures are unbiased and (ii) our class measures can accurately rank the quality of the evolutionarily relevant groupings. Furthermore, combining our data and class measures allowed us to interpret the results by inferring regions of biological importance within the binding domain of these proteins. Compared to popular supervised methods, our algorithm has a superior runtime and comparable accuracy.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe dataset and results are available at www.pami.uwaterloo.ca/∼ealee/files/classification2015 CONTACT: [email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." } ]
Frontiers in Computational Neuroscience
29674961
PMC5895733
10.3389/fncom.2018.00024
Unsupervised Feature Learning With Winner-Takes-All Based STDP
We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent-Plasticity (STDP) biological learning rule. We show equivalence between rank order coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware. Next, we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as a homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state-of-the-art unsupervised learning methods.
2. Related work

2.1. Spiking neural networks

2.1.1. Leaky-integrate-and-fire model

Spiking neural networks (SNNs) are widely used in the neuroscience community to build biologically plausible models of neuron populations in the brain. These models have been designed to reproduce the information propagation and temporal dynamics observable in cortical layers. As many models exist, from the simplest to the most realistic, we focus here on the Leaky-Integrate-and-Fire (LIF) model, a simple and fast model of a spiking neuron.

LIF neurons are asynchronous units receiving input signals, called spikes, from pre-synaptic cells. Each spike x_i is modulated by the weight w_i of the corresponding synapse and added to the membrane potential u. In a synchronous formalism, the update of the membrane potential at time step t can be expressed as:

$$\tau \frac{\delta u(t)}{\delta t} = -\left(u(t) - u_{res}\right) + \sum_{i=1}^{n} w_i\, x_{i,t} \tag{1}$$

where τ is the time constant of the neuron, n is the number of afferent cells, and u_res is the reset potential (which we also take as the initial potential at t_0 = 0). When u reaches a threshold T, the neuron emits a spike along its axons and resets its potential to its initial value u_res.

This type of network has proven to be energy-efficient on analog devices (Gamrat et al., 2015) thanks to its asynchronous and sparse characteristics. Even on digital synchronous devices, spikes can be encoded as binary variables, thereby carrying the maximum information over the minimum memory unit.

2.1.2. Rank order coding network

A model which fits the criteria of processing speed and suitability for image data is the rank order coding SNN (Thorpe et al., 2001). This type of network processes information in a single feed-forward propagation step by means of spike latencies. One strong hypothesis of this type of network is that information can be computed with only one spike per neuron, which has been demonstrated in rapid visual categorization tasks (Thorpe et al., 1996). Implementations of such networks have proven efficient for simple categorization tasks such as frontal-face detection in images (Van Rullen et al., 1998; Delorme and Thorpe, 2001).

The visual-detection software engine SpikeNet (Thorpe et al., 2004) is based on rank order coding networks and is used in industrial applications including face processing for interior security, intrusion detection in airports, and casino game monitoring. It is also able to learn new objects from a single image, encoding objects with only the first firing spikes.

The rank order model SpikeNet is based on a multi-layer architecture of LIF neurons, all sharing the time constant τ, the reset potential u_res, and the spiking threshold T. During learning, only the first spike time of each neuron is used to learn a new object. During inference, the network only needs to know whether a neuron has spiked or not, hence allowing the use of a binary representation.
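As a rough illustration of the synchronous LIF update in Equation (1) combined with the one-spike-per-neuron constraint used in rank order coding, the following Python sketch simulates a tiny feed-forward layer. The sizes, parameter values, and the unit-time-step discretization are illustrative assumptions introduced here, not values taken from the works cited above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_steps = 16, 4, 20
tau, u_res, threshold = 5.0, 0.0, 1.0          # time constant, reset potential, firing threshold

W = rng.uniform(0.0, 0.5, size=(n_out, n_in))  # synaptic weights
inputs = rng.random((n_steps, n_in)) < 0.3     # Bernoulli input spike trains
u = np.full(n_out, u_res)                      # membrane potentials
first_spike = np.full(n_out, -1)               # rank order coding: keep only the first spike time

for t in range(n_steps):
    # Discretized Equation (1) with a unit time step: leak toward u_res plus weighted input spikes.
    u = u + (-(u - u_res) + W @ inputs[t]) / tau
    fired = (u >= threshold) & (first_spike < 0)
    first_spike[fired] = t
    u[fired] = u_res                           # reset after the (single) spike

print("first spike times per output neuron:", first_spike)   # -1 means the neuron never fired
```

Because each neuron is allowed at most one spike, the whole propagation can be collapsed into a single feed-forward pass over spike ranks, which is what makes this coding scheme attractive for GPU simulation.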
2.2. Learning with spiking neural networks

2.2.1. Deep neural networks conversion

The computational advantages of SNNs have led some researchers to convert fully trained deep neural networks into SNNs (Diehl et al., 2015, 2016), in order to give SNNs the inference performance of back-propagation-trained networks. However, deep neural networks rely on the back-propagation algorithm to learn their parameters, which remains computationally heavy and requires enormous amounts of labeled data. Also, while some research hypothesizes that the brain could implement back-propagation (Bengio et al., 2015), the biological structures that could support such an error-transmission process remain to be discovered. Finally, unsupervised learning within DNNs remains a challenge, whereas the brain may learn most of its representations through unsupervised learning (Turk-Browne et al., 2009). Suffering from both its computational cost and its lack of biological plausibility, back-propagation may not be the best learning algorithm for taking advantage of the capabilities of SNNs.

On the other hand, research in neuroscience has developed models of unsupervised learning in the brain based on SNNs. One of the most popular is STDP.

2.2.2. Spike timing dependent plasticity

Spike-Timing-Dependent-Plasticity is a biological learning rule which uses the spike timing of pre- and post-synaptic neurons to update the values of the synapses. This learning rule is said to be Hebbian ("what fires together wires together"). Synaptic weights between two neurons are updated as a function of the timing difference between a pair or a triplet of pre- and post-synaptic spikes. Long-Term Potentiation (LTP) or Long-Term Depression (LTD) is triggered depending on whether a presynaptic spike occurs before or after a post-synaptic spike, respectively.

Formulated two decades ago by Markram et al. (1997), STDP has gained interest in the neurocomputation community as it allows SNNs to be used for unsupervised representation learning (Kempter et al., 2001; Rao and Sejnowski, 2001; Masquelier and Thorpe, 2007; Nessler et al., 2009). The features learnt in low-level layers have also been shown to be relevant for classification tasks when combined with additional supervision in the top layers (Beyeler et al., 2013; Mozafari et al., 2017). As such, STDP may be the main unsupervised learning mechanism in biological neural networks, and it shows nearly equivalent mathematical properties to machine learning approaches such as auto-encoders (Burbank, 2015) and non-negative matrix factorization (Carlson et al., 2013; Beyeler et al., in review).

We first consider the basic pair-based STDP rule from Kempter et al. (2001). Each time a post-synaptic neuron spikes, one computes the timing difference Δt = t_pre − t_post (relative to each presynaptic spike) and updates each synapse w as follows:

$$\Delta w = \begin{cases} A_{+} \cdot e^{\Delta t/\tau_{+}} & \text{if } \Delta t < 0 \\ A_{-} \cdot e^{-\Delta t/\tau_{-}} & \text{otherwise} \end{cases} \tag{2}$$

where A_+ > 0, A_− < 0, and τ_+, τ_− > 0. The top and bottom terms in this equation are the LTP and LTD terms, respectively. This update rule can be made highly computationally efficient by removing the exponential terms e^{Δt/τ}, resulting in a simple linear time-dependent update rule.

The parameters A_+ and A_− must be tuned in order to regularize weight updates during the learning process; in practice, tuning them is a tedious task. To avoid weight divergence, networks trained with an STDP learning rule should also implement stabilizing processes such as refractory periods, homeostasis with weight normalization, or inhibition. Weight regularization may also be implemented directly by reformulating the learning rule equations. For instance, in Masquelier and Thorpe (2007) the exponential term in Equation (2) is replaced by a process which guarantees that the weights remain in the range [0, 1]:

$$\Delta w = \begin{cases} A_{+} \cdot w \cdot (1 - w) & \text{if } \Delta t < 0 \\ A_{-} \cdot w \cdot (1 - w) & \text{otherwise} \end{cases} \tag{3}$$

Note that in Equation (3) the amplitude of the update is independent of the absolute time difference between pre- and post-synaptic spikes, which only works if pairs of spikes belong to the same finite time window. In Masquelier and Thorpe (2007) this is guaranteed by the overall propagation scheme, which is applied to image data and relies on a single feed-forward propagation step taking into account only one spike per neuron. The maximum time difference between pre- and post-synaptic spikes is therefore bounded in this case.
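The two update rules above can be summarized in a few lines of Python. The sketch below implements the pair-based window of Equation (2) and the bounded multiplicative variant of Equation (3); the parameter values are arbitrary illustrative choices, not those used in the cited works.

```python
import numpy as np

A_plus, A_minus = 0.05, -0.03          # A+ > 0, A- < 0
tau_plus, tau_minus = 20.0, 20.0       # time constants of the learning window

def dw_pair(delta_t):
    """Equation (2); delta_t = t_pre - t_post, so delta_t < 0 means the pre spike came first (LTP)."""
    if delta_t < 0:
        return A_plus * np.exp(delta_t / tau_plus)
    return A_minus * np.exp(-delta_t / tau_minus)

def dw_bounded(delta_t, w):
    """Equation (3): the update is scaled by w(1 - w), which keeps w inside [0, 1]."""
    a = A_plus if delta_t < 0 else A_minus
    return a * w * (1.0 - w)

print(round(dw_pair(-5.0), 4), round(dw_pair(+5.0), 4))   # LTP vs. LTD amplitudes

# Repeated potentiation with the bounded rule: w grows but saturates below 1.
w = 0.5
for _ in range(50):
    w += dw_bounded(delta_t=-5.0, w=w)
print(round(w, 3))
```

With the bounded rule, repeated potentiation saturates smoothly toward 1 instead of diverging, which is one of the stability properties discussed above.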
2.3. Regulation mechanisms in neural networks

2.3.1. WTA as a sparsity constraint in deep neural networks

Winner-takes-all (WTA) mechanisms are an interesting property of biological neural networks which allow fast analysis of objects in exploration tasks. According to de Almeida et al. (2009), gamma inhibitory oscillations perform a WTA mechanism that is independent of the absolute activation level. They may select the principal neurons firing during a stimulation, thus allowing, e.g., the tuning of narrow orientation filters in V1.

WTA has been used in deep neural networks by Makhzani and Frey (2015) as a sparsity constraint in autoencoders. Instead of using noise or specific loss functions to impose activity sparsity, the authors propose an activity-driven regularization technique based on a WTA operator, defined by Equation (4):

$$\mathrm{WTA}(X, d)_j = \begin{cases} X_j & \text{if } |X_j| = \max_{k \in d}\left(|X_k|\right) \\ 0 & \text{otherwise} \end{cases} \tag{4}$$

where X is a multidimensional matrix and d is a set of given dimensions of X.

After definition of a convolutional architecture, each layer is trained in a greedy layer-wise manner with the representation from the previous layer as input. To train a convolutional layer, a WTA layer and a deconvolution layer are placed on top of it. The WTA layer applies the WTA operator over the spatial dimensions of the convolutional output batch and retains only the first np% of activities of each neuron. In this way, for a given layer with N representation maps per batch and C output channels, only N·np·C activities are kept at their initial values, all other activation values being zeroed. The deconvolutional layer then attempts to reconstruct the input batch.

While this method demonstrates the potential usefulness of WTA mechanisms in neural networks, it still relies on computationally heavy backpropagation to update the weights of the network.
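As a concrete reading of Equation (4), the following NumPy sketch applies a spatial WTA over a (batch, channels, height, width) activation tensor, keeping a single winning activation per feature map. The tensor layout, the single winner rather than the first np% of activities, and the toy shapes are simplifying assumptions made here for illustration.

```python
import numpy as np

def wta_spatial(x):
    """Zero everything except the max-|activation| location of each (sample, channel) map."""
    flat = np.abs(x).reshape(x.shape[0], x.shape[1], -1)
    winners = flat.argmax(axis=-1)                 # index of the winner per feature map
    mask = np.zeros_like(flat, dtype=bool)
    b, c = np.indices(winners.shape)
    mask[b, c, winners] = True
    return x * mask.reshape(x.shape)

x = np.random.default_rng(0).standard_normal((2, 3, 4, 4))
y = wta_spatial(x)
print((y != 0).sum(axis=(2, 3)))   # exactly one non-zero entry per feature map
```

The np% variant described above can be obtained by keeping the top-k indices per map instead of a single argmax.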
2.3.2. Homosynaptic and heterosynaptic homeostasis

In their original formulation, Hebbian-type learning rules (STDP, the Oja rule, the BCM rule) do not include any regulation process. The absence of regulation of the synaptic weights may negatively affect the way a network learns: Hebbian learning allows the synaptic weights to grow indefinitely, which can lead to abnormally high spiking activity and to some neurons always winning the competitions induced by inhibitory circuits. To avoid such issues, two types of homeostasis have been formulated.

Homosynaptic homeostasis acts on a single synapse and depends only on the activity of its own inputs and outputs. This homeostatic process can be modeled with a self-regulatory term in the Hebbian rule, as in Masquelier and Thorpe (2007), or as a synaptic scaling rule depending on the activity driven by the synapse, as in Carlson et al. (2013).

Heterosynaptic homeostasis is a convenient way to regulate the synaptic strength of a network. Models of such homeostasis take into account all the synapses connected to a given neuron, all the synapses in a layer (like L2 weight decay in deep learning), or the whole network. The biological plausibility of such a process is still debated. Nevertheless, some evidence of heterosynaptic homeostasis compensating the runaway dynamics of synaptic strength introduced by Hebbian learning has been observed in the brain (Royer and Paré, 2003; Chistiakova et al., 2014). It therefore plays an important role in the regulation of spiking activity in the brain and is complementary to homosynaptic plasticity. (A minimal sketch of such a normalization step is given at the end of this section.)

2.4. Neural networks and image processing

Image processing with neural networks is performed with multiple layers of spatial operations (convolutions, pooling, and non-linearities), giving these methods the name Deep Convolutional Neural Networks. Their layer architecture is directly inspired by the biological processes of the visual cortex, in particular by the well-known HMAX model (Riesenhuber and Poggio, 1999), except that the layers' weights are learnt with back-propagation. Deep CNN models use a single forward propagation step to perform a given task: even if convolutions on large maps may be computationally heavy, all the computations are done in only one pass through each layer. A further advantage of CNNs is their ability to learn from raw data, such as pixels for images or waveforms for audio.

On the other hand, since SNNs use spikes to transmit information to the upper layers, they need to perform neuron potential updates at each time step. Hence, applying such networks with a convolutional architecture requires heavy computations at every time step. However, spikes and synaptic weights may be set to a very low bit resolution (down to 1 bit) to reduce this computational cost (Thorpe et al., 2004). Also, STDP is known to learn new representations within a few iterations (Masquelier et al., 2009), theoretically reducing the number of epochs required to converge.
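Returning to the heterosynaptic homeostasis discussed in Section 2.3.2, one generic way to implement it is to rescale the afferent weights of each output feature to a fixed norm after each learning step. The sketch below illustrates only this generic idea; the target norm and shapes are arbitrary assumptions, and this is not the specific feature-wise normalization used later in the paper.

```python
import numpy as np

def normalize_features(W, target_norm=1.0, eps=1e-8):
    """W has shape (n_features, n_inputs); rescale each feature's afferent weight vector."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * (target_norm / (norms + eps))

W = np.random.default_rng(0).uniform(0.0, 1.0, size=(8, 25))
W = normalize_features(W)
print(np.round(np.linalg.norm(W, axis=1), 3))   # all feature norms are now approximately 1.0
```

Layer-wide or network-wide variants simply change the set of weights over which the norm is computed.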
[ "23994510", "26633645", "26340772", "24727248", "19515917", "11665771", "26941637", "26598671", "27651489", "11705408", "29328958", "8985014", "19718815", "17305422", "27741229", "11570997", "10526343", "12673250", "26217169", "11665765", "8632824", "18787244", "18823241", "9886652" ]
[ { "pmid": "23994510", "title": "Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule.", "abstract": "Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain." }, { "pmid": "26633645", "title": "Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.", "abstract": "The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. 
Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks." }, { "pmid": "26340772", "title": "PCANet: A Simple Deep Learning Baseline for Image Classification?", "abstract": "In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition." }, { "pmid": "24727248", "title": "Heterosynaptic plasticity: multiple mechanisms and multiple roles.", "abstract": "Plasticity is a universal property of synapses. It is expressed in a variety of forms mediated by a multitude of mechanisms. Here we consider two broad kinds of plasticity that differ in their requirement for presynaptic activity during the induction. Homosynaptic plasticity occurs at synapses that were active during the induction. It is also called input specific or associative, and it is governed by Hebbian-type learning rules. Heterosynaptic plasticity can be induced by episodes of strong postsynaptic activity also at synapses that were not active during the induction, thus making any synapse at a cell a target to heterosynaptic changes. Both forms can be induced by typical protocols used for plasticity induction and operate on the same time scales but have differential computational properties and play different roles in learning systems. Homosynaptic plasticity mediates associative modifications of synaptic weights. 
Heterosynaptic plasticity counteracts runaway dynamics introduced by Hebbian-type rules and balances synaptic changes. It provides learning systems with stability and enhances synaptic competition. We conclude that homosynaptic and heterosynaptic plasticity represent complementary properties of modifiable synapses, and both are necessary for normal operation of neural systems with plastic synapses." }, { "pmid": "19515917", "title": "A second function of gamma frequency oscillations: an E%-max winner-take-all mechanism selects which cells fire.", "abstract": "The role of gamma oscillations in producing synchronized firing of groups of principal cells is well known. Here, we argue that gamma oscillations have a second function: they select which principal cells fire. This selection process occurs through the interaction of excitation with gamma frequency feedback inhibition. We sought to understand the rules that govern this process. One possibility is that a constant fraction of cells fire. Our analysis shows, however, that the fraction is not robust because it depends on the distribution of excitation to different cells. A robust description is termed E%-max: cells fire if they have suprathreshold excitation (E) within E% of the cell that has maximum excitation. The value of E%-max is approximated by the ratio of the delay of feedback inhibition to the membrane time constant. From measured values, we estimate that E%-max is 5-15%. Thus, an E%-max winner-take-all process can discriminate between groups of cells that have only small differences in excitation. To test the utility of this framework, we analyzed the role of oscillations in V1, one of the few systems in which both spiking and intracellular excitation have been directly measured. We show that an E%-max winner-take-all process provides a simple explanation for why the orientation tuning of firing is narrower than that of the excitatory input and why this difference is not affected by increasing excitation. Because gamma oscillations occur in many brain regions, the framework we have developed for understanding the second function of gamma is likely to have wide applicability." }, { "pmid": "11665771", "title": "Face identification using one spike per neuron: resistance to image degradations.", "abstract": "The short response latencies of face selective neurons in the inferotemporal cortex impose major constraints on models of visual processing. It appears that visual information must essentially propagate in a feed-forward fashion with most neurons only having time to fire one spike. We hypothesize that flashed stimuli can be encoded by the order of firing of ganglion cells in the retina and propose a neuronal mechanism, that could be related to fast shunting inhibition, to decode such information. Based on these assumptions, we built a three-layered neural network of retino-topically organized neuronal maps. We showed, by using a learning rule involving spike timing dependant plasticity, that neuronal maps in the output layer can be trained to recognize natural photographs of faces. Not only was the model able to generalize to novel views of the same faces, it was also remarkably resistant to image noise and reductions in contrast." 
}, { "pmid": "26941637", "title": "Unsupervised learning of digit recognition using spike-timing-dependent plasticity.", "abstract": "In order to understand how the mammalian neocortex is performing computations, two things are necessary; we need to have a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there is an increasing interest in how spiking neural networks (SNN) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to a SNN. We present a SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks." }, { "pmid": "26598671", "title": "Attention searches nonuniformly in space and in time.", "abstract": "Difficult search tasks are known to involve attentional resources, but the spatiotemporal behavior of attention remains unknown. Are multiple search targets processed in sequence or in parallel? We developed an innovative methodology to solve this notoriously difficult problem. Observers performed a difficult search task during which two probes were flashed at varying delays. Performance in reporting probes at each location was considered a measure of attentional deployment. By solving a second-degree equation, we determined the probability of probe report at the most and least attended probe locations on each trial. Because these values differed significantly, we conclude that attention was focused on one stimulus or subgroup of stimuli at a time, and not divided uniformly among all search stimuli. Furthermore, this deployment was modulated periodically over time at ∼ 7 Hz. These results provide evidence for a nonuniform spatiotemporal deployment of attention during difficult search." }, { "pmid": "27651489", "title": "Convolutional networks for fast, energy-efficient neuromorphic computing.", "abstract": "Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. 
Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer." }, { "pmid": "11705408", "title": "Intrinsic stabilization of output rates by spike-based Hebbian learning.", "abstract": "We study analytically a model of long-term synaptic plasticity where synaptic changes are triggered by presynaptic spikes, postsynaptic spikes, and the time differences between presynaptic and postsynaptic spikes. The changes due to correlated input and output spikes are quantified by means of a learning window. We show that plasticity can lead to an intrinsic stabilization of the mean firing rate of the postsynaptic neuron. Subtractive normalization of the synaptic weights (summed over all presynaptic inputs converging on a postsynaptic neuron) follows if, in addition, the mean input rates and the mean input correlations are identical at all synapses. If the integral over the learning window is positive, firing-rate stabilization requires a non-Hebbian component, whereas such a component is not needed if the integral of the learning window is negative. A negative integral corresponds to anti-Hebbian learning in a model with slowly varying firing rates. For spike-based learning, a strict distinction between Hebbian and anti-Hebbian rules is questionable since learning is driven by correlations on the timescale of the learning window. The correlations between presynaptic and postsynaptic firing are evaluated for a piecewise-linear Poisson model and for a noisy spiking neuron model with refractoriness. While a negative integral over the learning window leads to intrinsic rate stabilization, the positive part of the learning window picks up spatial and temporal correlations in the input." }, { "pmid": "29328958", "title": "STDP-based spiking deep convolutional neural networks for object recognition.", "abstract": "Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. 
Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousands spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundreds of such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions." }, { "pmid": "8985014", "title": "Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs.", "abstract": "Activity-driven modifications in synaptic connections between neurons in the neocortex may occur during development and learning. In dual whole-cell voltage recordings from pyramidal neurons, the coincidence of postsynaptic action potentials (APs) and unitary excitatory postsynaptic potentials (EPSPs) was found to induce changes in EPSPs. Their average amplitudes were differentially up- or down-regulated, depending on the precise timing of postsynaptic APs relative to EPSPs. These observations suggest that APs propagating back into dendrites serve to modify single active synaptic connections, depending on the pattern of electrical activity in the pre- and postsynaptic neurons." }, { "pmid": "19718815", "title": "Competitive STDP-based spike pattern learning.", "abstract": "Recently it has been shown that a repeating arbitrary spatiotemporal spike pattern hidden in equally dense distracter spike trains can be robustly detected and learned by a single neuron equipped with spike-timing-dependent plasticity (STDP) (Masquelier, Guyonneau, & Thorpe, 2008). To be precise, the neuron becomes selective to successive coincidences of the pattern. Here we extend this scheme to a more realistic scenario with multiple repeating patterns and multiple STDP neurons \"listening\" to the incoming spike trains. These \"listening\" neurons are in competition: as soon as one fires, it strongly inhibits the others through lateral connections (one-winner-take-all mechanism). This tends to prevent the neurons from learning the same (parts of the) repeating patterns, as shown in simulations. Instead, the population self-organizes, trying to cover the different patterns or coding one pattern by the successive firings of several neurons, and a powerful distributed coding scheme emerges. Taken together, these results illustrate how the brain could easily encode and decode information in the spike times, a theory referred to as temporal coding, and how STDP could play a key role by detecting repeating patterns and generating selective response to them." 
}, { "pmid": "17305422", "title": "Unsupervised learning of visual features through spike timing dependent plasticity.", "abstract": "Spike timing dependent plasticity (STDP) is a learning rule that modifies synaptic strength as a function of the relative timing of pre- and postsynaptic spikes. When a neuron is repeatedly presented with similar inputs, STDP is known to have the effect of concentrating high synaptic weights on afferents that systematically fire early, while postsynaptic spike latencies decrease. Here we use this learning rule in an asynchronous feedforward spiking neural network that mimics the ventral visual pathway and shows that when the network is presented with natural images, selectivity to intermediate-complexity visual features emerges. Those features, which correspond to prototypical patterns that are both salient and consistently present in the images, are highly informative and enable robust object recognition, as demonstrated on various classification tasks. Taken together, these results show that temporal codes may be a key to understanding the phenomenal processing speed achieved by the visual system and that STDP can lead to fast and selective responses." }, { "pmid": "27741229", "title": "Theta-Gamma Coding Meets Communication-through-Coherence: Neuronal Oscillatory Multiplexing Theories Reconciled.", "abstract": "Several theories have been advanced to explain how cross-frequency coupling, the interaction of neuronal oscillations at different frequencies, could enable item multiplexing in neural systems. The communication-through-coherence theory proposes that phase-matching of gamma oscillations between areas enables selective processing of a single item at a time, and a later refinement of the theory includes a theta-frequency oscillation that provides a periodic reset of the system. Alternatively, the theta-gamma neural code theory proposes that a sequence of items is processed, one per gamma cycle, and that this sequence is repeated or updated across theta cycles. In short, both theories serve to segregate representations via the temporal domain, but differ on the number of objects concurrently represented. In this study, we set out to test whether each of these theories is actually physiologically plausible, by implementing them within a single model inspired by physiological data. Using a spiking network model of visual processing, we show that each of these theories is physiologically plausible and computationally useful. Both theories were implemented within a single network architecture, with two areas connected in a feedforward manner, and gamma oscillations generated by feedback inhibition within areas. Simply increasing the amplitude of global inhibition in the lower area, equivalent to an increase in the spatial scope of the gamma oscillation, yielded a switch from one mode to the other. Thus, these different processing modes may co-exist in the brain, enabling dynamic switching between exploratory and selective modes of attention." }, { "pmid": "11570997", "title": "Spike-timing-dependent Hebbian plasticity as temporal difference learning.", "abstract": "A spike-timing-dependent Hebbian mechanism governs the plasticity of recurrent excitatory synapses in the neocortex: synapses that are activated a few milliseconds before a postsynaptic spike are potentiated, while those that are activated a few milliseconds after are depressed. We show that such a mechanism can implement a form of temporal difference learning for prediction of input sequences. 
Using a biophysical model of a cortical neuron, we show that a temporal difference rule used in conjunction with dendritic backpropagating action potentials reproduces the temporally asymmetric window of Hebbian plasticity observed physiologically. Furthermore, the size and shape of the window vary with the distance of the synapse from the soma. Using a simple example, we show how a spike-timing-based temporal difference learning rule can allow a network of neocortical neurons to predict an input a few milliseconds before the input's expected arrival." }, { "pmid": "10526343", "title": "Hierarchical models of object recognition in cortex.", "abstract": "Visual processing in cortex is classically modeled as a hierarchy of increasingly sophisticated representations, naturally extending the model of simple to complex cells of Hubel and Wiesel. Surprisingly, little quantitative modeling has been done to explore the biological feasibility of this class of models to explain aspects of higher-level visual processing such as object recognition. We describe a new hierarchical model consistent with physiological data from inferotemporal cortex that accounts for this complex visual task and makes testable predictions. The model is based on a MAX-like operation applied to inputs to certain cortical neurons that may have a general role in cortical function." }, { "pmid": "12673250", "title": "Conservation of total synaptic weight through balanced synaptic depression and potentiation.", "abstract": "Memory is believed to depend on activity-dependent changes in the strength of synapses. In part, this view is based on evidence that the efficacy of synapses can be enhanced or depressed depending on the timing of pre- and postsynaptic activity. However, when such plastic synapses are incorporated into neural network models, stability problems may develop because the potentiation or depression of synapses increases the likelihood that they will be further strengthened or weakened. Here we report biological evidence for a homeostatic mechanism that reconciles the apparently opposite requirements of plasticity and stability. We show that, in intercalated neurons of the amygdala, activity-dependent potentiation or depression of particular glutamatergic inputs leads to opposite changes in the strength of inputs ending at other dendritic sites. As a result, little change in total synaptic weight occurs, even though the relative strength of inputs is modified. Furthermore, hetero- but not homosynaptic alterations are blocked by intracellular dialysis of drugs that prevent Ca2+ release from intracellular stores. Thus, in intercalated neurons at least, inverse heterosynaptic plasticity tends to compensate for homosynaptic long-term potentiation and depression, thus stabilizing total synaptic weight." }, { "pmid": "26217169", "title": "Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.", "abstract": "Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads.
The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time." }, { "pmid": "11665765", "title": "Spike-based strategies for rapid processing.", "abstract": "Most experimental and theoretical studies of brain function assume that neurons transmit information as a rate code, but recent studies on the speed of visual processing impose temporal constraints that appear incompatible with such a coding scheme. Other coding schemes that use the pattern of spikes across a population a neurons may be much more efficient. For example, since strongly activated neurons tend to fire first, one can use the order of firing as a code. We argue that Rank Order Coding is not only very efficient, but also easy to implement in biological hardware: neurons can be made sensitive to the order of activation of their inputs by including a feed-forward shunting inhibition mechanism that progressively desensitizes the neuronal population during a wave of afferent activity. In such a case, maximum activation will only be produced when the afferent inputs are activated in the order of their synaptic weights." }, { "pmid": "8632824", "title": "Speed of processing in the human visual system.", "abstract": "How long does it take for the human visual system to process a complex natural image? Subjectively, recognition of familiar objects and scenes appears to be virtually instantaneous, but measuring this processing time experimentally has proved difficult. Behavioural measures such as reaction times can be used, but these include not only visual processing but also the time required for response execution. However, event-related potentials (ERPs) can sometimes reveal signs of neural processing well before the motor output. Here we use a go/no-go categorization task in which subjects have to decide whether a previously unseen photograph, flashed on for just 20 ms, contains an animal. ERP analysis revealed a frontal negativity specific to no-go trials that develops roughly 150 ms after stimulus onset. We conclude that the visual processing needed to perform this highly demanding task can be achieved in under 150 ms." 
}, { "pmid": "18787244", "title": "80 million tiny images: a large data set for nonparametric object and scene recognition.", "abstract": "With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, we explore this world with the aid of a large dataset of 79,302,017 images collected from the Internet. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the Wordnet lexical database. Hence the image database gives a comprehensive coverage of all object categories and scenes. The semantic information from Wordnet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors." }, { "pmid": "18823241", "title": "Neural evidence of statistical learning: efficient detection of visual regularities without awareness.", "abstract": "Our environment contains regularities distributed in space and time that can be detected by way of statistical learning. This unsupervised learning occurs without intent or awareness, but little is known about how it relates to other types of learning, how it affects perceptual processing, and how quickly it can occur. Here we use fMRI during statistical learning to explore these questions. Participants viewed statistically structured versus unstructured sequences of shapes while performing a task unrelated to the structure. Robust neural responses to statistical structure were observed, and these responses were notable in four ways: First, responses to structure were observed in the striatum and medial temporal lobe, suggesting that statistical learning may be related to other forms of associative learning and relational memory. Second, statistical regularities yielded greater activation in category-specific visual regions (object-selective lateral occipital cortex and word-selective ventral occipito-temporal cortex), demonstrating that these regions are sensitive to information distributed in time. Third, evidence of learning emerged early during familiarization, showing that statistical learning can operate very quickly and with little exposure. Finally, neural signatures of learning were dissociable from subsequent explicit familiarity, suggesting that learning can occur in the absence of awareness. Overall, our findings help elucidate the underlying nature of statistical learning." }, { "pmid": "9886652", "title": "Face processing using one spike per neurone.", "abstract": "The speed with which neurones in the monkey temporal lobe can respond selectively to the presence of a face implies that processing may be possible using only one spike per neurone, a finding that is problematic for conventional rate coding models that need at least two spikes to estimate interspike interval. One way of avoiding this problem uses the fact that integrate-and-fire neurones will tend to fire at different times, with the most strongly activated neurones firing first (Thorpe, 1990, Parallel Processing in Neural Systems). 
Under such conditions, processing can be performed by using the order in which cells in a particular layer fire as a code. To test this idea, we have explored a range of architectures using SpikeNET (Thorpe and Gautrais, 1997, Neural Information Processing Systems, 9), a simulator designed for modelling large populations of integrate-and-fire neurones. One such network used a simple four-layer feed-forward architecture to detect and localise the presence of human faces in natural images. Performance of the model was tested with a large range of grey-scale images of faces and other objects and was found to be remarkably good by comparison with more classic image processing techniques. The most remarkable feature of these results is that they were obtained using a purely feed-forward neural network in which none of the neurones fired more than one spike (thus ruling out conventional rate coding mechanisms). It thus appears that the combination of asynchronous spike propagation and rank order coding may provide an important key to understanding how the nervous system can achieve such a huge amount of processing in so little time." } ]
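Several of the abstracts above build on the same pair-based spike-timing-dependent plasticity (STDP) rule: a presynaptic spike arriving shortly before a postsynaptic spike potentiates the synapse, the reverse ordering depresses it, and the magnitude of the change decays exponentially with the timing difference. The Python sketch below illustrates only that generic rule; the parameter names and values (a_plus, a_minus, tau_plus, tau_minus) are illustrative assumptions and do not reproduce any specific model from the cited papers.

import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0,
                w_min=0.0, w_max=1.0):
    """Return the updated weight for one pre/post spike pair.

    dt = t_post - t_pre in milliseconds: positive dt (pre before post)
    potentiates, negative dt (post before pre) depresses, with an
    exponentially decaying learning window on each side.
    """
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # potentiation branch
    else:
        dw = -a_minus * np.exp(dt / tau_minus)  # depression branch
    return float(np.clip(w + dw, w_min, w_max))  # hard bounds keep weights stable

# Example: a pre spike 5 ms before a post spike slightly strengthens the synapse.
print(stdp_update(0.5, dt=5.0))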
Scientific Reports
29654236
PMC5899127
10.1038/s41598-018-24304-3
Spinal cord gray matter segmentation using deep dilated convolutions
Gray matter (GM) tissue changes have been associated with a wide range of neurological disorders and were recently found relevant as a biomarker for disability in amyotrophic lateral sclerosis. The ability to automatically segment the GM is, therefore, an important task for modern studies of the spinal cord. In this work, we devise a modern, simple and end-to-end fully-automated human spinal cord gray matter segmentation method using Deep Learning, that works both on in vivo and ex vivo MRI acquisitions. We evaluate our method against six independently developed methods on a GM segmentation challenge. We report state-of-the-art results in 8 out of 10 evaluation metrics as well as major network parameter reduction when compared to the traditional medical imaging architectures such as U-Nets.
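The abstract above describes an end-to-end segmentation network built from dilated convolutions, which enlarge the receptive field without pooling and therefore keep a dense, full-resolution prediction. As a rough illustration of that building block only, here is a hypothetical PyTorch sketch; the channel counts, dilation rates and depth are assumptions, not the architecture evaluated in the paper.

import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Illustrative stack of 2D convolutions with growing dilation rates.

    Padding equal to the dilation rate keeps the spatial size, so the output
    can serve as a per-pixel gray-matter logit map for an axial slice.
    """
    def __init__(self, in_ch=1, feat=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        layers, ch = [], in_ch
        for d in dilations:
            layers += [nn.Conv2d(ch, feat, kernel_size=3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
            ch = feat
        layers.append(nn.Conv2d(ch, 1, kernel_size=1))  # 1x1 conv -> one logit per pixel
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Example: one single-channel 200x200 axial slice in, one logit map of the same size out.
model = DilatedBlock()
print(model(torch.randn(1, 1, 200, 200)).shape)  # torch.Size([1, 1, 200, 200])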
Related Work

Many methods for spinal cord segmentation have been proposed in the past. Depending on whether manual intervention is required, segmentation methods can be separated into two main categories: semi-automated and fully-automated.

The "Semi-supervised VBEM" method [14] is a probabilistic segmentation approach in which the observed MRI signals are assumed to be generated by the warping of an average-shaped reference anatomy [6]. The observed image intensities are modeled as random variables drawn from a Gaussian mixture distribution, whose parameters are estimated with a variational version of the Expectation-Maximization (EM) algorithm [14]. The method can be used in a fully unsupervised fashion or can incorporate training data with manual labels, hence the semi-supervised scheme [6].

The SCT (Spinal Cord Toolbox) segmentation method [13] uses an atlas-based approach built on earlier work [27], with additional improvements such as the use of vertebral level information and linear intensity normalization to accommodate multi-site data [13]. SCT first builds a dictionary of images from manual WM/GM segmentations after a pre-processing step; the target image is then pre-processed and normalized as well, and projected into the PCA (Principal Component Analysis) space of the dictionary images, where the most similar dictionary slices are selected using an arbitrary threshold. Finally, the segmentation is obtained by label fusion of the manual segmentations of the selected dictionary images [6]. SCT is freely available as open-source software at https://github.com/neuropoly/spinalcordtoolbox [26].

In "Joint collaboration for spinal cord gray matter segmentation" (JCSCS) [10], two existing label fusion segmentation methods are combined. The method is based on multi-atlas segmentation propagation using registration and segmentation in 2D slice-wise space. JCSCS uses the "Optimized PatchMatch Label Fusion" (OPAL) [28] to detect the spinal cord, where cord localization is achieved by providing an external dataset of spinal cord volumes and their associated manual segmentations [10]. The "Similarity and Truth Estimation for Propagated Segmentations" (STEPS) [29] is then used to segment the GM in two steps: first the segmentations are propagated, and then a consensus segmentation is created by fusing the best-deformed templates, based on locally normalized cross-correlation [10].

The Morphological Geodesic Active Contour (MGAC) algorithm [12] uses an external spinal cord segmentation tool ("Jim", from Xinapse Systems) to estimate the spinal cord boundary and a morphological geodesic active contour model to segment the gray matter. The method proceeds in several steps: first, the spinal cord in the original image is segmented with the Jim software; a template is then registered to the subject cord, and the same transformation is applied to the GM template. The transformed gray matter template is finally used as the initial guess for the active contour algorithm [12].

The "Gray matter Segmentation Based on Maximum Entropy" (GSBME) algorithm [6] is a semi-automatic, supervised segmentation method for the GM, comprising three main stages. First, the image is pre-processed: GSBME uses SCT [26] to segment the spinal cord with PropSeg [5] under manual initialization, after which the image intensities are normalized and denoised.
In the second step, the images are thresholded slice by slice using a sliding window, where the optimal threshold is found by maximizing the sum of the GM and WM intensity entropies. In the final stage, an outlier detector discards segmented intensities using morphological features such as perimeter, eccentricity and Hu moments, among others [6].

The Deepseg approach [15], which builds upon [11], uses a Deep Learning architecture similar to the U-Net [25], in which a CNN has a contracting and an expanding path. The contracting path aggregates information, while the expanding path upsamples the feature maps to achieve a dense prediction output. To recover the spatial information lost in the contracting path, shortcuts are added between the contracting and expanding paths of the network. Instead of the upsampling layers of the U-Net, Deepseg uses an unpooling and "deconvolution" approach as in [30]. The network architecture has 11 layers and is pre-trained using 3 convolutional restricted Boltzmann Machines [31]. Deepseg also uses a loss function that is a weighted sum of two terms, the mean square differences over the GM and non-GM voxels, thus balancing sensitivity and specificity [6]. Two models were trained independently, one for full spinal cord segmentation and another for GM segmentation.

We compare our method with all the aforementioned methods on the SCGM Challenge dataset [6].
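Most of the comparisons above, and the challenge results cited throughout, are reported as Dice similarity coefficients between a predicted mask and a manual segmentation. For reference, a minimal NumPy sketch of that metric follows; the function and variable names are illustrative.

import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks of the same shape.

    eps avoids division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two toy 2x3 gray-matter masks that overlap in two pixels.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # 0.667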
[ "25087920", "24780696", "28286318", "22850571", "25483607", "27786306", "26886978", "27495383", "27663988", "26017442", "28778026", "27720818", "24556080", "26244277", "23510558" ]
[ { "pmid": "25087920", "title": "Spinal cord gray matter atrophy correlates with multiple sclerosis disability.", "abstract": "OBJECTIVE\nIn multiple sclerosis (MS), cerebral gray matter (GM) atrophy correlates more strongly than white matter (WM) atrophy with disability. The corresponding relationships in the spinal cord (SC) are unknown due to technical limitations in assessing SC GM atrophy. Using phase-sensitive inversion recovery (PSIR) magnetic resonance imaging, we determined the association of the SC GM and SC WM areas with MS disability and disease type.\n\n\nMETHODS\nA total of 113 MS patients and 20 healthy controls were examined at 3T with a PSIR sequence acquired at the C2/C3 disk level. Two independent, clinically masked readers measured the cord WM and GM areas. Correlations between cord areas and Expanded Disability Status Score (EDSS) were determined. Differences in areas between groups were assessed with age and sex as covariates.\n\n\nRESULTS\nRelapsing MS (RMS) patients showed smaller SC GM areas than age- and sex-matched controls (p = 0.008) without significant differences in SC WM areas. Progressive MS patients showed smaller SC GM and SC WM areas compared to RMS patients (all p ≤ 0.004). SC GM, SC WM, and whole cord areas inversely correlated with EDSS (rho: -0.60, -0.32, -0.42, respectively; all p ≤ 0.001). The SC GM area was the strongest correlate of disability in multivariate models including brain GM and WM volumes, fluid-attenuated inversion recovery lesion load, T1 lesion load, SC WM area, number of SC T2 lesions, age, sex, and disease duration. Brain and spinal GM independently contributed to EDSS.\n\n\nINTERPRETATION\nSC GM atrophy is detectable in vivo in the absence of WM atrophy in RMS. It is more pronounced in progressive MS than RMS and contributes more to patient disability than SC WM or brain GM atrophy." }, { "pmid": "24780696", "title": "Robust, accurate and fast automatic segmentation of the spinal cord.", "abstract": "Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automatizing this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field-of-view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts: firstly, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Secondly, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Thirdly, a refinement process and a global deformation are applied on the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and on multiecho T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all the MR sequences. 
Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1min on a normal computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord." }, { "pmid": "28286318", "title": "Spinal cord grey matter segmentation challenge.", "abstract": "An important image processing step in spinal cord magnetic resonance imaging is the ability to reliably and accurately segment grey and white matter for tissue specific analysis. There are several semi- or fully-automated segmentation methods for cervical cord cross-sectional area measurement with an excellent performance close or equal to the manual segmentation. However, grey matter segmentation is still challenging due to small cross-sectional size and shape, and active research is being conducted by several groups around the world in this field. Therefore a grey matter spinal cord segmentation challenge was organised to test different capabilities of various methods using the same multi-centre and multi-vendor dataset acquired with distinct 3D gradient-echo sequences. This challenge aimed to characterize the state-of-the-art in the field as well as identifying new opportunities for future improvements. Six different spinal cord grey matter segmentation methods developed independently by various research groups across the world and their performance were compared to manual segmentation outcomes, the present gold-standard. All algorithms provided good overall results for detecting the grey matter butterfly, albeit with variable performance in certain quality-of-segmentation metrics. The data have been made publicly available and the challenge web site remains open to new submissions. No modifications were introduced to any of the presented methods as a result of this challenge for the purposes of this publication." }, { "pmid": "22850571", "title": "Feasibility of grey matter and white matter segmentation of the upper cervical cord in vivo: a pilot study with application to magnetisation transfer measurements.", "abstract": "Spinal cord pathology can be functionally very important in neurological disease. Pathological studies have demonstrated the involvement of spinal cord grey matter (GM) and white matter (WM) in several diseases, although the clinical relevance of abnormalities detected histopathologically is difficult to assess without a reliable way to assess cord GM and WM in vivo. In this study, the feasibility of GM and WM segmentation was investigated in the upper cervical spinal cord of 10 healthy subjects, using high-resolution images acquired with a commercially available 3D gradient-echo pulse sequence at 3T. For each healthy subject, tissue-specific (i.e. WM and GM) cross-sectional areas were segmented and total volumes calculated from a 15 mm section acquired at the level of C2-3 intervertebral disc and magnetisation transfer ratio (MTR) values within the extracted volumes were also determined, as an example of GM and WM quantitative measurements in the cervical cord. 
Mean (± SD) total cord cross-sectional area (TCA) and total cord volume (TCV) of the section studied across 10 healthy subjects were 86.9 (± 7.7) mm(2) and 1302.8 (± 115) mm(3), respectively; mean (±SD) total GM cross-sectional area (TGMA) and total GM volume (TGMV) were 14.6 (± 1.1) mm(2) and 218.3 (± 16.8) mm(3), respectively; mean (± SD) GM volume fraction (GMVF) was 0.17 (± 0.01); mean (± SD) MTR of the total WM volume (WM-MTR) was 51.4 (± 1.5) and mean (± SD) MTR of the total GM volume (GM-MTR) was 49.7 (± 1.6). The mean scan-rescan, intra- and inter-observer % coefficient of variation for measuring the TCA were 0.7%, 0.5% and 0.5% and for measuring the TGMA were 6.5%, 5.4% and 12.7%. The difference between WM-MTR and GM-MTR was found to be statistically significant (p=0.00006). This study has shown that GM and WM segmentation in the cervical cord is possible and the MR imaging protocol and analysis method presented here in healthy controls can be potentially extended to study the cervical cord in disease states, with the option to explore further quantitative measurements alongside MTR." }, { "pmid": "25483607", "title": "2D phase-sensitive inversion recovery imaging to measure in vivo spinal cord gray and white matter areas in clinically feasible acquisition times.", "abstract": "PURPOSE\nTo present and assess a procedure for measurement of spinal cord total cross-sectional areas (TCA) and gray matter (GM) areas based on phase-sensitive inversion recovery imaging (PSIR). In vivo assessment of spinal cord GM and white matter (WM) could become pivotal to study various neurological diseases, but it is challenging because of insufficient GM/WM contrast provided by conventional magnetic resonance imaging (MRI).\n\n\nMATERIALS AND METHODS\nWe acquired 2D PSIR images at 3T at each disc level of the spinal axis in 10 healthy subjects and measured TCA, cord diameters, WM and GM areas, and GM area/TCA ratios. Second, we investigated 32 healthy subjects at four selected levels (C2-C3, C3-C4, T8-T9, T9-T10, total acquisition time <8 min) and generated normative reference values of TCA and GM areas. We assessed test-retest, intra- and interoperator reliability of the acquisition strategy, and measurement steps.\n\n\nRESULTS\nThe measurement procedure based on 2D PSIR imaging allowed TCA and GM area assessments along the entire spinal cord axis. The tests we performed revealed high test-retest/intraoperator reliability (mean coefficient of variation [COV] at C2-C3: TCA = 0.41%, GM area = 2.75%) and interoperator reliability of the measurements (mean COV on the 4 levels: TCA = 0.44%, GM area = 4.20%; mean intraclass correlation coefficient: TCA = 0.998, GM area = 0.906).\n\n\nCONCLUSION\n2D PSIR allows reliable in vivo assessment of spinal cord TCA, GM, and WM areas in clinically feasible acquisition times. The area measurements presented here are in agreement with previous MRI and postmortem studies." }, { "pmid": "27786306", "title": "Fully automated grey and white matter spinal cord segmentation.", "abstract": "Axonal loss in the spinal cord is one of the main contributing factors to irreversible clinical disability in multiple sclerosis (MS). In vivo axonal loss can be assessed indirectly by estimating a reduction in the cervical cross-sectional area (CSA) of the spinal cord over time, which is indicative of spinal cord atrophy, and such a measure may be obtained by means of image segmentation using magnetic resonance imaging (MRI). 
In this work, we propose a new fully automated spinal cord segmentation technique that incorporates two different multi-atlas segmentation propagation and fusion techniques: The Optimized PatchMatch Label fusion (OPAL) algorithm for localising and approximately segmenting the spinal cord, and the Similarity and Truth Estimation for Propagated Segmentations (STEPS) algorithm for segmenting white and grey matter simultaneously. In a retrospective analysis of MRI data, the proposed method facilitated CSA measurements with accuracy equivalent to the inter-rater variability, with a Dice score (DSC) of 0.967 at C2/C3 level. The segmentation performance for grey matter at C2/C3 level was close to inter-rater variability, reaching an accuracy (DSC) of 0.826 for healthy subjects and 0.835 people with clinically isolated syndrome MS." }, { "pmid": "26886978", "title": "Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation.", "abstract": "We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes." }, { "pmid": "27495383", "title": "Gray matter segmentation of the spinal cord with active contours in MR images.", "abstract": "OBJECTIVE\nFully or partially automated spinal cord gray matter segmentation techniques for spinal cord gray matter segmentation will allow for pivotal spinal cord gray matter measurements in the study of various neurological disorders. 
The objective of this work was multi-fold: (1) to develop a gray matter segmentation technique that uses registration methods with an existing delineation of the cord edge along with Morphological Geodesic Active Contour (MGAC) models; (2) to assess the accuracy and reproducibility of the newly developed technique on 2D PSIR T1 weighted images; (3) to test how the algorithm performs on different resolutions and other contrasts; (4) to demonstrate how the algorithm can be extended to 3D scans; and (5) to show the clinical potential for multiple sclerosis patients.\n\n\nMETHODS\nThe MGAC algorithm was developed using a publicly available implementation of a morphological geodesic active contour model and the spinal cord segmentation tool of the software Jim (Xinapse Systems) for initial estimate of the cord boundary. The MGAC algorithm was demonstrated on 2D PSIR images of the C2/C3 level with two different resolutions, 2D T2* weighted images of the C2/C3 level, and a 3D PSIR image. These images were acquired from 45 healthy controls and 58 multiple sclerosis patients selected for the absence of evident lesions at the C2/C3 level. Accuracy was assessed though visual assessment, Hausdorff distances, and Dice similarity coefficients. Reproducibility was assessed through interclass correlation coefficients. Validity was assessed through comparison of segmented gray matter areas in images with different resolution for both manual and MGAC segmentations.\n\n\nRESULTS\nBetween MGAC and manual segmentations in healthy controls, the mean Dice similarity coefficient was 0.88 (0.82-0.93) and the mean Hausdorff distance was 0.61 (0.46-0.76) mm. The interclass correlation coefficient from test and retest scans of healthy controls was 0.88. The percent change between the manual segmentations from high and low-resolution images was 25%, while the percent change between the MGAC segmentations from high and low resolution images was 13%. Between MGAC and manual segmentations in MS patients, the average Dice similarity coefficient was 0.86 (0.8-0.92) and the average Hausdorff distance was 0.83 (0.29-1.37) mm.\n\n\nCONCLUSION\nWe demonstrate that an automatic segmentation technique, based on a morphometric geodesic active contours algorithm, can provide accurate and precise spinal cord gray matter segmentations on 2D PSIR images. We have also shown how this automated technique can potentially be extended to other imaging protocols." }, { "pmid": "27663988", "title": "Fully-integrated framework for the segmentation and registration of the spinal cord white and gray matter.", "abstract": "The spinal cord white and gray matter can be affected by various pathologies such as multiple sclerosis, amyotrophic lateral sclerosis or trauma. Being able to precisely segment the white and gray matter could help with MR image analysis and hence be useful in further understanding these pathologies, and helping with diagnosis/prognosis and drug development. Up to date, white/gray matter segmentation has mostly been done manually, which is time consuming, induces a bias related to the rater and prevents large-scale multi-center studies. Recently, few methods have been proposed to automatically segment the spinal cord white and gray matter. However, no single method exists that combines the following criteria: (i) fully automatic, (ii) works on various MRI contrasts, (iii) robust towards pathology and (iv) freely available and open source. 
In this study we propose a multi-atlas based method for the segmentation of the spinal cord white and gray matter that addresses the previous limitations. Moreover, to study the spinal cord morphology, atlas-based approaches are increasingly used. These approaches rely on the registration of a spinal cord template to an MR image, however the registration usually doesn't take into account the spinal cord internal structure and thus lacks accuracy. In this study, we propose a new template registration framework that integrates the white and gray matter segmentation to account for the specific gray matter shape of each individual subject. Validation of segmentation was performed in 24 healthy subjects using T2*-weighted images, in 8 healthy subjects using diffusion weighted images (exhibiting inverted white-to-gray matter contrast compared to T2*-weighted), and in 5 patients with spinal cord injury. The template registration was validated in 24 subjects using T2*-weighted data. Results of automatic segmentation on T2*-weighted images was in close correspondence with the manual segmentation (Dice coefficient in the white/gray matter of 0.91/0.71 respectively). Similarly, good results were obtained in data with inverted contrast (diffusion-weighted image) and in patients. When compared to the classical template registration framework, the proposed framework that accounts for gray matter shape significantly improved the quality of the registration (comparing Dice coefficient in gray matter: p=9.5×10-6). While further validation is needed to show the benefits of the new registration framework in large cohorts and in a variety of patients, this study provides a fully-integrated tool for quantitative assessment of white/gray matter morphometry and template-based analysis. All the proposed methods are implemented in the Spinal Cord Toolbox (SCT), an open-source software for processing spinal cord multi-parametric MRI data." }, { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "28778026", "title": "A survey on deep learning in medical image analysis.", "abstract": "Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. 
We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research." }, { "pmid": "27720818", "title": "SCT: Spinal Cord Toolbox, an open-source software for processing spinal cord MRI data.", "abstract": "For the past 25 years, the field of neuroimaging has witnessed the development of several software packages for processing multi-parametric magnetic resonance imaging (mpMRI) to study the brain. These software packages are now routinely used by researchers and clinicians, and have contributed to important breakthroughs for the understanding of brain anatomy and function. However, no software package exists to process mpMRI data of the spinal cord. Despite the numerous clinical needs for such advanced mpMRI protocols (multiple sclerosis, spinal cord injury, cervical spondylotic myelopathy, etc.), researchers have been developing specific tools that, while necessary, do not provide an integrative framework that is compatible with most usages and that is capable of reaching the community at large. This hinders cross-validation and the possibility to perform multi-center studies. In this study we introduce the Spinal Cord Toolbox (SCT), a comprehensive software dedicated to the processing of spinal cord MRI data. SCT builds on previously-validated methods and includes state-of-the-art MRI templates and atlases of the spinal cord, algorithms to segment and register new data to the templates, and motion correction methods for diffusion and functional time series. SCT is tailored towards standardization and automation of the processing pipeline, versatility, modularity, and it follows guidelines of software development and distribution. Preliminary applications of SCT cover a variety of studies, from cross-sectional area measures in large databases of patients, to the precise quantification of mpMRI metrics in specific spinal pathways. We anticipate that SCT will bring together the spinal cord neuroimaging community by establishing standard templates and analysis procedures." }, { "pmid": "24556080", "title": "Groupwise multi-atlas segmentation of the spinal cord's internal structure.", "abstract": "The spinal cord is an essential and vulnerable component of the central nervous system. Differentiating and localizing the spinal cord internal structure (i.e., gray matter vs. white matter) is critical for assessment of therapeutic impacts and determining prognosis of relevant conditions. Fortunately, new magnetic resonance imaging (MRI) sequences enable clinical study of the in vivo spinal cord's internal structure. Yet, low contrast-to-noise ratio, artifacts, and imaging distortions have limited the applicability of tissue segmentation techniques pioneered elsewhere in the central nervous system. Additionally, due to the inter-subject variability exhibited on cervical MRI, typical deformable volumetric registrations perform poorly, limiting the applicability of a typical multi-atlas segmentation framework. Thus, to date, no automated algorithms have been presented for the spinal cord's internal structure. Herein, we present a novel slice-based groupwise registration framework for robustly segmenting cervical spinal cord MRI. 
Specifically, we provide a method for (1) pre-aligning the slice-based atlases into a groupwise-consistent space, (2) constructing a model of spinal cord variability, (3) projecting the target slice into the low-dimensional space using a model-specific registration cost function, and (4) estimating robust segmentations using geodesically appropriate atlas information. Moreover, the proposed framework provides a natural mechanism for performing atlas selection and initializing the free model parameters in an informed manner. In a cross-validation experiment using 67 MR volumes of the cervical spinal cord, we demonstrate sub-millimetric accuracy, significant quantitative and qualitative improvement over comparable multi-atlas frameworks, and provide insight into the sensitivity of the associated model parameters." }, { "pmid": "26244277", "title": "An Optimized PatchMatch for multi-scale and multi-feature label fusion.", "abstract": "Automatic segmentation methods are important tools for quantitative analysis of Magnetic Resonance Images (MRI). Recently, patch-based label fusion approaches have demonstrated state-of-the-art segmentation accuracy. In this paper, we introduce a new patch-based label fusion framework to perform segmentation of anatomical structures. The proposed approach uses an Optimized PAtchMatch Label fusion (OPAL) strategy that drastically reduces the computation time required for the search of similar patches. The reduced computation time of OPAL opens the way for new strategies and facilitates processing on large databases. In this paper, we investigate new perspectives offered by OPAL, by introducing a new multi-scale and multi-feature framework. During our validation on hippocampus segmentation we use two datasets: young adults in the ICBM cohort and elderly adults in the EADC-ADNI dataset. For both, OPAL is compared to state-of-the-art methods. Results show that OPAL obtained the highest median Dice coefficient (89.9% for ICBM and 90.1% for EADC-ADNI). Moreover, in both cases, OPAL produced a segmentation accuracy similar to inter-expert variability. On the EADC-ADNI dataset, we compare the hippocampal volumes obtained by manual and automatic segmentation. The volumes appear to be highly correlated that enables to perform more accurate separation of pathological populations." }, { "pmid": "23510558", "title": "STEPS: Similarity and Truth Estimation for Propagated Segmentations and its application to hippocampal segmentation and brain parcelation.", "abstract": "Anatomical segmentation of structures of interest is critical to quantitative analysis in medical imaging. Several automated multi-atlas based segmentation propagation methods that utilise manual delineations from multiple templates appear promising. However, high levels of accuracy and reliability are needed for use in diagnosis or in clinical trials. We propose a new local ranking strategy for template selection based on the locally normalised cross correlation (LNCC) and an extension to the classical STAPLE algorithm by Warfield et al. (2004), which we refer to as STEPS for Similarity and Truth Estimation for Propagated Segmentations. It addresses the well-known problems of local vs. global image matching and the bias introduced in the performance estimation due to structure size. We assessed the method on hippocampal segmentation using a leave-one-out cross validation with optimised model parameters; STEPS achieved a mean Dice score of 0.925 when compared with manual segmentation.
This was significantly better in terms of segmentation accuracy when compared to other state-of-the-art fusion techniques. Furthermore, due to the finer anatomical scale, STEPS also obtains more accurate segmentations even when using only a third of the templates, reducing the dependence on large template databases. Using a subset of Alzheimer's Disease Neuroimaging Initiative (ADNI) scans from different MRI imaging systems and protocols, STEPS yielded similarly accurate segmentations (Dice=0.903). A cross-sectional and longitudinal hippocampal volumetric study was performed on the ADNI database. Mean±SD hippocampal volume (mm(3)) was 5195 ± 656 for controls; 4786 ± 781 for MCI; and 4427 ± 903 for Alzheimer's disease patients and hippocampal atrophy rates (%/year) of 1.09 ± 3.0, 2.74 ± 3.5 and 4.04 ± 3.6 respectively. Statistically significant (p<10(-3)) differences were found between disease groups for both hippocampal volume and volume change rates. Finally, STEPS was also applied in a multi-label segmentation propagation scenario using a leave-one-out cross validation, in order to parcellate 83 separate structures of the brain. Comparisons of STEPS with state-of-the-art multi-label fusion algorithms showed statistically significant segmentation accuracy improvements (p<10(-4)) in several key structures." } ]
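The multi-atlas references above (the groupwise framework, OPAL and STEPS) all end with a label fusion step that combines several propagated atlas segmentations into one consensus mask. The sketch below shows only the simplest possible fusion rule, a per-voxel majority vote, as a conceptual stand-in: OPAL and STEPS additionally weight atlases by local patch or image similarity, which this toy code does not attempt.

import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse propagated atlas segmentations by a per-voxel majority vote.

    label_maps: list of integer label volumes with identical shape, one per
    registered atlas. Ties are resolved in favor of the lower label index.
    """
    stacked = np.stack(label_maps, axis=0)  # shape: (n_atlases, ...)
    n_labels = int(stacked.max()) + 1
    votes = np.stack([(stacked == lab).sum(axis=0) for lab in range(n_labels)], axis=0)
    return votes.argmax(axis=0)  # winning label per voxel

# Example: three toy 1D "segmentations" over 4 voxels (0 = background, 1 = gray matter).
atlases = [np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0]), np.array([1, 1, 1, 0])]
print(majority_vote_fusion(atlases))  # [0 1 1 0]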
PLoS Computational Biology
29630593
PMC5908193
10.1371/journal.pcbi.1006053
A multitask clustering approach for single-cell RNA-seq analysis in Recessive Dystrophic Epidermolysis Bullosa
Single-cell RNA sequencing (scRNA-seq) has been widely applied to discover new cell types by detecting sub-populations in a heterogeneous group of cells. Since scRNA-seq experiments have lower read coverage/tag counts and introduce more technical biases compared to bulk RNA-seq experiments, the limited number of sampled cells combined with the experimental biases and other dataset specific variations presents a challenge to cross-dataset analysis and discovery of relevant biological variations across multiple cell populations. In this paper, we introduce a method of variance-driven multitask clustering of single-cell RNA-seq data (scVDMC) that utilizes multiple single-cell populations from biological replicates or different samples. scVDMC clusters single cells in multiple scRNA-seq experiments of similar cell types and markers but varying expression patterns such that the scRNA-seq data are better integrated than typical pooled analyses which only increase the sample size. By controlling the variance among the cell clusters within each dataset and across all the datasets, scVDMC detects cell sub-populations in each individual experiment with shared cell-type markers but varying cluster centers among all the experiments. Applied to two real scRNA-seq datasets with several replicates and one large-scale droplet-based dataset on three patient samples, scVDMC more accurately detected cell populations and known cell markers than pooled clustering and other recently proposed scRNA-seq clustering methods. In the case study applied to in-house Recessive Dystrophic Epidermolysis Bullosa (RDEB) scRNA-seq data, scVDMC revealed several new cell types and unknown markers validated by flow cytometry. MATLAB/Octave code available at https://github.com/kuanglab/scVDMC.
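The abstract above sketches the core idea of scVDMC: cluster each scRNA-seq experiment separately while keeping the cluster centers and selected markers coupled across experiments. The toy Python code below conveys only the flavor of such coupling by initializing every per-dataset k-means run from centers fitted on the pooled data; it is an assumption-laden stand-in, not the scVDMC algorithm, which instead controls the variance of cluster centers across datasets and embeds marker selection.

import numpy as np
from sklearn.cluster import KMeans

def shared_center_clustering(datasets, n_clusters=3, seed=0):
    """Cluster several cells-x-genes matrices (same gene set) with shared initial centers.

    Every per-dataset k-means run starts from centers fitted on the pooled,
    log-transformed data, so cluster identities roughly correspond across
    experiments. Returns one label vector per dataset.
    """
    pooled = np.vstack([np.log1p(d) for d in datasets])
    base = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(pooled)
    labels = []
    for d in datasets:
        km = KMeans(n_clusters=n_clusters, init=base.cluster_centers_,
                    n_init=1, random_state=seed).fit(np.log1p(d))
        labels.append(km.labels_)
    return labels

# Example: two synthetic experiments sharing the same 50 genes.
rng = np.random.default_rng(1)
d1, d2 = rng.poisson(2.0, (40, 50)), rng.poisson(2.0, (30, 50))
lab1, lab2 = shared_center_clustering([d1, d2])
print(len(lab1), len(lab2))  # 40 30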
Related work

Most existing methods focus only on sub-population clustering and differential gene expression detection among the learned cell clusters with one (pooled) cell population. Some of these methods were directly adopted from traditional bulk RNA-seq analysis and/or classical dimension reduction algorithms such as Principal Component Analysis [6–8], hierarchical clustering [9], t-SNE [10–12], Independent Component Analysis [13], and Multi-dimensional Scaling [14]. Other methods target special properties of scRNA-seq data, such as high variance and uneven expression. For example, SNN-Cliq [15] uses a ranking measurement to obtain reliable results on high-dimensional data; [16] proposed a dedicated dimension reduction method to handle the large number of zeros in scRNA-seq; the CellTree method [17] uses a Latent Dirichlet Allocation model with latent gene groups to measure cell-to-cell distance and clusters single cells through a detected tree structure outlining the hierarchical relationship between single-cell samples, thereby introducing biological prior knowledge; Seurat [18] was proposed to infer cellular localization by integrating single-cell RNA-seq data with in situ RNA patterns; and, more recently, the consensus clustering approach SC3 [19] was proposed to improve the robustness of clustering by combining multiple clustering solutions through consensus.

Mixed multiple-batch strategies [9, 20] have been proposed to reduce technical variance, but they do not directly improve clustering. To the best of our knowledge, multitask clustering with an embedded feature selection has not been previously applied to scRNA-seq data analysis.
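For contrast with the multitask approach, the pooled analyses cited above typically reduce to a single dimension-reduction-plus-clustering pipeline over the concatenated cells. The sketch below (standard PCA plus k-means with scikit-learn; not a reimplementation of any specific cited tool, and the parameter defaults are illustrative) shows such a baseline, which increases sample size but ignores dataset-specific variation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def pooled_cluster(counts_list, n_pcs=20, k=8, seed=0):
    """Typical pooled analysis: concatenate cells from all experiments,
    log-transform, reduce with PCA, and run a single k-means.
    Dataset-specific variation is ignored, which is the limitation a
    multitask formulation tries to address."""
    X = np.log1p(np.vstack(counts_list))   # cells x genes on a common gene set
    Z = PCA(n_components=n_pcs, random_state=seed).fit_transform(X)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(Z)
```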
[ "22499939", "28094102", "24832513", "27052890", "21543516", "24919153", "23685454", "23934149", "25700174", "26000488", "26000487", "24658644", "24156252", "25805722", "26527291", "27620863", "25867923", "28346451", "26431182", "24739965", "27899570", "26669357", "27364009", "1355776", "3818794", "25803200", "12472552", "19945622", "19945632", "20818854", "25601922", "23711070", "21464317", "27609069", "26141517", "28100499", "25963142", "25628217", "28333934" ]
[ { "pmid": "22499939", "title": "Using gene expression noise to understand gene regulation.", "abstract": "Phenotypic variation is ubiquitous in biology and is often traceable to underlying genetic and environmental variation. However, even genetically identical cells in identical environments display variable phenotypes. Stochastic gene expression, or gene expression \"noise,\" has been suggested as a major source of this variability, and its physiological consequences have been topics of intense research for the last decade. Several recent studies have measured variability in protein and messenger RNA levels, and they have discovered strong connections between noise and gene regulation mechanisms. When integrated with discrete stochastic models, measurements of cell-to-cell variability provide a sensitive \"fingerprint\" with which to explore fundamental questions of gene regulation. In this review, we highlight several studies that used gene expression variability to develop a quantitative understanding of the mechanisms and dynamics of gene regulation." }, { "pmid": "28094102", "title": "Single-Cell Genomics: Approaches and Utility in Immunology.", "abstract": "Single-cell genomics offers powerful tools for studying immune cells, which make it possible to observe rare and intermediate cell states that cannot be resolved at the population level. Advances in computer science and single-cell sequencing technology have created a data-driven revolution in immunology. The challenge for immunologists is to harness computing and turn an avalanche of quantitative data into meaningful discovery of immunological principles, predictive models, and strategies for therapeutics. Here, we review the current literature on computational analysis of single-cell RNA-sequencing data and discuss underlying assumptions, methods, and applications in immunology, and highlight important directions for future research." }, { "pmid": "24832513", "title": "Methods, Challenges and Potentials of Single Cell RNA-seq.", "abstract": "RNA-sequencing (RNA-seq) has become the tool of choice for transcriptomics. Several recent studies demonstrate its successful adaption to single cell analysis. This allows new biological insights into cell differentiation, cell-to-cell variation and gene regulation, and how these aspects depend on each other. Here, I review the current single cell RNA-seq (scRNA-seq) efforts and discuss experimental protocols, challenges and potentials." }, { "pmid": "27052890", "title": "Design and computational analysis of single-cell RNA-sequencing experiments.", "abstract": "Single-cell RNA-sequencing (scRNA-seq) has emerged as a revolutionary tool that allows us to address scientific questions that eluded examination just a few years ago. With the advantages of scRNA-seq come computational challenges that are just beginning to be addressed. In this article, we highlight the computational methods available for the design and analysis of scRNA-seq experiments, their advantages and disadvantages in various settings, the open questions for which novel methods are needed, and expected future developments in this exciting area." }, { "pmid": "21543516", "title": "Characterization of the single-cell transcriptional landscape by highly multiplex RNA-seq.", "abstract": "Our understanding of the development and maintenance of tissues has been greatly aided by large-scale gene expression analysis. 
However, tissues are invariably complex, and expression analysis of a tissue confounds the true expression patterns of its constituent cell types. Here we describe a novel strategy to access such complex samples. Single-cell RNA-seq expression profiles were generated, and clustered to form a two-dimensional cell map onto which expression data were projected. The resulting cell map integrates three levels of organization: the whole population of cells, the functionally distinct subpopulations it contains, and the single cells themselves-all without need for known markers to classify cell types. The feasibility of the strategy was demonstrated by analyzing the transcriptomes of 85 single cells of two distinct types. We believe this strategy will enable the unbiased discovery and analysis of naturally occurring cell types during development, adult physiology, and disease." }, { "pmid": "24919153", "title": "Single-cell RNA-seq reveals dynamic paracrine control of cellular variation.", "abstract": "High-throughput single-cell transcriptomics offers an unbiased approach for understanding the extent, basis and function of gene expression variation between seemingly identical cells. Here we sequence single-cell RNA-seq libraries prepared from over 1,700 primary mouse bone-marrow-derived dendritic cells spanning several experimental conditions. We find substantial variation between identically stimulated dendritic cells, in both the fraction of cells detectably expressing a given messenger RNA and the transcript's level within expressing cells. Distinct gene modules are characterized by different temporal heterogeneity profiles. In particular, a 'core' module of antiviral genes is expressed very early by a few 'precocious' cells in response to uniform stimulation with a pathogenic component, but is later activated in all cells. By stimulating cells individually in sealed microfluidic chambers, analysing dendritic cells from knockout mice, and modulating secretion and extracellular signalling, we show that this response is coordinated by interferon-mediated paracrine signalling from these precocious cells. Notably, preventing cell-to-cell communication also substantially reduces variability between cells in the expression of an early-induced 'peaked' inflammatory module, suggesting that paracrine signalling additionally represses part of the inflammatory program. Our study highlights the importance of cell-to-cell communication in controlling cellular heterogeneity and reveals general strategies that multicellular populations can use to establish complex dynamic responses." }, { "pmid": "23685454", "title": "Single-cell transcriptomics reveals bimodality in expression and splicing in immune cells.", "abstract": "Recent molecular studies have shown that, even when derived from a seemingly homogenous population, individual cells can exhibit substantial differences in gene expression, protein levels and phenotypic output, with important functional consequences. Existing studies of cellular heterogeneity, however, have typically measured only a few pre-selected RNAs or proteins simultaneously, because genomic profiling methods could not be applied to single cells until very recently. Here we use single-cell RNA sequencing to investigate heterogeneity in the response of mouse bone-marrow-derived dendritic cells (BMDCs) to lipopolysaccharide. 
We find extensive, and previously unobserved, bimodal variation in messenger RNA abundance and splicing patterns, which we validate by RNA-fluorescence in situ hybridization for select transcripts. In particular, hundreds of key immune genes are bimodally expressed across cells, surprisingly even for genes that are very highly expressed at the population average. Moreover, splicing patterns demonstrate previously unobserved levels of heterogeneity between cells. Some of the observed bimodality can be attributed to closely related, yet distinct, known maturity states of BMDCs; other portions reflect differences in the usage of key regulatory circuits. For example, we identify a module of 137 highly variable, yet co-regulated, antiviral response genes. Using cells from knockout mice, we show that variability in this module may be propagated through an interferon feedback circuit, involving the transcriptional regulators Stat2 and Irf7. Our study demonstrates the power and promise of single-cell genomics in uncovering functional diversity between cells and in deciphering cell states and circuits." }, { "pmid": "23934149", "title": "Single-cell RNA-Seq profiling of human preimplantation embryos and embryonic stem cells.", "abstract": "Measuring gene expression in individual cells is crucial for understanding the gene regulatory network controlling human embryonic development. Here we apply single-cell RNA sequencing (RNA-Seq) analysis to 124 individual cells from human preimplantation embryos and human embryonic stem cells (hESCs) at different passages. The number of maternally expressed genes detected in our data set is 22,687, including 8,701 long noncoding RNAs (lncRNAs), which represents a significant increase from 9,735 maternal genes detected previously by cDNA microarray. We discovered 2,733 novel lncRNAs, many of which are expressed in specific developmental stages. To address the long-standing question whether gene expression signatures of human epiblast (EPI) and in vitro hESCs are the same, we found that EPI cells and primary hESC outgrowth have dramatically different transcriptomes, with 1,498 genes showing differential expression between them. This work provides a comprehensive framework of the transcriptome landscapes of human early embryos and hESCs." }, { "pmid": "25700174", "title": "Brain structure. Cell types in the mouse cortex and hippocampus revealed by single-cell RNA-seq.", "abstract": "The mammalian cerebral cortex supports cognitive functions such as sensorimotor integration, memory, and social behaviors. Normal brain function relies on a diverse set of differentiated cell types, including neurons, glia, and vasculature. Here, we have used large-scale single-cell RNA sequencing (RNA-seq) to classify cells in the mouse somatosensory cortex and hippocampal CA1 region. We found 47 molecularly distinct subclasses, comprising all known major cell types in the cortex. We identified numerous marker genes, which allowed alignment with known cell types, morphology, and location. We found a layer I interneuron expressing Pax6 and a distinct postmitotic oligodendrocyte subclass marked by Itpr2. Across the diversity of cortical cell types, transcription factors formed a complex, layered regulatory code, suggesting a mechanism for the maintenance of adult cell type identity." 
}, { "pmid": "26000488", "title": "Highly Parallel Genome-wide Expression Profiling of Individual Cells Using Nanoliter Droplets.", "abstract": "Cells, the basic units of biological structure and function, vary broadly in type and state. Single-cell genomics can characterize cell identity and function, but limitations of ease and scale have prevented its broad application. Here we describe Drop-seq, a strategy for quickly profiling thousands of individual cells by separating them into nanoliter-sized aqueous droplets, associating a different barcode with each cell's RNAs, and sequencing them all together. Drop-seq analyzes mRNA transcripts from thousands of individual cells simultaneously while remembering transcripts' cell of origin. We analyzed transcriptomes from 44,808 mouse retinal cells and identified 39 transcriptionally distinct cell populations, creating a molecular atlas of gene expression for known retinal cell classes and novel candidate cell subtypes. Drop-seq will accelerate biological discovery by enabling routine transcriptional profiling at single-cell resolution. VIDEO ABSTRACT." }, { "pmid": "26000487", "title": "Droplet barcoding for single-cell transcriptomics applied to embryonic stem cells.", "abstract": "It has long been the dream of biologists to map gene expression at the single-cell level. With such data one might track heterogeneous cell sub-populations, and infer regulatory relationships between genes and pathways. Recently, RNA sequencing has achieved single-cell resolution. What is limiting is an effective way to routinely isolate and process large numbers of individual cells for quantitative in-depth sequencing. We have developed a high-throughput droplet-microfluidic approach for barcoding the RNA from thousands of individual cells for subsequent analysis by next-generation sequencing. The method shows a surprisingly low noise profile and is readily adaptable to other sequencing-based assays. We analyzed mouse embryonic stem cells, revealing in detail the population structure and the heterogeneous onset of differentiation after leukemia inhibitory factor (LIF) withdrawal. The reproducibility of these high-throughput single-cell data allowed us to deconstruct cell populations and infer gene expression relationships. VIDEO ABSTRACT." }, { "pmid": "24658644", "title": "The dynamics and regulators of cell fate decisions are revealed by pseudotemporal ordering of single cells.", "abstract": "Defining the transcriptional dynamics of a temporal process such as cell differentiation is challenging owing to the high variability in gene expression between individual cells. Time-series gene expression analyses of bulk cells have difficulty distinguishing early and late phases of a transcriptional cascade or identifying rare subpopulations of cells, and single-cell proteomic methods rely on a priori knowledge of key distinguishing markers. Here we describe Monocle, an unsupervised algorithm that increases the temporal resolution of transcriptome dynamics using single-cell RNA-Seq data collected at multiple time points. Applied to the differentiation of primary human myoblasts, Monocle revealed switch-like changes in expression of key regulatory factors, sequential waves of gene regulation, and expression of regulators that were not known to act in differentiation. We validated some of these predicted regulators in a loss-of function screen. 
Monocle can in principle be used to recover single-cell gene expression kinetics from a wide array of cellular processes, including differentiation, proliferation and oncogenic transformation." }, { "pmid": "24156252", "title": "Temporal dynamics and transcriptional control using single-cell gene expression analysis.", "abstract": "BACKGROUND\nChanges in environmental conditions lead to expression variation that manifest at the level of gene regulatory networks. Despite a strong understanding of the role noise plays in synthetic biological systems, it remains unclear how propagation of expression heterogeneity in an endogenous regulatory network is distributed and utilized by cells transitioning through a key developmental event.\n\n\nRESULTS\nHere we investigate the temporal dynamics of a single-cell transcriptional network of 45 transcription factors in THP-1 human myeloid monocytic leukemia cells undergoing differentiation to macrophages. We systematically measure temporal regulation of expression and variation by profiling 120 single cells at eight distinct time points, and infer highly controlled regulatory modules through which signaling operates with stochastic effects. This reveals dynamic and specific rewiring as a cellular strategy for differentiation. The integration of both positive and negative co-expression networks further identifies the proto-oncogene MYB as a network hinge to modulate both the pro- and anti-differentiation pathways.\n\n\nCONCLUSIONS\nCompared to averaged cell populations, temporal single-cell expression profiling provides a much more powerful technique to probe for mechanistic insights underlying cellular differentiation. We believe that our approach will form the basis of novel strategies to study the regulation of transcription at a single-cell level." }, { "pmid": "25805722", "title": "Identification of cell types from single-cell transcriptomes using a novel clustering method.", "abstract": "MOTIVATION\nThe recent advance of single-cell technologies has brought new insights into complex biological phenomena. In particular, genome-wide single-cell measurements such as transcriptome sequencing enable the characterization of cellular composition as well as functional variation in homogenic cell populations. An important step in the single-cell transcriptome analysis is to group cells that belong to the same cell types based on gene expression patterns. The corresponding computational problem is to cluster a noisy high dimensional dataset with substantially fewer objects (cells) than the number of variables (genes).\n\n\nRESULTS\nIn this article, we describe a novel algorithm named shared nearest neighbor (SNN)-Cliq that clusters single-cell transcriptomes. SNN-Cliq utilizes the concept of shared nearest neighbor that shows advantages in handling high-dimensional data. When evaluated on a variety of synthetic and real experimental datasets, SNN-Cliq outperformed the state-of-the-art methods tested. More importantly, the clustering results of SNN-Cliq reflect the cell types or origins with high accuracy.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe algorithm is implemented in MATLAB and Python. The source code can be downloaded at http://bioinfo.uncc.edu/SNNCliq." 
}, { "pmid": "26527291", "title": "ZIFA: Dimensionality reduction for zero-inflated single-cell gene expression analysis.", "abstract": "Single-cell RNA-seq data allows insight into normal cellular function and various disease states through molecular characterization of gene expression on the single cell level. Dimensionality reduction of such high-dimensional data sets is essential for visualization and analysis, but single-cell RNA-seq data are challenging for classical dimensionality-reduction methods because of the prevalence of dropout events, which lead to zero-inflated data. Here, we develop a dimensionality-reduction method, (Z)ero (I)nflated (F)actor (A)nalysis (ZIFA), which explicitly models the dropout characteristics, and show that it improves modeling accuracy on simulated and biological data sets." }, { "pmid": "27620863", "title": "CellTree: an R/bioconductor package to infer the hierarchical structure of cell populations from single-cell RNA-seq data.", "abstract": "BACKGROUND\nSingle-cell RNA sequencing is fast becoming one the standard method for gene expression measurement, providing unique insights into cellular processes. A number of methods, based on general dimensionality reduction techniques, have been suggested to help infer and visualise the underlying structure of cell populations from single-cell expression levels, yet their models generally lack proper biological grounding and struggle at identifying complex differentiation paths.\n\n\nRESULTS\nHere we introduce cellTree: an R/Bioconductor package that uses a novel statistical approach, based on document analysis techniques, to produce tree structures outlining the hierarchical relationship between single-cell samples, while identifying latent groups of genes that can provide biological insights.\n\n\nCONCLUSIONS\nWith cellTree, we provide experimentalists with an easy-to-use tool, based on statistically and biologically-sound algorithms, to efficiently explore and visualise single-cell RNA data. The cellTree package is publicly available in the online Bionconductor repository at: http://bioconductor.org/packages/cellTree/ ." }, { "pmid": "25867923", "title": "Spatial reconstruction of single-cell gene expression data.", "abstract": "Spatial localization is a key determinant of cellular fate and behavior, but methods for spatially resolved, transcriptome-wide gene expression profiling across complex tissues are lacking. RNA staining methods assay only a small number of transcripts, whereas single-cell RNA-seq, which measures global gene expression, separates cells from their native spatial context. Here we present Seurat, a computational strategy to infer cellular localization by integrating single-cell RNA-seq data with in situ RNA patterns. We applied Seurat to spatially map 851 single cells from dissociated zebrafish (Danio rerio) embryos and generated a transcriptome-wide map of spatial patterning. We confirmed Seurat's accuracy using several experimental approaches, then used the strategy to identify a set of archetypal expression patterns and spatial markers. Seurat correctly localizes rare subpopulations, accurately mapping both spatially restricted and scattered groups. Seurat will be applicable to mapping cellular localization within complex patterned tissues in diverse systems." }, { "pmid": "28346451", "title": "SC3: consensus clustering of single-cell RNA-seq data.", "abstract": "Single-cell RNA-seq enables the quantitative characterization of cell types based on global transcriptome profiles. 
We present single-cell consensus clustering (SC3), a user-friendly tool for unsupervised clustering, which achieves high accuracy and robustness by combining multiple clustering solutions through a consensus approach (http://bioconductor.org/packages/SC3). We demonstrate that SC3 is capable of identifying subclones from the transcriptomes of neoplastic cells collected from patients." }, { "pmid": "26431182", "title": "Single Cell RNA-Sequencing of Pluripotent States Unlocks Modular Transcriptional Variation.", "abstract": "Embryonic stem cell (ESC) culture conditions are important for maintaining long-term self-renewal, and they influence cellular pluripotency state. Here, we report single cell RNA-sequencing of mESCs cultured in three different conditions: serum, 2i, and the alternative ground state a2i. We find that the cellular transcriptomes of cells grown in these conditions are distinct, with 2i being the most similar to blastocyst cells and including a subpopulation resembling the two-cell embryo state. Overall levels of intercellular gene expression heterogeneity are comparable across the three conditions. However, this masks variable expression of pluripotency genes in serum cells and homogeneous expression in 2i and a2i cells. Additionally, genes related to the cell cycle are more variably expressed in the 2i and a2i conditions. Mining of our dataset for correlations in gene expression allowed us to identify additional components of the pluripotency network, including Ptma and Zfp640, illustrating its value as a resource for future discovery." }, { "pmid": "24739965", "title": "Reconstructing lineage hierarchies of the distal lung epithelium using single-cell RNA-seq.", "abstract": "The mammalian lung is a highly branched network in which the distal regions of the bronchial tree transform during development into a densely packed honeycomb of alveolar air sacs that mediate gas exchange. Although this transformation has been studied by marker expression analysis and fate-mapping, the mechanisms that control the progression of lung progenitors along distinct lineages into mature alveolar cell types are still incompletely known, in part because of the limited number of lineage markers and the effects of ensemble averaging in conventional transcriptome analysis experiments on cell populations. Here we show that single-cell transcriptome analysis circumvents these problems and enables direct measurement of the various cell types and hierarchies in the developing lung. We used microfluidic single-cell RNA sequencing (RNA-seq) on 198 individual cells at four different stages encompassing alveolar differentiation to measure the transcriptional states which define the developmental and cellular hierarchy of the distal mouse lung epithelium. We empirically classified cells into distinct groups by using an unbiased genome-wide approach that did not require a priori knowledge of the underlying cell types or the previous purification of cell populations. The results confirmed the basic outlines of the classical model of epithelial cell-type diversity in the distal lung and led to the discovery of many previously unknown cell-type markers, including transcriptional regulators that discriminate between the different populations. We reconstructed the molecular steps during maturation of bipotential progenitors along both alveolar lineages and elucidated the full life cycle of the alveolar type 2 cell lineage. 
This single-cell genomics approach is applicable to any developing or mature tissue to robustly delineate molecularly distinct cell types, define progenitors and lineage hierarchies, and identify lineage-specific regulatory factors." }, { "pmid": "27899570", "title": "Mouse Genome Database (MGD)-2017: community knowledge resource for the laboratory mouse.", "abstract": "The Mouse Genome Database (MGD: http://www.informatics.jax.org) is the primary community data resource for the laboratory mouse. It provides a highly integrated and highly curated system offering a comprehensive view of current knowledge about mouse genes, genetic markers and genomic features as well as the associations of those features with sequence, phenotypes, functional and comparative information, and their relationships to human diseases. MGD continues to enhance access to these data, to extend the scope of data content and visualizations, and to provide infrastructure and user support that ensures effective and efficient use of MGD in the advancement of scientific knowledge. Here, we report on recent enhancements made to the resource and new features." }, { "pmid": "26669357", "title": "Desmoplakin Variants Are Associated with Idiopathic Pulmonary Fibrosis.", "abstract": "RATIONALE\nSequence variation, methylation differences, and transcriptional changes in desmoplakin (DSP) have been observed in patients with idiopathic pulmonary fibrosis (IPF).\n\n\nOBJECTIVES\nTo identify novel variants in DSP associated with IPF and to characterize the relationship of these IPF sequence variants with DSP gene expression in human lung.\n\n\nMETHODS\nA chromosome 6 locus (7,370,061-7,606,946) was sequenced in 230 subjects with IPF and 228 control subjects. Validation genotyping of disease-associated variants was conducted in 936 subjects with IPF and 936 control subjects. DSP gene expression was measured in lung tissue from 334 subjects with IPF and 201 control subjects.\n\n\nMEASUREMENTS AND MAIN RESULTS\nWe identified 23 sequence variants in the chromosome 6 locus associated with IPF. Genotyping of selected variants in our validation cohort revealed that noncoding intron 1 variant rs2744371 (odds ratio = 0.77, 95% confidence interval [CI] = 0.66-0.91, P = 0.002) is protective for IPF, and a previously described IPF-associated intron 5 variant (rs2076295) is associated with increased risk of IPF (odds ratio = 1.36, 95% CI = 1.19-1.56, P < 0.001) after controlling for sex and age. DSP expression is 2.3-fold increased (95% CI = 1.91-2.71) in IPF lung tissue (P < 0.0001). Only the minor allele at rs2076295 is associated with decreased DSP expression (P = 0.001). Staining of fibrotic and normal human lung tissue localized DSP to airway epithelia.\n\n\nCONCLUSIONS\nSequence variants in DSP are associated with IPF, and rs2076295 genotype is associated with differential expression of DSP in the lung. DSP expression is increased in IPF lung and concentrated in the airway epithelia, suggesting a potential role for DSP in the pathogenesis of IPF." }, { "pmid": "27364009", "title": "Epithelial Notch signaling regulates lung alveolar morphogenesis and airway epithelial integrity.", "abstract": "Abnormal enlargement of the alveolar spaces is a hallmark of conditions such as chronic obstructive pulmonary disease and bronchopulmonary dysplasia. Notch signaling is crucial for differentiation and regeneration and repair of the airway epithelium. 
However, how Notch influences the alveolar compartment and integrates this process with airway development remains little understood. Here we report a prominent role of Notch signaling in the epithelial-mesenchymal interactions that lead to alveolar formation in the developing lung. We found that alveolar type II cells are major sites of Notch2 activation and show by Notch2-specific epithelial deletion (Notch2(cNull)) a unique contribution of this receptor to alveologenesis. Epithelial Notch2 was required for type II cell induction of the PDGF-A ligand and subsequent paracrine activation of PDGF receptor-α signaling in alveolar myofibroblast progenitors. Moreover, Notch2 was crucial in maintaining the integrity of the epithelial and smooth muscle layers of the distal conducting airways. Our data suggest that epithelial Notch signaling regulates multiple aspects of postnatal development in the distal lung and may represent a potential target for intervention in pulmonary diseases." }, { "pmid": "1355776", "title": "Genetic linkage of recessive dystrophic epidermolysis bullosa to the type VII collagen gene.", "abstract": "Generalized mutilating recessive dystrophic epidermolysis bullosa (RDEB) is characterized by extreme skin fragility owing to loss of dermal-epidermal adherence. Immunohistochemical studies have implicated type VII collagen, the major component of anchoring fibrils, in the etiology of RDEB. In this study, we demonstrate genetic linkage of the type VII collagen gene and the generalized mutilating RDEB phenotype. We first identified a Pvull polymorphic site by digestion of an amplified product of the type VII collagen gene, which was shown to reside within the coding region. Genetic linkage analysis between this marker and the RDEB phenotype in 19 affected families which were informative for this polymorphism showed no recombination events, and gave a maximum lod score of 3.97 at a recombination fraction (theta) of 0, demonstrating that this DNA region is involved in this form of RDEB. These data provide strong evidence that the type VII collagen gene, which has also been linked with the dominant form of the disease, harbors the mutation(s) causing the generalized mutilating form of RDEB in these families, thus underscoring the major functional importance of type VII collagen in basement membrane zone stability." }, { "pmid": "3818794", "title": "Type VII collagen forms an extended network of anchoring fibrils.", "abstract": "Type VII collagen is one of the newly identified members of the collagen family. A variety of evidence, including ultrastructural immunolocalization, has previously shown that type VII collagen is a major structural component of anchoring fibrils, found immediately beneath the lamina densa of many epithelia. In the present study, ultrastructural immunolocalization with monoclonal and monospecific polyclonal antibodies to type VII collagen and with a monoclonal antibody to type IV collagen indicates that amorphous electron-dense structures which we term \"anchoring plaques\" are normal features of the basement membrane zone of skin and cornea. These plaques contain type IV collagen and the carboxyl-terminal domain of type VII collagen. Banded anchoring fibrils extend from both the lamina densa and from these plaques, and can be seen bridging the plaques with the lamina densa and with other anchoring plaques. 
These observations lead to the postulation of a multilayered network of anchoring fibrils and anchoring plaques which underlies the basal lamina of several anchoring fibril-containing tissues. This extended network is capable of entrapping a large number of banded collagen fibers, microfibrils, and other stromal matrix components. These observations support the hypothesis that anchoring fibrils provide additional adhesion of the lamina densa to its underlying stroma." }, { "pmid": "25803200", "title": "From marrow to matrix: novel gene and cell therapies for epidermolysis bullosa.", "abstract": "Epidermolysis bullosa encompasses a group of inherited connective tissue disorders that range from mild to lethal. There is no cure, and current treatment is limited to palliative care that is largely ineffective in treating the systemic, life-threatening pathology associated with the most severe forms of the disease. Although allogeneic cell- and protein-based therapies have shown promise, both novel and combinatorial approaches will undoubtedly be required to totally alleviate the disorder. Progress in the development of next-generation therapies that synergize targeted gene-correction and induced pluripotent stem cell technologies offers exciting prospects for personalized, off-the-shelf treatment options that could avoid many of the limitations associated with current allogeneic cell-based therapies. Although no single therapeutic avenue has achieved complete success, each has substantially increased our collective understanding of the complex biology underlying the disease, both providing mechanistic insights and uncovering new hurdles that must be overcome." }, { "pmid": "12472552", "title": "Quality of life in epidermolysis bullosa.", "abstract": "The quality of life of people with epidermolysis bullosa (EB) living in Scotland was assessed by postal questionnaire using the Dermatology Life Quality Index (DLQI) and the Children's Dermatology Life Quality Index (CDLQI). There were 143 people with EB simplex (EBS) and 99 individuals with non-Hallopeau--Siemens subtypes of dystrophic EB (DEB). A further six individuals had the severe Hallopeau--Siemens subtype of DEB (RDEB-HS). The overall response was 48% (EBS 52%, DEB 40% and RDEB-HS 83%). Impairment of quality of life (QOL) was greatest in those with RDEB-HS, mean scores (adults, 18; children, 22) exceeding those of any skin disorder previously assessed. The effect on QOL of EBS and other subtypes of DEB was similar to that of moderately severe psoriasis and eczema. EBS had a greater impact on QOL than the non-Hallopeau--Siemens subtypes of DEB (EBS adults mean score, 10.7; EBS children mean score, 15; DEB adults mean score, 7.5; DEB children mean score, 11.5)." }, { "pmid": "19945622", "title": "Dystrophic epidermolysis bullosa: pathogenesis and clinical features.", "abstract": "Dystrophic epidermolysis bullosa (DEB) is relatively well understood. Potential therapies are in development. This article describes the pathogenesis and clinical features of DEB. It also describes therapeutic options and the future of molecular therapies." }, { "pmid": "19945632", "title": "Understanding the pathogenesis of recessive dystrophic epidermolysis bullosa squamous cell carcinoma.", "abstract": "Patients with recessive dystrophic epidermolysis bullosa develop numerous life-threatening skin cancers. The reasons for this remain unclear. Parallels exist with other scarring skin conditions, such as Marjolin ulcer. 
We summarize observational and experimental data and discuss proposed theories for the development of such aggressive skin cancers. A context-driven situation seems to be emerging, but more focused research is required to elucidate the pathogenesis of epidermolysis bullosa-associated squamous cell carcinoma." }, { "pmid": "20818854", "title": "Bone marrow transplantation for recessive dystrophic epidermolysis bullosa.", "abstract": "BACKGROUND\nRecessive dystrophic epidermolysis bullosa is an incurable, often fatal mucocutaneous blistering disease caused by mutations in COL7A1, the gene encoding type VII collagen (C7). On the basis of preclinical data showing biochemical correction and prolonged survival in col7 −/− mice, we hypothesized that allogeneic marrow contains stem cells capable of ameliorating the manifestations of recessive dystrophic epidermolysis bullosa in humans.\n\n\nMETHODS\nBetween October 2007 and August 2009, we treated seven children who had recessive dystrophic epidermolysis bullosa with immunomyeloablative chemotherapy and allogeneic stem-cell transplantation. We assessed C7 expression by means of immunofluorescence staining and used transmission electron microscopy to visualize anchoring fibrils. We measured chimerism by means of competitive polymerase-chain-reaction assay, and documented blister formation and wound healing with the use of digital photography.\n\n\nRESULTS\nOne patient died of cardiomyopathy before transplantation. Of the remaining six patients, one had severe regimen-related cutaneous toxicity, with all having improved wound healing and a reduction in blister formation between 30 and 130 days after transplantation. We observed increased C7 deposition at the dermal-epidermal junction in five of the six recipients, albeit without normalization of anchoring fibrils. Five recipients were alive 130 to 799 days after transplantation; one died at 183 days as a consequence of graft rejection and infection. The six recipients had substantial proportions of donor cells in the skin, and none had detectable anti-C7 antibodies.\n\n\nCONCLUSIONS\nIncreased C7 deposition and a sustained presence of donor cells were found in the skin of children with recessive dystrophic epidermolysis bullosa after allogeneic bone marrow transplantation. Further studies are needed to assess the long-term risks and benefits of such therapy in patients with this disorder. (Funded by the National Institutes of Health; ClinicalTrials.gov number, NCT00478244.)" }, { "pmid": "25601922", "title": "Transplanted bone marrow-derived circulating PDGFRα+ cells restore type VII collagen in recessive dystrophic epidermolysis bullosa mouse skin graft.", "abstract": "Recessive dystrophic epidermolysis bullosa (RDEB) is an intractable genetic blistering skin disease in which the epithelial structure easily separates from the underlying dermis because of genetic loss of functional type VII collagen (Col7) in the cutaneous basement membrane zone. Recent studies have demonstrated that allogeneic bone marrow transplantation (BMT) ameliorates the skin blistering phenotype of RDEB patients by restoring Col7. However, the exact therapeutic mechanism of BMT in RDEB remains unclear. In this study, we investigated the roles of transplanted bone marrow-derived circulating mesenchymal cells in RDEB (Col7-null) mice. 
In wild-type mice with prior GFP-BMT after lethal irradiation, lineage-negative/GFP-positive (Lin(-)/GFP(+)) cells, including platelet-derived growth factor receptor α-positive (PDGFRα(+)) mesenchymal cells, specifically migrated to skin grafts from RDEB mice and expressed Col7. Vascular endothelial cells and follicular keratinocytes in the deep dermis of the skin grafts expressed SDF-1α, and the bone marrow-derived PDGFRα(+) cells expressed CXCR4 on their surface. Systemic administration of the CXCR4 antagonist AMD3100 markedly decreased the migration of bone marrow-derived PDGFRα(+) cells into the skin graft, resulting in persistent epidermal detachment with massive necrosis and inflammation in the skin graft of RDEB mice; without AMD3100 administration, Col7 was significantly supplemented to ameliorate the pathogenic blistering phenotype. Collectively, these data suggest that the SDF1α/CXCR4 signaling axis induces transplanted bone marrow-derived circulating PDGFRα(+) mesenchymal cells to migrate and supply functional Col7 to regenerate RDEB skin." }, { "pmid": "23711070", "title": "Serum levels of high mobility group box 1 correlate with disease severity in recessive dystrophic epidermolysis bullosa.", "abstract": "In the inherited blistering skin disease, recessive dystrophic epidermolysis bullosa (RDEB), there is clinical heterogeneity with variable scarring and susceptibility to malignancy. Currently, however, there are few biochemical markers of tissue inflammation or disease progression. We assessed whether the non-histone nuclear protein, high mobility group box 1 (HMGB1), which is released from necrotic cells (including keratinocytes in blister roofs), might be elevated in RDEB and whether this correlates with disease severity. We measured serum HMGB1 by ELISA in 26 RDEB individuals (median 21.0 ng/ml, range 3.6-54.9 ng/ml) and 23 healthy controls (median 3.6, range 3.4-5.9 ng/ml) and scored RDEB severity using the Birmingham Epidermolysis Bullosa Severity Score (BEBSS; mean 34/100, range 8-82). There was a positive relationship between the BEBSS and HMGB1 levels (r = 0.54, P = 0.004). This study indicates that serum HMGB1 levels may represent a new biomarker reflecting disease severity in RDEB." }, { "pmid": "21464317", "title": "PDGFRalpha-positive cells in bone marrow are mobilized by high mobility group box 1 (HMGB1) to regenerate injured epithelia.", "abstract": "The role of bone marrow cells in repairing ectodermal tissue, such as skin epidermis, is not clear. To explore this process further, this study examined a particular form of cutaneous repair, skin grafting. Grafting of full thickness wild-type mouse skin onto mice that had received a green fluorescent protein-bone marrow transplant after whole body irradiation led to an abundance of bone marrow-derived epithelial cells in follicular and interfollicular epidermis that persisted for at least 5 mo. The source of the epithelial progenitors was the nonhematopoietic, platelet-derived growth factor receptor α-positive (Lin(-)/PDGFRα(+)) bone marrow cell population. Skin grafts release high mobility group box 1 (HMGB1) in vitro and in vivo, which can mobilize the Lin(-)/PDGFRα(+) cells from bone marrow to target the engrafted skin. These data provide unique insight into how skin grafts facilitate tissue repair and identify strategies germane to regenerative medicine for skin and, perhaps, other ectodermal defects or diseases." 
}, { "pmid": "27609069", "title": "A COL11A1-correlated pan-cancer gene signature of activated fibroblasts for the prioritization of therapeutic targets.", "abstract": "Although cancer-associated fibroblasts (CAFs) are viewed as a promising therapeutic target, the design of rational therapy has been hampered by two key obstacles. First, attempts to ablate CAFs have resulted in significant toxicity because currently used biomarkers cannot effectively distinguish activated CAFs from non-cancer associated fibroblasts and mesenchymal progenitor cells. Second, it is unclear whether CAFs in different organs have different molecular and functional properties that necessitate organ-specific therapeutic designs. Our analyses uncovered COL11A1 as a highly specific biomarker of activated CAFs. Using COL11A1 as a 'seed', we identified co-expressed genes in 13 types of primary carcinoma in The Cancer Genome Atlas. We demonstrated that a molecular signature of activated CAFs is conserved in epithelial cancers regardless of organ site and transforming events within cancer cells, suggesting that targeting fibroblast activation should be effective in multiple cancers. We prioritized several potential pan-cancer therapeutic targets that are likely to have high specificity for activated CAFs and minimal toxicity in normal tissues." }, { "pmid": "26141517", "title": "Gremlin is a key pro-fibrogenic factor in chronic pancreatitis.", "abstract": "UNLABELLED\nThe current study aims to identify the pro-fibrogenic role of Gremlin, an endogenous antagonist of bone morphogenetic proteins (BMPs) in chronic pancreatitis (CP). CP is a highly debilitating disease characterized by progressive pancreatic inflammation and fibrosis that ultimately leads to exocrine and endocrine dysfunction. While transforming growth factor (TGF)-β is a known key pro-fibrogenic factor in CP, the TGF-β superfamily member BMPs exert an anti-fibrogenic function in CP as reported by our group recently. To investigate how BMP signaling is regulated in CP by BMP antagonists, the mouse CP model induced by cerulein was used. During CP induction, TGF-β1 messenger RNA (mRNA) increased 156-fold in 2 weeks, a BMP antagonist Gremlin 1 (Grem1) mRNA levels increased 145-fold at 3 weeks, and increases in Grem1 protein levels correlated with increases in collagen deposition. Increased Grem1 was also observed in human CP pancreata compared to normal. Grem1 knockout in Grem1 (+/-) mice revealed a 33.2 % reduction in pancreatic fibrosis in CP compared to wild-type littermates. In vitro in isolated pancreatic stellate cells, TGF-β induced Grem1 expression. Addition of the recombinant mouse Grem1 protein blocked BMP2-induced Smad1/5 phosphorylation and abolished BMP2's suppression effects on TGF-β-induced collagen expression. Evidences presented herein demonstrate that Grem1, induced by TGF-β, is pro-fibrogenic by antagonizing BMP activity in CP.\n\n\nKEY MESSAGES\n• Gremlin is upregulated in human chronic pancreatitis and a mouse CP model in vivo. • Deficiency of Grem1 in mice attenuates pancreatic fibrosis under CP induction in vivo. • TGF-β induces Gremlin mRNA and protein expression in pancreatic stellate cells in vitro. • Gremlin blocks BMP2 signaling and function in pancreatic stellate cells in vitro. • This study discloses a pro-fibrogenic role of Gremlin by antagonizing BMP activity in chronic pancreatitis." 
}, { "pmid": "28100499", "title": "Gremlin1 plays a key role in kidney development and renal fibrosis.", "abstract": "Gremlin1 (Grem1), an antagonist of bone morphogenetic proteins, plays a key role in embryogenesis. A highly specific temporospatial gradient of Grem1 and bone morphogenetic protein signaling is critical to normal lung, kidney, and limb development. Grem1 levels are increased in renal fibrotic conditions, including acute kidney injury, diabetic nephropathy, chronic allograft nephropathy, and immune glomerulonephritis. We demonstrate that a small number of grem1 whole body knockout mice on a mixed genetic background (8%) are viable, with a single, enlarged left kidney and grossly normal histology. The grem1 mice displayed mild renal dysfunction at 4 wk, which recovered by 16 wk. Tubular epithelial cell-specific targeted deletion of Grem1 (TEC-grem1-cKO) mice displayed a milder response in the acute injury and recovery phases of the folic acid model. Increases in indexes of kidney damage were smaller in TEC-grem1-cKO than wild-type mice. In the recovery phase of the folic acid model, associated with renal fibrosis, TEC-grem1-cKO mice displayed reduced histological damage and an attenuated fibrotic gene response compared with wild-type controls. Together, these data demonstrate that Grem1 expression in the tubular epithelial compartment plays a significant role in the fibrotic response to renal injury in vivo." }, { "pmid": "25963142", "title": "The microfibril-associated glycoproteins (MAGPs) and the microfibrillar niche.", "abstract": "The microfibril-associated glycoproteins MAGP-1 and MAGP-2 are extracellular matrix proteins that interact with fibrillin to influence microfibril function. The two proteins are related through a 60 amino acid matrix-binding domain but their sequences differ outside of this region. A distinguishing feature of both proteins is their ability to interact with TGFβ family growth factors, Notch and Notch ligands, and multiple elastic fiber proteins. MAGP-2 can also interact with αvβ3 integrins via a RGD sequence that is not found in MAGP-1. Morpholino knockdown of MAGP-1 expression in zebrafish resulted in abnormal vessel wall architecture and altered vascular network formation. In the mouse, MAGP-1 deficiency had little effect on elastic fibers in blood vessels and lung but resulted in numerous unexpected phenotypes including bone abnormalities, hematopoietic changes, increased fat deposition, diabetes, impaired wound repair, and a bleeding diathesis. Inactivation of the gene for MAGP-2 in mice produced a neutropenia yet had minimal effects on bone or adipose homeostasis. Double knockouts had phenotypes characteristic of each individual knockout as well as several additional traits only seen when both genes are inactivated. A common mechanism underlying all of the traits associated with the knockout phenotypes is altered TGFβ signaling. This review summarizes our current understanding of the function of the MAGPs and discusses ideas related to their role in growth factor regulation." }, { "pmid": "25628217", "title": "Computational and analytical challenges in single-cell transcriptomics.", "abstract": "The development of high-throughput RNA sequencing (RNA-seq) at the single-cell level has already led to profound new discoveries in biology, ranging from the identification of novel cell types to the study of global patterns of stochastic gene expression. 
Alongside the technological breakthroughs that have facilitated the large-scale generation of single-cell transcriptomic data, it is important to consider the specific computational and analytical challenges that still have to be overcome. Although some tools for analysing RNA-seq data from bulk cell populations can be readily applied to single-cell RNA-seq data, many new computational strategies are required to fully exploit this data type and to enable a comprehensive yet detailed study of gene expression at the single-cell level." }, { "pmid": "28333934", "title": "Visualizing the structure of RNA-seq expression data using grade of membership models.", "abstract": "Grade of membership models, also known as \"admixture models\", \"topic models\" or \"Latent Dirichlet Allocation\", are a generalization of cluster models that allow each sample to have membership in multiple clusters. These models are widely used in population genetics to model admixed individuals who have ancestry from multiple \"populations\", and in natural language processing to model documents having words from multiple \"topics\". Here we illustrate the potential for these models to cluster samples of RNA-seq gene expression data, measured on either bulk samples or single cells. We also provide methods to help interpret the clusters, by identifying genes that are distinctively expressed in each cluster. By applying these methods to several example RNA-seq applications we demonstrate their utility in identifying and summarizing structure and heterogeneity. Applied to data from the GTEx project on 53 human tissues, the approach highlights similarities among biologically-related tissues and identifies distinctively-expressed genes that recapitulate known biology. Applied to single-cell expression data from mouse preimplantation embryos, the approach highlights both discrete and continuous variation through early embryonic development stages, and highlights genes involved in a variety of relevant processes-from germ cell development, through compaction and morula formation, to the formation of inner cell mass and trophoblast at the blastocyst stage. The methods are implemented in the Bioconductor package CountClust." } ]
Frontiers in Neuroscience
29713262
PMC5911500
10.3389/fnins.2018.00217
A Fast, Open EEG Classification Framework Based on Feature Compression and Channel Ranking
Superior feature extraction, channel selection, and classification methods are essential for designing electroencephalography (EEG) classification frameworks. However, the performance of most frameworks is limited by improper channel selection and overly specialized designs, leading to high computational complexity, non-convergent procedures, and limited extensibility. In this paper, to remedy these drawbacks, we propose a fast, open EEG classification framework centered on EEG feature compression, low-dimensional representation, and convergent iterative channel ranking. First, to reduce complexity, we use data clustering to compress the EEG features channel-wise, packing the high-dimensional EEG signals and endowing them with numerical signatures. Second, to provide easy access to alternative superior methods, we structurally represent each EEG trial as a feature vector with its corresponding numerical signature. Thus, the recorded signals of many trials shrink to a low-dimensional structural matrix compatible with most pattern recognition methods. Third, a series of effective iterative feature selection approaches with theoretical convergence is introduced to rank the EEG channels and remove redundant ones, further accelerating the EEG classification process and ensuring its stability. Finally, a classical linear discriminant analysis (LDA) model is employed to classify a single EEG trial using the selected channels. Experimental results on two real-world brain-computer interface (BCI) competition datasets demonstrate the promising performance of the proposed framework compared with state-of-the-art methods.
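Read as pseudocode, the pipeline in the abstract can be sketched as follows. The Python fragment below is a simplified skeleton under stated assumptions: per-trial, per-channel feature vectors are compressed into cluster-index "signatures" with k-means, each trial becomes a low-dimensional vector of channel signatures, channels are ranked (here with a simple univariate F-score as a stand-in for the paper's convergent iterative selection methods, i.e., this step is not the authors' algorithm), and LDA classifies trials on the kept channels. Function and parameter names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import f_classif

def compress_channels(feats, n_codes=16, seed=0):
    """feats: (n_trials, n_channels, n_features) array of per-channel features.
    Each channel's per-trial feature vectors are clustered and replaced by the
    cluster index ('numerical signature'), yielding an (n_trials, n_channels)
    structural matrix."""
    n_trials, n_channels, _ = feats.shape
    codes = np.zeros((n_trials, n_channels))
    for c in range(n_channels):
        km = KMeans(n_clusters=n_codes, n_init=10, random_state=seed)
        codes[:, c] = km.fit_predict(feats[:, c, :])
    return codes

def rank_and_classify(codes, y, n_keep=10):
    """Rank channels with a univariate F-score (a simple stand-in for the
    convergent iterative selection methods used in the paper) and fit LDA
    on the retained channels."""
    scores, _ = f_classif(codes, y)
    keep = np.argsort(scores)[::-1][:n_keep]
    clf = LinearDiscriminantAnalysis().fit(codes[:, keep], y)
    return clf, keep
```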
2. Related work

In the past decades, great efforts have been made to address the EEG classification problem, mainly involving feature extraction and channel selection.

One of the most widely used EEG feature extraction methods is common spatial patterns (CSP). CSP utilizes covariance analysis to amplify the class disparity in the spatial domain by combining signals from different channels (Blankertz et al., 2007; Li et al., 2013). Improved CSPs, e.g., common spatio-spectral patterns (CSSP) (Lemm et al., 2005), iterative spatio-spectral pattern learning (ISSPL) (Wu et al., 2008), and filter bank common spatial patterns (FBCSP) (Kai et al., 2012), mainly optimize the combination of multi-channel signals by introducing an evaluation of spectral weight coefficients. These spatial feature extraction methods are effective at capturing the pivotal spatial information contained in EEG signals; however, signal combination cannot be used in practice to reduce the number of channels or the data scale. Apart from the spatial domain, extracting features in the temporal and frequency domains is also prevalent in many works. Multivariate empirical mode decomposition (MEMD) generates multidimensional envelopes by projecting signals along multiple directions in multidimensional spaces (Rehman and Mandic, 2010; Islam et al., 2012). The autoregressive (AR) model assumes that EEG signals can be approximated by an AR process, with the parameters of the fitted AR models serving as features (Zabidi et al., 2013). TDP was introduced as a set of broadband features based on the variances of the EEG signals and their derivatives of various orders (Vidaurre et al., 2009). A wavelet transform (WT) provides both frequency and time information about EEG signals, with fine frequency resolution at low frequencies and fine time resolution at high frequencies (Jahankhani et al., 2006). We utilize SP based on AR and TDP as two feature extraction examples in this paper.

EEG channel selection has been extensively studied. For instance, multi-objective genetic algorithms (GA) (Kee et al., 2015) and Rayleigh coefficient maximization based GA (He et al., 2013) were introduced to simultaneously optimize the number of selected channels and improve the system accuracy by embedding classifiers into the GA process. Recursive feature elimination (Guyon and Elisseeff, 2003) and zero-norm optimization (Weston et al., 2003) based on the training of support vector machines (SVMs) were used to reduce the number of channels without decreasing the motor imagery EEG classification accuracy (Lal et al., 2004). Sequential floating forward selection (SFFS) (Pudil et al., 1994) and the successive improved SFFS (ISFFS) (Zhaoyang et al., 2016) adopted an iterative channel selection strategy that selects the most significant feature from the remaining features and dynamically deletes the least meaningful feature from the selected feature subset. A Gaussian conjugate group-sparse prior was incorporated into the classical empirical Bayesian linear model to obtain a group-sparse Bayesian linear discriminant analysis (gsBLDA) method for simultaneous channel selection and EEG classification (Yu et al., 2015). Mean ReliefF channel selection (MRCS) adopted an iterative strategy to adjust the ReliefF-based weights of channels according to their contribution to the SVM classification accuracy (Zhang et al., 2016). 
The Fisher criterion, based on Fisher's discriminant analysis, was utilized to evaluate the discrimination of TDP features extracted from all channels in different time segments via channel selection using time information (CSTI) methods (Yang et al., 2016). GA, pattern classification using multi-layer perceptrons (MLP), and rule extraction based on mathematical programming were combined to create a generic neural mathematical method (GNMM) for selecting EEG channels (Yang et al., 2012). However, a visible limitation is that the convergence of most of these methods (e.g., gsBLDA) is unstable, leading to an uncertain channel selection procedure and an unstable selected subset.

It is therefore natural to select EEG channels by taking advantage of feature selection methods from other areas, e.g., robust feature selection (RFS) (Nie et al., 2010), joint embedding learning and sparse regression (JELSR) (Hou et al., 2011), nonnegative discriminative feature selection (NDFS) (Li et al., 2012), selecting a feature subset with sparsity and low redundancy (FSLR) (Han et al., 2015b), robust unsupervised feature selection (RUFS) (Qian and Zhai, 2013), joint Laplacian feature weights learning (JLFWL) (Yan and Yang, 2014), a general augmented Lagrangian multiplier method (FS_ALM) (Cai et al., 2013), and structural sparse least square regression based on the l0-norm (SSLSR) (Han et al., 2015a). RFS, JELSR, and NDFS focus on l2,1-norm minimization regularization to develop an accurate and compact representation of the original data. RUFS addresses the combined objective of robust clustering and robust feature selection using a limited-memory BFGS based iterative solution. JLFWL selects important features based on l2-norm regularization and determines the optimal size of the feature subset according to the number of positive feature weights. FSLR retains the preserving power while achieving high sparsity and low redundancy in a unified manner. FS_ALM and SSLSR address least square regression with l0-norm regularization by introducing a Lagrange multiplier and a direct greedy algorithm, respectively. Deep learning has also been utilized in this area: a point-wise gated convolutional deep network was developed to dynamically select key features using a gating mechanism (Zhong et al., 2016), multi-modal deep Boltzmann machines were employed to select important genes (biomarkers) in gene expression data (Syafiandini et al., 2016), and a deep sparse multi-task architecture was exploited to recursively discard uninformative features for Alzheimer's disease diagnosis (Suk et al., 2016). In this paper, we employ three performance-verified feature selection methods with theoretical convergence, i.e., RFS, RUFS, and SSLSR, to rank and select channels.
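As a rough illustration of the shared idea behind the l2,1-norm family of selectors named above (not an implementation of RFS, RUFS, or SSLSR themselves), one can learn a weight matrix mapping channel-wise features to class indicators and rank channels by the norm of their block of weight rows. Everything below (toy data, the ridge solver used in place of the sparse regressions, the dimensions) is an assumption made only for the sketch.

import numpy as np

rng = np.random.default_rng(1)
n_trials, n_channels, n_feats = 120, 22, 8
X = rng.normal(size=(n_trials, n_channels * n_feats))  # flattened trial features
y = rng.integers(0, 2, size=n_trials)
Y = np.eye(2)[y]                                       # one-hot class indicators

# Closed-form ridge regression as a simple stand-in for the sparse regressions:
# W = (X^T X + lam * I)^{-1} X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Group the rows of W by channel and score each channel by its block norm.
# l2,1-style regularizers drive whole blocks toward zero, so channels with
# large block norms are the ones carrying discriminative information.
W_blocks = W.reshape(n_channels, n_feats, 2)
channel_scores = np.linalg.norm(W_blocks, axis=(1, 2))
ranking = np.argsort(channel_scores)[::-1]
print("channel ranking (best first):", ranking[:10])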
[ "22479236", "23735712", "15188871", "16189967", "23919646", "18430974", "28442746", "25993900", "19660908", "18714838", "22503644", "25794393", "27669247", "27575684" ]
[ { "pmid": "22479236", "title": "Filter Bank Common Spatial Pattern Algorithm on BCI Competition IV Datasets 2a and 2b.", "abstract": "The Common Spatial Pattern (CSP) algorithm is an effective and popular method for classifying 2-class motor imagery electroencephalogram (EEG) data, but its effectiveness depends on the subject-specific frequency band. This paper presents the Filter Bank Common Spatial Pattern (FBCSP) algorithm to optimize the subject-specific frequency band for CSP on Datasets 2a and 2b of the Brain-Computer Interface (BCI) Competition IV. Dataset 2a comprised 4 classes of 22 channels EEG data from 9 subjects, and Dataset 2b comprised 2 classes of 3 bipolar channels EEG data from 9 subjects. Multi-class extensions to FBCSP are also presented to handle the 4-class EEG data in Dataset 2a, namely, Divide-and-Conquer (DC), Pair-Wise (PW), and One-Versus-Rest (OVR) approaches. Two feature selection algorithms are also presented to select discriminative CSP features on Dataset 2b, namely, the Mutual Information-based Best Individual Feature (MIBIF) algorithm, and the Mutual Information-based Rough Set Reduction (MIRSR) algorithm. The single-trial classification accuracies were presented using 10 × 10-fold cross-validations on the training data and session-to-session transfer on the evaluation data from both datasets. Disclosure of the test data labels after the BCI Competition IV showed that the FBCSP algorithm performed relatively the best among the other submitted algorithms and yielded a mean kappa value of 0.569 and 0.600 across all subjects in Datasets 2a and 2b respectively." }, { "pmid": "23735712", "title": "Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface.", "abstract": "OBJECTIVE\nAt the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional (3D) physical space using noninvasive scalp electroencephalogram (EEG) in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that the operation of a real world device has on subjects' control in comparison to a 2D virtual cursor task.\n\n\nAPPROACH\nFive human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a 3D physical space. Visual feedback was provided via a forward facing camera on the hull of the drone.\n\n\nMAIN RESULTS\nIndividual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m s(-1).\n\n\nSIGNIFICANCE\nFreely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in 3D physical space using noninvasive scalp recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems for accomplish complex control in 3D physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive BCI control in a physical environment using telepresence robotics." 
}, { "pmid": "15188871", "title": "Support vector channel selection in BCI.", "abstract": "Designing a brain computer interface (BCI) system one can choose from a variety of features that may be useful for classifying brain activity during a mental task. For the special case of classifying electroencephalogram (EEG) signals we propose the usage of the state of the art feature selection algorithms Recursive Feature Elimination and Zero-Norm Optimization which are based on the training of support vector machines (SVM). These algorithms can provide more accurate solutions than standard filter methods for feature selection. We adapt the methods for the purpose of selecting EEG channels. For a motor imagery paradigm we show that the number of used channels can be reduced significantly without increasing the classification error. The resulting best channels agree well with the expected underlying cortical activity patterns during the mental tasks. Furthermore we show how time dependent task specific information can be visualized." }, { "pmid": "16189967", "title": "Spatio-spectral filters for improving the classification of single trial EEG.", "abstract": "Data recorded in electroencephalogram (EEG)-based brain-computer interface experiments is generally very noisy, non-stationary, and contaminated with artifacts that can deteriorate discrimination/classification methods. In this paper, we extend the common spatial pattern (CSP) algorithm with the aim to alleviate these adverse effects. In particular, we suggest an extension of CSP to the state space, which utilizes the method of time delay embedding. As we will show, this allows for individually tuned frequency filters at each electrode position and, thus, yields an improved and more robust machine learning procedure. The advantages of the proposed method over the original CSP method are verified in terms of an improved information transfer rate (bits per trial) on a set of EEG-recordings from experiments of imagined limb movements." }, { "pmid": "23919646", "title": "L1 norm based common spatial patterns decomposition for scalp EEG BCI.", "abstract": "BACKGROUND\nBrain computer interfaces (BCI) is one of the most popular branches in biomedical engineering. It aims at constructing a communication between the disabled persons and the auxiliary equipments in order to improve the patients' life. In motor imagery (MI) based BCI, one of the popular feature extraction strategies is Common Spatial Patterns (CSP). In practical BCI situation, scalp EEG inevitably has the outlier and artifacts introduced by ocular, head motion or the loose contact of electrodes in scalp EEG recordings. Because outlier and artifacts are usually observed with large amplitude, when CSP is solved in view of L2 norm, the effect of outlier and artifacts will be exaggerated due to the imposing of square to outliers, which will finally influence the MI based BCI performance. While L1 norm will lower the outlier effects as proved in other application fields like EEG inverse problem, face recognition, etc.\n\n\nMETHODS\nIn this paper, we present a new CSP implementation using the L1 norm technique, instead of the L2 norm, to solve the eigen problem for spatial filter estimation with aim to improve the robustness of CSP to outliers. 
To evaluate the performance of our method, we applied our method as well as the standard CSP and the regularized CSP with Tikhonov regularization (TR-CSP), on both the peer BCI dataset with simulated outliers and the dataset from the MI BCI system developed in our group. The McNemar test is used to investigate whether the difference among the three CSPs is of statistical significance.\n\n\nRESULTS\nThe results of both the simulation and real BCI datasets consistently reveal that the proposed method has much higher classification accuracies than the conventional CSP and the TR-CSP.\n\n\nCONCLUSIONS\nBy combining L1 norm based Eigen decomposition into Common Spatial Patterns, the proposed approach can effectively improve the robustness of BCI system to EEG outliers and thus be potential for the actual MI BCI application, where outliers are inevitably introduced into EEG recordings." }, { "pmid": "18430974", "title": "Sensorimotor rhythm-based brain-computer interface (BCI): model order selection for autoregressive spectral analysis.", "abstract": "People can learn to control EEG features consisting of sensorimotor rhythm amplitudes and can use this control to move a cursor in one or two dimensions to a target on a screen. Cursor movement depends on the estimate of the amplitudes of sensorimotor rhythms. Autoregressive models are often used to provide these estimates. The order of the autoregressive model has varied widely among studies. Through analyses of both simulated and actual EEG data, the present study examines the effects of model order on sensorimotor rhythm measurements and BCI performance. The results show that resolution of lower frequency signals requires higher model orders and that this requirement reflects the temporal span of the model coefficients. This is true for both simulated EEG data and actual EEG data during brain-computer interface (BCI) operation. Increasing model order, and decimating the signal were similarly effective in increasing spectral resolution. Furthermore, for BCI control of two-dimensional cursor movement, higher model orders produced better performance in each dimension and greater independence between horizontal and vertical movements. In sum, these results show that autoregressive model order selection is an important determinant of BCI performance and should be based on criteria that reflect system performance." }, { "pmid": "28442746", "title": "Ultrastructural Characterization of the Lower Motor System in a Mouse Model of Krabbe Disease.", "abstract": "Krabbe disease (KD) is a neurodegenerative disorder caused by the lack of β- galactosylceramidase enzymatic activity and by widespread accumulation of the cytotoxic galactosyl-sphingosine in neuronal, myelinating and endothelial cells. Despite the wide use of Twitcher mice as experimental model for KD, the ultrastructure of this model is partial and mainly addressing peripheral nerves. More details are requested to elucidate the basis of the motor defects, which are the first to appear during KD onset. Here we use transmission electron microscopy (TEM) to focus on the alterations produced by KD in the lower motor system at postnatal day 15 (P15), a nearly asymptomatic stage, and in the juvenile P30 mouse. We find mild effects on motorneuron soma, severe ones on sciatic nerves and very severe effects on nerve terminals and neuromuscular junctions at P30, with peripheral damage being already detectable at P15. 
Finally, we find that the gastrocnemius muscle undergoes atrophy and structural changes that are independent of denervation at P15. Our data further characterize the ultrastructural analysis of the KD mouse model, and support recent theories of a dying-back mechanism for neuronal degeneration, which is independent of demyelination." }, { "pmid": "25993900", "title": "Deep sparse multi-task learning for feature selection in Alzheimer's disease diagnosis.", "abstract": "Recently, neuroimaging-based Alzheimer's disease (AD) or mild cognitive impairment (MCI) diagnosis has attracted researchers in the field, due to the increasing prevalence of the diseases. Unfortunately, the unfavorable high-dimensional nature of neuroimaging data, but a limited small number of samples available, makes it challenging to build a robust computer-aided diagnosis system. Machine learning techniques have been considered as a useful tool in this respect and, among various methods, sparse regression has shown its validity in the literature. However, to our best knowledge, the existing sparse regression methods mostly try to select features based on the optimal regression coefficients in one step. We argue that since the training feature vectors are composed of both informative and uninformative or less informative features, the resulting optimal regression coefficients are inevidently affected by the uninformative or less informative features. To this end, we first propose a novel deep architecture to recursively discard uninformative features by performing sparse multi-task learning in a hierarchical fashion. We further hypothesize that the optimal regression coefficients reflect the relative importance of features in representing the target response variables. In this regard, we use the optimal regression coefficients learned in one hierarchy as feature weighting factors in the following hierarchy, and formulate a weighted sparse multi-task learning method. Lastly, we also take into account the distributional characteristics of samples per class and use clustering-induced subclass label vectors as target response values in our sparse regression model. In our experiments on the ADNI cohort, we performed both binary and multi-class classification tasks in AD/MCI diagnosis and showed the superiority of the proposed method by comparing with the state-of-the-art methods." }, { "pmid": "19660908", "title": "Time Domain Parameters as a feature for EEG-based Brain-Computer Interfaces.", "abstract": "Several feature types have been used with EEG-based Brain-Computer Interfaces. Among the most popular are logarithmic band power estimates with more or less subject-specific optimization of the frequency bands. In this paper we introduce a feature called Time Domain Parameter that is defined by the generalization of the Hjorth parameters. Time Domain Parameters are studied under two different conditions. The first setting is defined when no data from a subject is available. In this condition our results show that Time Domain Parameters outperform all band power features tested with all spatial filters applied. The second setting is the transition from calibration (no feedback) to feedback, in which the frequency content of the signals can change for some subjects. We compare Time Domain Parameters with logarithmic band power in subject-specific bands and show that these features are advantageous in this situation as well." 
}, { "pmid": "18714838", "title": "Classifying single-trial EEG during motor imagery by iterative spatio-spectral patterns learning (ISSPL).", "abstract": "In most current motor-imagery-based brain-computer interfaces (BCIs), machine learning is carried out in two consecutive stages: feature extraction and feature classification. Feature extraction has focused on automatic learning of spatial filters, with little or no attention being paid to optimization of parameters for temporal filters that still require time-consuming, ad hoc manual tuning. In this paper, we present a new algorithm termed iterative spatio-spectral patterns learning (ISSPL) that employs statistical learning theory to perform automatic learning of spatio-spectral filters. In ISSPL, spectral filters and the classifier are simultaneously parameterized for optimization to achieve good generalization performance. A detailed derivation and theoretical analysis of ISSPL are given. Experimental results on two datasets show that the proposed algorithm can correctly identify the discriminative frequency bands, demonstrating the algorithm's superiority over contemporary approaches in classification performance." }, { "pmid": "22503644", "title": "Channel selection and classification of electroencephalogram signals: an artificial neural network and genetic algorithm-based approach.", "abstract": "OBJECTIVE\nAn electroencephalogram-based (EEG-based) brain-computer-interface (BCI) provides a new communication channel between the human brain and a computer. Amongst the various available techniques, artificial neural networks (ANNs) are well established in BCI research and have numerous successful applications. However, one of the drawbacks of conventional ANNs is the lack of an explicit input optimization mechanism. In addition, results of ANN learning are usually not easily interpretable. In this paper, we have applied an ANN-based method, the genetic neural mathematic method (GNMM), to two EEG channel selection and classification problems, aiming to address the issues above.\n\n\nMETHODS AND MATERIALS\nPre-processing steps include: least-square (LS) approximation to determine the overall signal increase/decrease rate; locally weighted polynomial regression (Loess) and fast Fourier transform (FFT) to smooth the signals to determine the signal strength and variations. The GNMM method consists of three successive steps: (1) a genetic algorithm-based (GA-based) input selection process; (2) multi-layer perceptron-based (MLP-based) modelling; and (3) rule extraction based upon successful training. The fitness function used in the GA is the training error when an MLP is trained for a limited number of epochs. By averaging the appearance of a particular channel in the winning chromosome over several runs, we were able to minimize the error due to randomness and to obtain an energy distribution around the scalp. In the second step, a threshold was used to select a subset of channels to be fed into an MLP, which performed modelling with a large number of iterations, thus fine-tuning the input/output relationship. Upon successful training, neurons in the input layer are divided into four sub-spaces to produce if-then rules (step 3). Two datasets were used as case studies to perform three classifications. The first data were electrocorticography (ECoG) recordings that have been used in the BCI competition III. The data belonged to two categories, imagined movements of either a finger or the tongue. 
The data were recorded using an 8 × 8 ECoG platinum electrode grid at a sampling rate of 1000 Hz for a total of 378 trials. The second dataset consisted of a 32-channel, 256 Hz EEG recording of 960 trials where participants had to execute a left- or right-hand button-press in response to left- or right-pointing arrow stimuli. The data were used to classify correct/incorrect responses and left/right hand movements.\n\n\nRESULTS\nFor the first dataset, 100 samples were reserved for testing, and those remaining were for training and validation with a ratio of 90%:10% using K-fold cross-validation. Using the top 10 channels selected by GNMM, we achieved a classification accuracy of 0.80 ± 0.04 for the testing dataset, which compares favourably with results reported in the literature. For the second case, we performed multi-time-windows pre-processing over a single trial. By selecting 6 channels out of 32, we were able to achieve a classification accuracy of about 0.86 for the response correctness classification and 0.82 for the actual responding hand classification, respectively. Furthermore, 139 regression rules were identified after training was completed.\n\n\nCONCLUSIONS\nWe demonstrate that GNMM is able to perform effective channel selections/reductions, which not only reduces the difficulty of data collection, but also greatly improves the generalization of the classifier. An important step that affects the effectiveness of GNMM is the pre-processing method. In this paper, we also highlight the importance of choosing an appropriate time window position." }, { "pmid": "25794393", "title": "Grouped Automatic Relevance Determination and Its Application in Channel Selection for P300 BCIs.", "abstract": "During the development of a brain-computer interface, it is beneficial to exploit information in multiple electrode signals. However, a small channel subset is favored for not only machine learning feasibility, but also practicality in commercial and clinical BCI applications. An embedded channel selection approach based on grouped automatic relevance determination is proposed. The proposed Gaussian conjugate group-sparse prior and the embedded nature of the concerned Bayesian linear model enable simultaneous channel selection and feature classification. Moreover, with the marginal likelihood (evidence) maximization technique, hyper-parameters that determine the sparsity of the model are directly estimated from the training set, avoiding time-consuming cross-validation. Experiments have been conducted on P300 speller BCIs. The results for both public and in-house datasets show that the channels selected by our techniques yield competitive classification performance with the state-of-the-art and are biologically relevant to P300." }, { "pmid": "27669247", "title": "ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition.", "abstract": "Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the needs of a more immersive human-computer interface (HCI) environment. To improve the recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels will add to the computational complexity and cause users inconvenience. 
ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels in classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, support vector machine (SVM) was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods and the comparison with the similar strategies, based on the F-score, was given. Strategies to evaluate a channel as a unity gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels' weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy 59.13% ± 11.00% using 19 channels). In addition, the study of selecting subject-independent channels, related to emotion processing, was also implemented. The sensors, selected subject-independently from frontal, parietal lobes, have been identified to provide more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a contribution to the realization of a practical EEG-based emotion recognition system." }, { "pmid": "27575684", "title": "Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism.", "abstract": "To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and selection) for an object is one of the key issues. Typically, a common assumption used in visual tracking is that the raw video sequences are clear, while real-world data is with significant noise and irrelevant patterns. Consequently, the learned features may be not all relevant and noisy. To address this problem, we propose a novel visual tracking method via a point-wise gated convolutional deep network (CPGDN) that jointly performs the feature learning and feature selection in a unified framework. The proposed method performs dynamic feature selection on raw features through a gating mechanism. Therefore, the proposed method can adaptively focus on the task-relevant patterns (i.e., a target object), while ignoring the task-irrelevant patterns (i.e., the surrounding background of a target object). Specifically, inspired by transfer learning, we firstly pre-train an object appearance model offline to learn generic image features and then transfer rich feature hierarchies from an offline pre-trained CPGDN into online tracking. In online tracking, the pre-trained CPGDN model is fine-tuned to adapt to the tracking specific objects. Finally, to alleviate the tracker drifting problem, inspired by an observation that a visual target should be an object rather than not, we combine an edge box-based object proposal method to further improve the tracking accuracy. Extensive evaluation on the widely used CVPR2013 tracking benchmark validates the robustness and effectiveness of the proposed method." } ]
Systematic Biology
29186587
PMC5920299
10.1093/sysbio/syx090
Effective Online Bayesian Phylogenetics via Sequential Monte Carlo with Guided Proposals
Modern infectious disease outbreak surveillance produces continuous streams of sequence data which require phylogenetic analysis as data arrives. Current software packages for Bayesian phylogenetic inference are unable to quickly incorporate new sequences as they become available, making them less useful for dynamically unfolding evolutionary stories. This limitation can be addressed by applying a class of Bayesian statistical inference algorithms called sequential Monte Carlo (SMC) to conduct online inference, wherein new data can be continuously incorporated to update the estimate of the posterior probability distribution. In this article, we describe and evaluate several different online phylogenetic sequential Monte Carlo (OPSMC) algorithms. We show that proposing new phylogenies with a density similar to the Bayesian prior suffers from poor performance, and we develop “guided” proposals that better match the proposal density to the posterior. Furthermore, we show that the simplest guided proposals can exhibit pathological behavior in some situations, leading to poor results, and that the situation can be resolved by heating the proposal density. The results demonstrate that relative to the widely used MCMC-based algorithm implemented in MrBayes, the total time required to compute a series of phylogenetic posteriors as sequences arrive can be significantly reduced by the use of OPSMC, without incurring a significant loss in accuracy.
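The propose-reweight-resample idea behind the abstract can be pictured with a one-dimensional state-space toy rather than tree space. The sketch below illustrates only generic sequential Monte Carlo with a guided and optionally "heated" proposal; every model choice in it (linear-Gaussian dynamics, noise levels, the form of the guided proposal, beta) is an assumption for illustration, not part of OPSMC.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
sigma_x, sigma_y = 1.0, 0.5          # transition and observation noise (assumed)
n_particles = 1000

def smc_step(particles, y, guided=True, beta=1.0):
    if guided:
        # guided proposal: Gaussian centred between the previous state and the
        # new observation; beta < 1 widens ("heats") the proposal density
        s2 = 1.0 / (1.0 / sigma_x**2 + 1.0 / sigma_y**2)
        mean = s2 * (particles / sigma_x**2 + y / sigma_y**2)
        sd = np.sqrt(s2 / beta)
    else:
        mean, sd = particles, sigma_x  # prior-like (bootstrap) proposal
    proposed = rng.normal(mean, sd)
    # general importance weight: likelihood * transition density / proposal density
    logw = (norm.logpdf(y, proposed, sigma_y)
            + norm.logpdf(proposed, particles, sigma_x)
            - norm.logpdf(proposed, mean, sd))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)   # multinomial resampling
    return proposed[idx]

# simulate a short stream of observations arriving one at a time
truth, particles = 0.0, np.zeros(n_particles)
for _ in range(25):
    truth += rng.normal(0, sigma_x)
    y = truth + rng.normal(0, sigma_y)
    particles = smc_step(particles, y, guided=True, beta=0.5)
print("final state estimate vs truth:", particles.mean(), truth)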
Related Work

Everitt et al. (2016) have also developed theory and an implementation for SMC on phylogenies. In their case, they focus on inferring ultrametric trees in a coalescent framework, whereas OPSMC as described here is for unrooted trees. Their clever attachment proposal is described in terms of a lineage (path from root to leaf) and a branching time. They propose the lineage based on the differences between the leaf to be attached and the existing leaves, using a distribution based on Ewens' sampling formula (Ewens 1972), and propose a branching time that also uses pairwise differences. They also make an interesting suggestion to ease the transition between the different posterior distributions by using “intermediate distributions.” However, they do not compare their output to samples from an existing MCMC phylogenetics package, and they have not yet provided an open source implementation that would allow others to do so.
[ "22223445", "24722319", "4667078", "25622821", "23699471", "20525638", "24478338", "21666874", "18278678", "19779555", "27454285", "26115986", "22357727", "25870761", "16679334", "20421198" ]
[ { "pmid": "22223445", "title": "Phylogenetic inference via sequential Monte Carlo.", "abstract": "Bayesian inference provides an appealing general framework for phylogenetic analysis, able to incorporate a wide variety of modeling assumptions and to provide a coherent treatment of uncertainty. Existing computational approaches to bayesian inference based on Markov chain Monte Carlo (MCMC) have not, however, kept pace with the scale of the data analysis problems in phylogenetics, and this has hindered the adoption of bayesian methods. In this paper, we present an alternative to MCMC based on Sequential Monte Carlo (SMC). We develop an extension of classical SMC based on partially ordered sets and show how to apply this framework--which we refer to as PosetSMC--to phylogenetic analysis. We provide a theoretical treatment of PosetSMC and also present experimental evaluation of PosetSMC on both synthetic and real data. The empirical results demonstrate that PosetSMC is a very promising alternative to MCMC, providing up to two orders of magnitude faster convergence. We discuss other factors favorable to the adoption of PosetSMC in phylogenetics, including its ability to estimate marginal likelihoods, its ready implementability on parallel and distributed computing platforms, and the possibility of combining with MCMC in hybrid MCMC-SMC schemes. Software for PosetSMC is available at http://www.stat.ubc.ca/ bouchard/PosetSMC." }, { "pmid": "24722319", "title": "BEAST 2: a software platform for Bayesian evolutionary analysis.", "abstract": "We present a new open source, extensible and flexible software platform for Bayesian evolutionary analysis called BEAST 2. This software platform is a re-design of the popular BEAST 1 platform to correct structural deficiencies that became evident as the BEAST 1 software evolved. Key among those deficiencies was the lack of post-deployment extensibility. BEAST 2 now has a fully developed package management system that allows third party developers to write additional functionality that can be directly installed to the BEAST 2 analysis platform via a package manager without requiring a new software release of the platform. This package architecture is showcased with a number of recently published new models encompassing birth-death-sampling tree priors, phylodynamics and model averaging for substitution models and site partitioning. A second major improvement is the ability to read/write the entire state of the MCMC chain to/from disk allowing it to be easily shared between multiple instances of the BEAST software. This facilitates checkpointing and better support for multi-processor and high-end computing extensions. Finally, the functionality in new packages can be easily added to the user interface (BEAUti 2) by a simple XML template-based mechanism because BEAST 2 has been re-designed to provide greater integration between the analysis engine and the user interface so that, for example BEAST and BEAUti use exactly the same XML file format." }, { "pmid": "25622821", "title": "Chromosome 7 gain and DNA hypermethylation at the HOXA10 locus are associated with expression of a stem cell related HOX-signature in glioblastoma.", "abstract": "BACKGROUND\nHOX genes are a family of developmental genes that are expressed neither in the developing forebrain nor in the normal brain. 
Aberrant expression of a HOX-gene dominated stem-cell signature in glioblastoma has been linked with increased resistance to chemo-radiotherapy and sustained proliferation of glioma initiating cells. Here we describe the epigenetic and genetic alterations and their interactions associated with the expression of this signature in glioblastoma.\n\n\nRESULTS\nWe observe prominent hypermethylation of the HOXA locus 7p15.2 in glioblastoma in contrast to non-tumoral brain. Hypermethylation is associated with a gain of chromosome 7, a hallmark of glioblastoma, and may compensate for tumor-driven enhanced gene dosage as a rescue mechanism by preventing undue gene expression. We identify the CpG island of the HOXA10 alternative promoter that appears to escape hypermethylation in the HOX-high glioblastoma. An additive effect of gene copy gain at 7p15.2 and DNA methylation at key regulatory CpGs in HOXA10 is significantly associated with HOX-signature expression. Additionally, we show concordance between methylation status and presence of active or inactive chromatin marks in glioblastoma-derived spheres that are HOX-high or HOX-low, respectively.\n\n\nCONCLUSIONS\nBased on these findings, we propose co-evolution and interaction between gene copy gain, associated with a gain of chromosome 7, and additional epigenetic alterations as key mechanisms triggering a coordinated, but inappropriate, HOX transcriptional program in glioblastoma." }, { "pmid": "23699471", "title": "Bio++: efficient extensible libraries and tools for computational molecular evolution.", "abstract": "Efficient algorithms and programs for the analysis of the ever-growing amount of biological sequence data are strongly needed in the genomics era. The pace at which new data and methodologies are generated calls for the use of pre-existing, optimized-yet extensible-code, typically distributed as libraries or packages. This motivated the Bio++ project, aiming at developing a set of C++ libraries for sequence analysis, phylogenetics, population genetics, and molecular evolution. The main attractiveness of Bio++ is the extensibility and reusability of its components through its object-oriented design, without compromising the computer-efficiency of the underlying methods. We present here the second major release of the libraries, which provides an extended set of classes and methods. These extensions notably provide built-in access to sequence databases and new data structures for handling and manipulating sequences from the omics era, such as multiple genome alignments and sequencing reads libraries. More complex models of sequence evolution, such as mixture models and generic n-tuples alphabets, are also included." }, { "pmid": "20525638", "title": "New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.", "abstract": "PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. 
First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/." }, { "pmid": "24478338", "title": "PUmPER: phylogenies updated perpetually.", "abstract": "SUMMARY\nNew sequence data useful for phylogenetic and evolutionary analyses continues to be added to public databases. The construction of multiple sequence alignments and inference of huge phylogenies comprising large taxonomic groups are expensive tasks, both in terms of man hours and computational resources. Therefore, maintaining comprehensive phylogenies, based on representative and up-to-date molecular sequences, is challenging. PUmPER is a framework that can perpetually construct multi-gene alignments (with PHLAWD) and phylogenetic trees (with ExaML or RAxML-Light) for a given NCBI taxonomic group. When sufficient numbers of new gene sequences for the selected taxonomic group have accumulated in GenBank, PUmPER automatically extends the alignment and infers extended phylogenetic trees by using previously inferred smaller trees as starting topologies. Using our framework, large phylogenetic trees can be perpetually updated without human intervention. Importantly, resulting phylogenies are not statistically significantly worse than trees inferred from scratch.\n\n\nAVAILABILITY AND IMPLEMENTATION\nPUmPER can run in stand-alone mode on a single server, or offload the computationally expensive phylogenetic searches to a parallel computing cluster. Source code, documentation, and tutorials are available at https://github.com/fizquierdo/perpetually-updated-trees.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary Material is available at Bioinformatics online." }, { "pmid": "21666874", "title": "SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit.", "abstract": "The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). It has been proven that these resources can successfully improve students' learning (Dinov et al. 2008b). Being first published online in 2005, SOCR Analyses is a somewhat new component and it concentrate on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. 
As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curricula. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, one-way and two-way ANOVA. Tests for sample comparisons include t-test in the parametric category. Some examples of SOCR Analyses' in the non-parametric category are Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnoff test and Fligner-Killeen test. Hypothesis testing models include contingency table, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for normal distribution. In this article, we present the design framework, computational implementation and the utilization of SOCR Analyses." }, { "pmid": "18278678", "title": "Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics.", "abstract": "The main limiting factor in Bayesian MCMC analysis of phylogeny is typically the efficiency with which topology proposals sample tree space. Here we evaluate the performance of seven different proposal mechanisms, including most of those used in current Bayesian phylogenetics software. We sampled 12 empirical nucleotide data sets--ranging in size from 27 to 71 taxa and from 378 to 2,520 sites--under difficult conditions: short runs, no Metropolis-coupling, and an oversimplified substitution model producing difficult tree spaces (Jukes Cantor with equal site rates). Convergence was assessed by comparison to reference samples obtained from multiple Metropolis-coupled runs. We find that proposals producing topology changes as a side effect of branch length changes (LOCAL and Continuous Change) consistently perform worse than those involving stochastic branch rearrangements (nearest neighbor interchange, subtree pruning and regrafting, tree bisection and reconnection, or subtree swapping). Among the latter, moves that use an extension mechanism to mix local with more distant rearrangements show better overall performance than those involving only local or only random rearrangements. Moves with only local rearrangements tend to mix well but have long burn-in periods, whereas moves with random rearrangements often show the reverse pattern. Combinations of moves tend to perform better than single moves. The time to convergence can be shortened considerably by starting with a good tree, but this comes at the cost of compromising convergence diagnostics based on overdispersed starting points. Our results have important implications for developers of Bayesian MCMC implementations and for the large group of users of Bayesian phylogenetics software." }, { "pmid": "19779555", "title": "Bayesian phylogeography finds its roots.", "abstract": "As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. 
By implementing character mapping in a Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will make an important asset in molecular epidemiology that can be easily generalized to infer biogeogeography from genetic data for many organisms." }, { "pmid": "27454285", "title": "Real-time selective sequencing using nanopore technology.", "abstract": "The Oxford Nanopore Technologies MinION sequencer enables the selection of specific DNA molecules for sequencing by reversing the driving voltage across individual nanopores. To directly select molecules for sequencing, we used dynamic time warping to match reads to reference sequences. We demonstrate our open-source Read Until software in real-time selective sequencing of regions within small genomes, individual amplicon enrichment and normalization of an amplicon set." }, { "pmid": "26115986", "title": "nextflu: real-time tracking of seasonal influenza virus evolution in humans.", "abstract": "UNLABELLED\nSeasonal influenza viruses evolve rapidly, allowing them to evade immunity in their human hosts and reinfect previously infected individuals. Similarly, vaccines against seasonal influenza need to be updated frequently to protect against an evolving virus population. We have thus developed a processing pipeline and browser-based visualization that allows convenient exploration and analysis of the most recent influenza virus sequence data. This web-application displays a phylogenetic tree that can be decorated with additional information such as the viral genotype at specific sites, sampling location and derived statistics that have been shown to be predictive of future virus dynamics. In addition, mutation, genotype and clade frequency trajectories are calculated and displayed.\n\n\nAVAILABILITY AND IMPLEMENTATION\nPython and Javascript source code is freely available from https://github.com/blab/nextflu, while the web-application is live at http://nextflu.org.\n\n\nCONTACT\[email protected]." }, { "pmid": "22357727", "title": "MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space.", "abstract": "Since its introduction in 2001, MrBayes has grown in popularity as a software package for Bayesian phylogenetic inference using Markov chain Monte Carlo (MCMC) methods. With this note, we announce the release of version 3.2, a major upgrade to the latest official release presented in 2003. The new version provides convergence diagnostics and allows multiple analyses to be run in parallel with convergence progress monitored on the fly. 
The introduction of new proposals and automatic optimization of tuning parameters has improved convergence for many problems. The new version also sports significantly faster likelihood calculations through streaming single-instruction-multiple-data extensions (SSE) and support of the BEAGLE library, allowing likelihood calculations to be delegated to graphics processing units (GPUs) on compatible hardware. Speedup factors range from around 2 with SSE code to more than 50 with BEAGLE for codon problems. Checkpointing across all models allows long runs to be completed even when an analysis is prematurely terminated. New models include relaxed clocks, dating, model averaging across time-reversible substitution models, and support for hard, negative, and partial (backbone) tree constraints. Inference of species trees from gene trees is supported by full incorporation of the Bayesian estimation of species trees (BEST) algorithms. Marginal model likelihoods for Bayes factor tests can be estimated accurately across the entire model space using the stepping stone method. The new version provides more output options than previously, including samples of ancestral states, site rates, site d(N)/d(S) rations, branch rates, and node dates. A wide range of statistics on tree parameters can also be output for visualization in FigTree and compatible software." }, { "pmid": "25870761", "title": "A platform for leveraging next generation sequencing for routine microbiology and public health use.", "abstract": "Even with the advent of next-generation sequencing (NGS) technologies which have revolutionised the field of bacterial genomics in recent years, a major barrier still exists to the implementation of NGS for routine microbiological use (in public health and clinical microbiology laboratories). Such routine use would make a big difference to investigations of pathogen transmission and prevention/control of (sometimes lethal) infections. The inherent complexity and high frequency of data analyses on very large sets of bacterial DNA sequence data, the ability to ensure data provenance and automatically track and log all analyses for audit purposes, the need for quick and accurate results, together with an essential user-friendly interface for regular non-technical laboratory staff, are all critical requirements for routine use in a public health setting. There are currently no systems to answer positively to all these requirements, in an integrated manner. In this paper, we describe a system for sequence analysis and interpretation that is highly automated and tackles the issues raised earlier, and that is designed for use in diagnostic laboratories by healthcare workers with no specialist bioinformatics knowledge." }, { "pmid": "16679334", "title": "BAli-Phy: simultaneous Bayesian inference of alignment and phylogeny.", "abstract": "SUMMARY\nBAli-Phy is a Bayesian posterior sampler that employs Markov chain Monte Carlo to explore the joint space of alignment and phylogeny given molecular sequence data. Simultaneous estimation eliminates bias toward inaccurate alignment guide-trees, employs more sophisticated substitution models during alignment and automatically utilizes information in shared insertion/deletions to help infer phylogenies.\n\n\nAVAILABILITY\nSoftware is available for download at http://www.biomath.ucla.edu/msuchard/bali-phy." 
}, { "pmid": "20421198", "title": "DendroPy: a Python library for phylogenetic computing.", "abstract": "UNLABELLED\nDendroPy is a cross-platform library for the Python programming language that provides for object-oriented reading, writing, simulation and manipulation of phylogenetic data, with an emphasis on phylogenetic tree operations. DendroPy uses a splits-hash mapping to perform rapid calculations of tree distances, similarities and shape under various metrics. It contains rich simulation routines to generate trees under a number of different phylogenetic and coalescent models. DendroPy's data simulation and manipulation facilities, in conjunction with its support of a broad range of phylogenetic data formats (NEXUS, Newick, PHYLIP, FASTA, NeXML, etc.), allow it to serve a useful role in various phyloinformatics and phylogeographic pipelines.\n\n\nAVAILABILITY\nThe stable release of the library is available for download and automated installation through the Python Package Index site (http://pypi.python.org/pypi/DendroPy), while the active development source code repository is available to the public from GitHub (http://github.com/jeetsukumaran/DendroPy)." } ]
Scientific Reports
29739993
PMC5940864
10.1038/s41598-018-24876-0
A Cluster-then-label Semi-supervised Learning Approach for Pathology Image Classification
Completely labeled pathology datasets are often challenging and time-consuming to obtain. Semi-supervised learning (SSL) methods are able to learn from fewer labeled data points with the help of a large number of unlabeled data points. In this paper, we investigated the possibility of using clustering analysis to identify the underlying structure of the data space for SSL. A cluster-then-label method was proposed to identify high-density regions in the data space, which were then used to help a supervised SVM find the decision boundary. We compared our method with other supervised and semi-supervised state-of-the-art techniques using two different classification tasks applied to breast pathology datasets. We found that, compared with other state-of-the-art supervised and semi-supervised methods, our SSL method is able to improve classification performance when only a limited number of labeled data instances are available. We also showed that it is important to examine the underlying distribution of the data space before applying SSL techniques, to ensure that the semi-supervised learning assumptions are not violated by the data.
Related Works

Semi-supervised learning methods are not commonly used in the pathology image analysis field, although they have previously been employed in some applications of medical image analysis to improve classification performance on partially labeled datasets2–5. For semi-supervised learning methods to make the most of labeled and unlabeled data, some assumptions are made about the underlying structure of the data space1. Among these, the smoothness and cluster assumptions are the basis for most state-of-the-art techniques6. The smoothness assumption holds that points located close to each other in the data space are more likely to share the same label, and the cluster assumption holds that data points belonging to one class are more likely to form a group/cluster of points. The core objective of these two assumptions is therefore to ensure that the decision boundary found lies in low-density rather than high-density regions of the data space.

The most basic and easiest-to-apply SSL method is self-training7–10, which involves repeatedly training and retraining a statistical model. First, labeled data is used to train an initial model, and then this model is applied to the unlabeled data. The unlabeled points for which the model is most confident in assigning labels are then added to the pool of labeled points and a new model is trained. This process is repeated until some convergence criterion is met. Another family of methods is based on generative models11–13, in which some assumptions are made about the underlying probability distribution of the data in feature space. The parameters defining the assumed generative model are then found by fitting the model to the data. Graph-based SSL techniques14–17 generate an undirected graph on the training data in which every point is connected by weighted edges. The weights are assigned to the edges in such a way that closer data points tend to have larger weights and hence are more likely to share the same label. Labels are assigned to the unlabeled points by propagating labels from labeled points to unlabeled ones through the edges of the graph, with the amount dependent on the edge weights. In this way, all unlabeled points can be labeled even if they are not directly connected to the labeled points.

The support vector machine (SVM) classifier is an efficient and reliable learning method and to date is one of the best classifiers in terms of performance18 over a wide range of tasks. Semi-supervised SVM techniques extend the idea of the traditional SVM to use partially labeled datasets to learn reliable models while maintaining accuracy. The idea is to minimize an objective function by iteratively examining all possible label combinations of the unlabeled data in order to find low-density regions in the data space through which to place the decision boundary19–22. Many implementations of such objective functions have been reported in the literature; however, these are often time-inefficient. The reader is referred to Chapelle et al.'s work23 for a review comparing different methods. Kernel tricks that implement the cluster assumption in SSL have also been proposed24,25. Recently, there have been some attempts to replace the lengthy objective function optimization process of semi-supervised SVMs with cluster analysis6,26,27.
The concept behind these cluster-then-label techniques for semi-supervised learning28 is first to find clusters of points in high-density regions of the data space and then to assign labels to the identified clusters. A supervised learner is then used to find the separating decision boundary, which passes through low-density regions of the data space (i.e., between the clusters). In this study, we propose a novel cluster-then-label semi-supervised learning method and compare its performance with other state-of-the-art techniques on two digital pathology tasks: triaging clinically relevant regions of breast whole mount images29 and classifying nuclei figures into lymphocyte, normal epithelial, and malignant epithelial objects.
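A bare-bones sketch of the generic cluster-then-label pattern just described (not the authors' specific algorithm or features): cluster labeled and unlabeled points together, give each cluster the majority label of its few labeled members, and fit an SVM on the resulting labels so the boundary falls between clusters. The synthetic blobs, the choice of k-means, and the RBF SVM are all assumptions made only for illustration.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X, y_true = make_blobs(n_samples=400, centers=2, cluster_std=1.2, random_state=3)
labeled_idx = rng.choice(len(X), size=10, replace=False)   # only 10 labeled points

# cluster the full (labeled + unlabeled) dataset into high-density groups
clusters = KMeans(n_clusters=2, n_init=10, random_state=3).fit_predict(X)

# label propagation: each cluster takes the majority label of its labeled members
y_prop = np.empty(len(X), dtype=int)
for c in np.unique(clusters):
    members = labeled_idx[clusters[labeled_idx] == c]
    if members.size:
        vals, counts = np.unique(y_true[members], return_counts=True)
        y_prop[clusters == c] = vals[np.argmax(counts)]
    else:
        y_prop[clusters == c] = -1      # no labeled member: leave cluster out

# supervised SVM on the cluster-labeled points places the boundary between clusters
mask = y_prop != -1
clf = SVC(kernel="rbf", gamma="scale").fit(X[mask], y_prop[mask])
print("agreement with ground truth:", (clf.predict(X) == y_true).mean())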
[ "12689392", "18237967", "29092947" ]
[ { "pmid": "12689392", "title": "The concave-convex procedure.", "abstract": "The concave-convex procedure (CCCP) is a way to construct discrete-time iterative dynamical systems that are guaranteed to decrease global optimization and energy functions monotonically. This procedure can be applied to almost any optimization problem, and many existing algorithms can be interpreted in terms of it. In particular, we prove that all expectation-maximization algorithms and classes of Legendre minimization and variational bounding algorithms can be reexpressed in terms of CCCP. We show that many existing neural network and mean-field theory algorithms are also examples of CCCP. The generalized iterative scaling algorithm and Sinkhorn's algorithm can also be expressed as CCCP by changing variables. CCCP can be used both as a new way to understand, and prove the convergence of, existing optimization algorithms and as a procedure for generating new algorithms." }, { "pmid": "18237967", "title": "Fast anisotropic Gauss filtering.", "abstract": "We derive the decomposition of the anisotropic Gaussian in a one-dimensional (1-D) Gauss filter in the x-direction followed by a 1-D filter in a nonorthogonal direction phi. So also the anisotropic Gaussian can be decomposed by dimension. This appears to be extremely efficient from a computing perspective. An implementation scheme for normal convolution and for recursive filtering is proposed. Also directed derivative filters are demonstrated. For the recursive implementation, filtering an 512 x 512 image is performed within 40 msec on a current state of the art PC, gaining over 3 times in performance for a typical filter, independent of the standard deviations and orientation of the filter. Accuracy of the filters is still reasonable when compared to truncation error or recursive approximation error. The anisotropic Gaussian filtering method allows fast calculation of edge and ridge maps, with high spatial and angular accuracy. For tracking applications, the normal anisotropic convolution scheme is more advantageous, with applications in the detection of dashed lines in engineering drawings. The recursive implementation is more attractive in feature detection applications, for instance in affine invariant edge and ridge detection in computer vision. The proposed computational filtering method enables the practical applicability of orientation scale-space analysis." }, { "pmid": "29092947", "title": "An Image Analysis Resource for Cancer Research: PIIP-Pathology Image Informatics Platform for Visualization, Analysis, and Management.", "abstract": "Pathology Image Informatics Platform (PIIP) is an NCI/NIH sponsored project intended for managing, annotating, sharing, and quantitatively analyzing digital pathology imaging data. It expands on an existing, freely available pathology image viewer, Sedeen. The goal of this project is to develop and embed some commonly used image analysis applications into the Sedeen viewer to create a freely available resource for the digital pathology and cancer research communities. Thus far, new plugins have been developed and incorporated into the platform for out of focus detection, region of interest transformation, and IHC slide analysis. Our biomarker quantification and nuclear segmentation algorithms, written in MATLAB, have also been integrated into the viewer. 
This article describes the viewing software and the mechanism to extend functionality by plugins, brief descriptions of which are provided as examples, to guide users who want to use this platform. PIIP project materials, including a video describing its usage and applications, and links for the Sedeen Viewer, plug-ins, and user manuals are freely available through the project web page: http://pathiip.org Cancer Res; 77(21); e83-86. ©2017 AACR." } ]
Frontiers in Neuroscience
29867307
PMC5954047
10.3389/fnins.2018.00272
Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification
Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, an EEG classification algorithm requires a large number of parameters because of the redundant features involved in EEG signals. However, the generalization of such methods is often adversely affected by model complexity, which is closely tied to the number of undetermined parameters and can lead to heavy overfitting. To decrease complexity and improve the generalization of EEG methods, we present a novel l1-norm-based approach that combines the decision values obtained from each EEG channel directly. By extracting information from different channels on independent frequency bands (FB) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods in order to reduce overfitting. Moreover, an effective and efficient solution to minimize the optimization objective is proposed. The experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method achieves high classification accuracy and increases generalization performance for the classification of motor imagery (MI) EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method contribute to the practical application of MI-based BCI systems.
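To make the combination idea in this abstract concrete, the sketch below stacks per-channel, per-band decision values into a feature matrix and learns sparse combination weights with an off-the-shelf l1-penalized logistic regression. This is a stand-in illustration using assumed array shapes, random placeholder data, and scikit-learn, not the dedicated solver proposed in the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

# D holds one decision value per (channel, frequency band) for every trial;
# the sizes below are hypothetical placeholders.
rng = np.random.default_rng(0)
n_trials, n_channels, n_bands = 200, 59, 4
D = rng.standard_normal((n_trials, n_channels * n_bands))
y = rng.integers(0, 2, size=n_trials)          # two motor-imagery classes

# The l1 penalty drives the weights of uninformative channel/band combinations
# to zero, keeping the number of effective parameters small.
combiner = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
combiner.fit(D, y)
print("non-zero combination weights:", np.count_nonzero(combiner.coef_))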
2. Related works
Significant efforts have been made in the classification of motor imagery EEG signals. A key to improving the accuracy of classification algorithms is preventing overfitting during EEG classification. Here we give a brief review of existing methods for EEG classification, organized around two strategies, together with some efforts to reduce overfitting.
Because different brain regions have distinct characteristics, a number of studies have attempted to process signals from different channels independently. An approach based on the multiple kernel learning (MKL) method was presented in Schrouff et al. (2016) to determine the contribution of different bandwidths of the EEG signal at different recording sites. The channel-frequency map (CFM) was proposed as a tool for developing data-driven frequency band selection methods for parallel EEG processing in Suk and Lee (2011). A genetic algorithm was utilized in Lee et al. (2012) to simultaneously identify individually optimized brain areas and frequency ranges based on a predefined chromosome. Deep learning has also been introduced in this area; for example, a deep belief network (DBN) was employed to reveal the critical frequency bands for emotion recognition (Zheng et al., 2015). The support vector machine (SVM) is considered a useful method for small-sample and nonlinear classification problems (Boser et al., 1992). SVMs have been applied to the feature optimization and classification of MI-EEG (Chatterjee and Bandyopadhyay, 2016; Ma et al., 2016), speeding up classification while keeping the loss in generalization acceptable (Xu et al., 2010). High-order, data-driven hybrid spatial finite impulse response (FIR) filters were designed channel-specifically to complement broadband CSP filtering in Yu et al. (2013). In this manner, such methods facilitate the study of the specific properties of individual channels. Nevertheless, by disregarding the interaction among channels, they risk burying significant information in irrelevant and redundant signals, which negatively influences classification performance. Another disadvantage of this approach is the significant computational burden caused by the enormous volume of signals.
Several other studies have addressed the combination of multichannel EEG data. The well-known CSP methods combine signals from multiple channels by amplifying the class disparity in the spatial domain through covariance analysis (Blankertz et al., 2007; Li et al., 2013); the standard two-class computation is sketched after this section. Improved CSP variants, such as the common spatio-spectral pattern (CSSP) (Lemm et al., 2005), iterative spatio-spectral pattern learning (ISSPL) (Wu et al., 2008), and the filter bank common spatial pattern (FBCSP) (Kai et al., 2012), were introduced to optimize the combination of multichannel signals by designing novel evaluations of the spectral weight coefficients. Another spatial filtering algorithm, the discriminative spatial pattern (DSP), addresses single-trial EEG classification by maximizing the between-class separation (Duda et al., 2001; Hoffmann et al., 2006). CSP and DSP were combined for more efficient feature extraction and classification of single-trial EEG during finger movement tasks (Liao et al., 2007). In addition to these methods, numerous studies have focused on selecting subsets of EEG channels. Based on grouped automatic relevance determination, group-sparse Bayesian linear discriminant analysis (gsBLDA) was presented to select EEG channels (Yu et al., 2015). The Separability & Correlation (SEPCOR) approach was designed to automatically search for an optimal EEG channel subset with minimum correlation and maximum class separation (Shri and Sriraam, 2016; Student and Sriraam, 2017). Sequential floating forward selection (SFFS) performs channel selection by iteratively adding and eliminating EEG channels (Pudil et al., 1994; Meng et al., 2011). By treating adjacent channels as one feature according to their distribution on the cerebral cortex, an improved SFFS (ISFFS) was proposed to remove task-irrelevant and redundant channels with low computational burden (Qiu et al., 2016). To reduce overfitting, l1-norm regularization has been applied in constructing spatial filters because of its ability to yield sparse solutions (Silva et al., 2004; Donoho, 2006; Farquhar et al., 2006). The sparse common spatial pattern (SCSP) was applied to select the smallest number of channels while maintaining high classification performance, with the l1/l2 norm as the regularization term (Arvaneh et al., 2011). By incorporating l1-norm-based eigendecomposition into CSP, an l1-norm-based CSP was proposed that effectively improves the robustness of BCI systems to EEG outliers and achieves higher classification accuracy than conventional CSP (Li et al., 2013). A modified CSP with an l1 sparse weighting method was developed for EEG trial selection and successfully rejected low-quality trials in a sparsity-aware way (Tomida et al., 2015). These approaches are effective in determining an informative subset of channels, or their combination weights, based on shallow features extracted from the voltage signals. However, CSP in EEG classification generates a spatial filter matrix that generally contains too many parameters and is therefore vulnerable to overfitting, especially when insufficient training data are available. A model that requires few parameters while using features directly related to the task therefore holds promise for EEG classification.
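For reference, the following is a minimal sketch of the textbook two-class CSP computation referred to above; it returns the spatial filter matrix whose size drives the parameter count discussed in the last paragraph. The regularized and sparse variants cited above add penalty terms on top of this basic recipe, and the function names, filter count, and array shapes here are illustrative assumptions.

import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=3):
    # trials_*: arrays of shape (n_trials, n_channels, n_samples) for the two classes
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)    # channel x channel covariance

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)
    # filters from both ends of the spectrum maximize variance for one class
    # while minimizing it for the other
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return eigvecs[:, picks].T                                  # spatial filter matrix, (2*n_filters, n_channels)

def log_variance_features(W, trial):
    # classic CSP features: log of the normalized variance of the spatially filtered signals
    z = W @ trial
    var = np.var(z, axis=1)
    return np.log(var / var.sum())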
[ "21427014", "18838371", "29163006", "22479236", "23735712", "16189967", "23289769", "17518278", "27313656", "27966546", "10400191", "22438708", "15421287", "16443377", "9749909", "11204034", "26692030", "15203067", "25248173", "12048038", "18714838", "11761077", "27796603", "24204705", "25794393", "27669247" ]
[ { "pmid": "21427014", "title": "Optimizing the channel selection and classification accuracy in EEG-based BCI.", "abstract": "Multichannel EEG is generally used in brain-computer interfaces (BCIs), whereby performing EEG channel selection 1) improves BCI performance by removing irrelevant or noisy channels and 2) enhances user convenience from the use of lesser channels. This paper proposes a novel sparse common spatial pattern (SCSP) algorithm for EEG channel selection. The proposed SCSP algorithm is formulated as an optimization problem to select the least number of channels within a constraint of classification accuracy. As such, the proposed approach can be customized to yield the best classification accuracy by removing the noisy and irrelevant channels, or retain the least number of channels without compromising the classification accuracy obtained by using all the channels. The proposed SCSP algorithm is evaluated using two motor imagery datasets, one with a moderate number of channels and another with a large number of channels. In both datasets, the proposed SCSP channel selection significantly reduced the number of channels, and outperformed existing channel selection methods based on Fisher criterion, mutual information, support vector machine, common spatial pattern, and regularized common spatial pattern in classification accuracy. The proposed SCSP algorithm also yielded an average improvement of 10% in classification accuracy compared to the use of three channels (C3, C4, and Cz)." }, { "pmid": "18838371", "title": "The Berlin Brain--Computer Interface: accurate performance from first-session in BCI-naïve subjects.", "abstract": "The Berlin Brain--Computer Interface (BBCI) project develops a noninvasive BCI system whose key features are: 1) the use of well-established motor competences as control paradigms; 2) high-dimensional features from multichannel EEG; and 3) advanced machine-learning techniques. Spatio-spectral changes of sensorimotor rhythms are used to discriminate imagined movements (left hand, right hand, and foot). A previous feedback study [M. Krauledat, K.-R. MUller, and G. Curio. (2007) The non-invasive Berlin brain--computer Interface: Fast acquisition of effective performance in untrained subjects. NeuroImage. [Online]. 37(2), pp. 539--550. Available: http://dx.doi.org/10.1016/j.neuroimage.2007.01.051] with ten subjects provided preliminary evidence that the BBCI system can be operated at high accuracy for subjects with less than five prior BCI exposures. Here, we demonstrate in a group of 14 fully BCI-naIve subjects that 8 out of 14 BCI novices can perform at >84% accuracy in their very first BCI session, and a further four subjects at >70%. Thus, 12 out of 14 BCI-novices had significant above-chance level performances without any subject training even in the first session, as based on an optimized EEG analysis by advanced machine-learning algorithms." }, { "pmid": "29163006", "title": "MATLAB Toolboxes for Reference Electrode Standardization Technique (REST) of Scalp EEG.", "abstract": "Reference electrode standardization technique (REST) has been increasingly acknowledged and applied as a re-reference technique to transform an actual multi-channels recordings to approximately zero reference ones in electroencephalography/event-related potentials (EEG/ERPs) community around the world in recent years. However, a more easy-to-use toolbox for re-referencing scalp EEG data to zero reference is still lacking. 
Here, we have therefore developed two open-source MATLAB toolboxes for REST of scalp EEG. One version of REST is closely integrated into EEGLAB, which is a popular MATLAB toolbox for processing the EEG data; and another is a batch version to make it more convenient and efficient for experienced users. Both of them are designed to provide an easy-to-use for novice researchers and flexibility for experienced researchers. All versions of the REST toolboxes can be freely downloaded at http://www.neuro.uestc.edu.cn/rest/Down.html, and the detailed information including publications, comments and documents on REST can also be found from this website. An example of usage is given with comparative results of REST and average reference. We hope these user-friendly REST toolboxes could make the relatively novel technique of REST easier to study, especially for applications in various EEG studies." }, { "pmid": "22479236", "title": "Filter Bank Common Spatial Pattern Algorithm on BCI Competition IV Datasets 2a and 2b.", "abstract": "The Common Spatial Pattern (CSP) algorithm is an effective and popular method for classifying 2-class motor imagery electroencephalogram (EEG) data, but its effectiveness depends on the subject-specific frequency band. This paper presents the Filter Bank Common Spatial Pattern (FBCSP) algorithm to optimize the subject-specific frequency band for CSP on Datasets 2a and 2b of the Brain-Computer Interface (BCI) Competition IV. Dataset 2a comprised 4 classes of 22 channels EEG data from 9 subjects, and Dataset 2b comprised 2 classes of 3 bipolar channels EEG data from 9 subjects. Multi-class extensions to FBCSP are also presented to handle the 4-class EEG data in Dataset 2a, namely, Divide-and-Conquer (DC), Pair-Wise (PW), and One-Versus-Rest (OVR) approaches. Two feature selection algorithms are also presented to select discriminative CSP features on Dataset 2b, namely, the Mutual Information-based Best Individual Feature (MIBIF) algorithm, and the Mutual Information-based Rough Set Reduction (MIRSR) algorithm. The single-trial classification accuracies were presented using 10 × 10-fold cross-validations on the training data and session-to-session transfer on the evaluation data from both datasets. Disclosure of the test data labels after the BCI Competition IV showed that the FBCSP algorithm performed relatively the best among the other submitted algorithms and yielded a mean kappa value of 0.569 and 0.600 across all subjects in Datasets 2a and 2b respectively." }, { "pmid": "23735712", "title": "Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface.", "abstract": "OBJECTIVE\nAt the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional (3D) physical space using noninvasive scalp electroencephalogram (EEG) in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that the operation of a real world device has on subjects' control in comparison to a 2D virtual cursor task.\n\n\nAPPROACH\nFive human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a 3D physical space. 
Visual feedback was provided via a forward facing camera on the hull of the drone.\n\n\nMAIN RESULTS\nIndividual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m s(-1).\n\n\nSIGNIFICANCE\nFreely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in 3D physical space using noninvasive scalp recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems for accomplish complex control in 3D physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive BCI control in a physical environment using telepresence robotics." }, { "pmid": "16189967", "title": "Spatio-spectral filters for improving the classification of single trial EEG.", "abstract": "Data recorded in electroencephalogram (EEG)-based brain-computer interface experiments is generally very noisy, non-stationary, and contaminated with artifacts that can deteriorate discrimination/classification methods. In this paper, we extend the common spatial pattern (CSP) algorithm with the aim to alleviate these adverse effects. In particular, we suggest an extension of CSP to the state space, which utilizes the method of time delay embedding. As we will show, this allows for individually tuned frequency filters at each electrode position and, thus, yields an improved and more robust machine learning procedure. The advantages of the proposed method over the original CSP method are verified in terms of an improved information transfer rate (bits per trial) on a set of EEG-recordings from experiments of imagined limb movements." }, { "pmid": "23289769", "title": "Familial or Sporadic Idiopathic Scoliosis - classification based on artificial neural network and GAPDH and ACTB transcription profile.", "abstract": "BACKGROUND\nImportance of hereditary factors in the etiology of Idiopathic Scoliosis is widely accepted. In clinical practice some of the IS patients present with positive familial history of the deformity and some do not. Traditionally about 90% of patients have been considered as sporadic cases without familial recurrence. However the exact proportion of Familial and Sporadic Idiopathic Scoliosis is still unknown. Housekeeping genes encode proteins that are usually essential for the maintenance of basic cellular functions. ACTB and GAPDH are two housekeeping genes encoding respectively a cytoskeletal protein β-actin, and glyceraldehyde-3-phosphate dehydrogenase, an enzyme of glycolysis. Although their expression levels can fluctuate between different tissues and persons, human housekeeping genes seem to exhibit a preserved tissue-wide expression ranking order. It was hypothesized that expression ranking order of two representative housekeeping genes ACTB and GAPDH might be disturbed in the tissues of patients with Familial Idiopathic Scoliosis (with positive family history of idiopathic scoliosis) opposed to the patients with no family members affected (Sporadic Idiopathic Scoliosis). 
An artificial neural network (ANN) was developed that could serve to differentiate between familial and sporadic cases of idiopathic scoliosis based on the expression levels of ACTB and GAPDH in different tissues of scoliotic patients. The aim of the study was to investigate whether the expression levels of ACTB and GAPDH in different tissues of idiopathic scoliosis patients could be used as a source of data for specially developed artificial neural network in order to predict the positive family history of index patient.\n\n\nRESULTS\nThe comparison of developed models showed, that the most satisfactory classification accuracy was achieved for ANN model with 18 nodes in the first hidden layer and 16 nodes in the second hidden layer. The classification accuracy for positive Idiopathic Scoliosis anamnesis only with the expression measurements of ACTB and GAPDH with the use of ANN based on 6-18-16-1 architecture was 8 of 9 (88%). Only in one case the prediction was ambiguous.\n\n\nCONCLUSIONS\nSpecially designed artificial neural network model proved possible association between expression level of ACTB, GAPDH and positive familial history of Idiopathic Scoliosis." }, { "pmid": "17518278", "title": "Combining spatial filters for the classification of single-trial EEG in a finger movement task.", "abstract": "Brain-computer interface (BCI) is to provide a communication channel that translates human intention reflected by a brain signal such as electroencephalogram (EEG) into a control signal for an output device. In recent years, the event-related desynchronization (ERD) and movement-related potentials (MRPs) are utilized as important features in motor related BCI system, and the common spatial patterns (CSP) algorithm has shown to be very useful for ERD-based classification. However, as MRPs are slow nonoscillatory EEG potential shifts, CSP is not an appropriate approach for MRPs-based classification. Here, another spatial filtering algorithm, discriminative spatial patterns (DSP), is newly introduced for better extraction of the difference in the amplitudes of MRPs, and it is integrated with CSP to extract the features from the EEG signals recorded during voluntary left versus right finger movement tasks. A support vector machines (SVM) based framework is designed as the classifier for the features. The results show that, for MRPs and ERD features, the combined spatial filters can realize the single-trial EEG classification better than anyone of DSP and CSP alone does. Thus, we propose an EEG-based BCI system with the two feature sets, one based on CSP (ERD) and the other based on DSP (MRPs), classified by SVM." }, { "pmid": "27313656", "title": "Classification of Motor Imagery EEG Signals with Support Vector Machines and Particle Swarm Optimization.", "abstract": "Support vector machines are powerful tools used to solve the small sample and nonlinear classification problems, but their ultimate classification performance depends heavily upon the selection of appropriate kernel and penalty parameters. In this study, we propose using a particle swarm optimization algorithm to optimize the selection of both the kernel and penalty parameters in order to improve the classification performance of support vector machines. The performance of the optimized classifier was evaluated with motor imagery EEG signals in terms of both classification and prediction. Results show that the optimized classifier can significantly improve the classification accuracy of motor imagery EEG signals." 
}, { "pmid": "27966546", "title": "Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks.", "abstract": "Brain-computer interface (BCI) technologies aim to provide a bridge between the human brain and external devices. Prior research using non-invasive BCI to control virtual objects, such as computer cursors and virtual helicopters, and real-world objects, such as wheelchairs and quadcopters, has demonstrated the promise of BCI technologies. However, controlling a robotic arm to complete reach-and-grasp tasks efficiently using non-invasive BCI has yet to be shown. In this study, we found that a group of 13 human subjects could willingly modulate brain activity to control a robotic arm with high accuracy for performing tasks requiring multiple degrees of freedom by combination of two sequential low dimensional controls. Subjects were able to effectively control reaching of the robotic arm through modulation of their brain rhythms within the span of only a few training sessions and maintained the ability to control the robotic arm over multiple months. Our results demonstrate the viability of human operation of prosthetic limbs using non-invasive BCI technology." }, { "pmid": "10400191", "title": "Designing optimal spatial filters for single-trial EEG classification in a movement task.", "abstract": "We devised spatial filters for multi-channel EEG that lead to signals which discriminate optimally between two conditions. We demonstrate the effectiveness of this method by classifying single-trial EEGs, recorded during preparation for movements of the left or right index finger or the right foot. The classification rates for 3 subjects were 94, 90 and 84%, respectively. The filters are estimated from a set of multichannel EEG data by the method of Common Spatial Patterns, and reflect the selective activation of cortical areas. By construction, we obtain an automatic weighting of electrodes according to their importance for the classification task. Computationally, this method is parallel by nature, and demands only the evaluation of scalar products. Therefore, it is well suited for on-line data processing. The recognition rates obtained with this relatively simple method are as good as, or higher than those obtained previously with other methods. The high recognition rates and the method's procedural and computational simplicity make it a particularly promising method for an EEG-based brain-computer interface." }, { "pmid": "22438708", "title": "Brain computer interfaces, a review.", "abstract": "A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. 
First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematic algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices." }, { "pmid": "16443377", "title": "Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks.", "abstract": "We studied the reactivity of EEG rhythms (mu rhythms) in association with the imagination of right hand, left hand, foot, and tongue movement with 60 EEG electrodes in nine able-bodied subjects. During hand motor imagery, the hand mu rhythm blocked or desynchronized in all subjects, whereas an enhancement of the hand area mu rhythm was observed during foot or tongue motor imagery in the majority of the subjects. The frequency of the most reactive components was 11.7 Hz +/- 0.4 (mean +/- SD). While the desynchronized components were broad banded and centered at 10.9 Hz +/- 0.9, the synchronized components were narrow banded and displayed higher frequencies at 12.0 Hz +/- 1.0. The discrimination between the four motor imagery tasks based on classification of single EEG trials improved when, in addition to event-related desynchronization (ERD), event-related synchronization (ERS) patterns were induced in at least one or two tasks. This implies that such EEG phenomena may be utilized in a multi-class brain-computer interface (BCI) operated simply by motor imagery." }, { "pmid": "9749909", "title": "Separability of EEG signals recorded during right and left motor imagery using adaptive autoregressive parameters.", "abstract": "Electroencephalogram (EEG) recordings during right and left motor imagery can be used to move a cursor to a target on a computer screen. Such an EEG-based brain-computer interface (BCI) can provide a new communication channel to replace an impaired motor function. It can be used by, e.g., patients with amyotrophic lateral sclerosis (ALS) to develop a simple binary response in order to reply to specific questions. Four subjects participated in a series of on-line sessions with an EEG-based cursor control. The EEG was recorded from electrodes overlying sensory-motor areas during left and right motor imagery. The EEG signals were analyzed in subject-specific frequency bands and classified on-line by a neural network. The network output was used as a feedback signal. The on-line error (100%-perfect classification) was between 10.0 and 38.1%. In addition, the single-trial data were also analyzed off-line by using an adaptive autoregressive (AAR) model of order 6. With a linear discriminant analysis the estimated parameters for left and right motor imagery were separated. The error rate obtained varied between 5.8 and 32.8% and was, on average, better than the on-line results. 
By using the AAR-model for on-line classification an improvement in the error rate can be expected, however, with a classification delay around 1 s." }, { "pmid": "11204034", "title": "Optimal spatial filtering of single trial EEG during imagined hand movement.", "abstract": "The development of an electroencephalograph (EEG)-based brain-computer interface (BCI) requires rapid and reliable discrimination of EEG patterns, e.g., associated with imaginary movement. One-sided hand movement imagination results in EEG changes located at contra- and ipsilateral central areas. We demonstrate that spatial filters for multichannel EEG effectively extract discriminatory information from two populations of single-trial EEG, recorded during left- and right-hand movement imagery. The best classification results for three subjects are 90.8%, 92.7%, and 99.7%. The spatial filters are estimated from a set of data by the method of common spatial patterns and reflect the specific activation of cortical areas. The method performs a weighting of the electrodes according to their importance for the classification task. The high recognition rates and computational simplicity make it a promising method for an EEG-based brain-computer interface." }, { "pmid": "26692030", "title": "Decoding intracranial EEG data with multiple kernel learning method.", "abstract": "BACKGROUND\nMachine learning models have been successfully applied to neuroimaging data to make predictions about behavioral and cognitive states of interest. While these multivariate methods have greatly advanced the field of neuroimaging, their application to electrophysiological data has been less common especially in the analysis of human intracranial electroencephalography (iEEG, also known as electrocorticography or ECoG) data, which contains a rich spectrum of signals recorded from a relatively high number of recording sites.\n\n\nNEW METHOD\nIn the present work, we introduce a novel approach to determine the contribution of different bandwidths of EEG signal in different recording sites across different experimental conditions using the Multiple Kernel Learning (MKL) method.\n\n\nCOMPARISON WITH EXISTING METHOD\nTo validate and compare the usefulness of our approach, we applied this method to an ECoG dataset that was previously analysed and published with univariate methods.\n\n\nRESULTS\nOur findings proved the usefulness of the MKL method in detecting changes in the power of various frequency bands during a given task and selecting automatically the most contributory signal in the most contributory site(s) of recording.\n\n\nCONCLUSIONS\nWith a single computation, the contribution of each frequency band in each recording site in the estimated multivariate model can be highlighted, which then allows formulation of hypotheses that can be tested a posteriori with univariate methods if needed." }, { "pmid": "15203067", "title": "Evaluation of L1 and L2 minimum norm performances on EEG localizations.", "abstract": "OBJECTIVE\nIn this work we study the performance of minimum norm methods to estimate the localization of brain electrical activity. These methods are based on the simplest forms of L(1) and L(2) norm estimates and are applied to simulated EEG data. The influence of several factors like the number of electrodes, grid density, head model, the number and depth of the sources and noise levels was taken into account. 
The main objective of the study is to give information about the dependence, on these factors, of the localization sources, to allow for proper interpretation of the data obtained in real EEG records.\n\n\nMETHODS\nFor the tests we used simulated dipoles and compared the localizations predicted by the L(1) and L(2) norms with the location of these point-like sources. We varied each parameter separately and evaluated the results.\n\n\nRESULTS\nFrom this work we conclude that, the grid should be constructed with approximately 650 points, so that the information about the orientation of the sources is preserved, especially for L(2) norm estimates; in favorable noise conditions, both L(1) and L(2) norm approaches are able to distinguish between more than one point-like sources.\n\n\nCONCLUSIONS\nThe critical dependence of the results on the noise level and source depth indicates that regularized and weighted solutions should be used. Finally, all these results are valid both for spherical and for realistic head models." }, { "pmid": "25248173", "title": "Active data selection for motor imagery EEG classification.", "abstract": "Rejecting or selecting data from multiple trials of electroencephalography (EEG) recordings is crucial. We propose a sparsity-aware method to data selection from a set of multiple EEG recordings during motor-imagery tasks, aiming at brain machine interfaces (BMIs). Instead of empirical averaging over sample covariance matrices for multiple trials including low-quality data, which can lead to poor performance in BMI classification, we introduce weighted averaging with weight coefficients that can reject such trials. The weight coefficients are determined by the l1-minimization problem that lead to sparse weights such that almost zero-values are allocated to low-quality trials. The proposed method was successfully applied for estimating covariance matrices for the so-called common spatial pattern (CSP) method, which is widely used for feature extraction from EEG in the two-class classification. Classification of EEG signals during motor imagery was examined to support the proposed method. It should be noted that the proposed data selection method can be applied to a number of variants of the original CSP method." }, { "pmid": "12048038", "title": "Brain-computer interfaces for communication and control.", "abstract": "For many years people have speculated that electroencephalographic activity or other electrophysiological measures of brain function might provide a new non-muscular channel for sending messages and commands to the external world - a brain-computer interface (BCI). Over the past 15 years, productive BCI research programs have arisen. Encouraged by new understanding of brain function, by the advent of powerful low-cost computer equipment, and by growing recognition of the needs and potentials of people with disabilities, these programs concentrate on developing new augmentative communication and control technology for those with severe neuromuscular disorders, such as amyotrophic lateral sclerosis, brainstem stroke, and spinal cord injury. The immediate goal is to provide these users, who may be completely paralyzed, or 'locked in', with basic communication capabilities so that they can express their wishes to caregivers or even operate word processing programs or neuroprostheses. Present-day BCIs determine the intent of the user from a variety of different electrophysiological signals. 
These signals include slow cortical potentials, P300 potentials, and mu or beta rhythms recorded from the scalp, and cortical neuronal activity recorded by implanted electrodes. They are translated in real-time into commands that operate a computer display or other device. Successful operation requires that the user encode commands in these signals and that the BCI derive the commands from the signals. Thus, the user and the BCI system need to adapt to each other both initially and continually so as to ensure stable performance. Current BCIs have maximum information transfer rates up to 10-25bits/min. This limited capacity can be valuable for people whose severe disabilities prevent them from using conventional augmentative communication methods. At the same time, many possible applications of BCI technology, such as neuroprosthesis control, may require higher information transfer rates. Future progress will depend on: recognition that BCI research and development is an interdisciplinary problem, involving neurobiology, psychology, engineering, mathematics, and computer science; identification of those signals, whether evoked potentials, spontaneous rhythms, or neuronal firing rates, that users are best able to control independent of activity in conventional motor output pathways; development of training methods for helping users to gain and maintain that control; delineation of the best algorithms for translating these signals into device commands; attention to the identification and elimination of artifacts such as electromyographic and electro-oculographic activity; adoption of precise and objective procedures for evaluating BCI performance; recognition of the need for long-term as well as short-term assessment of BCI performance; identification of appropriate BCI applications and appropriate matching of applications and users; and attention to factors that affect user acceptance of augmentative technology, including ease of use, cosmesis, and provision of those communication and control capacities that are most important to the user. Development of BCI technology will also benefit from greater emphasis on peer-reviewed research publications and avoidance of the hyperbolic and often misleading media attention that tends to generate unrealistic expectations in the public and skepticism in other researchers. With adequate recognition and effective engagement of all these issues, BCI systems could eventually provide an important new communication and control option for those with motor disabilities and might also give those without disabilities a supplementary control channel or a control channel useful in special circumstances." }, { "pmid": "18714838", "title": "Classifying single-trial EEG during motor imagery by iterative spatio-spectral patterns learning (ISSPL).", "abstract": "In most current motor-imagery-based brain-computer interfaces (BCIs), machine learning is carried out in two consecutive stages: feature extraction and feature classification. Feature extraction has focused on automatic learning of spatial filters, with little or no attention being paid to optimization of parameters for temporal filters that still require time-consuming, ad hoc manual tuning. In this paper, we present a new algorithm termed iterative spatio-spectral patterns learning (ISSPL) that employs statistical learning theory to perform automatic learning of spatio-spectral filters. 
In ISSPL, spectral filters and the classifier are simultaneously parameterized for optimization to achieve good generalization performance. A detailed derivation and theoretical analysis of ISSPL are given. Experimental results on two datasets show that the proposed algorithm can correctly identify the discriminative frequency bands, demonstrating the algorithm's superiority over contemporary approaches in classification performance." }, { "pmid": "11761077", "title": "A method to standardize a reference of scalp EEG recordings to a point at infinity.", "abstract": "The effect of an active reference in EEG recording is one of the oldest technical problems in EEG practice. In this paper, a method is proposed to approximately standardize the reference of scalp EEG recordings to a point at infinity. This method is based on the fact that the use of scalp potentials to determine the neural electrical activities or their equivalent sources does not depend on the reference, so we may approximately reconstruct the equivalent sources from scalp EEG recordings with a scalp point or average reference. Then the potentials referenced at infinity are approximately reconstructed from the equivalent sources. As a point at infinity is far from all the possible neural sources, this method may be considered as a reference electrode standardization technique (REST). The simulation studies performed with assumed neural sources included effects of electrode number, volume conductor model and noise on the performance of REST, and the significance of REST in EEG temporal analysis. The results showed that REST is potentially very effective for the most important superficial cortical region and the standardization could be especially important in recovering the temporal information of EEG recordings." }, { "pmid": "27796603", "title": "Gilles de la Tourette's and the Disruption of Interneuron-Mediated Synchrony : Comments on: Hashemiyoon, R., Kuhn, J., Visser-Vandewalle, V. Brain Topography (2016). DOI 10.1007/s10548-016-0525-z.", "abstract": "The article by Hashemiyoon et al. is a masterful synthesis of the clinical, genetic, and neurobiological aspects of Gilles de la Tourette's syndrome that provides unique insights into the neural state dysfunctions that underlie this enigmatic disorder. In particular, the authors make a powerful argument for the disorder arising from hyposynchronization within cortico-basal ganglia-thalamocortical systems which may result from a genetically-driven developmental insult to interneuron regulation, and suggest deep brain stimulation as a valuable tool to assess how balance may be restored to the system and reverse the pathological state." }, { "pmid": "24204705", "title": "The synergy between complex channel-specific FIR filter and spatial filter for single-trial EEG classification.", "abstract": "The common spatial pattern analysis (CSP), a frequently utilized feature extraction method in brain-computer-interface applications, is believed to be time-invariant and sensitive to noises, mainly due to an inherent shortcoming of purely relying on spatial filtering. Therefore, temporal/spectral filtering which can be very effective to counteract the unfavorable influence of noises is usually used as a supplement. This work integrates the CSP spatial filters with complex channel-specific finite impulse response (FIR) filters in a natural and intuitive manner. Each hybrid spatial-FIR filter is of high-order, data-driven and is unique to its corresponding channel. 
They are derived by introducing multiple time delays and regularization into conventional CSP. The general framework of the method follows that of CSP but performs better, as proven in single-trial classification tasks like event-related potential detection and motor imagery." }, { "pmid": "25794393", "title": "Grouped Automatic Relevance Determination and Its Application in Channel Selection for P300 BCIs.", "abstract": "During the development of a brain-computer interface, it is beneficial to exploit information in multiple electrode signals. However, a small channel subset is favored for not only machine learning feasibility, but also practicality in commercial and clinical BCI applications. An embedded channel selection approach based on grouped automatic relevance determination is proposed. The proposed Gaussian conjugate group-sparse prior and the embedded nature of the concerned Bayesian linear model enable simultaneous channel selection and feature classification. Moreover, with the marginal likelihood (evidence) maximization technique, hyper-parameters that determine the sparsity of the model are directly estimated from the training set, avoiding time-consuming cross-validation. Experiments have been conducted on P300 speller BCIs. The results for both public and in-house datasets show that the channels selected by our techniques yield competitive classification performance with the state-of-the-art and are biologically relevant to P300." }, { "pmid": "27669247", "title": "ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition.", "abstract": "Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the needs of a more immersive human-computer interface (HCI) environment. To improve the recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels will add to the computational complexity and cause users inconvenience. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels in classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, support vector machine (SVM) was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods and the comparison with the similar strategies, based on the F-score, was given. Strategies to evaluate a channel as a unity gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels' weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy 59.13% ± 11.00% using 19 channels). In addition, the study of selecting subject-independent channels, related to emotion processing, was also implemented. The sensors, selected subject-independently from frontal, parietal lobes, have been identified to provide more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. 
The results will make a contribution to the realization of a practical EEG-based emotion recognition system." } ]
JMIR Public Health and Surveillance
29743155
PMC5966656
10.2196/publichealth.8214
Causality Patterns for Detecting Adverse Drug Reactions From Social Media: Text Mining Approach
BackgroundDetecting adverse drug reactions (ADRs) is an important task that has direct implications for the use of that drug. If we can detect previously unknown ADRs as quickly as possible, then this information can be provided to the regulators, pharmaceutical companies, and health care organizations, thereby potentially reducing drug-related morbidity and saving lives of many patients. A promising approach for detecting ADRs is to use social media platforms such as Twitter and Facebook. A high level of correlation between a drug name and an event may be an indication of a potential adverse reaction associated with that drug. Although numerous association measures have been proposed by the signal detection community for identifying ADRs, these measures are limited in that they detect correlations but often ignore causality.ObjectiveThis study aimed to propose a causality measure that can detect an adverse reaction that is caused by a drug rather than merely being a correlated signal.MethodsTo the best of our knowledge, this was the first causality-sensitive approach for detecting ADRs from social media. Specifically, the relationship between a drug and an event was represented using a set of automatically extracted lexical patterns. We then learned the weights for the extracted lexical patterns that indicate their reliability for expressing an adverse reaction of a given drug.ResultsOur proposed method obtains an ADR detection accuracy of 74% on a large-scale manually annotated dataset of tweets, covering a standard set of drugs and adverse reactions.ConclusionsBy using lexical patterns, we can accurately detect the causality between drugs and adverse reaction–related events.
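As a rough illustration of the pattern-based idea described in this abstract (not the authors' actual extraction or weighting procedure), the sketch below replaces the drug and event mentions in a post with placeholder slots, treats the surrounding word n-grams as lexical patterns, and learns one weight per pattern with a linear classifier. The example posts, the invented drug name, the slot tokens, and the model choice are all assumptions.

import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def to_pattern(post, drug, event):
    # replace the specific drug/event mentions with generic slots so the remaining
    # tokens form a drug- and event-independent lexical pattern
    text = post.lower().replace(drug.lower(), " __drug__ ").replace(event.lower(), " __event__ ")
    return re.sub(r"\s+", " ", text).strip()

# toy labeled posts: (text, drug, event, 1 if the event is caused by the drug)
posts = [
    ("drugx gives me a terrible headache", "drugx", "headache", 1),
    ("had a headache long before i started drugx", "drugx", "headache", 0),
    ("drugx made my nausea so much worse", "drugx", "nausea", 1),
    ("no nausea at all since switching to drugx", "drugx", "nausea", 0),
]
patterns = [to_pattern(text, drug, event) for text, drug, event, _ in posts]
labels = [label for *_, label in posts]

vectorizer = CountVectorizer(ngram_range=(1, 3))
X = vectorizer.fit_transform(patterns)
classifier = LogisticRegression().fit(X, labels)
# the learned coefficients play the role of per-pattern reliability weights
print(dict(zip(vectorizer.get_feature_names_out(), classifier.coef_[0].round(2))))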
Related Work
The number of co-occurrences between a drug and an ADR can be used as a signal for detecting ADRs associated with drugs. Various measures have been proposed in the literature to evaluate the statistical significance of disproportionately large co-occurrences between a drug and an ADR. These include the Multiitem Gamma Poisson Shrinker [24,26-28], Regression-Adjusted Gamma Poisson Shrinker [23], Bayesian Confidence Propagation Neural Network (BCPNN) [20-22], Proportional Reporting Rate [19,28], and Reporting Odds Ratio [19,28]. Each of these algorithms uses a different measure of disproportionality between the signal and its background. The information component is applied in BCPNN, whereas the empirical Bayes geometric mean is implemented in all variants of the Gamma Poisson Shrinker algorithm. Each measure gives a specific score based on the number of reports including the drug or the event of interest. These count-based methods are collectively referred to as disproportionality measures; a toy computation of several of these scores is sketched after this section.
In contrast to these disproportionality measures, which use only co-occurrence statistics to determine whether there is a positive association between a drug and an event, in this paper we propose a method that uses contextual information extracted from social media posts to learn a classifier that determines whether there is a causality relation between a drug and an ADR. Detecting causality between events in natural language texts has been studied in the context of discourse analysis [29,30] and textual entailment [31,32]. In discourse analysis, a discourse structure is created for a given text, showing the various discourse relationships such as causality, negation, and evidence. For example, in Rhetorical Structure Theory [33], a text is represented by a discourse tree in which the nodes correspond to sentences or clauses referred to as elementary discourse units (EDUs), and the edges that link those textual nodes represent the various discourse relations that exist between 2 EDUs. Supervised methods that require manually annotated discourse trees [34], as well as unsupervised methods that use discourse cues [35] and topic models [36], have been proposed for detecting discourse relations.
Determining whether a particular semantic relation exists between 2 given entities in a text is a well-studied problem in the natural language processing (NLP) community. The context in which 2 entities co-occur provides useful clues for determining the semantic relation between them. Various types of features have been extracted from co-occurring contexts for this purpose. For example, Cullotta and Sorensen [37] proposed tree kernels that use dependency trees; dependency paths and the dependency relations over those paths are used as features in the kernel. Agichtein and Gravano [38] used a large set of automatically extracted surface-level lexical patterns for extracting entities and relations from large text collections.
To address the limitations of co-occurrence-based approaches, several prior studies have used contextual information [39]. Nikfarjam et al [40] annotated tweets for ADRs, beneficial effects, and indications and used those tweets to train a Conditional Random Field, with contextual clues from tweets and word embeddings as features. Their problem setting is different from ours in the sense that we do not attempt to detect or extract ADRs or drug names from tweets but are only interested in determining whether the mentioned ADR is indeed relevant to the mentioned drug. A tweet can mention both an ADR and a drug, but the ADR might not necessarily be related to that drug. Huynh et al [41] proposed multiple deep learning models that concatenate convolutional neural network (CNN) and recurrent neural network architectures to build ADR classifiers. Specifically, given a sentence, they create a binary classifier that predicts whether or not the sentence contains an ADR. Their experimental results show CNNs to be the best for ADR detection. This observation is in agreement with broader text classification tasks in NLP, where CNNs have reported state-of-the-art performance [42]. However, one issue when using CNNs for ADR detection is the lack of labeled training instances, such as annotated tweets. This problem is further aggravated if we must learn embeddings of novel drugs or rare ADRs as part of the classifier training. To overcome this problem, Lee et al [43] proposed a semisupervised CNN that can be pretrained using unlabeled data to learn phrase embeddings. Bidirectional Long Short-Term Memory (bi-LSTM) units were used [44] to tag ADRs and indicators in tweets; a small collection of 841 tweets was manually annotated by 2 annotators for this purpose, and word embeddings pretrained with skip-gram on 400 million tweets were used to initialize the bi-LSTM’s word representations. This setting is different from what we study in this paper because we do not aim to tag ADRs and indicators in a tweet but to determine whether a tweet that mentions an ADR and a drug describes an ADR event related to the drug mentioned in that tweet.
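To make the contrast with the proposed causality-based approach concrete, the following sketch computes three of the co-occurrence-based disproportionality scores mentioned above (PRR, ROR, and a non-shrunk information component) from a 2x2 drug/event contingency table. Real pharmacovigilance implementations add Bayesian shrinkage and confidence intervals, and the counts used here are made up.

import math

def disproportionality(a, b, c, d):
    # a: reports with the drug and the event    b: with the drug, without the event
    # c: without the drug, with the event       d: with neither
    n = a + b + c + d
    prr = (a / (a + b)) / (c / (c + d))                       # proportional reporting ratio
    ror = (a * d) / (b * c)                                    # reporting odds ratio
    ic = math.log2((a / n) / (((a + b) / n) * ((a + c) / n)))  # naive information component
    return {"PRR": round(prr, 2), "ROR": round(ror, 2), "IC": round(ic, 2)}

# made-up counts: the scores flag a disproportionate co-occurrence,
# but say nothing about whether the drug actually caused the event
print(disproportionality(a=20, b=380, c=100, d=9500))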
[ "11072960", "4981079", "17460642", "12595401", "16454543", "27311964", "26147850", "24300373", "16381072", "12071775", "9696956", "19360795", "12580646", "26163365", "25755127", "28339747", "18602492" ]
[ { "pmid": "11072960", "title": "Adverse drug reactions: definitions, diagnosis, and management.", "abstract": "We define an adverse drug reaction as \"an appreciably harmful or unpleasant reaction, resulting from an intervention related to the use of a medicinal product, which predicts hazard from future administration and warrants prevention or specific treatment, or alteration of the dosage regimen, or withdrawal of the product.\" Such reactions are currently reported by use of WHO's Adverse Reaction Terminology, which will eventually become a subset of the International Classification of Diseases. Adverse drug reactions are classified into six types (with mnemonics): dose-related (Augmented), non-dose-related (Bizarre), dose-related and time-related (Chronic), time-related (Delayed), withdrawal (End of use), and failure of therapy (Failure). Timing, the pattern of illness, the results of investigations, and rechallenge can help attribute causality to a suspected adverse drug reaction. Management includes withdrawal of the drug if possible and specific treatment of its effects. Suspected adverse drug reactions should be reported. Surveillance methods can detect reactions and prove associations." }, { "pmid": "12595401", "title": "Detecting adverse events using information technology.", "abstract": "CONTEXT\nAlthough patient safety is a major problem, most health care organizations rely on spontaneous reporting, which detects only a small minority of adverse events. As a result, problems with safety have remained hidden. Chart review can detect adverse events in research settings, but it is too expensive for routine use. Information technology techniques can detect some adverse events in a timely and cost-effective way, in some cases early enough to prevent patient harm.\n\n\nOBJECTIVE\nTo review methodologies of detecting adverse events using information technology, reports of studies that used these techniques to detect adverse events, and study results for specific types of adverse events.\n\n\nDESIGN\nStructured review.\n\n\nMETHODOLOGY\nEnglish-language studies that reported using information technology to detect adverse events were identified using standard techniques. Only studies that contained original data were included.\n\n\nMAIN OUTCOME MEASURES\nAdverse events, with specific focus on nosocomial infections, adverse drug events, and injurious falls.\n\n\nRESULTS\nTools such as event monitoring and natural language processing can inexpensively detect certain types of adverse events in clinical databases. These approaches already work well for some types of adverse events, including adverse drug events and nosocomial infections, and are in routine use in a few hospitals. In addition, it appears likely that these techniques will be adaptable in ways that allow detection of a broad array of adverse events, especially as more medical information becomes computerized.\n\n\nCONCLUSION\nComputerized detection of adverse events will soon be practical on a widespread basis." }, { "pmid": "16454543", "title": "Adverse drug reaction-related hospitalisations: a nationwide study in The Netherlands.", "abstract": "BACKGROUND\nThe incidence of adverse drug reaction (ADR)-related hospitalisations has usually been assessed within hospitals. 
Because of the variability in results and methodology, it is difficult to extrapolate these results to a national level.\n\n\nOBJECTIVES\nTo evaluate the incidence and characteristics of ADR-related hospitalisations in The Netherlands in 2001.\n\n\nMETHODS\nWe conducted a nationwide study of all hospital admissions in 2001. Data were retrieved from a nationwide computer database for hospital discharge records. All acute, non-planned admissions to all Dutch academic and general hospitals in 2001 were included in the study (n = 668 714). From these admissions we selected all hospitalisations that were coded as drug-related, but intended forms of overdose, errors in administration and therapeutic failures were excluded. Hence, we extracted all ADR-related hospitalisations. We compared age, sex and the risk of a fatal outcome between patients admitted with ADRs and patients admitted for other reasons, as well as the most frequent main diagnoses in ADR-related hospitalisations and which drugs most frequently caused the ADRs. In addition, we evaluated to what extent these ADRs were reported to the Netherlands Pharmacovigilance Centre Lareb for spontaneous ADR reporting.\n\n\nRESULTS\nIn 2001, 12 249 hospitalisations were coded as ADR related. This was 1.83% of all acute hospital admissions in The Netherlands (95% CI 1.80, 1.86). The proportion increased with age from 0.8% (95% CI 0.75, 0.85) in the <18 years group to 3.2% in the >/=80 years group (95% CI 3.08, 3.32). The most frequent ADR-related diagnoses of hospitalisations were bleeding (n = 1048), non-specified 'unintended effect of drug' (n = 438), hypoglycaemia (n = 375) and fever (n = 347). The drugs most commonly associated with ADR-related hospitalisations were anticoagulants (n = 2185), cytostatics and immunosuppressives (n = 1809) and diuretics (n = 979). Six percent of the ADR-related hospitalisations had a fatal outcome (n = 734). Older age and female gender were associated with ADR-related hospitalisations. Only approximately 1% of the coded ADRs causing hospitalisation were reported to our national centre for spontaneous ADR reporting.\n\n\nCONCLUSION\nThe proportion of ADR-related hospitalisations is substantial, especially considering the fact that not all ADRs may be recognised or mentioned in discharge letters. Under-reporting of ADRs that result in hospital admission to our national centre for spontaneous ADR reporting was considerable." }, { "pmid": "27311964", "title": "Using Social Media Data to Identify Potential Candidates for Drug Repurposing: A Feasibility Study.", "abstract": "BACKGROUND\nDrug repurposing (defined as discovering new indications for existing drugs) could play a significant role in drug development, especially considering the declining success rates of developing novel drugs. Typically, new indications for existing medications are identified by accident. However, new technologies and a large number of available resources enable the development of systematic approaches to identify and validate drug-repurposing candidates. Patients today report their experiences with medications on social media and reveal side effects as well as beneficial effects of those medications.\n\n\nOBJECTIVE\nOur aim was to assess the feasibility of using patient reviews from social media to identify potential candidates for drug repurposing.\n\n\nMETHODS\nWe retrieved patient reviews of 180 medications from an online forum, WebMD. Using dictionary-based and machine learning approaches, we identified disease names in the reviews. 
Several publicly available resources were used to exclude comments containing known indications and adverse drug effects. After manually reviewing some of the remaining comments, we implemented a rule-based system to identify beneficial effects.\n\n\nRESULTS\nThe dictionary-based system and machine learning system identified 2178 and 6171 disease names respectively in 64,616 patient comments. We provided a list of 10 common patterns that patients used to report any beneficial effects or uses of medication. After manually reviewing the comments tagged by our rule-based system, we identified five potential drug repurposing candidates.\n\n\nCONCLUSIONS\nTo our knowledge, this is the first study to consider using social media data to identify drug-repurposing candidates. We found that even a rule-based system, with a limited number of rules, could identify beneficial effect mentions in patient comments. Our preliminary study shows that social media has the potential to be used in drug repurposing." }, { "pmid": "26147850", "title": "Social media and pharmacovigilance: A review of the opportunities and challenges.", "abstract": "Adverse drug reactions come at a considerable cost on society. Social media are a potentially invaluable reservoir of information for pharmacovigilance, yet their true value remains to be fully understood. In order to realize the benefits social media holds, a number of technical, regulatory and ethical challenges remain to be addressed. We outline these key challenges identifying relevant current research and present possible solutions." }, { "pmid": "24300373", "title": "Signal detection and monitoring based on longitudinal healthcare data.", "abstract": "Post-marketing detection and surveillance of potential safety hazards are crucial tasks in pharmacovigilance. To uncover such safety risks, a wide set of techniques has been developed for spontaneous reporting data and, more recently, for longitudinal data. This paper gives a broad overview of the signal detection process and introduces some types of data sources typically used. The most commonly applied signal detection algorithms are presented, covering simple frequentistic methods like the proportional reporting rate or the reporting odds ratio, more advanced Bayesian techniques for spontaneous and longitudinal data, e.g., the Bayesian Confidence Propagation Neural Network or the Multi-item Gamma-Poisson Shrinker and methods developed for longitudinal data only, like the IC temporal pattern detection. Additionally, the problem of adjustment for underlying confounding is discussed and the most common strategies to automatically identify false-positive signals are addressed. A drug monitoring technique based on Wald's sequential probability ratio test is presented. For each method, a real-life application is given, and a wide set of literature for further reading is referenced." }, { "pmid": "16381072", "title": "Extending the methods used to screen the WHO drug safety database towards analysis of complex associations and improved accuracy for rare events.", "abstract": "Post-marketing drug safety data sets are often massive, and entail problems with heterogeneity and selection bias. Nevertheless, quantitative methods have proven a very useful aid to help clinical experts in screening for previously unknown associations in these data sets. The WHO international drug safety database is the world's largest data set of its kind with over three million reports on suspected adverse drug reaction incidents. 
Since 1998, an exploratory data analysis method has been in routine use to screen for quantitative associations in this data set. This method was originally based on large sample approximations and limited to pairwise associations, but in this article we propose more accurate credibility interval estimates and extend the method to allow for the analysis of more complex quantitative associations. The accuracy of the proposed credibility intervals is evaluated through comparison to precise Monte Carlo simulations. In addition, we propose a Mantel-Haenszel-type adjustment to control for suspected confounders." }, { "pmid": "12071775", "title": "A data mining approach for signal detection and analysis.", "abstract": "The WHO database contains over 2.5 million case reports, analysis of this data set is performed with the intention of signal detection. This paper presents an overview of the quantitative method used to highlight dependencies in this data set. The method Bayesian confidence propagation neural network (BCPNN) is used to highlight dependencies in the data set. The method uses Bayesian statistics implemented in a neural network architecture to analyse all reported drug adverse reaction combinations. This method is now in routine use for drug adverse reaction signal detection. Also this approach has been extended to highlight drug group effects and look for higher order dependencies in the WHO data. Quantitatively unexpectedly strong relationships in the data are highlighted relative to general reporting of suspected adverse effects; these associations are then clinically assessed." }, { "pmid": "9696956", "title": "A Bayesian neural network method for adverse drug reaction signal generation.", "abstract": "OBJECTIVE\nThe database of adverse drug reactions (ADRs) held by the Uppsala Monitoring Centre on behalf of the 47 countries of the World Health Organization (WHO) Collaborating Programme for International Drug Monitoring contains nearly two million reports. It is the largest database of this sort in the world, and about 35,000 new reports are added quarterly. The task of trying to find new drug-ADR signals has been carried out by an expert panel, but with such a large volume of material the task is daunting. We have developed a flexible, automated procedure to find new signals with known probability difference from the background data.\n\n\nMETHOD\nData mining, using various computational approaches, has been applied in a variety of disciplines. A Bayesian confidence propagation neural network (BCPNN) has been developed which can manage large data sets, is robust in handling incomplete data, and may be used with complex variables. Using information theory, such a tool is ideal for finding drug-ADR combinations with other variables, which are highly associated compared to the generality of the stored data, or a section of the stored data. The method is transparent for easy checking and flexible for different kinds of search.\n\n\nRESULTS\nUsing the BCPNN, some time scan examples are given which show the power of the technique to find signals early (captopril-coughing) and to avoid false positives where a common drug and ADRs occur in the database (digoxin-acne; digoxin-rash). A routine application of the BCPNN to a quarterly update is also tested, showing that 1004 suspected drug-ADR combinations reached the 97.5% confidence level of difference from the generality. Of these, 307 were potentially serious ADRs, and of these 53 related to new drugs. 
Twelve of the latter were not recorded in the CD editions of The physician's Desk Reference or Martindale's Extra Pharmacopoea and did not appear in Reactions Weekly online.\n\n\nCONCLUSION\nThe results indicate that the BCPNN can be used in the detection of significant signals from the data set of the WHO Programme on International Drug Monitoring. The BCPNN will be an extremely useful adjunct to the expert assessment of very large numbers of spontaneously reported ADRs." }, { "pmid": "19360795", "title": "Bayesian pharmacovigilance signal detection methods revisited in a multiple comparison setting.", "abstract": "Pharmacovigilance spontaneous reporting systems are primarily devoted to early detection of the adverse reactions of marketed drugs. They maintain large spontaneous reporting databases (SRD) for which several automatic signalling methods have been developed. A common limitation of these methods lies in the fact that they do not provide an auto-evaluation of the generated signals so that thresholds of alerts are arbitrarily chosen. In this paper, we propose to revisit the Gamma Poisson Shrinkage (GPS) model and the Bayesian Confidence Propagation Neural Network (BCPNN) model in the Bayesian general decision framework. This results in a new signal ranking procedure based on the posterior probability of null hypothesis of interest and makes it possible to derive with a non-mixture modelling approach Bayesian estimators of the false discovery rate (FDR), false negative rate, sensitivity and specificity. An original data generation process that can be suited to the features of the SRD under scrutiny is proposed and applied to the French SRD to perform a large simulation study. Results indicate better performances according to the FDR for the proposed ranking procedure in comparison with the current ones for the GPS model. They also reveal identical performances according to the four operating characteristics for the proposed ranking procedure with the BCPNN and GPS models but better estimates when using the GPS model. Finally, the proposed procedure is applied to the French data." }, { "pmid": "12580646", "title": "Quantitative methods in pharmacovigilance: focus on signal detection.", "abstract": "Pharmacovigilance serves to detect previously unrecognised adverse events associated with the use of medicines. The simplest method for detecting signals of such events is crude inspection of lists of spontaneously reported drug-event combinations. Quantitative and automated numerator-based methods such as Bayesian data mining can supplement or supplant these methods. The theoretical basis and limitations of these methods should be understood by drug safety professionals, and automated methods should not be automatically accepted. Published evaluations of these techniques are mainly limited to large regulatory databases, and performance characteristics may differ in smaller safety databases of drug developers. Head-to-head comparisons of the major techniques have not been published. Regardless of previous statistical training, pharmacovigilance practitioners should understand how these methods work. The mathematical basis of these techniques should not obscure the numerous confounders and biases inherent in the data. This article seeks to make automated signal detection methods transparent to drug safety professionals of various backgrounds. 
This is accomplished by first providing a brief overview of the evolution of signal detection followed by a series of sections devoted to the methods with the greatest utilisation and evidentiary support: proportional reporting rations, the Bayesian Confidence Propagation Neural Network and empirical Bayes screening. Sophisticated yet intuitive explanations are provided for each method, supported by figures in which the underlying statistical concepts are explored. Finally the strengths, limitations, pitfalls and outstanding unresolved issues are discussed. Pharmacovigilance specialists should not be intimidated by the mathematics. Understanding the theoretical basis of these methods should enhance the effective assessment and possible implementation of these techniques by drug safety professionals." }, { "pmid": "26163365", "title": "Adverse Drug Reaction Identification and Extraction in Social Media: A Scoping Review.", "abstract": "BACKGROUND\nThe underreporting of adverse drug reactions (ADRs) through traditional reporting channels is a limitation in the efficiency of the current pharmacovigilance system. Patients' experiences with drugs that they report on social media represent a new source of data that may have some value in postmarketing safety surveillance.\n\n\nOBJECTIVE\nA scoping review was undertaken to explore the breadth of evidence about the use of social media as a new source of knowledge for pharmacovigilance.\n\n\nMETHODS\nDaubt et al's recommendations for scoping reviews were followed. The research questions were as follows: How can social media be used as a data source for postmarketing drug surveillance? What are the available methods for extracting data? What are the different ways to use these data? We queried PubMed, Embase, and Google Scholar to extract relevant articles that were published before June 2014 and with no lower date limit. Two pairs of reviewers independently screened the selected studies and proposed two themes of review: manual ADR identification (theme 1) and automated ADR extraction from social media (theme 2). Descriptive characteristics were collected from the publications to create a database for themes 1 and 2.\n\n\nRESULTS\nOf the 1032 citations from PubMed and Embase, 11 were relevant to the research question. An additional 13 citations were added after further research on the Internet and in reference lists. Themes 1 and 2 explored 11 and 13 articles, respectively. Ways of approaching the use of social media as a pharmacovigilance data source were identified.\n\n\nCONCLUSIONS\nThis scoping review noted multiple methods for identifying target data, extracting them, and evaluating the quality of medical information from social media. It also showed some remaining gaps in the field. Studies related to the identification theme usually failed to accurately assess the completeness, quality, and reliability of the data that were analyzed from social media. Regarding extraction, no study proposed a generic approach to easily adding a new site or data source. Additional studies are required to precisely determine the role of social media in the pharmacovigilance system." }, { "pmid": "25755127", "title": "Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features.", "abstract": "OBJECTIVE\nSocial media is becoming increasingly popular as a platform for sharing personal health-related information. 
This information can be utilized for public health monitoring tasks, particularly for pharmacovigilance, via the use of natural language processing (NLP) techniques. However, the language in social media is highly informal, and user-expressed medical concepts are often nontechnical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and thus far, advanced machine learning-based NLP techniques have been underutilized. Our objective is to design a machine learning-based approach to extract mentions of adverse drug reactions (ADRs) from highly informal text in social media.\n\n\nMETHODS\nWe introduce ADRMine, a machine learning-based concept extraction system that uses conditional random fields (CRFs). ADRMine utilizes a variety of features, including a novel feature for modeling words' semantic similarities. The similarities are modeled by clustering words based on unsupervised, pretrained word representation vectors (embeddings) generated from unlabeled user posts in social media using a deep learning technique.\n\n\nRESULTS\nADRMine outperforms several strong baseline systems in the ADR extraction task by achieving an F-measure of 0.82. Feature analysis demonstrates that the proposed word cluster features significantly improve extraction performance.\n\n\nCONCLUSION\nIt is possible to extract complex medical concepts, with relatively high performance, from informal, user-generated content. Our approach is particularly scalable, suitable for social media mining, as it relies on large volumes of unlabeled data, thus diminishing the need for large, annotated training data sets." }, { "pmid": "28339747", "title": "Deep learning for pharmacovigilance: recurrent neural network architectures for labeling adverse drug reactions in Twitter posts.", "abstract": "OBJECTIVE\nSocial media is an important pharmacovigilance data source for adverse drug reaction (ADR) identification. Human review of social media data is infeasible due to data quantity, thus natural language processing techniques are necessary. Social media includes informal vocabulary and irregular grammar, which challenge natural language processing methods. Our objective is to develop a scalable, deep-learning approach that exceeds state-of-the-art ADR detection performance in social media.\n\n\nMATERIALS AND METHODS\nWe developed a recurrent neural network (RNN) model that labels words in an input sequence with ADR membership tags. The only input features are word-embedding vectors, which can be formed through task-independent pretraining or during ADR detection training.\n\n\nRESULTS\nOur best-performing RNN model used pretrained word embeddings created from a large, non-domain-specific Twitter dataset. It achieved an approximate match F-measure of 0.755 for ADR identification on the dataset, compared to 0.631 for a baseline lexicon system and 0.65 for the state-of-the-art conditional random field model. Feature analysis indicated that semantic information in pretrained word embeddings boosted sensitivity and, combined with contextual awareness captured in the RNN, precision.\n\n\nDISCUSSION\nOur model required no task-specific feature engineering, suggesting generalizability to additional sequence-labeling tasks. 
Learning curve analysis showed that our model reached optimal performance with fewer training examples than the other models.\n\n\nCONCLUSION\nADR detection performance in social media is significantly improved by using a contextually aware model and word embeddings formed from large, unlabeled datasets. The approach reduces manual data-labeling requirements and is scalable to large social media datasets." }, { "pmid": "18602492", "title": "Drug name recognition and classification in biomedical texts. A case study outlining approaches underpinning automated systems.", "abstract": "This article presents a system for drug name recognition and classification in biomedical texts. The system combines information obtained by the Unified Medical Language System (UMLS) MetaMap Transfer (MMTx) program and nomenclature rules recommended by the World Health Organization (WHO) International Nonproprietary Names (INNs) Program to identify and classify pharmaceutical substances. Moreover, the system is able to detect possible candidates for drug names that have not been detected by MMTx program by applying these rules, achieving, in this way, a broader coverage. This work is the first step in a method for automatic detection of drug interactions from biomedical texts, a specific type of adverse drug event of special interest in patient safety." } ]
Frontiers in Neurorobotics
29872389
PMC5972223
10.3389/fnbot.2018.00022
Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot
In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception method for the MHDP that uses the information gain (IG) maximization criterion and a lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allows it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and the set of target objects includes objects categorized into multiple classes. The results support our theoretical outcomes.
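The link between IG maximization and the expected Kullback–Leibler divergence rests on a standard identity; the sketch below uses generic notation (z denotes the object category, x_a the multimodal observations produced by an action set a; the symbols are ours, not necessarily the paper's):

\[
IG(a) \;=\; H\!\left(p(z)\right) \;-\; \mathbb{E}_{x_a \sim p(x_a)}\!\left[ H\!\left(p(z \mid x_a)\right) \right] \;=\; \mathbb{E}_{x_a \sim p(x_a)}\!\left[ D_{\mathrm{KL}}\!\left( p(z \mid x_a) \,\middle\|\, p(z) \right) \right].
\]

In words, the expected information gain of an action set equals the mutual information between the category and the observations that the actions would yield, i.e., how far, on average, the posterior is expected to move away from the current belief. As summarized in the abstract, the paper relates this criterion to the expected divergence from the final recognition state, approximates it by Monte Carlo sampling, and optimizes it greedily.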
2. Background and related work

2.1. Multimodal categorization

The human capability for object categorization is a fundamental topic in cognitive science (Barsalou, 1999). In the field of robotics, adaptive formation of object categories that considers a robot's embodiment, i.e., its sensory-motor system, is gathering attention as a way to solve the symbol grounding problem (Harnad, 1990; Taniguchi et al., 2016).

Recently, various computational models and machine learning methods for multimodal object categorization have been proposed in artificial intelligence, cognitive robotics, and related research fields (Roy and Pentland, 2002; Natale et al., 2004; Nakamura et al., 2007, 2009, 2011a,b, 2014; Iwahashi et al., 2010; Sinapov and Stoytchev, 2011; Araki et al., 2012; Griffith et al., 2012; Ando et al., 2013; Celikkanat et al., 2014; Sinapov et al., 2014). For example, Sinapov and Stoytchev (2011) proposed a graph-based multimodal categorization method that allows a robot to recognize a new object by its similarity to a set of familiar objects. They also built a robotic system that categorizes 100 objects from multimodal information in a supervised manner (Sinapov et al., 2014). Celikkanat et al. (2014) modeled context as a set of concepts that allows many-to-many relationships between objects and contexts using LDA.

The focus of this paper is not on supervised learning-based but on unsupervised learning-based multimodal categorization, and on an active perception method for the categories formed by such a method. Among these works, a series of statistical unsupervised multimodal categorization methods for autonomous robots has been proposed by extending LDA, i.e., a topic model (Nakamura et al., 2007, 2009, 2011a,b, 2014; Araki et al., 2012; Ando et al., 2013). All these methods are Bayesian generative models, and the MHDP is a representative method of this series (Nakamura et al., 2011b). The MHDP is an extension of the HDP, which was proposed by Teh et al. (2006), and the HDP is a nonparametric Bayesian extension of LDA (Blei et al., 2003). Concretely, the generative model of the MHDP has multiple types of emissions that correspond to the sensor data obtained through the various modalities. In the HDP, observation data are usually represented as a bag-of-words (BoW). In contrast, the observation data in the MHDP use bag-of-features (BoF) representations for multimodal information. BoF is a histogram-based feature representation generated by quantizing observed feature vectors (a minimal quantization sketch is given at the end of this subsection). Latent variables that are regarded as indicators of topics in the HDP correspond to object categories in the MHDP. Nakamura et al. (2011b) showed that the MHDP enables a robot to categorize a large number of objects in a home environment into categories that are similar to human categorization results.

To obtain multimodal information, a robot has to perform actions and interact with a target object in various ways, e.g., grasping, shaking, or rotating the object. If the number of actions and the types of sensor information increase, multimodal categorization and recognition can require a long time. When the recognition time is limited and/or quick recognition is required, it becomes important for a robot to select a small number of actions that are effective for accurate recognition. Action selection for recognition is often called active perception. However, an active perception method for the MHDP has not been proposed. This paper aims to provide an active perception method for the MHDP.
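As noted above, a BoF representation can be obtained by vector quantization: low-level feature vectors from one modality are assigned to a learned codebook and counted into a histogram, which then serves as the count-valued observation of the generative model. A minimal sketch, assuming scikit-learn's KMeans for the codebook (the codebook size and feature dimensionality are illustrative, not the MHDP implementation):

# Build bag-of-features (BoF) histograms by quantizing feature vectors against
# a learned codebook. Codebook size and feature dimensionality are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def fit_codebook(pooled_features, codebook_size=50, seed=0):
    # Learn the codebook (cluster centers) from feature vectors pooled over
    # many objects within one modality, e.g., visual descriptors.
    return KMeans(n_clusters=codebook_size, random_state=seed, n_init=10).fit(pooled_features)

def to_bof(codebook, object_features):
    # Assign each feature vector of one object to its nearest codeword and
    # count occurrences; the resulting count histogram is the BoF observation.
    codewords = codebook.predict(object_features)
    return np.bincount(codewords, minlength=codebook.n_clusters)

# Usage sketch: 500 pooled 64-dimensional descriptors, then one object's 30.
pooled = np.random.rand(500, 64)
codebook = fit_codebook(pooled)
bof_histogram = to_bof(codebook, np.random.rand(30, 64))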
2.2. Active perception

Generally, active perception is one of the most important cognitive capabilities of humans. From an engineering viewpoint, active perception arises in many specific tasks, e.g., localization, mapping, navigation, object recognition, object segmentation, and self–other differentiation.

In machine learning, active learning is defined as a task in which a method interactively queries an information source to obtain the desired outputs at new data points in order to learn efficiently (Settles, 2012). Active learning algorithms select an unobserved input datum and ask a user (labeler) to provide a training signal (label) in order to reduce uncertainty as quickly as possible (Cohn et al., 1996; Muslea et al., 2006; Settles, 2012). These algorithms usually assume a supervised learning problem, which is related to, but fundamentally different from, the problem addressed in this paper.

Historically, active vision, i.e., active visual perception, has been studied as an important engineering problem in computer vision. Dutta Roy et al. (2004) presented a comprehensive survey of active three-dimensional object recognition. For example, Borotschnig et al. (2000) proposed an active vision method in a parametric eigenspace to improve visual classification results. Denzler and Brown (2002) proposed an information-theoretic action selection method to gather information that conveys the true state of a system through an active camera; they used the mutual information (MI) as a criterion for action selection. Krainin et al. (2011) developed an active perception method in which a mobile robot manipulates an object to build a three-dimensional surface model of it. Their method uses the IG criterion to determine when and how the robot should grasp the object.

Modeling and/or recognizing a single object, as well as modeling a scene and/or segmenting objects, are also important tasks in the context of robotics. Eidenberger and Scharinger (2010) proposed an active perception planning method for scene modeling in a realistic environment. van Hoof et al. (2012) proposed an active scene exploration method that enables an autonomous robot to efficiently segment a scene into its constituent objects by interacting with the objects in an unstructured environment; they used IG as a criterion for action selection. InfoMax control for acoustic exploration was proposed by Rebguns et al. (2011).

Localization, mapping, and navigation are also targets of active perception. Velez et al. (2012) presented an online planning algorithm that enables a mobile robot to generate plans that maximize the expected performance of object detection. Burgard et al. (1997) proposed an active perception method for localization in which action selection is performed by maximizing a weighted sum of the expected entropy and the expected costs; to reduce the computational cost, only a subset of the candidate next locations is considered. Roy and Thrun (1999) proposed a coastal navigation method for a robot to generate trajectories toward its goal by minimizing the positional uncertainty at the goal. Stachniss et al. (2005) proposed an information-gain-based exploration method for mapping and localization. Correa and Soto (2009) proposed an active perception method for a mobile robot with a visual sensor mounted on a pan-tilt mechanism to reduce localization uncertainty. 
They used the IG criterion, which was estimated using a particle filter.

In addition, various studies on active perception by a robot have been conducted (Natale et al., 2004; Ji and Carin, 2006; Schneider et al., 2009; Tuci et al., 2010; Saegusa et al., 2011; Fishel and Loeb, 2012; Pape et al., 2012; Sushkov and Sammut, 2012; Gouko et al., 2013; Hogman et al., 2013; Ivaldi et al., 2014; Zhang et al., 2017). Despite this large number of contributions on active perception, few theories of active perception for multimodal object category recognition have been proposed. In particular, an MHDP-based active perception method has not yet been proposed, although the MHDP-based categorization method and its extensions have obtained many successful results.

2.3. Active perception for multimodal categorization

Sinapov et al. (2014) investigated multimodal categorization and active perception by making a robot perform 10 different behaviors; obtain visual, auditory, and haptic information; explore 100 different objects; and classify them into 20 object categories. In addition, they proposed an active behavior selection method based on confusion matrices. They reported that the method was able to reduce the exploration time by half by dynamically selecting the next exploratory behavior. However, their multimodal categorization is performed in a supervised manner, and the active perception strategy is still heuristic; the method does not have theoretical guarantees of performance.

IG-based active perception is popular, as shown above, but the theoretical justification for using IG in each task is often missing in robotics papers. Moreover, in many robotics studies, IG cannot be evaluated directly, reliably, or accurately. When one takes an IG criterion-based approach, how to estimate the IG is therefore an important problem. In this study, we focus on MHDP-based active perception and develop an efficient near-optimal method based on firm theoretical justification.
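For concreteness, the near-optimal selection referred to above can be implemented as a lazy greedy loop: marginal gains are kept in a priority queue and re-evaluated only when stale, which is valid because a submodular, non-decreasing gain can only shrink as the selected set grows. The sketch below uses a placeholder gain function and toy action names (both assumptions, standing in for a Monte Carlo IG estimate and a robot's action repertoire); it is not the authors' implementation.

# Lazy greedy selection of up to `budget` actions under a submodular,
# non-decreasing gain function. The gain estimator is a placeholder
# (assumption); in the MHDP setting it would be a Monte Carlo IG estimate.
import heapq

def lazy_greedy(candidate_actions, gain, budget):
    # `gain(selected, action)` returns the marginal gain of adding `action`
    # to the already selected set of actions.
    selected = []
    # Max-heap via negated gains: (negative_gain, action, round_when_computed)
    heap = [(-gain(selected, a), a, 0) for a in candidate_actions]
    heapq.heapify(heap)
    for round_idx in range(1, budget + 1):
        while heap:
            neg_g, action, stamp = heapq.heappop(heap)
            if stamp == round_idx:
                # Gain already recomputed this round, so it is the true best.
                selected.append(action)
                break
            # Stale upper bound: recompute the marginal gain and push back.
            heapq.heappush(heap, (-gain(selected, action), action, round_idx))
        else:
            break  # no candidates left
    return selected

# Usage sketch with a toy submodular gain (coverage of hypothetical "cues").
cues = {"shake": {1, 2}, "grasp": {2, 3, 4}, "look": {5}, "tap": {1, 5}}
def coverage_gain(selected, action):
    covered = set().union(*(cues[a] for a in selected)) if selected else set()
    return len(cues[action] - covered)

print(lazy_greedy(list(cues), coverage_gain, budget=2))  # ['grasp', 'tap']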
[ "11301520", "22393319", "22837748" ]
[ { "pmid": "11301520", "title": "A theory of lexical access in speech production.", "abstract": "Preparing words in speech production is normally a fast and accurate process. We generate them two or three per second in fluent conversation; and overtly naming a clear picture of an object can easily be initiated within 600 msec after picture onset. The underlying process, however, is exceedingly complex. The theory reviewed in this target article analyzes this process as staged and feed-forward. After a first stage of conceptual preparation, word generation proceeds through lexical selection, morphological and phonological encoding, phonetic encoding, and articulation itself. In addition, the speaker exerts some degree of output control, by monitoring of self-produced internal and overt speech. The core of the theory, ranging from lexical selection to the initiation of phonetic encoding, is captured in a computational model, called WEAVER++. Both the theory and the computational model have been developed in interaction with reaction time experiments, particularly in picture naming or related word production paradigms, with the aim of accounting for the real-time processing in normal word production. A comprehensive review of theory, model, and experiments is presented. The model can handle some of the main observations in the domain of speech errors (the major empirical domain for most other theories of lexical access), and the theory opens new ways of approaching the cerebral organization of speech production by way of high-temporal-resolution imaging." }, { "pmid": "22393319", "title": "Grounding the Meanings in Sensorimotor Behavior using Reinforcement Learning.", "abstract": "The recent outburst of interest in cognitive developmental robotics is fueled by the ambition to propose ecologically plausible mechanisms of how, among other things, a learning agent/robot could ground linguistic meanings in its sensorimotor behavior. Along this stream, we propose a model that allows the simulated iCub robot to learn the meanings of actions (point, touch, and push) oriented toward objects in robot's peripersonal space. In our experiments, the iCub learns to execute motor actions and comment on them. Architecturally, the model is composed of three neural-network-based modules that are trained in different ways. The first module, a two-layer perceptron, is trained by back-propagation to attend to the target position in the visual scene, given the low-level visual information and the feature-based target information. The second module, having the form of an actor-critic architecture, is the most distinguishing part of our model, and is trained by a continuous version of reinforcement learning to execute actions as sequences, based on a linguistic command. The third module, an echo-state network, is trained to provide the linguistic description of the executed actions. The trained model generalizes well in case of novel action-target combinations with randomized initial arm positions. It can also promptly adapt its behavior if the action/target suddenly changes during motor execution." }, { "pmid": "22837748", "title": "Learning tactile skills through curious exploration.", "abstract": "We present curiosity-driven, autonomous acquisition of tactile exploratory skills on a biomimetic robot finger equipped with an array of microelectromechanical touch sensors. 
Instead of building tailored algorithms for solving a specific tactile task, we employ a more general curiosity-driven reinforcement learning approach that autonomously learns a set of motor skills in absence of an explicit teacher signal. In this approach, the acquisition of skills is driven by the information content of the sensory input signals relative to a learner that aims at representing sensory inputs using fewer and fewer computational resources. We show that, from initially random exploration of its environment, the robotic system autonomously develops a small set of basic motor skills that lead to different kinds of tactile input. Next, the system learns how to exploit the learned motor skills to solve supervised texture classification tasks. Our approach demonstrates the feasibility of autonomous acquisition of tactile skills on physical robotic platforms through curiosity-driven reinforcement learning, overcomes typical difficulties of engineered solutions for active tactile exploration and underactuated control, and provides a basis for studying developmental learning through intrinsic motivation in robots." } ]
Frontiers in Psychology
29881360
PMC5976786
10.3389/fpsyg.2018.00690
Understanding Engagement in Dementia Through Behavior. The Ethographic and Laban-Inspired Coding System of Engagement (ELICSE) and the Evidence-Based Model of Engagement-Related Behavior (EMODEB)
Engagement in activities is of crucial importance for people with dementia. State-of-the-art assessment techniques rely exclusively on behavior observation to measure engagement in dementia. These techniques are either too general to grasp how engagement is naturally expressed through behavior or too complex to be traced back to an overall engagement state. We carried out a longitudinal study to develop a coding system of engagement-related behavior that could tackle these issues and to create an evidence-based model of engagement to make meaning of such a coding system. Fourteen elderly people with mild to moderate dementia took part in the study. They were involved in two activities: a game-based cognitive stimulation and a robot-based free play. The coding system was developed with a mixed approach: ethographic and Laban-inspired. First, we developed two ethograms to describe the behavior of participants in the two activities in detail. Then, we used Laban Movement Analysis (LMA) to identify a common structure underlying the behaviors in the two ethograms and unify them into a single coding system. The inter-rater reliability (IRR) of the coding system proved to be excellent for cognitive games (kappa = 0.78) and very good for robot play (kappa = 0.74). From the scoring of the videos, we developed an evidence-based model of engagement. This was based on the most frequent patterns of body part organization (i.e., the way body parts are connected in movement) observed during activities. Each pattern was given a meaning in terms of engagement by making reference to the literature. The model was tested using structural equation modeling (SEM). It achieved an excellent goodness of fit and all the hypothesized relations between variables were significant. We called the coding system that we developed the Ethographic and Laban-Inspired Coding System of Engagement (ELICSE) and the model the Evidence-based Model of Engagement-related Behavior (EMODEB). To the best of our knowledge, the ELICSE and the EMODEB constitute the first formalization of engagement-related behavior for dementia that describes how behavior unfolds over time and what it means in terms of engagement.
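The inter-rater reliability values reported above are Cohen's kappa; a minimal sketch of how kappa can be computed for two raters assigning one categorical code per video segment follows. The labels and data are illustrative placeholders, not the ELICSE categories or the study's actual codings.

# Cohen's kappa for two raters who assigned one categorical code per segment.
# The labels below are illustrative placeholders, not the ELICSE categories.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

# Usage sketch: two raters coding six segments with three hypothetical codes.
rater_1 = ["directional", "shaping", "shape_flow", "directional", "shaping", "shaping"]
rater_2 = ["directional", "shaping", "directional", "directional", "shaping", "shape_flow"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # chance-corrected agreement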
Related work

Observational rating scales

Early measurements of engagement came in the form of observational rating scales. Observational rating scales are ordinal Likert-type scales that measure engagement through behavior. The most widely used in the field of gerontology is the Observational Measurement of Engagement (OME) developed by Cohen-Mansfield et al. (2009). In the OME, engagement is defined as “the act of being involved or occupied with a stimulus” and is measured across four dimensions: duration (time in seconds that the person with dementia is involved with the stimulus), attention (attentional allocation toward the stimulus, measured on a 4-point Likert scale), attitude (affective stance toward the stimulus, measured on a 7-point Likert scale), and refusal (acceptance or rejection of the stimulus). Another broadly employed observational scale of engagement is the Menorah Park Engagement Scale (MPES), which was developed by Judge et al. (2000) to assess engagement in people with dementia involved in Montessori-based interventions. In the MPES, engagement is defined as “motor or verbal behavior exhibited in response to the activity” and is assessed along a single item, engagement, that can take four values: non-engagement (no motor or verbal behavior in response to the activity, e.g., staring into space, looking away from the activity), self-engagement (self-directed motor and verbal behavior in response to the activity, e.g., hand-wringing), passive engagement (passive motor and verbal behavior directed toward the activity, e.g., looking toward the activity, listening), and constructive engagement (proactive motor and verbal behavior directed toward the activity, e.g., manipulating objects, talking). A last observational scale that is widely used in dementia is the Observed Emotion Rating Scale (OERS; Lawton et al., 1996). It does not directly measure engagement but has often been used in concert with the OME and MPES to assess the emotional state of people with dementia during activities (Moyle et al., 2013; Perugia et al., 2017b). The OERS measures the intensity or duration of five affective states along a 5-point Likert scale: pleasure, anxiety/fear, anger, sadness, and general alertness.

Observational rating scales are very useful tools for getting a broad idea of the engagement state of the person with dementia during activities. However, they can grasp engagement only at a global level: they do not get into the detail of how behavior naturally occurs and unfolds, and they infer a general engagement state from the occurrence of certain behaviors.

Coding schemes

A different approach to measuring engagement is adopted, for instance, in the field of socially interactive robotics (SIR), where the study of interactions between humans and social robots is of crucial importance (Pino et al., 2015). Socially interactive robots are robots that engage socially with humans for the sake of social interaction itself (Feil-Seifer and Mataric, 2011). In the context of SIR, a considerable effort has been made to understand how people with dementia interact with social robots and how such an interaction could have a therapeutic value (Bemelmans et al., 2012; Valentí Soler et al., 2015; Rouaix et al., 2017). To understand the meaningfulness of the interactions that social robots promote, researchers have compiled repertoires of behaviors and used them to annotate videos. To give a few examples, Takayanagi et al. 
(2014) used a time sampling method to compare the effects of the social robot PARO (the arctic seal robot) with those of a stuffed animal (a lion) in people with mild/moderate and severe dementia. They divided videos into units of 10 s and at each interval scored whether the observed person talked (to PARO/lion, to the staff, to him/herself, or to nobody), touched or stroked (PARO/lion), and had a positive, neutral, or negative facial expression. Šabanović et al. (2013) explored the behavior behind PARO's therapeutic success by coding visual engagement (look at the robot), verbal engagement (speak, sing, vocalizations toward the robot), and physical engagement (pet, hit, hold, kiss, take/offer PARO). Wada et al. (2010) tested the effectiveness of a manual for the use of PARO with people with dementia by scoring engagement on a coding sheet that comprised the classes: emotional expression (laugh, smile, no expression, hate), gaze (PARO, staff, user, others), talk (PARO, staff, user, others), and type of interactions with PARO (give, stroke, hold, other). Coding schemes have been employed also in other contexts, for instance to assess engagement in multi-sensory and motor stimulation programs. Cruz et al. (2011) assessed engagement during these types of interventions using a coding scheme composed of the following categories: engagement in the task, interactions with objects, verbal communication, smiling, laughing, nodding the head, and closed eyes.

The coding schemes just described provide a deeper understanding of behavior compared with observational scales. However, they grasp only some characteristics of behavior: instead of considering behavior in its natural flow, they fragment it to pick up only the desired pieces of information. In these cases, since the fragmentation of behavior is not performed in a systematic way, it results in a cherry-picking of behaviors based on their perceived meaningfulness. Ideally, a researcher should first develop a complete inventory of behaviors (an ethogram) and then focus on a portion of it (a coding scheme) based on the research questions. However, such a practice is not reported in these studies.

Ethograms

Ethology is the discipline that studies animal behavior from a biological perspective. As a discipline, Ethology faces nearly the same constraint as gerontology does for dementia: the inaccessibility of mental experiences (Troisi, 1999). To address this issue, Ethology has elaborated a very distinctive and powerful method of analysis, rooted in direct observation, rigorous description, and objective analysis of behavior: the ethogram.

The words ethogram and coding scheme are often used as synonyms in the literature. Some authors use the word ethogram as a synonym for coding scheme (Cruz et al., 2011); others use the word ethogram to designate a more thorough description and analysis of behavior that stems from field observation and incorporates a good deal of complexity (Mabire et al., 2016). In Ethology, the ethogram is the complete list of actions that a particular species performs, while the coding scheme is a portion of an ethogram aimed at answering specific research questions.

Recently, several ethograms have been developed to assess engagement in dementia. Olsen et al. 
(2016) gauged engagement in people with dementia involved in Animal Assisted Activities (AAA) using an ethogram that comprised the following behaviors: conversation (unspecified target), look at (other people, the dog activity, other things), touch (people, dog), smile or laugh at (dog, other things), sing/dance/clapping hands, stereotyped behavior, wandering around, agitated behavior, yawn and sigh, no response, asleep, leaving the room, and off camera. Jøranson et al. (2016) studied the behaviors of people with dementia involved in interactions with PARO and grouped them into: conversation with or without PARO, observe (PARO, other participant/activity leader, other things in the room), smile/laughter (PARO, other participant/activity leader), physical contact with PARO, active with PARO, singing/whistling, clapping/humming/dancing, napping, walking around, repetitive movement, time out of recording, physical contact (with participant/activity leader), signs of discomfort, leaving the group, and no response to contact. Perhaps one of the most complete ethograms of engagement built for dementia is the Video-Coding Incorporating Observed Emotions (VC-IOE; Jones et al., 2015). The VC-IOE was compiled to assess the engagement of people with dementia with mobile telepresence and companion robots. It has six dimensions: facial emotional response (the OERS items: pleasure, anxiety/fear, anger, sadness, general alertness, none), verbal engagement (positive verbal engagement with stimulus, positive verbal engagement with facilitator, negative verbal engagement, no verbal engagement, missing), visual alertness/engagement (visually engaged with stimulus, visually engaged with facilitator/others, no visual engagement, missing visual), behavioral engagement (positive behavioral engagement, negative behavioral engagement, no behavioral engagement, missing behavior), collective engagement (using stimulus for collective engagement, no evidence of collective engagement), and agitation (based on the Cohen-Mansfield Agitation Inventory—CMAI: evidence of agitation and no evidence of agitation; Koss et al., 1997).

The ethograms that we have described are optimal for studying behavior in its complexity, as it naturally occurs and flows. However, they produce a measurement of engagement that is segmented into many small pieces of information that cannot be traced back to an overall engagement state. For this reason, we decided to employ a mixed approach to develop a coding system: ethographic and Laban-inspired. First, we observed people with dementia involved in two very different activities and developed two ethograms to describe their behavior in the two contexts in a detailed way. At this level, we kept behaviors at a very fine granularity. Second, we used LMA to identify a common organizational structure for the behaviors in the two ethograms and to unify them in a unique coding system viable for both activities.

Laban movement analysis

LMA is a holistic framework that provides a vocabulary to describe, interpret, and generate movement (Bartenieff and Lewis, 1980). It is organized into four main categories: body, space, effort, and shape (Hackney, 2002). The category body defines specific body parts (in terms of elements in the body structure) and how these body parts are connected in movement (Maletic, 1987). 
The orchestration of body parts (also called body part organization) can be successive (adjacent body parts move one after the other), sequential (non-adjacent body parts move one after the other), or simultaneous (all active body parts move together at the same time). The category space describes the specific direction of a movement with the center of the body as a reference point. The aim of this category is to map the 3-dimensional structure of the body in relation to the 3-dimensional environment. The category effort regards the qualities of movement, that is, how a movement is performed. Movement has four qualities: flow (ongoingness), weight (relating to power and gravity), space (focus), and time (change in speed) (Bradley, 2008). The category shape describes “attitudes toward the environment that are expressed in the way the body changes form” (Wile and Cook, 2010). There are three distinctions in the category shape, also referred to as modes of shape change: shape flow (changes in shape in relation to the self), directional shape (goal-oriented changes of the body shape in relation to the others and the environment), and shaping (molding and carving of the body in interaction with the others and the environment).

In the past, LMA has been used in numerous studies, for instance, to create and describe choreographies (Preston-Dunlop, 1995), recognize emotions in dance movement (Camurri et al., 2003), increase movement efficiency for factory workers (Lamb, 1965), develop “choreographies of interaction” for design activities (Weerdesteijn et al., 2005), communicate emotions and mental states to robots (Lourens et al., 2010), and evoke and intensify the perception of emotions (Shafir et al., 2016).

In order to organize the ethograms into a unique coding system, we focused on the category shape and, in particular, on the modes of shape change. This was for three reasons. First, the category shape captures the way the body changes shape in relation to the self and the environment, and, in general, the behaviors in the ethograms mostly expressed a direction of the body toward the environment (other participants, facilitator, and game) that had a neutral, positive, or negative affective nuance. Second, the modes of shape change conceive the body in its entirety and describe changes in its form as whole-body dynamics. This gave us the possibility to describe a large variety of body configurations by combining behaviors belonging to different body parts. Third, as the modes of shape change describe whole-body dynamics motivated by inner attitudes and by the environment, they were particularly suited to associating an engagement meaning with the different body configurations described.

Frameworks of engagement

At present, there is just one model of engagement developed for people with dementia, the Comprehensive Process Model of Engagement (Cohen-Mansfield et al., 2011). It describes a series of factors that influence engagement (measured with the OME) in people with dementia: environmental attributes (e.g., background noise, lighting, sound, number of persons in proximity), stimuli attributes (e.g., human social stimuli, simulated social stimuli, inanimate social stimuli), and personal attributes (e.g., gender, age, marital status, medication intake). As the experience of engagement is very difficult to study in people with dementia, very little is known about its characteristics and components. 
To draw a thorough framework of engagement for dementia, we must step into other domains and understand whether renowned models of user engagement are applicable to dementia.

Attfield et al. (2011) described user engagement as the “emotional, cognitive and behavioral connection that exists, at any point in time, and possibly over time, between a user and a resource.” Such a connection is described by a series of characteristics: focused attention, positive affect, aesthetics (i.e., the sensory and visual appeal of an interface), endurability (i.e., the likelihood of remembering an experience), novelty (i.e., the surprise effect provoked by a new experience), richness and control (i.e., the variety and complexity of thoughts, actions, and perceptions evoked by the activity), reputation-trust-expectation, and user context (i.e., the motivation, incentives, and benefits that users get from engagement). Some of these characteristics—namely endurability, novelty, richness and control—are difficult to study in dementia since they presuppose preserved cognitive skills. Other characteristics—aesthetics and reputation-trust-expectation—are features of the technology influencing engagement, rather than elements composing it. Three elements of this framework might be transferred to the context of dementia: focused attention, positive affect, and user context. Attentional and emotional involvement are unanimously considered the fundamentals of user engagement (Cohen-Mansfield et al., 2009; Peters et al., 2009). User characteristics are called personal attributes by Cohen-Mansfield et al. (2011) and have been shown to affect engagement in dementia. Indeed, Perugia et al. (2017b) found that motivational disorders, such as apathy and depression, negatively affect engagement in dementia.

When engagement is studied in the context of human-robot interaction (HRI), things change. Castellano et al. (2009) involved children in chess play with the robot iCat. They observed that, in such a context, engagement was influenced both by the task that the user had to carry out and by the social interaction with the agent. In general, the framework of Castellano and colleagues is applicable to dementia. Indeed, playful activities are usually carried out in groups in nursing homes. As a matter of fact, Perugia et al. (2017a) applied thematic analysis to an inventory of behaviors displayed by people with dementia during playful activities and identified three main themes overlapping with those just described: attention (task-centered engagement), rapport (social interaction), and affect.

In the literature on user engagement, engagement is regarded as a process composed of a number of stages. Sidner et al. (2005) defined engagement as “the process by which individuals in an interaction start, maintain, and end their perceived connection to one another.” O'Brien and Toms (2008) identified four phases of engagement: point of engagement, sustained engagement, disengagement, and re-engagement. The conception of engagement as a process with a beginning, a development, and an end can easily be transferred to the context of dementia, especially if we are able to create a systematic description of the progression of engagement-related behavior over time.

A last feature of engagement to mention is its intensity. 
Brown and Cairns (2004) observed three levels of immersion in the game experience: engagement (the gamer invests time, effort, and attention), engrossment (the gamer's emotions are directly affected by the game), and total immersion (the gamer is cut off from reality and all that matters is the game). The first two levels, engagement and engrossment, can be transposed to the context of dementia, as they can be gauged with objective measures (e.g., behavior, physiology). The third, total immersion, cannot: it must be assessed with subjective measures (e.g., self-reports), and it involves a sense of detachment from reality and a loss of spatial and temporal reference points that already constitute the everyday condition of people with dementia.To summarize, according to the literature, engagement is composed of focused attention (or task engagement), social interaction (or rapport), and affect. It is a process that has a start (or point of engagement), a development (or sustained engagement), and an end (disengagement), and it has different levels of intensity: engagement, engrossment, and total immersion. Within this paper, we present an evidence-based model of engagement-related behavior (EMODEB) that seeks to translate all these features of engagement into the context of dementia.
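To make the coding scheme more concrete, the Python sketch below shows one way an ethogram behavior might be annotated with a mode of shape change, an engagement component, and an affective valence, and how a simple per-session profile could be derived. It is a minimal illustration under stated assumptions: the names (CodedBehavior, shape_mode_profile, the example label) are hypothetical and the sketch does not reproduce the authors' actual EMODEB coding procedure.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict, List


class ShapeMode(Enum):
    """Modes of shape change from the LMA category 'shape'."""
    SHAPE_FLOW = auto()   # changes in shape in relation to the self
    DIRECTIONAL = auto()  # goal-oriented changes toward others/the environment
    SHAPING = auto()      # molding/carving of the body in interaction


class Component(Enum):
    """Components of engagement recurring in the literature."""
    ATTENTION = auto()    # task-centered engagement
    RAPPORT = auto()      # social interaction
    AFFECT = auto()       # emotional involvement


class Valence(Enum):
    """Affective nuance attached to a coded behavior."""
    NEGATIVE = auto()
    NEUTRAL = auto()
    POSITIVE = auto()


@dataclass
class CodedBehavior:
    """One ethogram behavior mapped onto the unified coding system."""
    label: str              # e.g., "leans toward the game" (hypothetical label)
    body_parts: List[str]   # body parts involved in the configuration
    shape_mode: ShapeMode   # whole-body dynamic assigned to the behavior
    component: Component    # engagement component the behavior expresses
    valence: Valence        # neutral, positive, or negative affective nuance
    onset_s: float          # onset within the session, in seconds
    duration_s: float       # duration of the behavior, in seconds


def shape_mode_profile(events: List[CodedBehavior]) -> Dict[ShapeMode, float]:
    """Share of coded time spent in each mode of shape change."""
    total = sum(e.duration_s for e in events) or 1.0
    profile = {mode: 0.0 for mode in ShapeMode}
    for e in events:
        profile[e.shape_mode] += e.duration_s / total
    return profile
```

Such a structure would let ethograms from different studies be pooled and compared on the same whole-body dimensions, which is the stated motivation for adopting the modes of shape change.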
[ "21450215", "17612800", "19307858", "21946802", "20155522", "21665880", "3655778", "26577550", "27019225", "16526585", "9236952", "8548515", "26790570", "23177981", "26270953", "23506125", "27590332", "29148293", "26257646", "8465902", "23545466", "28713296", "26793147", "25309434", "10580305", "26388764" ]
[ { "pmid": "21450215", "title": "Socially assistive robots in elderly care: a systematic review into effects and effectiveness.", "abstract": "The ongoing development of robotics on the one hand and, on the other hand, the foreseen relative growth in number of elderly individuals suffering from dementia, raises the question of which contribution robotics could have to rationalize and maintain, or even improve the quality of care. The objective of this review was to assess the published effects and effectiveness of robot interventions aiming at social assistance in elderly care. We searched, using Medical Subject Headings terms and free words, in the CINAHL, MEDLINE, Cochrane, BIOMED, PUBMED, PsycINFO, and EMBASE databases. Also the IEEE Digital Library was searched. No limitations were applied for the date of publication. Only articles written in English were taken into account. Collected publications went through a selection process. In the first step, publications were collected from major databases using a search query. In the second step, 3 reviewers independently selected publications on their title, using predefined selection criteria. In the third step, publications were judged based on their abstracts by the same reviewers, using the same selection criteria. In the fourth step, one reviewer made the final selection of publications based on complete content. Finally, 41 publications were included in the review, describing 17 studies involving 4 robot systems. Most studies reported positive effects of companion-type robots on (socio)psychological (eg, mood, loneliness, and social connections and communication) and physiological (eg, stress reduction) parameters. The methodological quality of the studies was, mostly, low. Although positive effects were reported, the scientific value of the evidence was limited. The positive results described, however, prompt further effectiveness research in this field." }, { "pmid": "17612800", "title": "Enriching opportunities for people living with dementia in nursing homes: an evaluation of a multi-level activity-based model of care.", "abstract": "This paper reports on the evaluation of the Enriched Opportunities Programme in improving well-being, diversity of activity, health, and staff practice in nursing home care for people with dementia. Participants were 127 residents with a diagnosis of dementia or enduring mental health problems in three specialist nursing homes in the UK. A repeated measures within-subjects design was employed, collecting quantitative and qualitative data at three points over a twelve-month period in each facility with follow-up 7 to 14 months later. Two-way ANOVAs revealed a statistically significant increase in levels of observed well-being and in diversity of activity following the intervention. There was a statistically significant increase in the number of positive staff interventions but no change in the number of negative staff interventions overall. There was a significant reduction in levels of depression. No significant changes in anxiety, health status, hospitalisations, or psychotropic medication usage were observed. The Enriched Opportunities Programme demonstrated a positive impact on the lives of people with dementia in nursing homes already offering a relatively good standard of care, in a short period of time. The refined programme requires further evaluation to establish its portability." 
}, { "pmid": "19307858", "title": "Engagement in persons with dementia: the concept and its measurement.", "abstract": "PURPOSE\nThe aim of this article is to delineate the underlying premises of the concept of engagement in persons with dementia and present a new theoretical framework of engagement.\n\n\nSETTING/SUBJECTS\nThe sample included 193 residents of seven Maryland nursing homes. All participants had a diagnosis of dementia.\n\n\nMETHODOLOGY\nThe authors describe a model of factors that affect engagement of persons with dementia. Moreover, the authors present the psychometric qualities of an assessment designed to capture the dimensions of engagement (Observational Measurement of Engagement). Finally, the authors detail plans for future research and data analyses that are currently underway.\n\n\nDISCUSSION\nThis article lays the foundation for a new theoretical framework concerning the mechanisms of interactions between persons with cognitive impairment and environmental stimuli. Additionally, the study examines what factors are associated with interest and negative and positive feelings in engagement." }, { "pmid": "21946802", "title": "The comprehensive process model of engagement.", "abstract": "BACKGROUND\nEngagement refers to the act of being occupied or involved with an external stimulus. In dementia, engagement is the antithesis of apathy.\n\n\nOBJECTIVE\nThe Comprehensive Process Model of Engagement was examined, in which environmental, personal, and stimulus characteristics impact the level of engagement.\n\n\nMETHODS\n: Participants were 193 residents of 7 Maryland nursing with a diagnosis of dementia. Stimulus engagement was assessed via the Observational Measure of Engagement, measuring duration, attention, and attitude to the stimulus. Twenty-five stimuli were presented, which were categorized as live human social stimuli, simulated social stimuli, inanimate social stimuli, a reading stimulus, manipulative stimuli, a music stimulus, task and work-related stimuli, and two different self-identity stimuli.\n\n\nRESULTS\nAll stimuli elicited significantly greater engagement in comparison to the control stimulus. In the multivariate model, music significantly increased engagement duration, whereas all other stimuli significantly increased duration, attention, and attitude. Significant environmental variables in the multivariate model that increased engagement were: use of the long introduction with modeling (relative to minimal introduction), any level of sound (especially moderate sound), and the presence of between 2 and 24 people in the room. Significant personal attributes included Mini-Mental State Examination scores, activities of daily living performance and clarity of speech, which were positively associated with higher engagement scores.\n\n\nCONCLUSIONS\nResults are consistent with the Comprehensive Process Model of Engagement. Personal attributes, environmental factors, and stimulus characteristics all contribute to the level and nature of engagement, with a secondary finding being that exposure to any stimulus elicits engagement in persons with dementia." }, { "pmid": "20155522", "title": "The impact of past and present preferences on stimulus engagement in nursing home residents with dementia.", "abstract": "OBJECTIVES\nWe examined engagement with stimuli in 193 nursing home residents with dementia. 
We hypothesized that activities and stimuli based on a person's past and current preferences would result in more engagement than other activities/stimuli.\n\n\nMETHOD\nThe expanded version of the self-identity questionnaire [Cohen-Mansfield, J., Golander, H. & Arheim, G. (2000)] was used to determine participants' past/present interests (as reported by relatives) in the following areas: art, music, babies, pets, reading, television, and office work. We utilized the observational measurement of engagement (Cohen-Mansfield, J., Dakheel-Ali, M., & Marx, M.S. (2009).\n\n\nRESULTS\nAnalysis revealed that residents with current interests in music, art, and pets were more engaged by stimuli that reflect these interests than residents without these interests.\n\n\nCONCLUSION\nOur findings demonstrate the utility of determining a person's preferences for stimuli in order to predict responsiveness. Lack of prediction for some stimuli may reflect differences between past preferences and activities that are feasible in the present." }, { "pmid": "21665880", "title": "Effects of a motor and multisensory-based approach on residents with moderate-to-severe dementia.", "abstract": "Involving institutionalized people with dementia in their routines may be challenging, particularly in advanced stages of the disease. Motor and multisensory stimulation may help to maintain or improve residents' remaining abilities such as communication and self-care. This study examines the effects of a motor and multisensory-based approach on the behavior of 6 residents with moderate-to-severe dementia. A single-group, pre- and post test design was conducted. Motor and multisensory stimulation strategies were implemented in residents' morning care routines by staff, after the provision of training and assistance. Twelve video recordings of morning care (6 pre- and 6 post interventions) were coded for the type of residents' behavior. Results showed a tendency toward improvements in residents' levels of caregiver-direct gaze, laughing and engagement, and a reduction of closed eyes, during morning care. The introduction of a motor and multisensory-based approach in care routines may improve residents' engagement and attention to the environment." }, { "pmid": "3655778", "title": "Validity and reliability of the Experience-Sampling Method.", "abstract": "To understand the dynamics of mental health, it is essential to develop measures for the frequency and the patterning of mental processes in every-day-life situations. The Experience-Sampling Method (ESM) is an attempt to provide a valid instrument to describe variations in self-reports of mental processes. It can be used to obtain empirical data on the following types of variables: a) frequency and patterning of daily activity, social interaction, and changes in location; b) frequency, intensity, and patterning of psychological states, i.e., emotional, cognitive, and conative dimensions of experience; c) frequency and patterning of thoughts, including quality and intensity of thought disturbance. The article reviews practical and methodological issues of the ESM and presents evidence for its short- and long-term reliability when used as an instrument for assessing the variables outlined above. It also presents evidence for validity by showing correlation between ESM measures on the one hand and physiological measures, one-time psychological tests, and behavioral indices on the other. 
A number of studies with normal and clinical populations that have used the ESM are reviewed to demonstrate the range of issues to which the technique can be usefully applied." }, { "pmid": "26577550", "title": "Assessing engagement in people with dementia: a new approach to assessment using video analysis.", "abstract": "The study of engagement in people with dementia is important to determine the effectiveness of interventions that aim to promote meaningful activity. However, the assessment of engagement for people with dementia in relation to our current work that uses social robots is fraught with challenges. The Video Coding - Incorporating Observed Emotion (VC-IOE) protocol that focuses on six dimensions of engagement: emotional, verbal, visual, behavioral, collective and signs of agitation was therefore developed. This paper provides an overview of the concept of engagement in dementia and outlines the development of the VC-IOE to assess engagement in people with dementia when interacting with social robots." }, { "pmid": "27019225", "title": "Group activity with Paro in nursing homes: systematic investigation of behaviors in participants.", "abstract": "BACKGROUND\nA variety of group activities is promoted for nursing home (NH) residents with dementia with the aim to reduce apathy and to increase engagement and social interaction. Investigating behaviors related to these outcomes could produce insights into how the activities work. The aim of this study was to systematically investigate behaviors seen in people with dementia during group activity with the seal robot Paro, differences in behaviors related to severity of dementia, and to explore changes in behaviors.\n\n\nMETHODS\nThirty participants from five NHs formed groups of five to six participants at each NH. Group sessions with Paro lasted for 30 minutes twice a week during 12 weeks of intervention. Video recordings were conducted in the second and tenth week. An ethogram, containing 18 accurately defined and described behaviors, mapped the participants' behaviors. Duration of behaviors, such as \"Observing Paro,\" \"Conversation with Paro on the lap,\" \"Smile/laughter toward other participants,\" were converted to percentage of total session time and analyzed statistically.\n\n\nRESULTS\n\"Observing Paro\" was observed more often in participants with mild to moderate dementia (p = 0.019), while the variable \"Observing other things\" occurred more in the group of severe dementia (p = 0.042). \"Smile/laughter toward other participants\" showed an increase (p = 0.011), and \"Conversations with Paro on the lap\" showed a decrease (p = 0.014) during the intervention period.\n\n\nCONCLUSIONS\nParticipants with severe dementia seemed to have difficulty in maintaining attention toward Paro during the group session. In the group as a whole, Paro seemed to be a mediator for increased social interactions and created engagement." }, { "pmid": "16526585", "title": "Factors that relate to activity engagement in nursing home residents.", "abstract": "Many nursing home residents are unoccupied and at risk for poor health outcomes because of inactivity. The purpose of this study was to identify characteristics of residents with dementia that predict engagement in activities when activities are implemented under ideal conditions. Data from a clinical trial that tested the efficacy of individually prescribed activities were used to address the study aim. Thirty subjects were videotaped daily for 12 days during 20-minute activity sessions. 
Measures of engagement (time on task and level of participation) were taken from these videotapes. Univariate logistic regression analyses indicated that cognitive status and physical function explained a significant amount of variance in engagement. Efforts to promote function may facilitate even greater benefits from prescribed activities by improving capacity for engagement." }, { "pmid": "9236952", "title": "Assessing patterns of agitation in Alzheimer's disease patients with the Cohen-Mansfield Agitation Inventory. The Alzheimer's Disease Cooperative Study.", "abstract": "As part of the effort of the NIA Alzheimer's disease cooperative study to develop improved instruments for quantifying effects in Alzheimer's disease (AD) clinical trials, patterns of agitated behaviors were evaluated with the Cohen-Mansfield Agitation Inventory (CMAI) in 241 AD patients and 64 healthy elderly controls with valid baseline assessment on the CMAI. The test-retest reliability of the CMAI over 1 month was good (r = 0.74 to 0.92). Physically and verbally nonaggressive behaviors were most often reported, whereas physically aggressive behaviors were rare. Frequency of agitated behaviors increased with dementia severity, especially for patients with a Mini-Mental Status Exam score of 0-4. Agitation tended to increase in the evening with dementia severity for the more impaired patients. Amount of agitation did increase after 12 months in all but controls and mildly demented patients. The CMAI shows promise for evaluating a unique aspect of behavior and may be useful in assessing the effects of cognitive enhancers and other types of psychotropic drugs on behavior in dementia patients." }, { "pmid": "8548515", "title": "Observed affect in nursing home residents with Alzheimer's disease.", "abstract": "A method for assessing affect states among older people with Alzheimer's disease was developed for use in a study designed to evaluate a special care unit for such residents of a nursing home. The 6-item Philadelphia Geriatric Center Affect Rating Scale was designed for the use of research and other staff in assessing positive affect (pleasure, interest, contentment) and negative affect (sadness, worry/anxiety, and anger) by direct observation of facial expression, body movement, and other cues that do not depend on self-report, among 253 demented and 43 nondemented residents. Each affect scale was highly reliable, expressed in estimated portions of a 10-minute observation period when the affect expression occurred. Validity estimates were affirmative in showing discriminant correlations between the positive states and various independent measures of social and other outwardly engaged behavior and between negative states and other measures of depression, anger, anxiety, and withdrawal. Limited support for the two-factor dimensionality of the affect ratings was obtained, although positive and negative affect were correlated, rather than independent. Some hope is offered that the preference and aversions of Alzheimer patients may be better understood by observations of their emotional behaviors and that such methods may lead to a better ability to judge institutional quality." }, { "pmid": "26790570", "title": "Social interactions between people with dementia: pilot evaluation of an observational instrument in a nursing home.", "abstract": "BACKGROUND\nIn dementia, cognitive and psychological disorders might interfere with maintaining social interactions. 
We have little information about the nature of these interactions of people with dementia in nursing homes. The aim of this study is to investigate social interactions between people with dementia and to validate an observation grid of them.\n\n\nMETHODS\nFifty-six institutionalized people with dementia took part in this study. Residents had not met beforehand and were divided into groups of four to six. Social behaviors were videotaped and analyzed by two independent raters with an observation grid measuring frequency of occurrence. The ethogram was the conceptual tool that became the Social Observation Behaviors Residents Index (SOBRI).\n\n\nRESULTS\nTwo-thousand-six-hundred-seventy instances of behavior were collected. Behaviors directed at others represented 50.90% and self-centered behaviors 47.83%. No negative behaviors were observed. Principal Component Analysis (PCA) was used to validate the SOBRI and showed two components of social behaviors that explained about 30.56% of the total variance: social interactions with other residents (18.36%) and with care staff (12.20%). The grid showed a good internal consistency with a Cronbach's α of 0.90 for the first component and 0.85 for the second one.\n\n\nCONCLUSIONS\nThe SOBRI presents robust psychometric validity. This pilot study indicates that people with dementia spontaneously interact with other residents. These results contradict the stigma of non-communication and the stereotypes about dementia. More studies and validations are needed to contribute to the knowledge of social interactions in dementia." }, { "pmid": "23177981", "title": "Use of social commitment robots in the care of elderly people with dementia: a literature review.", "abstract": "Globally, the population of elderly people is rising with an increasing number of people living with dementias. This trend is coupled with a prevailing need for compassionate caretakers. A key challenge in dementia care is to assist the person to sustain communication and connection to family, caregivers and the environment. The use of social commitment robots in the care of people with dementia has intriguing possibilities to address some of these care needs. This paper discusses the literature on the use of social commitment robots in the care of elderly people with dementia; the contributions to care that social commitment robots potentially can make and the cautions around their use. Future directions for programs of research are identified to further the development of the evidence-based knowledge in this area." }, { "pmid": "26270953", "title": "Effect of an interactive therapeutic robotic animal on engagement, mood states, agitation and psychotropic drug use in people with dementia: a cluster-randomised controlled trial protocol.", "abstract": "INTRODUCTION\nApathy, agitated behaviours, loneliness and depression are common consequences of dementia. This trial aims to evaluate the effect of a robotic animal on behavioural and psychological symptoms of dementia in people with dementia living in long-term aged care.\n\n\nMETHODS AND ANALYSIS\nA cluster-randomised controlled trial with three treatment groups: PARO (robotic animal), Plush-Toy (non-robotic PARO) or Usual Care (Control). The nursing home sites are Australian Government approved and accredited facilities of 60 or more beds. The sites are located in South-East Queensland, Australia. A sample of 380 adults with a diagnosis of dementia, aged 60 years or older living in one of the participating facilities will be recruited. 
The intervention consists of three individual 15 min non-facilitated sessions with PARO or Plush-Toy per week, for a period of 10 weeks. The primary outcomes of interest are improvement in agitation, mood states and engagement. Secondary outcomes include sleep duration, step count, change in psychotropic medication use, change in treatment costs, and staff and family perceptions of PARO or Plush-Toy. Video data will be analysed using Noldus XT Pocket Observer; descriptive statistics will be used for participants' demographics and outcome measures; cluster and individual level analyses to test all hypotheses and Generalised Linear Models for cluster level and Generalised Estimation Equations and/or Multi-level Modeling for individual level data.\n\n\nETHICS AND DISSEMINATION\nThe study participants or their proxy will provide written informed consent. The Griffith University Human Research Ethics Committee has approved the study (NRS/03/14/HREC). The results of the study will provide evidence of the efficacy of a robotic animal as a psychosocial treatment for the behavioural and psychological symptoms of dementia. Findings will be presented at local and international conference meetings and published in peer-reviewed journals.\n\n\nTRIAL REGISTRATION NUMBER\nAustralian and New Zealand Clinical Trials Registry number ACTRN12614000508673 date registered 13/05/2014." }, { "pmid": "23506125", "title": "Exploring the effect of companion robots on emotional expression in older adults with dementia: a pilot randomized controlled trial.", "abstract": "This pilot study aimed to compare the effect of companion robots (PARO) to participation in an interactive reading group on emotions in people living with moderate to severe dementia in a residential care setting. A randomized crossover design, with PARO and reading control groups, was used. Eighteen residents with mid- to late-stage dementia from one aged care facility in Queensland, Australia, were recruited. Participants were assessed three times using the Quality of Life in Alzheimer's Disease, Rating Anxiety in Dementia, Apathy Evaluation, Geriatric Depression, and Revised Algase Wandering Scales. PARO had a moderate to large positive influence on participants' quality of life compared to the reading group. The PARO intervention group had higher pleasure scores when compared to the reading group. Findings suggest PARO may be useful as a treatment option for people with dementia; however, the need for a larger trial was identified." }, { "pmid": "27590332", "title": "Engagement in elderly persons with dementia attending animal-assisted group activity.", "abstract": "The need for meaningful activities that enhance engagement is very important among persons with dementia (PWDs), both for PWDs still living at home, as well as for PWDs admitted to a nursing home (NH). In this study, we systematically registered behaviours related to engagement in a group animal-assisted activity (AAA) intervention for 21 PWDs in NHs and among 28 home-dwelling PWDs attending a day care centre. The participants interacted with a dog and its handler for 30 minutes, twice a week for 12 weeks. Video-recordings were carried out early (week 2) and late (week 10) during the intervention period and behaviours were categorized by the use of an ethogram. AAA seems to create engagement in PWDs, and might be a suitable and health promoting intervention for both NH residents and participants of a day care centre. 
Degree of dementia should be considered when planning individual or group based AAA." }, { "pmid": "29148293", "title": "Quantity of Movement as a Measure of Engagement for Dementia: The Influence of Motivational Disorders.", "abstract": "Engagement in activities is crucial to improve quality of life in dementia. Yet, its measurement relies exclusively on behavior observation and the influence that behavioral and psychological symptoms of dementia (BPSD) have on it is overlooked. This study investigated whether quantity of movement, gauged with a wrist-worn accelerometer, could be a sound measure of engagement and whether apathy and depression negatively affected engagement. Fourteen participants with dementia took part in 6 sessions of activities: 3 of cognitive games (eg, jigsaw puzzles) and 3 of robot play (Pleo). Results highlighted significant correlations between quantity of movement and observational scales of engagement and a strong negative influence of apathy and depression on engagement. Overall, these findings suggest that quantity of movement could be used as an ancillary measure of engagement and underline the need to profile people with dementia according to their concurrent BPSD to better understand their engagement in activities." }, { "pmid": "26257646", "title": "\"Are we ready for robots that care for us?\" Attitudes and opinions of older adults toward socially assistive robots.", "abstract": "Socially Assistive Robots (SAR) may help improve care delivery at home for older adults with cognitive impairment and reduce the burden of informal caregivers. Examining the views of these stakeholders on SAR is fundamental in order to conceive acceptable and useful SAR for dementia care. This study investigated SAR acceptance among three groups of older adults living in the community: persons with Mild Cognitive Impairment, informal caregivers of persons with dementia, and healthy older adults. Different technology acceptance questions related to the robot and user characteristics, potential applications, feelings about technology, ethical issues, and barriers and facilitators for SAR adoption, were addressed in a mixed-method study. Participants (n = 25) completed a survey and took part in a focus group (n = 7). A functional robot prototype, a multimedia presentation, and some use-case scenarios provided a base for the discussion. Content analysis was carried out based on recorded material from focus groups. Results indicated that an accurate insight of influential factors for SAR acceptance could be gained by combining quantitative and qualitative methods. Participants acknowledged the potential benefits of SAR for supporting care at home for individuals with cognitive impairment. In all the three groups, intention to use SAR was found to be lower for the present time than that anticipated for the future. However, caregivers and persons with MCI had a higher perceived usefulness and intention to use SAR, at the present time, than healthy older adults, confirming that current needs are strongly related to technology acceptance and should influence SAR design. A key theme that emerged in this study was the importance of customizing SAR appearance, services, and social capabilities. Mismatch between needs and solutions offered by the robot, usability factors, and lack of experience with technology, were seen as the most important barriers for SAR adoption." 
}, { "pmid": "23545466", "title": "The psychosocial effects of a companion robot: a randomized controlled trial.", "abstract": "OBJECTIVES\nTo investigate the psychosocial effects of the companion robot, Paro, in a rest home/hospital setting in comparison to a control group.\n\n\nDESIGN\nRandomized controlled trial. Residents were randomized to the robot intervention group or a control group that attended normal activities instead of Paro sessions. Sessions took place twice a week for an hour over 12 weeks. Over the trial period, observations were conducted of residents' social behavior when interacting as a group with the robot. As a comparison, observations were also conducted of all the residents during general activities when the resident dog was or was not present.\n\n\nSETTING\nA residential care facility in Auckland, New Zealand.\n\n\nPARTICIPANTS\nForty residents in hospital and rest home care.\n\n\nMEASUREMENTS\nResidents completed a baseline measure assessing cognitive status, loneliness, depression, and quality of life. At follow-up, residents completed a questionnaire assessing loneliness, depression, and quality of life. During observations, behavior was noted and collated for instances of talking and stroking the dog/robot.\n\n\nRESULTS\nIn comparison with the control group, residents who interacted with the robot had significant decreases in loneliness over the period of the trial. Both the resident dog and the seal robot made an impact on the social environment in comparison to when neither was present. Residents talked to and touched the robot significantly more than the resident dog. A greater number of residents were involved in discussion about the robot in comparison with the resident dog and conversation about the robot occurred more.\n\n\nCONCLUSION\nParo is a positive addition to this environment and has benefits for older people in nursing home care. Paro may be able to address some of the unmet needs of older people that a resident animal may not, particularly relating to loneliness." }, { "pmid": "28713296", "title": "Affective and Engagement Issues in the Conception and Assessment of a Robot-Assisted Psychomotor Therapy for Persons with Dementia.", "abstract": "The interest in robot-assisted therapies (RAT) for dementia care has grown steadily in recent years. However, RAT using humanoid robots is still a novel practice for which the adhesion mechanisms, indications and benefits remain unclear. Also, little is known about how the robot's behavioral and affective style might promote engagement of persons with dementia (PwD) in RAT. The present study sought to investigate the use of a humanoid robot in a psychomotor therapy for PwD. We examined the robot's potential to engage participants in the intervention and its effect on their emotional state. A brief psychomotor therapy program involving the robot as the therapist's assistant was created. For this purpose, a corpus of social and physical behaviors for the robot and a \"control software\" for customizing the program and operating the robot were also designed. Particular attention was given to components of the RAT that could promote participant's engagement (robot's interaction style, personalization of contents). In the pilot assessment of the intervention nine PwD (7 women and 2 men, M age = 86 y/o) hospitalized in a geriatrics unit participated in four individual therapy sessions: one classic therapy (CT) session (patient- therapist) and three RAT sessions (patient-therapist-robot). 
Outcome criteria for the evaluation of the intervention included: participant's engagement, emotional state and well-being; satisfaction of the intervention, appreciation of the robot, and empathy-related behaviors in human-robot interaction (HRI). Results showed a high constructive engagement in both CT and RAT sessions. More positive emotional responses in participants were observed in RAT compared to CT. RAT sessions were better appreciated than CT sessions. The use of a social robot as a mediating tool appeared to promote the involvement of PwD in the therapeutic intervention increasing their immediate wellbeing and satisfaction." }, { "pmid": "26793147", "title": "Emotion Regulation through Movement: Unique Sets of Movement Characteristics are Associated with and Enhance Basic Emotions.", "abstract": "We have recently demonstrated that motor execution, observation, and imagery of movements expressing certain emotions can enhance corresponding affective states and therefore could be used for emotion regulation. But which specific movement(s) should one use in order to enhance each emotion? This study aimed to identify, using Laban Movement Analysis (LMA), the Laban motor elements (motor characteristics) that characterize movements whose execution enhances each of the basic emotions: anger, fear, happiness, and sadness. LMA provides a system of symbols describing its motor elements, which gives a written instruction (motif) for the execution of a movement or movement-sequence over time. Six senior LMA experts analyzed a validated set of video clips showing whole body dynamic expressions of anger, fear, happiness and sadness, and identified the motor elements that were common to (appeared in) all clips expressing the same emotion. For each emotion, we created motifs of different combinations of the motor elements common to all clips of the same emotion. Eighty subjects from around the world read and moved those motifs, to identify the emotion evoked when moving each motif and to rate the intensity of the evoked emotion. All subjects together moved and rated 1241 motifs, which were produced from 29 different motor elements. Using logistic regression, we found a set of motor elements associated with each emotion which, when moved, predicted the feeling of that emotion. Each emotion was predicted by a unique set of motor elements and each motor element predicted only one emotion. Knowledge of which specific motor elements enhance specific emotions can enable emotional self-regulation through adding some desired motor qualities to one's personal everyday movements (rather than mimicking others' specific movements) and through decreasing motor behaviors which include elements that enhance negative emotions." }, { "pmid": "25309434", "title": "Comparison of Verbal and Emotional Responses of Elderly People with Mild/Moderate Dementia and Those with Severe Dementia in Responses to Seal Robot, PARO.", "abstract": "INTRODUCTION\nThe differences in verbal and emotional responses to a baby seal robot, PARO, of elderly people with dementia residing at an elderly nursing care facility were analyzed. 
There were two groups of elderly people: one was with mild/moderate dementia (M-group) that consisted with 19 elderly residents in the general ward, and the other was with severe dementia (S-group) that consisted with 11 elderly residents in the dementia ward.\n\n\nMETHOD\nEach elderly resident in both groups interacted with either PARO or a control (stuffed lion toy: Lion) brought by a staff at each resident's private room. Their responses were recorded on video. Behavioral analysis of the initial 6 min of the interaction was conducted using a time sampling method.\n\n\nRESULTS\nIn both groups, subjects talked more frequently to PARO than to Lion, showed more positive changes in emotional expression with PARO than with Lion, and laughed more frequently with PARO than with Lion. Subjects in M-group even showed more negative emotional expressions with Lion than with PARO. Furthermore, subjects in S-group showed neutral expression more frequently with Lion than with PARO, suggesting more active interaction with PARO. For subjects in M-group, frequencies of touching and stroking, frequencies of talking to staff member, and frequencies of talking initiated by staff member were significantly higher with Lion than with PARO.\n\n\nCONCLUSION\nThe elderly people both with mild/moderate dementia and with severe dementia showed greater interest in PARO than in Lion. The results suggest that introducing PARO may increase willingness of the staff members to communicate and work with elderly people with dementia, especially those with mild/moderate dementia who express their demand of communication more than those with severe dementia." }, { "pmid": "10580305", "title": "Ethological research in clinical psychiatry: the study of nonverbal behavior during interviews.", "abstract": "Ethology is relevant to clinical psychiatry for two different reasons. First, ethology may contribute significantly to the development of more accurate and valid methods for measuring the behavior of persons with mental disorders. Second, ethology, as the evolutionary study of behavior, may provide psychiatry with a theoretical framework for integrating a functional perspective into the definition and clinical assessment of mental disorders. This article describes an ethological method for studying the nonverbal behavior of persons with mental disorders during clinical interviews and reviews the results derived from the application of this method in studies of patients who had a diagnosis of schizophrenia or depression. These findings and others that are emerging from current ethological research in psychiatry indicate that the ethological approach is not limited simply to a mere translation into quantitative and objective data of what clinicians already know on the basis of their judgment or the use of rating scales. Rather, it produces new insights on controversial aspects of psychiatric disorders. Although the impact of ethology on clinical psychiatry is still limited, recent developments in the fields of ethological and Darwinian psychiatry can revitalize the interest of clinical psychiatrists for ethology." 
}, { "pmid": "26388764", "title": "Social robots in advanced dementia.", "abstract": "AIMS\nPilot studies applying a humanoid robot (NAO), a pet robot (PARO) and a real animal (DOG) in therapy sessions of patients with dementia in a nursing home and a day care center.\n\n\nMETHODS\nIn the nursing home, patients were assigned by living units, based on dementia severity, to one of the three parallel therapeutic arms to compare: CONTROL, PARO and NAO (Phase 1) and CONTROL, PARO, and DOG (Phase 2). In the day care center, all patients received therapy with NAO (Phase 1) and PARO (Phase 2). Therapy sessions were held 2 days per week during 3 months. Evaluation, at baseline and follow-up, was carried out by blind raters using: the Global Deterioration Scale (GDS), the Severe Mini Mental State Examination (sMMSE), the Mini Mental State Examination (MMSE), the Neuropsychiatric Inventory (NPI), the Apathy Scale for Institutionalized Patients with Dementia Nursing Home version (APADEM-NH), the Apathy Inventory (AI) and the Quality of Life Scale (QUALID). Statistical analysis included descriptive statistics and non-parametric tests performed by a blinded investigator.\n\n\nRESULTS\nIn the nursing home, 101 patients (Phase 1) and 110 patients (Phase 2) were included. There were no significant differences at baseline. The relevant changes at follow-up were: (Phase 1) patients in the robot groups showed an improvement in apathy; patients in NAO group showed a decline in cognition as measured by the MMSE scores, but not the sMMSE; the robot groups showed no significant changes between them; (Phase 2) QUALID scores increased in the PARO group. In the day care center, 20 patients (Phase 1) and 17 patients (Phase 2) were included. The main findings were: (Phase 1) improvement in the NPI irritability and the NPI total score; (Phase 2) no differences were observed at follow-up." } ]
Orphanet Journal of Rare Diseases
29855327
PMC5984368
10.1186/s13023-018-0830-6
Next generation phenotyping using narrative reports in a rare disease clinical data warehouse
BackgroundSecondary use of data collected in Electronic Health Records opens new perspectives for increasing our knowledge of rare diseases. The clinical data warehouse (named Dr. Warehouse) at the Necker-Enfants Malades Children’s Hospital contains data collected during normal care for thousands of patients. Dr. Warehouse is oriented toward the exploration of clinical narratives. In this study, we present our method to find phenotypes associated with diseases of interest.MethodsWe leveraged frequency and TF-IDF to explore the association between clinical phenotypes and rare diseases. We applied our method in six use cases: phenotypes associated with the Rett, Lowe, Silver-Russell, and Bardet-Biedl syndromes, DOCK8 deficiency, and Activated PI3-kinase Delta Syndrome (APDS). We asked domain experts to evaluate the relevance of the top-50 phenotypes (for frequency and TF-IDF) identified by Dr. Warehouse and computed the average precision and mean average precision.ResultsExperts concluded that between 16 and 39 phenotypes could be considered relevant among the top-50 phenotypes discovered by Dr. Warehouse when ranked by descending frequency (respectively, between 11 and 41 for TF-IDF). Average precision ranged from 0.55 to 0.91 for frequency and from 0.52 to 0.95 for TF-IDF. Mean average precision was 0.79. Our study suggests that phenotypes identified in clinical narratives stored in Electronic Health Records can provide rare disease specialists with candidate phenotypes that can be used in addition to the literature.ConclusionsClinical Data Warehouses can be used to perform Next Generation Phenotyping, especially in the context of rare diseases. We have developed a method to detect phenotypes associated with a group of patients using medical concepts extracted from free-text clinical narratives.Electronic supplementary materialThe online version of this article (10.1186/s13023-018-0830-6) contains supplementary material, which is available to authorized users.
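The ranking and evaluation steps summarized above can be illustrated with a short Python sketch that scores candidate phenotype concepts for a disease cohort by cohort frequency and by a TF-IDF-style weight, and then evaluates a ranked list with average precision. This is a sketch under stated assumptions: the exact TF-IDF formulation and evaluation conventions used by Dr. Warehouse are not specified in this excerpt, so the weighting (cohort frequency times inverse document frequency over all patients) and the average-precision variant below are illustrative choices, and all function and variable names are hypothetical.

```python
import math
from collections import Counter
from typing import Dict, List, Set, Tuple


def rank_phenotypes(
    patient_concepts: Dict[str, Set[str]],  # patient id -> extracted phenotype concepts
    cohort_ids: Set[str],                   # patients with the rare disease of interest
    top_k: int = 50,
) -> Tuple[List[Tuple[str, float]], List[Tuple[str, float]]]:
    """Rank candidate phenotypes for a cohort by frequency and by TF-IDF."""
    n_total = max(len(patient_concepts), 1)
    n_cohort = max(len(cohort_ids), 1)

    # Document frequency of each concept over the whole warehouse.
    df = Counter()
    for concepts in patient_concepts.values():
        df.update(concepts)

    # Share of cohort patients whose reports mention each concept.
    tf = Counter()
    for pid in cohort_ids:
        tf.update(patient_concepts.get(pid, set()))
    freq = {c: n / n_cohort for c, n in tf.items()}

    # One common TF-IDF variant: cohort frequency weighted by the inverse
    # document frequency of the concept across all patients.
    tfidf = {c: freq[c] * math.log(n_total / df[c]) for c in tf}

    by_freq = sorted(freq.items(), key=lambda x: x[1], reverse=True)[:top_k]
    by_tfidf = sorted(tfidf.items(), key=lambda x: x[1], reverse=True)[:top_k]
    return by_freq, by_tfidf


def average_precision(ranked: List[str], relevant: Set[str]) -> float:
    """Average precision of a ranked concept list against expert judgments.

    This variant divides by the number of relevant concepts retrieved in the
    list, one common convention when only the top-k results are judged.
    """
    hits, score = 0, 0.0
    for i, concept in enumerate(ranked, start=1):
        if concept in relevant:
            hits += 1
            score += hits / i
    return score / max(hits, 1)
```

In this scheme, plain frequency surfaces phenotypes that are common in the cohort regardless of how common they are elsewhere, while the TF-IDF weight down-ranks concepts that appear in most patients of the warehouse and therefore carry little disease-specific signal.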
Related workInformation extractionSeveral approaches have been developed to recognize UMLS concepts or terminology terms in free-text records. Savova et al. [30] developed cTAKES, an open source modular system of pipelined components combining rule-based and machine learning techniques, which aims at extracting information from clinical narratives. Despite developments in other languages [31, 32], most open source clinical Natural Language Processing systems have been developed for the English language (MedLEE [33], MetaMap [34], HITEx [35]). Many shared-task challenges have helped to test and assess the different tools and methodologies. For non-English languages, fewer out-of-the-box tools and fewer training datasets are available for working with text. More recently, a challenge was dedicated to the extraction of information from medical documents in multiple languages (including French) [36].
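The kind of processing these systems perform (dictionary-based concept recognition plus contextual qualification of each mention) can be illustrated with a toy sketch: a lexicon lookup over tokens followed by a NegEx/ConText-style check for negation triggers in the preceding window. This is not the pipeline of Dr. Warehouse or of any of the cited tools; the terminology entries, placeholder concept identifiers, and French trigger phrases are illustrative assumptions only.

```python
import re
from typing import Dict, List, Tuple

# Toy terminology: surface form -> placeholder concept identifier.
# Real systems map to UMLS CUIs; the identifiers below are illustrative only.
TERMINOLOGY: Dict[str, str] = {
    "scoliose": "C_SCOLIOSIS",
    "epilepsie": "C_EPILEPSY",
}

# A few NegEx/ConText-style negation triggers for French text (illustrative).
NEGATION_TRIGGERS = ["pas de", "absence de", "sans"]


def extract_concepts(text: str, window: int = 5) -> List[Tuple[str, str, bool]]:
    """Return (surface form, concept id, negated?) for each dictionary match."""
    tokens = re.findall(r"\w+", text.lower())
    results = []
    for i, tok in enumerate(tokens):
        cui = TERMINOLOGY.get(tok)
        if cui is None:
            continue
        # Look for a negation trigger in the few tokens preceding the match.
        context = " ".join(tokens[max(0, i - window):i])
        negated = any(trigger in context for trigger in NEGATION_TRIGGERS)
        results.append((tok, cui, negated))
    return results


print(extract_concepts("Scoliose severe. Pas de crise d'epilepsie."))
# [('scoliose', 'C_SCOLIOSIS', False), ('epilepsie', 'C_EPILEPSY', True)]
```

Production systems add part-of-speech tagging, multi-word term matching, disambiguation, and richer context handling (history, hypothesis, experiencer), but the lookup-plus-context pattern above is the common core.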
[ "20841676", "20190053", "24534443", "25717416", "26958189", "25038198", "26482257", "8412823", "19435614", "23920642", "23622176", "14649552", "23828127", "17173180", "20819853", "21901084", "20841824", "15187068", "16872495", "25342177", "26958224" ]
[ { "pmid": "20841676", "title": "Methodology of integration of a clinical data warehouse with a clinical information system: the HEGP case.", "abstract": "Clinical Data Warehouses (CDW) can complement current Clinical Information Systems (CIS) with functions that are not easily implemented by traditional operational database systems. Here, we describe the design and deployment strategy used at the Pompidou University Hospital in southwest Paris. Four realms are described: technological realm, data realm, restitution realm, and administration realm. The corresponding UML use cases and the mapping rules from the shared integrated electronic health records to the five axes of the i2b2 CDW star model are presented. Priority is given to the anonymization and security principles used for the 1.2 million patient records currently stored in the CDW. Exploitation of a CDW by clinicians and investigators can facilitate clinical research, quality evaluations and outcome studies. These indirect benefits are among the reasons for the continuous use of an integrated CIS." }, { "pmid": "20190053", "title": "Serving the enterprise and beyond with informatics for integrating biology and the bedside (i2b2).", "abstract": "Informatics for Integrating Biology and the Bedside (i2b2) is one of seven projects sponsored by the NIH Roadmap National Centers for Biomedical Computing (http://www.ncbcs.org). Its mission is to provide clinical investigators with the tools necessary to integrate medical record and clinical research data in the genomics age, a software suite to construct and integrate the modern clinical research chart. i2b2 software may be used by an enterprise's research community to find sets of interesting patients from electronic patient medical record data, while preserving patient privacy through a query tool interface. Project-specific mini-databases (\"data marts\") can be created from these sets to make highly detailed data available on these specific patients to the investigators on the i2b2 platform, as reviewed and restricted by the Institutional Review Board. The current version of this software has been released into the public domain and is available at the URL: http://www.i2b2.org/software." }, { "pmid": "24534443", "title": "Secondary use of clinical data: the Vanderbilt approach.", "abstract": "The last decade has seen an exponential growth in the quantity of clinical data collected nationwide, triggering an increase in opportunities to reuse the data for biomedical research. The Vanderbilt research data warehouse framework consists of identified and de-identified clinical data repositories, fee-for-service custom services, and tools built atop the data layer to assist researchers across the enterprise. Providing resources dedicated to research initiatives benefits not only the research community, but also clinicians, patients and institutional leadership. This work provides a summary of our approach in the secondary use of clinical data for research domain, including a description of key components and a list of lessons learned, designed to assist others assembling similar services and infrastructure." }, { "pmid": "25717416", "title": "How essential are unstructured clinical narratives and information fusion to clinical trial recruitment?", "abstract": "Electronic health records capture patient information using structured controlled vocabularies and unstructured narrative text. 
While structured data typically encodes lab values, encounters and medication lists, unstructured data captures the physician's interpretation of the patient's condition, prognosis, and response to therapeutic intervention. In this paper, we demonstrate that information extraction from unstructured clinical narratives is essential to most clinical applications. We perform an empirical study to validate the argument and show that structured data alone is insufficient in resolving eligibility criteria for recruiting patients onto clinical trials for chronic lymphocytic leukemia (CLL) and prostate cancer. Unstructured data is essential to solving 59% of the CLL trial criteria and 77% of the prostate cancer trial criteria. More specifically, for resolving eligibility criteria with temporal constraints, we show the need for temporal reasoning and information integration with medical events within and across unstructured clinical narratives and structured data." }, { "pmid": "26958189", "title": "Reviewing 741 patients records in two hours with FASTVISU.", "abstract": "The secondary use of electronic health records opens up new perspectives. They provide researchers with structured data and unstructured data, including free text reports. Many applications been developed to leverage knowledge from free-text reports, but manual review of documents is still a complex process. We developed FASTVISU a web-based application to assist clinicians in reviewing documents. We used FASTVISU to review a set of 6340 documents from 741 patients suffering from the celiac disease. A first automated selection pruned the original set to 847 documents from 276 patients' records. The records were reviewed by two trained physicians to identify the presence of 15 auto-immune diseases. It took respectively two hours and two hours and a half to evaluate the entire corpus. Inter-annotator agreement was high (Cohen's kappa at 0.89). FASTVISU is a user-friendly modular solution to validate entities extracted by NLP methods from free-text documents stored in clinical data warehouses." }, { "pmid": "25038198", "title": "A methodology for a minimum data set for rare diseases to support national centers of excellence for healthcare and research.", "abstract": "BACKGROUND\nAlthough rare disease patients make up approximately 6-8% of all patients in Europe, it is often difficult to find the necessary expertise for diagnosis and care and the patient numbers needed for rare disease research. The second French National Plan for Rare Diseases highlighted the necessity for better care coordination and epidemiology for rare diseases. A clinical data standard for normalization and exchange of rare disease patient data was proposed. The original methodology used to build the French national minimum data set (F-MDS-RD) common to the 131 expert rare disease centers is presented.\n\n\nMETHODS\nTo encourage consensus at a national level for homogeneous data collection at the point of care for rare disease patients, we first identified four national expert groups. We reviewed the scientific literature for rare disease common data elements (CDEs) in order to build the first version of the F-MDS-RD. The French rare disease expert centers validated the data elements (DEs). The resulting F-MDS-RD was reviewed and approved by the National Plan Strategic Committee. 
It was then represented in an HL7 electronic format to maximize interoperability with electronic health records.\n\n\nRESULTS\nThe F-MDS-RD is composed of 58 DEs in six categories: patient, family history, encounter, condition, medication, and questionnaire. It is HL7 compatible and can use various ontologies for diagnosis or sign encoding. The F-MDS-RD was aligned with other CDE initiatives for rare diseases, thus facilitating potential interconnections between rare disease registries.\n\n\nCONCLUSIONS\nThe French F-MDS-RD was defined through national consensus. It can foster better care coordination and facilitate determining rare disease patients' eligibility for research studies, trials, or cohorts. Since other countries will need to develop their own standards for rare disease data collection, they might benefit from the methods presented here." }, { "pmid": "26482257", "title": "Primary Immunodeficiency Diseases: an Update on the Classification from the International Union of Immunological Societies Expert Committee for Primary Immunodeficiency 2015.", "abstract": "We report the updated classification of primary immunodeficiencies compiled by the Primary Immunodeficiency Expert Committee (PID EC) of the International Union of Immunological Societies (IUIS). In the two years since the previous version, 34 new gene defects are reported in this updated version. For each disorder, the key clinical and laboratory features are provided. In this new version we continue to see the increasing overlap between immunodeficiency, as manifested by infection and/or malignancy, and immune dysregulation, as manifested by auto-inflammation, auto-immunity, and/or allergy. There is also an increased number of genetic defects that lead to susceptibility to specific organisms which reflects the finely tuned nature of immune defense systems. This classification is the most up to date catalogue of all known and published primary immunodeficiencies and acts as a current reference of the knowledge of these conditions and is an important aid for the genetic and molecular diagnosis of patients with these rare diseases." }, { "pmid": "8412823", "title": "The Unified Medical Language System.", "abstract": "In 1986, the National Library of Medicine began a long-term research and development project to build the Unified Medical Language System (UMLS). The purpose of the UMLS is to improve the ability of computer programs to \"understand\" the biomedical meaning in user inquiries and to use this understanding to retrieve and integrate relevant machine-readable information for users. Underlying the UMLS effort is the assumption that timely access to accurate and up-to-date information will improve decision making and ultimately the quality of patient care and research. The development of the UMLS is a distributed national experiment with a strong element of international collaboration. The general strategy is to develop UMLS components through a series of successive approximations of the capabilities ultimately desired. Three experimental Knowledge Sources, the Metathesaurus, the Semantic Network, and the Information Sources Map have been developed and are distributed annually to interested researchers, many of whom have tested and evaluated them in a range of applications. The UMLS project and current developments in high-speed, high-capacity international networks are converging in ways that have great potential for enhancing access to biomedical information." 
}, { "pmid": "19435614", "title": "ConText: an algorithm for determining negation, experiencer, and temporal status from clinical reports.", "abstract": "In this paper we describe an algorithm called ConText for determining whether clinical conditions mentioned in clinical reports are negated, hypothetical, historical, or experienced by someone other than the patient. The algorithm infers the status of a condition with regard to these properties from simple lexical clues occurring in the context of the condition. The discussion and evaluation of the algorithm presented in this paper address the questions of whether a simple surface-based approach which has been shown to work well for negation can be successfully transferred to other contextual properties of clinical conditions, and to what extent this approach is portable among different clinical report types. In our study we find that ConText obtains reasonable to good performance for negated, historical, and hypothetical conditions across all report types that contain such conditions. Conditions experienced by someone other than the patient are very rarely found in our report set. A comprehensive solution to the problem of determining whether a clinical condition is historical or recent requires knowledge above and beyond the surface clues picked up by ConText." }, { "pmid": "23920642", "title": "Extending the NegEx lexicon for multiple languages.", "abstract": "We translated an existing English negation lexicon (NegEx) to Swedish, French, and German and compared the lexicon on corpora from each language. We observed Zipf's law for all languages, i.e., a few phrases occur a large number of times, and a large number of phrases occur fewer times. Negation triggers \"no\" and \"not\" were common for all languages; however, other triggers varied considerably. The lexicon is available in OWL and RDF format and can be extended to other languages. We discuss the challenges in translating negation triggers to other languages and issues in representing multilingual lexical knowledge." }, { "pmid": "23622176", "title": "Genetically determined encephalopathy: Rett syndrome.", "abstract": "Rett syndrome (RTT) is a severe neurodevelopmental disorder primarily affecting females that has an incidence of 1:10000 female births, one of the most common genetic causes of severe mental retardation in females. Development is apparently normal for the first 6-18 months until fine and gross motor skills and social interaction are lost, and stereotypic hand movements develop. Progression and severity of the classical form of RTT are most variable, and there are a number of atypical variants, including congenital, early onset seizure, preserved speech variant, and \"forme fruste.\" Mutations in the X-linked gene methyl-CpG-binding protein 2 (MECP2) involve most of the classical RTT patients. Mutations in cyclin-dependent kinase like 5 (CDKL5) and FoxG1 genes have been identified in the early onset seizure and the congenital variants respectively. Management of RTT is mainly symptomatic and individualized. It focuses on optimizing each patient's abilities. A dynamic multidisciplinary approach is most effective, with specific attention given to epileptic and nonepileptic paroxysmal events, as well as scoliosis, osteoporosis, and the development of spasticity, which can have a major impact on mobility, and to the development of effective communication strategies for these severely disabled individuals." 
}, { "pmid": "14649552", "title": "Possible mechanisms of osteopenia in Rett syndrome: bone histomorphometric studies.", "abstract": "The etiology of frequently occurring osteoporosis in Rett syndrome is unknown. Five girls, ages 9.75, 11, 12, 13.5, and 14 years, with typical Rett syndrome requiring scoliosis surgery presented an opportunity to study bone remodeling by quantitative bone histomorphometry. Anterior iliac crest bone biopsies taken 1 to 2 days after double labeling of the bone surfaces with tetracycline were submitted for histomorphometry. Bone volume was reduced, and the surface parameters of formation (osteoid surface) were normal, whereas the parameters of resorption (osteoclast surface and number) were decreased. In four girls, the rate of bone formation was reduced but could not be measured in one girl owing to poor labeling. It is possible that the slow rate of bone formation impedes the development and accumulation of peak bone mass and contributes to the decreased bone volume in Rett syndrome. Perhaps MECP2 mutations in Rett syndrome not only influence brain development but also affect bone formation." }, { "pmid": "23828127", "title": "Osteoporosis in Rett syndrome: a case study presenting a novel management intervention for severe osteoporosis.", "abstract": "The present article describes a successful novel therapeutic intervention with Aredia with one child with Rett syndrome, after suffering from six pathological fractures within less than 3 years due to severe osteoporosis. Since the initiation of the treatment (3 years ago), the child has not suffered any fractures. Patients with chronic diseases and those with disabilities or on anticonvulsant medications are at risk for low bone density and possibly for the resultant pathologic fractures that define osteoporosis in children. Individuals with Rett syndrome (RS) have been shown to have low bone mineral density (or osteopenia) at a young age. If osteoporosis occurs in a girl with RS, it can inflict pain and seriously impair the child's mobility and quality of life. The present article describes a case study of a child with RS (showing an average of 1.75 fractures annually for the 4 years preceding the treatment) before and after a treatment with Aredia. Patient received 30 mg/day for 3 days on a once every 3-month cycle. There was a 45 % improvement in bone mass density (BMD) values from pre-post-intervention. The child had no fractures in the 3 years posttreatment. This finding is significant (p < 0.03). The BMD Z-scores of the child showed severe osteoporosis (Z-score of -3.8) at pre-intervention and are elevated to osteopenia levels (Z-score of -1.3) at post-intervention measurements. All measurements suggest that the treatment successfully reversed the osteoporotic process and prevented further fractures. This change caused great relief to the child and her family and an improvement in their quality of life. The findings support the ability (in one case) to reverse the progression of osteoporosis in individuals with Rett syndrome showing severe osteoporosis with multiple fractures." }, { "pmid": "17173180", "title": "Osteoporosis in Rett syndrome: A study on normal values.", "abstract": "Osteoporosis is the reduction of calcium density in bones, usually evident in postmenopausal females, yet the tendency for osteoporosis can also be identified at a young age, especially in patients with chronic diseases, disabilities, and on chronic anticonvalsant treatment. 
Individuals with Rett syndrome (RS) have been found to show signs of osteoporosis at a young age. This condition may cause pathological fractures, inflict pain, and seriously damage mobility. In such cases, the quality of life of the individual and her primary caretakers will be severely hampered. This article reviews the current knowledge of the phenomenon and suggests some clinical directions for the individual with RS who shows signs of osteoporosis. The article also presents novel findings from a screening test of bone strength in 35 individuals with RS at different ages using the Sunlight Omnisense 7000P ultrasound apparatus. The primary results from this investigation showed a strong and significant positive correlation between calcium intake and bone strength (p < 0.0001) as well as bone density Z values (p < 0.005). The occurrence and frequency of fractures were found connected with reduced bone strength in measurements of both the radius (p < 0.0001) and the tibia (p < 0.004) as well as with negative bone strength Z values (p = 0.03). Other findings specified within the content of the article support the implementation of a comprehensive antiosteoporotic preventive management for this population." }, { "pmid": "20819853", "title": "Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications.", "abstract": "We aim to build and evaluate an open-source natural language processing system for information extraction from electronic medical record clinical free-text. We describe and evaluate our system, the clinical Text Analysis and Knowledge Extraction System (cTAKES), released open-source at http://www.ohnlp.org. The cTAKES builds on existing open-source technologies-the Unstructured Information Management Architecture framework and OpenNLP natural language processing toolkit. Its components, specifically trained for the clinical domain, create rich linguistic and semantic annotations. Performance of individual components: sentence boundary detector accuracy=0.949; tokenizer accuracy=0.949; part-of-speech tagger accuracy=0.936; shallow parser F-score=0.924; named entity recognizer and system-level evaluation F-score=0.715 for exact and 0.824 for overlapping spans, and accuracy for concept mapping, negation, and status attributes for exact and overlapping spans of 0.957, 0.943, 0.859, and 0.580, 0.939, and 0.839, respectively. Overall performance is discussed against five applications. The cTAKES annotations are the foundation for methods and modules for higher-level semantic processing of clinical free-text." }, { "pmid": "21901084", "title": "Using electronic patient records to discover disease correlations and stratify patient cohorts.", "abstract": "Electronic patient records remain a rather unexplored, but potentially rich data source for discovering correlations between diseases. We describe a general approach for gathering phenotypic descriptions of patients from medical records in a systematic and non-cohort dependent manner. By extracting phenotype information from the free-text in such records we demonstrate that we can extend the information contained in the structured record data, and use it for producing fine-grained patient stratification and disease co-occurrence statistics. The approach uses a dictionary based on the International Classification of Disease ontology and is therefore in principle language independent. 
As a use case we show how records from a Danish psychiatric hospital lead to the identification of disease correlations, which subsequently can be mapped to systems biology frameworks." }, { "pmid": "20841824", "title": "Extracting medication information from French clinical texts.", "abstract": "Much more Natural Language Processing (NLP) work has been performed on the English language than on any other. This general observation is also true of medical NLP, although clinical language processing needs are as strong in other languages as they are in English. In specific subdomains, such as drug prescription, the expression of information can be closely related across different languages, which should help transfer systems from English to other languages. We report here the implementation of a medication extraction system which extracts drugs and related information from French clinical texts, on the basis of an approach initially designed for English within the framework of the i2b2 2009 challenge. The system relies on specialized lexicons and a set of extraction rules. A first evaluation on 50 annotated texts obtains 86.7% F-measure, a level higher than the original English system and close to related work. This shows that the same rule-based approach can be applied to English and French languages, with a similar level of performance. We further discuss directions for improving both systems." }, { "pmid": "15187068", "title": "Automated encoding of clinical documents based on natural language processing.", "abstract": "OBJECTIVE\nThe aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method.\n\n\nMETHODS\nAn existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts.\n\n\nRESULTS\nRecall of the system for UMLS coding of all terms was .77 (95% CI.72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91.\n\n\nCONCLUSION\nExtraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval." }, { "pmid": "16872495", "title": "Extracting principal diagnosis, co-morbidity and smoking status for asthma research: evaluation of a natural language processing system.", "abstract": "BACKGROUND\nThe text descriptions in electronic medical records are a rich source of information. 
We have developed a Health Information Text Extraction (HITEx) tool and used it to extract key findings for a research study on airways disease.\n\n\nMETHODS\nThe principal diagnosis, co-morbidity and smoking status extracted by HITEx from a set of 150 discharge summaries were compared to an expert-generated gold standard.\n\n\nRESULTS\nThe accuracy of HITEx was 82% for principal diagnosis, 87% for co-morbidity, and 90% for smoking status extraction, when cases labeled \"Insufficient Data\" by the gold standard were excluded.\n\n\nCONCLUSION\nWe consider the results promising, given the complexity of the discharge summaries and the extraction tasks." }, { "pmid": "25342177", "title": "Toward a science of learning systems: a research agenda for the high-functioning Learning Health System.", "abstract": "OBJECTIVE\nThe capability to share data, and harness its potential to generate knowledge rapidly and inform decisions, can have transformative effects that improve health. The infrastructure to achieve this goal at scale--marrying technology, process, and policy--is commonly referred to as the Learning Health System (LHS). Achieving an LHS raises numerous scientific challenges.\n\n\nMATERIALS AND METHODS\nThe National Science Foundation convened an invitational workshop to identify the fundamental scientific and engineering research challenges to achieving a national-scale LHS. The workshop was planned by a 12-member committee and ultimately engaged 45 prominent researchers spanning multiple disciplines over 2 days in Washington, DC on 11-12 April 2013.\n\n\nRESULTS\nThe workshop participants collectively identified 106 research questions organized around four system-level requirements that a high-functioning LHS must satisfy. The workshop participants also identified a new cross-disciplinary integrative science of cyber-social ecosystems that will be required to address these challenges.\n\n\nCONCLUSIONS\nThe intellectual merit and potential broad impacts of the innovations that will be driven by investments in an LHS are of great potential significance. The specific research questions that emerged from the workshop, alongside the potential for diverse communities to assemble to address them through a 'new science of learning systems', create an important agenda for informatics and related disciplines." }, { "pmid": "26958224", "title": "Towards data integration automation for the French rare disease registry.", "abstract": "Building a medical registry upon an existing infrastructure and rooted practices is not an easy task. It is the case for the BNDMR project, the French rare disease registry, that aims to collect administrative and medical data of rare disease patients seen in different hospitals. To avoid duplicating data entry for health professionals, the project plans to deploy connectors with the existing systems to automatically retrieve data. Given the data heterogeneity and the large number of source systems, the automation of connectors creation is required. In this context, we propose a methodology that optimizes the use of existing alignment approaches in the data integration processes. The generated mappings are formalized in exploitable mapping expressions. Following this methodology, a process has been experimented on specific data types of a source system: Boolean and predefined lists. As a result, effectiveness of the used alignment approach has been enhanced and more good mappings have been detected. 
Nonetheless, further improvements could be done to deal with the semantic issue and process other data types." } ]
Frontiers in Neurorobotics
29896096
PMC5987032
10.3389/fnbot.2018.00024
Assisting Movement Training and Execution With Visual and Haptic Feedback
In the practice of motor skills in general, errors in the execution of movements may go unnoticed when a human instructor is not available. In this case, a computer system or robotic device able to detect movement errors and propose corrections would be of great help. This paper addresses the problem of how to detect such execution errors and how to provide feedback to the human to correct his/her motor skill using a general, principled methodology based on imitation learning. The core idea is to compare the observed skill with a probabilistic model learned from expert demonstrations. The intensity of the feedback is regulated by the likelihood of the model given the observed skill. Based on demonstrations, our system can, for example, detect errors in the writing of characters with multiple strokes. Moreover, by using a haptic device, the Haption Virtuose 6D, we demonstrate a method to generate haptic feedback based on a distribution over trajectories, which could be used as an auxiliary means of communication between an instructor and an apprentice. Additionally, given a performance measurement, the haptic device can help the human discover and perform better movements to solve a given task. In this case, the human first tries a few times to solve the task without assistance. Our framework, in turn, uses a reinforcement learning algorithm to compute haptic feedback, which guides the human toward better solutions.
2. Related work

This section primarily describes related work on techniques to assess the correctness of human motion and provide feedback to the user. It also briefly introduces related work on the components required for modeling human demonstrations.

2.1. Human motion assessment and feedback to the user

With similar goals as in our work, Solis et al. (2002) presented a method to teach users how to write characters using a haptic interface. In their method, characters are modeled with Hidden Markov Models (HMMs) with discrete hidden states and discrete observations. The system recognizes online which character the user intends to write and applies a proportional derivative (PD) controller with fixed gains to restrict the user to move along the trajectory that corresponds to the recognized character. In contrast, in our work, the gains of the haptic device are adapted as a function of the user's deviation with respect to the model learned from expert demonstrations or through reinforcement learning. Adaptive gains allow for practicing motor skills with multiple correct possibilities of execution, in case there is not a single correct trajectory. They also allow for regulating the stiffness of the robot to impose different levels of precision at different parts of the movement.

Parisi et al. (2016) proposed a "multilayer learning architecture with incremental self-organizing networks" to give the user real-time visual feedback during the execution of movements, e.g., powerlifting exercises. In our work, we have not addressed real-time visual feedback so far, although we do address real-time haptic feedback. On the other hand, our framework can deal with movements with different absolute positions and scales when producing visual feedback. By disabling this preprocessing, it would be possible to generate real-time visual feedback as well.

Kowsar et al. (2016) presented a workflow to detect anomalies in weight training exercises. In their work, movement repetitions are segmented based on the acceleration along an axis in space. A probability distribution over a number of time-aligned repetitions is built. Then, based on this distribution, movement segments can be deemed correct or incorrect. Our approach focuses rather on correcting movements with respect to their shape or position in space, not on correcting acceleration patterns.

A variable impedance controller based on an estimation of the stiffness of the human arm was proposed by Tsumugiwa et al. (2002). This controller enabled a robot to assist humans in calligraphic tasks. In the cited work, the tracked trajectories were not learned from demonstrations.

Our work is in line with approaches that aim to assist learning with demonstrations. Raiola et al. (2015), for instance, used probabilistic virtual guides learned from demonstrations to help humans manipulate a robot arm. In another related work, Soh and Demiris (2015) presented a system that learns from demonstrations how to assist humans using a smart wheelchair.

Visual, auditory, and haptic feedback modalities have been successfully used for motor learning in the fields of sport and rehabilitation (Sigrist et al., 2013). Our method to provide visual feedback to the user, detailed in section 3.4, is, for instance, similar in principle to bandwidth feedback. With this sort of feedback, the user only receives a correction when the movement error exceeds a certain threshold; this scheme has been shown to be effective in rehabilitation (Timmermans et al., 2009).
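As a toy illustration of the bandwidth feedback idea mentioned above, the sketch below contrasts a constant error threshold with a threshold derived from the local variability of aligned demonstrations. This is not the paper's implementation; the function names, the one-dimensional signals, and the two-standard-deviation band are our own illustrative assumptions.

```python
import numpy as np

def bandwidth_feedback_fixed(error, threshold=0.02):
    """Classic bandwidth feedback: signal the user only when the
    deviation from the reference exceeds a constant threshold."""
    return abs(error) > threshold

def bandwidth_feedback_adaptive(position, reference_mean, reference_std, n_std=2.0):
    """Illustrative variant: the allowed deviation depends on the local
    variability of the demonstrations, so the user gets more freedom
    where the demonstrations themselves vary more."""
    deviation = np.abs(position - reference_mean)
    return deviation > n_std * reference_std

# Toy usage: per-time-step mean and std estimated from aligned demonstrations.
t = 10                                  # current time step
demos = np.random.randn(5, 100) * 0.01  # 5 fake, already aligned 1-D demonstrations
mean_t, std_t = demos[:, t].mean(), demos[:, t].std()
user_pos = 0.05
print(bandwidth_feedback_fixed(user_pos - mean_t))
print(bandwidth_feedback_adaptive(user_pos, mean_t, std_t))
```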
The work presented here relates to and can potentially complement previous research on bandwidth feedback in the sense that our threshold is not constant, but depends on a probability distribution over trajectories. Our approach may find applications in tasks where it is desirable to give the user more freedom of movement around a certain position and less freedom around a different position, or where multiple variations of a movement are considered correct.

Ernst and Banks (2002) have demonstrated that maximum-likelihood estimation describes the way humans combine visual and haptic perception. The estimation of a certain environmental property that results from the combination of visual and haptic stimuli presents lower variance than estimations based only on visual or haptic stimuli. When the visual stimulus is noise-free, users tend to rely more on vision to perform their estimation. On the other hand, when the visual stimulus is noisy, users tend to rely more on haptics. Therefore, users may profit from multimodal feedback to learn a new motor skill. In our experimental section, we provide haptic feedback to users to help them perform a teleoperation task in a virtual environment. The findings in Ernst and Banks (2002) indicate that haptic feedback also helps users perceive some aspects of the task that they could not perceive from visual stimuli alone, which could help them learn how to better solve the task without assistance next time. The usefulness of haptic feedback for learning motor skills is also demonstrated in Kümmel et al. (2014), where robotic haptic guidance has been shown to induce long-lasting changes in golf swing movements. The work presented here offers an algorithmic solution to the acquisition of policies and control of a robotic device that could be applied to help humans learn and retain motor skills.

In contrast to most of the work on haptic feedback for human motor learning, our method modulates the stiffness of the haptic device according to demonstrations and uses reinforcement learning to improve upon the demonstrated movements. Those features may be interesting as a means of communication between an expert and an apprentice or patient, and to enable improvement of initial demonstrations.

2.2. Learning and adapting models from demonstrations

An essential component of this work is to construct a model from expert demonstrations, which is then queried at runtime to evaluate the performance of the user. One recurrent issue when building models from demonstrations is the problem of handling the variability of phases (i.e., the speed of execution) of different movements. Listgarten et al. (2004) proposed the Continuous Profile Model (CPM), which can align multiple continuous time series. It assumes that each continuous time series is a non-uniformly subsampled, noisy, and locally rescaled version of a single latent trace. The model is similar to a Hidden Markov Model (HMM). The hidden states encode the corresponding time step of the latent trace and a rescaling factor. The CPM has been successfully applied to align speech data and data sets from an experimental biology laboratory.

Coates et al. (2008) augmented the model of Listgarten et al. (2004) by additionally learning the dynamics of the controlled system in the vicinity of the intended trajectory. With this modification, their model generates an ideal trajectory that not only is similar to the demonstrations but also obeys the system's dynamics. Moreover, differently from Listgarten et al. (2004), their algorithm to time-align the demonstrations and to determine an ideal trajectory relies both on an EM algorithm and on Dynamic Time Warping (Sakoe and Chiba, 1978). With this approach, they were able to achieve autonomous helicopter aerobatics after training with suboptimal human expert demonstrations.

The same method was used by Van Den Berg et al. (2010) to extract an ideal trajectory from multiple demonstrations. The demonstrations were, in this case, movements of a surgical robot operated by a human expert.

Similarly to Coates et al. (2008) and Van Den Berg et al. (2010), our system uses Dynamic Time Warping (DTW) to time-align trajectories. While DTW usually aligns pairs of temporal sequences, in section 3.2 we present a solution for aligning multiple trajectories. An alternative solution was presented by Sanguansat (2012); however, it suffers from scalability issues because distances need to be computed between every point of every temporal sequence.

Differences in the scale and shape of movements must also be addressed to account for the variability in human demonstrations. In practice, for tasks such as writing, we want our system to be invariant to the scale of the movements of different demonstrations. The analysis of the difference between shapes is usually addressed by Procrustes Analysis (Goodall, 1991). The output of this analysis is the affine transformation that maps one of the inputs to best match the other input, while the residual is quantified as the effective distance (deformation) between the shapes. As the analysis consists of computing such transformations in relation to the centroid, Procrustes Analysis provides a global, average assessment and has found applications in tasks of trajectory and transfer learning (Bocsi et al., 2013; Makondo et al., 2015; Holladay and Srinivasa, 2016) and manipulation (Collet et al., 2009). While this seems the most natural solution to our problem of aligning shapes, we noticed that it is not suitable for detecting anomalies. In fact, in the writing task, we are interested in finding the "outliers" that can be indicated to the human as erroneous strokes. However, Procrustes Analysis aligns the shapes globally, such that the positions of the centroids are inappropriately biased toward such outliers. In sections 3.1.1 and 3.1.2 we describe our own alignment method, which is suited for detecting particular errors with the introduction of a few heuristics.
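To make the alignment and gain-modulation steps more concrete, the following sketch warps several one-dimensional demonstrations onto a common reference with a basic DTW implementation, fits a per-time-step mean and standard deviation, and derives a stiffness profile that is high where the demonstrations agree and low where they vary. It is a minimal reconstruction under our own simplifying assumptions (1-D signals, averaging of warped samples, an ad hoc gain scaling), not the alignment method of sections 3.1.1 and 3.1.2 or the authors' controller.

```python
import numpy as np

def dtw_path(x, y):
    """Dynamic Time Warping between two 1-D sequences.
    Returns the warping path as a list of index pairs (i, j)."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (n, m); boundary costs are infinite, so the path ends at (0, 0).
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def align_to_reference(demo, reference):
    """Resample a demonstration onto the time axis of the reference by
    averaging all demo samples warped onto each reference index."""
    warped = [[] for _ in range(len(reference))]
    for i, j in dtw_path(demo, reference):
        warped[j].append(demo[i])
    return np.array([np.mean(v) for v in warped])

# Toy data: noisy, time-distorted copies of a reference stroke (1-D for brevity).
t = np.linspace(0, 1, 100)
reference = np.sin(2 * np.pi * t)
demos = [np.sin(2 * np.pi * t**1.1) + 0.05 * np.random.randn(100) for _ in range(5)]

aligned = np.stack([align_to_reference(d, reference) for d in demos])
mu, sigma = aligned.mean(axis=0), aligned.std(axis=0) + 1e-6

# Stiffness shrinks where demonstrations vary a lot and grows where they
# agree, bounded between k_min and k_max (purely illustrative scaling).
k_min, k_max = 5.0, 50.0
stiffness = np.clip(k_max * (sigma.min() / sigma), k_min, k_max)
print(stiffness.round(1)[:10])
```

Under these assumptions, the per-time-step profile (mu, stiffness) could be fed to a PD controller so that the device pulls firmly toward the mean where the demonstrations are consistent and leaves the user free where they are not.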
[ "25238621", "23132605", "19154570" ]
[ { "pmid": "25238621", "title": "Robotic guidance induces long-lasting changes in the movement pattern of a novel sport-specific motor task.", "abstract": "Facilitating the learning or relearning of motor tasks is one of the main goals of coaches, teachers and therapists. One promising way to achieve this goal is guiding the learner through the correct movement trajectory with the help of a robotic device. The aim of this study was to investigate if haptic guidance can induce long-lasting changes in the movement pattern of a complex sport-specific motor task. For this purpose, 31 subjects were assigned to one of three groups: EA (early angle, n=10), LA (late angle, n=11) and CON (control, n=10). EA and LA successfully completed five training sessions, which consisted of 50 robot-guided golf swings and 10 free swings each, whereas CON had no training. The EA group was guided through the movement with the wrist being bent early during backswing, whereas in the LA group it was bent late. The participants of EA and LA were not told about this difference in the movement patterns. To assess if the robot-guided training was successful in shaping the movement pattern, the timing of the wrist bending during the backswing in free swings was measured before (PRE), one day after (POST), and 7 days after (FUP) the five training sessions. The ANOVA (time×group×angle) showed that during POST and FUP, the participants of the EA group bent their wrist significantly earlier during the backswing than the other groups. Post-hoc analyses revealed that this interaction effect was mainly due to the differences in the wrist angle progression during the first 5° of the backswing. The robot-guided training was successful in shaping the movement pattern, and these changes persisted even after 7 days without further practice. This might have implications for the learning of complex motor tasks in general, as haptic guidance might quickly provide the beginner with an internal model of the correct movement pattern without having to direct the learner's attention towards the key points of the correct movement pattern." }, { "pmid": "23132605", "title": "Augmented visual, auditory, haptic, and multimodal feedback in motor learning: a review.", "abstract": "It is generally accepted that augmented feedback, provided by a human expert or a technical display, effectively enhances motor learning. However, discussion of the way to most effectively provide augmented feedback has been controversial. Related studies have focused primarily on simple or artificial tasks enhanced by visual feedback. Recently, technical advances have made it possible also to investigate more complex, realistic motor tasks and to implement not only visual, but also auditory, haptic, or multimodal augmented feedback. The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories. The review addresses the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities and the challenges that need to be overcome to provide appropriate feedback in these modalities, either in isolation or in combination. Accordingly, the design criteria for successful visual, auditory, haptic, and multimodal feedback are elaborated." 
}, { "pmid": "19154570", "title": "Technology-assisted training of arm-hand skills in stroke: concepts on reacquisition of motor control and therapist guidelines for rehabilitation technology design.", "abstract": "BACKGROUND\nIt is the purpose of this article to identify and review criteria that rehabilitation technology should meet in order to offer arm-hand training to stroke patients, based on recent principles of motor learning.\n\n\nMETHODS\nA literature search was conducted in PubMed, MEDLINE, CINAHL, and EMBASE (1997-2007).\n\n\nRESULTS\nOne hundred and eighty seven scientific papers/book references were identified as being relevant. Rehabilitation approaches for upper limb training after stroke show to have shifted in the last decade from being analytical towards being focussed on environmentally contextual skill training (task-oriented training). Training programmes for enhancing motor skills use patient and goal-tailored exercise schedules and individual feedback on exercise performance. Therapist criteria for upper limb rehabilitation technology are suggested which are used to evaluate the strengths and weaknesses of a number of current technological systems.\n\n\nCONCLUSION\nThis review shows that technology for supporting upper limb training after stroke needs to align with the evolution in rehabilitation training approaches of the last decade. A major challenge for related technological developments is to provide engaging patient-tailored task oriented arm-hand training in natural environments with patient-tailored feedback to support (re) learning of motor skills." } ]
Frontiers in Neuroinformatics
29937723
PMC5992991
10.3389/fninf.2018.00032
Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation
Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces and a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is being computed.
Introduction and related work

Neuronal models and neural mass models, usually based on coupled systems of differential equations, contain many degrees of freedom which determine the dynamics of the system. In a neural network, these models are interconnected, and the strength of the interactions between elements can also change through time.

Since biological evidence to specify a complete set of parameters for a neural network model is often incomplete, conflicting, or measured to an insufficient level of certainty, parameter fitting is typically required to obtain outputs comparable to experimental results (see, for example, López-Cuevas et al., 2015; Schuecker et al., 2015; Zaytsev et al., 2015; Schirner et al., 2016). Even if we had infinite experimental data available, Cubitt et al. (2012) have shown that, regardless of how much experimental data is acquired for a general system, the inverse problem of extracting dynamical equations from experimental data is intractable: "extracting dynamical equations from experimental data is NP hard." This implies that in neural networks, the problem of finding the exact free parameters for a simulation leading to results matching experimental measurements cannot be solved in polynomial time, at least under the current understanding of computational complexity.

However, we can explore the parameter space with forward simulations in order to discover the system's characteristic behaviors and thus limit the search space to a computationally tractable sub-problem in an educated manner. The definition of these subspaces can then be the basis for robust and non-arbitrary parameter determination (in other words, mathematically valid performance function minimization). In fact, given the known mathematical characteristics of the dynamics of neuronal and neural mass networks, investigators should characterize the solution spaces of sufficiently complex networks and models before selecting what they propose are statistically diagnostic simulation trajectories. In practice, this rarely happens, even though parameter fitting without sufficient constraints and a rigorous exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. Researchers frequently stay within arbitrary regions of the parameter space which show interesting behaviors, leaving other regions unexplored.

Visual parameter space exploration has been successfully applied in several key scientific areas, as detailed by Sedlmair et al. (2014). Combined with interactive simulation steering, the time for obtaining optimal parameter space solutions can be significantly reduced (Matković et al., 2008, 2014). Whitlock et al. (2011) present an integration of VisIt (Childs et al., 2005), a flexible end-user visualization system, into existing simulation codes. This approach enables in situ processing of large datasets while adding visual analysis capabilities at simulation runtime. A similar approach has been suggested by Fabian et al. (2011) for ParaView (Henderson, 2004).

Coordinated multiple views (CMVs), as proposed by North and Shneiderman (1997) and Wang Baldonado et al. (2000), can assist in visual parameter space exploration. CMVs are a category of visualization systems that use two or more distinct views to support the investigation of a single conceptual entity. For example, a CMV system can display a 3D rendering of a building (the conceptual entity) alongside a top-down view of its schematics; whenever a room is selected within the schematic overview, the 3D rendering will highlight the room's location. Roberts (2007) shows that CMVs support exploratory data analysis by offering interaction with representations of the same data while emphasizing different details. Ryu et al. (2003) present CMV systems that have been successfully utilized to uncover complex relationships by enabling users to relate different data modalities and scales, and by assisting researchers in context switches, comparative tasks, and supplementary analysis techniques. Additional examples of such systems are presented by North and Shneiderman (2000), Boukhelifa and Rodgers (2003), and Weaver (2004).

Visual exploration of neural network connectivity, e.g., by displaying spatial connectivity data in 3D renderings, has previously been employed by scientists to better understand and validate models as well as to support theories regarding the networks' topological organization (Migliore et al., 2014; Roy et al., 2014). The infinite solution space of suitable connectivity paths and end configurations for neural networks makes fully automatic parameter fitting "hard," since it involves satisfying multiple contradictory objectives and the qualitative assessment of complex data, as explained by Sedlmair et al. (2014). Kammara et al. (2016) conclude that for multi-objective optimization problems, visualization of the optimization space and trajectories permits more efficient and transparent human supervision of optimization process properties, e.g., diversity and neighborhood relations of solution qualities. They also point their work toward interactive exploration of complex spaces, which allows expert knowledge and intuition to quickly explore suitable locations in the parameter space.

To address efficient but rigorous parameter space exploration, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on the generation of connectivity, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The generation of local connectivity is achieved using structural plasticity in NEST (Bos et al., 2015) following the simple homeostatic rules described in Butz and van Ooyen (2013). We specify the problem from the control theory perspective, as variations in the structure of the system control the transition in its dynamics from an initial to a final state following a defined trajectory. The tool allows researchers to steer the parameters of the structural plasticity during the simulation, thus quickly growing networks composed of multiple populations with individually targeted mean activities. The flexibility of the software allows the exploration of other connectivity and neuron variables apart from those presented as use cases. We use CMVs to interactively plot firing rates and connectivity properties of populations while the simulation is performed. Moreover, simulation steering is realized by providing interactive capabilities to influence simulation parameters on the fly.

We have developed this tool based on two use cases where visual exploration is key for obtaining insights into non-unique dynamics and solutions. The first use case focuses on the generation of connectivity in a simple two-population network. Here we show how connectivity that yields a desired level of average activity in the network can be grown along multiple trajectories with different biological significance. The second use case is inspired by a whole-brain simulation described in Deco et al. (2013), where the exploration of non-unique connectivity solutions is desired to understand the behavior of the model.

Applying this approach, an intractable inverse problem can be reduced to a tractable subspace, and the requirements for statistically valid analyses can be determined. Visualization can simplify a complex parameter search scenario, helping in the development of mathematically robust descriptions amenable to further automated investigation of characteristic solution ensembles. Observing the evolution of connectivity, especially in cases where several biologically meaningful paths may lead to the same solutions, can be useful for a better understanding of development, learning, and brain repair. This work is a first step toward developing new analytic and computational solutions to specific inverse problems in neuronal and neural mass networks. Our software platform promotes rigorous analysis of complex network models and supports well-informed selection of parameters for simulation.

This paper is structured as follows: first, we present an introduction to generic dynamic neural network models from a control theory perspective. Next, we describe connectivity construction and its effects on the dynamics of the system. Then, the development process and design of the steering and visualization tool is detailed. The fifth section describes the results of using the steering tool in two different use cases. Finally, we discuss our results and present open questions and future work.
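As a schematic illustration of the steering loop described above, the sketch below grows connectivity in short simulation chunks and updates a growth-rate parameter from the gap between measured and target activity. The surrogate dynamics, the class and function names, and the automatic update rule are our own assumptions, standing in for the NEST structural plasticity calls and for the interactive adjustments a user would make through the tool.

```python
from dataclasses import dataclass

@dataclass
class PopulationState:
    """Bookkeeping for one population during connectivity growth. In the
    real tool these quantities live inside the NEST simulation; here a
    crude surrogate stands in so the loop can run on its own."""
    name: str
    target_rate: float          # desired mean firing rate (spikes/s)
    growth_rate: float = 0.0    # current growth rate of synaptic elements
    connections: float = 0.0    # accumulated (abstract) connectivity

def run_chunk(populations, chunk_ms):
    """Surrogate for 'simulate one chunk with structural plasticity':
    connectivity changes in proportion to the growth rate, and the mean
    rate is assumed to scale with connectivity (purely illustrative)."""
    rates = {}
    for pop in populations:
        pop.connections = max(0.0, pop.connections + pop.growth_rate * chunk_ms)
        rates[pop.name] = 2.0 * pop.connections       # toy activity model
    return rates

def steer(populations, chunk_ms=100.0, n_chunks=100, gain=1e-3):
    """Grow connectivity in chunks and set each population's growth rate
    from the gap between measured and target activity, in the spirit of
    the homeostatic rules of Butz and van Ooyen (2013)."""
    history = []
    for _ in range(n_chunks):
        rates = run_chunk(populations, chunk_ms)
        for pop in populations:
            # Grow when activity is below target, prune when above.
            pop.growth_rate = gain * (pop.target_rate - rates[pop.name])
        history.append({p.name: (rates[p.name], p.growth_rate) for p in populations})
    return history

if __name__ == "__main__":
    pops = [PopulationState("excitatory", target_rate=8.0),
            PopulationState("inhibitory", target_rate=12.0)]
    print(steer(pops)[-1])   # rates approach their targets under the toy model
```

In the actual tool, the history collected here corresponds to the firing-rate and connectivity traces shown in the coordinated views, and the growth-rate update would be issued interactively by the user, which is what allows different trajectories toward the same activity target to be explored.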
[ "28878643", "23293600", "27736873", "24130472", "22540563", "24899711", "23825427", "27303272", "20195795", "19198667", "16261181", "29503613", "25754070", "18989028", "26356894", "24808855", "26733860", "9960846", "23203991", "25131838", "28146554", "26356930", "16201007", "16436619", "24672470", "26041729" ]
[ { "pmid": "28878643", "title": "Homologous Basal Ganglia Network Models in Physiological and Parkinsonian Conditions.", "abstract": "The classical model of basal ganglia has been refined in recent years with discoveries of subpopulations within a nucleus and previously unknown projections. One such discovery is the presence of subpopulations of arkypallidal and prototypical neurons in external globus pallidus, which was previously considered to be a primarily homogeneous nucleus. Developing a computational model of these multiple interconnected nuclei is challenging, because the strengths of the connections are largely unknown. We therefore use a genetic algorithm to search for the unknown connectivity parameters in a firing rate model. We apply a binary cost function derived from empirical firing rate and phase relationship data for the physiological and Parkinsonian conditions. Our approach generates ensembles of over 1,000 configurations, or homologies, for each condition, with broad distributions for many of the parameter values and overlap between the two conditions. However, the resulting effective weights of connections from or to prototypical and arkypallidal neurons are consistent with the experimental data. We investigate the significance of the weight variability by manipulating the parameters individually and cumulatively, and conclude that the correlation observed between the parameters is necessary for generating the dynamics of the two conditions. We then investigate the response of the networks to a transient cortical stimulus, and demonstrate that networks classified as physiological effectively suppress activity in the internal globus pallidus, and are not susceptible to oscillations, whereas parkinsonian networks show the opposite tendency. Thus, we conclude that the rates and phase relationships observed in the globus pallidus are predictive of experimentally observed higher level dynamical features of the physiological and parkinsonian basal ganglia, and that the multiplicity of solutions generated by our method may well be indicative of a natural diversity in basal ganglia networks. We propose that our approach of generating and analyzing an ensemble of multiple solutions to an underdetermined network model provides greater confidence in its predictions than those derived from a unique solution, and that projecting such homologous networks on a lower dimensional space of sensibly chosen dynamical features gives a better chance than a purely structural analysis at understanding complex pathologies such as Parkinson's disease." }, { "pmid": "23293600", "title": "CoCoMac 2.0 and the future of tract-tracing databases.", "abstract": "The CoCoMac database contains the results of several hundred published axonal tract-tracing studies in the macaque monkey brain. The combined results are used for constructing the macaque macro-connectome. Here we discuss the redevelopment of CoCoMac and compare it to six connectome-related projects: two online resources that provide full access to raw tracing data in rodents, a connectome viewer for advanced 3D graphics, a partial but highly detailed rat connectome, a brain data management system that generates custom connectivity matrices, and a software package that covers the complete pipeline from connectivity data to large-scale brain simulations. The second edition of CoCoMac features many enhancements over the original. For example, a search wizard is provided for full access to all tables and their nested dependencies. 
Connectivity matrices can be computed on demand in a user-selected nomenclature. A new data entry system is available as a preview, and is to become a generic solution for community-driven data entry in manually collated databases. We conclude with the question whether neuronal tracing will remain the gold standard to uncover the wiring of brains, thereby highlighting developments in human connectome construction, tracer substances, polarized light imaging, and serial block-face scanning electron microscopy." }, { "pmid": "27736873", "title": "Identifying Anatomical Origins of Coexisting Oscillations in the Cortical Microcircuit.", "abstract": "Oscillations are omnipresent in neural population signals, like multi-unit recordings, EEG/MEG, and the local field potential. They have been linked to the population firing rate of neurons, with individual neurons firing in a close-to-irregular fashion at low rates. Using a combination of mean-field and linear response theory we predict the spectra generated in a layered microcircuit model of V1, composed of leaky integrate-and-fire neurons and based on connectivity compiled from anatomical and electrophysiological studies. The model exhibits low- and high-γ oscillations visible in all populations. Since locally generated frequencies are imposed onto other populations, the origin of the oscillations cannot be deduced from the spectra. We develop an universally applicable systematic approach that identifies the anatomical circuits underlying the generation of oscillations in a given network. Based on a theoretical reduction of the dynamics, we derive a sensitivity measure resulting in a frequency-dependent connectivity map that reveals connections crucial for the peak amplitude and frequency of the observed oscillations and identifies the minimal circuit generating a given frequency. The low-γ peak turns out to be generated in a sub-circuit located in layer 2/3 and 4, while the high-γ peak emerges from the inter-neurons in layer 4. Connections within and onto layer 5 are found to regulate slow rate fluctuations. We further demonstrate how small perturbations of the crucial connections have significant impact on the population spectra, while the impairment of other connections leaves the dynamics on the population level unaltered. The study uncovers connections where mechanisms controlling the spectra of the cortical microcircuit are most effective." }, { "pmid": "24130472", "title": "A simple rule for dendritic spine and axonal bouton formation can account for cortical reorganization after focal retinal lesions.", "abstract": "Lasting alterations in sensory input trigger massive structural and functional adaptations in cortical networks. The principles governing these experience-dependent changes are, however, poorly understood. Here, we examine whether a simple rule based on the neurons' need for homeostasis in electrical activity may serve as driving force for cortical reorganization. According to this rule, a neuron creates new spines and boutons when its level of electrical activity is below a homeostatic set-point and decreases the number of spines and boutons when its activity exceeds this set-point. In addition, neurons need a minimum level of activity to form spines and boutons. Spine and bouton formation depends solely on the neuron's own activity level, and synapses are formed by merging spines and boutons independently of activity. 
Using a novel computational model, we show that this simple growth rule produces neuron and network changes as observed in the visual cortex after focal retinal lesions. In the model, as in the cortex, the turnover of dendritic spines was increased strongest in the center of the lesion projection zone, while axonal boutons displayed a marked overshoot followed by pruning. Moreover, the decrease in external input was compensated for by the formation of new horizontal connections, which caused a retinotopic remapping. Homeostatic regulation may provide a unifying framework for understanding cortical reorganization, including network repair in degenerative diseases or following focal stroke." }, { "pmid": "22540563", "title": "Extracting dynamical equations from experimental data is NP hard.", "abstract": "The behavior of any physical system is governed by its underlying dynamical equations. Much of physics is concerned with discovering these dynamical equations and understanding their consequences. In this Letter, we show that, remarkably, identifying the underlying dynamical equation from any amount of experimental data, however precise, is a provably computationally hard problem (it is NP hard), both for classical and quantum mechanical systems. As a by-product of this work, we give complexity-theoretic answers to both the quantum and classical embedding problems, two long-standing open problems in mathematics (the classical problem, in particular, dating back over 70 years)." }, { "pmid": "24899711", "title": "How local excitation-inhibition ratio impacts the whole brain dynamics.", "abstract": "The spontaneous activity of the brain shows different features at different scales. On one hand, neuroimaging studies show that long-range correlations are highly structured in spatiotemporal patterns, known as resting-state networks, on the other hand, neurophysiological reports show that short-range correlations between neighboring neurons are low, despite a large amount of shared presynaptic inputs. Different dynamical mechanisms of local decorrelation have been proposed, among which is feedback inhibition. Here, we investigated the effect of locally regulating the feedback inhibition on the global dynamics of a large-scale brain model, in which the long-range connections are given by diffusion imaging data of human subjects. We used simulations and analytical methods to show that locally constraining the feedback inhibition to compensate for the excess of long-range excitatory connectivity, to preserve the asynchronous state, crucially changes the characteristics of the emergent resting and evoked activity. First, it significantly improves the model's prediction of the empirical human functional connectivity. Second, relaxing this constraint leads to an unrealistic network evoked activity, with systematic coactivation of cortical areas which are components of the default-mode network, whereas regulation of feedback inhibition prevents this. Finally, information theoretic analysis shows that regulation of the local feedback inhibition increases both the entropy and the Fisher information of the network evoked responses. Hence, it enhances the information capacity and the discrimination accuracy of the global network. In conclusion, the local excitation-inhibition ratio impacts the structure of the spontaneous activity and the information transmission at the large-scale brain level." 
}, { "pmid": "23825427", "title": "Resting-state functional connectivity emerges from structurally and dynamically shaped slow linear fluctuations.", "abstract": "Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure-function relationships. However, the mechanistic origin of resting state is obscured by large-scale models' complexity, and a close structure-function relation is still an open problem. Thus, a realistic but simple enough description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low firing activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to explicitly understand the building up of FC through neuronal dynamics underpinned by anatomical connections and to drive hypotheses in task-evoked studies and for clinical applications." }, { "pmid": "27303272", "title": "Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.", "abstract": "With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. 
Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework." }, { "pmid": "20195795", "title": "Run-time interoperability between neuronal network simulators based on the MUSIC framework.", "abstract": "MUSIC is a standard API allowing large scale neuron simulators to exchange data within a parallel computer during runtime. A pilot implementation of this API has been released as open source. We provide experiences from the implementation of MUSIC interfaces for two neuronal network simulators of different kinds, NEST and MOOSE. A multi-simulation of a cortico-striatal network model involving both simulators is performed, demonstrating how MUSIC can promote inter-operability between models written for different simulators and how these can be re-used to build a larger model system. Benchmarks show that the MUSIC pilot implementation provides efficient data transfer in a cluster computer with good scaling. We conclude that MUSIC fulfills the design goal that it should be simple to adapt existing simulators to use MUSIC. In addition, since the MUSIC API enforces independence of the applications, the multi-simulation could be built from pluggable component modules without adaptation of the components to each other in terms of simulation time-step or topology of connections between the modules." }, { "pmid": "19198667", "title": "PyNEST: A Convenient Interface to the NEST Simulator.", "abstract": "The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10(4) neurons and 10(7) to 10(9) synapses. NEST is implemented in C++ and can be used on a large range of architectures from single-core laptops over multi-core desktop computers to super-computers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used." }, { "pmid": "16261181", "title": "Critical period plasticity in local cortical circuits.", "abstract": "Neuronal circuits in the brain are shaped by experience during 'critical periods' in early postnatal life. In the primary visual cortex, this activity-dependent development is triggered by the functional maturation of local inhibitory connections and driven by a specific, late-developing subset of interneurons. 
Ultimately, the structural consolidation of competing sensory inputs is mediated by a proteolytic reorganization of the extracellular matrix that occurs only during the critical period. The reactivation of this process, and subsequent recovery of function in conditions such as amblyopia, can now be studied with realistic circuit models that might generalize across systems." }, { "pmid": "29503613", "title": "Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.", "abstract": "State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems." }, { "pmid": "25754070", "title": "State and parameter estimation of a neural mass model from electrophysiological signals during the status epilepticus.", "abstract": "Status epilepticus is an emergency condition in patients with prolonged seizure or recurrent seizures without full recovery between them. The pathophysiological mechanisms of status epilepticus are not well established. With this argument, we use a computational modeling approach combined with in vivo electrophysiological data obtained from an experimental model of status epilepticus to infer about changes that may lead to a seizure. Special emphasis is done to analyze parameter changes during or after pilocarpine administration. A cubature Kalman filter is utilized to estimate parameters and states of the model in real time from the observed electrophysiological signals. It was observed that during basal activity (before pilocarpine administration) the parameters presented a standard deviation below 30% of the mean value, while during SE activity, the parameters presented variations larger than 200% of the mean value with respect to basal state. The ratio of excitation-inhibition, increased during SE activity by 80% with respect to the transition state, and reaches the lowest value during cessation. In addition, a progression between low and fast inhibitions before or during this condition was found. This method can be implemented in real time, which is particularly important for the design of stimulation devices that attempt to stop seizures. These changes in the parameters analyzed during seizure activity can lead to better understanding of the mechanisms of epilepsy and to improve its treatments." 
}, { "pmid": "18989028", "title": "Interactive visual steering--rapid visual prototyping of a common rail injection system.", "abstract": "Interactive steering with visualization has been a common goal of the visualization research community for twenty years, but it is rarely ever realized in practice. In this paper we describe a successful realization of a tightly coupled steering loop, integrating new simulation technology and interactive visual analysis in a prototyping environment for automotive industry system design. Due to increasing pressure on car manufacturers to meet new emission regulations, to improve efficiency, and to reduce noise, both simulation and visualization are pushed to their limits. Automotive system components, such as the powertrain system or the injection system have an increasing number of parameters, and new design approaches are required. It is no longer possible to optimize such a system solely based on experience or forward optimization. By coupling interactive visualization with the simulation back-end (computational steering), it is now possible to quickly prototype a new system, starting from a non-optimized initial prototype and the corresponding simulation model. The prototyping continues through the refinement of the simulation model, of the simulation parameters and through trial-and-error attempts to an optimized solution. The ability to early see the first results from a multidimensional simulation space--thousands of simulations are run for a multidimensional variety of input parameters--and to quickly go back into the simulation and request more runs in particular parameter regions of interest significantly improves the prototyping process and provides a deeper understanding of the system behavior. The excellent results which we achieved for the common rail injection system strongly suggest that our approach has a great potential of being generalized to other, similar scenarios." }, { "pmid": "26356894", "title": "Visual Analytics for Complex Engineering Systems: Hybrid Visual Steering of Simulation Ensembles.", "abstract": "In this paper we propose a novel approach to hybrid visual steering of simulation ensembles. A simulation ensemble is a collection of simulation runs of the same simulation model using different sets of control parameters. Complex engineering systems have very large parameter spaces so a naïve sampling can result in prohibitively large simulation ensembles. Interactive steering of simulation ensembles provides the means to select relevant points in a multi-dimensional parameter space (design of experiment). Interactive steering efficiently reduces the number of simulation runs needed by coupling simulation and visualization and allowing a user to request new simulations on the fly. As system complexity grows, a pure interactive solution is not always sufficient. The new approach of hybrid steering combines interactive visual steering with automatic optimization. Hybrid steering allows a domain expert to interactively (in a visualization) select data points in an iterative manner, approximate the values in a continuous region of the simulation space (by regression) and automatically find the \"best\" points in this continuous region based on the specified constraints and objectives (by optimization). We argue that with the full spectrum of optimization options, the steering process can be improved substantially. We describe an integrated system consisting of a simulation, a visualization, and an optimization component. 
We also describe typical tasks and propose an interactive analysis workflow for complex engineering systems. We demonstrate our approach on a case study from automotive industry, the optimization of a hydraulic circuit in a high pressure common rail Diesel injection system." }, { "pmid": "24808855", "title": "Distributed organization of a brain microcircuit analyzed by three-dimensional modeling: the olfactory bulb.", "abstract": "The functional consequences of the laminar organization observed in cortical systems cannot be easily studied using standard experimental techniques, abstract theoretical representations, or dimensionally reduced models built from scratch. To solve this problem we have developed a full implementation of an olfactory bulb microcircuit using realistic three-dimensional (3D) inputs, cell morphologies, and network connectivity. The results provide new insights into the relations between the functional properties of individual cells and the networks in which they are embedded. To our knowledge, this is the first model of the mitral-granule cell network to include a realistic representation of the experimentally-recorded complex spatial patterns elicited in the glomerular layer (GL) by natural odor stimulation. Although the olfactory bulb, due to its organization, has unique advantages with respect to other brain systems, the method is completely general, and can be integrated with more general approaches to other systems. The model makes experimentally testable predictions on distributed processing and on the differential backpropagation of somatic action potentials in each lateral dendrite following odor learning, providing a powerful 3D framework for investigating the functions of brain microcircuits." }, { "pmid": "26733860", "title": "Integrating Visualizations into Modeling NEST Simulations.", "abstract": "Modeling large-scale spiking neural networks showing realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions to assist researchers in understanding the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge to integrate these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use the visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus which requires to switch the visualization tool that assists in the current problem domain. Second, because of the heterogeneous data that results from simulations, researchers want to relate data to investigate these effectively. Since a monolithic application model, processing and visualizing all data modalities and reflecting all combinations of possible workflows in a holistic way, is most likely impossible to develop and to maintain, a software architecture that offers specialized visualization tools that run simultaneously and can be linked together to reflect the current workflow, is a more feasible approach. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into the modeling tasks. 
In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach by common use cases we encountered in our collaborative work." }, { "pmid": "23203991", "title": "The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model.", "abstract": "In the past decade, the cell-type specific connectivity and activity of local cortical networks have been characterized experimentally to some detail. In parallel, modeling has been established as a tool to relate network structure to activity dynamics. While available comprehensive connectivity maps ( Thomson, West, et al. 2002; Binzegger et al. 2004) have been used in various computational studies, prominent features of the simulated activity such as the spontaneous firing rates do not match the experimental findings. Here, we analyze the properties of these maps to compile an integrated connectivity map, which additionally incorporates insights on the specific selection of target types. Based on this integrated map, we build a full-scale spiking network model of the local cortical microcircuit. The simulated spontaneous activity is asynchronous irregular and cell-type specific firing rates are in agreement with in vivo recordings in awake animals, including the low rate of layer 2/3 excitatory cells. The interplay of excitation and inhibition captures the flow of activity through cortical layers after transient thalamic stimulation. In conclusion, the integration of a large body of the available connectivity data enables us to expose the dynamical consequences of the cortical microcircuitry." }, { "pmid": "25131838", "title": "Using the virtual brain to reveal the role of oscillations and plasticity in shaping brain's dynamical landscape.", "abstract": "Spontaneous brain activity, that is, activity in the absence of controlled stimulus input or an explicit active task, is topologically organized in multiple functional networks (FNs) maintaining a high degree of coherence. These \"resting state networks\" are constrained by the underlying anatomical connectivity between brain areas. They are also influenced by the history of task-related activation. The precise rules that link plastic changes and ongoing dynamics of resting-state functional connectivity (rs-FC) remain unclear. Using the framework of the open source neuroinformatics platform \"The Virtual Brain,\" we identify potential computational mechanisms that alter the dynamical landscape, leading to reconfigurations of FNs. Using a spiking neuron model, we first demonstrate that network activity in the absence of plasticity is characterized by irregular oscillations between low-amplitude asynchronous states and high-amplitude synchronous states. We then demonstrate the capability of spike-timing-dependent plasticity (STDP) combined with intrinsic alpha (8-12 Hz) oscillations to efficiently influence learning. Further, we show how alpha-state-dependent STDP alters the local area dynamics from an irregular to a highly periodic alpha-like state. This is an important finding, as the cortical input from the thalamus is at the rate of alpha. We demonstrate how resulting rhythmic cortical output in this frequency range acts as a neuronal tuner and, hence, leads to synchronization or de-synchronization between brain areas. 
Finally, we demonstrate that locally restricted structural connectivity changes influence local as well as global dynamics and lead to altered rs-FC." }, { "pmid": "28146554", "title": "Fundamental Activity Constraints Lead to Specific Interpretations of the Connectome.", "abstract": "The continuous integration of experimental data into coherent models of the brain is an increasing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. We here employ a mean-field reduction of the dynamics, which allows us to include activity constraints into the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function." }, { "pmid": "26356930", "title": "Visual Parameter Space Analysis: A Conceptual Framework.", "abstract": "Various case studies in different application domains have shown the great potential of visual parameter space analysis to support validating and using simulation models. In order to guide and systematize research endeavors in this area, we provide a conceptual framework for visual parameter space analysis problems. The framework is based on our own experience and a structured analysis of the visualization literature. It contains three major components: (1) a data flow model that helps to abstractly describe visual parameter space analysis problems independent of their application domain; (2) a set of four navigation strategies of how parameter space analysis can be supported by visualization tools; and (3) a characterization of six analysis tasks. Based on our framework, we analyze and classify the current body of literature, and identify three open research gaps in visual parameter space analysis. The framework and its discussion are meant to support visualization designers and researchers in characterizing parameter space analysis problems and to guide their design and evaluation processes." }, { "pmid": "16201007", "title": "The human connectome: A structural description of the human brain.", "abstract": "The connection matrix of the human brain (the human \"connectome\") represents an indispensable foundation for basic and applied neurobiological research. However, the network of anatomical connections linking the neuronal elements of the human brain is still largely unknown. 
While some databases or collations of large-scale anatomical connection patterns exist for other mammalian species, there is currently no connection matrix of the human brain, nor is there a coordinated research effort to collect, archive, and disseminate this important information. We propose a research strategy to achieve this goal, and discuss its potential impact." }, { "pmid": "16436619", "title": "A recurrent network mechanism of time integration in perceptual decisions.", "abstract": "Recent physiological studies using behaving monkeys revealed that, in a two-alternative forced-choice visual motion discrimination task, reaction time was correlated with ramping of spike activity of lateral intraparietal cortical neurons. The ramping activity appears to reflect temporal accumulation, on a timescale of hundreds of milliseconds, of sensory evidence before a decision is reached. To elucidate the cellular and circuit basis of such integration times, we developed and investigated a simplified two-variable version of a biophysically realistic cortical network model of decision making. In this model, slow time integration can be achieved robustly if excitatory reverberation is primarily mediated by NMDA receptors; our model with only fast AMPA receptors at recurrent synapses produces decision times that are not comparable with experimental observations. Moreover, we found two distinct modes of network behavior, in which decision computation by winner-take-all competition is instantiated with or without attractor states for working memory. Decision process is closely linked to the local dynamics, in the \"decision space\" of the system, in the vicinity of an unstable saddle steady state that separates the basins of attraction for the two alternative choices. This picture provides a rigorous and quantitative explanation for the dependence of performance and response time on the degree of task difficulty, and the reason for which reaction times are longer in error trials than in correct trials as observed in the monkey experiment. Our reduced two-variable neural model offers a simple yet biophysically plausible framework for studying perceptual decision making in general." }, { "pmid": "24672470", "title": "CyNEST: a maintainable Cython-based interface for the NEST simulator.", "abstract": "NEST is a simulator for large-scale networks of spiking point neuron models (Gewaltig and Diesmann, 2007). Originally, simulations were controlled via the Simulation Language Interpreter (SLI), a built-in scripting facility implementing a language derived from PostScript (Adobe Systems, Inc., 1999). The introduction of PyNEST (Eppler et al., 2008), the Python interface for NEST, enabled users to control simulations using Python. As the majority of NEST users found PyNEST easier to use and to combine with other applications, it immediately displaced SLI as the default NEST interface. However, developing and maintaining PyNEST has become increasingly difficult over time. This is partly because adding new features requires writing low-level C++ code intermixed with calls to the Python/C API, which is unrewarding. Moreover, the Python/C API evolves with each new version of Python, which results in a proliferation of version-dependent code branches. 
In this contribution we present the re-implementation of PyNEST in the Cython language, a superset of Python that additionally supports the declaration of C/C++ types for variables and class attributes, and provides a convenient foreign function interface (FFI) for invoking C/C++ routines (Behnel et al., 2011). Code generation via Cython allows the production of smaller and more maintainable bindings, including increased compatibility with all supported Python releases without additional burden for NEST developers. Furthermore, this novel approach opens up the possibility to support alternative implementations of the Python language at no cost given a functional Cython back-end for the corresponding implementation, and also enables cross-compilation of Python bindings for embedded systems and supercomputers alike." }, { "pmid": "26041729", "title": "Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity.", "abstract": "Dynamics and function of neuronal networks are determined by their synaptic connectivity. Current experimental methods to analyze synaptic network structure on the cellular level, however, cover only small fractions of functional neuronal circuits, typically without a simultaneous record of neuronal spiking activity. Here we present a method for the reconstruction of large recurrent neuronal networks from thousands of parallel spike train recordings. We employ maximum likelihood estimation of a generalized linear model of the spiking activity in continuous time. For this model the point process likelihood is concave, such that a global optimum of the parameters can be obtained by gradient ascent. Previous methods, including those of the same class, did not allow recurrent networks of that order of magnitude to be reconstructed due to prohibitive computational cost and numerical instabilities. We describe a minimal model that is optimized for large networks and an efficient scheme for its parallelized numerical optimization on generic computing clusters. For a simulated balanced random network of 1000 neurons, synaptic connectivity is recovered with a misclassification error rate of less than 1 % under ideal conditions. We show that the error rate remains low in a series of example cases under progressively less ideal conditions. Finally, we successfully reconstruct the connectivity of a hidden synfire chain that is embedded in a random network, which requires clustering of the network connectivity to reveal the synfire groups. Our results demonstrate how synaptic connectivity could potentially be inferred from large-scale parallel spike train recordings." } ]
PLoS Computational Biology
18193941
PMC2186361
10.1371/journal.pcbi.0040010
Matt: Local Flexibility Aids Protein Multiple Structure Alignment
Even when there is agreement on what measure a protein multiple structure alignment should be optimizing, finding the optimal alignment is computationally prohibitive. One approach used by many previous methods is aligned fragment pair chaining, where short structural fragments from all the proteins are aligned against each other optimally, and the final alignment chains these together in geometrically consistent ways. Ye and Godzik have recently suggested that adding geometric flexibility may help better model protein structures in a variety of contexts. We introduce the program Matt (Multiple Alignment with Translations and Twists), an aligned fragment pair chaining algorithm that, in intermediate steps, allows local flexibility between fragments: small translations and rotations are temporarily allowed to bring sets of aligned fragments closer, even if they are physically impossible under rigid body transformations. After a dynamic programming assembly guided by these “bent” alignments, geometric consistency is restored in the final step before the alignment is output. Matt is tested against other recent multiple protein structure alignment programs on the popular Homstrad and SABmark benchmark datasets. Matt's global performance is competitive with the other programs on Homstrad, but outperforms the other programs on SABmark, a benchmark of multiple structure alignments of proteins with more distant homology. On both datasets, Matt demonstrates an ability to better align the ends of α-helices and β-strands, an important characteristic of any structure alignment program intended to help construct a structural template library for threading approaches to the inverse protein-folding problem. The related question of whether Matt alignments can be used to distinguish distantly homologous structure pairs from pairs of proteins that are not homologous is also considered. For this purpose, a p-value score based on the length of the common core and average root mean squared deviation (RMSD) of Matt alignments is shown to largely separate decoys from homologous protein structures in the SABmark benchmark dataset. We postulate that Matt's strong performance comes from its ability to model proteins in different conformational states and, perhaps even more important, its ability to model backbone distortions in more distantly related proteins.
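The geometric quantities this abstract leans on (a common core of aligned residues and its RMSD under the best rigid-body transformation) can be made concrete with a short, generic sketch. The NumPy code below implements the standard Kabsch superposition; it is not taken from Matt, and the function name, array shapes, and toy usage are assumptions, but the procedure (centering, SVD of the covariance matrix, reflection correction) is the usual way to obtain the optimal rigid-body fit and RMSD for a fixed set of matched coordinates.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Optimal rigid-body superposition of P onto Q (Kabsch algorithm).

    P, Q: (n, 3) arrays of matched coordinates, e.g. aligned C-alpha atoms.
    Returns (rmsd, R, t) such that P @ R.T + t is the least-squares fit of P onto Q.
    """
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)        # centroids
    P0, Q0 = P - Pc, Q - Qc                        # centered coordinates
    U, _, Vt = np.linalg.svd(P0.T @ Q0)            # SVD of the 3x3 covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # optimal rotation
    t = Qc - Pc @ R.T                              # optimal translation
    fitted = P @ R.T + t
    rmsd = float(np.sqrt(np.mean(np.sum((fitted - Q) ** 2, axis=1))))
    return rmsd, R, t

# Toy check: a rotated and translated copy of a fragment should give RMSD ~ 0.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frag = rng.normal(size=(8, 3))
    theta = 0.3
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0, 0.0, 1.0]])
    moved = frag @ rot.T + np.array([1.0, -2.0, 0.5])
    rmsd, _, _ = kabsch_rmsd(frag, moved)
    print(round(rmsd, 6))   # expected: 0.0 up to numerical noise
```

Given a fixed residue-residue correspondence, this is essentially the computation a final rigid "unbending" step relies on: it returns the single best rigid-body transformation and the RMSD that is then reported for the alignment.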
Related Work
The only general protein structure alignment programs that previously tried to model flexibility are FlexProt [36] and Ye and Godzik's Fatcat [14] (both for pairwise alignment), together with Fatcat's generalization to multiple structure alignment, POSA [27]. Fatcat is also an aligned fragment pair (AFP) chaining algorithm, except that it allows a globally minimized number of translations or bends in the structure if they improve the overall alignment. In this way, it can capture homologous proteins with hinges, or other discrete points of flexibility, due to conformational change. Our program Matt is fundamentally different: it allows flexibilities everywhere between short fragments. That is, it does not seek to globally minimize the number of bends, but rather permits continuous small local perturbations in order to improve the “bent” RMSD between structures. Because Matt allows these flexibilities, it can place strict tolerance limits on the “bent” RMSD, so it keeps only fragments whose local alignments are very tight. Up until the last step, Matt allows the dynamic program to assemble fragments in ways that are structurally impossible: one chain may have to break, or rotate beyond the physical constraints imposed by the backbone, in order to simultaneously fit the best transformation. This is repaired in a final step. The residue-to-residue alignment produced by the unrealistic “bent” transformation is retained, and the best rigid-body transformation that preserves that alignment is found. That rigid-body alignment is then either output as is, together with the residue-to-residue correspondences from the “bent” Matt alignment, or extended to include as-yet-unaligned residues that fall within a user-settable maximum RMSD cutoff under the new rigid-body transformation, yielding the final Matt “unbent” alignment.
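To make the flow just described concrete, here is a deliberately simplified Python sketch of the general idea: candidate aligned fragment pairs are admitted on local fit quality alone (the “bent” relaxation), chained by a toy dynamic program that respects only sequence order and non-overlap, and the resulting residue correspondences are then given a single rigid-body fit (“unbending”). This is not Matt's actual algorithm or scoring; the window length k, the RMSD tolerance, the residue-count objective, and all function names are illustrative assumptions.

```python
import numpy as np

def _fit_rmsd(P, Q):
    """RMSD of matched (n, 3) coordinate sets after optimal rigid superposition (Kabsch)."""
    P0, Q0 = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P0.T @ Q0)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return float(np.sqrt(np.mean(np.sum((P0 @ R.T - Q0) ** 2, axis=1))))

def candidate_fragment_pairs(A, B, k=5, tol=0.8):
    """All k-residue window pairs (i, j) of structures A, B whose *local* fit RMSD
    is below tol. Geometric consistency between different fragment pairs is
    deliberately ignored here, mimicking the "bent" relaxation described above."""
    pairs = []
    for i in range(len(A) - k + 1):
        for j in range(len(B) - k + 1):
            if _fit_rmsd(A[i:i + k], B[j:j + k]) < tol:
                pairs.append((i, j))
    return pairs

def chain_fragments(pairs, k=5):
    """Chain non-overlapping fragment pairs in sequence order (in both structures)
    with a simple dynamic program maximizing the number of aligned residues."""
    pairs = sorted(pairs)
    n = len(pairs)
    best, prev = [k] * n, [-1] * n   # best[a]: residues aligned by a chain ending at pair a
    for a in range(n):
        ia, ja = pairs[a]
        for b in range(a):
            ib, jb = pairs[b]
            if ib + k <= ia and jb + k <= ja and best[b] + k > best[a]:
                best[a], prev[a] = best[b] + k, b
    chain, a = [], (int(np.argmax(best)) if n else -1)
    while a != -1:
        chain.append(pairs[a])
        a = prev[a]
    return list(reversed(chain))

def toy_unbent_alignment(A, B, k=5, tol=0.8):
    """Toy pipeline: relaxed ("bent") fragment chaining, then one final rigid fit.

    A, B: (n, 3) numpy arrays of C-alpha coordinates.
    Returns the residue-residue correspondences and the core RMSD of the single
    rigid-body ("unbent") superposition over those correspondences.
    """
    chain = chain_fragments(candidate_fragment_pairs(A, B, k, tol), k)
    idx_a = [i + d for i, _ in chain for d in range(k)]
    idx_b = [j + d for _, j in chain for d in range(k)]
    core_rmsd = _fit_rmsd(A[idx_a], B[idx_b]) if chain else float("nan")
    return list(zip(idx_a, idx_b)), core_rmsd
```

Under these assumptions the chaining score is simply the number of aligned residues and only two structures are handled; Matt's actual assembly also scores gaps, aligns multiple structures simultaneously, and restores geometric consistency more carefully than a single global fit. The division of labor, however (strict local tolerances, relaxed inter-fragment geometry during assembly, and a final rigid repair), follows the description above.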
[ "9600892", "3709526", "10195279", "9367768", "16679011", "12127461", "11551177", "11151008", "15201059", "9796821", "10980157", "1961713", "14573862", "17118190", "17683261", "15531601", "15304646", "11153094", "15746292", "15701525", "10964982", "9071025", "16488145", "12520056", "16254179", "12112693", "15333456", "3430611", "15162494", "16736488", "9504803", "8019422", "21869139" ]
[ { "pmid": "9600892", "title": "A unified statistical framework for sequence comparison and structure comparison.", "abstract": "We present an approach for assessing the significance of sequence and structure comparisons by using nearly identical statistical formalisms for both sequence and structure. Doing so involves an all-vs.-all comparison of protein domains [taken here from the Structural Classification of Proteins (scop) database] and then fitting a simple distribution function to the observed scores. By using this distribution, we can attach a statistical significance to each comparison score in the form of a P value, the probability that a better score would occur by chance. As expected, we find that the scores for sequence matching follow an extreme-value distribution. The agreement, moreover, between the P values that we derive from this distribution and those reported by standard programs (e.g., BLAST and FASTA validates our approach. Structure comparison scores also follow an extreme-value distribution when the statistics are expressed in terms of a structural alignment score (essentially the sum of reciprocated distances between aligned atoms minus gap penalties). We find that the traditional metric of structural similarity, the rms deviation in atom positions after fitting aligned atoms, follows a different distribution of scores and does not perform as well as the structural alignment score. Comparison of the sequence and structure statistics for pairs of proteins known to be related distantly shows that structural comparison is able to detect approximately twice as many distant relationships as sequence comparison at the same error rate. The comparison also indicates that there are very few pairs with significant similarity in terms of sequence but not structure whereas many pairs have significant similarity in terms of structure but not sequence." }, { "pmid": "3709526", "title": "The relation between the divergence of sequence and structure in proteins.", "abstract": "Homologous proteins have regions which retain the same general fold and regions where the folds differ. For pairs of distantly related proteins (residue identity approximately 20%), the regions with the same fold may comprise less than half of each molecule. The regions with the same general fold differ in structure by amounts that increase as the amino acid sequences diverge. The root mean square deviation in the positions of the main chain atoms, delta, is related to the fraction of mutated residues, H, by the expression: delta(A) = 0.40 e1.87H." }, { "pmid": "10195279", "title": "Twilight zone of protein sequence alignments.", "abstract": "Sequence alignments unambiguously distinguish between protein pairs of similar and non-similar structure when the pairwise sequence identity is high (>40% for long alignments). The signal gets blurred in the twilight zone of 20-35% sequence identity. Here, more than a million sequence alignments were analysed between protein pairs of known structures to re-define a line distinguishing between true and false positives for low levels of similarity. Four results stood out. (i) The transition from the safe zone of sequence alignment into the twilight zone is described by an explosion of false negatives. More than 95% of all pairs detected in the twilight zone had different structures. More precisely, above a cut-off roughly corresponding to 30% sequence identity, 90% of the pairs were homologous; below 25% less than 10% were. 
(ii) Whether or not sequence homology implied structural identity depended crucially on the alignment length. For example, if 10 residues were similar in an alignment of length 16 (>60%), structural similarity could not be inferred. (iii) The 'more similar than identical' rule (discarding all pairs for which percentage similarity was lower than percentage identity) reduced false positives significantly. (iv) Using intermediate sequences for finding links between more distant families was almost as successful: pairs were predicted to be homologous when the respective sequence families had proteins in common. All findings are applicable to automatic database searches." }, { "pmid": "9367768", "title": "Do aligned sequences share the same fold?", "abstract": "Sequence comparison remains a powerful tool to assess the structural relatedness of two proteins. To develop a sensitive sequence-based procedure for fold recognition, we performed an exhaustive global alignment (with zero end gap penalties) between sequences of protein domains with known three-dimensional folds. The subset of 1.3 million alignments between sequences of structurally unrelated domains was used to derive a set of analytical functions that represent the probability of structural significance for any sequence alignment at a given sequence identity, sequence similarity and alignment score. Analysis of overlap between structurally significant and insignificant alignments shows that sequence identity and sequence similarity measures are poor indicators of structural relatedness in the \"twilight zone\", while the alignment score allows much better discrimination between alignments of structurally related and unrelated sequences for a wide variety of alignment settings. A fold recognition benchmark was used to compare eight different substitution matrices with eight sets of gap penalties. The best performing matrices were Gonnet and Blosum50 with normalized gap penalties of 2.4/0.15 and 2.0/0.15, respectively, while the positive matrices were the worst performers. The derived functions and parameters can be used for fold recognition via a multilink chain of probability weighted pairwise sequence alignments." }, { "pmid": "16679011", "title": "Multiple sequence alignment.", "abstract": "Multiple sequence alignments are an essential tool for protein structure and function prediction, phylogeny inference and other common tasks in sequence analysis. Recently developed systems have advanced the state of the art with respect to accuracy, ability to scale to thousands of proteins and flexibility in comparing proteins that do not share the same domain architecture. New multiple alignment benchmark databases include PREFAB, SABMARK, OXBENCH and IRMBASE. Although CLUSTALW is still the most popular alignment tool to date, recent methods offer significantly better alignment quality and, in some cases, reduced computational cost." }, { "pmid": "12127461", "title": "Evolution of protein structures and functions.", "abstract": "Within the ever-expanding repertoire of known protein sequences and structures, many examples of evolving three-dimensional structures are emerging that illustrate the plasticity and robustness of protein folds. The mechanisms by which protein folds change often include the fusion of duplicated domains, followed by divergence through mutation. Such changes reflect both the stability of protein folds and the requirements of protein function." 
}, { "pmid": "11551177", "title": "Fold change in evolution of protein structures.", "abstract": "Typically, protein spatial structures are more conserved in evolution than amino acid sequences. However, the recent explosion of sequence and structure information accompanied by the development of powerful computational methods led to the accumulation of examples of homologous proteins with globally distinct structures. Significant sequence conservation, local structural resemblance, and functional similarity strongly indicate evolutionary relationships between these proteins despite pronounced structural differences at the fold level. Several mechanisms such as insertions/deletions/substitutions, circular permutations, and rearrangements in beta-sheet topologies account for the majority of detected structural irregularities. The existence of evolutionarily related proteins that possess different folds brings new challenges to the homology modeling techniques and the structure classification strategies and offers new opportunities for protein design in experimental studies." }, { "pmid": "11151008", "title": "Protein structural alignments and functional genomics.", "abstract": "Structural genomics-the systematic solution of structures of the proteins of an organism-will increasingly often produce molecules of unknown function with no close relative of known function. Prediction of protein function from structure has thereby become a challenging problem of computational molecular biology. The strong conservation of active site conformations in homologous proteins suggests a method for identifying them. This depends on the relationship between size and goodness-of-fit of aligned substructures in homologous proteins. For all pairs of proteins studied, the root-mean-square deviation (RMSD) as a function of the number of residues aligned varies exponentially for large common substructures and linearly for small common substructures. The exponent of the dependence at large common substructures is well correlated with the RMSD of the core as originally calculated by Chothia and Lesk (EMBO J 1986;5:823-826), affording the possibility of reconciling different structural alignment procedures. In the region of small common substructures, reduced aligned subsets define active sites and can be used to suggest the locations of active sites in homologous proteins." }, { "pmid": "15201059", "title": "3DCoffee: combining protein sequences and structures within multiple sequence alignments.", "abstract": "Most bioinformatics analyses require the assembly of a multiple sequence alignment. It has long been suspected that structural information can help to improve the quality of these alignments, yet the effect of combining sequences and structures has not been evaluated systematically. We developed 3DCoffee, a novel method for combining protein sequences and structures in order to generate high-quality multiple sequence alignments. 3DCoffee is based on TCoffee version 2.00, and uses a mixture of pairwise sequence alignments and pairwise structure comparison methods to generate multiple sequence alignments. We benchmarked 3DCoffee using a subset of HOMSTRAD, the collection of reference structural alignments. We found that combining TCoffee with the threading program Fugue makes it possible to improve the accuracy of our HOMSTRAD dataset by four percentage points when using one structure only per dataset. Using two structures yields an improvement of ten percentage points. 
The measures carried out on HOM39, a HOMSTRAD subset composed of distantly related sequences, show a linear correlation between multiple sequence alignment accuracy and the ratio of number of provided structure to total number of sequences. Our results suggest that in the case of distantly related sequences, a single structure may not be enough for computing an accurate multiple sequence alignment." }, { "pmid": "9796821", "title": "Protein structure alignment by incremental combinatorial extension (CE) of the optimal path.", "abstract": "A new algorithm is reported which builds an alignment between two protein structures. The algorithm involves a combinatorial extension (CE) of an alignment path defined by aligned fragment pairs (AFPs) rather than the more conventional techniques using dynamic programming and Monte Carlo optimization. AFPs, as the name suggests, are pairs of fragments, one from each protein, which confer structure similarity. AFPs are based on local geometry, rather than global features such as orientation of secondary structures and overall topology. Combinations of AFPs that represent possible continuous alignment paths are selectively extended or discarded thereby leading to a single optimal alignment. The algorithm is fast and accurate in finding an optimal structure alignment and hence suitable for database scanning and detailed analysis of large protein families. The method has been tested and compared with results from Dali and VAST using a representative sample of similar structures. Several new structural similarities not detected by these other methods are reported. Specific one-on-one alignments and searches against all structures as found in the Protein Data Bank (PDB) can be performed via the Web at http://cl.sdsc.edu/ce.html." }, { "pmid": "10980157", "title": "DaliLite workbench for protein structure comparison.", "abstract": "DaliLite is a program for pairwise structure comparison and for structure database searching. It is a standalone version of the search engine of the popular Dali server. A web interface is provided to view the results, multiple alignments and 3D superimpositions of structures." }, { "pmid": "1961713", "title": "Efficient detection of three-dimensional structural motifs in biological macromolecules by computer vision techniques.", "abstract": "Macromolecules carrying biological information often consist of independent modules containing recurring structural motifs. Detection of a specific structural motif within a protein (or DNA) aids in elucidating the role played by the protein (DNA element) and the mechanism of its operation. The number of crystallographically known structures at high resolution is increasing very rapidly. Yet, comparison of three-dimensional structures is a laborious time-consuming procedure that typically requires a manual phase. To date, there is no fast automated procedure for structural comparisons. We present an efficient O(n3) worst case time complexity algorithm for achieving such a goal (where n is the number of atoms in the examined structure). The method is truly three-dimensional, sequence-order-independent, and thus insensitive to gaps, insertions, or deletions. This algorithm is based on the geometric hashing paradigm, which was originally developed for object recognition problems in computer vision. 
It introduces an indexing approach based on transformation invariant representations and is especially geared toward efficient recognition of partial structures in rigid objects belonging to large data bases. This algorithm is suitable for quick scanning of structural data bases and will detect a recurring structural motif that is a priori unknown. The algorithm uses protein (or DNA) structures, atomic labels, and their three-dimensional coordinates. Additional information pertaining to the structure speeds the comparisons. The algorithm is straightforwardly parallelizable, and several versions of it for computer vision applications have been implemented on the massively parallel connection machine. A prototype version of the algorithm has been implemented and applied to the detection of substructures in proteins." }, { "pmid": "14573862", "title": "Multiple structural alignment by secondary structures: algorithm and applications.", "abstract": "We present MASS (Multiple Alignment by Secondary Structures), a novel highly efficient method for structural alignment of multiple protein molecules and detection of common structural motifs. MASS is based on a two-level alignment, using both secondary structure and atomic representation. Utilizing secondary structure information aids in filtering out noisy solutions and achieves efficiency and robustness. Currently, only a few methods are available for addressing the multiple structural alignment task. In addition to using secondary structure information, the advantage of MASS as compared to these methods is that it is a combination of several important characteristics: (1) While most existing methods are based on series of pairwise comparisons, and thus might miss optimal global solutions, MASS is truly multiple, considering all the molecules simultaneously; (2) MASS is sequence order-independent and thus capable of detecting nontopological structural motifs; (3) MASS is able to detect not only structural motifs, shared by all input molecules, but also motifs shared only by subsets of the molecules. Here, we show the application of MASS to various protein ensembles. We demonstrate its ability to handle a large number (order of tens) of molecules, to detect nontopological motifs and to find biologically meaningful alignments within nonpredefined subsets of the input. In particular, we show how by using conserved structural motifs, one can guide protein-protein docking, which is a notoriously difficult problem. MASS is freely available at http://bioinfo3d.cs.tau.ac.il/MASS/." }, { "pmid": "17118190", "title": "Connectivity independent protein-structure alignment: a hierarchical approach.", "abstract": "BACKGROUND\nProtein-structure alignment is a fundamental tool to study protein function, evolution and model building. In the last decade several methods for structure alignment were introduced, but most of them ignore that structurally similar proteins can share the same spatial arrangement of secondary structure elements (SSE) but differ in the underlying polypeptide chain connectivity (non-sequential SSE connectivity).\n\n\nRESULTS\nWe perform protein-structure alignment using a two-level hierarchical approach implemented in the program GANGSTA. On the first level, pair contacts and relative orientations between SSEs (i.e. alpha-helices and beta-strands) are maximized with a genetic algorithm (GA). On the second level residue pair contacts from the best SSE alignments are optimized. 
We have tested the method on visually optimized structure alignments of protein pairs (pairwise mode) and for database scans. For a given protein structure, our method is able to detect significant structural similarity of functionally important folds with non-sequential SSE connectivity. The performance for structure alignments with strictly sequential SSE connectivity is comparable to that of other structure alignment methods.\n\n\nCONCLUSION\nAs demonstrated for several applications, GANGSTA finds meaningful protein-structure alignments independent of the SSE connectivity. GANGSTA is able to detect structural similarity of protein folds that are assigned to different superfamilies but nevertheless possess similar structures and perform related functions, even if these proteins differ in SSE connectivity." }, { "pmid": "17683261", "title": "A parameterized algorithm for protein structure alignment.", "abstract": "This paper proposes a parameterized polynomial time approximation scheme (PTAS) for aligning two protein structures, in the case where one protein structure is represented by a contact map graph and the other by a contact map graph or a distance matrix. If the sequential order of alignment is not required, the time complexity is polynomial in the protein size and exponential with respect to two parameters D(u)/D(l) and D(c)/D(l), which usually can be treated as constants. In particular, D(u) is the distance threshold determining if two residues are in contact or not, D(c) is the maximally allowed distance between two matched residues after two proteins are superimposed, and D(l) is the minimum inter-residue distance in a typical protein. This result clearly demonstrates that the computational hardness of the contact map based protein structure alignment problem is related not to protein size but to several parameters modeling the problem. The result is achieved by decomposing the protein structure using tree decomposition and discretizing the rigid-body transformation space. Preliminary experimental results indicate that on a Linux PC, it takes from ten minutes to one hour to align two proteins with approximately 100 residues." }, { "pmid": "15531601", "title": "Non-sequential structure-based alignments reveal topology-independent core packing arrangements in proteins.", "abstract": "MOTIVATION\nProteins of the same class often share a secondary structure packing arrangement but differ in how the secondary structure units are ordered in the sequence. We find that proteins that share a common core also share local sequence-structure similarities, and these can be exploited to align structures with different topologies. In this study, segments from a library of local sequence-structure alignments were assembled hierarchically, enforcing the compactness and conserved inter-residue contacts but not sequential ordering. Previous structure-based alignment methods often ignore sequence similarity, local structural equivalence and compactness.\n\n\nRESULTS\nThe new program, SCALI (Structural Core ALIgnment), can efficiently find conserved packing arrangements, even if they are non-sequentially ordered in space. SCALI alignments conserve remote sequence similarity and contain fewer alignment errors. Clustering of our pairwise non-sequential alignments shows that recurrent packing arrangements exist in topologically different structures. For example, the three-layer sandwich domain architecture may be divided into four structural subclasses based on internal packing arrangements. 
These subclasses represent an intermediate level of structure classification, more general than topology, but more specific than architecture as defined in CATH. A strategy is presented for developing a set of predictive hidden Markov models based on multiple SCALI alignments." }, { "pmid": "15304646", "title": "Approximate protein structural alignment in polynomial time.", "abstract": "Alignment of protein structures is a fundamental task in computational molecular biology. Good structural alignments can help detect distant evolutionary relationships that are hard or impossible to discern from protein sequences alone. Here, we study the structural alignment problem as a family of optimization problems and develop an approximate polynomial-time algorithm to solve them. For a commonly used scoring function, the algorithm runs in O(n(10)/epsilon(6)) time, for globular protein of length n, and it detects alignments that score within an additive error of epsilon from all optima. Thus, we prove that this task is computationally feasible, although the method that we introduce is too slow to be a useful everyday tool. We argue that such approximate solutions are, in fact, of greater interest than exact ones because of the noisy nature of experimentally determined protein coordinates. The measurement of similarity between a pair of protein structures used by our algorithm involves the Euclidean distance between the structures (appropriately rigidly transformed). We show that an alternative approach, which relies on internal distance matrices, must incorporate sophisticated geometric ingredients if it is to guarantee optimality and run in polynomial time. We use these observations to visualize the scoring function for several real instances of the problem. Our investigations yield insights on the computational complexity of protein alignment under various scoring functions. These insights can be used in the design of scoring functions for which the optimum can be approximated efficiently and perhaps in the development of efficient algorithms for the multiple structural alignment problem." }, { "pmid": "11153094", "title": "Structure comparison and structure patterns.", "abstract": "This article investigates aspects of pairwise and multiple structure comparison, and the problem of automatically discover common patterns in a set of structures. Descriptions and representation of structures and patterns are described, as well as scoring and algorithms for comparison and discovery. A framework and nomenclature is developed for classifying different methods, and many of these are reviewed and placed into this framework." }, { "pmid": "15746292", "title": "Multiple flexible structure alignment using partial order graphs.", "abstract": "MOTIVATION\nExisting comparisons of protein structures are not able to describe structural divergence and flexibility in the structures being compared because they focus on identifying a common invariant core and ignore parts of the structures outside this core. Understanding the structural divergence and flexibility is critical for studying the evolution of functions and specificities of proteins.\n\n\nRESULTS\nA new method of multiple protein structure alignment, POSA (Partial Order Structure Alignment), was developed using a partial order graph representation of multiple alignments. POSA has two unique features: (1) identifies and classifies regions that are conserved only in a subset of input structures and (2) allows internal rearrangements in protein structures. 
POSA outperforms other programs in the cases where structural flexibilities exist and provides new insights by visualizing the mosaic nature of multiple structural alignments. POSA is an ideal tool for studying the variation of protein structures within diverse structural families.\n\n\nAVAILABILITY\nPOSA is freely available for academic users on a Web server at http://fatcat.burnham.org/POSA" }, { "pmid": "15701525", "title": "Comprehensive evaluation of protein structure alignment methods: scoring by geometric measures.", "abstract": "We report the largest and most comprehensive comparison of protein structural alignment methods. Specifically, we evaluate six publicly available structure alignment programs: SSAP, STRUCTAL, DALI, LSQMAN, CE and SSM by aligning all 8,581,970 protein structure pairs in a test set of 2930 protein domains specially selected from CATH v.2.4 to ensure sequence diversity. We consider an alignment good if it matches many residues, and the two substructures are geometrically similar. Even with this definition, evaluating structural alignment methods is not straightforward. At first, we compared the rates of true and false positives using receiver operating characteristic (ROC) curves with the CATH classification taken as a gold standard. This proved unsatisfactory in that the quality of the alignments is not taken into account: sometimes a method that finds less good alignments scores better than a method that finds better alignments. We correct this intrinsic limitation by using four different geometric match measures (SI, MI, SAS, and GSAS) to evaluate the quality of each structural alignment. With this improved analysis we show that there is a wide variation in the performance of different methods; the main reason for this is that it can be difficult to find a good structural alignment between two proteins even when such an alignment exists. We find that STRUCTAL and SSM perform best, followed by LSQMAN and CE. Our focus on the intrinsic quality of each alignment allows us to propose a new method, called \"Best-of-All\" that combines the best results of all methods. Many commonly used methods miss 10-50% of the good Best-of-All alignments. By putting existing structural alignments into proper perspective, our study allows better comparison of protein structures. By highlighting limitations of existing methods, it will spur the further development of better structural alignment methods. This will have significant biological implications now that structural comparison has come to play a central role in the analysis of experimental work on protein structure, protein function and protein evolution." }, { "pmid": "10964982", "title": "Protein structure alignment using environmental profiles.", "abstract": "A new protein structure alignment procedure is described. An initial alignment is made by comparing a one-dimensional list of primary, secondary and tertiary structural features (profiles) of two proteins, without explicitly considering the three-dimensional geometry of the structures. The alignment is then iteratively refined in the second step, in which new alignments are found by three-dimensional superposition of the structures based on the current alignment. This new procedure is fast enough to do all-against-all structural comparisons routinely. The procedure sometimes finds an alignment that suggests an evolutionary relationship and which is not normally obtained if only geometry is considered. 
All pair-wise comparisons were made among 3539 protein structural domains that represent all known protein structures. The resulting 3539 z-scores were used to cluster the proteins. The number of main clusters increased continuously as the z-cutoff was raised, but the number of multiple-member clusters showed a maximum at z-cutoff values of 5.0 and 5.5. When a z-cutoff value of 5.0 was used, the total number of main clusters was 2043, of which only 336 clusters had more than one member." }, { "pmid": "9071025", "title": "Comparison of protein structures using 3D profile alignment.", "abstract": "A novel method for protein structure comparison using 3D profile alignment is presented. The 3D profile is a position-dependent scoring matrix derived from three-dimensional structures and is basically used to estimate sequence-structure compatibility for prediction of protein structure. Our idea is to compare two 3D profiles using a dynamic programming algorithm to obtain optimal alignment and a similarity score between them. When the 3D profile of hemoglobin was compared with each of the profiles in the library, which contained 325 profiles of representative structures, all the profiles of other globins were detected with relatively high scores, and proteins in the same structural class followed the globins. Exhaustive comparison of 3D profiles in the library was also performed to depict protein relatedness in the structure space. Using multidimensional scaling, a planar projection of points in the protein structure space revealed an overall grouping in terms of structural classes, i.e., all-alpha, all-beta, alpha/beta, and alpha+beta. These results differ in implication from those obtained by the conventional structure-structure comparison method. Differences are discussed with respect to the structural divergence of proteins in the course of molecular evolution." }, { "pmid": "16488145", "title": "Flexible protein-protein docking.", "abstract": "Predicting the structure of protein-protein complexes using docking approaches is a difficult problem whose major challenges include identifying correct solutions, and properly dealing with molecular flexibility and conformational changes. Flexibility can be addressed at several levels: implicitly, by smoothing the protein surfaces or allowing some degree of interpenetration (soft docking) or by performing multiple docking runs from various conformations (cross or ensemble docking); or explicitly, by allowing sidechain and/or backbone flexibility. Although significant improvements have been achieved in the modeling of sidechains, methods for the explicit inclusion of backbone flexibility in docking are still being developed. A few novel approaches have emerged involving collective degrees of motion, multicopy representations and multibody docking, which should allow larger conformational changes to be modeled." }, { "pmid": "12520056", "title": "MolMovDB: analysis and visualization of conformational change and structural flexibility.", "abstract": "The Database of Macromolecular Movements (http://MolMovDB.org) is a collection of data and software pertaining to flexibility in protein and RNA structures. The database is organized into two parts. Firstly, a collection of 'morphs' of solved structures representing different states of a molecule provides quantitative data for flexibility and a number of graphical representations. Secondly, a classification of known motions according to type of conformational change (e.g. 
'hinged domain' or 'allosteric') incorporates textual annotation and information from the literature relating to the motion, linking together many of the morphs. A variety of subsets of the morphs are being developed for use in statistical analyses. In particular, for each subset it is possible to derive distributions of various motional quantities (e.g. maximum rotation) that can be used to place a specific motion in context as being typical or atypical for a given population. Over the past year, the database has been greatly expanded and enhanced to incorporate new structures and to improve the quality of data. The 'morph server', which enables users of the database to add new morphs either from their own research or the PDB, has also been enhanced to handle nucleic acid structures and multi-chain complexes." }, { "pmid": "16254179", "title": "Progress in modeling of protein structures and interactions.", "abstract": "The prediction of the structures and interactions of biological macromolecules at the atomic level and the design of new structures and interactions are critical tests of our understanding of the interatomic interactions that underlie molecular biology. Equally important, the capability to accurately predict and design macromolecular structures and interactions would streamline the interpretation of genome sequence information and allow the creation of macromolecules with new and useful functions. This review summarizes recent progress in modeling that suggests that we are entering an era in which high-resolution prediction and design will make increasingly important contributions to biology and medicine." }, { "pmid": "12112693", "title": "Flexible protein alignment and hinge detection.", "abstract": "Here we present a novel technique for the alignment of flexible proteins. The method does not require an a priori knowledge of the flexible hinge regions. The FlexProt algorithm simultaneously detects the hinge regions and aligns the rigid subparts of the molecules. Our technique is not sensitive to insertions and deletions. Numerous methods have been developed to solve rigid structural comparisons. Unlike FlexProt, all previously developed methods designed to solve the protein flexible alignment require an a priori knowledge of the hinge regions. The FlexProt method is based on 3-D pattern-matching algorithms combined with graph theoretic techniques. The algorithm is highly efficient. For example, it performs a structural comparison of a pair of proteins with 300 amino acids in about 7 s on a 400-MHz desktop PC. We provide experimental results obtained with this algorithm. First, we flexibly align pairs of proteins taken from the database of motions. These are extended by taking additional proteins from the same SCOP family. Next, we present some of the results obtained from exhaustive all-against-all flexible structural comparisons of 1329 SCOP family representatives. Our results include relatively high-scoring flexible structural alignments between the C-terminal merozoite surface protein vs. tissue factor; class II aminoacyl-tRNA synthase, histocompatibility antigen vs. neonatal FC receptor; tyrosine-protein kinase C-SRC vs. haematopoetic cell kinase (HCK); tyrosine-protein kinase C-SRC vs. titine protein (autoinhibited serine kinase domain); and tissue factor vs. hormone-binding protein. These are illustrated and discussed, showing the capabilities of this structural alignment algorithm, which allows un-predefined hinge-based motions." 
}, { "pmid": "15333456", "title": "SABmark--a benchmark for sequence alignment that covers the entire known fold space.", "abstract": "The Sequence Alignment Benchmark (SABmark) provides sets of multiple alignment problems derived from the SCOP classification. These sets, Twilight Zone and Superfamilies, both cover the entire known fold space using sequences with very low to low, and low to intermediate similarity, respectively. In addition, each set has an alternate version in which unalignable but apparently similar sequences are added to each problem." }, { "pmid": "3430611", "title": "A strategy for the rapid multiple alignment of protein sequences. Confidence levels from tertiary structure comparisons.", "abstract": "An algorithm is presented for the multiple alignment of protein sequences that is both accurate and rapid computationally. The approach is based on the conventional dynamic-programming method of pairwise alignment. Initially, two sequences are aligned, then the third sequence is aligned against the alignment of both sequences one and two. Similarly, the fourth sequence is aligned against one, two and three. This is repeated until all sequences have been aligned. Iteration is then performed to yield a final alignment. The accuracy of sequence alignment is evaluated from alignment of the secondary structures in a family of proteins. For the globins, the multiple alignment was on average 99% accurate compared to 90% for pairwise comparison of sequences. For the alignment of immunoglobulin constant and variable domains, the use of many sequences yielded an alignment of 63% average accuracy compared to 41% average for individual variable/constant alignments. The multiple alignment algorithm yields an assignment of disulphide connectivity in mammalian serotransferrin that is consistent with crystallographic data, whereas pairwise alignments give an alternative assignment." }, { "pmid": "15162494", "title": "A method for simultaneous alignment of multiple protein structures.", "abstract": "Here, we present MultiProt, a fully automated highly efficient technique to detect multiple structural alignments of protein structures. MultiProt finds the common geometrical cores between input molecules. To date, most methods for multiple alignment start from the pairwise alignment solutions. This may lead to a small overall alignment. In contrast, our method derives multiple alignments from simultaneous superpositions of input molecules. Further, our method does not require that all input molecules participate in the alignment. Actually, it efficiently detects high scoring partial multiple alignments for all possible number of molecules in the input. To demonstrate the power of MultiProt, we provide a number of case studies. First, we demonstrate known multiple alignments of protein structures to illustrate the performance of MultiProt. Next, we present various biological applications. These include: (1) a partial alignment of hinge-bent domains; (2) identification of functional groups of G-proteins; (3) analysis of binding sites; and (4) protein-protein interface alignment. Some applications preserve the sequence order of the residues in the alignment, whereas others are order-independent. It is their residue sequence order-independence that allows application of MultiProt to derive multiple alignments of binding sites and of protein-protein interfaces, making MultiProt an extremely useful structural tool." 
}, { "pmid": "16736488", "title": "MUSTANG: a multiple structural alignment algorithm.", "abstract": "Multiple structural alignment is a fundamental problem in structural genomics. In this article, we define a reliable and robust algorithm, MUSTANG (MUltiple STructural AligNment AlGorithm), for the alignment of multiple protein structures. Given a set of protein structures, the program constructs a multiple alignment using the spatial information of the C(alpha) atoms in the set. Broadly based on the progressive pairwise heuristic, this algorithm gains accuracy through novel and effective refinement phases. MUSTANG reports the multiple sequence alignment and the corresponding superposition of structures. Alignments generated by MUSTANG are compared with several handcurated alignments in the literature as well as with the benchmark alignments of 1033 alignment families from the HOMSTRAD database. The performance of MUSTANG was compared with DALI at a pairwise level, and with other multiple structural alignment tools such as POSA, CE-MC, MALECON, and MultiProt. MUSTANG performs comparably to popular pairwise and multiple structural alignment tools for closely related proteins, and performs more reliably than other multiple structural alignment methods on hard data sets containing distantly related proteins or proteins that show conformational changes." }, { "pmid": "9504803", "title": "SWISS-MODEL and the Swiss-PdbViewer: an environment for comparative protein modeling.", "abstract": "Comparative protein modeling is increasingly gaining interest since it is of great assistance during the rational design of mutagenesis experiments. The availability of this method, and the resulting models, has however been restricted by the availability of expensive computer hardware and software. To overcome these limitations, we have developed an environment for comparative protein modeling that consists of SWISS-MODEL, a server for automated comparative protein modeling and of the SWISS-PdbViewer, a sequence to structure workbench. The Swiss-PdbViewer not only acts as a client for SWISS-MODEL, but also provides a large selection of structure analysis and display tools. In addition, we provide the SWISS-MODEL Repository, a database containing more than 3500 automatically generated protein models. By making such tools freely available to the scientific community, we hope to increase the use of protein structures and models in the process of experiment design." }, { "pmid": "8019422", "title": "Enlarged representative set of protein structures.", "abstract": "To reduce redundancy in the Protein Data Bank of 3D protein structures, which is caused by many homologous proteins in the data bank, we have selected a representative set of structures. The selection algorithm was designed to (1) select as many nonhomologous structures as possible, and (2) to select structures of good quality. The representative set may reduce time and effort in statistical analyses." }, { "pmid": "21869139", "title": "Quad-trees, oct-trees, and k-trees: a generalized approach to recursive decomposition of euclidean space.", "abstract": "K-trees are developed as a K-dimensional analog of quad-trees and oct-trees. K-trees can be used for modeling K-dimensional data. A fast algorithm is given for finding the boundary size of a K-dimensional object represented by a K-tree. 
For K considered as constant, the algorithm provides a method for computing the perimeter of a quad-tree encoded image or the surface area of an oct-tree encoded object in worst case time proportional to the number of nodes in the tree. This improves upon the expected-case linear-time method of Samet [10] for the perimeter problem. Our method has been implemented in Pascal, and a computational example is given." } ]
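The quad-tree/oct-tree/k-tree abstract above describes a recursive decomposition of space whose algorithms run in time proportional to the number of tree nodes. As an illustration only (the class and function names are made up, and this is not the cited paper's Pascal implementation), a minimal Python sketch of the two-dimensional case, a region quadtree built over a square binary image, might look like this:

# Minimal region-quadtree sketch (hypothetical names, illustrative only).
# A 2^n x 2^n binary image is split recursively into four quadrants until
# each block is uniform, mirroring the K=2 case of the k-tree decomposition.
import numpy as np

class QuadNode:
    def __init__(self, value=None, children=None):
        self.value = value        # 0 or 1 for a uniform leaf, None for internal
        self.children = children  # list of four QuadNode, or None for a leaf

def build_quadtree(img):
    """Recursively decompose a square binary image into a quadtree."""
    if img.min() == img.max():                    # uniform block -> leaf
        return QuadNode(value=int(img[0, 0]))
    h = img.shape[0] // 2
    quads = [img[:h, :h], img[:h, h:], img[h:, :h], img[h:, h:]]
    return QuadNode(children=[build_quadtree(q) for q in quads])

def count_nodes(node):
    """Total number of nodes in the tree."""
    if node.children is None:
        return 1
    return 1 + sum(count_nodes(c) for c in node.children)

if __name__ == "__main__":
    img = np.zeros((8, 8), dtype=int)
    img[2:6, 2:6] = 1                             # a 4x4 foreground square
    print("nodes in quadtree:", count_nodes(build_quadtree(img)))

Algorithms such as the boundary-size computation described in the abstract traverse this structure, which is why their cost is stated in terms of the number of nodes rather than the number of pixels.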
PLoS Computational Biology
18421371
PMC2275314
10.1371/journal.pcbi.1000054
Predicting Co-Complexed Protein Pairs from Heterogeneous Data
Proteins do not carry out their functions alone. Instead, they often act by participating in macromolecular complexes and play different functional roles depending on the other members of the complex. It is therefore interesting to identify co-complex relationships. Although protein complexes can be identified in a high-throughput manner by experimental technologies such as affinity purification coupled with mass spectrometry (APMS), these large-scale datasets often suffer from high false positive and false negative rates. Here, we present a computational method that predicts co-complexed protein pair (CCPP) relationships using kernel methods from heterogeneous data sources. We show that a diffusion kernel based on random walks on the full network topology yields good performance in predicting CCPPs from protein interaction networks. In the setting of direct ranking, a diffusion kernel performs much better than the mutual clustering coefficient. In the setting of SVM classifiers, a diffusion kernel performs much better than a linear kernel. We also show that combination of complementary information improves the performance of our CCPP recognizer. A summation of three diffusion kernels based on two-hybrid, APMS, and genetic interaction networks and three sequence kernels achieves better performance than the sequence kernels or diffusion kernels alone. Inclusion of additional features achieves a still better ROC50 of 0.937. Assuming a negative-to-positive ratio of 600∶1, the final classifier achieves 89.3% coverage at an estimated false discovery rate of 10%. Finally, we applied our prediction method to two recently described APMS datasets. We find that our predicted positives are highly enriched with CCPPs that are identified by both datasets, suggesting that our method successfully identifies true CCPPs. An SVM classifier trained from heterogeneous data sources provides accurate predictions of CCPPs in yeast. This computational method thereby provides an inexpensive method for identifying protein complexes that extends and complements high-throughput experimental data.
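A central ingredient of the method described above is a diffusion kernel computed from the topology of an interaction network. As a rough illustration only (the toy edge list and the diffusion parameter beta are assumptions, not the paper's data or code), such a kernel can be computed as the matrix exponential of the negative graph Laplacian:

# Sketch of a graph diffusion kernel: K = expm(-beta * L), where L = D - A is
# the Laplacian of the interaction network. K[i, j] can be read as a similarity
# between proteins i and j induced by random walks over the network topology.
# The five-node edge list and beta below are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(adjacency, beta=1.0):
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    return expm(-beta * laplacian)

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2)]   # toy interaction network
A = np.zeros((5, 5))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

K = diffusion_kernel(A, beta=0.5)
print(np.round(K, 3))

Kernels computed in this way from different networks (two-hybrid, APMS, genetic interactions) can simply be summed, since a sum of valid kernels is again a valid kernel, which is the combination strategy the abstract refers to.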
Comparison with Related Work
Qi et al. [26] recently performed an extensive study comparing multiple methods on the prediction of complex co-memberships, physical interactions, and co-pathway relationships. The study concludes that, among various classification algorithms, random forests perform best, with random forest-based k-nearest neighbor and SVMs following closely. We applied our kernel methods to their gold standard data set following their learning procedure. A total of 30,000 protein pairs were randomly picked as the training set, 50 of them from the positive data set and 29,950 from protein pairs not in the positive data set. Another 30,000 protein pairs, again including 50 pairs randomly picked from the positive data set, were drawn at random from the remaining protein pairs as the test set. To save time, this training and testing procedure was repeated 5 times rather than the 25 times used by Qi et al. Our approach with both the RBF and TPPK kernels achieves a mean ROC50 of 0.69 with a standard deviation of 0.05, slightly better than the best result (0.68) reported by Qi et al. Because Qi et al. published their study before the two recent large-scale APMS studies [21],[22] became available, we also removed these two data sets from the APMS network and tested again on the data set of Qi et al.; the mean ROC50 is 0.68 with a standard deviation of 0.05, similar to the best performance reported by Qi et al.
Qi et al. simulate a realistic scenario by using a negative-to-positive ratio of 600∶1 in the training set. However, in their setting, each classifier learns from only 50 positive pairs. Because of this relatively small number of positives in the training set, the resulting classifier will likely not generalize as well as a method that learns from all available positive pairs. This is why we instead chose to train on a data set containing all available positive training pairs and a negative-to-positive ratio of 10∶1, and to simulate the realistic scenario by magnifying each false positive by a factor of 60, as described above.
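For context, two ingredients referred to in the comparison can be sketched briefly: the TPPK construction, which turns a base kernel on single proteins into a kernel on protein pairs, and the ROC50 score used to compare methods. The sketch below is a minimal illustration under assumed inputs (an RBF base kernel and made-up feature vectors), not the study's implementation:

# Sketch of the TPPK pair kernel and an ROC50 evaluator (illustrative only).
# TPPK: given a base kernel k on single proteins, the kernel between pairs
# (a, b) and (c, d) is k(a,c)*k(b,d) + k(a,d)*k(b,c); summing over both
# matchings makes the score independent of the order of proteins in a pair.
import numpy as np

def rbf(x, y, gamma=0.1):
    # Gaussian (RBF) base kernel on single-protein feature vectors.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def tppk(pair1, pair2, base_kernel=rbf):
    a, b = pair1
    c, d = pair2
    return base_kernel(a, c) * base_kernel(b, d) + base_kernel(a, d) * base_kernel(b, c)

def roc50(scores, labels):
    # Area under the ROC curve up to the first 50 false positives,
    # normalized to lie in [0, 1]; a perfect ranking scores 1.0.
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)
    total_pos = int(labels.sum())
    tp = fp = area = 0
    for idx in order:
        if labels[idx] == 1:
            tp += 1
        else:
            fp += 1
            area += tp                    # positives ranked above this negative
            if fp == 50:
                break
    if fp < 50:                           # ran out of items: extend at final TPR
        area += tp * (50 - fp)
    return area / (50 * total_pos) if total_pos else 0.0

# Tiny usage example with made-up 4-dimensional protein features.
proteins = [np.random.rand(4) for _ in range(4)]
print(tppk((proteins[0], proteins[1]), (proteins[2], proteins[3])))

Because co-complex membership is an unordered relationship, the symmetry of the TPPK construction is what allows an SVM trained on pairs to ignore which protein of a pair is listed first.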
[ "11518523", "12368246", "14555619", "15319262", "12614624", "11933068", "10427000", "15961482", "14564010", "12676999", "15090078", "15491499", "17160063", "10655498", "10688190", "11805826", "16429126", "16554755", "14764870", "15608220", "10592176", "16450363", "15451510", "15130933", "9223186", "16381927", "9381177", "9784122", "9843569", "11102521", "10929718", "12399584", "15173116", "9254694", "11125103", "14562095", "10802651", "14759368", "17200106", "16451194", "15297299", "10377396", "11102353", "10688664", "15477388", "10438536" ]
[ { "pmid": "11518523", "title": "Correlated sequence-signatures as markers of protein-protein interaction.", "abstract": "As protein-protein interaction is intrinsic to most cellular processes, the ability to predict which proteins in the cell interact can aid significantly in identifying the function of newly discovered proteins, and in understanding the molecular networks they participate in. Here we demonstrate that characteristic pairs of sequence-signatures can be learned from a database of experimentally determined interacting proteins, where one protein contains the one sequence-signature and its interacting partner contains the other sequence-signature. The sequence-signatures that recur in concert in various pairs of interacting proteins are termed correlated sequence-signatures, and it is proposed that they can be used for predicting putative pairs of interacting partners in the cell. We demonstrate the potential of this approach on a comprehensive database of experimentally determined pairs of interacting proteins in the yeast Saccharomyces cerevisiae. The proteins in this database have been characterized by their sequence-signatures, as defined by the InterPro classification. A statistical analysis performed on all possible combinations of sequence-signature pairs has identified those pairs that are over-represented in the database of yeast interacting proteins. It is demonstrated how the use of the correlated sequence-signatures as identifiers of interacting proteins can reduce significantly the search space, and enable directed experimental interaction screens." }, { "pmid": "12368246", "title": "Inferring domain-domain interactions from protein-protein interactions.", "abstract": "The interaction between proteins is one of the most important features of protein functions. Behind protein-protein interactions there are protein domains interacting physically with one another to perform the necessary functions. Therefore, understanding protein interactions at the domain level gives a global view of the protein interaction network, and possibly of protein functions. Two research groups used yeast two-hybrid assays to generate 5719 interactions between proteins of the yeast Saccharomyces cerevisiae. This allows us to study the large-scale conserved patterns of interactions between protein domains. Using evolutionarily conserved domains defined in a protein-domain database called PFAM (http://PFAM.wustl.edu), we apply a Maximum Likelihood Estimation method to infer interacting domains that are consistent with the observed protein-protein interactions. We estimate the probabilities of interactions between every pair of domains and measure the accuracies of our predictions at the protein level. Using the inferred domain-domain interactions, we predict interactions between proteins. Our predicted protein-protein interactions have a significant overlap with the protein-protein interactions (MIPS: http://mips.gfs.de) obtained by methods other than the two-hybrid assays. The mean correlation coefficient of the gene expression profiles for our predicted interaction pairs is significantly higher than that for random pairs. Our method has shown robustness in analyzing incomplete data sets and dealing with various experimental errors. We found several novel protein-protein interactions such as RPS0A interacting with APG17 and TAF40 interacting with SPT3, which are consistent with the functions of the proteins." 
}, { "pmid": "14555619", "title": "Learning to predict protein-protein interactions from protein sequences.", "abstract": "In order to understand the molecular machinery of the cell, we need to know about the multitude of protein-protein interactions that allow the cell to function. High-throughput technologies provide some data about these interactions, but so far that data is fairly noisy. Therefore, computational techniques for predicting protein-protein interactions could be of significant value. One approach to predicting interactions in silico is to produce from first principles a detailed model of a candidate interaction. We take an alternative approach, employing a relatively simple model that learns dynamically from a large collection of data. In this work, we describe an attraction-repulsion model, in which the interaction between a pair of proteins is represented as the sum of attractive and repulsive forces associated with small, domain- or motif-sized features along the length of each protein. The model is discriminative, learning simultaneously from known interactions and from pairs of proteins that are known (or suspected) not to interact. The model is efficient to compute and scales well to very large collections of data. In a cross-validated comparison using known yeast interactions, the attraction-repulsion method performs better than several competing techniques." }, { "pmid": "15319262", "title": "Predicting protein-protein interactions using signature products.", "abstract": "MOTIVATION\nProteome-wide prediction of protein-protein interaction is a difficult and important problem in biology. Although there have been recent advances in both experimental and computational methods for predicting protein-protein interactions, we are only beginning to see a confluence of these techniques. In this paper, we describe a very general, high-throughput method for predicting protein-protein interactions. Our method combines a sequence-based description of proteins with experimental information that can be gathered from any type of protein-protein interaction screen. The method uses a novel description of interacting proteins by extending the signature descriptor, which has demonstrated success in predicting peptide/protein binding interactions for individual proteins. This descriptor is extended to protein pairs by taking signature products. The signature product is implemented within a support vector machine classifier as a kernel function.\n\n\nRESULTS\nWe have applied our method to publicly available yeast, Helicobacter pylori, human and mouse datasets. We used the yeast and H.pylori datasets to verify the predictive ability of our method, achieving from 70 to 80% accuracy rates using 10-fold cross-validation. We used the human and mouse datasets to demonstrate that our method is capable of cross-species prediction. Finally, we reused the yeast dataset to explore the ability of our algorithm to predict domains.\n\n\nCONTACT\[email protected]" }, { "pmid": "12614624", "title": "Exploiting the co-evolution of interacting proteins to discover interaction specificity.", "abstract": "Protein interactions are fundamental to the functioning of cells, and high throughput experimental and computational strategies are sought to map interactions. Predicting interaction specificity, such as matching members of a ligand family to specific members of a receptor family, is largely an unsolved problem. 
Here we show that by using evolutionary relationships within such families, it is possible to predict their physical interaction specificities. We introduce the computational method of matrix alignment for finding the optimal alignment between protein family similarity matrices. A second method, 3D embedding, allows visualization of interacting partners via spatial representation of the protein families. These methods essentially align phylogenetic trees of interacting protein families to define specific interaction partners. Prediction accuracy depends strongly on phylogenetic tree complexity, as measured with information theoretic methods. These results, along with simulations of protein evolution, suggest a model for the evolution of interacting protein families in which interaction partners are duplicated in coupled processes. Using these methods, it is possible to successfully find protein interaction specificities, as demonstrated for >18 protein families." }, { "pmid": "11933068", "title": "In silico two-hybrid system for the selection of physically interacting protein pairs.", "abstract": "Deciphering the interaction links between proteins has become one of the main tasks of experimental and bioinformatic methodologies. Reconstruction of complex networks of interactions in simple cellular systems by integrating predicted interaction networks with available experimental data is becoming one of the most demanding needs in the postgenomic era. On the basis of the study of correlated mutations in multiple sequence alignments, we propose a new method (in silico two-hybrid, i2h) that directly addresses the detection of physically interacting protein pairs and identifies the most likely sequence regions involved in the interactions. We have applied the system to several test sets, showing that it can discriminate between true and false interactions in a significant number of cases. We have also analyzed a large collection of E. coli protein pairs as a first step toward the virtual reconstruction of its complete interaction network." }, { "pmid": "10427000", "title": "Detecting protein function and protein-protein interactions from genome sequences.", "abstract": "A computational method is proposed for inferring protein interactions from genome sequences on the basis of the observation that some pairs of interacting proteins have homologs in another organism fused into a single protein chain. Searching sequences from many genomes revealed 6809 such putative protein-protein interactions in Escherichia coli and 45,502 in yeast. Many members of these pairs were confirmed as functionally related; computational filtering further enriches for interactions. Some proteins have links to several other proteins; these coupled links appear to represent functional interactions such as complexes or pathways. Experimentally confirmed interacting pairs are documented in a Database of Interacting Proteins." 
}, { "pmid": "15961482", "title": "Kernel methods for predicting protein-protein interactions.", "abstract": "MOTIVATION\nDespite advances in high-throughput methods for discovering protein-protein interactions, the interaction networks of even well-studied model organisms are sketchy at best, highlighting the continued need for computational methods to help direct experimentalists in the search for novel interactions.\n\n\nRESULTS\nWe present a kernel method for predicting protein-protein interactions using a combination of data sources, including protein sequences, Gene Ontology annotations, local properties of the network, and homologous interactions in other species. Whereas protein kernels proposed in the literature provide a similarity between single proteins, prediction of interactions requires a kernel between pairs of proteins. We propose a pairwise kernel that converts a kernel between single proteins into a kernel between pairs of proteins, and we illustrate the kernel's effectiveness in conjunction with a support vector machine classifier. Furthermore, we obtain improved performance by combining several sequence-based kernels based on k-mer frequency, motif and domain content and by further augmenting the pairwise sequence kernel with features that are based on other sources of data. We apply our method to predict physical interactions in yeast using data from the BIND database. At a false positive rate of 1% the classifier retrieves close to 80% of a set of trusted interactions. We thus demonstrate the ability of our method to make accurate predictions despite the sizeable fraction of false positives that are known to exist in interaction databases.\n\n\nAVAILABILITY\nThe classification experiments were performed using PyML available at http://pyml.sourceforge.net. Data are available at: http://noble.gs.washington.edu/proj/sppi." }, { "pmid": "14564010", "title": "A Bayesian networks approach for predicting protein-protein interactions from genomic data.", "abstract": "We have developed an approach using Bayesian networks to predict protein-protein interactions genome-wide in yeast. Our method naturally weights and combines into reliable predictions genomic features only weakly associated with interaction (e.g., messenger RNAcoexpression, coessentiality, and colocalization). In addition to de novo predictions, it can integrate often noisy, experimental interaction data sets. We observe that at given levels of sensitivity, our predictions are more accurate than the existing high-throughput experimental data sets. We validate our predictions with TAP (tandem affinity purification) tagging experiments. Our analysis, which gives a comprehensive view of yeast interactions, is available at genecensus.org/intint." }, { "pmid": "12676999", "title": "Assessing experimentally derived interactions in a small world.", "abstract": "Experimentally determined networks are susceptible to errors, yet important inferences can still be drawn from them. Many real networks have also been shown to have the small-world network properties of cohesive neighborhoods and short average distances between vertices. Although much analysis has been done on small-world networks, small-world properties have not previously been used to improve our understanding of individual edges in experimentally derived graphs. Here we focus on a small-world network derived from high-throughput (and error-prone) protein-protein interaction experiments. 
We exploit the neighborhood cohesiveness property of small-world networks to assess confidence for individual protein-protein interactions. By ascertaining how well each protein-protein interaction (edge) fits the pattern of a small-world network, we stratify even those edges with identical experimental evidence. This result promises to improve the quality of inference from protein-protein interaction networks in particular and small-world networks in general." }, { "pmid": "15090078", "title": "Predicting co-complexed protein pairs using genomic and proteomic data integration.", "abstract": "BACKGROUND\nIdentifying all protein-protein interactions in an organism is a major objective of proteomics. A related goal is to know which protein pairs are present in the same protein complex. High-throughput methods such as yeast two-hybrid (Y2H) and affinity purification coupled with mass spectrometry (APMS) have been used to detect interacting proteins on a genomic scale. However, both Y2H and APMS methods have substantial false-positive rates. Aside from high-throughput interaction screens, other gene- or protein-pair characteristics may also be informative of physical interaction. Therefore it is desirable to integrate multiple datasets and utilize their different predictive value for more accurate prediction of co-complexed relationship.\n\n\nRESULTS\nUsing a supervised machine learning approach--probabilistic decision tree, we integrated high-throughput protein interaction datasets and other gene- and protein-pair characteristics to predict co-complexed pairs (CCP) of proteins. Our predictions proved more sensitive and specific than predictions based on Y2H or APMS methods alone or in combination. Among the top predictions not annotated as CCPs in our reference set (obtained from the MIPS complex catalogue), a significant fraction was found to physically interact according to a separate database (YPD, Yeast Proteome Database), and the remaining predictions may potentially represent unknown CCPs.\n\n\nCONCLUSIONS\nWe demonstrated that the probabilistic decision tree approach can be successfully used to predict co-complexed protein (CCP) pairs from other characteristics. Our top-scoring CCP predictions provide testable hypotheses for experimental validation." }, { "pmid": "15491499", "title": "Information assessment on predicting protein-protein interactions.", "abstract": "BACKGROUND\nIdentifying protein-protein interactions is fundamental for understanding the molecular machinery of the cell. Proteome-wide studies of protein-protein interactions are of significant value, but the high-throughput experimental technologies suffer from high rates of both false positive and false negative predictions. In addition to high-throughput experimental data, many diverse types of genomic data can help predict protein-protein interactions, such as mRNA expression, localization, essentiality, and functional annotation. Evaluations of the information contributions from different evidences help to establish more parsimonious models with comparable or better prediction accuracy, and to obtain biological insights of the relationships between protein-protein interactions and other genomic information.\n\n\nRESULTS\nOur assessment is based on the genomic features used in a Bayesian network approach to predict protein-protein interactions genome-wide in yeast. 
In the special case, when one does not have any missing information about any of the features, our analysis shows that there is a larger information contribution from the functional-classification than from expression correlations or essentiality. We also show that in this case alternative models, such as logistic regression and random forest, may be more effective than Bayesian networks for predicting interactions.\n\n\nCONCLUSIONS\nIn the restricted problem posed by the complete-information subset, we identified that the MIPS and Gene Ontology (GO) functional similarity datasets as the dominating information contributors for predicting the protein-protein interactions under the framework proposed by Jansen et al. Random forests based on the MIPS and GO information alone can give highly accurate classifications. In this particular subset of complete information, adding other genomic data does little for improving predictions. We also found that the data discretizations used in the Bayesian methods decreased classification performance." }, { "pmid": "17160063", "title": "What is a support vector machine?", "abstract": "Support vector machines (SVMs) are becoming popular in a wide variety of biological applications. But, what exactly are SVMs and how do they work? And what are their most promising applications in the life sciences?" }, { "pmid": "10655498", "title": "Toward a protein-protein interaction map of the budding yeast: A comprehensive system to examine two-hybrid interactions in all possible combinations between the yeast proteins.", "abstract": "Protein-protein interactions play pivotal roles in various aspects of the structural and functional organization of the cell, and their complete description is indispensable to thorough understanding of the cell. As an approach toward this goal, here we report a comprehensive system to examine two-hybrid interactions in all of the possible combinations between proteins of Saccharomyces cerevisiae. We cloned all of the yeast ORFs individually as a DNA-binding domain fusion (\"bait\") in a MATa strain and as an activation domain fusion (\"prey\") in a MATalpha strain, and subsequently divided them into pools, each containing 96 clones. These bait and prey clone pools were systematically mated with each other, and the transformants were subjected to strict selection for the activation of three reporter genes followed by sequence tagging. Our initial examination of approximately 4 x 10(6) different combinations, constituting approximately 10% of the total to be tested, has revealed 183 independent two-hybrid interactions, more than half of which are entirely novel. Notably, the obtained binary data allow us to extract more complex interaction networks, including the one that may explain a currently unsolved mechanism for the connection between distinct steps of vesicular transport. The approach described here thus will provide many leads for integration of various cellular functions and serve as a major driving force in the completion of the protein-protein interaction map." }, { "pmid": "10688190", "title": "A comprehensive analysis of protein-protein interactions in Saccharomyces cerevisiae.", "abstract": "Two large-scale yeast two-hybrid screens were undertaken to identify protein-protein interactions between full-length open reading frames predicted from the Saccharomyces cerevisiae genome sequence. 
In one approach, we constructed a protein array of about 6,000 yeast transformants, with each transformant expressing one of the open reading frames as a fusion to an activation domain. This array was screened by a simple and automated procedure for 192 yeast proteins, with positive responses identified by their positions in the array. In a second approach, we pooled cells expressing one of about 6,000 activation domain fusions to generate a library. We used a high-throughput screening procedure to screen nearly all of the 6,000 predicted yeast proteins, expressed as Gal4 DNA-binding domain fusion proteins, against the library, and characterized positives by sequence analysis. These approaches resulted in the detection of 957 putative interactions involving 1,004 S. cerevisiae proteins. These data reveal interactions that place functionally unclassified proteins in a biological context, interactions between proteins involved in the same biological function, and interactions that link biological functions together into larger cellular processes. The results of these screens are shown here." }, { "pmid": "11805826", "title": "Functional organization of the yeast proteome by systematic analysis of protein complexes.", "abstract": "Most cellular processes are carried out by multiprotein complexes. The identification and analysis of their components provides insight into how the ensemble of expressed proteins (proteome) is organized into functional units. We used tandem-affinity purification (TAP) and mass spectrometry in a large-scale approach to characterize multiprotein complexes in Saccharomyces cerevisiae. We processed 1,739 genes, including 1,143 human orthologues of relevance to human biology, and purified 589 protein assemblies. Bioinformatic analysis of these assemblies defined 232 distinct multiprotein complexes and proposed new cellular roles for 344 proteins, including 231 proteins with no previous functional annotation. Comparison of yeast and human complexes showed that conservation across species extends from single proteins to their molecular environment. Our analysis provides an outline of the eukaryotic proteome as a network of protein complexes at a level of organization beyond binary interactions. This higher-order map contains fundamental biological information and offers the context for a more reasoned and informed approach to drug discovery." }, { "pmid": "16429126", "title": "Proteome survey reveals modularity of the yeast cell machinery.", "abstract": "Protein complexes are key molecular entities that integrate multiple gene products to perform cellular functions. Here we report the first genome-wide screen for complexes in an organism, budding yeast, using affinity purification and mass spectrometry. Through systematic tagging of open reading frames (ORFs), the majority of complexes were purified several times, suggesting screen saturation. The richness of the data set enabled a de novo characterization of the composition and organization of the cellular machinery. The ensemble of cellular proteins partitions into 491 complexes, of which 257 are novel, that differentially combine with additional attachment proteins or protein modules to enable a diversification of potential functions. Support for this modular organization of the proteome comes from integration with available data on expression, localization, function, evolutionary conservation, protein structure and binary interactions. 
This study provides the largest collection of physically determined eukaryotic cellular machines so far and a platform for biological data integration and modelling." }, { "pmid": "16554755", "title": "Global landscape of protein complexes in the yeast Saccharomyces cerevisiae.", "abstract": "Identification of protein-protein interactions often provides insight into protein function, and many cellular processes are performed by stable protein complexes. We used tandem affinity purification to process 4,562 different tagged proteins of the yeast Saccharomyces cerevisiae. Each preparation was analysed by both matrix-assisted laser desorption/ionization-time of flight mass spectrometry and liquid chromatography tandem mass spectrometry to increase coverage and accuracy. Machine learning was used to integrate the mass spectrometry scores and assign probabilities to the protein-protein interactions. Among 4,087 different proteins identified with high confidence by mass spectrometry from 2,357 successful purifications, our core data set (median precision of 0.69) comprises 7,123 protein-protein interactions involving 2,708 proteins. A Markov clustering algorithm organized these interactions into 547 protein complexes averaging 4.9 subunits per complex, about half of them absent from the MIPS database, as well as 429 additional interactions between pairs of complexes. The data (all of which are available online) will help future studies on individual proteins as well as functional genomics and systems biology." }, { "pmid": "14764870", "title": "Global mapping of the yeast genetic interaction network.", "abstract": "A genetic interaction network containing approximately 1000 genes and approximately 4000 interactions was mapped by crossing mutations in 132 different query genes into a set of approximately 4700 viable gene yeast deletion mutants and scoring the double mutant progeny for fitness defects. Network connectivity was predictive of function because interactions often occurred among functionally related genes, and similar patterns of interactions tended to identify components of the same pathway. The genetic network exhibited dense local neighborhoods; therefore, the position of a gene on a partially mapped network is predictive of other genetic interactions. Because digenic interactions are common in yeast, similar networks may underlie the complex genetics associated with inherited phenotypes in other organisms." }, { "pmid": "15608220", "title": "The Yeast Resource Center Public Data Repository.", "abstract": "The Yeast Resource Center Public Data Repository (YRC PDR) serves as a single point of access for the experimental data produced from many collaborations typically studying Saccharomyces cerevisiae (baker's yeast). The experimental data include large amounts of mass spectrometry results from protein co-purification experiments, yeast two-hybrid interaction experiments, fluorescence microscopy images and protein structure predictions. All of the data are accessible via searching by gene or protein name, and are available on the Web at http://www.yeastrc.org/pdr/." }, { "pmid": "10592176", "title": "MIPS: a database for genomes and protein sequences.", "abstract": "The Munich Information Center for Protein Sequences (MIPS-GSF), Martinsried, near Munich, Germany, continues its longstanding tradition to develop and maintain high quality curated genome databases. 
In addition, efforts have been intensified to cover the wealth of complete genome sequences in a systematic, comprehensive form. Bioinformatics, supporting national as well as European sequencing and functional analysis projects, has resulted in several up-to-date genome-oriented databases. This report describes growing databases reflecting the progress of sequencing the Arabidopsis thaliana (MATDB) and Neurospora crassa genomes (MNCDB), the yeast genome database (MYGD) extended by functional analysis data, the database of annotated human EST-clusters (HIB) and the database of the complete cDNA sequences from the DHGP (German Human Genome Project). It also contains information on the up-to-date database of complete genomes (PEDANT), the classification of protein sequences (ProtFam) and the collection of protein sequence data within the framework of the PIR-International Protein Sequence Database. These databases can be accessed through the MIPS WWW server (http://www.mips.biochem.mpg.de)." }, { "pmid": "16450363", "title": "Evaluation of different biological data and computational classification methods for use in protein interaction prediction.", "abstract": "Protein-protein interactions play a key role in many biological systems. High-throughput methods can directly detect the set of interacting proteins in yeast, but the results are often incomplete and exhibit high false-positive and false-negative rates. Recently, many different research groups independently suggested using supervised learning methods to integrate direct and indirect biological data sources for the protein interaction prediction task. However, the data sources, approaches, and implementations varied. Furthermore, the protein interaction prediction task itself can be subdivided into prediction of (1) physical interaction, (2) co-complex relationship, and (3) pathway co-membership. To investigate systematically the utility of different data sources and the way the data is encoded as features for predicting each of these types of protein interactions, we assembled a large set of biological features and varied their encoding for use in each of the three prediction tasks. Six different classifiers were used to assess the accuracy in predicting interactions, Random Forest (RF), RF similarity-based k-Nearest-Neighbor, Naïve Bayes, Decision Tree, Logistic Regression, and Support Vector Machine. For all classifiers, the three prediction tasks had different success rates, and co-complex prediction appears to be an easier task than the other two. Independently of prediction task, however, the RF classifier consistently ranked as one of the top two classifiers for all combinations of feature sets. Therefore, we used this classifier to study the importance of different biological datasets. First, we used the splitting function of the RF tree structure, the Gini index, to estimate feature importance. Second, we determined classification accuracy when only the top-ranking features were used as an input in the classifier. We find that the importance of different features depends on the specific prediction task and the way they are encoded. Strikingly, gene expression is consistently the most important feature for all three prediction tasks, while the protein interactions identified using the yeast-2-hybrid system were not among the top-ranking features under any condition." 
}, { "pmid": "15451510", "title": "Analyzing protein function on a genomic scale: the importance of gold-standard positives and negatives for network prediction.", "abstract": "The concept of 'protein function' is rather 'fuzzy' because it is often based on whimsical terms or contradictory nomenclature. This currently presents a challenge for functional genomics because precise definitions are essential for most computational approaches. Addressing this challenge, the notion of networks between biological entities (including molecular and genetic interaction networks as well as transcriptional regulatory relationships) potentially provides a unifying language suitable for the systematic description of protein function. Predicting the edges in protein networks requires reference sets of examples with known outcome (that is, 'gold standards'). Such reference sets should ideally include positive examples - as is now widely appreciated - but also, equally importantly, negative ones. Moreover, it is necessary to consider the expected relative occurrence of positives and negatives because this affects the misclassification rates of experiments and computational predictions. For instance, a reason why genome-wide, experimental protein-protein interaction networks have high inaccuracies is that the prior probability of finding interactions (positives) rather than non-interacting protein pairs (negatives) in unbiased screens is very small. These problems can be addressed by constructing well-defined sets of non-interacting proteins from subcellular localization data, which allows computing the probability of interactions based on evidence from multiple datasets." }, { "pmid": "15130933", "title": "A statistical framework for genomic data fusion.", "abstract": "MOTIVATION\nDuring the past decade, the new focus on genomics has highlighted a particular challenge: to integrate the different views of the genome that are provided by various types of experimental data.\n\n\nRESULTS\nThis paper describes a computational framework for integrating and drawing inferences from a collection of genome-wide measurements. Each dataset is represented via a kernel function, which defines generalized similarity relationships between pairs of entities, such as genes or proteins. The kernel representation is both flexible and efficient, and can be applied to many different types of data. Furthermore, kernel functions derived from different types of data can be combined in a straightforward fashion. Recent advances in the theory of kernel methods have provided efficient algorithms to perform such combinations in a way that minimizes a statistical loss function. These methods exploit semidefinite programming techniques to reduce the problem of finding optimizing kernel combinations to a convex optimization problem. Computational experiments performed using yeast genome-wide datasets, including amino acid sequences, hydropathy profiles, gene expression data and known protein-protein interactions, demonstrate the utility of this approach. 
A statistical learning algorithm trained from all of these data to recognize particular classes of proteins--membrane proteins and ribosomal proteins--performs significantly better than the same algorithm trained on any single type of data.\n\n\nAVAILABILITY\nSupplementary data at http://noble.gs.washington.edu/proj/sdp-svm" }, { "pmid": "9223186", "title": "Pfam: a comprehensive database of protein domain families based on seed alignments.", "abstract": "Databases of multiple sequence alignments are a valuable aid to protein sequence classification and analysis. One of the main challenges when constructing such a database is to simultaneously satisfy the conflicting demands of completeness on the one hand and quality of alignment and domain definitions on the other. The latter properties are best dealt with by manual approaches, whereas completeness in practice is only amenable to automatic methods. Herein we present a database based on hidden Markov model profiles (HMMs), which combines high quality and completeness. Our database, Pfam, consists of parts A and B. Pfam-A is curated and contains well-characterized protein domain families with high quality alignments, which are maintained by using manually checked seed alignments and HMMs to find and align all members. Pfam-B contains sequence families that were generated automatically by applying the Domainer algorithm to cluster and align the remaining protein sequences after removal of Pfam-A domains. By using Pfam, a large number of previously unannotated proteins from the Caenorhabditis elegans genome project were classified. We have also identified many novel family memberships in known proteins, including new kazal, Fibronectin type III, and response regulator receiver domains. Pfam-A families have permanent accession numbers and form a library of HMMs available for searching and automatic annotation of new protein sequences." }, { "pmid": "16381927", "title": "BioGRID: a general repository for interaction datasets.", "abstract": "Access to unified datasets of protein and genetic interactions is critical for interrogation of gene/protein function and analysis of global network properties. BioGRID is a freely accessible database of physical and genetic interactions available at http://www.thebiogrid.org. BioGRID release version 2.0 includes >116 000 interactions from Saccharomyces cerevisiae, Caenorhabditis elegans, Drosophila melanogaster and Homo sapiens. Over 30 000 interactions have recently been added from 5778 sources through exhaustive curation of the Saccharomyces cerevisiae primary literature. An internally hyper-linked web interface allows for rapid search and retrieval of interaction data. Full or user-defined datasets are freely downloadable as tab-delimited text files and PSI-MI XML. Pre-computed graphical layouts of interactions are available in a variety of file formats. User-customized graphs with embedded protein, gene and interaction attributes can be constructed with a visualization system called Osprey that is dynamically linked to the BioGRID." }, { "pmid": "9381177", "title": "Exploring the metabolic and genetic control of gene expression on a genomic scale.", "abstract": "DNA microarrays containing virtually every gene of Saccharomyces cerevisiae were used to carry out a comprehensive investigation of the temporal program of gene expression accompanying the metabolic shift from fermentation to respiration. 
PLoS Computational Biology
18392148
PMC2289775
10.1371/journal.pcbi.1000016
Statistical Resolution of Ambiguous HLA Typing Data
High-resolution HLA typing plays a central role in many areas of immunology, such as in identifying immunogenetic risk factors for disease, in studying how the genomes of pathogens evolve in response to immune selection pressures, and also in vaccine design, where identification of HLA-restricted epitopes may be used to guide the selection of vaccine immunogens. Perhaps one of the most immediate applications is in direct medical decisions concerning the matching of stem cell transplant donors to unrelated recipients. However, high-resolution HLA typing is frequently unavailable due to its high cost or the inability to re-type historical data. In this paper, we introduce and evaluate a method for statistical, in silico refinement of ambiguous and/or low-resolution HLA data. Our method, which requires an independent, high-resolution training data set drawn from the same population as the data to be refined, uses linkage disequilibrium in HLA haplotypes as well as four-digit allele frequency data to probabilistically refine HLA typings. Central to our approach is the use of haplotype inference. We introduce new methodology to this area, improving upon the Expectation-Maximization (EM)-based approaches currently used within the HLA community. Our improvements are achieved by using a parsimonious parameterization for haplotype distributions and by smoothing the maximum likelihood (ML) solution. These improvements make it possible to scale the refinement to a larger number of alleles and loci in a more computationally efficient and stable manner. We also show how to augment our method in order to incorporate ethnicity information (as HLA allele distributions vary widely according to race/ethnicity as well as geographic area), and demonstrate the potential utility of this experimentally. A tool based on our approach is freely available for research purposes at http://microsoft.com/science.
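As a rough, hypothetical illustration of the refinement step described above (not the authors' tool), the Python sketch below scores every haplotype pair compatible with an ambiguous two-digit typing, using made-up multi-locus haplotype frequencies and Hardy-Weinberg proportions, and normalizes the scores into a posterior over high-resolution genotypes. All allele names, frequencies, and the deliberately simplified compatibility test are assumptions for illustration only.

```python
from itertools import combinations_with_replacement

# Illustrative A~B haplotype frequencies (invented for this example).
HAP_FREQ = {
    ("A*02:01", "B*35:01"): 0.060,
    ("A*02:05", "B*35:01"): 0.010,
    ("A*02:01", "B*35:03"): 0.005,
    ("A*02:05", "B*35:03"): 0.001,
}

def compatible(pair, typing):
    """True if the unordered haplotype pair explains the observed
    (possibly low-resolution) allele groups at every locus (simplified check)."""
    for locus, observed_groups in enumerate(typing):
        alleles = {pair[0][locus], pair[1][locus]}
        if not all(any(a.startswith(g) for a in alleles) for g in observed_groups):
            return False
    return True

def refine(typing):
    """Posterior over four-digit genotypes given an ambiguous typing,
    assuming Hardy-Weinberg proportions over the haplotype frequencies."""
    scores = {}
    for h1, h2 in combinations_with_replacement(sorted(HAP_FREQ), 2):
        if compatible((h1, h2), typing):
            p = HAP_FREQ[h1] * HAP_FREQ[h2]
            scores[(h1, h2)] = p if h1 == h2 else 2 * p  # HWE genotype probability
    total = sum(scores.values())
    return {g: p / total for g, p in scores.items()}

# Ambiguous two-digit input: "A*02-something" and "B*35-something" at each locus.
posterior = refine(({"A*02"}, {"B*35"}))
for genotype, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(genotype, round(prob, 3))
```

In the actual method, the haplotype frequencies themselves would first have to be estimated from an independent, high-resolution training sample drawn from the same population, as the abstract states.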
Related Work

At the core of our HLA typing refinement model is the ability to infer and predict haplotype structure of HLA alleles across multiple loci (from unphased data, since this is the data that is widely available). If certain alleles tend to be inherited together because of linkage disequilibrium between them, then clearly this information can help us to disambiguate HLA types, and far more so than using only the most common allele at any particular locus. We derive a method for disambiguating HLA types from this haplotype model.

Existing methods for haplotype modeling fall into three main categories: ad hoc methods, such as Clark's parsimony algorithm [16], which agglomerates haplotypes starting with those uniquely defined by homozygous alleles; EM-based maximum likelihood methods, such as those belonging to the family introduced by Excoffier and Slatkin, and Hawley and Kidd [17],[18], which are related to the so-called gene-counting method [19]; and full Bayesian approaches, such as those introduced by Stephens et al. [20], with more recent advances by others (e.g., [21],[22]). Clark's method is no longer used, as it is outperformed by other methods. The full Bayesian methods are more principled than the EM-based methods because they average over all uncertainty, including uncertainty about the parameters. However, full Bayesian methods are generally much slower than EM-based methods, and their convergence is generally more difficult to assess [23], making them less attractive for widespread use.

The haplotype modeling part of our approach is most closely related to the EM-based maximum-likelihood methods, although it differs in several crucial respects. To our knowledge, all implementations of EM-based maximum likelihood haplotype models use a full (unconstrained) joint probability distribution over all haplotypes (i.e., over all possible alleles, at all possible loci), with the exception of the partition-ligation algorithms noted below. Furthermore, because they are maximum-likelihood based, they do not smooth the parameter estimates, thereby allowing for unstable (i.e., high-variance) estimates of rare haplotypes. Together, these two issues make existing methods difficult to scale to a large number of loci or to a large number of alleles per locus. This scalability problem is widely known (e.g., [17],[24],[25]), and several attempts to alleviate it have been suggested, such as eliminating posterior states that can never have non-zero probability [24], or using a heuristic divide-and-conquer strategy, called partition-ligation [26],[23], in which the joint probability distribution over haplotypes is factored into independent blocks of contiguous loci, and the solutions to each block are then combined. Although these approaches do help alleviate the problems of scalability, the former does so in a fairly minimal way, and the latter places heuristic constraints on the nature of the solution (through use of the blocks). Furthermore, these methods do not address scaling in the number of alleles, which is the larger concern for HLA typing. In addition, these methods do not address the stability of the statistical estimation procedure. Our EM-based approach tackles the issue of scalability by using a parsimonious haplotype parameterization. This especially helps for scaling up to the large number of alleles in HLA data. Our approach also addresses stability by using MAP (maximum a posteriori) parameter estimation rather than an ML estimate.

We note that within the HLA community, even recently, haplotype inference seems to be exclusively performed with the most basic EM-based algorithm of Excoffier and Slatkin, and Hawley and Kidd [17],[18] (e.g., [27],[28],[29],[30],[31],[32],[33]). In fact, in one of the most recently available publications, Maiers et al. were unable to perform haplotype inference for more than three HLA loci, resorting to more heuristic techniques beyond this number. With our approach, such limitations do not arise. In addition, as we shall see, our approach is more accurate.

There are two pieces of work that tackle the allele refinement problem using haplotype information: that of Gourraud et al. in the HLA domain [12], and that of Jung et al. in the SNP (single nucleotide polymorphism) domain [34]. Although Gourraud et al. indirectly tackle the HLA refinement problem, their focus is on phasing of HLA data in the presence of ambiguous HLA alleles, and their experimental evaluation is restricted to the phasing task. Additionally, they use the standard, multinomial, EM-based haplotype inference approach, which we show to be inferior for the task of HLA refinement. Also, they do not investigate population-specific effects as we do here. Jung et al., strictly speaking, do not refine their data. Rather, they impute it; that is, they fill in data that is completely missing. The SNP domain is quite different from the HLA domain: the problem of SNP haplotype inference often involves hundreds or thousands of loci, and there are usually only two alleles at each locus (and at most four). HLA haplotype inference, in contrast, involves only a handful of loci with possibly hundreds of alleles at each locus (because we define a locus on an HLA level, not a nucleotide level, although one could do HLA haplotype inference in the nucleotide domain). Thus, issues of scalability and the specific nature of haplotypic patterns are substantially different between these two domains. With respect to methodology, Jung et al. perform imputation in a suboptimal way. First, they apply an EM-based haplotype inference algorithm ([23]) to obtain a single best phasing of their data (i.e., an ML point estimate). Next, using the statistically phased data, they compute linkage disequilibrium in the inferred haplotypes using Lewontin's standard linkage disequilibrium measure. Thus, they ignore the uncertainty over phases that is available from the EM algorithm. Also, they choose only the single best imputed value, ignoring the uncertainty there as well. Our approach incorporates both types of uncertainty. Lastly, the haplotype inference algorithm used by Jung et al. does not account for population-specific effects. Consequently, they do not investigate this area experimentally, as we do here, showing its potential benefits.

One other study touches on statistical HLA refinement [31]. In order to estimate haplotype frequencies on serologically derived HLA data, Muller et al. modify the standard EM-based haplotype inference approach to be able to use donors with unsplit serological HLA types. However, their main purpose is to estimate haplotype frequencies (at a two-digit serological level) rather than to perform HLA refinement, and their experiments focus on that task.
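For readers unfamiliar with the baseline discussed above, here is a small, self-contained sketch of the basic EM ("gene counting") estimator of haplotype frequencies from unphased two-locus genotypes, with Dirichlet-style pseudocounts standing in for the MAP smoothing. It is not the paper's implementation: it uses exactly the unconstrained multinomial parameterization whose scalability the authors criticize, and all allele names, data, and parameter values are invented.

```python
from collections import defaultdict
from itertools import product

def phase_resolutions(genotype):
    """All unordered haplotype pairs consistent with an unphased genotype,
    given as a tuple of per-locus allele pairs, e.g. (("A1","A2"), ("B1","B1"))."""
    seen, pairs = set(), []
    for choice in product(*[range(2)] * len(genotype)):
        h1 = tuple(genotype[l][c] for l, c in enumerate(choice))
        h2 = tuple(genotype[l][1 - c] for l, c in enumerate(choice))
        key = frozenset([h1, h2])
        if key not in seen:
            seen.add(key)
            pairs.append((h1, h2))
    return pairs

def em_haplotype_freqs(genotypes, iters=50, pseudocount=0.1):
    # Initialize by counting every haplotype appearing in any phase resolution.
    freqs = defaultdict(float)
    for g in genotypes:
        for h1, h2 in phase_resolutions(g):
            freqs[h1] += 1.0
            freqs[h2] += 1.0
    total = sum(freqs.values())
    freqs = {h: c / total for h, c in freqs.items()}

    for _ in range(iters):
        counts = defaultdict(lambda: pseudocount)  # pseudocounts give a MAP-style smoothing
        for g in genotypes:
            pairs = phase_resolutions(g)
            # E-step: weight each resolution by its HWE probability under current freqs.
            weights = [(2 - (h1 == h2)) * freqs[h1] * freqs[h2] for h1, h2 in pairs]
            z = sum(weights) or 1.0
            for (h1, h2), w in zip(pairs, weights):
                counts[h1] += w / z
                counts[h2] += w / z
        # M-step: renormalize the expected (smoothed) haplotype counts.
        total = sum(counts.values())
        freqs = {h: c / total for h, c in counts.items()}
    return freqs

# Toy unphased data: each genotype lists the two observed alleles per locus.
data = [
    (("A*01", "A*02"), ("B*07", "B*08")),
    (("A*01", "A*01"), ("B*08", "B*08")),
    (("A*02", "A*03"), ("B*07", "B*44")),
]
for hap, f in sorted(em_haplotype_freqs(data).items(), key=lambda x: -x[1]):
    print(hap, round(f, 3))
```

The paper's contribution, as described above, replaces this flat multinomial over all haplotypes with a parsimonious parameterization and a smoothed (MAP) estimate, which is what allows the approach to scale to the hundreds of alleles per HLA locus.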
[ "17237084", "15982560", "12525683", "15103322", "17363674", "17616974", "17559151", "17071929", "15935894", "17868258", "15787720", "11386265", "2108305", "7476138", "7560877", "13268982", "11254454", "16826521", "12452179", "14641248", "10954684", "11741196", "17460569", "15257044", "12507825", "11543903", "15252420", "1637966", "17896860" ]
[ { "pmid": "17237084", "title": "Identifying HLA supertypes by learning distance functions.", "abstract": "MOTIVATION\nThe development of epitope-based vaccines crucially relies on the ability to classify Human Leukocyte Antigen (HLA) molecules into sets that have similar peptide binding specificities, termed supertypes. In their seminal work, Sette and Sidney defined nine HLA class I supertypes and claimed that these provide an almost perfect coverage of the entire repertoire of HLA class I molecules. HLA alleles are highly polymorphic and polygenic and therefore experimentally classifying each of these molecules to supertypes is at present an impossible task. Recently, a number of computational methods have been proposed for this task. These methods are based on defining protein similarity measures, derived from analysis of binding peptides or from analysis of the proteins themselves.\n\n\nRESULTS\nIn this paper we define both peptide derived and protein derived similarity measures, which are based on learning distance functions. The peptide derived measure is defined using a peptide-peptide distance function, which is learned using information about known binding and non-binding peptides. The protein derived similarity measure is defined using a protein-protein distance function, which is learned using information about alleles previously classified to supertypes by Sette and Sidney (1999). We compare the classification obtained by these two complimentary methods to previously suggested classification methods. In general, our results are in excellent agreement with the classifications suggested by Sette and Sidney (1999) and with those reported by Buus et al. (2004). The main important advantage of our proposed distance-based approach is that it makes use of two different and important immunological sources of information-HLA alleles and peptides that are known to bind or not bind to these alleles. Since each of our distance measures is trained using a different source of information, their combination can provide a more confident classification of alleles to supertypes." }, { "pmid": "15982560", "title": "HLA associated genetic predisposition to autoimmune diseases: Genes involved and possible mechanisms.", "abstract": "Autoimmune diseases are the result of an interplay between predisposing genes and triggering environmental factors, leading to loss of self-tolerance and an immune-mediated destruction of autologous cells and/or tissues. Genes in the HLA complex are among the strongest predisposing genetic factors. The HLA complex genes primarily involved are most often those encoding the peptide-presenting HLA class I or II molecules. A probable mechanism is preferential presentation by the disease-associated HLA molecules of peptides from autoantigens to T cells. Recent studies have shown, however, that other genes in the HLA complex also contribute. Taken together, available evidence suggests that the HLA complex harbour both disease predisposing genes which are quite specific for some autoimmune diseases (e.g. HLA-B27 for ankylosing spondylitis) and others which may be more common for several diseases. This will be briefly reviewed in the following." }, { "pmid": "12525683", "title": "The influence of HLA genotype on AIDS.", "abstract": "Genetic resistance to infectious diseases is likely to involve a complex array of immune-response and other genes with variants that impose subtle but significant consequences on gene expression or protein function. 
We have gained considerable insight into the genetic determinants of HIV-1 disease, and the HLA class I genes appear to be highly influential in this regard. Numerous reports have identified a role for HLA genotype in AIDS outcomes, implicating many HLA alleles in various aspects of HIV disease. Here we review the HLA associations with progression to AIDS that have been consistently affirmed and discuss the underlying mechanisms behind some of these associations based on functional studies of immune cell recognition." }, { "pmid": "15103322", "title": "Host genetic determinants in hepatitis C virus infection.", "abstract": "In addition to viral and environmental/behavioural factors, host genetic diversity is believed to contribute to the spectrum of clinical outcomes in hepatitis C virus (HCV) infection. This paper reviews the literature with respect to studies of host genetic determinants of HCV outcome and attempts to highlight trends and synthesise findings. With respect to the susceptibility to HCV infection, several studies have replicated associations of the HLA class II alleles DQB1(*)0301 and DRB1(*)11 with self-limiting infection predominantly in Caucasian populations. Meta-analyses yielded summary estimates of 3.0 (95% CI: 1.8-4.8) and 2.5 (95% CI: 1.7-3.7) for the effects of DQB1(*)0301 and DRB1(*)11 on self-limiting HCV, respectively. Studies of genetics and the response to interferon-based therapies have largely concerned single-nucleotide polymorphisms and have been inconsistent. Regarding studies of genetics and the progression of HCV-related disease, there is a trend with DRB1(*)11 alleles and less severe disease. Studies of extrahepatic manifestations of chronic HCV have shown an association between DQB1(*)11 and DR3 with the formation of cryoglobulins. Some important initial observations have been made with respect to genetic determinants of HCV outcome. Replication studies are needed for many of these associations, as well as biological data on the function of many of these polymorphisms." }, { "pmid": "17363674", "title": "Founder effects in the assessment of HIV polymorphisms and HLA allele associations.", "abstract": "Escape from T cell-mediated immune responses affects the ongoing evolution of rapidly evolving viruses such as HIV. By applying statistical approaches that account for phylogenetic relationships among viral sequences, we show that viral lineage effects rather than immune escape often explain apparent human leukocyte antigen (HLA)-mediated immune-escape mutations defined by older analysis methods. Phylogenetically informed methods identified immune-susceptible locations with greatly improved accuracy, and the associations we identified with these methods were experimentally validated. This approach has practical implications for understanding the impact of host immunity on pathogen evolution and for defining relevant variants for inclusion in vaccine antigens." }, { "pmid": "17616974", "title": "Evidence of differential HLA class I-mediated viral evolution in functional and accessory/regulatory genes of HIV-1.", "abstract": "Despite the formidable mutational capacity and sequence diversity of HIV-1, evidence suggests that viral evolution in response to specific selective pressures follows generally predictable mutational pathways. 
Population-based analyses of clinically derived HIV sequences may be used to identify immune escape mutations in viral genes; however, prior attempts to identify such mutations have been complicated by the inability to discriminate active immune selection from virus founder effects. Furthermore, the association between mutations arising under in vivo immune selection and disease progression for highly variable pathogens such as HIV-1 remains incompletely understood. We applied a viral lineage-corrected analytical method to investigate HLA class I-associated sequence imprinting in HIV protease, reverse transcriptase (RT), Vpr, and Nef in a large cohort of chronically infected, antiretrovirally naïve individuals. A total of 478 unique HLA-associated polymorphisms were observed and organized into a series of \"escape maps,\" which identify known and putative cytotoxic T lymphocyte (CTL) epitopes under selection pressure in vivo. Our data indicate that pathways to immune escape are predictable based on host HLA class I profile, and that epitope anchor residues are not the preferred sites of CTL escape. Results reveal differential contributions of immune imprinting to viral gene diversity, with Nef exhibiting far greater evidence for HLA class I-mediated selection compared to other genes. Moreover, these data reveal a significant, dose-dependent inverse correlation between HLA-associated polymorphisms and HIV disease stage as estimated by CD4(+) T cell count. Identification of specific sites and patterns of HLA-associated polymorphisms across HIV protease, RT, Vpr, and Nef illuminates regions of the genes encoding these products under active immune selection pressure in vivo. The high density of HLA-associated polymorphisms in Nef compared to other genes investigated indicates differential HLA class I-driven evolution in different viral genes. The relationship between HLA class I-associated polymorphisms and lower CD4(+) cell count suggests that immune escape correlates with disease status, supporting an essential role of maintenance of effective CTL responses in immune control of HIV-1. The design of preventative and therapeutic CTL-based vaccine approaches could incorporate information on predictable escape pathways." }, { "pmid": "17559151", "title": "Human leukocyte antigen-associated sequence polymorphisms in hepatitis C virus reveal reproducible immune responses and constraints on viral evolution.", "abstract": "UNLABELLED\nCD8(+) T cell responses play a key role in governing the outcome of hepatitis C virus (HCV) infection, and viral evolution enabling escape from these responses may contribute to the inability to resolve infection. To more comprehensively examine the extent of CD8 escape and adaptation of HCV to human leukocyte antigen (HLA) class I restricted immune pressures on a population level, we sequenced all non-structural proteins in a cohort of 70 chronic HCV genotype 1a-infected subjects (28 subjects with HCV monoinfection and 42 with HCV/human immunodeficiency virus [HIV] coinfection). Linking of sequence polymorphisms with HLA allele expression revealed numerous HLA-associated polymorphisms across the HCV proteome. Multiple associations resided within relatively conserved regions, highlighting attractive targets for vaccination. Additional mutations provided evidence of HLA-driven fixation of sequence polymorphisms, suggesting potential loss of some CD8 targets from the population. 
In a subgroup analysis of mono- and co-infected subjects some associations lost significance partly due to reduced power of the utilized statistics. A phylogenetic analysis of the data revealed the substantial influence of founder effects upon viral evolution and HLA associations, cautioning against simple statistical approaches to examine the influence of host genetics upon sequence evolution of highly variable pathogens.\n\n\nCONCLUSION\nThese data provide insight into the frequency and reproducibility of viral escape from CD8(+) T cell responses in human HCV infection, and clarify the combined influence of multiple forces shaping the sequence diversity of HCV and other highly variable pathogens." }, { "pmid": "17071929", "title": "Evidence of viral adaptation to HLA class I-restricted immune pressure in chronic hepatitis C virus infection.", "abstract": "Cellular immune responses are an important correlate of hepatitis C virus (HCV) infection outcome. These responses are governed by the host's human leukocyte antigen (HLA) type, and HLA-restricted viral escape mutants are a critical aspect of this host-virus interaction. We examined the driving forces of HCV evolution by characterizing the in vivo selective pressure(s) exerted on single amino acid residues within nonstructural protein 3 (NS3) by the HLA types present in two host populations. Associations between polymorphisms within NS3 and HLA class I alleles were assessed in 118 individuals from Western Australia and Switzerland with chronic hepatitis C infection, of whom 82 (69%) were coinfected with human immunodeficiency virus. The levels and locations of amino acid polymorphisms exhibited within NS3 were remarkably similar between the two cohorts and revealed regions under functional constraint and selective pressures. We identified specific HCV mutations within and flanking published epitopes with the correct HLA restriction and predicted escaped amino acid. Additional HLA-restricted mutations were identified that mark putative epitopes targeted by cell-mediated immune responses. This analysis of host-virus interaction reveals evidence of HCV adaptation to HLA class I-restricted immune pressure and identifies in vivo targets of cellular immune responses at the population level." }, { "pmid": "15935894", "title": "Inferred HLA haplotype information for donors from hematopoietic stem cells donor registries.", "abstract": "Human leukocyte antigen (HLA) matching remains a key issue in the outcome of transplantation. In hematopoietic stem cell transplantation with unrelated donors, the matching for compatible donors is based on the HLA phenotype information. In familial transplantation, the matching is achieved at the haplotype level because donor and recipient share the block-transmitted major histocompatibility complex region. We present a statistical method based on the HLA haplotype inference to refine the HLA information available in an unrelated situation. We implement a systematic statistical inference of the haplotype combinations at the individual level. It computes the most likely haplotype pair given the phenotype and its probability. The method is validated on 301 phase-known phenotypes from CEPH families (Centre d'Etude du Polymorphisme Humain). The method is further applied to 85,933 HLA-A B DR typed unrelated donors from the French Registry of hematopoietic stem cells donors (France Greffe de Moelle). The average value of prediction probability is 0.761 (SD 0.199) ranging from 0.26 to 1. 
Correlations between phenotype characteristics and predictions are also given. Homozygosity (OR = 2.08; [2.02-2.14] p <10(-3)) and linkage disequilibrium (p <10(-3)) are the major factors influencing the quality of prediction. Limits and relevance of the method are related to limits of haplotype estimation. Relevance of the method is discussed in the context of HLA matching refinement." }, { "pmid": "17868258", "title": "Reanalysis of sequence-based HLA-A, -B and -Cw typings: how ambiguous is today's SBT typing tomorrow.", "abstract": "The permanently increasing number of human leukocyte antigen (HLA)-alleles and the growing list of ambiguities require continuous updating of high-resolution HLA typing results. Two different kinds of ambiguities exist: the first, when two or more allele combinations have identical heterozygous sequences, and the second, when differences are located outside the analyzed region. The number of HLA-A, B and C alleles recognized in 1999 was almost tripled in 2006. Two hundred individuals, sequence-based typing (SBT) typed in the period from 1999 to 2002, were reanalyzed using the 2006 database. A final allele typing result of at least four digits was obtained for HLA-A, -B and -C by heterozygous sequencing of exons 2 and 3 and, if necessary, additional exons and/or allele-specific sequencing. Storage of the individual sequences in a specially developed database enabled reanalysis with all present and future HLA releases. In the 5-year period HLA-A, -B and -C typing results became ambiguous in 37%, 46% and 41% of the cases. Most were because of differences outside the analyzed region; ambiguities because of different allele combinations with identical heterozygous sequences were present in 7%, 8% and 13% of the HLA-A, -B and -C typings. These results indicate that within 5 years, approximately half of the HLA SBT typings become ambiguous." }, { "pmid": "11386265", "title": "Effect of a single amino acid change in MHC class I molecules on the rate of progression to AIDS.", "abstract": "BACKGROUND\nFrom studies of genetic polymorphisms and the rate of progression from human immunodeficiency virus type 1 (HIV-1) infection to the acquired immunodeficiency syndrome (AIDS), it appears that the strongest susceptibility is conferred by the major-histocompatibility-complex (MHC) class I type HLA-B*35,Cw*04 allele. However, cytotoxic T-lymphocyte responses have been observed against HIV-1 epitopes presented by HLA-B*3501, the most common HLA-B*35 subtype. We examined subtypes of HLA-B*35 in five cohorts and analyzed the relation of structural differences between HLA-B*35 subtypes to the risk of progression to AIDS.\n\n\nMETHODS\nGenotyping of HLA class I loci was performed for 850 patients who seroconverted and had known dates of HIV-1 infection. Survival analyses with respect to the rate of progression to AIDS were performed to identify the effects of closely related HLA-B*35 subtypes with different peptide-binding specificities.\n\n\nRESULTS\nHLA-B*35 subtypes were divided into two groups according to peptide-binding specificity: the HLA-B*35-PY group, which consists primarily of HLA-B*3501 and binds epitopes with proline in position 2 and tyrosine in position 9; and the more broadly reactive HLA-B*35-Px group, which also binds epitopes with proline in position 2 but can bind several different amino acids (not including tyrosine) in position 9. 
The influence of HLA-B*35 in accelerating progression to AIDS was completely attributable to HLA-B*35-Px alleles, some of which differ from HLA-B*35-PY alleles by only one amino acid residue.\n\n\nCONCLUSIONS\nThis analysis shows that, in patients with HIV-1 infection, a single amino acid change in HLA molecules has a substantial effect on the rate of progression to AIDS. The different consequences of HLA-B*35-PY and HLA-B*35-Px in terms of disease progression highlight the importance of the epitope specificities of closely related class I molecules in the immune defense against HIV-1." }, { "pmid": "2108305", "title": "Inference of haplotypes from PCR-amplified samples of diploid populations.", "abstract": "Direct sequencing of genomic DNA from diploid individuals leads to ambiguities on sequencing gels whenever there is more than one mismatching site in the sequences of the two orthologous copies of a gene. While these ambiguities cannot be resolved from a single sample without resorting to other experimental methods (such as cloning in the traditional way), population samples may be useful for inferring haplotypes. For each individual in the sample that is homozygous for the amplified sequence, there are no ambiguities in the identification of the allele's sequence. The sequences of other alleles can be inferred by taking the remaining sequence after \"subtracting off\" the sequencing ladder of each known site. Details of the algorithm for extracting allelic sequences from such data are presented here, along with some population-genetic considerations that influence the likelihood for success of the method. The algorithm also applies to the problem of inferring haplotype frequencies of closely linked restriction-site polymorphisms." }, { "pmid": "7476138", "title": "Maximum-likelihood estimation of molecular haplotype frequencies in a diploid population.", "abstract": "Molecular techniques allow the survey of a large number of linked polymorphic loci in random samples from diploid populations. However, the gametic phase of haplotypes is usually unknown when diploid individuals are heterozygous at more than one locus. To overcome this difficulty, we implement an expectation-maximization (EM) algorithm leading to maximum-likelihood estimates of molecular haplotype frequencies under the assumption of Hardy-Weinberg proportions. The performance of the algorithm is evaluated for simulated data representing both DNA sequences and highly polymorphic loci with different levels of recombination. As expected, the EM algorithm is found to perform best for large samples, regardless of recombination rates among loci. To ensure finding the global maximum likelihood estimate, the EM algorithm should be started from several initial conditions. The present approach appears to be useful for the analysis of nuclear DNA sequences or highly variable loci. Although the algorithm, in principle, can accommodate an arbitrary number of loci, there are practical limitations because the computing time grows exponentially with the number of polymorphic loci. Although the algorithm, in principle, can accommodate an arbitrary number of loci, there are practical limitations because the computing time grows exponentially with the number of polymorphic loci." 
}, { "pmid": "11254454", "title": "A new statistical method for haplotype reconstruction from population data.", "abstract": "Current routine genotyping methods typically do not provide haplotype information, which is essential for many analyses of fine-scale molecular-genetics data. Haplotypes can be obtained, at considerable cost, experimentally or (partially) through genotyping of additional family members. Alternatively, a statistical method can be used to infer phase and to reconstruct haplotypes. We present a new statistical method, applicable to genotype data at linked loci from a population sample, that improves substantially on current algorithms; often, error rates are reduced by > 50%, relative to its nearest competitor. Furthermore, our algorithm performs well in absolute terms, suggesting that reconstructing haplotypes experimentally or by genotyping additional family members may be an inefficient use of resources." }, { "pmid": "16826521", "title": "A coalescence-guided hierarchical Bayesian method for haplotype inference.", "abstract": "Haplotype inference from phase-ambiguous multilocus genotype data is an important task for both disease-gene mapping and studies of human evolution. We report a novel haplotype-inference method based on a coalescence-guided hierarchical Bayes model. In this model, a hierarchical structure is imposed on the prior haplotype frequency distributions to capture the similarities among modern-day haplotypes attributable to their common ancestry. As a consequence, the model both allows distinct haplotypes to have different a priori probabilities according to the inferred hierarchical ancestral structure and results in a proper joint posterior distribution for all the parameters of interest. A Markov chain-Monte Carlo scheme is designed to draw from this posterior distribution. By using coalescence-based simulation and empirically generated data sets (Whitehead Institute's inflammatory bowel disease data sets and HapMap data sets), we demonstrate the merits of the new method in comparison with HAPLOTYPER and PHASE, with or without the presence of recombination hotspots and missing genotypes." }, { "pmid": "14641248", "title": "Accelerated gene counting for haplotype frequency estimation.", "abstract": "Current implementations of the EM algorithm for estimating haplotype frequencies from genotypes on proximal loci require computational resources that grow as nh2k, where n is the number of individuals genotyped and h is the number of haplotypes possible on k loci. For diallelic loci hk=2k. We present an approach whose computational requirement grows as n2t where t is the largest number of loci at which an individual in the sample is heterozygous. The method is illustrated by haplotype frequency estimation from a sample of 45 individuals genotyped at 26 single nucleotide polymorphisms in the PIK3R1 gene." }, { "pmid": "10954684", "title": "Accuracy of haplotype frequency estimation for biallelic loci, via the expectation-maximization algorithm for unphased diploid genotype data.", "abstract": "Haplotype analyses have become increasingly common in genetic studies of human disease because of their ability to identify unique chromosomal segments likely to harbor disease-predisposing genes. The study of haplotypes is also used to investigate many population processes, such as migration and immigration rates, linkage-disequilibrium strength, and the relatedness of populations. 
Unfortunately, many haplotype-analysis methods require phase information that can be difficult to obtain from samples of nonhaploid species. There are, however, strategies for estimating haplotype frequencies from unphased diploid genotype data collected on a sample of individuals that make use of the expectation-maximization (EM) algorithm to overcome the missing phase information. The accuracy of such strategies, compared with other phase-determination methods, must be assessed before their use can be advocated. In this study, we consider and explore sources of error between EM-derived haplotype frequency estimates and their population parameters, noting that much of this error is due to sampling error, which is inherent in all studies, even when phase can be determined. In light of this, we focus on the additional error between haplotype frequencies within a sample data set and EM-derived haplotype frequency estimates incurred by the estimation procedure. We assess the accuracy of haplotype frequency estimation as a function of a number of factors, including sample size, number of loci studied, allele frequencies, and locus-specific allelic departures from Hardy-Weinberg and linkage equilibrium. We point out the relative impacts of sampling error and estimation error, calling attention to the pronounced accuracy of EM estimates once sampling error has been accounted for. We also suggest that many factors that may influence accuracy can be assessed empirically within a data set-a fact that can be used to create \"diagnostics\" that a user can turn to for assessing potential inaccuracies in estimation." }, { "pmid": "11741196", "title": "Bayesian haplotype inference for multiple linked single-nucleotide polymorphisms.", "abstract": "Haplotypes have gained increasing attention in the mapping of complex-disease genes, because of the abundance of single-nucleotide polymorphisms (SNPs) and the limited power of conventional single-locus analyses. It has been shown that haplotype-inference methods such as Clark's algorithm, the expectation-maximization algorithm, and a coalescence-based iterative-sampling algorithm are fairly effective and economical alternatives to molecular-haplotyping methods. To contend with some weaknesses of the existing algorithms, we propose a new Monte Carlo approach. In particular, we first partition the whole haplotype into smaller segments. Then, we use the Gibbs sampler both to construct the partial haplotypes of each segment and to assemble all the segments together. Our algorithm can accurately and rapidly infer haplotypes for a large number of linked SNPs. By using a wide variety of real and simulated data sets, we demonstrate the advantages of our Bayesian algorithm, and we show that it is robust to the violation of Hardy-Weinberg equilibrium, to the presence of missing data, and to occurrences of recombination hotspots." }, { "pmid": "17460569", "title": "Improved definition of human leukocyte antigen frequencies among minorities and applicability to estimates of transplant compatibility.", "abstract": "BACKGROUND\nHLA population data can be applied to estimates of waiting time and probabilities of donor compatibility. Registry data were used for derivation of HLA antigen and haplotype frequencies in a 1996 report. At that time there were several instances of significant deviation from Hardy Weinberg Equilibrium (HWE). 
Because molecular typing has been increasingly used since 1996, analysis of recent donor phenotypes should provide more accurate HLA frequencies.\n\n\nMETHODS\nHLA frequencies were derived from the phenotypes of 12,061 donors entered into the Organ Procurement and Transplantation Network registry from January 1, 2003 to December 31, 2004. Frequencies for HLA-A;B;DR and HLA-A;B, DR, DQ haplotypes were derived from 11,509 and 10,590 donors, respectively. Frequencies of the allele groups encoding serologic antigens were obtained by gene counting and haplotype frequencies were estimated using the expectation maximization algorithm. Fit to HWE was evaluated by an exact test using Markov Chain Monte Carlo methods.\n\n\nRESULTS\nThere was clear evidence of improved definition of rarer HLA antigens and haplotypes, particularly among minorities. The reported frequencies of broad antigens decreased overall for HLA-A, B, and DR, with concomitant increases in split antigens. Allele group genotypes among the major ethnic groups were in HWE with the single exception of HLA-A locus alleles among Asians. Improved HLA definition also permitted the first report of DR;DQ and A;B;DR;DQ haplotypes among U.S. donors.\n\n\nCONCLUSIONS\nThe noted improvements in HLA definition and the overall lack of significant deviation from HWE indicate the accuracy of these HLA frequencies. These frequencies can therefore be applied for representative estimates of the U.S. donor population." }, { "pmid": "15257044", "title": "Assessment of optimal size and composition of the U.S. National Registry of hematopoietic stem cell donors.", "abstract": "BACKGROUND\nThe National Marrow Donor Program (NMDP) receives federal funding to operate a registry of over 4 million volunteer donors for patients in need of a hematopoietic stem cell transplant. Because minority patients are less likely to find a suitably matched donor than whites, special efforts have been aimed toward recruitment of minorities. Significant financial resources are required to recruit and tissue type additional volunteer donors.\n\n\nMETHODS\nPopulation genetics models have been constructed to project likelihoods of finding a human leukocyte antigen (HLA)-matched donor for patients of various racial/ethnic groups. These projections have been made under a variety of strategies for expansion of the NMDP Registry. Cost-effectiveness calculations incorporated donor unavailability and other barriers to transplantation.\n\n\nRESULTS\nAt current recruitment rates, the probability of an available HLA-A,B,DRB1 matched donor is projected to increase from 27% to 34%; 45% to 54%; 75% to 79%; and 48% to 55%, for blacks, Asians/Pacific Islanders, whites and Hispanics, respectively, by the year 2007. Substantial increases in minority recruitment would have only modest impacts on these projections. These projections are heavily affected by donor availability rates, which are less than 50% for minority volunteers.\n\n\nCONCLUSIONS\nContinued recruitment of additional volunteers can improve the likelihood of finding an HLA-matched donor, but will still leave significant numbers of patients of all racial/ethnic groups without a match. Efforts to improve donor availability (especially among minorities) and to increase the number of patients with access to the NMDP Registry may prove to be more cost-effective means of increasing transplants." 
}, { "pmid": "12507825", "title": "Gene and haplotype frequencies for the loci hLA-A, hLA-B, and hLA-DR based on over 13,000 german blood donors.", "abstract": "Numerous applications in clinical medicine and forensic sciences depend on reliable data concerning the frequencies of human leukocyte antigen (HLA) genes and haplotypes. Assuming a Hardy-Weinberg equilibrium of the underlying population, these frequencies can be estimated from phenotype data using an expectation-maximization-algorithm also known under the name \"gene counting.\" We have refined this algorithm in order to cope with the heterogeneous resolution of HLA phenotypes frequently occurring in large datasets due to the structure of the HLA nomenclature. This was a prerequisite to analyze a set of 13,386 blood donors contributed by over 40 blood banks who were tested for HLA-DR when they volunteered to become marrow donors. This data set is still unique in the German national donor registry because their HLA-DR-typing was not biased by patient oriented searches or other strategies for selective typing. As a consequence of the size of the sample, the frequency estimates for the genes and the two- and three-locus haplotypes of HLA-A, HLA-B, and HLA-DR are of unprecedented precision and allow interesting projections concerning the efficiency and economic aspects of the development of a large donor registry in Germany." }, { "pmid": "11543903", "title": "Analysis of the frequencies of HLA-A, B, and C alleles and haplotypes in the five major ethnic groups of the United States reveals high levels of diversity in these loci and contrasting distribution patterns in these populations.", "abstract": "The HLA system is the most polymorphic of all human genetic systems. The frequency of HLA class I alleles and their linkage disequilibrium patterns differ significantly among human populations as shown in studies using serologic methods. Many DNA-defined alleles with identical serotypes may have variable frequencies in different populations. We typed HLA-A, B, and C loci at the allele level by PCR-based methods in 1,296 unrelated subjects from five major outbred groups living in the U.S.A (African, AFAM; Caucasians, CAU; Asian, ORI; Hispanic, HIS, and North American Natives, NAI). We detected 46, 100 and 32 HLA-A, B, and C alleles, respectively. ORI and HIS presented more alleles at each of these loci. There was lack of correlation between the levels of heterozygosity and the number of alleles detected in each population. In AFAM, heterozygosity (>90%) is maximized at all class I loci. HLA-A had the lowest heterozygosity in all populations but CAU. Tight LD was observed between HLA-B and C alleles. AFAM had weaker or nonexistent associations between alleles of HLA-A and B than other populations. Analysis of the genetic distances between these and other populations showed a close relationship between specific US populations and a population from their original continents. ORI exhibited the largest genetic distance with all the other U.S. groups and were closer to NAI. Evidence of admixture with CAU was observed for AFAM and HIS. HIS also had significant frequencies of AFAM and Mexican Indian alleles. Differences in both LD and heterozygosity levels suggest distinct evolutionary histories of the HLA loci in the geographical regions from where the U.S. populations originated." 
}, { "pmid": "15252420", "title": "Handling missing values in population data: consequences for maximum likelihood estimation of haplotype frequencies.", "abstract": "Haplotype frequency estimation in population data is an important problem in genetics and different methods including expectation maximisation (EM) methods have been proposed. The statistical properties of EM methods have been extensively assessed for data sets with no missing values. When numerous markers and/or individuals are tested, however, it is likely that some genotypes will be missing. Thus, it is of interest to investigate the behaviour of the method in the presence of incomplete genotype observations. We propose an extension of the EM method to handle missing genotypes, and we compare it with commonly used methods (such as ignoring individuals with incomplete genotype information or treating a missing allele as any other allele). Simulations were performed, starting from data sets of haematopoietic stem cell donors genotyped at three HLA loci. We deleted some data to create incomplete genotype observations in various proportions. We then compared the haplotype frequencies obtained on these incomplete data sets using the different methods to those obtained on the complete data. We found that the method proposed here provides better estimations, both qualitatively and quantitatively, but increases the computation time required. We discuss the influence of missing values on the algorithm's efficiency and the advantages and disadvantages of deleting incomplete genotypes. We propose guidelines for missing data handling in routine analysis." }, { "pmid": "1637966", "title": "Performing the exact test of Hardy-Weinberg proportion for multiple alleles.", "abstract": "The Hardy-Weinberg law plays an important role in the field of population genetics and often serves as a basis for genetic inference. Because of its importance, much attention has been devoted to tests of Hardy-Weinberg proportions (HWP) over the decades. It has long been recognized that large-sample goodness-of-fit tests can sometimes lead to spurious results when the sample size and/or some genotypic frequencies are small. Although a complete enumeration algorithm for the exact test has been proposed, it is not of practical use for loci with more than a few alleles due to the amount of computation required. We propose two algorithms to estimate the significance level for a test of HWP. The algorithms are easily applicable to loci with multiple alleles. Both are remarkably simple and computationally fast. Relative efficiency and merits of the two algorithms are compared. Guidelines regarding their usage are given. Numerical examples are given to illustrate the practicality of the algorithms." } ]
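The reference abstracts above repeatedly refer to two computational ingredients: "gene counting" (expectation-maximization) estimation of haplotype frequencies from unphased phenotypes, and tests of Hardy-Weinberg proportions. As a hedged illustration only, the standard textbook forms of these two ingredients are written out below; the notation (p_h for the frequency of haplotype h, g_i for the phenotype of individual i, n individuals) is introduced here and is not taken from the cited papers.

\[
\Pr(\{a,b\}) \;=\;
\begin{cases}
p_a^{2} & a = b,\\
2\,p_a\,p_b & a \neq b,
\end{cases}
\qquad \text{(Hardy-Weinberg expectation)}
\]
\[
\text{E-step: } w_i(a,b) \;=\; \frac{p_a\,p_b}{\sum_{(c,d)\,\sim\, g_i} p_c\,p_d},
\qquad
\text{M-step: } p_h \;\leftarrow\; \frac{1}{2n}\sum_{i=1}^{n}\;\sum_{(a,b)\,\sim\, g_i}\bigl[\mathbf{1}(a=h)+\mathbf{1}(b=h)\bigr]\,w_i(a,b),
\]
where (a,b) ~ g_i ranges over the ordered haplotype pairs consistent with phenotype g_i. The M-step simply re-counts haplotype copies, with ambiguous phenotypes contributing fractional counts in proportion to the current frequency estimates, which is why the procedure is traditionally called gene counting.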
International Journal of Biomedical Imaging
18431448
PMC2292807
10.1155/2008/590183
3D Wavelet Subbands Mixing for Image Denoising
A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. The method proposed in this paper is a fully automatic 3D blockwise version of the nonlocal (NL) means filter with wavelet subbands mixing. The proposed wavelet subbands mixing is based on a multiresolution approach for improving the quality of the denoising filter. Quantitative validation was carried out on synthetic datasets generated with the BrainWeb simulator. The results show that our NL-means filter with wavelet subbands mixing outperforms the classical implementation of the NL-means filter in terms of denoising quality and computation time. Comparison with well-established methods, such as the nonlinear diffusion filter and total variation minimization, shows that the proposed NL-means filter produces better denoising results. Finally, qualitative results on real data are presented.
2. RELATED WORKS
Many methods for image denoising have been suggested in the literature, and a complete review of them can be found in [1]. Methods for image restoration aim at preserving the image details and local features while removing the undesirable noise. In many approaches, an initial image is progressively approximated by filtered versions that are smoother or simpler in some sense. Total variation (TV) minimization [3], nonlinear diffusion [4–6], mode filters [7], and regularization methods [3, 8] are among the methods of choice for noise removal. Most of these methods are based on a weighted average of the gray values of the pixels in a spatial neighborhood [9, 10]. One of the earliest examples of such filters was proposed by Lee [11]. An evolution of this approach was presented by Tomasi and Manduchi [9], who devised the bilateral filter, which includes both a spatial and an intensity neighborhood. Recently, the relationships between bilateral filtering and local mode filtering [7], local M-estimators [12], and nonlinear diffusion [13] have been established. In the context of statistical methods, the bridge between Bayesian estimators applied to a Gibbs distribution (resulting in a penalty functional [14]) and averaging methods for smoothing has also been described in [10]. Finally, statistical averaging schemes enhanced by a variable spatial neighborhood scheme have been proposed in [15–17].
All these methods aim at removing noise while preserving relevant image information. The tradeoff between noise removal and image preservation is controlled by tuning the filter parameters, which is not an easy task in practice. In this paper, we propose to overcome this problem with 3D wavelet subbands mixing. As in [2], we have chosen to combine a multiresolution approach with the NL-means filter [1], which has recently shown very promising results.
Recently introduced by Buades et al. [1], the NL-means filter proposes a new approach to the denoising problem. Contrary to most denoising methods, which are based on a local recovery paradigm, the NL-means filter is based on the idea that any periodic, textured, or natural image has redundancy, and that any voxel of the image has similar voxels that are not necessarily located in a spatial neighborhood. This new nonlocal recovery paradigm makes it possible to improve the two most desired properties of a denoising algorithm: edge preservation and noise removal.
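To make the weighting idea behind the NL-means filter concrete, the following is a minimal pixelwise 2D sketch in Python/NumPy. It is not the fully automatic blockwise 3D implementation with wavelet subbands mixing proposed in the paper; the patch size, search radius, and smoothing parameter h are illustrative choices introduced here. The key departure from the local filters surveyed above is that the weight of a candidate pixel depends on patch similarity rather than on spatial distance alone.

import numpy as np

def nl_means_pixel(image, y, x, patch=3, search=10, h=0.1):
    """Denoise a single pixel by a nonlocal weighted average.

    Weights depend on the similarity between the patch centered at (y, x)
    and patches centered at candidate pixels in a search window, rather
    than on spatial proximity alone.
    """
    half = patch // 2
    padded = np.pad(image, half, mode="reflect")
    ref = padded[y:y + patch, x:x + patch]            # patch around the target pixel
    y0, y1 = max(0, y - search), min(image.shape[0], y + search + 1)
    x0, x1 = max(0, x - search), min(image.shape[1], x + search + 1)
    num, den = 0.0, 0.0
    for j in range(y0, y1):
        for i in range(x0, x1):
            cand = padded[j:j + patch, i:i + patch]   # candidate patch
            d2 = np.mean((cand - ref) ** 2)           # patch (dis)similarity
            w = np.exp(-d2 / (h * h))                 # exponential kernel weight
            num += w * image[j, i]
            den += w
    return num / den

# Usage sketch (slow, for illustration only): denoise every pixel of a noisy image.
# noisy = np.random.rand(64, 64)
# denoised = np.array([[nl_means_pixel(noisy, y, x) for x in range(64)] for y in range(64)])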
[ "18249686", "22499653", "9735909" ]
[ { "pmid": "18249686", "title": "On the origin of the bilateral filter and ways to improve it.", "abstract": "Additive noise removal from a given signal is an important problem in signal processing. Among the most appealing aspects of this field are the ability to refer it to a well-established theory, and the fact that the proposed algorithms in this field are efficient and practical. Adaptive methods based on anisotropic diffusion (AD), weighted least squares (WLS), and robust estimation (RE) were proposed as iterative locally adaptive machines for noise removal. Tomasi and Manduchi (see Proc. 6th Int. Conf. Computer Vision, New Delhi, India, p.839-46, 1998) proposed an alternative noniterative bilateral filter for removing noise from images. This filter was shown to give similar and possibly better results to the ones obtained by iterative approaches. However, the bilateral filter was proposed as an intuitive tool without theoretical connection to the classical approaches. We propose such a bridge, and show that the bilateral filter also emerges from the Bayesian approach, as a single iteration of some well-known iterative algorithm. Based on this observation, we also show how the bilateral filter can be improved and extended to treat more general reconstruction problems." }, { "pmid": "22499653", "title": "Stochastic relaxation, gibbs distributions, and the bayesian restoration of images.", "abstract": "We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution, Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, nonlinear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low energy states (``annealing''), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel ``relaxation'' algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios." }, { "pmid": "9735909", "title": "Design and construction of a realistic digital brain phantom.", "abstract": "After conception and implementation of any new medical image processing algorithm, validation is an important step to ensure that the procedure fulfills all requirements set forth at the initial design stage. Although the algorithm must be evaluated on real data, a comprehensive validation requires the additional use of simulated data since it is impossible to establish ground truth with in vivo data. 
Experiments with simulated data permit controlled evaluation over a wide range of conditions (e.g., different levels of noise, contrast, intensity artefacts, or geometric distortion). Such considerations have become increasingly important with the rapid growth of neuroimaging, i.e., computational analysis of brain structure and function using brain scanning methods such as positron emission tomography and magnetic resonance imaging. Since simple objects such as ellipsoids or parallelepipedes do not reflect the complexity of natural brain anatomy, we present the design and creation of a realistic, high-resolution, digital, volumetric phantom of the human brain. This three-dimensional digital brain phantom is made up of ten volumetric data sets that define the spatial distribution for different tissues (e.g., grey matter, white matter, muscle, skin, etc.), where voxel intensity is proportional to the fraction of tissue within the voxel. The digital brain phantom can be used to simulate tomographic images of the head. Since the contribution of each tissue type to each voxel in the brain phantom is known, it can be used as the gold standard to test analysis algorithms such as classification procedures which seek to identify the tissue \"type\" of each image voxel. Furthermore, since the same anatomical phantom may be used to drive simulators for different modalities, it is the ideal tool to test intermodality registration algorithms. The brain phantom and simulated MR images have been made publicly available on the Internet (http://www.bic.mni.mcgill.ca/brainweb)." } ]
International Journal of Telemedicine and Applications
18437237
PMC2329738
10.1155/2008/290431
PATHOS: Pervasive at Home Sleep Monitoring
Sleeping disorders affect a large percentage of the population, and many of them go undiagnosed each year because the standard method of diagnosis is to stay overnight at a sleep center. Because pervasive technologies have become so prevalent and affordable, sleep monitoring is no longer confined to a permanent installation and can therefore be brought directly into the user’s home. We present a unique solution to the problem of home sleep monitoring that has the potential to take the place of, and expand on the data from, a sleep center. PATHOS focuses not only on analyzing patterns during the night, but also on collecting data about the subject’s lifestyle that is relevant and important to the diagnosis of his/her sleep. PATHOS means “evoking emotion”; here, we mean that Pathos will help us stay healthy, both mentally and physically. Our solution uses existing technology to keep down cost and is completely wireless in order to provide portability and be easy to customize. The daytime collection also utilizes existing technology and offers a wide range of input methods to suit any type of person. We also include an in-depth look at the hardware used in our implementation and the software providing user interaction. Our system is not only a viable alternative to a sleep center; it also provides functions that a static, short-term solution cannot, allowing for more accurate diagnosis and treatment.
8. RELATED WORKS
Many projects are underway that focus on general health monitoring. A long-term monitoring system known as Terva [8] has been implemented to collect critical health data such as blood pressure, temperature, sleeping conditions, and weight. The problem with Terva is that, although it is self-contained, it is housed in a casing about the size of a suitcase, which seriously hampers mobility. As a result, Terva is only practical inside the home. IST VIVAGO is a system used to remotely monitor activity and generate alarms based on received data [9]. In contrast with Terva, our system is small and completely wireless, allowing it to adapt easily to new situations.
Another system, the wireless wellness monitor (WWM), is built specifically to manage obesity [10]. The system has measuring devices, mobile terminals (handheld devices), and a base station home server with a database. It uses Bluetooth and Jini network technologies, and everything is connected through the Internet. The MobiHealth project [11] is similar to WWM in that it monitors a person’s health data using small medical sensors that transmit the data via a powerful and inexpensive wireless system. A combination of these sensors creates a body area network (BAN), and the project utilizes cell phone networks to transmit a signal on the fly from anywhere the network reaches.
Students at Duke University [5], as part of their DELTA Smart House design, described a system for monitoring sleeping patterns that is easy to use and inexpensive. In order to gather detailed sleep data, they used a pulse oximeter to record the user’s heart rate and respiratory rate, a watch-style actigraph to measure movement, in-bed thermistors for body temperature, and a microphone for audio. Their system achieves a low cost by using multifunctional sensors, but their choice of an actigraph adds considerably to the cost. Their approach depends on a computer for data interpretation, and the sensors themselves are not actually integrated; for instance, the watch actigraph must be plugged into a computer to transfer data, so collection is not seamless.
As part of the SENSATION Project [12], researchers have put together a system using the latest technology to detect sleep and sleepiness. They proposed using a ring that detects heart rate and wirelessly transmits the data, pressure-sensitive film to measure chest and limb movement, a microcamera to make sure a driver’s eyes are on the road, and BAN technology to have all the parts communicate wirelessly. As is apparent from the choice of sensors, the consortium is more focused on preventing drivers from falling asleep at the wheel than on collecting data for the diagnosis of sleeping disorders.
Taking a different, completely noninvasive approach to sleep monitoring, researchers at the University of Tokyo [13] have used the “surrounding sensor approach.” Instead of placing sensors on the subject’s body, they use motion sensors, cameras, and microphones placed in the surrounding environment to provide noninvasive monitoring. The downside to this approach, although it is meant for home use, is that it is not very portable and therefore must be semipermanent.
Another approach to inexpensive sleep monitoring has been implemented at the University of Washington, Seattle, with the use of multimodal sensors. Instead of an expensive actigraph, they investigated the possibility of using a passive infrared camera to record motion during sleep, a decision that carries the same consequences as the surrounding sensor approach and may be more difficult to set up than sensors that simply attach to the body.
The last system we will review is the FPGA-based sleep apnea screening device for home monitoring developed by researchers at the University of Cairo. The purpose of their system is to determine whether or not a patient should undergo a full polysomnography exam, rather than to be used in place of a sleep center. Also, unlike our system, it records the data on a Secure Digital card to be processed later by the doctor.
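None of the systems surveyed above publishes its data format here, so the following is a purely hypothetical Python sketch of the body-area-network pattern they share: on-body or in-bed sensors producing timestamped samples that are queued and forwarded over a wireless link to a base station. The SensorSample fields, the collect/flush helpers, and the example sensor names are all invented for illustration.

import json
import time
from dataclasses import dataclass, asdict
from queue import Queue

@dataclass
class SensorSample:
    """One timestamped reading from an on-body sensor (hypothetical schema)."""
    sensor_id: str    # e.g., "pulse_oximeter" or "bed_thermistor"
    value: float      # raw reading in the sensor's native unit
    unit: str         # "bpm", "degC", ...
    timestamp: float  # seconds since the epoch

outbox: "Queue[SensorSample]" = Queue()

def collect(sensor_id: str, value: float, unit: str) -> None:
    """Queue a reading for later wireless transmission to the base station."""
    outbox.put(SensorSample(sensor_id, value, unit, time.time()))

def flush(send) -> None:
    """Drain the queue, serializing each sample as JSON for the radio link.

    `send` stands in for whatever transport (Bluetooth, cellular) a real
    BAN would use; for a quick test it can simply be `print`.
    """
    while not outbox.empty():
        send(json.dumps(asdict(outbox.get())))

# Usage sketch:
# collect("pulse_oximeter", 62.0, "bpm")
# collect("bed_thermistor", 36.4, "degC")
# flush(print)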
[ "17153213", "15718645" ]
[ { "pmid": "17153213", "title": "Real-time monitoring of respiration rhythm and pulse rate during sleep.", "abstract": "A noninvasive and unconstrained real-time method to detect the respiration rhythm and pulse rate during sleep is presented. By employing the a trous algorithm of the wavelet transformation (WT), the respiration rhythm and pulse rate can be monitored in real-time from a pressure signal acquired with a pressure sensor placed under a pillow. The waveform for respiration rhythm detection is derived from the 26 scale approximation, while that for pulse rate detection is synthesized by combining the 2(4) and 2(5) scale details. To minimize the latency in data processing and realize the highest real-time performance, the respiration rhythm and pulse rate are estimated by using waveforms directly derived from the WT approximation and detail components without the reconstruction procedure. This method is evaluated with data collected from 13 healthy subjects. By comparing with detections from finger photoelectric plethysmograms used for pulse rate detection, the sensitivity and positive predictivity were 99.17% and 98.53%, respectively. Similarly, for respiration rhythm, compared with detections from nasal thermistor signals, results were 95.63% and 95.42%, respectively. This study suggests that the proposed method is promising to be used in a respiration rhythm and pulse rate monitor for real-time monitoring of sleep-related diseases during sleep." }, { "pmid": "15718645", "title": "Wireless body area networks for healthcare: the MobiHealth project.", "abstract": "The forthcoming wide availability of high bandwidth public wireless networks will give rise to new mobile health care services. Towards this direction the MobiHealth project has developed and trialed a highly customisable vital signals' monitoring system based on a Body Area Network (BAN) and an m-health service platform utilizing next generation public wireless networks. The developed system allows the incorporation of diverse medical sensors via wireless connections, and the live transmission of the measured vital signals over public wireless networks to healthcare providers. Nine trials with different health care cases and patient groups in four different European countries have been conducted to test and verify the system, the service and the network infrastructure for its suitability and the restrictions it imposes to mobile health care applications." } ]
PLoS Computational Biology
18535663
PMC2396503
10.1371/journal.pcbi.1000090
CSMET: Comparative Genomic Motif Detection via Multi-Resolution Phylogenetic Shadowing
Functional turnover of transcription factor binding sites (TFBSs), such as whole-motif loss or gain, is a common event during genome evolution. Conventional probabilistic phylogenetic shadowing methods model the evolution of genomes only at the nucleotide level and lack the ability to capture the evolutionary dynamics of functional turnover of aligned sequence entities. As a result, comparative genomic search for non-conserved motifs across evolutionarily related taxa remains a difficult challenge, especially in higher eukaryotes, where the cis-regulatory regions containing motifs can be long and divergent; existing methods rely heavily on specialized pattern-driven heuristic search or sampling algorithms, which can be difficult to generalize and hard to interpret based on phylogenetic principles. We propose a new method: Conditional Shadowing via Multi-resolution Evolutionary Trees, or CSMET, which uses a context-dependent probabilistic graphical model that allows aligned sites from different taxa in a multiple alignment to be modeled by either a background or an appropriate motif phylogeny, conditioned on the functional specifications of each taxon. The functional specifications themselves are the output of a phylogeny that models the evolution not of individual nucleotides, but of the overall functionality (e.g., functional retention or loss) of the aligned sequence segments over lineages. Combining this method with a hidden Markov model that autocorrelates evolutionary rates on successive sites in the genome, CSMET offers a principled way to take lineage-specific evolution of TFBSs into consideration during motif detection, and a readily computable analytical form of the posterior distribution of motifs under TFBS turnover. On both simulated and real Drosophila cis-regulatory modules, CSMET outperforms other state-of-the-art comparative genomic motif finders.
Related Work
Orthology-based motif detection methods developed so far are mainly based on nucleotide-level conservation. Some of the methods do not resort to a formal evolutionary model [14], but are guided either by empirical conservation measures [15]–[17], such as parsimonious substitution events or window-based nucleotide identity, or by empirical likelihood functions that do not explicitly model sequence evolution [4],[18],[19]. The advantage of these non-phylogeny-based methods lies in the simplicity of their design and their non-reliance on strong evolutionary assumptions. However, since they do not correspond to explicit evolutionary models, their utility is restricted to pure pattern search rather than analytical tasks such as ancestral inference or evolutionary parameter estimation. Some of these methods employ specialized heuristic search algorithms that are difficult to scale up to multiple species or to generalize to aligned sequences with high divergence.
Phylogenetic methods such as EMnEM [20], MONKEY [21], and our in-house implementation of PhyloHMM (originally developed in [1] for gene finding, but tailored in our version for motif search) explicitly adopt a complete and independent shadowing model at the nucleotide level. These methods are all based on the assumption of homogeneity of functionality across orthologous nucleotides, which is not always true even among relatively closely related species (e.g., of divergence less than 50 mya in Drosophila).
Empirical estimation and simulation of turnover events is an emerging subject in the literature [12],[22], but to our knowledge, no explicit evolutionary model for functional turnover has been proposed and brought to bear in comparative genomic search for non-conserved motifs. Thus our CSMET model represents an initial foray in this direction. Closely related to our work, two recent algorithms, rMonkey [12] (an extension of the MONKEY program) and PhyloGibbs [9] (a Gibbs sampling based motif detection algorithm), can also explicitly account for differential functionality among orthologs; both use the technique of shuffling or reducing the input alignment to create well-conserved local subalignments. In both methods, however, no explicit functional turnover model has been used to infer the turnover events. Another recent program, PhyME [10], partially addresses the incomplete-orthology issue via a heuristic that allows motifs present only in a pre-chosen reference taxon to also be detectable, but it is not clear how to generalize this ability to motifs present in arbitrary combinations of other taxa, and so far no well-founded evolutionary hypothesis or model has been provided to explain the heuristic. Non-homogeneous conservation due to selection across aligned sites has also been studied in DLESS [23] and PhastCons [24], but unlike in CSMET, no explicit substitution model for lineage-specific functional evolution was used in these algorithms, and the HMM-based model employed there makes it computationally much more expensive than CSMET to systematically explore all possible evolutionary hypotheses. A notable work in the context of protein classification proposed a phylogenomic model of protein functions, which employs a regression-like functional to model the evolution of protein functions, represented as feature vectors, along lineages in a complete phylogeny [25]; such ideas have not yet been explored for comparative genomic motif search.
Various nucleotide substitution models, including the Jukes-Cantor 69 (JC69) model [26] and the Felsenstein 81 (F81) model [27], have been employed in current phylogenetic shadowing or footprinting algorithms. PhyloGibbs and PhyME use an analogue of F81 proposed in [28], which is one of the simplest models that can handle arbitrary stationary distributions, a property necessary for modeling the specific PWMs of different motifs. Both PhyME and PhyloGibbs also offer the alternative of using a simplified star phylogeny in place of the full phylogenetic tree when dealing with a large number of taxa, which corresponds to an even simpler substitution process.
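For concreteness, the two substitution models named above admit closed-form transition probabilities. The expressions below are the standard textbook forms, not anything specific to CSMET, and the exact scaling of the rate parameters is a matter of convention. With branch length t, JC69 uses a single rate alpha, while F81 uses a rate beta together with an arbitrary stationary distribution pi:

\[
P^{\mathrm{JC69}}_{ij}(t) \;=\;
\begin{cases}
\tfrac{1}{4} + \tfrac{3}{4}\,e^{-4\alpha t} & i = j,\\[2pt]
\tfrac{1}{4} - \tfrac{1}{4}\,e^{-4\alpha t} & i \neq j,
\end{cases}
\qquad
P^{\mathrm{F81}}_{ij}(t) \;=\; \pi_j + \bigl(\delta_{ij} - \pi_j\bigr)\,e^{-\beta t}.
\]

In a shadowing setting, pi can be taken as the background base composition for the background phylogeny and as a PWM column for a motif phylogeny, which is what makes the ability of F81-style models to handle arbitrary stationary distributions relevant here.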
[ "12610304", "11997340", "14668220", "14988105", "12538242", "15757364", "15511292", "17040121", "10676967", "12824433", "16888351", "17993689", "17646303", "14992514", "15575972", "17956628", "14656959", "7288891", "8583911", "16452802", "12136103", "8193955", "15637633", "11875036", "15572468", "12519974", "16397004", "15572468", "15173120", "3934395" ]
[ { "pmid": "12610304", "title": "Phylogenetic shadowing of primate sequences to find functional regions of the human genome.", "abstract": "Nonhuman primates represent the most relevant model organisms to understand the biology of Homo sapiens. The recent divergence and associated overall sequence conservation between individual members of this taxon have nonetheless largely precluded the use of primates in comparative sequence studies. We used sequence comparisons of an extensive set of Old World and New World monkeys and hominoids to identify functional regions in the human genome. Analysis of these data enabled the discovery of primate-specific gene regulatory elements and the demarcation of the exons of multiple genes. Much of the information content of the comprehensive primate sequence comparisons could be captured with a small subset of phylogenetically close primates. These results demonstrate the utility of intraprimate sequence comparisons to discover common mammalian as well as primate-specific functional elements in the human genome, which are unattainable through the evaluation of more evolutionarily distant species." }, { "pmid": "11997340", "title": "Discovery of regulatory elements by a computational method for phylogenetic footprinting.", "abstract": "Phylogenetic footprinting is a method for the discovery of regulatory elements in a set of orthologous regulatory regions from multiple species. It does so by identifying the best conserved motifs in those orthologous regions. We describe a computer algorithm designed specifically for this purpose, making use of the phylogenetic relationships among the sequences under study to make more accurate predictions. The program is guaranteed to report all sets of motifs with the lowest parsimony scores, calculated with respect to the phylogenetic tree relating the input species. We report the results of this algorithm on several data sets of interest. A large number of known functional binding sites are identified by our method, but we also find several highly conserved motifs for which no function is yet known." }, { "pmid": "14668220", "title": "Combining phylogenetic data with co-regulated genes to identify regulatory motifs.", "abstract": "MOTIVATION\nDiscovery of regulatory motifs in unaligned DNA sequences remains a fundamental problem in computational biology. Two categories of algorithms have been developed to identify common motifs from a set of DNA sequences. The first can be called a 'multiple genes, single species' approach. It proposes that a degenerate motif is embedded in some or all of the otherwise unrelated input sequences and tries to describe a consensus motif and identify its occurrences. It is often used for co-regulated genes identified through experimental approaches. The second approach can be called 'single gene, multiple species'. It requires orthologous input sequences and tries to identify unusually well conserved regions by phylogenetic footprinting. Both approaches perform well, but each has some limitations. It is tempting to combine the knowledge of co-regulation among different genes and conservation among orthologous genes to improve our ability to identify motifs.\n\n\nRESULTS\nBased on the Consensus algorithm previously established by our group, we introduce a new algorithm called PhyloCon (Phylogenetic Consensus) that takes into account both conservation among orthologous genes and co-regulation of genes within a species. 
This algorithm first aligns conserved regions of orthologous sequences into multiple sequence alignments, or profiles, then compares profiles representing non-orthologous sequences. Motifs emerge as common regions in these profiles. Here we present a novel statistic to compare profiles of DNA sequences and a greedy approach to search for common subprofiles. We demonstrate that PhyloCon performs well on both synthetic and biological data.\n\n\nAVAILABILITY\nSoftware available upon request from the authors. http://ural.wustl.edu/softwares.html" }, { "pmid": "14988105", "title": "Multiple-sequence functional annotation and the generalized hidden Markov phylogeny.", "abstract": "MOTIVATION\nPhylogenetic shadowing is a comparative genomics principle that allows for the discovery of conserved regions in sequences from multiple closely related organisms. We develop a formal probabilistic framework for combining phylogenetic shadowing with feature-based functional annotation methods. The resulting model, a generalized hidden Markov phylogeny (GHMP), applies to a variety of situations where functional regions are to be inferred from evolutionary constraints.\n\n\nRESULTS\nWe show how GHMPs can be used to predict complete shared gene structures in multiple primate sequences. We also describe shadower, our implementation of such a prediction system. We find that shadower outperforms previously reported ab initio gene finders, including comparative human-mouse approaches, on a small sample of diverse exonic regions. Finally, we report on an empirical analysis of shadower's performance which reveals that as few as five well-chosen species may suffice to attain maximal sensitivity and specificity in exon demarcation.\n\n\nAVAILABILITY\nA Web server is available at http://bonaire.lbl.gov/shadower" }, { "pmid": "12538242", "title": "Gene finding with a hidden Markov model of genome structure and evolution.", "abstract": "MOTIVATION\nA growing number of genomes are sequenced. The differences in evolutionary pattern between functional regions can thus be observed genome-wide in a whole set of organisms. The diverse evolutionary pattern of different functional regions can be exploited in the process of genomic annotation. The modelling of evolution by the existing comparative gene finders leaves room for improvement.\n\n\nRESULTS\nA probabilistic model of both genome structure and evolution is designed. This type of model is called an Evolutionary Hidden Markov Model (EHMM), being composed of an HMM and a set of region-specific evolutionary models based on a phylogenetic tree. All parameters can be estimated by maximum likelihood, including the phylogenetic tree. It can handle any number of aligned genomes, using their phylogenetic tree to model the evolutionary correlations. The time complexity of all algorithms used for handling the model are linear in alignment length and genome number. The model is applied to the problem of gene finding. The benefit of modelling sequence evolution is demonstrated both in a range of simulations and on a set of orthologous human/mouse gene pairs.\n\n\nAVAILABILITY\nFree availability over the Internet on www server: http://www.birc.dk/Software/evogene." }, { "pmid": "15757364", "title": "Functional evolution of a cis-regulatory module.", "abstract": "Lack of knowledge about how regulatory regions evolve in relation to their structure-function may limit the utility of comparative sequence analysis in deciphering cis-regulatory sequences. 
To address this we applied reverse genetics to carry out a functional genetic complementation analysis of a eukaryotic cis-regulatory module-the even-skipped stripe 2 enhancer-from four Drosophila species. The evolution of this enhancer is non-clock-like, with important functional differences between closely related species and functional convergence between distantly related species. Functional divergence is attributable to differences in activation levels rather than spatiotemporal control of gene expression. Our findings have implications for understanding enhancer structure-function, mechanisms of speciation and computational identification of regulatory modules." }, { "pmid": "15511292", "title": "PhyME: a probabilistic algorithm for finding motifs in sets of orthologous sequences.", "abstract": "BACKGROUND\nThis paper addresses the problem of discovering transcription factor binding sites in heterogeneous sequence data, which includes regulatory sequences of one or more genes, as well as their orthologs in other species.\n\n\nRESULTS\nWe propose an algorithm that integrates two important aspects of a motif's significance - overrepresentation and cross-species conservation - into one probabilistic score. The algorithm allows the input orthologous sequences to be related by any user-specified phylogenetic tree. It is based on the Expectation-Maximization technique, and scales well with the number of species and the length of input sequences. We evaluate the algorithm on synthetic data, and also present results for data sets from yeast, fly, and human.\n\n\nCONCLUSIONS\nThe results demonstrate that the new approach improves motif discovery by exploiting multiple species information." }, { "pmid": "17040121", "title": "Large-scale turnover of functional transcription factor binding sites in Drosophila.", "abstract": "The gain and loss of functional transcription factor binding sites has been proposed as a major source of evolutionary change in cis-regulatory DNA and gene expression. We have developed an evolutionary model to study binding-site turnover that uses multiple sequence alignments to assess the evolutionary constraint on individual binding sites, and to map gain and loss events along a phylogenetic tree. We apply this model to study the evolutionary dynamics of binding sites of the Drosophila melanogaster transcription factor Zeste, using genome-wide in vivo (ChIP-chip) binding data to identify functional Zeste binding sites, and the genome sequences of D. melanogaster, D. simulans, D. erecta, and D. yakuba to study their evolution. We estimate that more than 5% of functional Zeste binding sites in D. melanogaster were gained along the D. melanogaster lineage or lost along one of the other lineages. We find that Zeste-bound regions have a reduced rate of binding-site loss and an increased rate of binding-site gain relative to flanking sequences. Finally, we show that binding-site gains and losses are asymmetrically distributed with respect to D. melanogaster, consistent with lineage-specific acquisition and loss of Zeste-responsive regulatory elements." }, { "pmid": "10676967", "title": "Evidence for stabilizing selection in a eukaryotic enhancer element.", "abstract": "Eukaryotic gene expression is mediated by compact cis-regulatory modules, or enhancers, which are bound by specific sets of transcription factors. The combinatorial interaction of these bound transcription factors determines time- and tissue-specific gene activation or repression. 
The even-skipped stripe 2 element controls the expression of the second transverse stripe of even-skipped messenger RNA in Drosophila melanogaster embryos, and is one of the best characterized eukaryotic enhancers. Although even-skipped stripe 2 expression is strongly conserved in Drosophila, the stripe 2 element itself has undergone considerable evolutionary change in its binding-site sequences and the spacing between them. We have investigated this apparent contradiction, and here we show that two chimaeric enhancers, constructed by swapping the 5' and 3' halves of the native stripe 2 elements of two species, no longer drive expression of a reporter gene in the wildtype pattern. Sequence differences between species have functional consequences, therefore, but they are masked by other co-evolved differences. On the basis of these results, we present a model for the evolution of eukaryotic regulatory sequences." }, { "pmid": "12824433", "title": "FootPrinter: A program designed for phylogenetic footprinting.", "abstract": "Phylogenetic footprinting is a method for the discovery of regulatory elements in a set of homologous regulatory regions, usually collected from multiple species. It does so by identifying the best conserved motifs in those homologous regions. This note describes web software that has been designed specifically for this purpose, making use of the phylogenetic relationships among the homologous sequences in order to make more accurate predictions. The software is called FootPrinter and is available at http://bio.cs.washington.edu/software.html." }, { "pmid": "16888351", "title": "VISTA family of computational tools for comparative analysis of DNA sequences and whole genomes.", "abstract": "Comparative analysis of DNA sequences is becoming one of the major methods for discovery of functionally important genomic intervals. Presented here the VISTA family of computational tools was built to help researchers in this undertaking. These tools allow the researcher to align DNA sequences, quickly visualize conservation levels between them, identify highly conserved regions, and analyze sequences of interest through one of the following approaches: . Browse precomputed whole-genome alignments of vertebrates and other groups of organisms. . Submit sequences to Genome VISTA to align them to whole genomes. . Submit two or more sequences to mVISTA to align them with each other (a variety of alignment programs with several distinct capabilities are made available).. Submit sequences to Regulatory VISTA (rVISTA) to perform transcription factor binding site predictions based on conservation within sequence alignments.Use stand-alone alignment and visualization programs to run comparative sequence analysis locally All VISTA tools use standard algorithms for visualization and conservation analysis to make comparison of results from different programs more straightforward. The web page http://genome.lbl.gov/vista/ serves as a portal for access to all VISTA tools. Our support group can be reached by email at [email protected]." }, { "pmid": "17993689", "title": "Web-based identification of evolutionary conserved DNA cis-regulatory elements.", "abstract": "Transcription regulation on a gene-by-gene basis is achieved through transcription factors, the DNA-binding proteins that recognize short DNA sequences in the proximity of the genes. Unlike other DNA-binding proteins, each transcription factor recognizes a number of sequences, usually variants of a preferred, \"consensus\" sequence. 
The degree of dissimilarity of a given target sequence from the consensus is indicative of the binding affinity of the transcription factor-DNA interaction. Because of the short size and the degeneracy of the patterns, it is frequently difficult for a computational algorithm to distinguish between the true sites and the background genomic \"noise.\" One way to overcome this problem of low signal-to-noise ratio is to use evolutionary information to detect signals that are conserved in two or more species. FOOTER is an algorithm that uses this phylogenetic footprinting concept and evaluates putative mammalian transcription factor binding sites in a quantitative way. The user is asked to upload the human and mouse promoter sequences and select the transcription factors to be analyzed. The results' page presents an alignment of the two sequences (color-coded by degree of conservation) and information about the predicted sites and single-nucleotide polymorphisms found around the predicted sites. This chapter presents the main aspects of the underlying method and gives detailed instructions and tips on the use of this web-based tool." }, { "pmid": "17646303", "title": "A statistical method for alignment-free comparison of regulatory sequences.", "abstract": "MOTIVATION\nThe similarity of two biological sequences has traditionally been assessed within the well-established framework of alignment. Here we focus on the task of identifying functional relationships between cis-regulatory sequences that are non-orthologous or greatly diverged. 'Alignment-free' measures of sequence similarity are required in this regime.\n\n\nRESULTS\nWe investigate the use of a new score for alignment-free sequence comparison, called the score. It is based on comparing the frequencies of all fixed-length words in the two sequences. An important, novel feature of the score is that it is comparable across sequence pairs drawn from arbitrary background distributions. We present a method that gives quadratic improvement in the time complexity of calculating the score, over the naïve method. We then evaluate the score on several tissue-specific families of cis-regulatory modules (in Drosophila and human). The new score is highly successful in discriminating functionally related regulatory sequences from unrelated sequence pairs. The performance of the score is compared to five other alignment-free similarity measures, and shown to be consistently superior to all of these measures.\n\n\nAVAILABILITY\nOur implementation of the score will be made freely available as source code, upon publication of this article, at: http://veda.cs.uiuc.edu/d2z/.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "14992514", "title": "Phylogenetic motif detection by expectation-maximization on evolutionary mixtures.", "abstract": "The preferential conservation of transcription factor binding sites implies that non-coding sequence data from related species will prove a powerful asset to motif discovery. We present a unified probabilistic framework for motif discovery that incorporates evolutionary information. We treat aligned DNA sequence as a mixture of evolutionary models, for motif and background, and, following the example of the MEME program, provide an algorithm to estimate the parameters by Expectation-Maximization. 
We examine a variety of evolutionary models and show that our approach can take advantage of phylogenic information to avoid false positives and discover motifs upstream of groups of characterized target genes. We compare our method to traditional motif finding on only conserved regions. An implementation will be made available at http://rana.lbl.gov." }, { "pmid": "15575972", "title": "MONKEY: identifying conserved transcription-factor binding sites in multiple alignments using a binding site-specific evolutionary model.", "abstract": "We introduce a method (MONKEY) to identify conserved transcription-factor binding sites in multispecies alignments. MONKEY employs probabilistic models of factor specificity and binding-site evolution, on which basis we compute the likelihood that putative sites are conserved and assign statistical significance to each hit. Using genomes from the genus Saccharomyces, we illustrate how the significance of real sites increases with evolutionary distance and explore the relationship between conservation and function." }, { "pmid": "17956628", "title": "Phylogenetic simulation of promoter evolution: estimation and modeling of binding site turnover events and assessment of their impact on alignment tools.", "abstract": "BACKGROUND\nThe phenomenon of functional site turnover has important implications for the study of regulatory region evolution, such as for promoter sequence alignments and transcription factor binding site (TFBS) identification. At present, it remains difficult to estimate TFBS turnover rates on real genomic sequences, as reliable mappings of functional sites across related species are often not available. As an alternative, we introduce a flexible new simulation system, Phylogenetic Simulation of Promoter Evolution (PSPE), designed to study functional site turnovers in regulatory sequences.\n\n\nRESULTS\nUsing PSPE, we study replacement turnover rates of different individual TFBSs and simple modules of two sites under neutral evolutionary functional constraints. We find that TFBS replacement turnover can happen rapidly in promoters, and turnover rates vary significantly among different TFBSs and modules. We assess the influence of different constraints such as insertion/deletion rate and translocation distances. Complementing the simulations, we give simple but effective mathematical models for TFBS turnover rate prediction. As one important application of PSPE, we also present a first systematic evaluation of multiple sequence aligners regarding their capability of detecting TFBSs in promoters with site turnovers.\n\n\nCONCLUSION\nPSPE allows researchers for the first time to investigate TFBS replacement turnovers in promoters systematically. The assessment of alignment tools points out the limitations of current approaches to identify TFBSs in non-coding sequences, where turnover events of functional sites may happen frequently, and where we are interested in assessing the similarity on the functional level. PSPE is freely available at the authors' website." }, { "pmid": "14656959", "title": "Identification and characterization of multi-species conserved sequences.", "abstract": "Comparative sequence analysis has become an essential component of studies aiming to elucidate genome function. The increasing availability of genomic sequences from multiple vertebrates is creating the need for computational methods that can detect highly conserved regions in a robust fashion. 
Towards that end, we are developing approaches for identifying sequences that are conserved across multiple species; we call these \"Multi-species Conserved Sequences\" (or MCSs). Here we report two strategies for MCS identification, demonstrating their ability to detect virtually all known actively conserved sequences (specifically, coding sequences) but very little neutrally evolving sequence (specifically, ancestral repeats). Importantly, we find that a substantial fraction of the bases within MCSs (approximately 70%) resides within non-coding regions; thus, the majority of sequences conserved across multiple vertebrate species has no known function. Initial characterization of these MCSs has revealed sequences that correspond to clusters of transcription factor-binding sites, non-coding RNA transcripts, and other candidate functional elements. Finally, the ability to detect MCSs represents a valuable metric for assessing the relative contribution of a species' sequence to identifying genomic regions of interest, and our results indicate that the currently available genome sequences are insufficient for the comprehensive identification of MCSs in the human genome." }, { "pmid": "7288891", "title": "Evolutionary trees from DNA sequences: a maximum likelihood approach.", "abstract": "The application of maximum likelihood techniques to the estimation of evolutionary trees from nucleic acid sequence data is discussed. A computationally feasible method for finding such maximum likelihood estimates is developed, and a computer program is available. This method has advantages over the traditional parsimony algorithms, which can give misleading results if rates of evolution differ in different lineages. It also allows the testing of hypotheses about the constancy of evolutionary rates by likelihood ratio tests, and gives rough indication of the error of ;the estimate of the tree." }, { "pmid": "8583911", "title": "A Hidden Markov Model approach to variation among sites in rate of evolution.", "abstract": "The method of Hidden Markov Models is used to allow for unequal and unknown evolutionary rates at different sites in molecular sequences. Rates of evolution at different sites are assumed to be drawn from a set of possible rates, with a finite number of possibilities. The overall likelihood of phylogeny is calculated as a sum of terms, each term being the probability of the data given a particular assignment of rates to sites, times the prior probability of that particular combination of rates. The probabilities of different rate combinations are specified by a stationary Markov chain that assigns rate categories to sites. While there will be a very large number of possible ways of assigning rates to sites, a simple recursive algorithm allows the contributions to the likelihood from all possible combinations of rates to be summed, in a time proportional to the number of different rates at a single site. Thus with three rates, the effort involved is no greater than three times that for a single rate. This \"Hidden Markov Model\" method allows for rates to differ between sites and for correlations between the rates of neighboring sites. By summing over all possibilities it does not require us to know the rates at individual sites. However, it does not allow for correlation of rates at nonadjacent sites, nor does it allow for a continuous distribution of rates over sites. 
It is shown how to use the Newton-Raphson method to estimate branch lengths of a phylogeny and to infer from a phylogeny what assignment of rates to sites has the largest posterior probability. An example is given using beta-hemoglobin DNA sequences in eight mammal species; the regions of high and low evolutionary rates are inferred and also the average length of patches of similar rates." }, { "pmid": "16452802", "title": "LOGOS: a modular Bayesian model for de novo motif detection.", "abstract": "The complexity of the global organization and internal structures of motifs in higher eukaryotic organisms raises significant challenges for motif detection techniques. To achieve successful de novo motif detection it is necessary to model the complex dependencies within and among motifs and incorporate biological prior knowledge. In this paper, we present LOGOS, an integrated LOcal and GlObal motif Sequence model for biopolymer sequences, which provides a principled framework for developing, modularizing, extending and computing expressive motif models for complex biopolymer sequence analysis. LOGOS consists of two interacting submodels: HMDM, a local alignment model capturing biological prior knowledge and positional dependence within the motif local structure; and HMM, a global motif distribution model modeling frequencies and dependencies of motif occurrences. Model parameters can be fit using training motifs within an empirical Bayesian framework. A variational EM algorithm is developed for de novo motif detection. LOGOS improves over existing models that ignore biological priors and dependencies in motif structures and motif occurrences, and demonstrates superior performance on both semi-realistic test data and cis-regulatory sequences from yeast and Drosophila sequences with regard to sensitivity, specificity, flexibility and extensibility." }, { "pmid": "12136103", "title": "Statistical significance of clusters of motifs represented by position specific scoring matrices in nucleotide sequences.", "abstract": "The human genome encodes the transcriptional control of its genes in clusters of cis-elements that constitute enhancers, silencers and promoter signals. The sequence motifs of individual cis- elements are usually too short and degenerate for confident detection. In most cases, the requirements for organization of cis-elements within these clusters are poorly understood. Therefore, we have developed a general method to detect local concentrations of cis-element motifs, using predetermined matrix representations of the cis-elements, and calculate the statistical significance of these motif clusters. The statistical significance calculation is highly accurate not only for idealized, pseudorandom DNA, but also for real human DNA. We use our method 'cluster of motifs E-value tool' (COMET) to make novel predictions concerning the regulation of genes by transcription factors associated with muscle. COMET performs comparably with two alternative state-of-the-art techniques, which are more complex and lack E-value calculations. Our statistical method enables us to clarify the major bottleneck in the hard problem of detecting cis-regulatory regions, which is that many known enhancers do not contain very significant clusters of the motif types that we search for. Thus, discovery of additional signals that belong to these regulatory regions will be the key to future progress." 
}, { "pmid": "8193955", "title": "fastDNAmL: a tool for construction of phylogenetic trees of DNA sequences using maximum likelihood.", "abstract": "We have developed a new tool, called fastDNAml, for constructing phylogenetic trees from DNA sequences. The program can be run on a wide variety of computers ranging from Unix workstations to massively parallel systems, and is available from the Ribosomal Database Project (RDP) by anonymous FTP. Our program uses a maximum likelihood approach and is based on version 3.3 of Felsenstein's dnaml program. Several enhancements, including algorithmic changes, significantly improve performance and reduce memory usage, making it feasible to construct even very large trees. Trees containing 40-100 taxa have been easily generated, and phylogenetic estimates are possible even when hundreds of sequences exist. We are currently using the tool to construct a phylogenetic tree based on 473 small subunit rRNA sequences from prokaryotes." }, { "pmid": "15637633", "title": "Assessing computational tools for the discovery of transcription factor binding sites.", "abstract": "The prediction of regulatory elements is a problem where computational methods offer great hope. Over the past few years, numerous tools have become available for this task. The purpose of the current assessment is twofold: to provide some guidance to users regarding the accuracy of currently available tools in various settings, and to provide a benchmark of data sets for assessing future tools." }, { "pmid": "11875036", "title": "Extraction of functional binding sites from unique regulatory regions: the Drosophila early developmental enhancers.", "abstract": "The early developmental enhancers of Drosophila melanogaster comprise one of the most sophisticated regulatory systems in higher eukaryotes. An elaborate code in their DNA sequence translates both maternal and early embryonic regulatory signals into spatial distribution of transcription factors. One of the most striking features of this code is the redundancy of binding sites for these transcription factors (BSTF). Using this redundancy, we explored the possibility of predicting functional binding sites in a single enhancer region without any prior consensus/matrix description or evolutionary sequence comparisons. We developed a conceptually simple algorithm, Scanseq, that employs an original statistical evaluation for identifying the most redundant motifs and locates the position of potential BSTF in a given regulatory region. To estimate the biological relevance of our predictions, we built thorough literature-based annotations for the best-known Drosophila developmental enhancers and we generated detailed distribution maps for the most robust binding sites. The high statistical correlation between the location of BSTF in these experiment-based maps and the location predicted in silico by Scanseq confirmed the relevance of our approach. We also discuss the definition of true binding sites and the possible biological principles that govern patterning of regulatory regions and the distribution of transcriptional signals." 
}, { "pmid": "15572468", "title": "Drosophila DNase I footprint database: a systematic genome annotation of transcription factor binding sites in the fruitfly, Drosophila melanogaster.", "abstract": "UNLABELLED\nDespite increasing numbers of computational tools developed to predict cis-regulatory sequences, the availability of high-quality datasets of transcription factor binding sites limits advances in the bioinformatics of gene regulation. Here we present such a dataset based on a systematic literature curation and genome annotation of DNase I footprints for the fruitfly, Drosophila melanogaster. Using the experimental results of 201 primary references, we annotated 1367 binding sites from 87 transcription factors and 101 target genes in the D.melanogaster genome sequence. These data will provide a rich resource for future bioinformatics analyses of transcriptional regulation in Drosophila such as constructing motif models, training cis-regulatory module detectors, benchmarking alignment tools and continued text mining of the extensive literature on transcriptional regulation in this important model organism.\n\n\nAVAILABILITY\nhttp://www.flyreg.org/\n\n\nCONTACT\[email protected]." }, { "pmid": "12519974", "title": "The FlyBase database of the Drosophila genome projects and community literature.", "abstract": "FlyBase (http://flybase.bio.indiana.edu/) provides an integrated view of the fundamental genomic and genetic data on the major genetic model Drosophila melanogaster and related species. FlyBase has primary responsibility for the continual reannotation of the D. melanogaster genome. The ultimate goal of the reannotation effort is to decorate the euchromatic sequence of the genome with as much biological information as is available from the community and from the major genome project centers. A complete revision of the annotations of the now-finished euchromatic genomic sequence has been completed. There are many points of entry to the genome within FlyBase, most notably through maps, gene products and ontologies, structured phenotypic and gene expression data, and anatomy." }, { "pmid": "16397004", "title": "ORegAnno: an open access database and curation system for literature-derived promoters, transcription factor binding sites and regulatory variation.", "abstract": "MOTIVATION\nOur understanding of gene regulation is currently limited by our ability to collectively synthesize and catalogue transcriptional regulatory elements stored in scientific literature. Over the past decade, this task has become increasingly challenging as the accrual of biologically validated regulatory sequences has accelerated. To meet this challenge, novel community-based approaches to regulatory element annotation are required.\n\n\nSUMMARY\nHere, we present the Open Regulatory Annotation (ORegAnno) database as a dynamic collection of literature-curated regulatory regions, transcription factor binding sites and regulatory mutations (polymorphisms and haplotypes). ORegAnno has been designed to manage the submission, indexing and validation of new annotations from users worldwide. Submissions to ORegAnno are immediately cross-referenced to EnsEMBL, dbSNP, Entrez Gene, the NCBI Taxonomy database and PubMed, where appropriate.\n\n\nAVAILABILITY\nORegAnno is available directly through MySQL, Web services, and online at http://www.oreganno.org. All software is licensed under the Lesser GNU Public License (LGPL)." 
}, { "pmid": "15572468", "title": "Drosophila DNase I footprint database: a systematic genome annotation of transcription factor binding sites in the fruitfly, Drosophila melanogaster.", "abstract": "UNLABELLED\nDespite increasing numbers of computational tools developed to predict cis-regulatory sequences, the availability of high-quality datasets of transcription factor binding sites limits advances in the bioinformatics of gene regulation. Here we present such a dataset based on a systematic literature curation and genome annotation of DNase I footprints for the fruitfly, Drosophila melanogaster. Using the experimental results of 201 primary references, we annotated 1367 binding sites from 87 transcription factors and 101 target genes in the D.melanogaster genome sequence. These data will provide a rich resource for future bioinformatics analyses of transcriptional regulation in Drosophila such as constructing motif models, training cis-regulatory module detectors, benchmarking alignment tools and continued text mining of the extensive literature on transcriptional regulation in this important model organism.\n\n\nAVAILABILITY\nhttp://www.flyreg.org/\n\n\nCONTACT\[email protected]." }, { "pmid": "15173120", "title": "WebLogo: a sequence logo generator.", "abstract": "WebLogo generates sequence logos, graphical representations of the patterns within a multiple sequence alignment. Sequence logos provide a richer and more precise description of sequence similarity than consensus sequences and can rapidly reveal significant features of the alignment otherwise difficult to perceive. Each logo consists of stacks of letters, one stack for each position in the sequence. The overall height of each stack indicates the sequence conservation at that position (measured in bits), whereas the height of symbols within the stack reflects the relative frequency of the corresponding amino or nucleic acid at that position. WebLogo has been enhanced recently with additional features and options, to provide a convenient and highly configurable sequence logo generator. A command line interface and the complete, open WebLogo source code are available for local installation and customization." }, { "pmid": "3934395", "title": "Dating of the human-ape splitting by a molecular clock of mitochondrial DNA.", "abstract": "A new statistical method for estimating divergence dates of species from DNA sequence data by a molecular clock approach is developed. This method takes into account effectively the information contained in a set of DNA sequence data. The molecular clock of mitochondrial DNA (mtDNA) was calibrated by setting the date of divergence between primates and ungulates at the Cretaceous-Tertiary boundary (65 million years ago), when the extinction of dinosaurs occurred. A generalized least-squares method was applied in fitting a model to mtDNA sequence data, and the clock gave dates of 92.3 +/- 11.7, 13.3 +/- 1.5, 10.9 +/- 1.2, 3.7 +/- 0.6, and 2.7 +/- 0.6 million years ago (where the second of each pair of numbers is the standard deviation) for the separation of mouse, gibbon, orangutan, gorilla, and chimpanzee, respectively, from the line leading to humans. Although there is some uncertainty in the clock, this dating may pose a problem for the widely believed hypothesis that the pipedal creature Australopithecus afarensis, which lived some 3.7 million years ago at Laetoli in Tanzania and at Hadar in Ethiopia, was ancestral to man and evolved after the human-ape splitting. 
Another likelier possibility is that mtDNA was transferred through hybridization between a proto-human and a proto-chimpanzee after the former had developed bipedalism." } ]
PLoS Computational Biology
18688266
PMC2453237
10.1371/journal.pcbi.1000131
Modeling the Violation of Reward Maximization and Invariance in Reinforcement Schedules
It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as “schedule length effect”). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: “framing,” wherein equivalent options are treated differently depending on the context in which they are presented, and the “sunk cost” effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. The schedule length effect might be a manifestation of these phenomena in monkeys.
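The abstract's claim that standard temporal difference methods cannot produce the schedule length effect can be checked with a small toy simulation. The sketch below is illustrative only and is not the authors' implementation: each schedule is represented as a chain of states indexed by (schedule length, trial number), tabular TD(0) is run with an arbitrary learning rate ALPHA and discount GAMMA, and the learned value of a state ends up depending only on its distance from reward, so a value-driven error rule would treat trials equally distant from reward identically regardless of schedule length.

# Toy TD(0) simulation of visually cued reward schedules (illustrative only;
# not the authors' code). Each schedule of length L is a chain of states
# (L, 1), (L, 2), ..., (L, L); the last state delivers reward 1.

import random
from collections import defaultdict

ALPHA = 0.1      # learning rate (arbitrary choice)
GAMMA = 0.9      # discount factor (arbitrary choice)
SCHEDULES = [1, 2, 3]

V = defaultdict(float)  # tabular state values

for episode in range(20000):
    L = random.choice(SCHEDULES)
    for trial in range(1, L + 1):
        state = (L, trial)
        reward = 1.0 if trial == L else 0.0
        next_value = V[(L, trial + 1)] if trial < L else 0.0
        # Standard TD(0) update: the error depends only on the reward and the
        # value of the next state.
        delta = reward + GAMMA * next_value - V[state]
        V[state] += ALPHA * delta

# Values depend only on the number of trials remaining before reward:
# e.g. V[(2, 1)] and V[(3, 2)] both converge to roughly GAMMA, so an error
# rule driven by these values alone cannot produce a schedule length effect.
for L in SCHEDULES:
    print([round(V[(L, t)], 3) for t in range(1, L + 1)])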
Related WorkThe extension of RL to capture the fundamental role of motivation in reinforcement schedules is currently a major challenge for the field, and other authors have also considered how to include motivation in RL [14],[15]. These authors focused on incorporating overall drive (e.g., degree of hunger or thirst) so as to describe how habitual responses can be modified by the current motivational level, which is, in turn, assumed to influence generalized drive through sensitivity to average reward levels [39]. In the reward schedule, however, we focused on how motivation orients behavior in a trial-specific, not generalized, manner. In such a case, an alternative solution to ascribing errors to a decreased level of motivation is Pavlovian-instrumental competition, which has been used to explain suboptimal behavior [16]. Applied to the reward schedule task, this solution would posit that error trials result from the competition between the negative valence of the valid cue associated with an unrewarded trial (acquired through a Pavlovian-like mechanism) and the incentive to perform the same trial correctly to reach the end of the schedule and obtain reward. This interpretation is somewhat supported by the fact that the visual cues have no instrumental role in the reward schedule (they are neither triggers nor instructors of correct behavioral actions). The schedule length effect, however, escapes explanations in terms of Pavlovian-instrumental competition and would still have to be taken into account. Instead, the single motivational mechanism put forward in this work accounts for all the aspects of the behavior; has a natural interpretation in terms of learned motivation to act, however it originated; and can be extended to general MDPs.A dependence on the value of the preceding state implemented in our learning rule suggests an explanation of the schedule length effect as a history effect. When environmental cues are not perfect predictors of the availability of resources, monkeys' decisions about where to forage depend on past information such as the history of preceding reinforcements [40] or stored information about recent trends in weather [41]. Lau and Glimcher [28] found that past choices, in addition to past reinforcements, must be taken into account to predict the trial-by-trial behavior of rhesus monkeys engaged in a choice task resulting in matching behavior. However, contrary to the statistical description of Lau and Glimcher [28], past information in our model acts on the learning rule, not directly on the action selection process, and it does so through the value of the previous state, as opposed to past reinforcements or past choice history. Taken together, these findings point to some form of sensitivity to preceding actions and visited states (or their values) in primates' foraging behavior, and the schedule length effect might be a side effect of such a mechanism, perhaps also present in other forms of reinforcement learning.
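The excerpt states that the learning rule depends on the value of the preceding state but does not reproduce the update equation. The sketch below shows one hypothetical way such a dependence could be written, by adding a term proportional to the previous state's value to the TD target; the coefficient KAPPA and this particular functional form are assumptions for illustration and are not taken from the paper. The point is only qualitative: with a history term of this kind, states equally distant from reward can settle at different values in schedules of different length, which is the signature that the plain TD(0) sketch above lacks.

# Hypothetical history-sensitive variant of the toy TD(0) model above
# (illustration only: the coefficient KAPPA and the way the previous state's
# value enters the target are assumptions, not the rule used in the paper).

import random
from collections import defaultdict

ALPHA, GAMMA, KAPPA = 0.1, 0.9, 0.2
SCHEDULES = [1, 2, 3]
V = defaultdict(float)

for episode in range(20000):
    L = random.choice(SCHEDULES)
    prev_value = 0.0                      # no preceding trial at schedule start
    for trial in range(1, L + 1):
        state = (L, trial)
        reward = 1.0 if trial == L else 0.0
        next_value = V[(L, trial + 1)] if trial < L else 0.0
        # The target now also contains a term proportional to the value of the
        # state visited on the preceding trial.
        delta = reward + GAMMA * next_value + KAPPA * prev_value - V[state]
        V[state] += ALPHA * delta
        prev_value = V[state]

# Unlike plain TD(0), states equally distant from reward now settle at
# different values in schedules of different length, e.g. V[(3, 2)] > V[(2, 1)].
for L in SCHEDULES:
    print(L, [round(V[(L, t)], 3) for t in range(1, L + 1)])

Any other monotone way of letting the preceding trial influence the update would serve the same illustrative purpose; this form was chosen only because its fixed point is easy to compute by hand.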
[ "17096592", "16843041", "16311337", "16929307", "16192338", "12383782", "16938432", "8867118", "7455683", "10712488", "9502820", "16882024", "16319307", "15302926", "16596980", "16778890", "9054347", "8774460", "11100152", "9802995", "12900173", "12371510", "17187065", "15205529", "16782015", "12040201", "15087550", "17872398", "11867709", "16543461", "16888142", "17434918", "16807345", "15987953" ]
[ { "pmid": "17096592", "title": "Humans can adopt optimal discounting strategy under real-time constraints.", "abstract": "Critical to our many daily choices between larger delayed rewards, and smaller more immediate rewards, are the shape and the steepness of the function that discounts rewards with time. Although research in artificial intelligence favors exponential discounting in uncertain environments, studies with humans and animals have consistently shown hyperbolic discounting. We investigated how humans perform in a reward decision task with temporal constraints, in which each choice affects the time remaining for later trials, and in which the delays vary at each trial. We demonstrated that most of our subjects adopted exponential discounting in this experiment. Further, we confirmed analytically that exponential discounting, with a decay rate comparable to that used by our subjects, maximized the total reward gain in our task. Our results suggest that the particular shape and steepness of temporal discounting is determined by the task that the subject is facing, and question the notion of hyperbolic reward discounting as a universal principle." }, { "pmid": "16843041", "title": "A normative perspective on motivation.", "abstract": "Understanding the effects of motivation on instrumental action selection, and specifically on its two main forms, goal-directed and habitual control, is fundamental to the study of decision making. Motivational states have been shown to 'direct' goal-directed behavior rather straightforwardly towards more valuable outcomes. However, how motivational states can influence outcome-insensitive habitual behavior is more mysterious. We adopt a normative perspective, assuming that animals seek to maximize the utilities they achieve, and viewing motivation as a mapping from outcomes to utilities. We suggest that habitual action selection can direct responding properly only in motivational states which pertained during behavioral training. However, in novel states, we propose that outcome-independent, global effects of the utilities can 'energize' habitual actions." }, { "pmid": "16311337", "title": "Representation of action-specific reward values in the striatum.", "abstract": "The estimation of the reward an action will yield is critical in decision-making. To elucidate the role of the basal ganglia in this process, we recorded striatal neurons of monkeys who chose between left and right handle turns, based on the estimated reward probabilities of the actions. During a delay period before the choices, the activity of more than one-third of striatal projection neurons was selective to the values of one of the two actions. Fewer neurons were tuned to relative values or action choice. These results suggest representation of action values in the striatum, which can guide action selection in the basal ganglia circuit." }, { "pmid": "16929307", "title": "Dopamine-dependent prediction errors underpin reward-seeking behaviour in humans.", "abstract": "Theories of instrumental learning are centred on understanding how success and failure are used to improve future decisions. These theories highlight a central role for reward prediction errors in updating the values associated with available actions. In animals, substantial evidence indicates that the neurotransmitter dopamine might have a key function in this type of learning, through its ability to modulate cortico-striatal synaptic efficacy. 
However, no direct evidence links dopamine, striatal activity and behavioural choice in humans. Here we show that, during instrumental learning, the magnitude of reward prediction error expressed in the striatum is modulated by the administration of drugs enhancing (3,4-dihydroxy-L-phenylalanine; L-DOPA) or reducing (haloperidol) dopaminergic function. Accordingly, subjects treated with L-DOPA have a greater propensity to choose the most rewarding action relative to subjects treated with haloperidol. Furthermore, incorporating the magnitude of the prediction errors into a standard action-value learning algorithm accurately reproduced subjects' behavioural choices under the different drug conditions. We conclude that dopamine-dependent modulation of striatal activity can account for how the human brain uses reward prediction errors to improve future decisions." }, { "pmid": "16192338", "title": "Different neural correlates of reward expectation and reward expectation error in the putamen and caudate nucleus during stimulus-action-reward association learning.", "abstract": "To select appropriate behaviors leading to rewards, the brain needs to learn associations among sensory stimuli, selected behaviors, and rewards. Recent imaging and neural-recording studies have revealed that the dorsal striatum plays an important role in learning such stimulus-action-reward associations. However, the putamen and caudate nucleus are embedded in distinct cortico-striatal loop circuits, predominantly connected to motor-related cerebral cortical areas and frontal association areas, respectively. This difference in their cortical connections suggests that the putamen and caudate nucleus are engaged in different functional aspects of stimulus-action-reward association learning. To determine whether this is the case, we conducted an event-related and computational model-based functional MRI (fMRI) study with a stochastic decision-making task in which a stimulus-action-reward association must be learned. A simple reinforcement learning model not only reproduced the subject's action selections reasonably well but also allowed us to quantitatively estimate each subject's temporal profiles of stimulus-action-reward association and reward-prediction error during learning trials. These two internal representations were used in the fMRI correlation analysis. The results revealed that neural correlates of the stimulus-action-reward association reside in the putamen, whereas a correlation with reward-prediction error was found largely in the caudate nucleus and ventral striatum. These nonuniform spatiotemporal distributions of neural correlates within the dorsal striatum were maintained consistently at various levels of task difficulty, suggesting a functional difference in the dorsal striatum between the putamen and caudate nucleus during stimulus-action-reward association learning." }, { "pmid": "12383782", "title": "Reward, motivation, and reinforcement learning.", "abstract": "There is substantial evidence that dopamine is involved in reward learning and appetitive conditioning. However, the major reinforcement learning-based theoretical models of classical conditioning (crudely, prediction learning) are actually based on rules designed to explain instrumental conditioning (action learning). Extensive anatomical, pharmacological, and psychological data, particularly concerning the impact of motivational manipulations, show that these models are unreasonable. 
We review the data and consider the involvement of a rich collection of different neural systems in various aspects of these forms of conditioning. Dopamine plays a pivotal, but complicated, role." }, { "pmid": "16938432", "title": "The misbehavior of value and the discipline of the will.", "abstract": "Most reinforcement learning models of animal conditioning operate under the convenient, though fictive, assumption that Pavlovian conditioning concerns prediction learning whereas instrumental conditioning concerns action learning. However, it is only through Pavlovian responses that Pavlovian prediction learning is evident, and these responses can act against the instrumental interests of the subjects. This can be seen in both experimental and natural circumstances. In this paper we study the consequences of importing this competition into a reinforcement learning context, and demonstrate the resulting effects in an omission schedule and a maze navigation task. The misbehavior created by Pavlovian values can be quite debilitating; we discuss how it may be disciplined." }, { "pmid": "8867118", "title": "Neural signals in the monkey ventral striatum related to motivation for juice and cocaine rewards.", "abstract": "1. The results of neuropsychological, neuropharmacological, and neurophysiological experiments have implicated the ventral striatum in reward-related processes. We designed a task to allow us to separate the effects of sensory, motor, and internal signals so that we could study the correlation between the activity of neurons in the ventral striatum and different motivational states. In this task, a visual stimulus was used to cue the monkeys as to their progress toward earning a reward. The monkeys performed more quickly and with fewer mistakes in the rewarded trials. After analyzing the behavioral results from three monkeys, we recorded from 143 neurons from two of the monkeys while they performed the task with either juice or cocaine reward. 2. In this task the monkey was required to release its grip on a bar when a small visual response cue changed colors from red (the wait signal) to green (the go signal). The duration of the wait signal was varied randomly. The cue became blue whenever the monkey successfully responded to the go signal within 1 s of its appearance. A reward was delivered after the monkey successfully completed one, two, or three trials. The schedules were randomly interleaved. A second visual stimulus that progressively brightened or dimmed signaled to the monkeys their progress toward earning a reward. This discriminative cue allowed the monkeys to judge the proportion of work remaining in the current ratio schedule of reinforcement. Data were collected from three monkeys while they performed this task. 3. The average reaction times became faster and error rates declined as the monkeys progressed toward completing the current schedule of reinforcement and thereby earning a reward, whereas the modal reaction time did not change. As the duration of the wait period before the go signal increased, the monkeys reacted more quickly but their error rates scarcely changed. From these results we infer that the effects of motivation and motor readiness in this task are generated by separate mechanisms rather than by a single mechanism subserving generalized arousal. 4. The activity of 138 ventral striatal neurons was sampled in two monkeys while they performed the task to earn juice reward. 
We saw tonic changes in activity throughout the trials, and we saw phasic activity following the reward. The activity of these neurons was markedly different during juice-rewarded trials than during correctly performed trials when no reward was forthcoming (or expected). The responses also were weakly, but significantly, related to the proximity of the reward in the schedules requiring more than one trial. 5. The monkeys worked to obtain intravenous cocaine while we recorded 62 neurons. For 57 of the neurons, we recorded activity while the monkeys worked in blocks of trials during which they self-administered cocaine after blocks during which they worked for juice. Although fewer neurons responded to cocaine than to juice reward (19 vs. 33%), this difference was not significant. The neuronal response properties to cocaine and juice rewards were independent; that is, the responses when one was the reward one failed to predict the response when the other was the reward. In addition, the neuronal activity lost most of its selectivity for rewarded trials, i.e, the activity did not distinguish nearly as well between cocaine and sham rewards as between juice and sham rewards. 6. Our results show that mechanisms by which cocaine acts do not appear to be the same as the ones activated when the monkeys were presented with an oral juice reward. This finding raises the intriguing possibility that the effects of cocaine could be reduced selectively without blocking the effects of many natural rewards." }, { "pmid": "7455683", "title": "The framing of decisions and the psychology of choice.", "abstract": "The psychological principles that govern the perception of decision problems and the evaluation of probabilities and outcomes produce predictable shifts of preference when the same problem is framed in different ways. Reversals of preference are demonstrated in choices regarding monetary outcomes, both hypothetical and real, and in questions pertaining to the loss of human lives. The effects of frames on preferences are compared to the effects of perspectives on perceptual appearance. The dependence of preferences on the formulation of decision problems is a significant concern for the theory of rational choice." }, { "pmid": "10712488", "title": "Response differences in monkey TE and perirhinal cortex: stimulus association related to reward schedules.", "abstract": "Anatomic and behavioral evidence shows that TE and perirhinal cortices are two directly connected but distinct inferior temporal areas. Despite this distinctness, physiological properties of neurons in these two areas generally have been similar with neurons in both areas showing selectivity for complex visual patterns and showing response modulations related to behavioral context in the sequential delayed match-to-sample (DMS) trials, attention, and stimulus familiarity. Here we identify physiological differences in the neuronal activity of these two areas. We recorded single neurons from area TE and perirhinal cortex while the monkeys performed a simple behavioral task using randomly interleaved visually cued reward schedules of one, two, or three DMS trials. The monkeys used the cue's relation to the reward schedule (indicated by the brightness) to adjust their behavioral performance. They performed most quickly and most accurately in trials in which reward was immediately forthcoming and progressively less well as more intermediate trials remained. Thus the monkeys appeared more motivated as they progressed through the trial schedule. 
Neurons in both TE and perirhinal cortex responded to both the visual cues related to the reward schedules and the stimulus patterns used in the DMS trials. As expected, neurons in both areas showed response selectivity to the DMS patterns, and significant, but small, modulations related to the behavioral context in the DMS trial. However, TE and perirhinal neurons showed strikingly different response properties. The latency distribution of perirhinal responses was centered 66 ms later than the distribution of TE responses, a larger difference than the 10-15 ms usually found in sequentially connected visual cortical areas. In TE, cue-related responses were related to the cue's brightness. In perirhinal cortex, cue-related responses were related to the trial schedules independently of the cue's brightness. For example, some perirhinal neurons responded in the first trial of any reward schedule including the one trial schedule, whereas other neurons failed to respond in the first trial but respond in the last trial of any schedule. The majority of perirhinal neurons had more complicated relations to the schedule. The cue-related activity of TE neurons is interpreted most parsimoniously as a response to the stimulus brightness, whereas the cue-related activity of perirhinal neurons is interpreted most parsimoniously as carrying associative information about the animal's progress through the reward schedule. Perirhinal cortex may be part of a system gauging the relation between work schedules and rewards." }, { "pmid": "9502820", "title": "Neuronal signals in the monkey ventral striatum related to progress through a predictable series of trials.", "abstract": "Single neurons in the ventral striatum of primates carry signals that are related to reward and motivation. When monkeys performed a task requiring one to three bar release trials to be completed successfully before a reward was given, they seemed more motivated as the rewarded trials approached; they responded more quickly and accurately. When the monkeys were cued as to the progress of the schedule, 89 out of 150 ventral striatal neurons responded in at least one part of the task: (1) at the onset of the visual cue, (2) near the time of bar release, and/or (3) near the time of reward delivery. When the cue signaled progress through the schedule, the neuronal activity was related to the progress through the schedule. For example, one large group of these neurons responded in the first trial of every schedule, another large group responded in trials other than the first of a schedule, and a third large group responded in the first trial of schedules longer than one. Thus, these neurons coded the state of the cue, i.e., the neurons carried the information about how the monkey was progressing through the task. The differential activity disappeared on the first trial after randomizing the relation of the cue to the schedule. Considering the anatomical loop structure that includes ventral striatum and prefrontal cortex, we suggest that the ventral striatum might be part of a circuit that supports keeping track of progress through learned behavioral sequences that, when successfully completed, lead to reward." }, { "pmid": "16882024", "title": "Dopamine neuronal responses in monkeys performing visually cued reward schedules.", "abstract": "Dopamine neurons are important for reward-related behaviours. They have been recorded during classical conditioning and operant tasks with stochastic reward delivery. 
However, daily behaviour, although frequently complex in the number of steps, is often very predictable. We studied the responses of 75 dopamine neurons during schedules of trials in which the events and related reward contingencies could be well-predicted, within and across trials. In this visually cued reward schedule task, a visual cue tells the monkeys exactly how many trials, 1, 2, 3, or 4, must be performed to obtain a reward. The number of errors became larger as the number of trials remaining before the reward increased. Dopamine neurons frequently responded to the cues at the beginning and end of the schedules. Approximately 75% of the first-cue responsive neurons did not distinguish among the schedules that were beginning even though the cues were different. Approximately half of the last-cue responsive neurons depended on which schedule was ending, even though the cue signalling the last trial was the same in all schedules. Thus, the responses were related to what the monkey knew about the relation between the cues and the schedules, not the identity of the cues. These neurons also frequently responded to the go signal and/or to the OK signal indicating the end of a correctly performed trial whether a reward was forthcoming or not, and to the reward itself. Thus, dopamine neurons seem to respond to behaviourally important, i.e. salient, events even when the events have been well-predicted." }, { "pmid": "16319307", "title": "Neuronal signals in the monkey basolateral amygdala during reward schedules.", "abstract": "The amygdala is critical for connecting emotional reactions with environmental events. We recorded neurons from the basolateral complex of two monkeys while they performed visually cued schedules of sequential color discrimination trials, with both valid and random cues. When the cues were valid, the visual cue, which was present throughout each trial, indicated how many trials remained to be successfully completed before a reward. Seventy-six percent of recorded neurons showed response selectivity, with the selectivity depending on some aspects of the current schedule. After a reward, when the monkeys knew that the upcoming cue would be valid, 88 of 246 (36%) neurons responded between schedules, seemingly anticipating the receiving information about the upcoming schedule length. When the cue appeared, 102 of 246 (41%) neurons became selective, at this point encoding information about whether the current trial was the only trial required or how many more trials are needed to obtain a reward. These cue-related responses had a median latency of 120 ms (just between the latencies in inferior temporal visual area TE and perirhinal cortex). When the monkey was releasing a touch bar to complete the trial correctly, 71 of 246 (29%) neurons responded, with responses in the rewarded trials being similar no matter which schedule was ending, thus being sensitive to the reward contingency. Finally, 39 of 246 (16%) neurons responded around the reward. We suggest that basolateral amygdala, by anticipating and then delineating the schedule and representing reward contingency, provide contextual information that is important for adjusting motivational level as a function of immediate behavior goals." 
}, { "pmid": "15302926", "title": "DNA targeting of rhinal cortex D2 receptor protein reversibly blocks learning of cues that predict reward.", "abstract": "When schedules of several operant trials must be successfully completed to obtain a reward, monkeys quickly learn to adjust their behavioral performance by using visual cues that signal how many trials have been completed and how many remain in the current schedule. Bilateral rhinal (perirhinal and entorhinal) cortex ablations irreversibly prevent this learning. Here, we apply a recombinant DNA technique to investigate the role of dopamine D2 receptor in rhinal cortex for this type of learning. Rhinal cortex was injected with a DNA construct that significantly decreased D2 receptor ligand binding and temporarily produced the same profound learning deficit seen after ablation. However, unlike after ablation, the D2 receptor-targeted, DNA-treated monkeys recovered cue-related learning after 11-19 weeks. Injecting a DNA construct that decreased N-methyl-d-aspartate but not D2 receptor ligand binding did not interfere with learning associations between the cues and the schedules. A second D2 receptor-targeted DNA treatment administered after either recovery from a first D2 receptor-targeted DNA treatment (one monkey), after N-methyl-d-aspartate receptor-targeted DNA treatment (two monkeys), or after a vector control treatment (one monkey) also induced a learning deficit of similar duration. These results suggest that the D2 receptor in primate rhinal cortex is essential for learning to relate the visual cues to the schedules. The specificity of the receptor manipulation reported here suggests that this approach could be generalized in this or other brain pathways to relate molecular mechanisms to cognitive functions." }, { "pmid": "16596980", "title": "Dynamic response-by-response models of matching behavior in rhesus monkeys.", "abstract": "We studied the choice behavior of 2 monkeys in a discrete-trial task with reinforcement contingencies similar to those Herrnstein (1961) used when he described the matching law. In each session, the monkeys experienced blocks of discrete trials at different relative-reinforcer frequencies or magnitudes with unsignalled transitions between the blocks. Steady-state data following adjustment to each transition were well characterized by the generalized matching law; response ratios undermatched reinforcer frequency ratios but matched reinforcer magnitude ratios. We modelled response-by-response behavior with linear models that used past reinforcers as well as past choices to predict the monkeys' choices on each trial. We found that more recently obtained reinforcers more strongly influenced choice behavior. Perhaps surprisingly, we also found that the monkeys' actions were influenced by the pattern of their own past choices. It was necessary to incorporate both past reinforcers and past choices in order to accurately capture steady-state behavior as well as the fluctuations during block transitions and the response-by-response patterns of behavior. Our results suggest that simple reinforcement learning models must account for the effects of past choices to accurately characterize behavior in this task, and that models with these properties provide a conceptual tool for studying how both past reinforcers and past choices are integrated by the neural systems that generate behavior." 
}, { "pmid": "16778890", "title": "Cortical substrates for exploratory decisions in humans.", "abstract": "Decision making in an uncertain environment poses a conflict between the opposing demands of gathering and exploiting information. In a classic illustration of this 'exploration-exploitation' dilemma, a gambler choosing between multiple slot machines balances the desire to select what seems, on the basis of accumulated experience, the richest option, against the desire to choose a less familiar option that might turn out more advantageous (and thereby provide information for improving future decisions). Far from representing idle curiosity, such exploration is often critical for organisms to discover how best to harvest resources such as food and water. In appetitive choice, substantial experimental evidence, underpinned by computational reinforcement learning (RL) theory, indicates that a dopaminergic, striatal and medial prefrontal network mediates learning to exploit. In contrast, although exploration has been well studied from both theoretical and ethological perspectives, its neural substrates are much less clear. Here we show, in a gambling task, that human subjects' choices can be characterized by a computationally well-regarded strategy for addressing the explore/exploit dilemma. Furthermore, using this characterization to classify decisions as exploratory or exploitative, we employ functional magnetic resonance imaging to show that the frontopolar cortex and intraparietal sulcus are preferentially active during exploratory decisions. In contrast, regions of striatum and ventromedial prefrontal cortex exhibit activity characteristic of an involvement in value-based exploitative decision making. The results suggest a model of action selection under uncertainty that involves switching between exploratory and exploitative behavioural modes, and provide a computationally precise characterization of the contribution of key decision-related brain systems to each of these functions." }, { "pmid": "9054347", "title": "A neural substrate of prediction and reward.", "abstract": "The capacity to predict future events permits a creature to detect, model, and manipulate the causal structure of its interactions with its environment. Behavioral experiments suggest that learning is driven by changes in the expectations about future salient events such as rewards and punishments. Physiological work has recently complemented these studies by identifying dopaminergic neurons in the primate whose fluctuating output apparently signals changes or errors in the predictions of future salient and rewarding events. Taken together, these findings can be understood through quantitative theories of adaptive optimizing control." }, { "pmid": "8774460", "title": "A framework for mesencephalic dopamine systems based on predictive Hebbian learning.", "abstract": "We develop a theoretical framework that shows how mesencephalic dopamine systems could distribute to their targets a signal that represents information about future expectations. In particular, we show how activity in the cerebral cortex can make predictions about future receipt of reward and how fluctuations in the activity levels of neurons in diffuse dopamine systems above and below baseline levels would represent errors in these predictions that are delivered to cortical and subcortical targets. 
We present a model for how such errors could be constructed in a real brain that is consistent with physiological results for a subset of dopaminergic neurons located in the ventral tegmental area and surrounding dopaminergic neurons. The theory also makes testable predictions about human choice behavior on a simple decision-making task. Furthermore, we show that, through a simple influence on synaptic plasticity, fluctuations in dopamine release can act to change the predictions in an appropriate manner." }, { "pmid": "11100152", "title": "Learning motivational significance of visual cues for reward schedules requires rhinal cortex.", "abstract": "The limbic system is necessary to associate stimuli with their motivational and emotional significance. The perirhinal cortex is directly connected to this system, and neurons in this region carry signals related to a monkey's progress through visually cued reward schedules. This task manipulates motivation by displaying different visual cues to indicate the amount of work remaining until reward delivery. We asked whether rhinal (that is, entorhinal and perirhinal) cortex is necessary to associate the visual cues with reward schedules. When faced with new visual cues in reward schedules, intact monkeys adjusted their motivation in the schedules, whereas monkeys with rhinal cortex removals failed to do so. Thus, the rhinal cortex is critical for forming associations between visual stimuli and their motivational significance." }, { "pmid": "9802995", "title": "A computational role for dopamine delivery in human decision-making.", "abstract": "Recent work suggests that fluctuations in dopamine delivery at target structures represent an evaluation of future events that can be used to direct learning and decision-making. To examine the behavioral consequences of this interpretation, we gave simple decision-making tasks to 66 human subjects and to a network based on a predictive model of mesencephalic dopamine systems. The human subjects displayed behavior similar to the network behavior in terms of choice allocation and the character of deliberation times. The agreement between human and model performances suggests a direct relationship between biases in human decision strategies and fluctuating dopamine delivery. We also show that the model offers a new interpretation of deficits that result when dopamine levels are increased or decreased through disease or pharmacological interventions. The bottom-up approach presented here also suggests that a variety of behavioral strategies may result from the expression of relatively simple neural mechanisms in different behavioral contexts." }, { "pmid": "12900173", "title": "A computational substrate for incentive salience.", "abstract": "Theories of dopamine function are at a crossroads. Computational models derived from single-unit recordings capture changes in dopaminergic neuron firing rate as a prediction error signal. These models employ the prediction error signal in two roles: learning to predict future rewarding events and biasing action choice. Conversely, pharmacological inhibition or lesion of dopaminergic neuron function diminishes the ability of an animal to motivate behaviors directed at acquiring rewards. These lesion experiments have raised the possibility that dopamine release encodes a measure of the incentive value of a contemplated behavioral act. The most complete psychological idea that captures this notion frames the dopamine signal as carrying 'incentive salience'. 
On the surface, these two competing accounts of dopamine function seem incommensurate. To the contrary, we demonstrate that both of these functions can be captured in a single computational model of the involvement of dopamine in reward prediction for the purpose of reward seeking." }, { "pmid": "12371510", "title": "Actor-critic models of the basal ganglia: new anatomical and computational perspectives.", "abstract": "A large number of computational models of information processing in the basal ganglia have been developed in recent years. Prominent in these are actor-critic models of basal ganglia functioning, which build on the strong resemblance between dopamine neuron activity and the temporal difference prediction error signal in the critic, and between dopamine-dependent long-term synaptic plasticity in the striatum and learning guided by a prediction error signal in the actor. We selectively review several actor-critic models of the basal ganglia with an emphasis on two important aspects: the way in which models of the critic reproduce the temporal dynamics of dopamine firing, and the extent to which models of the actor take into account known basal ganglia anatomy and physiology. To complement the efforts to relate basal ganglia mechanisms to reinforcement learning (RL), we introduce an alternative approach to modeling a critic network, which uses Evolutionary Computation techniques to 'evolve' an optimal RL mechanism, and relate the evolved mechanism to the basic model of the critic. We conclude our discussion of models of the critic by a critical discussion of the anatomical plausibility of implementations of a critic in basal ganglia circuitry, and conclude that such implementations build on assumptions that are inconsistent with the known anatomy of the basal ganglia. We return to the actor component of the actor-critic model, which is usually modeled at the striatal level with very little detail. We describe an alternative model of the basal ganglia which takes into account several important, and previously neglected, anatomical and physiological characteristics of basal ganglia-thalamocortical connectivity and suggests that the basal ganglia performs reinforcement-biased dimensionality reduction of cortical inputs. We further suggest that since such selective encoding may bias the representation at the level of the frontal cortex towards the selection of rewarded plans and actions, the reinforcement-driven dimensionality reduction framework may serve as a basis for basal ganglia actor models. We conclude with a short discussion of the dual role of the dopamine signal in RL and in behavioral switching." }, { "pmid": "17187065", "title": "Separate neural substrates for skill learning and performance in the ventral and dorsal striatum.", "abstract": "It is widely accepted that the striatum of the basal ganglia is a primary substrate for the learning and performance of skills. We provide evidence that two regions of the rat striatum, ventral and dorsal, play distinct roles in instrumental conditioning (skill learning), with the ventral striatum being critical for learning and the dorsal striatum being important for performance but, notably, not for learning. This implies an actor (dorsal) versus director (ventral) division of labor, which is a new variant of the widely discussed actor-critic architecture. Our results also imply that the successful performance of a skill can ultimately result in its establishment as a habit outside the basal ganglia." 
}, { "pmid": "15205529", "title": "Matching behavior and the representation of value in the parietal cortex.", "abstract": "Psychologists and economists have long appreciated the contribution of reward history and expectation to decision-making. Yet we know little about how specific histories of choice and reward lead to an internal representation of the \"value\" of possible actions. We approached this problem through an integrated application of behavioral, computational, and physiological techniques. Monkeys were placed in a dynamic foraging environment in which they had to track the changing values of alternative choices through time. In this context, the monkeys' foraging behavior provided a window into their subjective valuation. We found that a simple model based on reward history can duplicate this behavior and that neurons in the parietal cortex represent the relative value of competing actions predicted by this model." }, { "pmid": "16782015", "title": "Primates take weather into account when searching for fruits.", "abstract": "Temperature and solar radiation are known to influence maturation of fruits and insect larvae inside them . We investigated whether gray-cheeked mangabeys (Lophocebus albigena johnstonii) of Kibale Forest, Uganda, take these weather variables into account when searching for ripe figs or unripe figs containing insect larvae. We predicted that monkeys would be more likely to revisit a tree with fruit after several days of warm and sunny weather compared to a cooler and more cloudy period. We preselected 80 target fig trees and monitored whether they contained ripe, unripe, or no fruit. We followed one habituated monkey group from dawn to dusk for three continuous observation periods totalling 210 days. Whenever the group came within a 100 m circle of a previously visited target tree for a second time, we noted whether or not individuals proceeded to the trunk, i.e., whether they \"revisited\" or simply \"bypassed\" the tree. We found that average daily maximum temperature was significantly higher for days preceding revisits than bypasses. The probability of a revisit was additionally influenced by solar radiation experienced on the day of reapproach. These effects were found only for trees that carried fruit at the previous visit but not for trees that had carried none. We concluded that these nonhuman primates were capable of taking into account past weather conditions when searching for food. We discuss the implication of these findings for theories of primate cognitive evolution." }, { "pmid": "12040201", "title": "Anterior cingulate: single neuronal signals related to degree of reward expectancy.", "abstract": "As monkeys perform schedules containing several trials with a visual cue indicating reward proximity, their error rates decrease as the number of remaining trials decreases, suggesting that their motivation and/or reward expectancy increases as the reward approaches. About one-third of single neurons recorded in the anterior cingulate cortex of monkeys during these reward schedules had responses that progressively changed strength with reward expectancy, an effect that disappeared when the cue was random. Alterations of this progression could be the basis for the changes from normal that are reported in anterior cingulate population activity for obsessive-compulsive disorder and drug abuse, conditions characterized by disturbances in reward expectancy." 
}, { "pmid": "15087550", "title": "Dissociable roles of ventral and dorsal striatum in instrumental conditioning.", "abstract": "Instrumental conditioning studies how animals and humans choose actions appropriate to the affective structure of an environment. According to recent reinforcement learning models, two distinct components are involved: a \"critic,\" which learns to predict future reward, and an \"actor,\" which maintains information about the rewarding outcomes of actions to enable better ones to be chosen more frequently. We scanned human participants with functional magnetic resonance imaging while they engaged in instrumental conditioning. Our results suggest partly dissociable contributions of the ventral and dorsal striatum, with the former corresponding to the critic and the latter corresponding to the actor." }, { "pmid": "17872398", "title": "A comparison of reward-contingent neuronal activity in monkey orbitofrontal cortex and ventral striatum: guiding actions toward rewards.", "abstract": "We have investigated how neuronal activity in the orbitofrontal-ventral striatal circuit is related to reward-directed behavior by comparing activity in these two regions during a visually guided reward schedule task. When a set of visual cues provides information about reward contingency, that is, about whether or not a trial will be rewarded, significant subpopulations of neurons in both orbitofrontal cortex and ventral striatum encode this information. Orbitofrontal and ventral striatal neurons also differentiate between rewarding and non-rewarding trial outcomes, whether or not those outcomes were predicted. The size of the neuronal subpopulation encoding reward contingency is twice as large in orbitofrontal cortex (50% of neurons) as in ventral striatum (26%). Reward-contingency-dependent activity also appears earlier during a trial in orbitofrontal cortex than in ventral striatum. The peak reward-contingency representation in orbitofrontal cortex (31% of neurons), occurs during the wait period, a period of high anticipation prior to any action. The peak ventral striatal representation of reward contingency (18%) occurs during the go period, a time of action. We speculate that signals from orbitofrontal cortex bias ventral striatal activity, and that a flow of reward-contingency information from orbitofrontal cortex to ventral striatum serves to guide actions toward rewards." }, { "pmid": "11867709", "title": "Framing effects and risky decisions in starlings.", "abstract": "Animals are predominantly risk prone toward reward delays and risk averse toward reward amounts. Humans in turn tend to be risk-seeking for losses and risk averse for gains. To explain the human results, Prospect Theory postulates a convex utility for losses and concave utility for gains. In contrast, Scalar Utility Theory (SUT) explains the animal data by postulating that the cognitive representation of outcomes follows Weber's Law, namely that the spread of the distribution of expected outcomes is proportional to its mean. SUT also would explain human results if utility (even if it is linear on expected outcome) followed Weber's Law. We present an experiment that simulates losses and gains in a bird, the European Starling, to test the implication of SUT that risk proneness/aversion should extend to any aversive/desirable dimension other than time and amount of reward. Losses and gains were simulated by offering choices of fixed vs. variable outcomes with lower or higher outcomes than what the birds expected. 
The subjects were significantly more risk prone for losses than for gains but, against expectations, they were not significantly risk averse toward gains. The results are thus, in part, consistent with Prospect Theory and SUT and show that risk attitude in humans and birds may obey a common fundamental principle." }, { "pmid": "16543461", "title": "State-dependent learned valuation drives choice in an invertebrate.", "abstract": "Humans and other vertebrates occasionally show a preference for items remembered to be costly or experienced when the subject was in a poor condition (this is known as a sunk-costs fallacy or state-dependent valuation). Whether these mechanisms shared across vertebrates are the result of convergence toward an adaptive solution or evolutionary relicts reflecting common ancestral traits is unknown. Here we show that state-dependent valuation also occurs in an invertebrate, the desert locust Schistocerca gregaria (Orthoptera: Acrididae). Given the latter's phylogenetic and neurobiological distance from those groups in which the phenomenon was already known, we suggest that state-dependent valuation mechanisms are probably ecologically rational solutions to widespread problems of choice." }, { "pmid": "16888142", "title": "Frames, biases, and rational decision-making in the human brain.", "abstract": "Human choices are remarkably susceptible to the manner in which options are presented. This so-called \"framing effect\" represents a striking violation of standard economic accounts of human rationality, although its underlying neurobiology is not understood. We found that the framing effect was specifically associated with amygdala activity, suggesting a key role for an emotional system in mediating decision biases. Moreover, across individuals, orbital and medial prefrontal cortex activity predicted a reduced susceptibility to the framing effect. This finding highlights the importance of incorporating emotional processes within models of human choice and suggests how the brain may modulate the effect of these biasing influences to approximate rationality." }, { "pmid": "17434918", "title": "Dynamic changes in representations of preceding and upcoming reward in monkey orbitofrontal cortex.", "abstract": "We investigated how orbitofrontal cortex (OFC) contributes to adaptability in the face of changing reward contingencies by examining how reward representations in monkey orbitofrontal neurons change during a visually cued, multi-trial reward schedule task. A large proportion of orbitofrontal neurons were sensitive to events in this task (69/80 neurons in the valid and 48/58 neurons in the random cue context). Neuronal activity depended upon preceding reward, upcoming reward, reward delivery, and schedule state. Preceding reward-dependent activity occurred in both the valid and random cue contexts, whereas upcoming reward-dependent activity was observed only in the valid context. A greater proportion of neurons encoded preceding reward in the random than the valid cue context. The proportion of neurons with preceding reward-dependent activity declined as each trial progressed, whereas the proportion encoding upcoming reward increased. Reward information was represented by ensembles of neurons, the composition of which changed with task context and time. Overall, neuronal activity in OFC adapted to reflect the importance of different types of reward information in different contexts and time periods. 
This contextual and temporal adaptability is one hallmark of neurons participating in executive functions." }, { "pmid": "16807345", "title": "Multiple time scales of temporal response in pyramidal and fast spiking cortical neurons.", "abstract": "Neural dynamic processes correlated over several time scales are found in vivo, in stimulus-evoked as well as spontaneous activity, and are thought to affect the way sensory stimulation is processed. Despite their potential computational consequences, a systematic description of the presence of multiple time scales in single cortical neurons is lacking. In this study, we injected fast spiking and pyramidal (PYR) neurons in vitro with long-lasting episodes of step-like and noisy, in-vivo-like current. Several processes shaped the time course of the instantaneous spike frequency, which could be reduced to a small number (1-4) of phenomenological mechanisms, either reducing (adapting) or increasing (facilitating) the neuron's firing rate over time. The different adaptation/facilitation processes cover a wide range of time scales, ranging from initial adaptation (<10 ms, PYR neurons only), to fast adaptation (<300 ms), early facilitation (0.5-1 s, PYR only), and slow (or late) adaptation (order of seconds). These processes are characterized by broad distributions of their magnitudes and time constants across cells, showing that multiple time scales are at play in cortical neurons, even in response to stationary stimuli and in the presence of input fluctuations. These processes might be part of a cascade of processes responsible for the power-law behavior of adaptation observed in several preparations, and may have far-reaching computational consequences that have been recently described." }, { "pmid": "15987953", "title": "Dopamine cells respond to predicted events during classical conditioning: evidence for eligibility traces in the reward-learning network.", "abstract": "Behavioral conditioning of cue-reward pairing results in a shift of midbrain dopamine (DA) cell activity from responding to the reward to responding to the predictive cue. However, the precise time course and mechanism underlying this shift remain unclear. Here, we report a combined single-unit recording and temporal difference (TD) modeling approach to this question. The data from recordings in conscious rats showed that DA cells retain responses to predicted reward after responses to conditioned cues have developed, at least early in training. This contrasts with previous TD models that predict a gradual stepwise shift in latency with responses to rewards lost before responses develop to the conditioned cue. By exploring the TD parameter space, we demonstrate that the persistent reward responses of DA cells during conditioning are only accurately replicated by a TD model with long-lasting eligibility traces (nonzero values for the parameter lambda) and low learning rate (alpha). These physiological constraints for TD parameters suggest that eligibility traces and low per-trial rates of plastic modification may be essential features of neural circuits for reward learning in the brain. Such properties enable rapid but stable initiation of learning when the number of stimulus-reward pairings is limited, conferring significant adaptive advantages in real-world environments." } ]
PLoS Genetics
18650965
PMC2483231
10.1371/journal.pgen.1000128
Dynamics of Genome Rearrangement in Bacterial Populations
Genome structure variation has profound impacts on phenotype in organisms ranging from microbes to humans, yet little is known about how natural selection acts on genome arrangement. Pathogenic bacteria such as Yersinia pestis, which causes bubonic and pneumonic plague, often exhibit a high degree of genomic rearrangement. The recent availability of several Yersinia genomes offers an unprecedented opportunity to study the evolution of genome structure and arrangement. We introduce a set of statistical methods to study patterns of rearrangement in circular chromosomes and apply them to the Yersinia. We constructed a multiple alignment of eight Yersinia genomes using Mauve software to identify 78 conserved segments that are internally free from genome rearrangement. Based on the alignment, we applied Bayesian statistical methods to infer the phylogenetic inversion history of Yersinia. The sampling of genome arrangement reconstructions contains seven parsimonious tree topologies, each having different histories of 79 inversions. Topologies with a greater number of inversions also exist, but were sampled less frequently. The inversion phylogenies agree with results suggested by SNP patterns. We then analyzed reconstructed inversion histories to identify patterns of rearrangement. We confirm an over-representation of “symmetric inversions”—inversions with endpoints that are equally distant from the origin of chromosomal replication. Ancestral genome arrangements demonstrate moderate preference for replichore balance in Yersinia. We found that all inversions are shorter than expected under a neutral model, whereas inversions acting within a single replichore are much shorter than expected. We also found evidence for a canonical configuration of the origin and terminus of replication. Finally, breakpoint reuse analysis reveals that inversions with endpoints proximal to the origin of DNA replication are nearly three times more frequent. Our findings represent the first characterization of genome arrangement evolution in a bacterial population evolving outside laboratory conditions. Insight into the process of genomic rearrangement may further the understanding of pathogen population dynamics and selection on the architecture of circular bacterial chromosomes.
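Two of the arrangement statistics referred to in this abstract, symmetric inversions and replichore balance, have simple computational definitions. The sketch below is an illustrative reimplementation of those two definitions only, not the Bayesian inference pipeline used in the study; the chromosome length, oriC/terminus coordinates, tolerance, and example endpoints are hypothetical values chosen for the example (the 4.6 Mb scale echoes the Y. pestis KIM chromosome cited below).

```python
# Illustrative sketch (not the authors' code): classify inversions on a circular
# chromosome as "symmetric" (endpoints roughly equidistant from oriC, one on each
# replichore) and measure origin/terminus balance. All coordinates are hypothetical.

GENOME_LEN = 4_600_000      # assumed chromosome length (bp), roughly Y. pestis scale
ORI = 0                     # oriC placed at coordinate 0 for convenience
TER = 2_300_000             # assumed terminus position

def circ_dist(a: int, b: int, n: int = GENOME_LEN) -> int:
    """Shortest distance between two positions on a circle of size n."""
    d = abs(a - b) % n
    return min(d, n - d)

def replichore(pos: int) -> int:
    """Return 1 or 2 depending on which oriC-to-terminus arc the position lies on."""
    return 1 if 0 < (pos - ORI) % GENOME_LEN <= (TER - ORI) % GENOME_LEN else 2

def is_symmetric_inversion(left: int, right: int, tol: int = 50_000) -> bool:
    """An inversion is 'symmetric' if its endpoints sit on opposite replichores
    at (approximately) equal distances from the origin of replication."""
    opposite = replichore(left) != replichore(right)
    equidistant = abs(circ_dist(left, ORI) - circ_dist(right, ORI)) <= tol
    return opposite and equidistant

def replichore_imbalance(ori: int, ter: int, n: int = GENOME_LEN) -> float:
    """Fraction by which the two replichores differ in length (0 = perfectly balanced)."""
    arc1 = (ter - ori) % n
    arc2 = n - arc1
    return abs(arc1 - arc2) / n

if __name__ == "__main__":
    print(is_symmetric_inversion(1_000_000, 3_600_000))   # True: mirror-image endpoints
    print(is_symmetric_inversion(1_000_000, 1_500_000))   # False: same replichore
    print(replichore_imbalance(ORI, TER))                  # 0.0 for this balanced toy layout
```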
Related Work

Whilst rich stochastic models of nucleotide sequence evolution have been developed, comparatively little effort has gone into development of stochastic models of genome arrangement evolution. Inversions are known to affect a variety of genomes, including mitochondria [58], plastids [59],[60] and bacteria. However, mutational processes such as transposition or segmental duplication and loss [61] can also result in genomic rearrangement, and can have an especially profound effect on eukaryotic and mitochondrial gene order. Future efforts to model genome arrangement evolution should undoubtedly address duplication/loss.

Although bacteria are usually unichromosomal, they also have plasmids and other short circular chromosomes that might play an important role in rearranging the genetic material. Therefore a Bayesian MCMC method for multichromosomal genome arrangement phylogeny would also be desirable. Pairwise models of multi-chromosomal rearrangement via circular intermediates have recently been derived, although not in a Bayesian context [62],[63],[64].

The rearrangement patterns inferred by our study should prove valuable as a guide for phylogenetic inference when the inversion history signal has become saturated. The Yersinia genomes studied here appear to lie precisely on the verge of saturation, as seven parsimonious topologies were discovered. Just as codon models and gamma-distributed rate heterogeneity have aided phylogenetic inference on nucleotides, models of rearrangement which explicitly acknowledge that not all genome arrangements are equally likely may be useful to disambiguate phylogenetic signal in saturated inversion histories. Pairwise study of eukaryotic genome arrangement has demonstrated preference for particular types of rearrangement events [65], and methods similar to ours could conceivably be extended to identify selection on arrangement from phylogenies of multi-chromosomal eukaryotic genomes.

A non-phylogenetic, pairwise model of rearrangement by inversion has previously been used to investigate the preference for historic replichore balance in bacteria [66]. Using randomly simulated genome arrangements as a baseline, the authors conclude that historical replichore balance has been significantly maintained in a variety of bacteria, but not all. Our Bayesian method improves on their model by allowing us to gauge more rigorously the degree of statistical confidence and uncertainty in reconstructions of inversion history. Moreover, our method avoids a systematic bias when exploring possible inversion histories. The distribution sampled by the Ajana et al. method is not uniform over equally parsimonious inversion scenarios, but is skewed to favor particular mutation events. The difference between their sampling distribution and the uniform distribution can grow exponentially in some cases ([67], section 5.2).
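The neutral baseline mentioned above (randomly simulated genome arrangements, against which observed inversion lengths and replichore balance are compared) can be sketched in a few lines. The following is a minimal illustration under simplifying assumptions, with the arrangement treated as a signed permutation of conserved blocks and cut points drawn uniformly; it stands in for, and is far cruder than, the Bayesian sampling machinery discussed here. The block count echoes the 78 conserved segments mentioned in the abstract; the number of simulated events and the seed are hypothetical.

```python
# Minimal neutral-model baseline (illustrative only): apply uniformly random
# inversions to a signed permutation of conserved blocks and record how many
# blocks each inversion spans. Observed inversion lengths can then be compared
# against this null distribution. The permutation is handled linearly for
# simplicity; a fully circular treatment would also allow inversions that span
# the wrap-around point.
import random

N_BLOCKS = 78

def random_inversion(perm: list[int]) -> tuple[list[int], int]:
    """Pick two cut points uniformly at random and invert the blocks between
    them (reverse their order and flip their signs)."""
    n = len(perm)
    i, j = sorted(random.sample(range(n), 2))
    inverted = [-x for x in reversed(perm[i:j])]
    return perm[:i] + inverted + perm[j:], j - i   # j - i = inversion length in blocks

def neutral_length_distribution(n_events: int = 10_000, seed: int = 1) -> list[int]:
    """Lengths (in blocks) of n_events random inversions applied sequentially."""
    random.seed(seed)
    perm = list(range(1, N_BLOCKS + 1))            # identity arrangement, all blocks forward
    lengths = []
    for _ in range(n_events):
        perm, span = random_inversion(perm)
        lengths.append(span)
    return lengths

if __name__ == "__main__":
    lengths = neutral_length_distribution()
    print(sum(lengths) / len(lengths))             # mean inversion length expected under neutrality
```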
[ "9202482", "16468991", "16586749", "15878987", "12167364", "16176988", "17247021", "1169690", "7601461", "15516592", "4919137", "6583681", "9822387", "16814717", "10760161", "17973909", "17376071", "11178226", "11017076", "11163906", "6273909", "12142430", "12930739", "15184548", "16556833", "9382825", "16211009", "16612541", "16237205", "14585609", "15479949", "8993858", "10570195", "15598742", "16740952", "12450857", "11050348", "17173484", "16221896", "12855474", "12610535", "12810957", "14668356", "4577740", "17189424", "11586360", "16362346", "3049239", "15746427", "3936406", "3186748", "15951307", "17407601", "15525697", "14534182", "16627724", "17095535", "16737554", "17090663", "16423021", "15231754", "15368893", "15358858", "17784789" ]
[ { "pmid": "9202482", "title": "Modulation of gene expression through chromosomal positioning in Escherichia coli.", "abstract": "Variations in expression of the nah genes of the NAH7 (naphthalene biodegradation) plasmid of Pseudomonas putida when placed in different chromosomal locations in Escherichia coli have been studied by employing a collection of hybrid mini-T5 transposons bearing lacZ fusions to the Psal promoter, along with the cognate regulatory gene nahR. Insertions of Psal-lacZ reporters in the proximity of the chromosomal origin of replication, oriC, increased accumulation of beta-galactosidase in vivo. Position-dependent changes in expression of the reporter product could not be associated with local variations of the supercoiling in the DNA region, as revealed by probing the chromosome with mobile gyrB-lacZ elements. Such variations in beta-galactosidase activity (and, therefore, the expression of catabolic genes) seemed, instead, to be linked to the increase in gene dosage associated with regions close to oriC, and not to local variations in chromosome structure. The tolerance of strains to the selection markers borne by the transposons also varied in parallel with the changes in LacZ levels. The role of chromosomal positioning as a mechanism for the outcome of adaptation phenotypes is discussed." }, { "pmid": "16468991", "title": "Replication-associated gene dosage effects shape the genomes of fast-growing bacteria but only for transcription and translation genes.", "abstract": "The bidirectional replication of bacterial genomes leads to transient gene dosage effects. Here, we show that such effects shape the chromosome organisation of fast-growing bacteria and that they correlate strongly with maximal growth rate. Surprisingly the predicted maximal number of replication rounds shows little if any phylogenetic inertia, suggesting that it is a very labile trait. Yet, a combination of theoretical and statistical analyses predicts that dozens of replication forks may be simultaneously present in the cells of certain species. This suggests a strikingly efficient management of the replication apparatus, of replication fork arrests and of chromosome segregation in such cells. Gene dosage effects strongly constrain the position of genes involved in translation and transcription, but not other highly expressed genes. The relative proximity of the former genes to the origin of replication follows the regulatory dependencies observed under exponential growth, as the bias is stronger for RNA polymerase, then rDNA, then ribosomal proteins and tDNA. Within tDNAs we find that only the positions of the previously proposed 'ubiquitous' tRNA, which translate the most frequent codons in highly expressed genes, show strong signs of selection for gene dosage effects. Finally, we provide evidence for selection acting upon genome organisation to take advantage of gene dosage effects by identifying a positive correlation between genome stability and the number of simultaneous replication rounds. We also show that gene dosage effects can explain the over-representation of highly expressed genes in the largest replichore of genomes containing more than one chromosome. Together, these results demonstrate that replication-associated gene dosage is an important determinant of chromosome organisation and dynamics, especially among fast-growing bacteria." 
}, { "pmid": "15878987", "title": "Global divergence of microbial genome sequences mediated by propagating fronts.", "abstract": "We model the competition between homologous recombination and point mutation in microbial genomes, and present evidence for two distinct phases, one uniform, the other genetically diverse. Depending on the specifics of homologous recombination, we find that global sequence divergence can be mediated by fronts propagating along the genome, whose characteristic signature on genome structure is elucidated, and apparently observed in closely related Bacillus strains. Front propagation provides an emergent, generic mechanism for microbial \"speciation,\" and suggests a classification of microorganisms on the basis of their propensity to support propagating fronts." }, { "pmid": "12167364", "title": "Gene transfer in bacteria: speciation without species?", "abstract": "Although Bacteria and Archaea reproduce by binary fission, exchange of genes among lineages has shaped the diversity of their populations and the diversification of their lineages. Gene exchange can occur by two distinct routes, each differentially impacting the recipient genome. First, homologous recombination mediates the exchange of DNA between closely related individuals (those whose sequences are sufficient similarly to allow efficient integration). As a result, homologous recombination mediates the dispersal of advantageous alleles that may rise to high frequency among genetically related individuals via periodic selection events. Second, lateral gene transfer can introduce novel DNA into a genome from completely unrelated lineages via illegitimate recombination. Gene exchange by this route serves to distribute genes throughout distantly related clades and therefore may confer complex abilities--not otherwise found among closely related lineages--onto the recipient organisms. These two mechanisms of gene exchange play complementary roles in the diversification of microbial populations into independent, ecologically distinct lineages. Although the delineation of microbial \"species\" then becomes difficult--if not impossible--to achieve, a cogent process of speciation can be predicted." }, { "pmid": "16176988", "title": "Highways of gene sharing in prokaryotes.", "abstract": "The extent to which lateral genetic transfer has shaped microbial genomes has major implications for the emergence of community structures. We have performed a rigorous phylogenetic analysis of >220,000 proteins from genomes of 144 prokaryotes to determine the contribution of gene sharing to current prokaryotic diversity, and to identify \"highways\" of sharing between lineages. The inferred relationships suggest a pattern of inheritance that is largely vertical, but with notable exceptions among closely related taxa, and among distantly related organisms that live in similar environments." }, { "pmid": "7601461", "title": "Pairwise end sequencing: a unified approach to genomic mapping and sequencing.", "abstract": "Strategies for large-scale genomic DNA sequencing currently require physical mapping, followed by detailed mapping, and finally sequencing. The level of mapping detail determines the amount of effort, or sequence redundancy, required to finish a project. Current strategies attempt to find a balance between mapping and sequencing efforts. One such approach is to employ strategies that use sequence data to build physical maps. 
Such maps alleviate the need for prior mapping and reduce the final required sequence redundancy. To this end, the utility of correlating pairs of sequence data derived from both ends of subcloned templates is well recognized. However, optimal strategies employing such pairwise data have not been established. In the present work, we simulate and analyze the parameters of pairwise sequencing projects including template length, sequence read length, and total sequence redundancy. One pairwise strategy based on sequencing both ends of plasmid subclones is recommended and illustrated with raw data simulations. We find that pairwise strategies are effective with both small (cosmid) and large (megaYAC) targets and produce ordered sequence data with a high level of mapping completeness. They are ideal for finescale mapping and gene finding and as initial steps for either a high- or a low-redundancy sequencing effort. Such strategies are highly automatable." }, { "pmid": "15516592", "title": "Single-molecule approach to bacterial genomic comparisons via optical mapping.", "abstract": "Modern comparative genomics has been established, in part, by the sequencing and annotation of a broad range of microbial species. To gain further insights, new sequencing efforts are now dealing with the variety of strains or isolates that gives a species definition and range; however, this number vastly outstrips our ability to sequence them. Given the availability of a large number of microbial species, new whole genome approaches must be developed to fully leverage this information at the level of strain diversity that maximize discovery. Here, we describe how optical mapping, a single-molecule system, was used to identify and annotate chromosomal alterations between bacterial strains represented by several species. Since whole-genome optical maps are ordered restriction maps, sequenced strains of Shigella flexneri serotype 2a (2457T and 301), Yersinia pestis (CO 92 and KIM), and Escherichia coli were aligned as maps to identify regions of homology and to further characterize them as possible insertions, deletions, inversions, or translocations. Importantly, an unsequenced Shigella flexneri strain (serotype Y strain AMC[328Y]) was optically mapped and aligned with two sequenced ones to reveal one novel locus implicated in serotype conversion and several other loci containing insertion sequence elements or phage-related gene insertions. Our results suggest that genomic rearrangements and chromosomal breakpoints are readily identified and annotated against a prototypic sequenced strain by using the tools of optical mapping." }, { "pmid": "6583681", "title": "Lengths of chromosomal segments conserved since divergence of man and mouse.", "abstract": "Linkage relationships of homologous loci in man and mouse were used to estimate the mean length of autosomal segments conserved during evolution. Comparison of the locations of greater than 83 homologous loci revealed 13 conserved segments. Map distances between the outermost markers of these 13 segments are known for the mouse and range from 1 to 24 centimorgans. Methods were developed for using this sample of conserved segments to estimate the mean length of all conserved autosomal segments in the genome. This mean length was estimated to be 8.1 +/- 1.6 centimorgans. Evidence is presented suggesting that chromosomal rearrangements that determine the lengths of these segments are randomly distributed within the genome. 
The estimated mean length of conserved segments was used to predict the probability that certain loci, such as peptidase-3 and renin, are linked in man given that homologous loci are chi centimorgans apart in the mouse. The mean length of conserved segments was also used to estimate the number of chromosomal rearrangements that have disrupted linkage since divergence of man and mouse. This estimate was shown to be 178 +/- 39 rearrangements." }, { "pmid": "9822387", "title": "Localization of bacterial DNA polymerase: evidence for a factory model of replication.", "abstract": "Two general models have been proposed for DNA replication. In one model, DNA polymerase moves along the DNA (like a train on a track); in the other model, the polymerase is stationary (like a factory), and DNA is pulled through. To distinguish between these models, we visualized DNA polymerase of the bacterium Bacillus subtilis in living cells by the creation of a fusion protein containing the catalytic subunit (PolC) and green fluorescent protein (GFP). PolC-GFP was localized at discrete intracellular positions, predominantly at or near midcell, rather than being distributed randomly. These results suggest that the polymerase is anchored in place and thus support the model in which the DNA template moves through the polymerase." }, { "pmid": "16814717", "title": "A molecular mousetrap determines polarity of termination of DNA replication in E. coli.", "abstract": "During chromosome synthesis in Escherichia coli, replication forks are blocked by Tus bound Ter sites on approach from one direction but not the other. To study the basis of this polarity, we measured the rates of dissociation of Tus from forked TerB oligonucleotides, such as would be produced by the replicative DnaB helicase at both the fork-blocking (nonpermissive) and permissive ends of the Ter site. Strand separation of a few nucleotides at the permissive end was sufficient to force rapid dissociation of Tus to allow fork progression. In contrast, strand separation extending to and including the strictly conserved G-C(6) base pair at the nonpermissive end led to formation of a stable locked complex. Lock formation specifically requires the cytosine residue, C(6). The crystal structure of the locked complex showed that C(6) moves 14 A from its normal position to bind in a cytosine-specific pocket on the surface of Tus." }, { "pmid": "10760161", "title": "Functional polarization of the Escherichia coli chromosome terminus: the dif site acts in chromosome dimer resolution only when located between long stretches of opposite polarity.", "abstract": "In Escherichia coli, chromosome dimers are generated by recombination between circular sister chromosomes. Dimers are lethal unless resolved by a system that involves the XerC, XerD and FtsK proteins acting at a site (dif) in the terminus region. Resolution fails if dif is moved from its normal position. To analyse this positional requirement, dif was transplaced to a variety of positions, and deletions and inversions of portions of the dif region were constructed. Resolution occurs only when dif is located at the convergence of multiple, oppositely polarized DNA sequence elements, inferred to lie in the terminus region. These polar elements may position dif at the cell septum and be general features of chromosome organization with a role in nucleoid dynamics." 
}, { "pmid": "17973909", "title": "FtsK and SpoIIIE: the tale of the conserved tails.", "abstract": "During Bacillus subtilis sporulation, the SpoIIIE DNA translocase moves a trapped chromosome across the sporulation septum into the forespore. The preferential assembly of SpoIIIE complexes in the mother cell provided the idea that SpoIIIE functioned as a DNA exporter, which ensured translocation orientation. In this issue of Molecular Microbiology, Becker and Pogliano reinvestigate the molecular mechanisms that orient the activity of SpoIIIE. Their findings indicate that SpoIIIE reads the polarity of DNA like its Escherichia coli homologue, FtsK." }, { "pmid": "17376071", "title": "Mutational bias suggests that replication termination occurs near the dif site, not at Ter sites.", "abstract": "In bacteria, Ter sites bound to Tus/Rtp proteins halt replication forks moving only in one direction, providing a convenient mechanism to terminate them once the chromosome had been replicated. Considering the importance of replication termination and its position as a checkpoint in cell division, the accumulated knowledge on these systems has not dispelled fundamental questions regarding its role in cell biology: why are there so many copies of Ter, why are they distributed over such a large portion of the chromosome, why is the tus gene not conserved among bacteria, and why do tus mutants lack measurable phenotypes? Here we examine bacterial genomes using bioinformatics techniques to identify the region(s) where DNA polymerase III-mediated replication has historically been terminated. We find that in both Escherichia coli and Bacillus subtilis, changes in mutational bias patterns indicate that replication termination most likely occurs at or near the dif site. More importantly, there is no evidence from mutational bias signatures that replication forks originating at oriC have terminated at Ter sites. We propose that Ter sites participate in halting replication forks originating from DNA repair events, and not those originating at the chromosomal origin of replication." }, { "pmid": "11017076", "title": "Genome rearrangement by replication-directed translocation.", "abstract": "Gene order in bacteria is poorly conserved during evolution. For example, although many homologous genes are shared by the proteobacteria Escherichia coli, Haemophilus influenzae and Helicobacter pylori, their relative positions are very different in each genome, except local functional clusters such as operons. The complete sequences of the more closely related bacterial genomes, such as pairs of Chlamydia, H. pylori and Mycobacterium species, now allow identification of the processes and mechanisms involved in genome evolution. Here we provide evidence that a substantial proportion of rearrangements in gene order results from recombination sites that are determined by the positions of the replication forks. Our observations suggest that replication has a major role in directing genome evolution." }, { "pmid": "11163906", "title": "Evolution of prokaryotic gene order: genome rearrangements in closely related species.", "abstract": "Conservation of gene order in prokaryotes has become important in predicting protein function because, over the evolutionary timescale, genomes are shuffled so that local gene-order conservation reflects the functional constraints within the protein. 
Here, we compare closely related genomes to identify the rate with which gene order is disrupted and to infer the genes involved in the genome rearrangement." }, { "pmid": "6273909", "title": "Inversions between ribosomal RNA genes of Escherichia coli.", "abstract": "It might be anticipated that the presence of redundant but oppositely oriented sequences in a chromosome could allow inversion of the intervening material through homologous recombination. For example, the ribosomal RNA gene rrnD of Escherichia coli has the opposite orientation fro rrnB and rrnE and is separated from these genes by roughly 20% of the chromosome. Starting with a derivative of Cavalli Hfr, we have constructed mutants that have an inversion of the segment between rrnD and either rrnB or rrnE. These mutants are generally quite viable but do exhibit a slight reduction in growth rate relative to the parental strain. A major line of laboratory E. coli, W3110 and its derivatives, also has an inversion between rrnD and rrnE, probably created directly by a recombinational event between these highly homologous genes." }, { "pmid": "12142430", "title": "Genome sequence of Yersinia pestis KIM.", "abstract": "We present the complete genome sequence of Yersinia pestis KIM, the etiologic agent of bubonic and pneumonic plague. The strain KIM, biovar Mediaevalis, is associated with the second pandemic, including the Black Death. The 4.6-Mb genome encodes 4,198 open reading frames (ORFs). The origin, terminus, and most genes encoding DNA replication proteins are similar to those of Escherichia coli K-12. The KIM genome sequence was compared with that of Y. pestis CO92, biovar Orientalis, revealing homologous sequences but a remarkable amount of genome rearrangement for strains so closely related. The differences appear to result from multiple inversions of genome segments at insertion sequences, in a manner consistent with present knowledge of replication and recombination. There are few differences attributable to horizontal transfer. The KIM and E. coli K-12 genome proteins were also compared, exposing surprising amounts of locally colinear \"backbone,\" or synteny, that is not discernible at the nucleotide level. Nearly 54% of KIM ORFs are significantly similar to K-12 proteins, with conserved housekeeping functions. However, a number of E. coli pathways and transport systems and at least one global regulator were not found, reflecting differences in lifestyle between them. In KIM-specific islands, new genes encode candidate pathogenicity proteins, including iron transport systems, putative adhesins, toxins, and fimbriae." }, { "pmid": "12930739", "title": "Associations between inverted repeats and the structural evolution of bacterial genomes.", "abstract": "The stability of the structure of bacterial genomes is challenged by recombination events. Since major rearrangements (i.e., inversions) are thought to frequently operate by homologous recombination between inverted repeats, we analyzed the presence and distribution of such repeats in bacterial genomes and their relation to the conservation of chromosomal structure. First, we show that there is a strong under-representation of inverted repeats, relative to direct repeats, in most chromosomes, especially among the ones regarded as most stable. Second, we show that the avoidance of repeats is frequently associated with the stability of the genomes. Closely related genomes reported to differ in terms of stability are also found to differ in the number of inverted repeats. 
Third, when using replication strand bias as a proxy for genome stability, we find a significant negative correlation between this strand bias and the abundance of inverted repeats. Fourth, when measuring the recombining potential of inverted repeats and their eventual impact on different features of the chromosomal structure, we observe a tendency of repeats to be located in the chromosome in such a way that rearrangements produce a smaller strand switch and smaller asymmetries than expected by chance. Finally, we discuss the limitations of our analysis and the influence of factors such as the nature of repeats, e.g., transposases, or the differences in the recombination machinery among bacteria. These results shed light on the challenges imposed on the genome structure by the presence of inverted repeats." }, { "pmid": "15184548", "title": "The replication-related organization of bacterial genomes.", "abstract": "The replication of the chromosome is among the most essential functions of the bacterial cell and influences many other cellular mechanisms, from gene expression to cell division. Yet the way it impacts on the bacterial chromosome was not fully acknowledged until the availability of complete genomes allowed one to look upon genomes as more than bags of genes. Chromosomal replication includes a set of asymmetric mechanisms, among which are a division in a lagging and a leading strand and a gradient between early and late replicating regions. These differences are the causes of many of the organizational features observed in bacterial genomes, in terms of both gene distribution and sequence composition along the chromosome. When asymmetries or gradients increase in some genomes, e.g. due to a different composition of the DNA polymerase or to a higher growth rate, so do the corresponding biases. As some of the features of the chromosome structure seem to be under strong selection, understanding such biases is important for the understanding of chromosome organization and adaptation. Inversely, understanding chromosome organization may shed further light on questions relating to replication and cell division. Ultimately, the understanding of the interplay between these different elements will allow a better understanding of bacterial genetics and evolution." }, { "pmid": "16556833", "title": "The nature and dynamics of bacterial genomes.", "abstract": "Though generally small and gene rich, bacterial genomes are constantly subjected to both mutational and population-level processes that operate to increase amounts of functionless DNA. As a result, the coding potential of bacterial genomes can be substantially lower than originally predicted. Whereas only a single pseudogene was included in the original annotation of the bacterium Escherichia coli, we estimate that this genome harbors hundreds of inactivated and otherwise functionless genes. Such regions will never yield a detectable phenotype, but their identification is vital to efforts to elucidate the biological role of all the proteins within the cell." }, { "pmid": "9382825", "title": "Recombination initiation: easy as A, B, C, D... chi?", "abstract": "The octameric Chi (chi) sequence is a recombination hotspot in Escherichia coli. Recent studies suggest a singular mechanism by which chi regulates not only the nuclease activity of RecBCD enzyme, but also the ability of RecBCD to promote loading of the strand exchange protein, RecA, onto chi-containing DNA." }, { "pmid": "16211009", "title": "KOPS: DNA motifs that control E. 
coli chromosome segregation by orienting the FtsK translocase.", "abstract": "Bacterial chromosomes are organized in replichores of opposite sequence polarity. This conserved feature suggests a role in chromosome dynamics. Indeed, sequence polarity controls resolution of chromosome dimers in Escherichia coli. Chromosome dimers form by homologous recombination between sister chromosomes. They are resolved by the combined action of two tyrosine recombinases, XerC and XerD, acting at a specific chromosomal site, dif, and a DNA translocase, FtsK, which is anchored at the division septum and sorts chromosomal DNA to daughter cells. Evidences suggest that DNA motifs oriented from the replication origin towards dif provide FtsK with the necessary information to faithfully distribute chromosomal DNA to either side of the septum, thereby bringing the dif sites together at the end of this process. However, the nature of the DNA motifs acting as FtsK orienting polar sequences (KOPS) was unknown. Using genetics, bioinformatics and biochemistry, we have identified a family of DNA motifs in the E. coli chromosome with KOPS activity." }, { "pmid": "16612541", "title": "Selection for chromosome architecture in bacteria.", "abstract": "Bacterial chromosomes are immense polymers whose faithful replication and segregation are crucial to cell survival. The ability of proteins such as FtsK to move unidirectionally toward the replication terminus, and direct DNA translocation into the appropriate daughter cell during cell division, requires that bacterial genomes maintain an architecture for the orderly replication and segregation of chromosomes. We suggest that proteins that locate the replication terminus exploit strand-biased sequences that are overrepresented on one DNA strand, and that selection increases with decreased distance to the replication terminus. We report a generalized method for detecting these architecture imparting sequences (AIMS) and have identified AIMS in nearly all bacterial genomes. Their increased abundance on leading strands and decreased abundance on lagging strands toward replication termini are not the result of changes in mutational bias; rather, they reflect a gradient of long-term positive selection for AIMS. The maintenance of the pattern of AIMS across the genomes of related bacteria independent of their positions within individual genes suggests a well-conserved role in genome biology. The stable gradient of AIMS abundance from replication origin to terminus suggests that the replicore acts as a target of selection, where selection for chromosome architecture results in the maintenance of gene order and in the lack of high-frequency DNA inversion within replicores." }, { "pmid": "16237205", "title": "Genome plasticity and ori-ter rebalancing in Salmonella typhi.", "abstract": "Genome plasticity resulting from frequent rearrangement of the bacterial genome is a fascinating but poorly understood phenomenon. First reported in Salmonella typhi, it has been observed only in a small number of Salmonella serovars, although the over 2,500 known Salmonella serovars are all very closely related. To gain insights into this phenomenon and elucidate its roles in bacterial evolution, especially those involved in the formation of particular pathogens, we systematically analyzed the genomes of 127 wild-type S. 
typhi strains isolated from many places of the world and compared them with the two sequenced strains, Ty2 and CT18, attempting to find possible associations between genome rearrangement and other significant genomic features. Like other host-adapted Salmonella serovars, S. typhi contained large genome insertions, including the 134 kb Salmonella pathogenicity island, SPI7. Our analyses showed that SPI7 disrupted the physical balance of the bacterial genome between the replication origin (ori) and terminus (ter) when this DNA segment was inserted into the genome, and rearrangement in individual strains further changed the genome balance status, with a general tendency toward a better balanced genome structure. In a given S. typhi strain, genome diversification occurred and resulted in different structures among cells in the culture. Under a stressed condition, bacterial cells with better balanced genome structures were selected to greatly increase in proportion; in such cases, bacteria with better balanced genomes formed larger colonies and grew with shorter generation times. Our results support the hypothesis that genome plasticity as a result of frequent rearrangement provides the opportunity for the bacterial genome to adopt a better balanced structure and thus eventually stabilizes the genome during evolution." }, { "pmid": "15479949", "title": "Psi-Phi: exploring the outer limits of bacterial pseudogenes.", "abstract": "Because bacterial chromosomes are tightly packed with genes and were traditionally viewed as being optimized for size and replication speed, it was not surprising that the early annotations of sequenced bacterial genomes reported few, if any, pseudogenes. But because pseudogenes are generally recognized by comparisons with their functional counterparts, as more genome sequences accumulated, many bacterial pathogens were found to harbor large numbers of truncated, inactivated, and degraded genes. Because the mutational events that inactivate genes occur continuously in all genomes, we investigated whether the rarity of pseudogenes in some bacteria was attributable to properties inherent to the organism or to the failure to recognize pseudogenes. By developing a program suite (called Psi-Phi, for Psi-gene Finder) that applies a comparative method to identify pseudogenes (attributable both to misannotation and to nonrecognition), we analyzed the pseudogene inventories in the sequenced members of the Escherichia coli/Shigella clade. This approach recovered hundreds of previously unrecognized pseudogenes and showed that pseudogenes are a regular feature of bacterial genomes, even in those whose original annotations registered no truncated or otherwise inactivated genes. In Shigella flexneri 2a, large proportions of pseudogenes are generated by nonsense mutations and IS element insertions, events that seldom produce the pseudogenes present in the other genomes examined. Almost all (>95%) pseudogenes are restricted to only one of the genomes and are of relatively recent origin, suggesting that these bacteria possess active mechanisms to eliminate nonfunctional genes." }, { "pmid": "8993858", "title": "Yersinia pestis--etiologic agent of plague.", "abstract": "Plague is a widespread zoonotic disease that is caused by Yersinia pestis and has had devastating effects on the human population throughout history. Disappearance of the disease is unlikely due to the wide range of mammalian hosts and their attendant fleas. The flea/rodent life cycle of Y. 
pestis, a gram-negative obligate pathogen, exposes it to very different environmental conditions and has resulted in some novel traits facilitating transmission and infection. Studies characterizing virulence determinants of Y. pestis have identified novel mechanisms for overcoming host defenses. Regulatory systems controlling the expression of some of these virulence factors have proven quite complex. These areas of research have provide new insights into the host-parasite relationship. This review will update our present understanding of the history, etiology, epidemiology, clinical aspects, and public health issues of plague." }, { "pmid": "10570195", "title": "Yersinia pestis, the cause of plague, is a recently emerged clone of Yersinia pseudotuberculosis.", "abstract": "Plague, one of the most devastating diseases of human history, is caused by Yersinia pestis. In this study, we analyzed the population genetic structure of Y. pestis and the two other pathogenic Yersinia species, Y. pseudotuberculosis and Y. enterocolitica. Fragments of five housekeeping genes and a gene involved in the synthesis of lipopolysaccharide were sequenced from 36 strains representing the global diversity of Y. pestis and from 12-13 strains from each of the other species. No sequence diversity was found in any Y. pestis gene, and these alleles were identical or nearly identical to alleles from Y. pseudotuberculosis. Thus, Y. pestis is a clone that evolved from Y. pseudotuberculosis 1,500-20,000 years ago, shortly before the first known pandemics of human plague. Three biovars (Antiqua, Medievalis, and Orientalis) have been distinguished by microbiologists within the Y. pestis clone. These biovars form distinct branches of a phylogenetic tree based on restriction fragment length polymorphisms of the locations of the IS100 insertion element. These data are consistent with previous inferences that Antiqua caused a plague pandemic in the sixth century, Medievalis caused the Black Death and subsequent epidemics during the second pandemic wave, and Orientalis caused the current plague pandemic." }, { "pmid": "15598742", "title": "Microevolution and history of the plague bacillus, Yersinia pestis.", "abstract": "The association of historical plague pandemics with Yersinia pestis remains controversial, partly because the evolutionary history of this largely monomorphic bacterium was unknown. The microevolution of Y. pestis was therefore investigated by three different multilocus molecular methods, targeting genomewide synonymous SNPs, variation in number of tandem repeats, and insertion of IS100 insertion elements. Eight populations were recognized by the three methods, and we propose an evolutionary tree for these populations, rooted on Yersinia pseudotuberculosis. The tree invokes microevolution over millennia, during which enzootic pestoides isolates evolved. This initial phase was followed by a binary split 6,500 years ago, which led to populations that are more frequently associated with human disease. These populations do not correspond directly to classical biovars that are based on phenotypic properties. Thus, we recommend that henceforth groupings should be based on molecular signatures. The age of Y. pestis inferred here is compatible with the dates of historical pandemic plague. However, it is premature to infer an association between any modern molecular grouping and a particular pandemic wave that occurred before the 20th century." 
}, { "pmid": "16740952", "title": "Complete genome sequence of Yersinia pestis strains Antiqua and Nepal516: evidence of gene reduction in an emerging pathogen.", "abstract": "Yersinia pestis, the causative agent of bubonic and pneumonic plagues, has undergone detailed study at the molecular level. To further investigate the genomic diversity among this group and to help characterize lineages of the plague organism that have no sequenced members, we present here the genomes of two isolates of the \"classical\" antiqua biovar, strains Antiqua and Nepal516. The genomes of Antiqua and Nepal516 are 4.7 Mb and 4.5 Mb and encode 4,138 and 3,956 open reading frames, respectively. Though both strains belong to one of the three classical biovars, they represent separate lineages defined by recent phylogenetic studies. We compare all five currently sequenced Y. pestis genomes and the corresponding features in Yersinia pseudotuberculosis. There are strain-specific rearrangements, insertions, deletions, single nucleotide polymorphisms, and a unique distribution of insertion sequences. We found 453 single nucleotide polymorphisms in protein-coding regions, which were used to assess the evolutionary relationships of these Y. pestis strains. Gene reduction analysis revealed that the gene deletion processes are under selective pressure, and many of the inactivations are probably related to the organism's interaction with its host environment. The results presented here clearly demonstrate the differences between the two biovar antiqua lineages and support the notion that grouping Y. pestis strains based strictly on the classical definition of biovars (predicated upon two biochemical assays) does not accurately reflect the phylogenetic relationships within this species. A comparison of four virulent Y. pestis strains with the human-avirulent strain 91001 provides further insight into the genetic basis of virulence to humans." }, { "pmid": "12450857", "title": "A whole-genome shotgun optical map of Yersinia pestis strain KIM.", "abstract": "Yersinia pestis is the causative agent of the bubonic, septicemic, and pneumonic plagues (also known as black death) and has been responsible for recurrent devastating pandemics throughout history. To further understand this virulent bacterium and to accelerate an ongoing sequencing project, two whole-genome restriction maps (XhoI and PvuII) of Y. pestis strain KIM were constructed using shotgun optical mapping. This approach constructs ordered restriction maps from randomly sheared individual DNA molecules directly extracted from cells. The two maps served different purposes; the XhoI map facilitated sequence assembly by providing a scaffold for high-resolution alignment, while the PvuII map verified genome sequence assembly. Our results show that such maps facilitated the closure of sequence gaps and, most importantly, provided a purely independent means for sequence validation. Given the recent advancements to the optical mapping system, increased resolution and throughput are enabling such maps to guide sequence assembly at a very early stage of a microbial sequencing project." }, { "pmid": "11050348", "title": "Rare genomic changes as a tool for phylogenetics.", "abstract": "DNA sequence data have offered valuable insights into the relationships between living organisms. However, most phylogenetic analyses of DNA sequences rely primarily on single nucleotide substitutions, which might not be perfect phylogenetic markers. 
Rare genomic changes (RGCs), such as intron indels, retroposon integrations, signature sequences, mitochondrial and chloroplast gene order changes, gene duplications and genetic code changes, provide a suite of complementary markers with enormous potential for molecular systematics. Recent exploitation of RGCs has already started to yield exciting phylogenetic information." }, { "pmid": "17173484", "title": "The complete genome sequence and comparative genome analysis of the high pathogenicity Yersinia enterocolitica strain 8081.", "abstract": "The human enteropathogen, Yersinia enterocolitica, is a significant link in the range of Yersinia pathologies extending from mild gastroenteritis to bubonic plague. Comparison at the genomic level is a key step in our understanding of the genetic basis for this pathogenicity spectrum. Here we report the genome of Y. enterocolitica strain 8081 (serotype 0:8; biotype 1B) and extensive microarray data relating to the genetic diversity of the Y. enterocolitica species. Our analysis reveals that the genome of Y. enterocolitica strain 8081 is a patchwork of horizontally acquired genetic loci, including a plasticity zone of 199 kb containing an extraordinarily high density of virulence genes. Microarray analysis has provided insights into species-specific Y. enterocolitica gene functions and the intraspecies differences between the high, low, and nonpathogenic Y. enterocolitica biotypes. Through comparative genome sequence analysis we provide new information on the evolution of the Yersinia. We identify numerous loci that represent ancestral clusters of genes potentially important in enteric survival and pathogenesis, which have been lost or are in the process of being lost, in the other sequenced Yersinia lineages. Our analysis also highlights large metabolic operons in Y. enterocolitica that are absent in the related enteropathogen, Yersinia pseudotuberculosis, indicating major differences in niche and nutrients used within the mammalian gut. These include clusters directing, the production of hydrogenases, tetrathionate respiration, cobalamin synthesis, and propanediol utilisation. Along with ancestral gene clusters, the genome of Y. enterocolitica has revealed species-specific and enteropathogen-specific loci. This has provided important insights into the pathology of this bacterium and, more broadly, into the evolution of the genus. Moreover, wider investigations looking at the patterns of gene loss and gain in the Yersinia have highlighted common themes in the genome evolution of other human enteropathogens." }, { "pmid": "16221896", "title": "Application of phylogenetic networks in evolutionary studies.", "abstract": "The evolutionary history of a set of taxa is usually represented by a phylogenetic tree, and this model has greatly facilitated the discussion and testing of hypotheses. However, it is well known that more complex evolutionary scenarios are poorly described by such models. Further, even when evolution proceeds in a tree-like manner, analysis of the data may not be best served by using methods that enforce a tree structure but rather by a richer visualization of the data to evaluate its properties, at least as an essential first step. Thus, phylogenetic networks should be employed when reticulate events such as hybridization, horizontal gene transfer, recombination, or gene duplication and loss are believed to be involved, and, even in the absence of such events, phylogenetic networks have a useful role to play. 
This article reviews the terminology used for phylogenetic networks and covers both split networks and reticulate networks, how they are defined, and how they can be interpreted. Additionally, the article outlines the beginnings of a comprehensive statistical framework for applying split network methods. We show how split networks can represent confidence sets of trees and introduce a conservative statistical test for whether the conflicting signal in a network is treelike. Finally, this article describes a new program, SplitsTree4, an interactive and comprehensive tool for inferring different types of phylogenetic networks from sequences, distances, and trees." }, { "pmid": "12855474", "title": "Scaling up accurate phylogenetic reconstruction from gene-order data.", "abstract": "MOTIVATION\nPhylogenetic reconstruction from gene-order data has attracted increasing attention from both biologists and computer scientists over the last few years. Methods used in reconstruction include distance-based methods (such as neighbor-joining), parsimony methods using sequence-based encodings, Bayesian approaches, and direct optimization. The latter, pioneered by Sankoff and extended by us with the software suite GRAPPA, is the most accurate approach, but cannot handle more than about 15 genomes of limited size (e.g. organelles).\n\n\nRESULTS\nWe report here on our successful efforts to scale up direct optimization through a two-step approach: the first step decomposes the dataset into smaller pieces and runs the direct optimization (GRAPPA) on the smaller pieces, while the second step builds a tree from the results obtained on the smaller pieces. We used the sophisticated disk-covering method (DCM) pioneered by Warnow and her group, suitably modified to take into account the computational limitations of GRAPPA. We find that DCM-GRAPPA scales gracefully to at least 1000 genomes of a few hundred genes each and retains surprisingly high accuracy throughout the range: in our experiments, the topological error rate rarely exceeded a few percent. Thus, reconstruction based on gene-order data can now be accomplished with high accuracy on datasets of significant size." }, { "pmid": "12610535", "title": "Molecular evolution meets the genomics revolution.", "abstract": "Changes in technology in the past decade have had such an impact on the way that molecular evolution research is done that it is difficult now to imagine working in a world without genomics or the Internet. In 1992, GenBank was less than a hundredth of its current size and was updated every three months on a huge spool of tape. Homology searches took 30 minutes and rarely found a hit. Now it is difficult to find sequences with only a few homologs to use as examples for teaching bioinformatics. For molecular evolution researchers, the genomics revolution has showered us with raw data and the information revolution has given us the wherewithal to analyze it. In broad terms, the most significant outcome from these changes has been our newfound ability to examine the evolution of genomes as a whole, enabling us to infer genome-wide evolutionary patterns and to identify subsets of genes whose evolution has been in some way atypical." 
}, { "pmid": "12810957", "title": "Human and mouse genomic sequences reveal extensive breakpoint reuse in mammalian evolution.", "abstract": "The human and mouse genomic sequences provide evidence for a larger number of rearrangements than previously thought and reveal extensive reuse of breakpoints from the same short fragile regions. Breakpoint clustering in regions implicated in cancer and infertility have been reported in previous studies; we report here on breakpoint clustering in chromosome evolution. This clustering reveals limitations of the widely accepted random breakage theory that has remained unchallenged since the mid-1980s. The genome rearrangement analysis of the human and mouse genomes implies the existence of a large number of very short \"hidden\" synteny blocks that were invisible in the comparative mapping data and ignored in the random breakage model. These blocks are defined by closely located breakpoints and are often hard to detect. Our results suggest a model of chromosome evolution that postulates that mammalian genomes are mosaics of fragile regions with high propensity for rearrangements and solid regions with low propensity for rearrangements." }, { "pmid": "14668356", "title": "Genomic rearrangements at rrn operons in Salmonella.", "abstract": "Most Salmonella serovars are general pathogens that infect a variety of hosts. These \"generalist\" serovars cause disease in many animals from reptiles to mammals. In contrast, a few serovars cause disease only in a specific host. Host-specific serovars can cause a systemic, often fatal disease in one species yet remain avirulent in other species. Host-specific Salmonella frequently have large genomic rearrangements due to recombination at the ribosomal RNA (rrn) operons while the generalists consistently have a conserved chromosomal arrangement. To determine whether this is the result of an intrinsic difference in recombination frequency or a consequence of lifestyle difference between generalist and host-specific Salmonella, we determined the frequency of rearrangements in vitro. Using lacZ genes as portable regions of homology for inversion analysis, we found that both generalist and host-specific serovars of Salmonella have similar tolerances to chromosomal rearrangements in vitro. Using PCR and genetic selection, we found that generalist and host-specific serovars also undergo rearrangements at rrn operons at similar frequencies in vitro. These observations indicate that the observed difference in genomic stability between generalist and host-specific serovars is a consequence of their distinct lifestyles, not intrinsic differences in recombination frequencies." }, { "pmid": "4577740", "title": "Bi-directional chromosomal replication in Salmonella typhimurium.", "abstract": "Transducing frequencies of phage P22 lysates prepared from Salmonella typhimurium exponential cultures in minimal and nutrient broth media were compared. The assumption is that cells grown in a minimal medium will have one replication fork per replication unit, but cells in nutrient broth will have multiple replication forks; therefore, the frequency of genetic markers near the origin of replication will be higher in the nutrient broth culture. Analysis of transduction showed a gradient of marker frequencies from the highest (the cysG-ilv region) to the lowest (purE-trpB region) in both clockwise and counter clockwise directions. 
This supports our previous observation that chromosome replication proceeds bidirectionally from the origin between cysG (109 min on S. typhimurium map) and ilv (122 min) to a terminus in purE-trpB region (20 to 53 min). Since this method avoids possible artifacts of other methods, the results are assumed to reflect the sequence of chromosome replication in exponentially growing cells. Evidence for the existence of multiple replication forks in nutrient broth-grown cells was supported by the following: (i) the marker frequency data fitted the assumption of multiple replication fork formation; (ii) residual deoxyribonucleic acid increase after inhibition of protein synthesis to complete a round of chromosome synthesis which was 44% in cells grown in a minimal medium and 82% in those in nutrient broth; (iii) segregation patterns of the (3)H-thymidine-labeled chromosome strands during subsequent growth in non-radioactive medium were studied by autoradiography, and the number of replication points per chromosome per cell was estimated as 5.6 for the nutrient broth culture and 2.5 for the minimal medium culture. These data support a model of symmetrical and bidirectional chromosome replication." }, { "pmid": "17189424", "title": "Microinversions in mammalian evolution.", "abstract": "We propose an approach for identifying microinversions across different species and show that microinversions provide a source of low-homoplasy evolutionary characters. These characters may be used as \"certificates\" to verify different branches in a phylogenetic tree, turning the challenging problem of phylogeny reconstruction into a relatively simple algorithmic problem. We estimate that there exist hundreds of thousands of microinversions in genomes of mammals from comparative sequencing projects, an untapped source of new phylogenetic characters." }, { "pmid": "11586360", "title": "Genome sequence of Yersinia pestis, the causative agent of plague.", "abstract": "The Gram-negative bacterium Yersinia pestis is the causative agent of the systemic invasive infectious disease classically referred to as plague, and has been responsible for three human pandemics: the Justinian plague (sixth to eighth centuries), the Black Death (fourteenth to nineteenth centuries) and modern plague (nineteenth century to the present day). The recent identification of strains resistant to multiple drugs and the potential use of Y. pestis as an agent of biological warfare mean that plague still poses a threat to human health. Here we report the complete genome sequence of Y. pestis strain CO92, consisting of a 4.65-megabase (Mb) chromosome and three plasmids of 96.2 kilobases (kb), 70.3 kb and 9.6 kb. The genome is unusually rich in insertion sequences and displays anomalies in GC base-composition bias, indicating frequent intragenomic recombination. Many genes seem to have been acquired from other bacteria and viruses (including adhesins, secretion systems and insecticidal toxins). The genome contains around 150 pseudogenes, many of which are remnants of a redundant enteropathogenic lifestyle. The evidence of ongoing genome fluidity, expansion and decay suggests Y. pestis is a pathogen that has undergone large-scale genetic flux and provides a unique insight into the ways in which new and highly virulent pathogens evolve." 
}, { "pmid": "16362346", "title": "Polymorphic micro-inversions contribute to the genomic variability of humans and chimpanzees.", "abstract": "A combination of inter- and intra-species genome comparisons is required to identify and classify the full spectrum of genetic changes, both subtle and gross, that have accompanied the evolutionary divergence of humans and other primates. In this study, gene order comparisons of 11,518 human and chimpanzee orthologous gene pairs were performed to detect regions of inverted gene order that are potentially indicative of small-scale rearrangements such as inversions. By these means, a total of 71 potential micro-rearrangements were detected, nine of which were considered to represent micro-inversions encompassing more than three genes. These putative inversions were then investigated by FISH and/or PCR analyses and the authenticity of five of the nine inversions, ranging in size from approximately 800 kb to approximately 4.4 Mb, was confirmed. These inversions mapped to 1p13.2-13.3, 7p22.1, 7p13-14.1, 18p11.21-11.22 and 19q13.12 and encompass 50, 14, 16, 7 and 16 known genes, respectively. Intriguingly, four of the confirmed inversions turned out to be polymorphic: three were polymorphic in the chimpanzee and one in humans. It is concluded that micro-inversions make a significant contribution to genomic variability in both humans and chimpanzees and inversion polymorphisms may be more frequent than previously realized." }, { "pmid": "3049239", "title": "Phase variation in Salmonella: analysis of Hin recombinase and hix recombination site interaction in vivo.", "abstract": "The bacteriophage P22-based challenge phase selection was used to characterize the binding of Salmonella Hin recombinase to the wild-type hixL and hixR recombination sites, as well as to mutant and synthetic hix sequences in vivo. Hin recombinase binds to the hixL or hixR recombination sites and represses transcription from an upstream promoter in the challenge phage system. Hin-mediated repression results from Hin associating into multimers either prior to binding or during the binding process at the hix operator sites (cooperativity). The ability of Hin multimers to repress transcription is eliminated when the hix 13-bp half-sites are rotated to opposite sides of the DNA helix by inserting 4 bp between them. Insertion of 1 bp between half-sites reduces overall repression. Hin also binds one of the hixL half-sites to repress transcription, but only when high levels of Hin protein are present in the cell. Mutations have been identified in the hix sites that impair Hin binding. Five of the 26 bp in the hix sites are critical; sites with base-pair substitutions at these five positions show greatly reduced binding. Three additional base pairs make minor contributions to binding. These results are consistent with the results of binding studies between Hin and the hix sites in vitro." }, { "pmid": "15746427", "title": "Extensive DNA inversions in the B. fragilis genome control variable gene expression.", "abstract": "The obligately anaerobic bacterium Bacteroides fragilis, an opportunistic pathogen and inhabitant of the normal human colonic microbiota, exhibits considerable within-strain phase and antigenic variation of surface components. 
The complete genome sequence has revealed an unusual breadth (in number and in effect) of DNA inversion events that potentially control expression of many different components, including surface and secreted components, regulatory molecules, and restriction-modification proteins. Invertible promoters of two different types (12 group 1 and 11 group 2) were identified. One group has inversion crossover (fix) sites similar to the hix sites of Salmonella typhimurium. There are also four independent intergenic shufflons that potentially alter the expression and function of varied genes. The composition of the 10 different polysaccharide biosynthesis gene clusters identified (7 with associated invertible promoters) suggests a mechanism of synthesis similar to the O-antigen capsules of Escherichia coli." }, { "pmid": "3186748", "title": "Intramolecular recombination of chloroplast genome mediated by short direct-repeat sequences in wheat species.", "abstract": "Structural alterations of the chloroplast genome tend to occur at \"hot spots\" on the physical map. To clarify the mechanism of mutation of chloroplast genome structure in higher plants, we determined the nucleotide sequence of the hot-spot region of chloroplast DNAs related to length mutations (deletions/insertions) in Triticum (wheat) and Aegilops. From a comparison of this region in wheat with the corresponding region of tobacco or liverwort, it is evident that one of the open reading frames in tobacco (ORF512) has been replaced in wheat by the rpl23 gene, which is a member of the ribosomal protein gene operon. In the deleted positions and in the original genome of Triticum and Aegilops, consensus sequences forming short direct repeats were found, indicating that these deletions were a result of intramolecular recombination mediated by these short direct-repeat sequences. By two independent recombination events in the Aegilops crassa type of chloroplast genome, which is shared by Triticum monococcum, Ae. bicornis, Ae. sharonensis, Ae. comosa, and Ae. mutica, the novel chloroplast DNA sequences of T. aestivum and Ae. squarrosa were generated. This finding indicates the existence of illegitimate recombination in the chloroplast genome and presents a mechanism for producing genetic diversity of that genome." }, { "pmid": "15951307", "title": "Efficient sorting of genomic permutations by translocation, inversion and block interchange.", "abstract": "MOTIVATION\nFinding genomic distance based on gene order is a classic problem in genome rearrangements. Efficient exact algorithms for genomic distances based on inversions and/or translocations have been found but are complicated by special cases, rare in simulations and empirical data. We seek a universal operation underlying a more inclusive set of evolutionary operations and yielding a tractable genomic distance with simple mathematical form.\n\n\nRESULTS\nWe study a universal double-cut-and-join operation that accounts for inversions, translocations, fissions and fusions, but also produces circular intermediates which can be reabsorbed. The genomic distance, computable in linear time, is given by the number of breakpoints minus the number of cycles (b-c) in the comparison graph of the two genomes; the number of hurdles does not enter into it. Without changing the formula, we can replace generation and re-absorption of a circular intermediate by a generalized transposition, equivalent to a block interchange, with weight two. 
Our simple algorithm converts one multi-linear chromosome genome to another in the minimum distance." }, { "pmid": "17407601", "title": "Dependence of paracentric inversion rate on tract length.", "abstract": "BACKGROUND\nWe develop a Bayesian method based on MCMC for estimating the relative rates of pericentric and paracentric inversions from marker data from two species. The method also allows estimation of the distribution of inversion tract lengths.\n\n\nRESULTS\nWe apply the method to data from Drosophila melanogaster and D. yakuba. We find that pericentric inversions occur at a much lower rate compared to paracentric inversions. The average paracentric inversion tract length is approx. 4.8 Mb with small inversions being more frequent than large inversions. If the two breakpoints defining a paracentric inversion tract are uniformly and independently distributed over chromosome arms there will be more short tract-length inversions than long; we find an even greater preponderance of short tract lengths than this would predict. Thus there appears to be a correlation between the positions of breakpoints which favors shorter tract lengths.\n\n\nCONCLUSION\nThe method developed in this paper provides the first statistical estimator for estimating the distribution of inversion tract lengths from marker data. Application of this method for a number of data sets may help elucidate the relationship between the length of an inversion and the chance that it will get accepted." }, { "pmid": "15525697", "title": "A bayesian analysis of metazoan mitochondrial genome arrangements.", "abstract": "Genome arrangements are a potentially powerful source of information to infer evolutionary relationships among distantly related taxa. Mitochondrial genome arrangements may be especially informative about metazoan evolutionary relationships because (1) nearly all animals have the same set of definitively homologous mitochondrial genes, (2) mitochondrial genome rearrangement events are rare relative to changes in sequences, and (3) the number of possible mitochondrial genome arrangements is huge, making convergent evolution of genome arrangements appear highly unlikely. In previous studies, phylogenetic evidence in genome arrangement data is nearly always used in a qualitative fashion-the support in favor of clades with similar or identical genome arrangements is considered to be quite strong, but is not quantified. The purpose of this article is to quantify the uncertainty among the relationships of metazoan phyla on the basis of mitochondrial genome arrangements while incorporating prior knowledge of the monophyly of various groups from other sources. The work we present here differs from our previous work in the statistics literature in that (1) we incorporate prior information on classifications of metazoans at the phylum level, (2) we describe several advances in our computational approach, and (3) we analyze a much larger data set (87 taxa) that consists of each unique, complete mitochondrial genome arrangement with a full complement of 37 genes that were present in the NCBI (National Center for Biotechnology Information) database at a recent date. In addition, we analyze a subset of 28 of these 87 taxa for which the non-tRNA mitochondrial genomes are unique where the assumption of our inversion-only model of rearrangement is more plausible. We present summaries of Bayesian posterior distributions of tree topology on the basis of these two data sets." 
}, { "pmid": "14534182", "title": "MCMC genome rearrangement.", "abstract": "MOTIVATION\nAs more and more genomes have been sequenced, genomic data is rapidly accumulating. Genome-wide mutations are believed more neutral than local mutations such as substitutions, insertions and deletions, therefore phylogenetic investigations based on inversions, transpositions and inverted transpositions are less biased by the hypothesis on neutral evolution. Although efficient algorithms exist for obtaining the inversion distance of two signed permutations, there is no reliable algorithm when both inversions and transpositions are considered. Moreover, different type of mutations happen with different rates, and it is not clear how to weight them in a distance based approach.\n\n\nRESULTS\nWe introduce a Markov Chain Monte Carlo method to genome rearrangement based on a stochastic model of evolution, which can estimate the number of different evolutionary events needed to sort a signed permutation. The performance of the method was tested on simulated data, and the estimated numbers of different types of mutations were reliable. Human and Drosophila mitochondrial data were also analysed with the new method. The mixing time of the Markov Chain is short both in terms of CPU times and number of proposals.\n\n\nAVAILABILITY\nThe source code in C is available on request from the author." }, { "pmid": "16627724", "title": "Comment on \"Phylogenetic MCMC algorithms are misleading on mixtures of trees\".", "abstract": "Mossel and Vigoda (Reports, 30 September 2005, p. 2207) show that nearest neighbor interchange transitions, commonly used in phylogenetic Markov chain Monte Carlo (MCMC) algorithms, perform poorly on mixtures of dissimilar trees. However, the conditions leading to their results are artificial. Standard MCMC convergence diagnostics would detect the problem in real data, and correction of the model misspecification would solve it." }, { "pmid": "17095535", "title": "Bayesian estimation of concordance among gene trees.", "abstract": "Multigene sequence data have great potential for elucidating important and interesting evolutionary processes, but statistical methods for extracting information from such data remain limited. Although various biological processes may cause different genes to have different genealogical histories (and hence different tree topologies), we also may expect that the number of distinct topologies among a set of genes is relatively small compared with the number of possible topologies. Therefore evidence about the tree topology for one gene should influence our inferences of the tree topology on a different gene, but to what extent? In this paper, we present a new approach for modeling and estimating concordance among a set of gene trees given aligned molecular sequence data. Our approach introduces a one-parameter probability distribution to describe the prior distribution of concordance among gene trees. We describe a novel 2-stage Markov chain Monte Carlo (MCMC) method that first obtains independent Bayesian posterior probability distributions for individual genes using standard methods. These posterior distributions are then used as input for a second MCMC procedure that estimates a posterior distribution of gene-to-tree maps (GTMs). 
The posterior distribution of GTMs can then be summarized to provide revised posterior probability distributions for each gene (taking account of concordance) and to allow estimation of the proportion of the sampled genes for which any given clade is true (the sample-wide concordance factor). Further, under the assumption that the sampled genes are drawn randomly from a genome of known size, we show how one can obtain an estimate, with credibility intervals, on the proportion of the entire genome for which a clade is true (the genome-wide concordance factor). We demonstrate the method on a set of 106 genes from 8 yeast species." }, { "pmid": "16737554", "title": "Genome-wide detection and analysis of homologous recombination among sequenced strains of Escherichia coli.", "abstract": "BACKGROUND\nComparisons of complete bacterial genomes reveal evidence of lateral transfer of DNA across otherwise clonally diverging lineages. Some lateral transfer events result in acquisition of novel genomic segments and are easily detected through genome comparison. Other more subtle lateral transfers involve homologous recombination events that result in substitution of alleles within conserved genomic regions. This type of event is observed infrequently among distantly related organisms. It is reported to be more common within species, but the frequency has been difficult to quantify since the sequences under comparison tend to have relatively few polymorphic sites.\n\n\nRESULTS\nHere we report a genome-wide assessment of homologous recombination among a collection of six complete Escherichia coli and Shigella flexneri genome sequences. We construct a whole-genome multiple alignment and identify clusters of polymorphic sites that exhibit atypical patterns of nucleotide substitution using a random walk-based method. The analysis reveals one large segment (approximately 100 kb) and 186 smaller clusters of single base pair differences that suggest lateral exchange between lineages. These clusters include portions of 10% of the 3,100 genes conserved in six genomes. Statistical analysis of the functional roles of these genes reveals that several classes of genes are over-represented, including those involved in recombination, transport and motility.\n\n\nCONCLUSION\nWe demonstrate that intraspecific recombination in E. coli is much more common than previously appreciated and may show a bias for certain types of genes. The described method provides high-specificity, conservative inference of past recombination events." }, { "pmid": "17090663", "title": "A bimodal pattern of relatedness between the Salmonella Paratyphi A and Typhi genomes: convergence or divergence by homologous recombination?", "abstract": "All Salmonella can cause disease but severe systemic infections are primarily caused by a few lineages. Paratyphi A and Typhi are the deadliest human restricted serovars, responsible for approximately 600,000 deaths per annum. We developed a Bayesian changepoint model that uses variation in the degree of nucleotide divergence along two genomes to detect homologous recombination between these strains, and with other lineages of Salmonella enterica. Paratyphi A and Typhi showed an atypical and surprising pattern. For three quarters of their genomes, they appear to be distantly related members of the species S. enterica, both in their gene content and nucleotide divergence. However, the remaining quarter is much more similar in both aspects, with average nucleotide divergence of 0.18% instead of 1.2%. 
We describe two different scenarios that could have led to this pattern, convergence and divergence, and conclude that the former is more likely based on a variety of criteria. The convergence scenario implies that, although Paratyphi A and Typhi were not especially close relatives within S. enterica, they have gone through a burst of recombination involving more than 100 recombination events. Several of the recombination events transferred novel genes in addition to homologous sequences, resulting in similar gene content in the two lineages. We propose that recombination between Typhi and Paratyphi A has allowed the exchange of gene variants that are important for their adaptation to their common ecological niche, the human host." }, { "pmid": "16423021", "title": "Origin of replication in circular prokaryotic chromosomes.", "abstract": "To predict origins of replication in prokaryotic chromosomes, we analyse the leading and lagging strands of 200 chromosomes for differences in oligomer composition and show that these correlate strongly with taxonomic grouping, lifestyle and molecular details of the replication process. While all bacteria have a preference for Gs over Cs on the leading strand, we discover that the direction of the A/T skew is determined by the polymerase-alpha subunit that replicates the leading strand. The strength of the strand bias varies greatly between both phyla and environments and appears to correlate with growth rate. Finally we observe much greater diversity of skew among archaea than among bacteria. We have developed a program that accurately locates the origins of replication by measuring the differences between leading and lagging strand of all oligonucleotides up to 8 bp in length. The program and results for all publicly available genomes are available from http://www.cbs.dtu.dk/services/GenomeAtlas/suppl/origin." }, { "pmid": "15231754", "title": "Mauve: multiple alignment of conserved genomic sequence with rearrangements.", "abstract": "As genomes evolve, they undergo large-scale evolutionary processes that present a challenge to sequence comparison not posed by short sequences. Recombination causes frequent genome rearrangements, horizontal transfer introduces new sequences into bacterial chromosomes, and deletions remove segments of the genome. Consequently, each genome is a mosaic of unique lineage-specific segments, regions shared with a subset of other genomes and segments conserved among all the genomes under consideration. Furthermore, the linear order of these segments may be shuffled among genomes. We present methods for identification and alignment of conserved genomic DNA in the presence of rearrangements and horizontal transfer. Our methods have been implemented in a software package called Mauve. Mauve has been applied to align nine enterobacterial genomes and to determine global rearrangement structure in three mammalian genomes. We have evaluated the quality of Mauve alignments and drawn comparison to other methods through extensive simulations of genome evolution." }, { "pmid": "15368893", "title": "Complete genome sequence of Yersinia pestis strain 91001, an isolate avirulent to humans.", "abstract": "Genomics provides an unprecedented opportunity to probe in minute detail into the genomes of the world's most deadly pathogenic bacteria- Yersinia pestis. Here we report the complete genome sequence of Y. pestis strain 91001, a human-avirulent strain isolated from the rodent Brandt's vole-Microtus brandti. 
The genome of strain 91001 consists of one chromosome and four plasmids (pPCP1, pCD1, pMT1 and pCRY). The 9609-bp pPCP1 plasmid of strain 91001 is almost identical to the counterparts from reference strains (CO92 and KIM). There are 98 genes in the 70,159-bp range of plasmid pCD1. The 106,642-bp plasmid pMT1 has slightly different architecture compared with the reference ones. pCRY is a novel plasmid discovered in this work. It is 21,742 bp long and harbors a cryptic type IV secretory system. The chromosome of 91001 is 4,595,065 bp in length. Among the 4037 predicted genes, 141 are possible pseudo-genes. Due to the rearrangements mediated by insertion elements, the structure of the 91001 chromosome shows dramatic differences compared with CO92 and KIM. Based on the analysis of plasmids and chromosome architectures, pseudogene distribution, nitrate reduction negative mechanism and gene comparison, we conclude that strain 91001 and other strains isolated from M. brandti might have evolved from ancestral Y. pestis in a different lineage. The large genome fragment deletions in the 91001 chromosome and some pseudogenes may contribute to its unique nonpathogenicity to humans and host-specificity." }, { "pmid": "15358858", "title": "Insights into the evolution of Yersinia pestis through whole-genome comparison with Yersinia pseudotuberculosis.", "abstract": "Yersinia pestis, the causative agent of plague, is a highly uniform clone that diverged recently from the enteric pathogen Yersinia pseudotuberculosis. Despite their close genetic relationship, they differ radically in their pathogenicity and transmission. Here, we report the complete genomic sequence of Y. pseudotuberculosis IP32953 and its use for detailed genome comparisons with available Y. pestis sequences. Analyses of identified differences across a panel of Yersinia isolates from around the world reveal 32 Y. pestis chromosomal genes that, together with the two Y. pestis-specific plasmids, to our knowledge, represent the only new genetic material in Y. pestis acquired since the the divergence from Y. pseudotuberculosis. In contrast, 149 other pseudogenes (doubling the previous estimate) and 317 genes absent from Y. pestis were detected, indicating that as many as 13% of Y. pseudotuberculosis genes no longer function in Y. pestis. Extensive insertion sequence-mediated genome rearrangements and reductive evolution through massive gene loss, resulting in elimination and modification of preexisting gene expression pathways, appear to be more important than acquisition of genes in the evolution of Y. pestis. These results provide a sobering example of how a highly virulent epidemic clone can suddenly emerge from a less virulent, closely related progenitor." }, { "pmid": "17784789", "title": "The complete genome sequence of Yersinia pseudotuberculosis IP31758, the causative agent of Far East scarlet-like fever.", "abstract": "The first reported Far East scarlet-like fever (FESLF) epidemic swept the Pacific coastal region of Russia in the late 1950s. Symptoms of the severe infection included erythematous skin rash and desquamation, exanthema, hyperhemic tongue, and a toxic shock syndrome. The term FESLF was coined for the infection because it shares clinical presentations with scarlet fever caused by group A streptococci. The causative agent was later identified as Yersinia pseudotuberculosis, although the range of morbidities was vastly different from classical pseudotuberculosis symptoms. 
To understand the origin and emergence of the peculiar clinical features of FESLF, we have sequenced the genome of the FESLF-causing strain Y. pseudotuberculosis IP31758 and compared it with that of another Y. pseudotuberculosis strain, IP32953, which causes classical gastrointestinal symptoms. The unique gene pool of Y pseudotuberculosis IP31758 accounts for more than 260 strain-specific genes and introduces individual physiological capabilities and virulence determinants, with a significant proportion horizontally acquired that likely originated from Enterobacteriaceae and other soil-dwelling bacteria that persist in the same ecological niche. The mobile genome pool includes two novel plasmids phylogenetically unrelated to all currently reported Yersinia plasmids. An icm/dot type IVB secretion system, shared only with the intracellular persisting pathogens of the order Legionellales, was found on the larger plasmid and could contribute to scarlatinoid fever symptoms in patients due to the introduction of immunomodulatory and immunosuppressive capabilities. We determined the common and unique traits resulting from genome evolution and speciation within the genus Yersinia and drew a more accurate species border between Y. pseudotuberculosis and Y. pestis. In contrast to the lack of genetic diversity observed in the evolutionary young descending Y. pestis lineage, the population genetics of Y. pseudotuberculosis is more heterogenous. Both Y. pseudotuberculosis strains IP31758 and the previously sequenced Y. pseudotuberculosis strain IP32953 have evolved by the acquisition of specific plasmids and by the horizontal acquisition and incorporation of different genetic information into the chromosome, which all together or independently seems to potentially impact the phenotypic adaptation of these two strains." } ]
International Journal of Telemedicine and Applications
18695739
PMC2495075
10.1155/2008/867639
ERMHAN: A Context-Aware Service Platform to Support Continuous Care Networks for Home-Based Assistance
Continuous care models for chronic diseases pose several technology-oriented challenges for home-based continuous care, where assistance services rely on a close collaboration among different stakeholders such as health operators, patient relatives, and social community members. Here we describe the Emilia Romagna Mobile Health Assistance Network (ERMHAN), a multichannel context-aware service platform designed to support care networks in cooperating and sharing information with the goal of improving patient quality of life. In order to meet extensibility and flexibility requirements, this platform has been developed through ontology-based context-aware computing and a service-oriented approach. We also provide some preliminary results of performance analysis and user survey activity.
2. RELATED WORK

Several research domains can be considered of interest in delivering AmI and pervasive health services for chronic diseases, spanning from smart homes, assistive technologies and home-based health monitoring, to context-aware hospitals.

One of the first relevant contributions has been provided by researchers working at the Georgia Tech Aware Home, a prototype of a smart home, where sensing and perception technologies are used to gain awareness of inhabitant activities and to enable services for maintaining independence and quality of life for an ageing population [6]. The INHOME project [7] aims at providing the means for improving the quality of life of elderly people at home, by developing technologies for managing their domestic environment and enhancing their autonomy and safety at home (e.g., activity monitoring, simple home environment management, flexible AV streams handling, flexible household appliance access). Among more recent works, the “ubiquitous home” is a real-life test-bed for home-based context-aware service experiments [8] in Japan. A set of implemented context-aware services has been evaluated by means of real-life experiments with elderly people.

In the field of assistive technologies and home-based health monitoring systems, several examples exist: Vivago is an alarm system which provides long-term user activity monitoring and alarm notification [9]; the CareMedia system [10] uses multimedia information to track user activities.

The “hospital of the future” prototype [11] is an example of a context-aware computing system in a hospital environment. It consists of a series of context-aware tools: an electronic patient record (EPR), a pill container, and a hospital bed which displays relevant patient record information, such as the medicine schema, according to contextual information (e.g., nurse position, patient, medicine tray). Muñoz et al. [12] have recently proposed a context-aware mobile system where mobile devices are capable of recognizing the setting in which hospital workers perform their tasks, and let users send messages and access hospital services according to these contextual elements.

Despite the multitude of relevant contributions in the above-mentioned research fields, only recently research activities on pervasive services for ageing and chronic disease management have begun addressing these requirements by means of a holistic approach, taking systematically into account standard guidelines and reference models for continuous care. Consolvo et al. [4] have applied social network analysis methodology to the study of continuous care networks; they conducted a series of interviews in order to explore the space of eldercare (i.e., who was involved in the care, what types of care were needed, and what types of care were being provided); based on user study results they offer some design guidelines for the development of successful computer-supported coordinated care (CSCC) systems.

Pervasive self care is a conceptual framework for the development of pervasive self-care services [13]. This study has been promoted in the framework of self care, an initiative by the Department of Health in the UK that aims at treating patients with long-term conditions near home.
The proposed reference model, inspired by the principles of service-oriented architecture (SOA), distinguishes three main spheres: the body sphere (a body area network supported by a router which interacts with body sensors and with the home sphere); the home sphere (a home server that collects and preprocesses sensed data); and the self-care service sphere (the data processing and sharing subsystem).

Some experimentation results are given in [14], where a telemedicine system is used for the home care of patients suffering from chronic obstructive pulmonary disease (COPD); the integrated telemedicine system provides professionals with shared and ubiquitous access to patient health records, and patients with direct access to nurse case manager, telemonitoring, and televisit services.

2.1. Contribution of our work

The design of ICT tools for chronic disease management should take into account flexibility and extensibility requirements. These requirements are common to all kinds of distributed systems, but they apply especially to this application domain because of its intrinsic characteristics, such as different national and local regulation frameworks, the heterogeneity of health centers and communities involved in care service delivery, and the evolution of the patient's health status over time. In such a complex and changing environment, cost-effective solutions should be conceived as extensible and flexible service platforms. For that reason, while the research activities surveyed above have focused on the conceptual design or implementation of applications that target specific chronic diseases, our approach has not been tied to a specific chronic disease from the very beginning; instead, we have designed and deployed a service platform that provides general-purpose services and can be easily extended and specialized to match the specific requirements of real cases.

The aim of our research has been to design and implement ERMHAN, an extensible service platform supporting care teams in providing long-term assistance services. Extensibility and flexibility of the ERMHAN service platform are mainly achieved by means of a modular and service-oriented design and the adoption of open and standardized data formats and communication protocols.

The platform design is based on the definition of basic functional blocks and their interconnection by means of web service standards. Indeed, web services are recognized as an open and standardized way of achieving interoperation between different software applications running on heterogeneous platforms and/or frameworks [15].

Semantic web technologies have been applied to data representation and processing in order to provide instruments that ease the development of pervasive and personalized care services. More specifically, semantic web technology is used to (a) represent knowledge by means of ontology-based formalisms; (b) reason over knowledge using rule-based and ontology-based engines; and (c) apply reasoning techniques in order to implement personalized healthcare plans.

The prototype we have developed provides basic and general-purpose services for information sharing; distributed, multichannel, and personalized access to patients' records; and personalized real-time monitoring and alarm management on a per-patient basis.
Implemented services include access to complete and updated patient records (including patient health status and descriptions of care provider interventions), even in mobility conditions (at the hospital or at the patient's home) and through different devices (at least personal digital assistants and desktop PCs), as well as notification of patient health status conditions and alarms without overwhelming care operators with too much information (which might have the drawback of disturbing users and, in effect, providing them with “no information”). More effective information delivery could be achieved by routing intervention requests according to the severity of the patient's health status and the expertise required for intervention.

Based on its flexibility, the ERMHAN service platform can be specialized to address the needs of specific patient cases. To achieve this objective, the basic services provided by the ERMHAN platform can be integrated with those offered by other systems, such as assistive technologies, home automation systems, and prognosis and diagnosis systems for specific chronic diseases [16]. In the following sections, we provide further details about the modeling approach and system architecture of the ERMHAN platform.
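To make the alarm-routing idea above concrete, the following is a minimal, illustrative Python sketch (not part of the ERMHAN implementation) of routing an intervention request according to a hypothetical alarm severity and the expertise required for intervention; the severity labels, caregiver roles, and routing policy are assumptions introduced here purely for illustration.

# Illustrative sketch only (not ERMHAN code): route an intervention request
# to care-network members according to alarm severity and required expertise.
from dataclasses import dataclass
from typing import List

@dataclass
class Caregiver:
    name: str
    expertise: str                  # e.g. "nurse", "physician", "relative" (assumed roles)
    channels: List[str]             # e.g. ["sms", "web"] (assumed notification channels)

# Hypothetical routing policy: which expertise is required at each severity level.
ROUTING_POLICY = {
    "low":    ["relative"],
    "medium": ["nurse"],
    "high":   ["nurse", "physician"],
}

def route_alarm(severity: str, team: List[Caregiver]) -> List[Caregiver]:
    """Return the caregivers that should be notified for a given alarm severity."""
    required = set(ROUTING_POLICY.get(severity, []))
    return [c for c in team if c.expertise in required]

if __name__ == "__main__":
    team = [
        Caregiver("Anna", "relative", ["sms"]),
        Caregiver("Marco", "nurse", ["sms", "web"]),
        Caregiver("Dr. Rossi", "physician", ["web"]),
    ]
    for caregiver in route_alarm("high", team):
        print(f"notify {caregiver.name} via {caregiver.channels[0]}")

In a platform such as ERMHAN, a policy of this kind would be derived from ontology-based reasoning over the patient's context and care plan rather than from a hard-coded table as in this toy example.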
[ "15289634", "16871726", "18046939" ]
[ { "pmid": "15289634", "title": "Improving the quality of health care for chronic conditions.", "abstract": "Chronic conditions are increasingly the primary concern of health care systems throughout the world. In response to this challenge, the World Health Organization has joined with the MacColl Institute for Healthcare Innovation to adapt the Chronic Care Model (CCM) from a global perspective. The resultant effort is the Innovative Care for Chronic Conditions (ICCC) framework which expands community and policy aspects of improving health care for chronic conditions and includes components at the micro (patient and family), meso (health care organisation and community), and macro (policy) levels. The framework provides a flexible but comprehensive base on which to build or redesign health systems in accordance with local resources and demands." }, { "pmid": "16871726", "title": "Telemedicine experience for chronic care in COPD.", "abstract": "Information and telecommunication technologies are called to play a major role in the changes that healthcare systems have to face to cope with chronic disease. This paper reports a telemedicine experience for the home care of chronic patients suffering from chronic obstructive pulmonary disease (COPD) and an integrated system designed to carry out this experience. To determine the impact on health, the chronic care telemedicine system was used during one year (2002) with 157 COPD patients in a clinical experiment; endpoints were readmissions and mortality. Patients in the intervention group were followed up at their homes and could contact the care team at any time through the call center. The care team shared a unique electronic chronic patient record (ECPR) accessible through the web-based patient management module or the home visit units. Results suggest that integrated home telemedicine services can support health professionals caring for patients with chronic disease, and improve their health. We have found that simple telemedicine services (ubiquitous access to ECPR, ECPR shared by care team, accessibility to case manager, problem reporting integrated in ECPR) can increase the number of patients that were not readmitted (51% intervention, 33% control), are acceptable to professionals, and involve low installation and exploitation costs. Further research is needed to determine the role of telemonitoring and televisit services for this kind of patients." }, { "pmid": "18046939", "title": "Delivering a lifelong integrated electronic health record based on a service oriented architecture.", "abstract": "Efficient access to a citizen's Integrated Electronic Health Record (I-EHR) is considered to be the cornerstone for the support of continuity of care, the reduction of avoidable mistakes, and the provision of tools and methods to support evidence-based medicine. For the past several years, a number of applications and services (including a lifelong I-EHR) have been installed, and enterprise and regional infrastructure has been developed, in HYGEIAnet, the Regional Health Information Network (RHIN) of the island of Crete, Greece. Through this paper, the technological effort toward the delivery of a lifelong I-EHR by means of World Wide Web Consortium (W3C) technologies, on top of a service-oriented architecture that reuses already existing middleware components is presented and critical issues are discussed. 
Certain design and development decisions are exposed and explained, laying this way the ground for coordinated, dynamic navigation to personalized healthcare delivery." } ]
BMC Medical Informatics and Decision Making
18652655
PMC2526997
10.1186/1472-6947-8-32
Automated de-identification of free-text medical records
Background: Text-based patient medical records are a vital resource in medical research. In order to preserve patient confidentiality, however, the U.S. Health Insurance Portability and Accountability Act (HIPAA) requires that protected health information (PHI) be removed from medical records before they can be disseminated. Manual de-identification of large medical record databases is prohibitively expensive, time-consuming and prone to error, necessitating automatic methods for large-scale, automated de-identification.

Methods: We describe an automated Perl-based de-identification software package that is generally usable on most free-text medical records, e.g., nursing notes, discharge summaries, X-ray reports, etc. The software uses lexical look-up tables, regular expressions, and simple heuristics to locate both HIPAA PHI, and an extended PHI set that includes doctors' names and years of dates. To develop the de-identification approach, we assembled a gold standard corpus of re-identified nursing notes with real PHI replaced by realistic surrogate information. This corpus consists of 2,434 nursing notes containing 334,000 words and a total of 1,779 instances of PHI taken from 163 randomly selected patient records. This gold standard corpus was used to refine the algorithm and measure its sensitivity. To test the algorithm on data not used in its development, we constructed a second test corpus of 1,836 nursing notes containing 296,400 words. The algorithm's false negative rate was evaluated using this test corpus.

Results: Performance evaluation of the de-identification software on the development corpus yielded an overall recall of 0.967, precision value of 0.749, and fallout value of approximately 0.002. On the test corpus, a total of 90 instances of false negatives were found, or 27 per 100,000 word count, with an estimated recall of 0.943. Only one full date and one age over 89 were missed. No patient names were missed in either corpus.

Conclusion: We have developed a pattern-matching de-identification system based on dictionary look-ups, regular expressions, and heuristics. Evaluation based on two different sets of nursing notes collected from a U.S. hospital suggests that, in terms of recall, the software out-performs a single human de-identifier (0.81) and performs at least as well as a consensus of two human de-identifiers (0.94). The system is currently tuned to de-identify PHI in nursing notes and discharge summaries but is sufficiently generalized and can be customized to handle text files of any format. Although the accuracy of the algorithm is high, it is probably insufficient to be used to publicly disseminate medical data. The open-source de-identification software and the gold standard re-identified corpus of medical records have therefore been made available to researchers via the PhysioNet website to encourage improvements in the algorithm.
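As a rough illustration of the general approach described in the abstract above (lexical look-up tables, regular expressions, and simple heuristics), the following minimal Python sketch detects a few PHI categories and replaces them with category tags. The patterns, the tiny name dictionary, and the tag format are illustrative placeholders only; they do not reproduce the actual Perl implementation distributed via PhysioNet.

import re

# Illustrative only: a tiny name dictionary standing in for full lexical look-up tables.
KNOWN_FIRST_NAMES = {"john", "mary"}
KNOWN_LAST_NAMES = {"smith", "jones"}

# Illustrative regular expressions for a few PHI categories (dates, phone numbers, MRNs).
PHI_PATTERNS = {
    "date":  re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "mrn":   re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def deidentify(text: str) -> str:
    """Replace PHI found by regexes and dictionary look-ups with category tags."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[**{label}**]", text)
    # Simple heuristic: any token found in the name dictionaries is treated as a name.
    tokens = []
    for token in text.split():
        if token.strip(".,;:").lower() in KNOWN_FIRST_NAMES | KNOWN_LAST_NAMES:
            tokens.append("[**name**]")
        else:
            tokens.append(token)
    return " ".join(tokens)

if __name__ == "__main__":
    note = "Pt John Smith seen on 03/12/2004, call 617-555-0123, MRN: 1234567."
    print(deidentify(note))

A real system would of course need much larger dictionaries, many more patterns (addresses, ages over 89, provider names, and so on), and the kind of heuristics and gold-standard evaluation described in the Methods above.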
Related work

There are relatively few published reports concerning the de-identification of unstructured medical free text, and specific algorithms are not usually made publicly available. Gupta et al. [5] devised a de-identification engine for pathology reports that uses a complex combination of dictionaries and text-analysis algorithms. Their approach locates useful (non-PHI) phrases and replaces the rest of the text with de-identified tags. For the identification of relevant medical phrases, the algorithm uses the Unified Medical Language System (UMLS) meta-thesaurus [24], a National Institutes of Health (NIH)-sponsored collection of medical vocabularies, some of which are considered standard for particular applications.

Sweeney [6] developed the Scrub system, which employs templates and specialized knowledge of the context to replace PHI in medical records. The system attempts to identify PHI using "common-sense" templates and look-up tables of example PHI. Sweeney's system also uses probability tables for template matching, detectors for medical terms to reduce false positives, tools to identify words that sound like other words (to account for spelling variations), and detectors for recurring terms. Sweeney's Scrub system identified 99–100% of the PHI in its author's test set. The false positive rate and details of Sweeney's test corpus are not available.

Sweeney later developed the Datafly system [7], which uses user-specific profiles, including a list of preferred fields to be scrubbed and the external information libraries that are permitted. The Datafly system is licensed to Privacert, Inc. [11], and specifics are therefore not publicly available.

Ruch et al. [12] developed a method that uses sophisticated natural language processing techniques to tag words with appropriate parts of speech and a specialized semantic category known as MEDTAG. The method then uses contextual rules based on the tags assigned to the text, using up to five-word groups and some "long-distance" (non-local) rules implemented as finite state machines. The algorithm then attempts to identify PHI in a limited region around words marked as 'Identity Markers'. The technique was developed for post-operative reports, laboratory and test results, and discharge summaries, written primarily in French, though with some documents in German and English. The system found 98–99% of all personally-identifying information in their test corpus.

Taira et al. [13] created an algorithm to identify patient name references in clinician correspondence, discharge summaries, clinical notes, and operative/surgical reports from pediatric urology records. Their algorithm uses a lexicon with over 64,000 first and last names and a set of semantic constraints to assign the probability that a given word is a name. After scanning each sentence and classifying it according to the type of logical relation it contains, the algorithm extracts the potential name based on that logical relation. This technique was shown to have a recall of 99.2%, but it is limited to patient names and is not applicable to other categories of PHI.

Thomas et al. [14] developed a method that uses a lexicon of 1.8 million proper names to identify potential names and a list of "Clinical and Common Usage" words from the UMLS. The Ispell spell-checker dictionary [25] was also employed to reduce false positives. If a word is on both lists, a few simple context rules are used to classify the word.
This method was tested on pathology reports and identified 98.7% of all names in their test corpus.

Berman [15] developed a technique for removing most PHI from pathology reports by excluding all terms that do not appear in the UMLS. Berman's algorithm parses sentences into coded concepts from the UMLS and stop-words, which are high-frequency structural components of sentences, such as prepositions and common adjectives. All other words, including names and other personally identifiable information, are replaced by blocking symbols, so that the output is totally stripped of non-medical and extraneous information. As Berman points out, the limitation of the concept-match scrubber is that it blocks too much, so the output is full of asterisks (the blocking symbol) and the text is hard to read. Since publishing the concept-match scrubber, Berman has published a new scrubber algorithm based upon doublet (word pair) matching [26]. Berman's new approach parses through a text, matching every possible doublet (word pair) in the text against a list of approved, identifier-free doublets (about 200,000). The doublet scrubber preserves, in situ, those text doublets that match one of the doublets on the "safe" list. Everything else in the text is blocked (with an asterisk). This produces an output that is much more readable than the concept-match output and which is also fully de-identified. Although a significant improvement, much useful text is still blocked.

A de-identification system similar to ours, developed by Beckwith [16], was tested on a pathology report corpus containing 3,499 PHI identifiers and was found to remove identifying words with a sensitivity of 98.3%. The 19 HIPAA-specified identifiers that were missed by Beckwith's system were mainly consult accession numbers and misspelled names. Unfortunately, the system does not perform as well on nursing progress notes and discharge summaries.

Miller et al. [17] developed a de-identification system for scrubbing proper names in a free-text database of indexed surgical pathology reports at the Johns Hopkins Hospital. Proper names were identified from available lists of persons, places and institutions, or by their proximity to keywords, such as "Dr." or "hospital." The identified proper names were subsequently replaced by suitable tokens.

Sweeney [18] examined four de-identification algorithms: the Scrub system, which locates PHI in letters and notes; the Datafly II system, which generalizes and suppresses values in field-structured data sets; Statistics Netherlands' μ-Argus system; and the k-similar algorithm. The Scrub system comprises a system of parallel detectors, each detector recognizing a specific type of explicit identifier in a field-structured database. The Scrub system accurately located 98–100% of all explicit identifiers, but the removal of only explicit identifiers did not ensure anonymity. The Datafly II system de-identifies entity-specific data in field-structured databases. The final outputs of the Datafly II system are anonymous, yet medically useful. In the μ-Argus system, the data provider assigns to each attribute the amount of protection necessary. The μ-Argus system does not ensure an anonymous database, but results in a lower frequency of removal of useful information than the Datafly II system. The k-similar algorithm divides the text into groups of words so that each group consists of k or more of the most similar tuples (a finite ordered list of words).
The similarity of tuples is based on a minimal distance measure derived from anonymity and quality metrics. Sweeney concluded that the Datafly II system can remove too many useful phrases, that the Scrub and μ-Argus systems can fail to provide adequate protection, and that the k-similar system provides a good trade-off between these two extremes, providing "sufficient" anonymization and "minimal" loss of useful medical information. (It should be noted, however, that there is no generally accepted definition of 'sufficient' and 'minimal' for this application.)

Sibanda et al. [19] developed a semantic category recognition approach for document understanding that analyzes the syntax of documents. More specifically, a statistical semantic category recognizer is trained with syntactic and lexical contextual clues and ontological information from the UMLS. The semantic category recognizer identifies eight semantic categories in medical discharge summaries, e.g., test results and findings. The results confirm that syntax is important in semantic category recognition, and Sibanda et al. reported PHI classification recall and precision measures of above 90% on their test corpus.

Sibanda also developed a software package for de-identifying medical discharge summaries based on statistical models that employ local lexical and syntactic context [20]. Each word in a sentence was considered in isolation, and a Support Vector Machine with a linear kernel, trained on human-annotated data, was used to determine whether a given word was PHI. The de-identification software identified at least 92.8% of PHI and misclassified at most 1.1% of non-PHI in four test corpora.

A very recent development has been a competition run at the first Workshop on Challenges in Natural Language Processing for Clinical Data to de-identify discharge summary free-text data [27]. Excellent performance was achieved by combining heuristics and statistical methods, as done by György et al. [28] and Wellner et al. [29], with recall and precision in the ranges of 96%–98% and 98%–99%, respectively. Their algorithms require large labelled training and test sets, however. Furthermore, their systems were trained on relatively well-structured data, such as discharge summaries, and it is unclear how their approaches would perform on nursing progress notes, which are significantly less structured and grammatical than discharge summaries. In contrast, the system presented here is evaluated using nursing notes, which are likely to be more challenging to de-identify. Evaluation of our system on discharge summaries similar to those used in [27-29], as described in this article, will allow a more meaningful comparison between our approach and others, especially if a common corpus can be used to evaluate multiple algorithms.
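To illustrate the doublet-matching idea attributed to Berman [26] above, the following minimal Python sketch keeps only words that participate in an adjacent word pair found on an approved, identifier-free doublet list and blocks everything else with asterisks; the toy "safe" list here stands in for the roughly 200,000-entry list used in the original work.

# Toy illustration of doublet (word-pair) scrubbing: a word is kept only if it takes part
# in at least one adjacent word pair found on an approved, identifier-free doublet list.
SAFE_DOUBLETS = {
    ("chest", "pain"), ("pain", "on"), ("on", "exertion"),
    ("no", "acute"), ("acute", "distress"),
}

def doublet_scrub(text: str, blocker: str = "*") -> str:
    words = text.lower().split()
    keep = [False] * len(words)
    for i in range(len(words) - 1):
        if (words[i], words[i + 1]) in SAFE_DOUBLETS:
            keep[i] = keep[i + 1] = True
    return " ".join(w if k else blocker for w, k in zip(words, keep))

if __name__ == "__main__":
    print(doublet_scrub("Mr Jones reports chest pain on exertion no acute distress"))
    # -> "* * * chest pain on exertion no acute distress"

Blocking by default is what makes this family of approaches conservative: any word that cannot be vouched for by a safe doublet is removed, which protects identifiers but, as noted above, also blocks much useful text.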
[ "14686455", "12180470", "14983930", "8947683", "8947626", "10851218", "12463930", "12741890", "16515714", "17238434", "15361017", "17600094", "17823086", "17600096" ]
[ { "pmid": "14686455", "title": "MIMIC II: a massive temporal ICU patient database to support research in intelligent patient monitoring.", "abstract": "Development and evaluation of Intensive Care Unit (ICU) decision-support systems would be greatly facilitated by the availability of a large-scale ICU patient database. Following our previous efforts with the MIMIC (Multi-parameter Intelligent Monitoring for Intensive Care) Database, we have leveraged advances in networking and storage technologies to develop a far more massive temporal database, MIMIC II. MIMIC II is an ongoing effort: data is continuously and prospectively archived from all ICU patients in our hospital. MIMIC II now consists of over 800 ICU patient records including over 120 gigabytes of data and is growing. A customized archiving system was used to store continuously up to four waveforms and 30 different parameters from ICU patient monitors. An integrated user-friendly relational database was developed for browsing of patients' clinical information (lab results, fluid balance, medications, nurses' progress notes). Based upon its unprecedented size and scope, MIMIC II will prove to be an important resource for intelligent patient monitoring research, and will support efforts in medical data mining and knowledge-discovery." }, { "pmid": "12180470", "title": "Standards for privacy of individually identifiable health information. Final rule.", "abstract": "The Department of Health and Human Services (\"HHS'' or \"Department'') modifies certain standards in the Rule entitled \"Standards for Privacy of Individually Identifiable Health Information'' (\"Privacy Rule''). The Privacy Rule implements the privacy requirements of the Administrative Simplification subtitle of the Health Insurance Portability and Accountability Act of 1996. The purpose of these modifications is to maintain strong protections for the privacy of individually identifiable health information while clarifying certain of the Privacy Rule's provisions, addressing the unintended negative effects of the Privacy Rule on health care quality or access to health care, and relieving unintended administrative burdens created by the Privacy Rule." }, { "pmid": "14983930", "title": "Evaluation of a deidentification (De-Id) software engine to share pathology reports and clinical documents for research.", "abstract": "We evaluated a comprehensive deidentification engine at the University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA, that uses a complex set of rules, dictionaries, pattern-matching algorithms, and the Unified Medical Language System to identify and replace identifying text in clinical reports while preserving medical information for sharing in research. In our initial data set of 967 surgical pathology reports, the software did not suppress outside (103), UPMC (47), and non-UPMC (56) accession numbers; dates (7); names (9) or initials (25) of case pathologists; or hospital or laboratory names (46). In 150 reports, some clinical information was suppressed inadvertently (overmarking). The engine retained eponymic patient names, eg, Barrett and Gleason. In the second evaluation (1,000 reports), the software did not suppress outside (90) or UPMC (6) accession numbers or names (4) or initials (2) of case pathologists. In the third evaluation, the software removed names of patients, hospitals (297/300), pathologists (297/300), transcriptionists, residents and physicians, dates of procedures, and accession numbers (298/300). 
By the end of the evaluation, the system was reliably and specifically removing safe-harbor identifiers and producing highly readable deidentified text without removing important clinical information. Collaboration between pathology domain experts and system developers and continuous quality assurance are needed to optimize ongoing deidentification processes." }, { "pmid": "8947683", "title": "Replacing personally-identifying information in medical records, the Scrub system.", "abstract": "We define a new approach to locating and replacing personally-identifying information in medical records that extends beyond straight search-and-replace procedures, and we provide techniques for minimizing risk to patient confidentiality. The straightforward approach of global search and replace properly located no more than 30-60% of all personally-identifying information that appeared explicitly in our sample database. On the other hand, our Scrub system found 99-100% of these references. Scrub uses detection algorithms that employ templates and specialized knowledge of what constitutes a name, address, phone number and so forth." }, { "pmid": "8947626", "title": "Evaluation of a continuing professional education opportunity via an on-line service.", "abstract": "Registered Dietitians (RDs) who are members of the on-line service America Online have the opportunity to participate in regular journal club sessions for continuing education credits. A survey conducted after the first six months found that most participants found the journal club to be a convenient way to network with RDs from across the country and earn continuing education credits which compared favorably with traditional journal club meetings." }, { "pmid": "10851218", "title": "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals.", "abstract": "The newly inaugurated Research Resource for Complex Physiologic Signals, which was created under the auspices of the National Center for Research Resources of the National Institutes of Health, is intended to stimulate current research and new investigations in the study of cardiovascular and other complex biomedical signals. The resource has 3 interdependent components. PhysioBank is a large and growing archive of well-characterized digital recordings of physiological signals and related data for use by the biomedical research community. It currently includes databases of multiparameter cardiopulmonary, neural, and other biomedical signals from healthy subjects and from patients with a variety of conditions with major public health implications, including life-threatening arrhythmias, congestive heart failure, sleep apnea, neurological disorders, and aging. PhysioToolkit is a library of open-source software for physiological signal processing and analysis, the detection of physiologically significant events using both classic techniques and novel methods based on statistical physics and nonlinear dynamics, the interactive display and characterization of signals, the creation of new databases, the simulation of physiological and other signals, the quantitative evaluation and comparison of analysis methods, and the analysis of nonstationary processes. PhysioNet is an on-line forum for the dissemination and exchange of recorded biomedical signals and open-source software for analyzing them. It provides facilities for the cooperative analysis of data and the evaluation of proposed new algorithms. 
In addition to providing free electronic access to PhysioBank data and PhysioToolkit software via the World Wide Web (http://www.physionet. org), PhysioNet offers services and training via on-line tutorials to assist users with varying levels of expertise." }, { "pmid": "12463930", "title": "A successful technique for removing names in pathology reports using an augmented search and replace method.", "abstract": "The ability to access large amounts of de-identified clinical data would facilitate epidemiologic and retrospective research. Previously described de-identification methods require knowledge of natural language processing or have not been made available to the public. We take advantage of the fact that the vast majority of proper names in pathology reports occur in pairs. In rare cases where one proper name is by itself, it is preceded or followed by an affix that identifies it as a proper name (Mrs., Dr., PhD). We created a tool based on this observation using substitution methods that was easy to implement and was largely based on publicly available data sources. We compiled a Clinical and Common Usage Word (CCUW) list as well as a fairly comprehensive proper name list. Despite the large overlap between these two lists, we were able to refine our methods to achieve accuracy similar to previous attempts at de-identification. Our method found 98.7% of 231 proper names in the narrative sections of pathology reports. Three single proper names were missed out of 1001 pathology reports (0.3%, no first name/last name pairs). It is unlikely that identification could be implied from this information. We will continue to refine our methods, specifically working to improve the quality of our CCUW and proper name lists to obtain higher levels of accuracy." }, { "pmid": "12741890", "title": "Concept-match medical data scrubbing. How pathology text can be used in research.", "abstract": "CONTEXT\nIn the normal course of activity, pathologists create and archive immense data sets of scientifically valuable information. Researchers need pathology-based data sets, annotated with clinical information and linked to archived tissues, to discover and validate new diagnostic tests and therapies. Pathology records can be used for research purposes (without obtaining informed patient consent for each use of each record), provided the data are rendered harmless. Large data sets can be made harmless through 3 computational steps: (1) deidentification, the removal or modification of data fields that can be used to identify a patient (name, social security number, etc); (2) rendering the data ambiguous, ensuring that every data record in a public data set has a nonunique set of characterizing data; and (3) data scrubbing, the removal or transformation of words in free text that can be used to identify persons or that contain information that is incriminating or otherwise private. This article addresses the problem of data scrubbing.\n\n\nOBJECTIVE\nTo design and implement a general algorithm that scrubs pathology free text, removing all identifying or private information.\n\n\nMETHODS\nThe Concept-Match algorithm steps through confidential text. When a medical term matching a standard nomenclature term is encountered, the term is replaced by a nomenclature code and a synonym for the original term. When a high-frequency \"stop\" word, such as a, an, the, or for, is encountered, it is left in place. When any other word is encountered, it is blocked and replaced by asterisks. This produces a scrubbed text. 
An open-source implementation of the algorithm is freely available.\n\n\nRESULTS\nThe Concept-Match scrub method transformed pathology free text into scrubbed output that preserved the sense of the original sentences, while it blocked terms that did not match terms found in the Unified Medical Language System (UMLS). The scrubbed product is safe, in the restricted sense that the output retains only standard medical terms. The software implementation scrubbed more than half a million surgical pathology report phrases in less than an hour.\n\n\nCONCLUSIONS\nComputerized scrubbing can render the textual portion of a pathology report harmless for research purposes. Scrubbing and deidentification methods allow pathologists to create and use large pathology databases to conduct medical research." }, { "pmid": "16515714", "title": "Development and evaluation of an open source software tool for deidentification of pathology reports.", "abstract": "BACKGROUND\nElectronic medical records, including pathology reports, are often used for research purposes. Currently, there are few programs freely available to remove identifiers while leaving the remainder of the pathology report text intact. Our goal was to produce an open source, Health Insurance Portability and Accountability Act (HIPAA) compliant, deidentification tool tailored for pathology reports. We designed a three-step process for removing potential identifiers. The first step is to look for identifiers known to be associated with the patient, such as name, medical record number, pathology accession number, etc. Next, a series of pattern matches look for predictable patterns likely to represent identifying data; such as dates, accession numbers and addresses as well as patient, institution and physician names. Finally, individual words are compared with a database of proper names and geographic locations. Pathology reports from three institutions were used to design and test the algorithms. The software was improved iteratively on training sets until it exhibited good performance. 1800 new pathology reports were then processed. Each report was reviewed manually before and after deidentification to catalog all identifiers and note those that were not removed.\n\n\nRESULTS\n1254 (69.7 %) of 1800 pathology reports contained identifiers in the body of the report. 3439 (98.3%) of 3499 unique identifiers in the test set were removed. Only 19 HIPAA-specified identifiers (mainly consult accession numbers and misspelled names) were missed. Of 41 non-HIPAA identifiers missed, the majority were partial institutional addresses and ages. Outside consultation case reports typically contain numerous identifiers and were the most challenging to deidentify comprehensively. There was variation in performance among reports from the three institutions, highlighting the need for site-specific customization, which is easily accomplished with our tool.\n\n\nCONCLUSION\nWe have demonstrated that it is possible to create an open-source deidentification program which performs well on free-text pathology reports." }, { "pmid": "17238434", "title": "Syntactically-informed semantic category recognition in discharge summaries.", "abstract": "Semantic category recognition (SCR) contributes to document understanding. Most approaches to SCR fail to make use of syntax. We hypothesize that syntax, if represented appropriately, can improve SCR. 
We present a statistical semantic category (SC) recognizer trained with syntactic and lexical contextual clues, as well as ontological information from UMLS, to identify eight semantic categories in discharge summaries. Some of our categories, e.g., test results and findings, include complex entries that span multiple phrases. We achieve classification F-measures above 90% for most categories and show that syntactic context is important for SCR." }, { "pmid": "15361017", "title": "A submission model for use in the indexing, searching, and retrieval of distributed pathology case and tissue specimens.", "abstract": "This paper describes the Shared Pathology Informatics Network (SPIN) submission model for uploading de-identified XML annotations of pathology case and specimen information to a distributed peer-to-peer network architecture. SPIN use cases, architecture, and technologies, as well as pathology information design is described. With the architecture currently in use by six member institutions, SPIN appears to be a viable, secure methodology to submit pathology information for query and specimen retrieval by investigators" }, { "pmid": "17600094", "title": "Evaluating the state-of-the-art in automatic de-identification.", "abstract": "To facilitate and survey studies in automatic de-identification, as a part of the i2b2 (Informatics for Integrating Biology to the Bedside) project, authors organized a Natural Language Processing (NLP) challenge on automatically removing private health information (PHI) from medical discharge records. This manuscript provides an overview of this de-identification challenge, describes the data and the annotation process, explains the evaluation metrics, discusses the nature of the systems that addressed the challenge, analyzes the results of received system runs, and identifies directions for future research. The de-indentification challenge data consisted of discharge summaries drawn from the Partners Healthcare system. Authors prepared this data for the challenge by replacing authentic PHI with synthesized surrogates. To focus the challenge on non-dictionary-based de-identification methods, the data was enriched with out-of-vocabulary PHI surrogates, i.e., made up names. The data also included some PHI surrogates that were ambiguous with medical non-PHI terms. A total of seven teams participated in the challenge. Each team submitted up to three system runs, for a total of sixteen submissions. The authors used precision, recall, and F-measure to evaluate the submitted system runs based on their token-level and instance-level performance on the ground truth. The systems with the best performance scored above 98% in F-measure for all categories of PHI. Most out-of-vocabulary PHI could be identified accurately. However, identifying ambiguous PHI proved challenging. The performance of systems on the test data set is encouraging. Future evaluations of these systems will involve larger data sets from more heterogeneous sources." }, { "pmid": "17823086", "title": "State-of-the-art anonymization of medical records using an iterative machine learning framework.", "abstract": "OBJECTIVE\nThe anonymization of medical records is of great importance in the human life sciences because a de-identified text can be made publicly available for non-hospital researchers as well, to facilitate research on human diseases. 
Here the authors have developed a de-identification model that can successfully remove personal health information (PHI) from discharge records to make them conform to the guidelines of the Health Information Portability and Accountability Act.\n\n\nDESIGN\nWe introduce here a novel, machine learning-based iterative Named Entity Recognition approach intended for use on semi-structured documents like discharge records. Our method identifies PHI in several steps. First, it labels all entities whose tags can be inferred from the structure of the text and it then utilizes this information to find further PHI phrases in the flow text parts of the document.\n\n\nMEASUREMENTS\nFollowing the standard evaluation method of the first Workshop on Challenges in Natural Language Processing for Clinical Data, we used token-level Precision, Recall and F(beta=1) measure metrics for evaluation.\n\n\nRESULTS\nOur system achieved outstanding accuracy on the standard evaluation dataset of the de-identification challenge, with an F measure of 99.7534% for the best submitted model.\n\n\nCONCLUSION\nWe can say that our system is competitive with the current state-of-the-art solutions, while we describe here several techniques that can be beneficial in other tasks that need to handle structured documents such as clinical records." }, { "pmid": "17600096", "title": "Rapidly retargetable approaches to de-identification in medical records.", "abstract": "OBJECTIVE\nThis paper describes a successful approach to de-identification that was developed to participate in a recent AMIA-sponsored challenge evaluation.\n\n\nMETHOD\nOur approach focused on rapid adaptation of existing toolkits for named entity recognition using two existing toolkits, Carafe and LingPipe.\n\n\nRESULTS\nThe \"out of the box\" Carafe system achieved a very good score (phrase F-measure of 0.9664) with only four hours of work to adapt it to the de-identification task. With further tuning, we were able to reduce the token-level error term by over 36% through task-specific feature engineering and the introduction of a lexicon, achieving a phrase F-measure of 0.9736.\n\n\nCONCLUSIONS\nWe were able to achieve good performance on the de-identification task by the rapid retargeting of existing toolkits. For the Carafe system, we developed a method for tuning the balance of recall vs. precision, as well as a confidence score that correlated well with the measured F-score." } ]
PLoS Computational Biology
18846203
PMC2543108
10.1371/journal.pcbi.1000180
A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback
Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They also can learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence it also provides a possible functional explanation for trial-to-trial variability, which is characteristic for cortical networks of neurons but has no analogue in currently existing artificial computing systems. In addition our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics.
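For orientation before the related-work discussion below, a generic eligibility-trace form of reward-modulated STDP — in the spirit of the rule of [12] that is analyzed in this article — can be sketched as follows. The notation (eligibility trace c_ij, reward signal d(t), STDP window W, trace time constant τ_c) is illustrative and not copied from the article itself.

% Illustrative sketch of reward-modulated STDP with an eligibility trace:
% pre/post spike pairings are first accumulated in c_{ij} and are converted
% into actual weight changes only while the reward signal d(t) is nonzero.
\begin{align}
  \dot{c}_{ij}(t) &= -\frac{c_{ij}(t)}{\tau_c}
      + W\!\left(t_{\mathrm{post}} - t_{\mathrm{pre}}\right)\,
        \delta\!\bigl(t - \max(t_{\mathrm{pre}}, t_{\mathrm{post}})\bigr),\\
  \dot{w}_{ij}(t) &= d(t)\, c_{ij}(t).
\end{align}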
Related Work
The theoretical analysis of this model is directly applicable to the learning rule considered in [12]. There, the network behavior of reward-modulated STDP was also studied in some situations different from the ones in this article. The computer simulations of [12] apparently operate in a different dynamic regime, where LTD dominates LTP in the STDP rule and most weights (except those that are actively increased through reward-modulated STDP) have values close to 0 (see Figure 1b and 1d in [12], and compare with Figure 5 in this article). This setup is likely to require, for successful learning, a larger dominance of pre-before-post over post-before-pre pairs than the one shown in Figure 4E. Furthermore, whereas a very low spontaneous firing rate of 1 Hz was required in [12], computer simulation 1 shows that reinforcement learning is also feasible at spontaneous firing rates that correspond to those reported in [17] (the preceding theoretical analysis had already suggested that the success of the model does not depend on particularly low firing rates). The articles [15] and [13] investigate variations of reward-modulated STDP rules that do not employ STDP learning curves based on experimental data, but rather modified curves that arise in the context of a very interesting top-down theoretical approach (distributed reinforcement learning [14]). The authors of [16] arrive at similar learning rules in a supervised scenario, which can be reinterpreted in the context of reinforcement learning. We expect that a theory similar to the one presented in this article for the more commonly discussed version of STDP can also be applied to their modified STDP rules, thereby making it possible to predict under which conditions their learning rules will succeed. Another reward-based learning rule for spiking neurons was recently presented in [49]. This rule exploits correlations of a reward signal with noisy perturbations of the neuronal membrane conductance in order to optimize some objective function. One crucial assumption of this approach is that the synaptic plasticity mechanism “knows” which contributions to the membrane potential arise from synaptic inputs, and which contributions are due to internal noise. Such explicit knowledge of the noise signal is not needed in the reward-modulated STDP rule of [12], which we have considered in this article. The price one has to pay for this potential gain in biological realism is a reduced generality of the learning capabilities. While the learning rule in [49] approximates gradient ascent on the objective function, this cannot be stated for reward-modulated STDP at present. Timing-based pattern discrimination with a spiking neuron, as discussed in the section “Pattern discrimination with reward-modulated STDP” of this article, was recently tackled in [50]. The authors proposed the tempotron learning rule, which increases the peak membrane voltage for one class of input patterns (if no spike occurred in response to the input pattern) while decreasing the peak membrane voltage for another class of input patterns (if a spike occurred in response to the pattern). The main difference between this learning rule and reward-modulated STDP is that the tempotron learning rule is sensitive to the peak membrane voltage, whereas reward-modulated STDP is sensitive to local fluctuations of the membrane voltage.
Since the time of the maximal membrane voltage has to be determined for each pattern by the synaptic plasticity mechanism, the basic tempotron rule is perhaps not biologically realistic. Therefore, an approximate and potentially more biologically realistic learning rule was proposed in [50], where plasticity following error trials is induced at synapse i only if the voltage within the postsynaptic integration time after its activation exceeds a plasticity threshold κ. One potential problem of this rule is the plasticity threshold κ itself, since a good choice of this parameter strongly depends on the mean membrane voltage after input spikes. This problem is circumvented by reward-modulated STDP, which instead considers the local change in the membrane voltage. Further work is needed to compare the advantages and disadvantages of these different approaches.
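To make the preceding comparison more concrete, below is a minimal, self-contained Python sketch of a reward-modulated STDP rule, discretizing the eligibility-trace form sketched after the abstract above. It is not the authors' implementation; all constants, the all-to-all pairing scheme based on low-pass filtered spike trains, and the Poisson input statistics are illustrative assumptions.

# Minimal sketch (illustrative, not the authors' code) of reward-modulated STDP:
# pre/post spike pairings write STDP-shaped increments into a per-synapse
# eligibility trace c, and weights change only while a reward signal d(t) is on.
import numpy as np

rng = np.random.default_rng(0)

dt = 1.0                           # simulation time step (ms)
tau_c = 1000.0                     # eligibility-trace time constant (ms)
tau_plus, tau_minus = 20.0, 20.0   # STDP pairing-trace time constants (ms)
A_plus, A_minus = 0.010, 0.012     # STDP amplitudes (LTD slightly dominant)

n_syn = 100
w = rng.uniform(0.0, 1.0, n_syn)   # synaptic weights
c = np.zeros(n_syn)                # eligibility traces
x_pre = np.zeros(n_syn)            # low-pass filtered presynaptic spike trains
x_post = 0.0                       # low-pass filtered postsynaptic spike train


def step(pre_spikes, post_spike, reward):
    """Advance traces and weights by one time step of length dt."""
    global w, c, x_pre, x_post
    # exponential decay of all traces
    x_pre *= 1.0 - dt / tau_plus
    x_post *= 1.0 - dt / tau_minus
    c *= 1.0 - dt / tau_c
    # STDP contributions go into the eligibility trace, not directly into w
    c -= A_minus * x_post * pre_spikes        # post-before-pre pairings (LTD)
    if post_spike:
        c += A_plus * x_pre                   # pre-before-post pairings (LTP)
    # register the new spikes in the pairing traces
    x_pre += pre_spikes.astype(float)
    if post_spike:
        x_post += 1.0
    # the (possibly delayed) reward gates the actual weight change
    w = np.clip(w + reward * c * dt, 0.0, 1.0)


# toy run: ~5 Hz Poisson pre- and postsynaptic spiking, reward pulse at t = 1 s
for t in range(3000):
    pre = rng.random(n_syn) < 5e-3 * dt       # boolean spike vector
    post = rng.random() < 5e-3 * dt
    d = 1.0 if 1000 <= t < 1050 else 0.0
    step(pre, post, d)

print("mean weight after the reward pulse:", w.mean())

In this toy run the reward pulse arrives one second into the simulation, so only pairings whose eligibility traces have not yet decayed contribute to the weight change; this is the sense in which such rules can bridge the delay between spiking activity and a later reward.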
[ "11127835", "17287502", "11252764", "12031406", "17400301", "11544526", "12371508", "11452310", "10676963", "12165477", "15242655", "17220510", "17571943", "16764506", "4196269", "4974291", "17234689", "11165912", "17444756", "9852584", "10966623", "11705408", "11744242", "17928565", "10195145", "7770778", "14762148", "12433288", "16310350", "15483600", "9560274", "12022505", "10634775", "9620800", "9801388", "10816319", "810359", "16481565", "11158631", "16907616", "16474393" ]
[ { "pmid": "11127835", "title": "Synaptic plasticity: taming the beast.", "abstract": "Synaptic plasticity provides the basis for most models of learning, memory and development in neural circuits. To generate realistic results, synapse-specific Hebbian forms of plasticity, such as long-term potentiation and depression, must be augmented by global processes that regulate overall levels of neuronal and network activity. Regulatory processes are often as important as the more intensively studied Hebbian processes in determining the consequences of synaptic plasticity for network function. Recent experimental results suggest several novel mechanisms for regulating levels of activity in conjunction with Hebbian synaptic modification. We review three of them-synaptic scaling, spike-timing dependent plasticity and synaptic redistribution-and discuss their functional implications." }, { "pmid": "17287502", "title": "Spike timing-dependent synaptic depression in the in vivo barrel cortex of the rat.", "abstract": "Spike timing-dependent plasticity (STDP) is a computationally powerful form of plasticity in which synapses are strengthened or weakened according to the temporal order and precise millisecond-scale delay between presynaptic and postsynaptic spiking activity. STDP is readily observed in vitro, but evidence for STDP in vivo is scarce. Here, we studied spike timing-dependent synaptic depression in single putative pyramidal neurons of the rat primary somatosensory cortex (S1) in vivo, using two techniques. First, we recorded extracellularly from layer 2/3 (L2/3) and L5 neurons, and paired spontaneous action potentials (postsynaptic spikes) with subsequent subthreshold deflection of one whisker (to drive presynaptic afferents to the recorded neuron) to produce \"post-leading-pre\" spike pairings at known delays. Short delay pairings (<17 ms) resulted in a significant decrease of the extracellular spiking response specific to the paired whisker, consistent with spike timing-dependent synaptic depression. Second, in whole-cell recordings from neurons in L2/3, we paired postsynaptic spikes elicited by direct-current injection with subthreshold whisker deflection to drive presynaptic afferents to the recorded neuron at precise temporal delays. Post-leading-pre pairing (<33 ms delay) decreased the slope and amplitude of the PSP evoked by the paired whisker, whereas \"pre-leading-post\" delays failed to produce depression, and sometimes produced potentiation of whisker-evoked PSPs. These results demonstrate that spike timing-dependent synaptic depression occurs in S1 in vivo, and is therefore a plausible plasticity mechanism in the sensory cortex." }, { "pmid": "11252764", "title": "Is heterosynaptic modulation essential for stabilizing Hebbian plasticity and memory?", "abstract": "In 1894, Ramón y Cajal first proposed that memory is stored as an anatomical change in the strength of neuronal connections. For the following 60 years, little evidence was recruited in support of this idea. This situation changed in the middle of the twentieth century with the development of cellular techniques for the study of synaptic connections and the emergence of new formulations of synaptic plasticity that redefined Ramón y Cajal's idea, making it more suitable for testing. These formulations defined two categories of plasticity, referred to as homosynaptic or Hebbian activity-dependent, and heterosynaptic or modulatory input-dependent. 
Here we suggest that Hebbian mechanisms are used primarily for learning and for short-term memory but often cannot, by themselves, recruit the events required to maintain a long-term memory. In contrast, heterosynaptic plasticity commonly recruits long-term memory mechanisms that lead to transcription and to synpatic growth. When jointly recruited, homosynaptic mechanisms assure that learning is effectively established and heterosynaptic mechanisms ensure that memory is maintained." }, { "pmid": "12031406", "title": "Neuromodulatory transmitter systems in the cortex and their role in cortical plasticity.", "abstract": "Cortical neuromodulatory transmitter systems refer to those classical neurotransmitters such as acetylcholine and monoamines, which share a number of common features. For instance, their centers are located in subcortical regions and send long projection axons to innervate the cortex. The same transmitter can either excite or inhibit cortical neurons depending on the composition of postsynaptic transmitter receptor subtypes. The overall functions of these transmitters are believed to serve as chemical bases of arousal, attention and motivation. The anatomy and physiology of neuromodulatory transmitter systems and their innervations in the cerebral cortex have been well characterized. In addition, ample evidence is available indicating that neuromodulatory transmitters also play roles in development and plasticity of the cortex. In this article, the anatomical organization and physiological function of each of the following neuromodulatory transmitters, acetylcholine, noradrenaline, serotonin, dopamine, and histamine, in the cortex will be described. The involvement of these transmitters in cortical plasticity will then be discussed. Available data suggest that neuromodulatory transmitters can modulate the excitability of cortical neurons, enhance the signal-to-noise ratio of cortical responses, and modify the threshold for activity-dependent synaptic modifications. Synaptic transmissions of these neuromodulatory transmitters are mediated via numerous subtype receptors, which are linked to multiple signal transduction mechanisms. Among the neuromodulatory transmitter receptor subtypes, cholinergic M(1), noradrenergic beta(1) and serotonergic 5-HT(2C) receptors appear to be more important than other receptor subtypes for cortical plasticity. In general, the contribution of neuromodulatory transmitter systems to cortical plasticity may be made through a facilitation of NMDA receptor-gated processes." }, { "pmid": "17400301", "title": "Behavioral dopamine signals.", "abstract": "Lesioning and psychopharmacological studies suggest a wide range of behavioral functions for ascending midbrain dopaminergic systems. However, electrophysiological and neurochemical studies during specific behavioral tasks demonstrate a more restricted spectrum of dopamine-mediated changes. Substantial increases in dopamine-mediated activity, as measured by electrophysiology or voltammetry, are related to rewards and reward-predicting stimuli. A somewhat slower, distinct electrophysiological response encodes the uncertainty associated with rewards. Aversive events produce different, mostly slower, electrophysiological dopamine responses that consist predominantly of depressions. Additionally, more modest dopamine concentration fluctuations, related to punishment and movement, are seen at 200-18,000 times longer time courses using voltammetry and microdialysis in vivo. 
Using these responses, dopamine neurotransmission provides differential and heterogeneous information to subcortical and cortical brain structures about essential outcome components for approach behavior, learning and economic decision-making." }, { "pmid": "11544526", "title": "A cellular mechanism of reward-related learning.", "abstract": "Positive reinforcement helps to control the acquisition of learned behaviours. Here we report a cellular mechanism in the brain that may underlie the behavioural effects of positive reinforcement. We used intracranial self-stimulation (ICSS) as a model of reinforcement learning, in which each rat learns to press a lever that applies reinforcing electrical stimulation to its own substantia nigra. The outputs from neurons of the substantia nigra terminate on neurons in the striatum in close proximity to inputs from the cerebral cortex on the same striatal neurons. We measured the effect of substantia nigra stimulation on these inputs from the cortex to striatal neurons and also on how quickly the rats learned to press the lever. We found that stimulation of the substantia nigra (with the optimal parameters for lever-pressing behaviour) induced potentiation of synapses between the cortex and the striatum, which required activation of dopamine receptors. The degree of potentiation within ten minutes of the ICSS trains was correlated with the time taken by the rats to learn ICSS behaviour. We propose that stimulation of the substantia nigra when the lever is pressed induces a similar potentiation of cortical inputs to the striatum, positively reinforcing the learning of the behaviour by the rats." }, { "pmid": "12371508", "title": "Dopamine-dependent plasticity of corticostriatal synapses.", "abstract": "Knowledge of the effect of dopamine on corticostriatal synaptic plasticity has advanced rapidly over the last 5 years. We consider this new knowledge in relation to three factors proposed earlier to describe the rules for synaptic plasticity in the corticostriatal pathway. These factors are a phasic increase in dopamine release, presynaptic activity and postsynaptic depolarisation. A function is proposed which relates the amount of dopamine release in the striatum to the modulation of corticostriatal synaptic efficacy. It is argued that this function, and the experimental data from which it arises, are compatible with existing models which associate the reward-related firing of dopamine neurons with changes in corticostriatal synaptic efficacy." }, { "pmid": "11452310", "title": "Cortical remodelling induced by activity of ventral tegmental dopamine neurons.", "abstract": "Representations of sensory stimuli in the cerebral cortex can undergo progressive remodelling according to the behavioural importance of the stimuli. The cortex receives widespread projections from dopamine neurons in the ventral tegmental area (VTA), which are activated by new stimuli or unpredicted rewards, and are believed to provide a reinforcement signal for such learning-related cortical reorganization. In the primary auditory cortex (AI) dopamine release has been observed during auditory learning that remodels the sound-frequency representations. Furthermore, dopamine modulates long-term potentiation, a putative cellular mechanism underlying plasticity. Here we show that stimulating the VTA together with an auditory stimulus of a particular tone increases the cortical area and selectivity of the neural responses to that sound stimulus in AI. 
Conversely, the AI representations of nearby sound frequencies are selectively decreased. Strong, sharply tuned responses to the paired tones also emerge in a second cortical area, whereas the same stimuli evoke only poor or non-selective responses in this second cortical field in naive animals. In addition, we found that strong long-range coherence of neuronal discharge emerges between AI and this secondary auditory cortical area." }, { "pmid": "10676963", "title": "A neuronal analogue of state-dependent learning.", "abstract": "State-dependent learning is a phenomenon in which the retrieval of newly acquired information is possible only if the subject is in the same sensory context and physiological state as during the encoding phase. In spite of extensive behavioural and pharmacological characterization, no cellular counterpart of this phenomenon has been reported. Here we describe a neuronal analogue of state-dependent learning in which cortical neurons show an acetylcholine-dependent expression of an acetylcholine-induced functional plasticity. This was demonstrated on neurons of rat somatosensory 'barrel' cortex, whose tunings to the temporal frequency of whisker deflections were modified by cellular conditioning. Pairing whisker stimulation with acetylcholine applied iontophoretically yielded selective lasting modification of responses, the expression of which depended on the presence of exogenous acetylcholine. Administration of acetylcholine during testing revealed frequency-specific changes in response that were not expressed when tested without acetylcholine or when the muscarinic antagonist, atropine, was applied concomitantly. Our results suggest that both acquisition and recall can be controlled by the cortical release of acetylcholine." }, { "pmid": "12165477", "title": "Cholinergic modulation of experience-dependent plasticity in human auditory cortex.", "abstract": "The factors that influence experience-dependent plasticity in the human brain are unknown. We used event-related functional magnetic resonance imaging (fMRI) and a pharmacological manipulation to measure cholinergic modulation of experience-dependent plasticity in human auditory cortex. In a differential aversive conditioning paradigm, subjects were presented with high (1600 Hz) and low tones (400 Hz), one of which was conditioned by pairing with an electrical shock. Prior to presentation, subjects were given either a placebo or an anticholinergic drug (0.4 mg iv scopolamine). Experience-dependent plasticity, expressed as a conditioning-specific enhanced BOLD response, was evident in auditory cortex in the placebo group, but not with scopolamine. This study provides in vivo evidence that experience-dependent plasticity, evident in hemodynamic changes in human auditory cortex, is modulated by acetylcholine." }, { "pmid": "15242655", "title": "Acetylcholine-dependent potentiation of temporal frequency representation in the barrel cortex does not depend on response magnitude during conditioning.", "abstract": "The response properties of neurons of the postero-medial barrel sub-field of the somatosensory cortex (the cortical structure receiving information from the mystacial vibrissae can be modified as a consequence of peripheral manipulations of the afferent activity. This plasticity depends on the integrity of the cortical cholinergic innervation, which originates at the nucleus basalis magnocellularis (NBM). 
The activity of the NBM is related to the behavioral state of the animal and the putative cholinergic neurons are activated by specific events, such as reward-related signals, during behavioral learning. Experimental studies on acetylcholine (ACh)-dependent cortical plasticity have shown that ACh is needed for both the induction and the expression of plastic modifications induced by sensory-cholinergic pairings. Here we review and discuss ACh-dependent plasticity and activity-dependent plasticity and ask whether these two mechanisms are linked. To address this question, we analyzed our data and tested whether changes mediated by ACh were activity-dependent. We show that ACh-dependent potentiation of response in the barrel cortex of rats observed after sensory-cholinergic pairing was not correlated to the changes in activity induced during pairing. Since these results suggest that the effect of ACh during pairing is not exerted through a direct control of the post-synaptic activity, we propose that ACh might induce its effect either pre- or post-synaptically through activation of second messenger cascades." }, { "pmid": "17220510", "title": "Solving the distal reward problem through linkage of STDP and dopamine signaling.", "abstract": "In Pavlovian and instrumental conditioning, reward typically comes seconds after reward-triggering actions, creating an explanatory conundrum known as \"distal reward problem\": How does the brain know what firing patterns of what neurons are responsible for the reward if 1) the patterns are no longer there when the reward arrives and 2) all neurons and synapses are active during the waiting period to the reward? Here, we show how the conundrum is resolved by a model network of cortical spiking neurons with spike-timing-dependent plasticity (STDP) modulated by dopamine (DA). Although STDP is triggered by nearly coincident firing patterns on a millisecond timescale, slow kinetics of subsequent synaptic plasticity is sensitive to changes in the extracellular DA concentration during the critical period of a few seconds. Random firings during the waiting period to the reward do not affect STDP and hence make the network insensitive to the ongoing activity-the key feature that distinguishes our approach from previous theoretical studies, which implicitly assume that the network be quiet during the waiting period or that the patterns be preserved until the reward arrives. This study emphasizes the importance of precise firing patterns in brain dynamics and suggests how a global diffusive reinforcement signal in the form of extracellular DA can selectively influence the right synapses at the right time." }, { "pmid": "17571943", "title": "Reinforcement learning, spike-time-dependent plasticity, and the BCM rule.", "abstract": "Learning agents, whether natural or artificial, must update their internal parameters in order to improve their behavior over time. In reinforcement learning, this plasticity is influenced by an environmental signal, termed a reward, that directs the changes in appropriate directions. We apply a recently introduced policy learning algorithm from machine learning to networks of spiking neurons and derive a spike-time-dependent plasticity rule that ensures convergence to a local optimum of the expected average reward. The approach is applicable to a broad class of neuronal models, including the Hodgkin-Huxley model. We demonstrate the effectiveness of the derived rule in several toy problems. 
Finally, through statistical analysis, we show that the synaptic plasticity rule established is closely related to the widely used BCM rule, for which good biological evidence exists." }, { "pmid": "16764506", "title": "Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning.", "abstract": "In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, control of postsynaptic rates and control of temporal locality, are studied. The relation of our results to spike-timing-dependent plasticity and reinforcement learning is discussed." }, { "pmid": "4974291", "title": "Operant conditioning of cortical unit activity.", "abstract": "The activity of single neurons in precentral cortex of unanesthetized monkeys (Macaca mulatta) was conditioned by reinforcing high rates of neuronal discharge with delivery of a food pellet. Auditory or visual feedback of unit firing rates was usually provided in addition to food reinforcement. After several training sessions, monkeys could increase the activity of newly isolated cells by 50 to 500 percent above rates before reinforcement." }, { "pmid": "17234689", "title": "Volitional control of neural activity: implications for brain-computer interfaces.", "abstract": "Successful operation of brain-computer interfaces (BCI) and brain-machine interfaces (BMI) depends significantly on the degree to which neural activity can be volitionally controlled. This paper reviews evidence for such volitional control in a variety of neural signals, with particular emphasis on the activity of cortical neurons. Some evidence comes from conventional experiments that reveal volitional modulation in neural activity related to behaviours, including real and imagined movements, cognitive imagery and shifts of attention. More direct evidence comes from studies on operant conditioning of neural activity using biofeedback, and from BCI/BMI studies in which neural activity controls cursors or peripheral devices. Limits in the degree of accuracy of control in the latter studies can be attributed to several possible factors. Some of these factors, particularly limited practice time, can be addressed with long-term implanted BCIs. Preliminary observations with implanted circuits implementing recurrent BCIs are summarized." }, { "pmid": "11165912", "title": "Dynamics of networks of randomly connected excitatory and inhibitory spiking neurons.", "abstract": "Recent advances in the understanding of the dynamics of populations of spiking neurones are reviewed. 
These studies shed light on how a population of neurones can follow arbitrary variations in input stimuli, how the dynamics of the population depends on the type of noise, and how recurrent connections influence the dynamics. The importance of inhibitory feedback for the generation of irregularity in single cell behaviour is emphasized. Examples of computation that recurrent networks with excitatory and inhibitory cells can perform are then discussed. Maintenance of a network state as an attractor of the system is discussed as a model for working memory function, in both object and spatial modalities. These models can be used to interpret and make predictions about electrophysiological data in the awake monkey." }, { "pmid": "17444756", "title": "Spike-timing-dependent plasticity in balanced random networks.", "abstract": "The balanced random network model attracts considerable interest because it explains the irregular spiking activity at low rates and large membrane potential fluctuations exhibited by cortical neurons in vivo. In this article, we investigate to what extent this model is also compatible with the experimentally observed phenomenon of spike-timing-dependent plasticity (STDP). Confronted with the plethora of theoretical models for STDP available, we reexamine the experimental data. On this basis, we propose a novel STDP update rule, with a multiplicative dependence on the synaptic weight for depression, and a power law dependence for potentiation. We show that this rule, when implemented in large, balanced networks of realistic connectivity and sparseness, is compatible with the asynchronous irregular activity regime. The resultant equilibrium weight distribution is unimodal with fluctuating individual weight trajectories and does not exhibit development of structure. We investigate the robustness of our results with respect to the relative strength of depression. We introduce synchronous stimulation to a group of neurons and demonstrate that the decoupling of this group from the rest of the network is so severe that it cannot effectively control the spiking of other neurons, even those with the highest convergence from this group." }, { "pmid": "9852584", "title": "Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type.", "abstract": "In cultures of dissociated rat hippocampal neurons, persistent potentiation and depression of glutamatergic synapses were induced by correlated spiking of presynaptic and postsynaptic neurons. The relative timing between the presynaptic and postsynaptic spiking determined the direction and the extent of synaptic changes. Repetitive postsynaptic spiking within a time window of 20 msec after presynaptic activation resulted in long-term potentiation (LTP), whereas postsynaptic spiking within a window of 20 msec before the repetitive presynaptic activation led to long-term depression (LTD). Significant LTP occurred only at synapses with relatively low initial strength, whereas the extent of LTD did not show obvious dependence on the initial synaptic strength. Both LTP and LTD depended on the activation of NMDA receptors and were absent in cases in which the postsynaptic neurons were GABAergic in nature. Blockade of L-type calcium channels with nimodipine abolished the induction of LTD and reduced the extent of LTP. 
These results underscore the importance of precise spike timing, synaptic strength, and postsynaptic cell type in the activity-induced modification of central synapses and suggest that Hebb's rule may need to incorporate a quantitative consideration of spike timing that reflects the narrow and asymmetric window for the induction of synaptic modification." }, { "pmid": "10966623", "title": "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity.", "abstract": "Hebbian models of development and learning require both activity-dependent synaptic plasticity and a mechanism that induces competition between different synapses. One form of experimentally observed long-term synaptic plasticity, which we call spike-timing-dependent plasticity (STDP), depends on the relative timing of pre- and postsynaptic action potentials. In modeling studies, we find that this form of synaptic modification can automatically balance synaptic strengths to make postsynaptic firing irregular but more sensitive to presynaptic spike timing. It has been argued that neurons in vivo operate in such a balanced regime. Synapses modifiable by STDP compete for control of the timing of postsynaptic action potentials. Inputs that fire the postsynaptic neuron with short latency or that act in correlated groups are able to compete most successfully and develop strong synapses, while synapses of longer-latency or less-effective inputs are weakened." }, { "pmid": "11705408", "title": "Intrinsic stabilization of output rates by spike-based Hebbian learning.", "abstract": "We study analytically a model of long-term synaptic plasticity where synaptic changes are triggered by presynaptic spikes, postsynaptic spikes, and the time differences between presynaptic and postsynaptic spikes. The changes due to correlated input and output spikes are quantified by means of a learning window. We show that plasticity can lead to an intrinsic stabilization of the mean firing rate of the postsynaptic neuron. Subtractive normalization of the synaptic weights (summed over all presynaptic inputs converging on a postsynaptic neuron) follows if, in addition, the mean input rates and the mean input correlations are identical at all synapses. If the integral over the learning window is positive, firing-rate stabilization requires a non-Hebbian component, whereas such a component is not needed if the integral of the learning window is negative. A negative integral corresponds to anti-Hebbian learning in a model with slowly varying firing rates. For spike-based learning, a strict distinction between Hebbian and anti-Hebbian rules is questionable since learning is driven by correlations on the timescale of the learning window. The correlations between presynaptic and postsynaptic firing are evaluated for a piecewise-linear Poisson model and for a noisy spiking neuron model with refractoriness. While a negative integral over the learning window leads to intrinsic rate stabilization, the positive part of the learning window picks up spatial and temporal correlations in the input." }, { "pmid": "11744242", "title": "Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons.", "abstract": "To investigate the basis of the fluctuating activity present in neocortical neurons in vivo, we have combined computational models with whole-cell recordings using the dynamic-clamp technique. 
A simplified 'point-conductance' model was used to represent the currents generated by thousands of stochastically releasing synapses. Synaptic activity was represented by two independent fast glutamatergic and GABAergic conductances described by stochastic random-walk processes. An advantage of this approach is that all the model parameters can be determined from voltage-clamp experiments. We show that the point-conductance model captures the amplitude and spectral characteristics of the synaptic conductances during background activity. To determine if it can recreate in vivo-like activity, we injected this point-conductance model into a single-compartment model, or in rat prefrontal cortical neurons in vitro using dynamic clamp. This procedure successfully recreated several properties of neurons intracellularly recorded in vivo, such as a depolarized membrane potential, the presence of high-amplitude membrane potential fluctuations, a low-input resistance and irregular spontaneous firing activity. In addition, the point-conductance model could simulate the enhancement of responsiveness due to background activity. We conclude that many of the characteristics of cortical neurons in vivo can be explained by fast glutamatergic and GABAergic conductances varying stochastically." }, { "pmid": "17928565", "title": "Reinforcement learning with modulated spike timing dependent synaptic plasticity.", "abstract": "Spike timing-dependent synaptic plasticity (STDP) has emerged as the preferred framework linking patterns of pre- and postsynaptic activity to changes in synaptic strength. Although synaptic plasticity is widely believed to be a major component of learning, it is unclear how STDP itself could serve as a mechanism for general purpose learning. On the other hand, algorithms for reinforcement learning work on a wide variety of problems, but lack an experimentally established neural implementation. Here, we combine these paradigms in a novel model in which a modified version of STDP achieves reinforcement learning. We build this model in stages, identifying a minimal set of conditions needed to make it work. Using a performance-modulated modification of STDP in a two-layer feedforward network, we can train output neurons to generate arbitrarily selected spike trains or population responses. Furthermore, a given network can learn distinct responses to several different input patterns. We also describe in detail how this model might be implemented biologically. Thus our model offers a novel and biologically plausible implementation of reinforcement learning that is capable of training a neural population to produce a very wide range of possible mappings between synaptic input and spiking output." }, { "pmid": "10195145", "title": "Input synchrony and the irregular firing of cortical neurons.", "abstract": "Cortical neurons in the waking brain fire highly irregular, seemingly random, spike trains in response to constant sensory stimulation, whereas in vitro they fire regularly in response to constant current injection. To test whether, as has been suggested, this high in vivo variability could be due to the postsynaptic currents generated by independent synaptic inputs, we injected synthetic synaptic current into neocortical neurons in brain slices. We report that independent inputs cannot account for this high variability, but this variability can be explained by a simple alternative model of the synaptic drive in which inputs arrive synchronously. 
Our results suggest that synchrony may be important in the neural code by providing a means for encoding signals with high temporal fidelity over a population of neurons." }, { "pmid": "7770778", "title": "Reliability of spike timing in neocortical neurons.", "abstract": "It is not known whether the variability of neural activity in the cerebral cortex carries information or reflects noisy underlying mechanisms. In an examination of the reliability of spike generation using recordings from neurons in rat neocortical slices, the precision of spike timing was found to depend on stimulus transients. Constant stimuli led to imprecise spike trains, whereas stimuli with fluctuations resembling synaptic activity produced spike trains with timing reproducible to less than 1 millisecond. These data suggest a low intrinsic noise level in spike generation, which could allow cortical neurons to accurately transform synaptic input into spike sequences, supporting a possible role for spike timing in the processing of cortical information by the neocortex." }, { "pmid": "14762148", "title": "Dynamics of population rate codes in ensembles of neocortical neurons.", "abstract": "Information processing in neocortex can be very fast, indicating that neuronal ensembles faithfully transmit rapidly changing signals to each other. Apart from signal-to-noise issues, population codes are fundamentally constrained by the neuronal dynamics. In particular, the biophysical properties of individual neurons and collective phenomena may substantially limit the speed at which a graded signal can be represented by the activity of an ensemble. These implications of the neuronal dynamics are rarely studied experimentally. Here, we combine theoretical analysis and whole cell recordings to show that encoding signals in the variance of uncorrelated synaptic inputs to a neocortical ensemble enables faithful transmission of graded signals with high temporal resolution. In contrast, the encoding of signals in the mean current is subject to low-pass filtering." }, { "pmid": "12433288", "title": "Real-time computing without stable states: a new framework for neural computation based on perturbations.", "abstract": "A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. 
Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology." }, { "pmid": "16310350", "title": "Fading memory and kernel properties of generic cortical microcircuit models.", "abstract": "It is quite difficult to construct circuits of spiking neurons that can carry out complex computational tasks. On the other hand even randomly connected circuits of spiking neurons can in principle be used for complex computational tasks such as time-warp invariant speech recognition. This is possible because such circuits have an inherent tendency to integrate incoming information in such a way that simple linear readouts can be trained to transform the current circuit activity into the target output for a very large number of computational tasks. Consequently we propose to analyze circuits of spiking neurons in terms of their roles as analog fading memory and non-linear kernels, rather than as implementations of specific computational operations and algorithms. This article is a sequel to [W. Maass, T. Natschläger, H. Markram, Real-time computing without stable states: a new framework for neural computation based on perturbations, Neural Comput. 14 (11) (2002) 2531-2560, Online available as #130 from: ], and contains new results about the performance of generic neural microcircuit models for the recognition of speech that is subject to linear and non-linear time-warps, as well as for computations on time-varying firing rates. These computations rely, apart from general properties of generic neural microcircuit models, just on capabilities of simple linear readouts trained by linear regression. This article also provides detailed data on the fading memory property of generic neural microcircuit models, and a quick review of other new results on the computational power of such circuits of spiking neurons." }, { "pmid": "15483600", "title": "Plasticity in single neuron and circuit computations.", "abstract": "Plasticity in neural circuits can result from alterations in synaptic strength or connectivity, as well as from changes in the excitability of the neurons themselves. To better understand the role of plasticity in the brain, we need to establish how brain circuits work and the kinds of computations that different circuit structures achieve. By linking theoretical and experimental studies, we are beginning to reveal the consequences of plasticity mechanisms for network dynamics, in both simple invertebrate circuits and the complex circuits of mammalian cerebral cortex." }, { "pmid": "9560274", "title": "Differential signaling via the same axon of neocortical pyramidal neurons.", "abstract": "The nature of information stemming from a single neuron and conveyed simultaneously to several hundred target neurons is not known. Triple and quadruple neuron recordings revealed that each synaptic connection established by neocortical pyramidal neurons is potentially unique. 
Specifically, synaptic connections onto the same morphological class differed in the numbers and dendritic locations of synaptic contacts, their absolute synaptic strengths, as well as their rates of synaptic depression and recovery from depression. The same axon of a pyramidal neuron innervating another pyramidal neuron and an interneuron mediated frequency-dependent depression and facilitation, respectively, during high frequency discharges of presynaptic action potentials, suggesting that the different natures of the target neurons underlie qualitative differences in synaptic properties. Facilitating-type synaptic connections established by three pyramidal neurons of the same class onto a single interneuron, were all qualitatively similar with a combination of facilitation and depression mechanisms. The time courses of facilitation and depression, however, differed for these convergent connections, suggesting that different pre-postsynaptic interactions underlie quantitative differences in synaptic properties. Mathematical analysis of the transfer functions of frequency-dependent synapses revealed supra-linear, linear, and sub-linear signaling regimes in which mixtures of presynaptic rates, integrals of rates, and derivatives of rates are transferred to targets depending on the precise values of the synaptic parameters and the history of presynaptic action potential activity. Heterogeneity of synaptic transfer functions therefore allows multiple synaptic representations of the same presynaptic action potential train and suggests that these synaptic representations are regulated in a complex manner. It is therefore proposed that differential signaling is a key mechanism in neocortical information processing, which can be regulated by selective synaptic modifications." }, { "pmid": "12022505", "title": "Synapses as dynamic memory buffers.", "abstract": "This article throws new light on the possible role of synapses in information transmission through theoretical analysis and computer simulations. We show that the internal dynamic state of a synapse may serve as a transient memory buffer that stores information about the most recent segment of the spike train that was previously sent to this synapse. This information is transmitted to the postsynaptic neuron through the amplitudes of the postsynaptic response for the next few spikes. In fact, we show that most of this information about the preceding spike train is already contained in the postsynaptic response for just two additional spikes. It is demonstrated that the postsynaptic neuron receives simultaneously information about the specific type of synapse which has transmitted these pulses. In view of recent findings by Gupta et al. [Science, 287 (2000) 273] that different types of synapses are characteristic for specific types of presynaptic neurons, the postsynaptic neuron receives in this way partial knowledge about the identity of the presynaptic neuron from which it has received information. Our simulations are based on recent data about the dynamics of GABAergic synapses. We show that the relatively large number of synaptic release sites that make up a GABAergic synaptic connection makes these connections suitable for such complex information transmission processes." }, { "pmid": "10634775", "title": "Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex.", "abstract": "A puzzling feature of the neocortex is the rich array of inhibitory interneurons. 
Multiple neuron recordings revealed numerous electrophysiological-anatomical subclasses of neocortical gamma-aminobutyric acid-ergic (GABAergic) interneurons and three types of GABAergic synapses. The type of synapse used by each interneuron to influence its neighbors follows three functional organizing principles. These principles suggest that inhibitory synapses could shape the impact of different interneurons according to their specific spatiotemporal patterns of activity and that GABAergic interneuron and synapse diversity may enable combinatorial inhibitory effects in the neocortex." }, { "pmid": "9620800", "title": "Visual input evokes transient and strong shunting inhibition in visual cortical neurons.", "abstract": "The function and nature of inhibition of neurons in the visual cortex have been the focus of both experimental and theoretical investigations. There are two ways in which inhibition can suppress synaptic excitation. In hyperpolarizing inhibition, negative and positive currents sum linearly to produce a net change in membrane potential. In contrast, shunting inhibition acts nonlinearly by causing an increase in membrane conductance; this divides the amplitude of the excitatory response. Visually evoked changes in membrane conductance have been reported to be nonsignificant or weak, supporting the hyperpolarization mode of inhibition. Here we present a new approach to studying inhibition that is based on in vivo whole-cell voltage clamping. This technique allows the continuous measurement of conductance dynamics during visual activation. We show, in neurons of cat primary visual cortex, that the response to optimally orientated flashed bars can increase the somatic input conductance to more than three times that of the resting state. The short latency of the visually evoked peak of conductance, and its apparent reversal potential suggest a dominant contribution from gamma-aminobutyric acid ((GABA)A) receptor-mediated synapses. We propose that nonlinear shunting inhibition may act during the initial stage of visual cortical processing, setting the balance between opponent 'On' and 'Off' responses in different locations of the visual receptive field." }, { "pmid": "9801388", "title": "Synaptic integration in striate cortical simple cells.", "abstract": "Simple cells in the visual cortex respond to the precise position of oriented contours (Hubel and Wiesel, 1962). This sensitivity reflects the structure of the simple receptive field, which exhibits two sorts of antagonism between on and off inputs. First, simple receptive fields are divided into adjacent on and off subregions; second, within each subregion, stimuli of the reverse contrast evoke responses of the opposite sign: push-pull (Hubel and Wiesel, 1962; Palmer and Davis, 1981; Jones and Palmer, 1987; Ferster, 1988). We have made whole-cell patch recordings from cat area 17 during visual stimulation to examine the generation and integration of excitation (push) and suppression (pull) in the simple receptive field. The temporal structure of the push reflected the pattern of thalamic inputs, as judged by comparing the intracellular cortical responses to extracellular recordings made in the lateral geniculate nucleus. Two mechanisms have been advanced to account for the pull-withdrawal of thalamic drive and active, intracortical inhibition (Hubel and Wiesel, 1962; Heggelund, 1968; Ferster, 1988). Our results suggest that intracortical inhibition is the dominant, and perhaps sole, mechanism of suppression. 
The inhibitory influences operated within a wide dynamic range. When inhibition was strong, the membrane conductance could be doubled or tripled. Furthermore, if a stimulus confined to one subregion was enlarged so that it extended into the next, the sign of response often changed from depolarizing to hyperpolarizing. In other instances, the inhibition modulated neuronal output subtly, by elevating spike threshold or altering firing rate at a given membrane voltage." }, { "pmid": "10816319", "title": "Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex.", "abstract": "Membrane potentials of cortical neurons fluctuate between a hyperpolarized ('down') state and a depolarized ('up') state which may be separated by up to 30 mV, reflecting rapid but infrequent transitions between two patterns of synaptic input. Here we show that such fluctuations may contribute to representation of visual stimuli by cortical cells. In complex cells of anesthetized cats, where such fluctuations are most prominent, prolonged visual stimulation increased the probability of the up state. This probability increase was related to stimulus strength: its dependence on stimulus orientation and contrast matched each cell's averaged membrane potential. Thus large fluctuations in membrane potential are not simply noise on which visual responses are superimposed, but may provide a substrate for encoding sensory information." }, { "pmid": "810359", "title": "Correlations between activity of motor cortex cells and arm muscles during operantly conditioned response patterns.", "abstract": "Monkey motor cortex cells were recorded during isolated, isometric contractions of each of four representative arm muscles -- a flexor and extensor of wrist and elbow -- and comparable response averages computed. Most cells were coactivated with several of the muscles; some fired the same way with all four and others with none. Results suggest that many precentral cells have a higher order relation to muscles than motoneurons. Operantly reinforced bursts of cell activity were associated with coactivation of specific muscles, called the cell's \"motor field\"; the most strongly coactivated muscle was usually the one whose isolated contraction had evoked the most intense unit activity. During active elbow movements most cells fired in a manner consistent with their isometric patterns, but clear exceptions were noted. Differential reinforcement of unit activity and muscle suppression was invariably successful in dissociating correlations. The strength of each unit-muscle correlation was assessed by the relative intensity of their coactivation and its consistency under different response conditions. Several cells exhibited the most intense coactivation with the same muscle during all conditions. Thus, intensity and consistency criteria usually agreed, suggesting that strong correlations so determined may operationally define a \"functional relation\". However, correlations in the sense of covariation are neither necessary nor sufficient evidence to establish anatomical connections. To test the possibility of direct excitatory connections we stimulated the cortex, but found lowest threshold responses in distal muscles, even from points where most cells had been strongly correlated with proximal muscles. Post-spike averages of rectified EMG activity provided scant evidence for cell-related fluctuations in firing probabilities of any muscles." 
}, { "pmid": "16481565", "title": "A statistical analysis of information-processing properties of lamina-specific cortical microcircuit models.", "abstract": "A major challenge for computational neuroscience is to understand the computational function of lamina-specific synaptic connection patterns in stereotypical cortical microcircuits. Previous work on this problem had focused on hypothesized specific computational roles of individual layers and connections between layers and had tested these hypotheses through simulations of abstract neural network models. We approach this problem by studying instead the dynamical system defined by more realistic cortical microcircuit models as a whole and by investigating the influence that its laminar structure has on the transmission and fusion of information within this dynamical system. The circuit models that we examine consist of Hodgkin-Huxley neurons with dynamic synapses, based on detailed data from Thomson and others (2002), Markram and others (1998), and Gupta and others (2000). We investigate to what extent this cortical microcircuit template supports the accumulation and fusion of information contained in generic spike inputs into layer 4 and layers 2/3 and how well it makes this information accessible to projection neurons in layers 2/3 and layer 5. We exhibit specific computational advantages of such data-based lamina-specific cortical microcircuit model by comparing its performance with various types of control models that have the same components and the same global statistics of neurons and synaptic connections but are missing the lamina-specific structure of real cortical microcircuits. We conclude that computer simulations of detailed lamina-specific cortical microcircuit models provide new insight into computational consequences of anatomical and physiological data." }, { "pmid": "11158631", "title": "What is a moment? Transient synchrony as a collective mechanism for spatiotemporal integration.", "abstract": "A previous paper described a network of simple integrate-and-fire neurons that contained output neurons selective for specific spatiotemporal patterns of inputs; only experimental results were described. We now present the principles behind the operation of this network and discuss how these principles point to a general class of computational operations that can be carried out easily and naturally by networks of spiking neurons. Transient synchrony of the action potentials of a group of neurons is used to signal \"recognition\" of a space-time pattern across the inputs of those neurons. Appropriate synaptic coupling produces synchrony when the inputs to these neurons are nearly equal, leaving the neurons unsynchronized or only weakly synchronized for other input circumstances. When the input to this system comes from timed past events represented by decaying delay activity, the pattern of synaptic connections can be set such that synchronization occurs only for selected spatiotemporal patterns. We show how the recognition is invariant to uniform time warp and uniform intensity change of the input events. The fundamental recognition event is a transient collective synchronization, representing \"many neurons now agree,\" an event that is then detected easily by a cell with a small time constant. If such synchronization is used in neurobiological computation, its hallmark will be a brief burst of gamma-band electroencephalogram noise when and where such a recognition event or decision occurs." 
}, { "pmid": "16907616", "title": "Gradient learning in spiking neural networks by dynamic perturbation of conductances.", "abstract": "We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of \"empiric\" synapses driven by random spike trains from an external source." }, { "pmid": "16474393", "title": "The tempotron: a neuron that learns spike timing-based decisions.", "abstract": "The timing of action potentials in sensory neurons contains substantial information about the eliciting stimuli. Although the computational advantages of spike timing-based neuronal codes have long been recognized, it is unclear whether, and if so how, neurons can learn to read out such representations. We propose a new, biologically plausible supervised synaptic learning rule that enables neurons to efficiently learn a broad range of decision rules, even when information is embedded in the spatiotemporal structure of spike patterns rather than in mean firing rates. The number of categorizations of random spatiotemporal patterns that a neuron can implement is several times larger than the number of its synapses. The underlying nonlinear temporal computation allows neurons to access information beyond single-neuron statistics and to discriminate between inputs on the basis of multineuronal spike statistics. Our work demonstrates the high capacity of neural systems to learn to decode information embedded in distributed patterns of spike synchrony." } ]
International Journal of Telemedicine and Applications
19132095
PMC2613436
10.1155/2009/461560
Security Framework for Pervasive Healthcare Architectures Utilizing MPEG-21 IPMP Components
In modern ubiquitous computing environments, it is more necessary than ever to deploy pervasive healthcare architectures in which the patient is the central point, surrounded by various small embedded computing devices that measure sensitive physical indications and interact with hospital databases, thus allowing an urgent medical response when critical situations occur. Such environments must be developed to satisfy the basic security requirements: secure real-time data communication, protection of sensitive medical data and measurements, data integrity and confidentiality, and protection of the monitored patient's privacy. In this work, we argue that the MPEG-21 Intellectual Property Management and Protection (IPMP) components can be used to protect transmitted medical information and enhance the patient's privacy, since they provide selective and controlled access to the medical data sent toward the hospital's servers.
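To make the "selective and controlled access" claim concrete, the following is a minimal Python sketch of a deny-by-default permission check over protected medical items. It is an illustration only: the License structure and every name in it are hypothetical simplifications, not the actual MPEG-21 REL/IPMP schema used in this work.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class License:
    """Toy stand-in for an IPMP-style governance record: it states which
    principal may exercise which right on which protected data item."""
    principal: str   # e.g. the role of the requesting user
    right: str       # e.g. "view" or "aggregate"
    item_id: str     # identifier of the governed measurement stream

def is_permitted(licenses: List[License], principal: str,
                 right: str, item_id: str) -> bool:
    """Deny by default: access is granted only if an explicit license
    covers the exact (principal, right, item) combination."""
    return any(lic.principal == principal and lic.right == right and
               lic.item_id == item_id for lic in licenses)

licenses = [License("cardiologist_on_call", "view", "ecg_stream_42")]
print(is_permitted(licenses, "cardiologist_on_call", "view", "ecg_stream_42"))  # True
print(is_permitted(licenses, "billing_department", "view", "ecg_stream_42"))    # False
```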
2. Related WorkGialelis et al. [1] propose a pervasive healthcare architecture in which a wearable health monitoring system is integrated into a broader telemedical infrastructure, allowing high-risk cardiovascular patients to closely monitor changes in their critical vital signs and to receive expert feedback that helps them maintain optimal health status. Consistent with the major challenge of providing good-quality, reliable health care services to an increasing number of people with limited financial and human resources, they propose a person-based health care system consisting of wearable commercial off-the-shelf nodes that are already used in the hospital environment; these nodes are capable of sensing and processing blood oxygen, blood pressure, ECGs, and other vital signs, and they can be seamlessly integrated into wireless personal area networks (WPANs) for ubiquitous real-time patient monitoring. Their architecture, however, lacks safety, security, and privacy considerations, which may lead to serious breaches of the architecture's and the EMDs' functionality or of users' privacy.

Venkatasubramanian and Gupta [2] surveyed security solutions for pervasive healthcare environments, focusing on securing the data collected by EMDs, securing the communications between EMDs, and investigating mechanisms for controlling access to medical data. They propose the use of cryptographic primitives in which measurements of physiological values serve as cryptographic keys, thus eliminating the need for key distribution when securing data and establishing secure communications between two entities. Concerning access control to medical data, they survey methods based on role-based access control (RBAC), extended for use in pervasive healthcare environments.

To the best of our knowledge, the only proposal to use MPEG-21 as an access control mechanism for medical records is by Brox [3]. The author links patient records to MPEG-21 digital items and attempts to find access control mechanisms based on the MPEG-21 standard. However, that work does not provide a clear architecture implementing the use of MPEG-21, nor does it employ the MPEG-21 Intellectual Property Management and Protection (IPMP) components to protect medical records; their use is mentioned only as a future, open research issue.
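As a rough illustration of the physiological-value-based keying idea surveyed by Venkatasubramanian and Gupta [2], the sketch below derives a shared secret from inter-pulse intervals and uses it to tag a vital-sign record. All values, bin widths, and function names are assumptions made for the example; real schemes use considerably more robust feature agreement and key derivation.

```python
import hashlib
import hmac
import json

def derive_key(ipi_ms, bin_width_ms=10):
    """Both the wearable sensor and the body-area gateway observe roughly the
    same inter-pulse intervals (IPIs); coarse quantization lets them agree on
    key material without an explicit key-distribution step."""
    quantized = [int(round(x / bin_width_ms)) for x in ipi_ms]
    material = ",".join(str(q) for q in quantized).encode()
    return hashlib.sha256(material).digest()

def protect_measurement(key, record):
    """Attach an HMAC tag so the receiving side can check that the vital-sign
    record was not altered in transit (confidentiality would additionally
    require encrypting the payload with a key derived the same way)."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": record, "tag": tag}

# Hypothetical IPIs (in ms) as seen by a wearable ECG node.
key = derive_key([812, 798, 805, 821, 809])
print(protect_measurement(key, {"spo2": 97, "heart_rate": 74}))
```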
[]
[]
PLoS Computational Biology
19300487
PMC2651022
10.1371/journal.pcbi.1000318
Accurate Detection of Recombinant Breakpoints in Whole-Genome Alignments
We propose a novel method for detecting sites of molecular recombination in multiple alignments. Our approach is a compromise between two previous extremes: computationally prohibitive but mathematically rigorous methods, and imprecise heuristic methods. Using a combined algorithm for estimating tree structure and hidden Markov model parameters, our program detects changes in phylogenetic tree topology over a multiple sequence alignment. We evaluate our method on benchmark datasets from previous studies of two recombinant pathogens, Neisseria and HIV-1, as well as on simulated data. We show that we are able to detect not only recombinant regions of vastly different sizes but also the locations of breakpoints with great accuracy. Our method infers recombination breakpoints well while remaining practical for larger datasets. In all cases, we confirm the breakpoint predictions of previous studies, and in many cases we offer novel predictions.
Previous Related WorkWe give here an outline of previous methods related to our phylo-HMM approach. For a more thorough survey of recombination detection methods, see [1].

The rationale for phylogenetic recombination inference is motivated by the structure of the Ancestral Recombination Graph (ARG), which contains all phylogenetic and recombination histories. The underlying idea is that recombination events in the history of the ARG will, in certain cases, lead to discordant phylogenetic histories for present-day species.

Various approaches to learning the ARG directly from sequence data have been developed, such as [8] and [9]. We recognize that PRI is in a somewhat different category, both in goal and in approach, compared with these methods, though they are motivated by the same underlying biological phenomenon. Rather than aiming to reconstruct the ARG in its entirety, our emphasis is on modeling fast-evolving organisms with the goal of accurately detecting breakpoints for biological and epidemiological study.

The most widely used program for phylogenetic recombination detection is SimPlot [10] (on MS Windows). The Recombination Identification Program (RIP) [11], a similar program, runs on UNIX machines as well as from a server on the LANL HIV Database site. This program slides a window along the alignment and plots the similarity of a query sequence to a panel of reference sequences. The window and step size are adjustable to accommodate varying levels of sensitivity. Bootscanning slides a window, performs many replicates of bootstrapped phylogenetic trees in each window, and plots the percentage of trees that show clustering between the query sequence and a given reference sequence. Bootscanning produces output similar to that of our program, namely a predicted partition of the alignment as well as trees for each region, but the method is entirely different.

In [12], Husmeier and Wright use a model that is similar to ours except for the training scheme. Because they have no scalable tree-optimizing heuristic, their input alignment is limited to 4 taxa so that all unrooted tree topologies can be covered with only 3 HMM states, which makes their method intractable for larger datasets. They show that they are able to convincingly detect small recombinant regions in Neisseria as well as in simulated datasets limited to 4 taxa [12].

The recombination detection problem can be thought of as two inter-related problems: how to accurately partition the alignment and how to construct trees on each region. This duality reflects the nature of the ARG: it simultaneously encodes the marginal tree topologies and where they occur in the alignment. Notice that if the solution to one sub-problem is known, the other becomes easy. If the alignment is already partitioned, simply run a tree-inference program on the separate regions; this gives the marginal trees of the sample. If the trees are known, simply construct an HMM with one tree in each state and run the forward/backward algorithm to infer breakpoints. Previous methods have exploited this property by assuming one of these problems to be solved and focusing on the other. For example, in Husmeier and Wright's model there were very few trees to be tested, so the main difficulty was partitioning the alignment, which they addressed with an HMM similar to ours. In SimPlot, windows (which are essentially small partitions) are passed along the alignment and trees/similarity plots are constructed.
This allows the program to focus on tree construction (usually done with bootstrapped neighbor-joining) rather than on searching for the optimal alignment partition.

By employing a robust probabilistic model with a novel training scheme, we find a middle ground between the heuristic approach of SimPlot [10] and the computational intractability of Husmeier and Wright's method [12]: we are essentially able to solve the recombination inference problem as a whole, rather than neglecting one sub-part and focusing on the other. We use an HMM to model tree topology changes over the columns of a multiple alignment. This is done much in the same way as Husmeier and Wright, but our use of a more sophisticated tree-optimization method (the structural EM heuristic) allows searching for recombination in a larger pool of sequences. By modifying the usual EM method for estimating HMM parameters in a suitable way, we are able to learn simultaneously the optimal partitioning of the alignment and the trees in each of these partitions. We are able to detect short recombinant regions better than previous methods for several reasons. First, we do not use any sliding windows, which may be too coarse-grained to detect such small regions of differing topology. Second, our method allows each tree after EM convergence to be evaluated at every column, so small recombinant regions are not limited by their size; they must only 'match' the topology to be detected or contribute to the tree training. By embedding trees in the hidden states of an HMM, the transition matrix allows us essentially to place a prior on the number of breakpoints, as opposed to considering each column independently. Furthermore, since the counts in the E-step are computed using all columns of the alignment, distant regions of the alignment with similar topology may contribute their signal to a single tree, whereas in a window-sliding approach each window is analyzed independently.
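The observation that known trees reduce breakpoint detection to HMM inference can be made concrete with a short sketch. Assuming per-column log-likelihoods under each candidate tree have already been computed by a separate phylogenetic likelihood routine (they are not computed here), one tree is placed in each hidden state and posterior decoding marks the columns where the preferred topology switches; the sticky transition parameter plays the role of the prior on the number of breakpoints. This is an illustrative simplification, not the paper's full EM training scheme.

```python
import numpy as np

def _lse(a, axis):
    """Numerically stable log-sum-exp along an axis."""
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m, axis) + np.log(np.sum(np.exp(a - m), axis=axis))

def topology_posteriors(col_loglik, stay=0.999):
    """col_loglik[c, k] = log P(column c | tree k).  Each HMM state holds one
    candidate tree; a high self-transition probability 'stay' discourages
    frequent topology switches (i.e. spurious breakpoints)."""
    n_cols, n_trees = col_loglik.shape
    switch = (1.0 - stay) / (n_trees - 1)
    log_a = np.log(np.eye(n_trees) * stay + (1.0 - np.eye(n_trees)) * switch)
    fwd = np.zeros((n_cols, n_trees))
    bwd = np.zeros((n_cols, n_trees))
    fwd[0] = -np.log(n_trees) + col_loglik[0]            # uniform initial state
    for c in range(1, n_cols):                           # forward pass
        fwd[c] = col_loglik[c] + _lse(fwd[c - 1][:, None] + log_a, axis=0)
    for c in range(n_cols - 2, -1, -1):                  # backward pass
        bwd[c] = _lse(log_a + col_loglik[c + 1] + bwd[c + 1], axis=1)
    post = fwd + bwd
    post -= _lse(post, axis=1)[:, None]                  # normalize per column
    return np.exp(post)

def breakpoints(posteriors):
    """Columns where the most probable topology changes."""
    path = np.argmax(posteriors, axis=1)
    return [c for c in range(1, len(path)) if path[c] != path[c - 1]]

# Toy example: 200 columns, 3 candidate trees, topology switching at column 120.
rng = np.random.default_rng(0)
loglik = rng.normal(-10.0, 1.0, size=(200, 3))
loglik[:120, 0] += 2.0    # tree 0 fits the left part of the alignment better
loglik[120:, 2] += 2.0    # tree 2 fits the right part better
print(breakpoints(topology_posteriors(loglik)))  # typically prints a column at or near 120
```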
[ "12509753", "17194781", "17090662", "18787691", "11018154", "17033967", "11571075", "3934395", "18028540", "17459965", "9183526", "8288526", "17919110", "16438639", "11752707", "7288891", "5149961", "11264399", "4029609", "15034147", "7984417" ]
[ { "pmid": "12509753", "title": "The evolutionary genomics of pathogen recombination.", "abstract": "A pressing problem in studying the evolution of microbial pathogens is to determine the extent to which these genomes recombine. This information is essential for locating pathogenicity loci by using association studies or population genetic approaches. Recombination also complicates the use of phylogenetic approaches to estimate evolutionary parameters such as selection pressures. Reliable methods that detect and estimate the rate of recombination are, therefore, vital. This article reviews the approaches that are available for detecting and estimating recombination in microbial pathogens and how they can be used to understand pathogen evolution and to identify medically relevant loci." }, { "pmid": "17194781", "title": "Phylogenetic mapping of recombination hotspots in human immunodeficiency virus via spatially smoothed change-point processes.", "abstract": "We present a Bayesian framework for inferring spatial preferences of recombination from multiple putative recombinant nucleotide sequences. Phylogenetic recombination detection has been an active area of research for the last 15 years. However, only recently attempts to summarize information from several instances of recombination have been made. We propose a hierarchical model that allows for simultaneous inference of recombination breakpoint locations and spatial variation in recombination frequency. The dual multiple change-point model for phylogenetic recombination detection resides at the lowest level of our hierarchy under the umbrella of a common prior on breakpoint locations. The hierarchical prior allows for information about spatial preferences of recombination to be shared among individual data sets. To overcome the sparseness of breakpoint data, dictated by the modest number of available recombinant sequences, we a priori impose a biologically relevant correlation structure on recombination location log odds via a Gaussian Markov random field hyperprior. To examine the capabilities of our model to recover spatial variation in recombination frequency, we simulate recombination from a predefined distribution of breakpoint locations. We then proceed with the analysis of 42 human immunodeficiency virus (HIV) intersubtype gag recombinants and identify a putative recombination hotspot." }, { "pmid": "17090662", "title": "Evolution of Chlamydia trachomatis diversity occurs by widespread interstrain recombination involving hotspots.", "abstract": "Chlamydia trachomatis is an obligate intracellular bacterium of major public health significance, infecting over one-tenth of the world's population and causing blindness and infertility in millions. Mounting evidence supports recombination as a key source of genetic diversity among free-living bacteria. Previous research shows that intracellular bacteria such as Chlamydiaceae may also undergo recombination but whether this plays a significant evolutionary role has not been determined. Here, we examine multiple loci dispersed throughout the chromosome to determine the extent and significance of recombination among 19 laboratory reference strains and 10 present-day ocular and urogenital clinical isolates using phylogenetic reconstructions, compatibility matrices, and statistically based recombination programs. Recombination is widespread; all clinical isolates are recombinant at multiple loci with no two belonging to the same clonal lineage. 
Several reference strains show nonconcordant phylogenies across loci; one strain is unambiguously identified as recombinantly derived from other reference strain lineages. Frequent recombination contrasts with a low level of point substitution; novel substitutions relative to reference strains occur less than one per kilobase. Hotspots for recombination are identified downstream from ompA, which encodes the major outer membrane protein. This widespread recombination, unexpected for an intracellular bacterium, explains why strain-typing using one or two genes, such as ompA, does not correlate with clinical phenotypes. Our results do not point to specific events that are responsible for different pathogenicities but, instead, suggest a new approach to dissect the genetic basis for clinical strain pathology with implications for evolution, host cell adaptation, and emergence of new chlamydial diseases." }, { "pmid": "18787691", "title": "Identifying the important HIV-1 recombination breakpoints.", "abstract": "Recombinant HIV-1 genomes contribute significantly to the diversity of variants within the HIV/AIDS pandemic. It is assumed that some of these mosaic genomes may have novel properties that have led to their prevalence, particularly in the case of the circulating recombinant forms (CRFs). In regions of the HIV-1 genome where recombination has a tendency to convey a selective advantage to the virus, we predict that the distribution of breakpoints--the identifiable boundaries that delimit the mosaic structure--will deviate from the underlying null distribution. To test this hypothesis, we generate a probabilistic model of HIV-1 copy-choice recombination and compare the predicted breakpoint distribution to the distribution from the HIV/AIDS pandemic. Across much of the HIV-1 genome, we find that the observed frequencies of inter-subtype recombination are predicted accurately by our model. This observation strongly indicates that in these regions a probabilistic model, dependent on local sequence identity, is sufficient to explain breakpoint locations. In regions where there is a significant over- (either side of the env gene) or under- (short regions within gag, pol, and most of env) representation of breakpoints, we infer natural selection to be influencing the recombination pattern. The paucity of recombination breakpoints within most of the envelope gene indicates that recombinants generated in this region are less likely to be successful. The breakpoints at a higher frequency than predicted by our model are approximately at either side of env, indicating increased selection for these recombinants as a consequence of this region, or at least part of it, having a tendency to be recombined as an entire unit. Our findings thus provide the first clear indication of the existence of a specific portion of the genome that deviates from a probabilistic null model for recombination. This suggests that, despite the wide diversity of recombinant forms seen in the viral population, only a minority of recombination events appear to be of significance to the evolution of HIV-1." }, { "pmid": "11018154", "title": "Microsatellite markers reveal a spectrum of population structures in the malaria parasite Plasmodium falciparum.", "abstract": "Multilocus genotyping of microbial pathogens has revealed a range of population structures, with some bacteria showing extensive recombination and others showing almost complete clonality. 
The population structure of the protozoan parasite Plasmodium falciparum has been harder to evaluate, since most studies have used a limited number of antigen-encoding loci that are known to be under strong selection. We describe length variation at 12 microsatellite loci in 465 infections collected from 9 locations worldwide. These data reveal dramatic differences in parasite population structure in different locations. Strong linkage disequilibrium (LD) was observed in six of nine populations. Significant LD occurred in all locations with prevalence <1% and in only two of five of the populations from regions with higher transmission intensities. Where present, LD results largely from the presence of identical multilocus genotypes within populations, suggesting high levels of self-fertilization in populations with low levels of transmission. We also observed dramatic variation in diversity and geographical differentiation in different regions. Mean heterozygosities in South American countries (0.3-0.4) were less than half those observed in African locations (0. 76-0.8), with intermediate heterozygosities in the Southeast Asia/Pacific samples (0.51-0.65). Furthermore, variation was distributed among locations in South America (F:(ST) = 0.364) and within locations in Africa (F:(ST) = 0.007). The intraspecific patterns of diversity and genetic differentiation observed in P. falciparum are strikingly similar to those seen in interspecific comparisons of plants and animals with differing levels of outcrossing, suggesting that similar processes may be involved. The differences observed may also reflect the recent colonization of non-African populations from an African source, and the relative influences of epidemiology and population history are difficult to disentangle. These data reveal a range of population structures within a single pathogen species and suggest intimate links between patterns of epidemiology and genetic structure in this organism." }, { "pmid": "17033967", "title": "Mapping trait loci by use of inferred ancestral recombination graphs.", "abstract": "Large-scale association studies are being undertaken with the hope of uncovering the genetic determinants of complex disease. We describe a computationally efficient method for inferring genealogies from population genotype data and show how these genealogies can be used to fine map disease loci and interpret association signals. These genealogies take the form of the ancestral recombination graph (ARG). The ARG defines a genealogical tree for each locus, and, as one moves along the chromosome, the topologies of consecutive trees shift according to the impact of historical recombination events. There are two stages to our analysis. First, we infer plausible ARGs, using a heuristic algorithm, which can handle unphased and missing data and is fast enough to be applied to large-scale studies. Second, we test the genealogical tree at each locus for a clustering of the disease cases beneath a branch, suggesting that a causative mutation occurred on that branch. Since the true ARG is unknown, we average this analysis over an ensemble of inferred ARGs. We have characterized the performance of our method across a wide range of simulated disease models. Compared with simpler tests, our method gives increased accuracy in positioning untyped causative loci and can also be used to estimate the frequencies of untyped causative alleles. 
We have applied our method to Ueda et al.'s association study of CTLA4 and Graves disease, showing how it can be used to dissect the association signal, giving potentially interesting results of allelic heterogeneity and interaction. Similar approaches analyzing an ensemble of ARGs inferred using our method may be applicable to many other problems of inference from population genotype data." }, { "pmid": "11571075", "title": "Detection of recombination in DNA multiple alignments with hidden Markov models.", "abstract": "Conventional phylogenetic tree estimation methods assume that all sites in a DNA multiple alignment have the same evolutionary history. This assumption is violated in data sets from certain bacteria and viruses due to recombination, a process that leads to the creation of mosaic sequences from different strains and, if undetected, causes systematic errors in phylogenetic tree estimation. In the current work, a hidden Markov model (HMM) is employed to detect recombination events in multiple alignments of DNA sequences. The emission probabilities in a given state are determined by the branching order (topology) and the branch lengths of the respective phylogenetic tree, while the transition probabilities depend on the global recombination probability. The present study improves on an earlier heuristic parameter optimization scheme and shows how the branch lengths and the recombination probability can be optimized in a maximum likelihood sense by applying the expectation maximization (EM) algorithm. The novel algorithm is tested on a synthetic benchmark problem and is found to clearly outperform the earlier heuristic approach. The paper concludes with an application of this scheme to a DNA sequence alignment of the argF gene from four Neisseria strains, where a likely recombination event is clearly detected." }, { "pmid": "3934395", "title": "Dating of the human-ape splitting by a molecular clock of mitochondrial DNA.", "abstract": "A new statistical method for estimating divergence dates of species from DNA sequence data by a molecular clock approach is developed. This method takes into account effectively the information contained in a set of DNA sequence data. The molecular clock of mitochondrial DNA (mtDNA) was calibrated by setting the date of divergence between primates and ungulates at the Cretaceous-Tertiary boundary (65 million years ago), when the extinction of dinosaurs occurred. A generalized least-squares method was applied in fitting a model to mtDNA sequence data, and the clock gave dates of 92.3 +/- 11.7, 13.3 +/- 1.5, 10.9 +/- 1.2, 3.7 +/- 0.6, and 2.7 +/- 0.6 million years ago (where the second of each pair of numbers is the standard deviation) for the separation of mouse, gibbon, orangutan, gorilla, and chimpanzee, respectively, from the line leading to humans. Although there is some uncertainty in the clock, this dating may pose a problem for the widely believed hypothesis that the pipedal creature Australopithecus afarensis, which lived some 3.7 million years ago at Laetoli in Tanzania and at Hadar in Ethiopia, was ancestral to man and evolved after the human-ape splitting. Another likelier possibility is that mtDNA was transferred through hybridization between a proto-human and a proto-chimpanzee after the former had developed bipedalism." 
}, { "pmid": "18028540", "title": "Recodon: coalescent simulation of coding DNA sequences with recombination, migration and demography.", "abstract": "BACKGROUND\nCoalescent simulations have proven very useful in many population genetics studies. In order to arrive to meaningful conclusions, it is important that these simulations resemble the process of molecular evolution as much as possible. To date, no single coalescent program is able to simulate codon sequences sampled from populations with recombination, migration and growth.\n\n\nRESULTS\nWe introduce a new coalescent program, called Recodon, which is able to simulate samples of coding DNA sequences under complex scenarios in which several evolutionary forces can interact simultaneously (namely, recombination, migration and demography). The basic codon model implemented is an extension to the general time-reversible model of nucleotide substitution with a proportion of invariable sites and among-site rate variation. In addition, the program implements non-reversible processes and mixtures of different codon models.\n\n\nCONCLUSION\nRecodon is a flexible tool for the simulation of coding DNA sequences under realistic evolutionary models. These simulations can be used to build parameter distributions for testing evolutionary hypotheses using experimental data. Recodon is written in C, can run in parallel, and is freely available from http://darwin.uvigo.es/." }, { "pmid": "17459965", "title": "TOPD/FMTS: a new software to compare phylogenetic trees.", "abstract": "SUMMARY\nTOPD/FMTS has been developed to evaluate similarities and differences between phylogenetic trees. The software implements several new algorithms (including the Disagree method that returns the taxa, that disagree between two trees and the Nodal method that compares two trees using nodal information) and several previously described methods (such as the Partition method, Triplets or Quartets) to compare phylogenetic trees. One of the novelties of this software is that the FMTS (From Multiple to Single) program allows the comparison of trees that contain both orthologs and paralogs. Each option is also complemented with a randomization analysis to test the null hypothesis that the similarity between two trees is not better than chance expectation.\n\n\nAVAILABILITY\nThe Perl source code of TOPD/FMTS is available at http://genomes.urv.es/topd." }, { "pmid": "9183526", "title": "Seq-Gen: an application for the Monte Carlo simulation of DNA sequence evolution along phylogenetic trees.", "abstract": "MOTIVATION\nSeq-Gen is a program that will simulate the evolution of nucleotide sequences along a phylogeny, using common models of the substitution process. A range of models of molecular evolution are implemented, including the general reversible model. Nucleotide frequencies and other parameters of the model may be given and site-specific rate heterogeneity can also be incorporated in a number of ways. Any number of trees may be read in and the program will produce any number of data sets for each tree. Thus, large sets of replicate simulations can be easily created. This can be used to test phylogenetic hypotheses using the parametric bootstrap.\n\n\nAVAILABILITY\nSeq-Gen can be obtained by WWW from http:/(/)evolve.zoo.ox.ac.uk/Seq-Gen/seq-gen.html++ + or by FTP from ftp:/(/)evolve.zoo.ox.ac.uk/packages/Seq-Gen/. The package includes the source code, manual and example files. An Apple Macintosh version is available from the same sites." 
}, { "pmid": "8288526", "title": "Interspecies recombination between the penA genes of Neisseria meningitidis and commensal Neisseria species during the emergence of penicillin resistance in N. meningitidis: natural events and laboratory simulation.", "abstract": "The penicillin-binding protein 2 genes (penA) of penicillin-resistant Neisseria meningitidis have a mosaic structure that has arisen by the introduction of regions from the penA genes of Neisseria flavescens or Neisseria cinerea. Chromosomal DNA from both N. cinerea and N. flavescens could transform a penicillin-susceptible isolate of N. meningitidis to increased resistance to penicillin. With N. flavescens DNA, transformation to resistance was accompanied by the introduction of the N. flavescens penA gene, providing a laboratory demonstration of the interspecies recombinational events that we believe underlie the development of penicillin resistance in many meningococci in nature. Surprisingly, with N. cinerea DNA, the penicillin-resistant transformants did not obtain the N. cinerea penA gene. However, the region of the penA gene derived from N. cinerea in N. meningitidis K196 contained an extra codon (Asp-345A) which was not found in any of the four N. cinerea isolates that we examined and which is known to result in a decrease in the affinity of PBP 2 in gonococci." }, { "pmid": "17919110", "title": "Near full-length sequence analysis of a Unique CRF01_AE/B recombinant from Kuala Lumpur, Malaysia.", "abstract": "A new HIV-1 circulating recombinant form (CRF), CRF33_01B, has been identified in Malaysia. Concurrently we found a unique recombinant form (URF), that is, the HIV-1 isolate 06MYKLD46, in Kuala Lumpur, Malaysia. It is composed of B or a Thai variant of the B subtype (B') and CRF01_AE. Here, we determined the near full-length genome of the isolate 06MYKLD46 and performed detailed phylogenetic and bootscanning analyses to characterize its mosaic composition and to further confirm the subtype assignments. Although the majority of the 06MYKLD46 genome is CRF01_AE, we found three short fragments of B or B' subtype inserted along the genome. These B or B' subtype regions were 716 and 335 bp, respectively, in the protease-reverse transcriptase (PR-RT) region, similar to those found in CRF33_01B, as well as an extra 590 bp in the env gene region. Thus we suggest that 06MYKLD46 is a possible second-generation HIV-1 recombinant derived from CRF33_01B." }, { "pmid": "16438639", "title": "Identification of two HIV type 1 circulating recombinant forms in Brazil.", "abstract": "Recombination is an important way to generate genetic diversity. Accumulation of HIV-1 full-length genomes in databases demonstrated that recombination is pervasive in viral strains collected globally. Recombinant forms achieving epidemiological relevance are termed circulating recombinant forms (CRFs). CRF12_BF was up to now the only CRF described in South America. The objective was to identify the first CRF in Brazil conducting full genome analysis of samples sharing the same partial genome recombinant structure. Ten samples obtained from individuals residing in Santos, Brazil, sharing the same recombination pattern based on partial genome sequence data, were selected from a larger group to undergo full length genome analysis. Near full length genomes were assembled from overlapping fragments. Mosaic genomes were evaluated by Bootscan, alignment inspection, and phylogenetic analysis using neighbor joining and maximum likelihood. 
Full genomes were also analyzed by split decomposition. We were able to identify five mosaic genomes. Two of these structures were represented by at least three samples derived from epidemiologically unlinked individuals. These structures were named CRF28_BF and CRF29_BF and are the second and third CRFs composed exclusively by subtypes B and F as well as the second and third CRFs encountered in South America. Other recombinant forms studied here resembled CRF28_BF and CRF29_BF. Our results suggest that a diverse population of related recombinants, including CRFs may play an important part in the Brazilian and South American epidemic." }, { "pmid": "11752707", "title": "Diversity of mosaic structures and common ancestry of human immunodeficiency virus type 1 BF intersubtype recombinant viruses from Argentina revealed by analysis of near full-length genome sequences.", "abstract": "The findings that BF intersubtype recombinant human immunodeficiency type 1 viruses (HIV-1) with coincident breakpoints in pol are circulating widely in Argentina and that non-recombinant F subtype viruses have failed to be detected in this country were reported recently. To analyse the mosaic structures of these viruses and to determine their phylogenetic relationship, near full-length proviral genomes of eight of these recombinant viruses were amplified by PCR and sequenced. Intersubtype breakpoints were analysed by bootscanning and examining the signature nucleotides. Phylogenetic relationships were determined with neighbour-joining trees. Five viruses, each with predominantly subtype F genomes, exhibited mosaic structures that were highly similar. Two intersubtype breakpoints were shared by all viruses and seven by the majority. Of the consensus breakpoints, all nine were present in two viruses, which exhibited identical recombinant structures, and four to eight breakpoints were present in the remaining viruses. Phylogenetic analysis of partial sequences supported both a common ancestry, at least in part of their genomes, for all recombinant viruses and the phylogenetic relationship of F subtype segments with F subtype viruses from Brazil. A common ancestry of the recombinants was supported also by the presence of shared signature amino acids and nucleotides, either unreported or highly unusual in F and B subtype viruses. These results indicate that HIV-1 BF recombinant viruses with diverse mosaic structures, including a circulating recombinant form (which are widespread in Argentina) derive from a common recombinant ancestor and that F subtype segments of these recombinants are related phylogenetically to the F subtype viruses from Brazil." }, { "pmid": "7288891", "title": "Evolutionary trees from DNA sequences: a maximum likelihood approach.", "abstract": "The application of maximum likelihood techniques to the estimation of evolutionary trees from nucleic acid sequence data is discussed. A computationally feasible method for finding such maximum likelihood estimates is developed, and a computer program is available. This method has advantages over the traditional parsimony algorithms, which can give misleading results if rates of evolution differ in different lineages. It also allows the testing of hypotheses about the constancy of evolutionary rates by likelihood ratio tests, and gives rough indication of the error of ;the estimate of the tree." 
}, { "pmid": "11264399", "title": "Models of sequence evolution for DNA sequences containing gaps.", "abstract": "Most evolutionary tree estimation methods for DNA sequences ignore or inefficiently use the phylogenetic information contained within shared patterns of gaps. This is largely due to the computational difficulties in implementing models for insertions and deletions. A simple way to incorporate this information is to treat a gap as a fifth character (with the four nucleotides being the other four) and to incorporate it within a Markov model of nucleotide substitution. This idea has been dismissed in the past, since it treats a multiple-site insertion or deletion as a sequence of independent events rather than a single event. While this is true, we have found that under many circumstances it is better to incorporate gap information inadequately than to ignore it, at least for topology estimation. We propose an extension to a class of nucleotide substitution models to incorporate the gap character and show that, for data sets (both real and simulated) with short and medium gaps, these models do lead to effective use of the information contained within insertions and deletions. We also implement an ad hoc method in which the likelihood at columns containing multiple-site gaps is downweighted in order to avoid giving them undue influence. The precision of the estimated tree, assessed using Markov chain Monte Carlo techniques to find the posterior distribution over tree space, improves under these five-state models compared with standard methods which effectively ignore gaps." }, { "pmid": "4029609", "title": "Statistical properties of the number of recombination events in the history of a sample of DNA sequences.", "abstract": "Some statistical properties of samples of DNA sequences are studied under an infinite-site neutral model with recombination. The two quantities of interest are R, the number of recombination events in the history of a sample of sequences, and RM, the number of recombination events that can be parsimoniously inferred from a sample of sequences. Formulas are derived for the mean and variance of R. In contrast to R, RM can be determined from the sample. Since no formulas are known for the mean and variance of RM, they are estimated with Monte Carlo simulations. It is found that RM is often much less than R, therefore, the number of recombination events may be greatly under-estimated in a parsimonious reconstruction of the history of a sample. The statistic RM can be used to estimate the product of the recombination rate and the population size or, if the recombination rate is known, to estimate the population size. To illustrate this, DNA sequences from the Adh region of Drosophila melanogaster are used to estimate the effective population size of this species." }, { "pmid": "15034147", "title": "MUSCLE: multiple sequence alignment with high accuracy and high throughput.", "abstract": "We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. 
Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5. com/muscle." }, { "pmid": "7984417", "title": "CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice.", "abstract": "The sensitivity of the commonly used progressive multiple sequence alignment method has been greatly improved for the alignment of divergent protein sequences. Firstly, individual weights are assigned to each sequence in a partial alignment in order to down-weight near-duplicate sequences and up-weight the most divergent ones. Secondly, amino acid substitution matrices are varied at different alignment stages according to the divergence of the sequences to be aligned. Thirdly, residue-specific gap penalties and locally reduced gap penalties in hydrophilic regions encourage new gaps in potential loop regions rather than regular secondary structure. Fourthly, positions in early alignments where gaps have been opened receive locally reduced gap penalties to encourage the opening up of new gaps at these positions. These modifications are incorporated into a new program, CLUSTAL W which is freely available." } ]
BMC Medical Informatics and Decision Making
19208256
PMC2657779
10.1186/1472-6947-9-10
Sentence retrieval for abstracts of randomized controlled trials
BackgroundThe practice of evidence-based medicine (EBM) requires clinicians to integrate their expertise with the latest scientific research, but this is becoming increasingly difficult with the growing number of published articles. There is a clear need for better tools to improve clinicians' ability to search the primary literature. Randomized clinical trials (RCTs) are the most reliable source of evidence documenting the efficacy of treatment options. This paper describes the retrieval of key sentences from abstracts of RCTs as a step towards helping users find relevant facts about the experimental design of clinical studies.MethodUsing Conditional Random Fields (CRFs), a popular and successful method for natural language processing problems, we automatically categorize sentences referring to Intervention, Participants, and Outcome Measures. This is done by extending a previous approach for labeling sentences in an abstract with general categories associated with scientific argumentation or rhetorical roles: Aim, Method, Results, and Conclusion. The methods are tested on several corpora of RCT abstracts. First, structured abstracts with headings specifically indicating Intervention, Participants, and Outcome Measures are used. A manually annotated corpus of structured and unstructured abstracts is also prepared for testing a classifier that identifies sentences belonging to each category.ResultsUsing CRFs, sentences can be labeled with the four rhetorical roles with F-scores of 0.93–0.98, outperforming Support Vector Machines. Furthermore, sentences can be automatically labeled for Intervention, Participants, and Outcome Measures in structured and unstructured abstracts where the section headings do not specifically indicate these three topics; F-scores of up to 0.83 and 0.84 are obtained for Intervention and Outcome Measure sentences.ConclusionResults indicate that some of the methodological elements of RCTs are identifiable at the sentence level in both structured and unstructured abstract reports. This is promising in that automatically labeled sentences could potentially form concise summaries, assist information retrieval, and support finer-grained extraction.
Related Work

According to rhetorical structure theory [18], clauses in text relate to one another via relations such as Background, Elaboration, and Contrast. These rhetorical relations, when identified, could be useful for information extraction, question answering, information retrieval, and summarization. In NLP, researchers have attempted to recognize rhetorical relations using manually crafted and statistical techniques [19,20].

It has been claimed [21-23] that abstracts across scientific disciplines, including the biomedical domain, follow consistent rhetorical roles or "argumentative moves" (e.g., Problem, Solution, Evaluation, Conclusion). Teufel and Moens [24] have proposed a strategy for summarization by classifying sentences from scientific texts into seven rhetorical categories. Extracted sentences could be concatenated into automated, user-tailored summaries.

Since then, several others have proposed to label sections of MEDLINE abstracts with four or five generic categories (Background, Aim, Method, Results, and Conclusion), assigning structure to unstructured abstracts. Ruch et al. [25] used Naive Bayes to label sentences with the four main argumentative moves, with the goal of finding an appropriate Conclusion sentence, which appears to be the most informative [26] and is therefore the best candidate for enhancing search results. Other researchers have used Support Vector Machines (SVMs) [27-29], as well as Hidden Markov Models (HMMs) [30,31], which more effectively model the sequential ordering of sentences. Conditional random fields have been employed to recognize the four main rhetorical roles in our previous work [32] and also by Hirohata et al. [33].

Beyond this generic discourse-level information, researchers have also investigated the extraction of key facts pertinent to clinical trials. In accordance with the PICO Framework [34], Patient, Intervention, Comparison, and Outcome are the four dimensions along which clinical questions can be reformulated. Demner-Fushman [35] implemented an extractor for outcome sentences using an ensemble of classifiers, and Xu et al. [36] reported the extraction of patient demographic information using a parser and HMMs.

In contrast to previous work, this paper explores the potential for identifying key sentences that are specific to RCT reports. In a study of medical journal abstracts, Dawes et al. [37] report that elements such as Patient-Population-Problem, Exposure-Intervention, Comparison, Outcome, and Results were found over 85% of the time. We investigate here whether sentence categorization is sufficient for recognizing this information in both structured and unstructured abstracts. We specifically address sentences describing Intervention, Participants, and Outcome Measures.
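The CRF-based sentence labeling surveyed above can be prototyped with off-the-shelf tooling. The sketch below is only an illustration of the general technique, not the authors' pipeline: it assumes the third-party sklearn-crfsuite package, uses a toy feature set (lowercased tokens plus relative sentence position), and trains on a hypothetical miniature corpus, whereas the paper's system relies on much richer lexical and structural features.

```python
# Minimal sketch of sentence-level rhetorical-role labeling with a linear-chain CRF.
# Assumes the third-party package sklearn-crfsuite (pip install sklearn-crfsuite).
# Feature set, labels, and training data are illustrative only.
import sklearn_crfsuite

def sentence_features(sentences, idx):
    """Toy features for one sentence: token presence plus relative position in the abstract."""
    tokens = sentences[idx].lower().split()
    feats = {f"tok={t}": 1.0 for t in tokens}
    feats["rel_pos"] = idx / max(len(sentences) - 1, 1)  # 0.0 = first sentence, 1.0 = last
    return feats

def abstract_to_features(sentences):
    return [sentence_features(sentences, i) for i in range(len(sentences))]

# Hypothetical training data: each abstract is a sequence of sentences with one label per sentence.
train_abstracts = [
    ["We compared drug A with placebo.", "120 adults were randomised.", "Pain scores fell by 30%."],
]
train_labels = [["INTERVENTION", "PARTICIPANTS", "OUTCOME"]]

X_train = [abstract_to_features(a) for a in train_abstracts]
y_train = train_labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

# Label a new abstract sentence by sentence; the CRF models the ordering of labels.
test_abstract = ["Participants were 60 patients with asthma.", "The outcome measure was FEV1."]
print(crf.predict([abstract_to_features(test_abstract)]))
```

The sequential model is the point of the design: the probability of a label depends on the neighboring labels, which captures the strong ordering regularities of abstract sentences that independent classifiers such as SVMs ignore.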
[ "8411577", "16239941", "4037559", "11909789", "14702450", "11059477", "11308435", "16815739", "14728211", "7582737", "16221937", "17612476", "18433469" ]
[ { "pmid": "16239941", "title": "Bibliometric analysis of the literature of randomized controlled trials.", "abstract": "OBJECTIVE\nEvidence-based medicine (EBM) is a significant issue and the randomized controlled trial (RCT) literature plays a fundamental role in developing EBM. This study investigates the features of RCT literature based on bibliometric methods. Growth of the literature, publication types, languages, publication countries, and research subjects are addressed. The distribution of journal articles was also examined utilizing Bradford's law and Bradford-Zipf's law.\n\n\nMETHOD\nThe MEDLINE database was searched for articles indexed under the publication type \"Randomized Control Trial,\" and articles retrieved were counted and analyzed using Microsoft Access, Microsoft Excel, and PERL.\n\n\nRESULTS\nFrom 1990 to 2001, a total of 114,850 citations dealing with RCTs were retrieved. The literature growth rate, from 1965 to 2001, is steadily rising and follows an exponential model. Journal articles are the predominant form of publication, and the multicenter study is extensively used. English is the most commonly used language.\n\n\nCONCLUSIONS\nGenerally, RCTs are found in publications concentrating on cardiovascular disease, cancer, asthma, postoperative condition, health, and anesthetics. Zone analysis and graphical formulation from Bradford's law of scattering shows variations from the standard Bradford model. Forty-two core journals were identified using Bradford's law." }, { "pmid": "4037559", "title": "Information needs in office practice: are they being met?", "abstract": "We studied the self-reported information needs of 47 physicians during a half day of typical office practice. The physicians raised 269 questions about patient management. Questions related to all medical specialties and were highly specific to the individual patient's problem. Subspecialists most frequently asked questions related to other subspecialties. Only 30% of physicians' information needs were met during the patient visit, usually by another physician or other health professional. Reasons print sources were not used included the age of textbooks in the office, poor organization of journal articles, inadequate indexing of books and drug information sources, lack of knowledge of an appropriate source, and the time required to find the desired information. Better methods are needed to provide answers to questions that arise in office practice." }, { "pmid": "11909789", "title": "Obstacles to answering doctors' questions about patient care with evidence: qualitative study.", "abstract": "OBJECTIVE\nTo describe the obstacles encountered when attempting to answer doctors' questions with evidence.\n\n\nDESIGN\nQualitative study.\n\n\nSETTING\nGeneral practices in Iowa.\n\n\nPARTICIPANTS\n9 academic generalist doctors, 14 family doctors, and 2 medical librarians.\n\n\nMAIN OUTCOME MEASURE\nA taxonomy of obstacles encountered while searching for evidence based answers to doctors' questions.\n\n\nRESULTS\n59 obstacles were encountered and organised according to the five steps in asking and answering questions: recognise a gap in knowledge, formulate a question, search for relevant information, formulate an answer, and use the answer to direct patient care. 
Six obstacles were considered particularly salient by the investigators and practising doctors: the excessive time required to find information; difficulty modifying the original question, which was often vague and open to interpretation; difficulty selecting an optimal strategy to search for information; failure of a seemingly appropriate resource to cover the topic; uncertainty about how to know when all the relevant evidence has been found so that the search can stop; and inadequate synthesis of multiple bits of evidence into a clinically useful statement.\n\n\nCONCLUSIONS\nMany obstacles are encountered when asking and answering questions about how to care for patients. Addressing these obstacles could lead to better patient care by improving clinically oriented information resources." }, { "pmid": "14702450", "title": "An evaluation of information-seeking behaviors of general pediatricians.", "abstract": "OBJECTIVE\nUsage of computer resources at the point of care has a positive effect on physician decision making. Pediatricians' information-seeking behaviors are not well characterized. The goal of this study was to characterize quantitatively the information-seeking behaviors of general pediatricians and specifically compare their use of computers, including digital libraries, before and after an educational intervention.\n\n\nMETHODS\nGeneral pediatric residents and faculty at a US Midwest children's hospital participated. A control (year 1) versus intervention group (year 2) research design was implemented. Eligible pediatrician pools overlapped, such that some participated first in the control group and later as part of the intervention. The intervention group received a 10-minute individual training session and handout on how to use a pediatric digital library to answer professional questions. A general medical digital library was also available. Pediatricians in both the control and the intervention groups were surveyed using the critical incident technique during 2 6-month time periods. Both groups were telephoned for 1- to 2-minute interviews and were asked, \"What pediatric question(s) did you have that you needed additional information to answer?\" The main outcome measures were the differences between the proportion of pediatricians who use computers and digital libraries and a comparison of the number of times that pediatricians use these resources before and after intervention.\n\n\nRESULTS\nA total of 58 pediatricians were eligible, and 52 participated (89.6%). Participant demographics between control (N = 41; 89.1%) and intervention (N = 31; 70.4%) were not statistically different. Twenty pediatricians were in both groups. Pediatricians were slightly less likely to pursue answers after the intervention (94.7% vs 89.2%); the primary reason cited for both groups was a lack of time. The pediatricians were as successful in finding answers in each group (95.7% vs 92.7%), but the intervention group took significantly less time (8.3 minutes vs 19.6 minutes). After the intervention, pediatricians used computers and digital libraries more to answer their questions and spent less time using them.\n\n\nCONCLUSION\nThis study showed higher rates of physician questions pursued and answered and higher rates of computer use at baseline and after intervention compared with previous studies. Pediatricians who seek answers at the point of care therefore should begin to shift their information-seeking behaviors toward computer resources, as they are as effective but more time-efficient." 
}, { "pmid": "11059477", "title": "Electronic trial banks: a complementary method for reporting randomized trials.", "abstract": "BACKGROUND\nRandomized clinical trial (RCT) results are often difficult to find, interpret, or apply to clinical care. The authors propose that RCTs be reported into electronic knowledge bases-trial banks-in addition to being reported in text. What information should these trial-bank reports contain?\n\n\nMETHODS\nUsing the competency decomposition method, the authors specified the ideal trial-bank contents as the information necessary and sufficient for completing the task of systematic reviewing.\n\n\nRESULTS\nThey decomposed the systematic reviewing task into four top-level tasks and 62 subtasks. 162 types of trial information were necessary and sufficient for completing these subtasks. These items relate to a trial's design, execution, administration, and results.\n\n\nCONCLUSION\nTrial-bank publishing of these 162 items would capture into computer-understandable form all the trial information needed for critically appraising and synthesizing trial results. Decision-support systems that access shared, up-to-date trial banks could help clinicians manage, synthesize, and apply RCT evidence more effectively." }, { "pmid": "11308435", "title": "The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials.", "abstract": "To comprehend the results of a randomized controlled trial (RCT), readers must understand its design, conduct, analysis, and interpretation. That goal can be achieved only through complete transparency from authors. Despite several decades of educational efforts, the reporting of RCTs needs improvement. Investigators and editors developed the original CONSORT (Consolidated Standards of Reporting Trials) statement to help authors improve reporting by using a checklist and flow diagram. The revised CONSORT statement presented in this article incorporates new evidence and addresses some criticisms of the original statement. The checklist items pertain to the content of the Title, Abstract, Introduction, Methods, Results, and Comment. The revised checklist includes 22 items selected because empirical evidence indicates that not reporting the information is associated with biased estimates of treatment effect or because the information is essential to judge the reliability or relevance of the findings. We intended the flow diagram to depict the passage of participants through an RCT. The revised flow diagram depicts information from 4 stages of a trial (enrollment, intervention allocation, follow-up, and analysis). The diagram explicitly includes the number of participants, according to each intervention group, included in the primary data analysis. Inclusion of these numbers allows the reader to judge whether the authors have performed an intention-to-treat analysis. In sum, the CONSORT statement is intended to improve the reporting of an RCT, enabling readers to understand a trial's conduct and to assess the validity of its results." }, { "pmid": "16815739", "title": "Using argumentation to extract key sentences from biomedical abstracts.", "abstract": "PROBLEM\nkey word assignment has been largely used in MEDLINE to provide an indicative \"gist\" of the content of articles and to help retrieving biomedical articles. Abstracts are also used for this purpose. 
However with usually more than 300 words, MEDLINE abstracts can still be regarded as long documents; therefore we design a system to select a unique key sentence. This key sentence must be indicative of the article's content and we assume that abstract's conclusions are good candidates. We design and assess the performance of an automatic key sentence selector, which classifies sentences into four argumentative moves: PURPOSE, METHODS, RESULTS and\n\n\nCONCLUSION\n\n\n\nMETHODS\nwe rely on Bayesian classifiers trained on automatically acquired data. Features representation, selection and weighting are reported and classification effectiveness is evaluated on the four classes using confusion matrices. We also explore the use of simple heuristics to take the position of sentences into account. Recall, precision and F-scores are computed for the CONCLUSION class. For the CONCLUSION class, the F-score reaches 84%. Automatic argumentative classification using Bayesian learners is feasible on MEDLINE abstracts and should help user navigation in such repositories." }, { "pmid": "14728211", "title": "Categorization of sentence types in medical abstracts.", "abstract": "This study evaluated the use of machine learning techniques in the classification of sentence type. 7253 structured abstracts and 204 unstructured abstracts of Randomized Controlled Trials from MedLINE were parsed into sentences and each sentence was labeled as one of four types (Introduction, Method, Result, or Conclusion). Support Vector Machine (SVM) and Linear Classifier models were generated and evaluated on cross-validated data. Treating sentences as a simple \"bag of words\", the SVM model had an average ROC area of 0.92. Adding a feature of relative sentence location improved performance markedly for some models and overall increasing the average ROC to 0.95. Linear classifier performance was significantly worse than the SVM in all datasets. Using the SVM model trained on structured abstracts to predict unstructured abstracts yielded performance similar to that of models trained with unstructured abstracts in 3 of the 4 types. We conclude that classification of sentence type seems feasible within the domain of RCT's. Identification of sentence types may be helpful for providing context to end users or other text summarization techniques." }, { "pmid": "16221937", "title": "Automatically identifying health outcome information in MEDLINE records.", "abstract": "OBJECTIVE\nUnderstanding the effect of a given intervention on the patient's health outcome is one of the key elements in providing optimal patient care. This study presents a methodology for automatic identification of outcomes-related information in medical text and evaluates its potential in satisfying clinical information needs related to health care outcomes.\n\n\nDESIGN\nAn annotation scheme based on an evidence-based medicine model for critical appraisal of evidence was developed and used to annotate 633 MEDLINE citations. Textual, structural, and meta-information features essential to outcome identification were learned from the created collection and used to develop an automatic system. Accuracy of automatic outcome identification was assessed in an intrinsic evaluation and in an extrinsic evaluation, in which ranking of MEDLINE search results obtained using PubMed Clinical Queries relied on identified outcome statements.\n\n\nMEASUREMENTS\nThe accuracy and positive predictive value of outcome identification were calculated. 
Effectiveness of the outcome-based ranking was measured using mean average precision and precision at rank 10.\n\n\nRESULTS\nAutomatic outcome identification achieved 88% to 93% accuracy. The positive predictive value of individual sentences identified as outcomes ranged from 30% to 37%. Outcome-based ranking improved retrieval accuracy, tripling mean average precision and achieving 389% improvement in precision at rank 10.\n\n\nCONCLUSION\nPreliminary results in outcome-based document ranking show potential validity of the evidence-based medicine-model approach in timely delivery of information critical to clinical decision support at the point of service." }, { "pmid": "17612476", "title": "The identification of clinically important elements within medical journal abstracts: Patient-Population-Problem, Exposure-Intervention, Comparison, Outcome, Duration and Results (PECODR).", "abstract": "BACKGROUND\nInformation retrieval in primary care is becoming more difficult as the volume of medical information held in electronic databases expands. The lexical structure of this information might permit automatic indexing and improved retrieval.\n\n\nOBJECTIVE\nTo determine the possibility of identifying the key elements of clinical studies, namely Patient-Population-Problem, Exposure-Intervention, Comparison, Outcome, Duration and Results (PECODR), from abstracts of medical journals.\n\n\nMETHODS\nWe used a convenience sample of 20 synopses from the journal Evidence-Based Medicine (EBM) and their matching original journal article abstracts obtained from PubMed. Three independent primary care professionals identified PECODR-related extracts of text. Rules were developed to define each PECODR element and the selection process of characters, words, phrases and sentences. From the extracts of text related to PECODR elements, potential lexical patterns that might help identify those elements were proposed and assessed using NVivo software.\n\n\nRESULTS\nA total of 835 PECODR-related text extracts containing 41,263 individual text characters were identified from 20 EBM journal synopses. There were 759 extracts in the corresponding PubMed abstracts containing 31,947 characters. PECODR elements were found in nearly all abstracts and synopses with the exception of duration. There was agreement on 86.6% of the extracts from the 20 EBM synopses and 85.0% on the corresponding PubMed abstracts. After consensus this rose to 98.4% and 96.9% respectively. We found potential text patterns in the Comparison, Outcome and Results elements of both EBM synopses and PubMed abstracts. Some phrases and words are used frequently and are specific for these elements in both synopses and abstracts.\n\n\nCONCLUSIONS\nResults suggest a PECODR-related structure exists in medical abstracts and that there might be lexical patterns specific to these elements. More sophisticated computer-assisted lexical-semantic analysis might refine these results, and pave the way to automating PECODR indexing, and improve information retrieval in primary care." }, { "pmid": "18433469", "title": "Extraction of semantic biomedical relations from text using conditional random fields.", "abstract": "BACKGROUND\nThe increasing amount of published literature in biomedicine represents an immense source of knowledge, which can only efficiently be accessed by a new generation of automated information extraction tools. 
Named entity recognition of well-defined objects, such as genes or proteins, has achieved a sufficient level of maturity such that it can form the basis for the next step: the extraction of relations that exist between the recognized entities. Whereas most early work focused on the mere detection of relations, the classification of the type of relation is also of great importance and this is the focus of this work. In this paper we describe an approach that extracts both the existence of a relation and its type. Our work is based on Conditional Random Fields, which have been applied with much success to the task of named entity recognition.\n\n\nRESULTS\nWe benchmark our approach on two different tasks. The first task is the identification of semantic relations between diseases and treatments. The available data set consists of manually annotated PubMed abstracts. The second task is the identification of relations between genes and diseases from a set of concise phrases, so-called GeneRIF (Gene Reference Into Function) phrases. In our experimental setting, we do not assume that the entities are given, as is often the case in previous relation extraction work. Rather the extraction of the entities is solved as a subproblem. Compared with other state-of-the-art approaches, we achieve very competitive results on both data sets. To demonstrate the scalability of our solution, we apply our approach to the complete human GeneRIF database. The resulting gene-disease network contains 34758 semantic associations between 4939 genes and 1745 diseases. The gene-disease network is publicly available as a machine-readable RDF graph.\n\n\nCONCLUSION\nWe extend the framework of Conditional Random Fields towards the annotation of semantic relations from text and apply it to the biomedical domain. Our approach is based on a rich set of textual features and achieves a performance that is competitive to leading approaches. The model is quite general and can be extended to handle arbitrary biological entities and relation types. The resulting gene-disease network shows that the GeneRIF database provides a rich knowledge source for text mining. Current work is focused on improving the accuracy of detection of entities as well as entity boundaries, which will also greatly improve the relation extraction performance." } ]
International Journal of Telemedicine and Applications
19325918
PMC2659605
10.1155/2009/101382
Agent-Oriented Privacy-Based Information Brokering Architecture for Healthcare Environments
Healthcare industry is facing a major reform at all levels—locally, regionally, nationally, and internationally. Healthcare services and systems become very complex and comprise of a vast number of components (software systems, doctors, patients, etc.) that are characterized by shared, distributed and heterogeneous information sources with varieties of clinical and other settings. The challenge now faced with decision making, and management of care is to operate effectively in order to meet the information needs of healthcare personnel. Currently, researchers, developers, and systems engineers are working toward achieving better efficiency and quality of service in various sectors of healthcare, such as hospital management, patient care, and treatment. This paper presents a novel information brokering architecture that supports privacy-based information gathering in healthcare. Architecturally, the brokering is viewed as a layer of services where a brokering service is modeled as an agent with a specific architecture and interaction protocol that are appropriate to serve various requests. Within the context of brokering, we model privacy in terms of the entities ability to hide or reveal information related to its identities, requests, and/or capabilities. A prototype of the proposed architecture has been implemented to support information-gathering capabilities in healthcare environments using FIPA-complaint platform JADE.
2. Related Work

Privacy concerns are key barriers to the growth of health-based systems. Legislation to protect personal medical information has been proposed and put into effect to help build mutual confidence between the various participants in the healthcare domain.

Privacy-based brokering protocols have been proposed in many application domains, such as E-auctions [2], data mining [3], and E-commerce. Different techniques have been used to enable collaboration among heterogeneous cooperative agents in distributed systems, including brokering via middle agents. These middle agents differ in the role they play within the agent community [4–6]. The work in [7] proposed an agent-based mediation approach in which privacy is treated as a basis for classifying the various mediation architectures, but only for the initial state of the system. In another approach, agents' capabilities and preferences are assumed to be common knowledge, which might violate the privacy requirements of the involved participants [8]. Other approaches, such as [9–11], have proposed frameworks to facilitate coordination between web services by providing semantic-based discovery and mediation services that rely on semantic description languages such as OWL-S [12] and RDF [13]. Another recent approach describes a resource brokering architecture that manages and schedules different tasks on various distributed resources in a large-scale grid [14]. However, none of the above-mentioned approaches treats privacy as an architectural element that facilitates the integration of the various distributed systems of an enterprise.

Several approaches have been proposed for the integration of distributed information sources in healthcare [15]. In one approach [16], the focus was on providing management assistance to different teams across several hospitals by coordinating their access to distributed information. The brokering architecture is centralized around a mediator agent, which allocates the appropriate medical team to an available operating theatre in which the transplant operation may be performed. Other approaches attempt to provide agent-based medical appointment scheduling [17, 18]; in these approaches, the architecture provides matchmaking mechanisms for selecting appropriate recipient candidates whenever organs become available, through a matchmaking agent that accesses a domain-specific ontology.

Other approaches have proposed the use of privacy policies along with physical access means (such as smartcards), in which access to private information is granted through the presence of another trusted authority that mediates between information requesters and information providers [19, 20]. A European IST project [21], TelemediaCare, Lincoln, UK, developed an agent-based framework to support patient-focused distant care and assistance; the architecture comprises two different types of agents, namely stationary ("static") and mobile agents. Web service-based tools were developed to enable patients to remotely schedule appointments and doctor visits and to access medical data [22].

Different approaches have been suggested to protect location privacy in open distributed systems [23]. Location privacy is a particular type of information privacy that can be defined as "the ability to prevent other parties from learning one's current or past location". These approaches range from anonymity and pseudonymity to cryptographic techniques. Some approaches focus on anonymity, unlinking users' personal information from their identity. One available tool is called anonymizer [24]. The service protects the Internet protocol (IP) address or the identity of a user who views web pages or submits information (including personal preferences) to a remote site. The solution uses anonymous proxies (gateways to the Internet) to route the user's Internet traffic through the tool. However, this technique requires a trusted third party, because the anonymizer servers (or the user's Internet service provider, ISP) can certainly identify the user. Other tools try not to rely on a trusted third party to achieve complete anonymity of the user's identity on the Internet, such as Crowds [25], Onion routing [26], and MIX networks [27].

Various programs and initiatives have proposed guidelines for the secure collection, transmission, and storage of patients' data. These include the Initiative for Privacy Standardization in Europe (IPSE) and the Health Insurance Portability and Accountability Act (HIPAA) [28, 29]. Yet these guidelines require the adoption of new technology for healthcare requester/provider interaction.
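As a language-agnostic illustration of the brokering-layer privacy idea discussed above (hiding a requester's identity behind the broker), the sketch below shows a broker that forwards requests under per-request pseudonyms. It is a schematic toy, not the paper's JADE/FIPA agent implementation; all class and identifier names are hypothetical.

```python
# Schematic sketch of identity-hiding brokering: the broker replaces the requester's
# identity with a pseudonym before forwarding a request, so the provider never learns
# who asked. Illustrative only; the paper's system is an agent platform (JADE/FIPA).
import secrets

class PrivacyBroker:
    def __init__(self):
        self._pseudonym_to_requester = {}  # broker-private mapping, never shared with providers

    def forward_request(self, requester_id, request, provider):
        pseudonym = secrets.token_hex(8)                  # fresh, unlinkable per-request alias
        self._pseudonym_to_requester[pseudonym] = requester_id
        return provider.handle(pseudonym, request)        # provider sees only the alias

    def route_reply(self, pseudonym, reply):
        requester_id = self._pseudonym_to_requester.pop(pseudonym)
        return requester_id, reply                        # broker relays the reply to the real requester

class LabProvider:
    def handle(self, pseudonym, request):
        # The provider answers the query without learning the requester's identity.
        return pseudonym, f"results for query: {request}"

broker = PrivacyBroker()
pseudonym, answer = broker.forward_request("dr.smith@clinic", "HbA1c history, patient 42", LabProvider())
print(broker.route_reply(pseudonym, answer))
```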
[ "9821519" ]
[ { "pmid": "9821519", "title": "Web-based health care agents; the case of reminders and todos, too (R2Do2).", "abstract": "This paper describes efforts to develop and field an agent-based, healthcare middleware framework that securely connects practice rule sets to patient records to anticipate health todo items and to remind and alert users about these items over the web. Reminders and todos, too (R2Do2) is an example of merging data- and document-centric architectures, and of integrating agents into patient-provider collaboration environments. A test of this capability verifies that R2Do2 is progressing toward its two goals: (1) an open standards framework for middleware in the healthcare field; and (2) an implementation of the 'principle of optimality' to derive the best possible health plans for each user. This paper concludes with lessons learned to date." } ]
International Journal of Biomedical Imaging
19343184
PMC2662330
10.1155/2009/506120
A Based Bayesian Wavelet Thresholding Method to Enhance Nuclear Imaging
Nuclear images are very often used to study the functionality of some organs. Unfortunately, these images have bad contrast, a weak resolution, and present fluctuations due to the radioactivity disintegration. To enhance their quality, physicians have to increase the quantity of the injected radioactive material and the acquisition time. In this paper, we propose an alternative solution. It consists in a software framework that enhances nuclear image quality and reduces statistical fluctuations. Since these images are modeled as the realization of a Poisson process, we propose a new framework that performs variance stabilizing of the Poisson process before applying an adapted Bayesian wavelet shrinkage. The proposed method has been applied on real images, and it has proved its performance.
3. Related Works

The first attempts to enhance nuclear images started with the setup of the first gamma cameras. Despite the notable improvement of gamma cameras since then [3], many researchers still focus on developing solutions to remove noise from scintigraphic images. Some works consider the general framework of restoration, while others focus on the noise-removal task.

Denoising, or nonparametric regression in statistical mathematics, is nowadays a powerful tool in signal and image processing. Its main goal is to recover a component corrupted by noise without using any parametric model. Initially, linear and nonlinear filters were used, but their immediate consequence is contrast degradation and the smoothing of details [3]. To overcome this limitation, several nonstationary filters have been proposed [4], but they are not used in daily practice, probably because of the artificial appearance of the processed images: their texture is relatively different from that of the original images [4].

Denoising using wavelets, in contrast, has proved able to satisfy the compromise between smoothing and preserving important features. The observed data are modeled as a signal embedded in noise. When the noise is additive and Gaussian, the denoising problem becomes one of determining the optimal wavelet basis that concentrates the signal energy in a few coefficients and of thresholding the noisy ones.

However, in several experimental domains, especially those based on techniques where detection involves a counting process, the data are modeled as a Poisson process (which is the case for scintigraphic images). In this context, several techniques have been considered to recover the underlying intensity structure. Unlike Gaussian noise (which is signal-independent), Poisson noise depends on the image intensities (Figure 1 simulates the difference between Gaussian and Poisson noise). Consequently, standard wavelet shrinkage is not directly suitable in this context.

A straightforward way to deal with this problem is to introduce a preprocessing normalizing step such as the Anscombe [5] or the Fisz transform [6]. The noisy image is thereby transformed into an image contaminated with approximately Gaussian noise of constant variance. This variance-stabilizing operation allows the underlying intensity function to be estimated by applying one of the many denoising procedures already designed for Gaussian noise.

In this context, several proposed Bayesian estimators have been more efficient than classical ones. In the Bayesian paradigm, a prior distribution is placed on the wavelet detail coefficients, and the estimated image is obtained by applying the appropriate Bayesian rule to these coefficients. Among the existing Bayesian approaches, we can distinguish univariate and multivariate density estimation, both achieving interesting results in practice [7, 8]. In addition, referring to the comparisons of different approaches provided by Kirkove [9] and by Besbeas [10], we can conclude that Bayesian estimators perform well [7].

Another approach deals directly with the simple Haar transform, since it is the most suitable basis for Poisson-like models [11]. This method was introduced by Kolaczyk [12], Charles and Rasson [13], and Willett and Nowak [14]. It is based on shrinkage of the Haar wavelet coefficients of the original counts (without any preprocessing) using scale-dependent thresholds.
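The stabilize-then-shrink pipeline described above can be sketched in a few lines. The code below is a simplified illustration rather than the paper's method: it applies the Anscombe transform, soft-thresholds the Haar detail coefficients with a fixed threshold using the PyWavelets package, and inverts the transform with the naive algebraic inverse, whereas the paper replaces the fixed threshold with an adapted Bayesian shrinkage rule.

```python
# Simplified Poisson-denoising sketch: Anscombe variance stabilization, Haar wavelet
# soft thresholding (PyWavelets), then the naive inverse transform. Illustrative only;
# the paper applies an adapted Bayesian shrinkage rule instead of a fixed threshold.
import numpy as np
import pywt

def denoise_poisson(image, wavelet="haar", level=2, threshold=1.0):
    # 1) Anscombe transform: Poisson counts -> approximately unit-variance Gaussian data.
    stabilized = 2.0 * np.sqrt(image + 3.0 / 8.0)

    # 2) Wavelet decomposition and soft thresholding of the detail coefficients only.
    coeffs = pywt.wavedec2(stabilized, wavelet, level=level)
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(d, threshold, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    denoised = pywt.waverec2(shrunk, wavelet)

    # 3) Naive algebraic inverse of the Anscombe transform (ignores inversion bias).
    return (denoised / 2.0) ** 2 - 3.0 / 8.0

# Toy example: a noisy Poisson realization of a smooth intensity map.
rng = np.random.default_rng(0)
intensity = np.outer(np.linspace(5, 50, 64), np.linspace(1, 2, 64))
noisy = rng.poisson(intensity).astype(float)
print(denoise_poisson(noisy).shape)
```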
[ "10336202", "684440", "12539975", "18262991", "16519352" ]
[ { "pmid": "10336202", "title": "Counting statistics.", "abstract": "The low radiation dose rates used in nuclear medicine necessitate image formation and measurements that are severely count limited. This limitation may mask our ability to perceive contrast in an image or may affect our confidence in quantitative functional measurements. The randomness of the signal can be described by using the Poisson probability distribution with its associated mean and variance. The validity of a measurement and uncertainties in a result can be determined by examining the count statistics. If multiple measurements are used to derive a result, confidence levels can be determined by examination of the propagation of errors. The statistical properties of the detected signal can also be evaluated to determine if the equipment is functioning properly. For example, the chi2 test can be used to determine if there is too much or too little variability in count samples. Finally, image formation with limited numbers of photons results in noisy images that may be difficult to interpret. An understanding of the trade-offs between contrast, noise, and object size is required to set proper image acquisition parameters and thereby ensure that the information required to make a diagnosis is contained in the final image." }, { "pmid": "684440", "title": "Improvement of scintigrams by computer processing.", "abstract": "Computer processing can improve the quality of scintigrams in several ways. It can increase the accuracy with which the image approximates the activity distribution by reversing degradation. It can selectively enhance normal or abnormal structures of interest. It can optimize the use of the display system presenting the image. The usefulness of computer processing must be determined by observer testing and clinical experience. The need to correct distortion in both intensity (nonuniformity) and space can be avoided by attention to calibration and to the setup of the imaging device employed and by use of the sliding energy window technique. Nonuniformity correction, especially for quantitative studies, should not be done using a flood field as this may actually decrease accuracy. Instead, any necessary correction should employ the sensitivity matrix, which measures the variation of sensitivity to a point source with the position of the source. Statistical fluctuations (noise) and degradation of resolution are commonly corrected using linear, stationary techniques [concepts which are defined and developed in the text], but nonstationary techniques appear to be frequently more successful at the expense of increased processing time. Techniques of choice for pure smoothing are nine-point binomial smoothing and variable shape averaging, and those for both sharpening and smoothing (preferred for most modern, high-count scintigrams) are unsharp masking, Metz or Wiener filtering, and bi-regional sharpening. Structures of interest can be enhanced by methods which detect and emphasize changes in local distributions of slope and curvature of intensity. High quality display devices are essential to reap any benefits from degradation correction. Those devices, which must have appropriately high sensitivity and must avoid display artifacts, have become available only recently. Use of the display should be matched to the processing done. Contrast enhancement, e.g. by histogram qualization, for optimal use for each image of the display intensity range, is often helpful. 
Most scintigram processing is done using computers with about 32K 16-bit words. Floating point hardware is often useful. Most processing methods require 1-30 seconds on such computers and usually under 15 seconds. Processing time tends to be negligible compared to time for user specification of the processing to be done, so the quality of command languages should be of concern. Careful observer studies using phantoms have shown processing to improve detectability of lesions when a single display is used for both processed and unprocessed images, but not when unprocessed images on standard analog displays are compared to processed images on common computer displays..." }, { "pmid": "12539975", "title": "Statistical and heuristic image noise extraction (SHINE): a new method for processing Poisson noise in scintigraphic images.", "abstract": "Poisson noise is one of the factors degrading scintigraphic images, especially at low count level, due to the statistical nature of photon detection. We have developed an original procedure, named statistical and heuristic image noise extraction (SHINE), to reduce the Poisson noise contained in the scintigraphic images, preserving the resolution, the contrast and the texture. The SHINE procedure consists in dividing the image into 4 x 4 blocks and performing a correspondence analysis on these blocks. Each block is then reconstructed using its own significant factors which are selected using an original statistical variance test. The SHINE procedure has been validated using a line numerical phantom and a hot spots and cold spots real phantom. The reference images are the noise-free simulated images for the numerical phantom and an extremely high counts image for the real phantom. The SHINE procedure has then been applied to the Jaszczak phantom and clinical data including planar bone scintigraphy, planar Sestamibi scintigraphy and Tl-201 myocardial SPECT. The SHINE procedure reduces the mean normalized error between the noisy images and the corresponding reference images. This reduction is constant and does not change with the count level. The SNR in a SHINE processed image is close to that of the corresponding raw image with twice the number of counts. The visual results with the Jaszczak phantom SPECT have shown that SHINE preserves the contrast and the resolution of the slices well. Clinical examples have shown no visual difference between the SHINE images and the corresponding raw images obtained with twice the acquisition duration. SHINE is an entirely automatic procedure which enables halving the acquisition time or the injected dose in scintigraphic acquisitions. It can be applied to all scintigraphic images, including PET data, and to all low-count photon images." }, { "pmid": "18262991", "title": "Adaptive wavelet thresholding for image denoising and compression.", "abstract": "The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. 
It also outperforms SureShrink (Donoho and Johnstone 1994, 1995; Donoho 1995) most of the time. The second part of the paper attempts to further validate claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with the intention of denoising, and thus achieving simultaneous denoising and compression. Specifically, the zero-zone in the quantization step of compression is analogous to the threshold value in the thresholding function. The remaining coder design parameters are chosen based on a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compression method does indeed remove noise significantly, especially for large noise power. However, it introduces quantization noise and should be used only if bitrate were an additional concern to denoising." }, { "pmid": "16519352", "title": "Estimating the probability of the presence of a signal of interest in multiresolution single- and multiband image denoising.", "abstract": "We develop three novel wavelet domain denoising methods for subband-adaptive, spatially-adaptive and multivalued image denoising. The core of our approach is the estimation of the probability that a given coefficient contains a significant noise-free component, which we call \"signal of interest.\" In this respect, we analyze cases where the probability of signal presence is 1) fixed per subband, 2) conditioned on a local spatial context, and 3) conditioned on information from multiple image bands. All the probabilities are estimated assuming a generalized Laplacian prior for noise-free subband data and additive white Gaussian noise. The results demonstrate that the new subband-adaptive shrinkage function outperforms Bayesian thresholding approaches in terms of mean-squared error. The spatially adaptive version of the proposed method yields better results than the existing spatially adaptive ones of similar and higher complexity. The performance on color and on multispectral images is superior with respect to recent multiband wavelet thresholding." } ]
International Journal of Telemedicine and Applications
19343191
PMC2662492
10.1155/2009/917826
Managing Requirement Volatility in an Ontology-Driven Clinical LIMS Using Category Theory
Requirement volatility is an issue in software engineering in general, and in Web-based clinical applications in particular, which often originates from an incomplete knowledge of the domain of interest. With advances in the health science, many features and functionalities need to be added to, or removed from, existing software applications in the biomedical domain. At the same time, the increasing complexity of biomedical systems makes them more difficult to understand, and consequently it is more difficult to define their requirements, which contributes considerably to their volatility. In this paper, we present a novel agent-based approach for analyzing and managing volatile and dynamic requirements in an ontology-driven laboratory information management system (LIMS) designed for Web-based case reporting in medical mycology. The proposed framework is empowered with ontologies and formalized using category theory to provide a deep and common understanding of the functional and nonfunctional requirement hierarchies and their interrelations, and to trace the effects of a change on the conceptual framework.
8. Related Work

Several efforts have been reported [45–48] during the last decade in pursuit of inclusive frameworks for managing dynamic taxonomies, ontologies, and controlled vocabularies. Since existing knowledge representation languages, including well-established description logics, cannot guarantee the computability of highly expressive time-dependent models, current efforts have focused entirely on time-independent ontological models. Real ontological structures, however, exist in time and space. From another perspective, other knowledge representation formalisms, such as state machines [49], can cope with time-based models, but they fail to address ontological concepts and rules because they are much too abstract and have no internal structure or clear semantics. In our proposed framework, category theory, with its rich set of constructors, can be considered a complementary knowledge representation language for capturing and representing the full semantics of evolving abstract requirements conceptualized within ontological structures. Rosen [50] was among the first to propose the use of category theory in biology, in the context of a "relational biology".

Category theory has also been used by MacFarlane [24] as an efficient vehicle to examine the process of structural change in living/evolving systems. Whitmire [32], Wiels and Easterbrook [51], and Mens [52] have examined category theory for change management in the software engineering domain. Hitzler et al. [35] and Zimmermann et al. [36] have also proposed using this formalism in the knowledge representation area.
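As a rough, concrete analogue of "tracing the effects of a change" on a requirements structure, the sketch below models requirements as nodes and dependencies as composable arrows, and computes which requirements are reachable from a changed one. It only illustrates the idea of change propagation; it does not reproduce the paper's categorical formalism, and the requirement names are hypothetical.

```python
# Rough illustration of change-impact tracing over a requirement-dependency graph.
# Requirements play the role of objects, "depends on" arrows compose transitively;
# the impact of a change is the set of requirements reachable along reversed arrows.
# All requirement names are hypothetical.
from collections import deque

# Edge (a, b) means: requirement a depends on requirement b.
depends_on = {
    ("WebCaseReportForm", "MycologyOntology"),
    ("LabResultExport", "MycologyOntology"),
    ("MycologyOntology", "SpeciesTaxonomy"),
    ("AuditTrail", "LabResultExport"),
}

def impacted_by(changed, edges):
    """Return all requirements that (transitively) depend on the changed one."""
    reverse = {}
    for a, b in edges:
        reverse.setdefault(b, set()).add(a)
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dependent in reverse.get(node, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# A change to the species taxonomy ripples up to everything built on the ontology.
print(impacted_by("SpeciesTaxonomy", depends_on))
```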
[ "15889362", "9930616" ]
[ { "pmid": "15889362", "title": "Electronic laboratory reporting for the infectious diseases physician and clinical microbiologist.", "abstract": "BACKGROUND\nOne important benefit of electronic health information is the improved interface between infectious diseases practice and public health. Electronic communicable disease reporting (CDR), given its legal mandate and clear public health importance, is a significant early step in the sifting and pooling of health data for purposes beyond patient care and billing. Over the next 5-10 years, almost all CDR will move to the internet.\n\n\nMETHODS\nThis paper reviews the components of electronic laboratory reporting (ELR), including sifting through data in a laboratory information management system for reportable results, controlled \"vocabularies\" (e.g., LOINC, Logical Observation Identifiers Names and Codes [Regenstrief Institute], and SNOMED, Systematized Nomenclature of Medicine [College of American Pathologists]), the \"syntax\" of an electronic message (e.g., health level 7 [HL7]), the implications of the Health Insurance Portability and Accountability Act for ELR, and the obstacles to and potential benefits of ELR.\n\n\nRESULTS\nThere are several ways that infectious diseases physicians, infection control professionals, and microbiology laboratorians will participate in electronic CDR, including web-based case reporting and ELR, the direct, automated messaging of communicable disease reports from clinical lab information management systems to the appropriate public health jurisdiction's information system.\n\n\nCONCLUSIONS\nELR has the potential to make a large impact on the timeliness and the completeness of communicable disease reporting, but it does not replace the clinician's responsibility to submit a case report with important demographic and epidemiologic information." }, { "pmid": "9930616", "title": "Representation of change in controlled medical terminologies.", "abstract": "Computer-based systems that support health care require large controlled terminologies to manage names and meanings of data elements. These terminologies are not static, because change in health care is inevitable. To share data and applications in health care, we need standards not only for terminologies and concept representation, but also for representing change. To develop a principled approach to managing change, we analyze the requirements of controlled medical terminologies and consider features that frame knowledge-representation systems have to offer. Based on our analysis, we present a concept model, a set of change operations, and a change-documentation model that may be appropriate for controlled terminologies in health care. We are currently implementing our modeling approach within a computational architecture." } ]
Genome Medicine
19356222
PMC2684660
10.1186/gm39
A kernel-based integration of genome-wide data for clinical decision support
BackgroundAlthough microarray technology allows the investigation of the transcriptomic make-up of a tumor in one experiment, the transcriptome does not completely reflect the underlying biology due to alternative splicing, post-translational modifications, as well as the influence of pathological conditions (for example, cancer) on transcription and translation. This increases the importance of fusing more than one source of genome-wide data, such as the genome, transcriptome, proteome, and epigenome. The current increase in the amount of available omics data emphasizes the need for a methodological integration framework.MethodsWe propose a kernel-based approach for clinical decision support in which many genome-wide data sources are combined. Integration occurs within the patient domain at the level of kernel matrices before building the classifier. As supervised classification algorithm, a weighted least squares support vector machine is used. We apply this framework to two cancer cases, namely, a rectal cancer data set containing microarray and proteomics data and a prostate cancer data set containing microarray and genomics data. For both cases, multiple outcomes are predicted.ResultsFor the rectal cancer outcomes, the highest leave-one-out (LOO) areas under the receiver operating characteristic curves (AUC) were obtained when combining microarray and proteomics data gathered during therapy and ranged from 0.927 to 0.987. For prostate cancer, all four outcomes had a better LOO AUC when combining microarray and genomics data, ranging from 0.786 for recurrence to 0.987 for metastasis.ConclusionsFor both cancer sites the prediction of all outcomes improved when more than one genome-wide data set was considered. This suggests that integrating multiple genome-wide data sources increases the predictive performance of clinical decision support models. This emphasizes the need for comprehensive multi-modal data. We acknowledge that, in a first phase, this will substantially increase costs; however, this is a necessary investment to ultimately obtain cost-efficient models usable in patient tailored therapy.
Related work

Other research groups have already proposed the idea of data integration, but most groups have only investigated the integration of clinical and microarray data. Tibshirani and colleagues [19] proposed such a framework by reducing the microarray data to one variable, addable to models based on clinical characteristics such as age, grade, and size of the tumor. Nevins and colleagues [20] combined clinical risk factors with metagenes (that is, the weighted average expression of a group of genes) in a tree-based classification system. Wang et al. combined microarray data with knowledge on two clinicopathological variables by defining a gene signature only for the subset of patients for whom the clinicopathological variables were not sufficient to predict outcome [21].

A further evolution can be seen in studies in which two omics data sources are simultaneously considered, in most cases microarray data combined with proteomics or array CGH data. Much literature on such studies involving data integration already exists. However, the current definition of the integration of high-throughput data sources as it is used in the literature differs from our point of view.

In a first group of integration studies, heterogeneous data from different sources were analyzed sequentially; that is, one data source was analyzed while the second was used as confirmation of the found results or for further deepening the understanding of the results [22]. Such approaches are used for biological discovery and a better understanding of the development of a disease, but not for predictive purposes. For example, Fridlyand and colleagues [23] found three breast tumor subtypes with a distinct CNV pattern based on array CGH data. Microarray data were subsequently analyzed to identify the functional categories that characterized these subtypes. Tomioka et al. [24] analyzed microarray and array CGH data of patients with neuroblastoma in a similar way. Genomic signatures resulted from the array CGH data, while molecular signatures were found after the microarray analysis. The authors suggested that a combination of these independent prognostic indicators would be clinically useful.

The term data integration has also been used as a synonym for data merging in which different data sets are concatenated at the database level by cross-referencing the sequence identifiers, which requires semantic compatibility among data sets [25,26]. Data merging is a complex task due to, for example, the use of different identifiers, the absence of a 'one gene-one protein' relationship, alternative splicing, and measurement of multiple signals for one gene. In most studies, the concordance between the merged data sets and their interpretation in the context of biological pathways and regulatory mechanisms are investigated. Analyses of the merged data set by clustering or correlating the protein and microarray data can help identify candidate targets when changes in expression occur at both the gene and protein levels. However, there has been only modest success from correlation studies of gene and protein expression. Bitton et al. [27] combined proteomics data with exon array data, which allowed a much more fine-grained analysis by assigning peptides to their originating exons instead of mapping transcripts and proteins based on their IDs.

Our definition for the combination of heterogeneous biological data is different.
We integrate multiple layers of experimental data into one mathematical model for the development of more homogeneous classifiers in clinical decision support. For this purpose, we present a kernel-based integration framework. Integration occurs within the patient domain at a level not so far described in the literature. Instead of merging data sets or analyzing them in turn, the variables from different omics data are treated equally. This leads to the selection of the most relevant features from all available data sources, which are combined in a machine learning-based model. We were inspired by the idea of Lanckriet and colleagues [28]. They presented an integration framework in which each data set is transformed into a kernel matrix. Integration occurs on this kernel level without referring back to the data. They applied their framework to amino acid sequence information, expression data, protein-protein interaction data, and other types of genomic information to solve a single classification problem: the classification of transmembrane versus non-transmembrane proteins. In this study by Lanckriet and colleagues, all considered data sets were publicly available. This requires a computationally intensive framework for determining the relevance of each data set by solving an optimization problem. Within our set-up, however, all data sources are derived from the patients themselves. This makes the gathering of these data sets highly costly and limits the number of data sets, but guarantees more relevance for the problem at hand.

We previously investigated whether the prediction of distant metastasis in breast cancer patients could be improved when considering microarray data besides clinical data [29]. In this manuscript, we consider not only microarray data but also high-throughput data from multiple biological levels. Three different strategies for clinical decision support are proposed: the use of individual data sets (referred to as step A); an integration of each data type over time by manually calculating the change in expression (step B); and an approach in which data sets are integrated over multiple layers in the genome (and over time) by treating variables from the different data sets equally (step C).

We apply our framework to two cases, summarized in Table 1. In the first case on rectal cancer, tumor regression grade, lymph node status, and circumferential margin involvement (CRM) are predicted for 36 patients based on microarray and proteomics data, gathered at two time points during therapy. The second case on prostate cancer involves microarray and copy number variation data from 55 patients. Tumor grade, stage, metastasis, and occurrence of recurrence were available for prediction [30,31].

Table 1. Overview of the two case studies on rectal and prostate cancer
Data set I (rectal cancer): number of samples: 36; data sources: microarray and proteomics; number of features (after preprocessing): T0: 6,913 genes and 90 proteins, T1: 6,913 genes and 92 proteins; outcomes: WHEELER, pN-STAGE, CRM.
Data set II (prostate cancer): number of samples: 55; data sources: microarray and genomics; number of features (after preprocessing): 6,974 genes and 7,305 CNVs; outcomes: GRADE, STAGE, METASTASIS, RECURRENCE.
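The kernel-level integration of step C can be illustrated schematically: each omics source yields its own kernel matrix over the same patients, the matrices are combined by a weighted sum, and a single classifier is trained on the combined kernel. The sketch below uses random stand-in data and scikit-learn's standard SVM with a precomputed kernel as a substitute for the weighted least squares SVM used in the paper; the leave-one-out loop mirrors the evaluation setup only in spirit (accuracy rather than AUC).

```python
# Schematic sketch of kernel-level integration of two omics sources for one patient cohort:
# one kernel matrix per source, a weighted sum of kernels, and a single SVM trained on it.
# Random stand-in data; SVC with a precomputed kernel substitutes for the weighted LS-SVM.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_patients = 36
microarray = rng.normal(size=(n_patients, 500))   # stand-in expression features
proteomics = rng.normal(size=(n_patients, 90))    # stand-in protein features
outcome = rng.integers(0, 2, size=n_patients)     # stand-in binary outcome (e.g. CRM)

def combined_kernel(micro_a, prot_a, micro_b, prot_b, weight=0.5):
    """Weighted sum of per-source RBF kernels between two patient sets."""
    return weight * rbf_kernel(micro_a, micro_b) + (1.0 - weight) * rbf_kernel(prot_a, prot_b)

# Leave-one-out evaluation on the precomputed, integrated kernel.
correct = 0
for train_idx, test_idx in LeaveOneOut().split(microarray):
    K_train = combined_kernel(microarray[train_idx], proteomics[train_idx],
                              microarray[train_idx], proteomics[train_idx])
    K_test = combined_kernel(microarray[test_idx], proteomics[test_idx],
                             microarray[train_idx], proteomics[train_idx])
    clf = SVC(kernel="precomputed").fit(K_train, outcome[train_idx])
    correct += int(clf.predict(K_test)[0] == outcome[test_idx][0])

print(f"LOO accuracy on random stand-in data: {correct / n_patients:.2f}")
```

A fixed 0.5/0.5 weighting is used here purely for illustration; in practice the per-source weights can be tuned by cross-validation or, as in the Lanckriet-style frameworks cited above, learned by solving an optimization problem.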
[ "16226240", "10359783", "10521349", "18258980", "17092406", "18258979", "15920524", "18337604", "17122850", "18703475", "15831087", "12634793", "10976071", "18776910", "16646777", "12928487", "17975138", "17319736", "16620391", "17637744", "16772273", "18358788", "18298841", "15130933", "18003232", "14711987", "17875689", "17208931", "12925520", "12195189", "16525188", "7915774", "2430152", "11395428", "5948714", "15231531", "16670007", "15513985", "6878708", "15701846", "12727794", "12657625", "17854143", "12237895", "12704195", "17476687", "15651028", "18065961", "17971772", "16762975", "17003780", "15466172", "12590567", "17587088", "12826036", "16916471", "15254658", "17562355", "15701845", "14550954", "15665513", "15836783", "17047970", "16142351", "17373663", "15059893", "15743036", "11956618", "17348431", "12553016", "12408769", "17615053", "15599946", "16114059", "11279605", "16276351", "18188713", "11719440", "16598739", "17949478", "17178897" ]
[ { "pmid": "16226240", "title": "Machine learning in bioinformatics: a brief survey and recommendations for practitioners.", "abstract": "Machine learning is used in a large number of bioinformatics applications and studies. The application of machine learning techniques in other areas such as pattern recognition has resulted in accumulated experience as to correct and principled approaches for their use. The aim of this paper is to give an account of issues affecting the application of machine learning tools, focusing primarily on general aspects of feature and model parameter selection, rather than any single specific algorithm. These aspects are discussed in the context of published bioinformatics studies in leading journals over the last 5 years. We assess to what degree the experience gained by the pattern recognition research community pervades these bioinformatics studies. We finally discuss various critical issues relating to bioinformatic data sets and make a number of recommendations on the proper use of machine learning techniques for bioinformatics research based upon previously published research on machine learning." }, { "pmid": "10359783", "title": "Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays.", "abstract": "Oligonucleotide arrays can provide a broad picture of the state of the cell, by monitoring the expression level of thousands of genes at the same time. It is of interest to develop techniques for extracting useful information from the resulting data sets. Here we report the application of a two-way clustering method for analyzing a data set consisting of the expression patterns of different cell types. Gene expression in 40 tumor and 22 normal colon tissue samples was analyzed with an Affymetrix oligonucleotide array complementary to more than 6,500 human genes. An efficient two-way clustering algorithm was applied to both the genes and the tissues, revealing broad coherent patterns that suggest a high degree of organization underlying gene expression in these tissues. Coregulated families of genes clustered together, as demonstrated for the ribosomal proteins. Clustering also separated cancerous from noncancerous tissue and cell lines from in vivo tissues on the basis of subtle distributed patterns of genes even when expression of individual genes varied only slightly between the tissues. Two-way clustering thus may be of use both in classifying genes into functional groups and in classifying tissues based on gene expression." }, { "pmid": "10521349", "title": "Molecular classification of cancer: class discovery and class prediction by gene expression monitoring.", "abstract": "Although cancer classification has improved over the past 30 years, there has been no general approach for identifying new cancer classes (class discovery) or for assigning tumors to known classes (class prediction). Here, a generic approach to cancer classification based on gene expression monitoring by DNA microarrays is described and applied to human acute leukemias as a test case. A class discovery procedure automatically discovered the distinction between acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) without previous knowledge of these classes. An automatically derived class predictor was able to determine the class of new leukemia cases. 
The results demonstrate the feasibility of cancer classification based solely on gene expression monitoring and suggest a general strategy for discovering and predicting cancer classes for other types of cancer, independent of previous biological knowledge." }, { "pmid": "18258980", "title": "Clinical application of the 70-gene profile: the MINDACT trial.", "abstract": "The 70-gene profile is a new prognostic tool that has the potential to greatly improve risk assessment and treatment decision making for early breast cancer. Its prospective validation is currently ongoing through the MINDACT (Microarray in Node-Negative Disease May Avoid Chemotherapy) trial, a 6,000-patient randomized, multicentric trial. This article reviews the several steps in the development of the profile from its discovery to its clinical validation." }, { "pmid": "18258979", "title": "Development of the 21-gene assay and its application in clinical practice and clinical trials.", "abstract": "Several multigene markers that predict relapse more accurately than classical clinicopathologic features have been developed. The 21-gene assay was developed specifically for patients with estrogen receptor (ER)-positive breast cancer, and has been shown to predict distant recurrence more accurately that classical clinicopathologic features in patients with ER-positive breast cancer and negative axillary nodes treated with adjuvant tamoxifen; validation studies in this population led to its approval as a diagnostic test. In a similar population, it also may be used to assess the benefit of adding chemotherapy to hormonal therapy. Other validation studies indicate that it also predicts the risk of distant and local recurrence in other populations with ER-positive disease, including node-negative patients receiving no adjuvant therapy and patients with positive axillary nodes treated with doxorubicin-containing chemotherapy. The Trial Assigning Individualized Options for Treatment (TAILORx) is multicenter trial that integrates the 21-gene assay into the clinical decision-making process and is designed to refine the utility of the assay in clinical practice and to provide a resource for evaluating additional molecular markers as they are developed in the future." }, { "pmid": "15920524", "title": "Array comparative genomic hybridization and its applications in cancer.", "abstract": "Alteration in DNA copy number is one of the many ways in which gene expression and function may be modified. Some variations are found among normal individuals, others occur in the course of normal processes in some species and still others participate in causing various disease states. For example, many defects in human development are due to gains and losses of chromosomes and chromosomal segments that occur before or shortly after fertilization, and DNA dosage-alteration changes occurring in somatic cells are frequent contributors to cancer. Detecting these aberrations and interpreting them in the context of broader knowledge facilitates the identification of crucial genes and pathways involved in biological processes and disease. Over the past several years, array comparative genomic hybridization has proven its value for analyzing DNA copy-number variations. Here, we discuss the state of the art of array comparative genomic hybridization and its applications in cancer, emphasizing general concepts rather than specific results." 
}, { "pmid": "17122850", "title": "Global variation in copy number in the human genome.", "abstract": "Copy number variation (CNV) of DNA sequences is functionally significant but has yet to be fully ascertained. We have constructed a first-generation CNV map of the human genome through the study of 270 individuals from four populations with ancestry in Europe, Africa or Asia (the HapMap collection). DNA from these individuals was screened for CNV using two complementary technologies: single-nucleotide polymorphism (SNP) genotyping arrays, and clone-based comparative genomic hybridization. A total of 1,447 copy number variable regions (CNVRs), which can encompass overlapping or adjacent gains or losses, covering 360 megabases (12% of the genome) were identified in these populations. These CNVRs contained hundreds of genes, disease loci, functional elements and segmental duplications. Notably, the CNVRs encompassed more nucleotide content per genome than SNPs, underscoring the importance of CNV in genetic diversity and evolution. The data obtained delineate linkage disequilibrium patterns for many CNVs, and reveal marked variation in copy number among populations. We also demonstrate the utility of this resource for genetic disease studies." }, { "pmid": "15831087", "title": "The molecular make-up of a tumour: proteomics in cancer research.", "abstract": "The enormous progress in proteomics, enabled by recent advances in MS (mass spectrometry), has brought protein analysis back into the limelight of cancer research, reviving old areas as well as opening new fields of study. In this review, we discuss the basic features of proteomic technologies, including the basics of MS, and we consider the main current applications and challenges of proteomics in cancer research, including (i) protein expression profiling of tumours, tumour fluids and tumour cells; (ii) protein microarrays; (iii) mapping of cancer signalling pathways; (iv) pharmacoproteomics; (v) biomarkers for diagnosis, staging and monitoring of the disease and therapeutic response; and (vi) the immune response to cancer. All these applications continue to benefit from further technological advances, such as the development of quantitative proteomics methods, high-resolution, high-speed and high-sensitivity MS, functional protein assays, and advanced bioinformatics for data handling and interpretation. A major challenge will be the integration of proteomics with genomics and metabolomics data and their functional interpretation in conjunction with clinical results and epidemiology." }, { "pmid": "12634793", "title": "Mass spectrometry-based proteomics.", "abstract": "Recent successes illustrate the role of mass spectrometry-based proteomics as an indispensable tool for molecular and cellular biology and for the emerging field of systems biology. These include the study of protein-protein interactions via affinity-based isolations on a small and proteome-wide scale, the mapping of numerous organelles, the concurrent description of the malaria parasite genome and proteome, and the generation of quantitative protein profiles from diverse species. The ability of mass spectrometry to identify and, increasingly, to precisely quantify thousands of proteins from complex samples can be expected to impact broadly on biology and medicine." 
}, { "pmid": "10976071", "title": "Printing proteins as microarrays for high-throughput function determination.", "abstract": "Systematic efforts are currently under way to construct defined sets of cloned genes for high-throughput expression and purification of recombinant proteins. To facilitate subsequent studies of protein function, we have developed miniaturized assays that accommodate extremely low sample volumes and enable the rapid, simultaneous processing of thousands of proteins. A high-precision robot designed to manufacture complementary DNA microarrays was used to spot proteins onto chemically derivatized glass slides at extremely high spatial densities. The proteins attached covalently to the slide surface yet retained their ability to interact specifically with other proteins, or with small molecules, in solution. Three applications for protein microarrays were demonstrated: screening for protein-protein interactions, identifying the substrates of protein kinases, and identifying the protein targets of small molecules." }, { "pmid": "18776910", "title": "Systematic assessment of copy number variant detection via genome-wide SNP genotyping.", "abstract": "SNP genotyping has emerged as a technology to incorporate copy number variants (CNVs) into genetic analyses of human traits. However, the extent to which SNP platforms accurately capture CNVs remains unclear. Using independent, sequence-based CNV maps, we find that commonly used SNP platforms have limited or no probe coverage for a large fraction of CNVs. Despite this, in 9 samples we inferred 368 CNVs using Illumina SNP genotyping data and experimentally validated over two-thirds of these. We also developed a method (SNP-Conditional Mixture Modeling, SCIMM) to robustly genotype deletions using as few as two SNP probes. We find that HapMap SNPs are strongly correlated with 82% of common deletions, but the newest SNP platforms effectively tag about 50%. We conclude that currently available genome-wide SNP assays can capture CNVs accurately, but improvements in array designs, particularly in duplicated sequences, are necessary to facilitate more comprehensive analyses of genomic variation." }, { "pmid": "16646777", "title": "Pre-validation and inference in microarrays.", "abstract": "In microarray studies, an important problem is to compare a predictor of disease outcome derived from gene expression levels to standard clinical predictors. Comparing them on the same dataset that was used to derive the microarray predictor can lead to results strongly biased in favor of the microarray predictor. We propose a new technique called \"pre-validation'' for making a fairer comparison between the two sets of predictors. We study the method analytically and explore its application in a recent study on breast cancer." }, { "pmid": "12928487", "title": "Towards integrated clinico-genomic models for personalized medicine: combining gene expression signatures and clinical factors in breast cancer outcomes prediction.", "abstract": "Genomic data, particularly genome-scale measures of gene expression derived from DNA microarray studies, has the potential for adding enormous information to the analysis of biological phenotypes. Perhaps the most successful application of this data has been in the characterization of human cancers, including the ability to predict clinical outcomes. Nevertheless, most analyses have used gene expression profiles to define broad group distinctions, similar to the use of traditional clinical risk factors. 
As a result, there remains considerable heterogeneity within the broadly defined groups and thus predictions fall short of providing accurate predictions for individual patients. One strategy to resolve this heterogeneity is to make use of multiple gene expression patterns that are more powerful in defining individual characteristics and predicting outcomes than any single gene expression pattern. Statistical tree-based classification systems provide a framework for assessing multiple patterns, that we term metagenes, selecting those that are most capable of resolving the biological heterogeneity. Moreover, this framework provides a mechanism to combine multiple forms of data, both genomic and clinical, to most effectively characterize individual patients and achieve the goal of personalized predictions of clinical outcomes." }, { "pmid": "17975138", "title": "Identification and validation of a novel gene signature associated with the recurrence of human hepatocellular carcinoma.", "abstract": "PURPOSE\nTo improve the clinical management of human hepatocellular carcinoma (HCC) by accurate identification, at diagnosis, of patients at risk of recurrence after primary treatment for HCC.\n\n\nEXPERIMENTAL DESIGN\nTwo clinicopathologic variables available at diagnosis, vascular invasion and cirrhosis, together with molecular profiling using Affymetrix human HG-U133A and HG-U133B oligonucleotide probe arrays, were used to identify recurrent HCC disease.\n\n\nRESULTS\nHCC patients presented clinically at diagnosis with vascular invasion and cirrhosis showed a high rate (78-83%) of developing recurrent disease within 6 to 35 months. In comparison, most of the HCC patients (80-100%) without vascular invasion and cirrhosis remained disease-free. However, the risk of recurrent disease for HCC patients with either vascular invasion or cirrhosis could not be accurately ascertained. Using a pool of 23 HCC patients with either vascular invasion or cirrhosis as training set, a 57-gene signature was derived and could predict recurrent disease at diagnosis, with 84% (sensitivity 86%, specificity 82%) accuracy, for a totally independent test set of 25 HCC patients with either vascular invasion or cirrhosis. On further analysis, the disease-free rate was significantly different between patients that were predicted to recur or not to recur in the test group (P = 0.002).\n\n\nCONCLUSION\nWe have presented data to show that by incorporating the status of vascular invasion and cirrhosis available at diagnosis for patients with HCC after partial curative hepatectomy and a novel 57-member gene signature, we could accurately stratify HCC patients with different risks of recurrence." }, { "pmid": "16620391", "title": "Breast tumor copy number aberration phenotypes and genomic instability.", "abstract": "BACKGROUND\nGenomic DNA copy number aberrations are frequent in solid tumors, although the underlying causes of chromosomal instability in tumors remain obscure. Genes likely to have genomic instability phenotypes when mutated (e.g. those involved in mitosis, replication, repair, and telomeres) are rarely mutated in chromosomally unstable sporadic tumors, even though such mutations are associated with some heritable cancer prone syndromes.\n\n\nMETHODS\nWe applied array comparative genomic hybridization (CGH) to the analysis of breast tumors. 
The variation in the levels of genomic instability amongst tumors prompted us to investigate whether alterations in processes/genes involved in maintenance and/or manipulation of the genome were associated with particular types of genomic instability.\n\n\nRESULTS\nWe discriminated three breast tumor subtypes based on genomic DNA copy number alterations. The subtypes varied with respect to level of genomic instability. We find that shorter telomeres and altered telomere related gene expression are associated with amplification, implicating telomere attrition as a promoter of this type of aberration in breast cancer. On the other hand, the numbers of chromosomal alterations, particularly low level changes, are associated with altered expression of genes in other functional classes (mitosis, cell cycle, DNA replication and repair). Further, although loss of function instability phenotypes have been demonstrated for many of the genes in model systems, we observed enhanced expression of most genes in tumors, indicating that over expression, rather than deficiency underlies instability.\n\n\nCONCLUSION\nMany of the genes associated with higher frequency of copy number aberrations are direct targets of E2F, supporting the hypothesis that deregulation of the Rb pathway is a major contributor to chromosomal instability in breast tumors. These observations are consistent with failure to find mutations in sporadic tumors in genes that have roles in maintenance or manipulation of the genome." }, { "pmid": "17637744", "title": "Novel risk stratification of patients with neuroblastoma by genomic signature, which is independent of molecular signature.", "abstract": "Human neuroblastoma remains enigmatic because it often shows spontaneous regression and aggressive growth. The prognosis of advanced stage of sporadic neuroblastomas is still poor. Here, we investigated whether genomic and molecular signatures could categorize new therapeutic risk groups in primary neuroblastomas. We conducted microarray-based comparative genomic hybridization (array-CGH) with a DNA chip carrying 2464 BAC clones to examine genomic aberrations of 236 neuroblastomas and used in-house cDNA microarrays for gene-expression profiling. Array-CGH demonstrated three major genomic groups of chromosomal aberrations: silent (GGS), partial gains and/or losses (GGP) and whole gains and/or losses (GGW), which well corresponded with the patterns of chromosome 17 abnormalities. They were further classified into subgroups with different outcomes. In 112 sporadic neuroblastomas, MYCN amplification was frequent in GGS (22%) and GGP (53%) and caused serious outcomes in patients. Sporadic tumors with a single copy of MYCN showed the 5-year cumulative survival rates of 89% in GGS, 53% in GGP and 85% in GGW. Molecular signatures also segregated patients into the favorable and unfavorable prognosis groups (P=0.001). Both univariate and multivariate analyses revealed that genomic and molecular signatures were mutually independent, powerful prognostic indicators. Thus, combined genomic and molecular signatures may categorize novel risk groups and confer new clues for allowing tailored or even individualized medicine to patients with neuroblastoma." }, { "pmid": "16772273", "title": "Data merging for integrated microarray and proteomic analysis.", "abstract": "The functioning of even a simple biological system is much more complicated than the sum of its genes, proteins and metabolites. 
A premise of systems biology is that molecular profiling will facilitate the discovery and characterization of important disease pathways. However, as multiple levels of effector pathway regulation appear to be the norm rather than the exception, a significant challenge presented by high-throughput genomics and proteomics technologies is the extraction of the biological implications of complex data. Thus, integration of heterogeneous types of data generated from diverse global technology platforms represents the first challenge in developing the necessary foundational databases needed for predictive modelling of cell and tissue responses. Given the apparent difficulty in defining the correspondence between gene expression and protein abundance measured in several systems to date, how do we make sense of these data and design the next experiment? In this review, we highlight current approaches and challenges associated with integration and analysis of heterogeneous data sets, focusing on global analysis obtained from high-throughput technologies." }, { "pmid": "18358788", "title": "State of the nation in data integration for bioinformatics.", "abstract": "Data integration is a perennial issue in bioinformatics, with many systems being developed and many technologies offered as a panacea for its resolution. The fact that it is still a problem indicates a persistence of underlying issues. Progress has been made, but we should ask \"what lessons have been learnt?\", and \"what still needs to be done?\" Semantic Web and Web 2.0 technologies are the latest to find traction within bioinformatics data integration. Now we can ask whether the Semantic Web, mashups, or their combination, have the potential to help. This paper is based on the opening invited talk by Carole Goble given at the Health Care and Life Sciences Data Integration for the Semantic Web Workshop collocated with WWW2007. The paper expands on that talk. We attempt to place some perspective on past efforts, highlight the reasons for success and failure, and indicate some pointers to the future." }, { "pmid": "18298841", "title": "Exon level integration of proteomics and microarray data.", "abstract": "BACKGROUND\nPrevious studies comparing quantitative proteomics and microarray data have generally found poor correspondence between the two. We hypothesised that this might in part be because the different assays were targeting different parts of the expressed genome and might therefore be subjected to confounding effects from processes such as alternative splicing.\n\n\nRESULTS\nUsing a genome database as a platform for integration, we combined quantitative protein mass spectrometry with Affymetrix Exon array data at the level of individual exons. We found significantly higher degrees of correlation than have been previously observed (r = 0.808). The study was performed using cell lines in equilibrium in order to reduce a major potential source of biological variation, thus allowing the analysis to focus on the data integration methods in order to establish their performance.\n\n\nCONCLUSION\nWe conclude that part of the variation observed when integrating microarray and proteomics data may occur as a consequence both of the data analysis and of the high granularity to which studies have until recently been limited. 
The approach opens up the possibility for the first time of considering combined microarray and proteomics datasets at the level of individual exons and isoforms, important given the high proportion of alternative splicing observed in the human genome." }, { "pmid": "15130933", "title": "A statistical framework for genomic data fusion.", "abstract": "MOTIVATION\nDuring the past decade, the new focus on genomics has highlighted a particular challenge: to integrate the different views of the genome that are provided by various types of experimental data.\n\n\nRESULTS\nThis paper describes a computational framework for integrating and drawing inferences from a collection of genome-wide measurements. Each dataset is represented via a kernel function, which defines generalized similarity relationships between pairs of entities, such as genes or proteins. The kernel representation is both flexible and efficient, and can be applied to many different types of data. Furthermore, kernel functions derived from different types of data can be combined in a straightforward fashion. Recent advances in the theory of kernel methods have provided efficient algorithms to perform such combinations in a way that minimizes a statistical loss function. These methods exploit semidefinite programming techniques to reduce the problem of finding optimizing kernel combinations to a convex optimization problem. Computational experiments performed using yeast genome-wide datasets, including amino acid sequences, hydropathy profiles, gene expression data and known protein-protein interactions, demonstrate the utility of this approach. A statistical learning algorithm trained from all of these data to recognize particular classes of proteins--membrane proteins and ribosomal proteins--performs significantly better than the same algorithm trained on any single type of data.\n\n\nAVAILABILITY\nSupplementary data at http://noble.gs.washington.edu/proj/sdp-svm" }, { "pmid": "18003232", "title": "Integration of clinical and microarray data with kernel methods.", "abstract": "Currently, the clinical management of cancer is based on empirical data from the literature (clinical studies) or based on the expertise of the clinician. Recently microarray technology emerged and it has the potential to revolutionize the clinical management of cancer and other diseases. A microarray allows to measure the expression levels of thousands of genes simultaneously which may reflect diagnostic or prognostic categories and sensitivity to treatment. The objective of this paper is to investigate whether clinical data, which is the basis of day-to-day clinical decision support, can be efficiently combined with microarray data, which has yet to prove its potential to deliver patient tailored therapy, using Least Squares Support Vector Machines." }, { "pmid": "14711987", "title": "Gene expression profiling identifies clinically relevant subtypes of prostate cancer.", "abstract": "Prostate cancer, a leading cause of cancer death, displays a broad range of clinical behavior from relatively indolent to aggressive metastatic disease. To explore potential molecular variation underlying this clinical heterogeneity, we profiled gene expression in 62 primary prostate tumors, as well as 41 normal prostate specimens and nine lymph node metastases, using cDNA microarrays containing approximately 26,000 genes. 
Unsupervised hierarchical clustering readily distinguished tumors from normal samples, and further identified three subclasses of prostate tumors based on distinct patterns of gene expression. High-grade and advanced stage tumors, as well as tumors associated with recurrence, were disproportionately represented among two of the three subtypes, one of which also included most lymph node metastases. To further characterize the clinical relevance of tumor subtypes, we evaluated as surrogate markers two genes differentially expressed among tumor subgroups by using immunohistochemistry on tissue microarrays representing an independent set of 225 prostate tumors. Positive staining for MUC1, a gene highly expressed in the subgroups with \"aggressive\" clinicopathological features, was associated with an elevated risk of recurrence (P = 0.003), whereas strong staining for AZGP1, a gene highly expressed in the other subgroup, was associated with a decreased risk of recurrence (P = 0.0008). In multivariate analysis, MUC1 and AZGP1 staining were strong predictors of tumor recurrence independent of tumor grade, stage, and preoperative prostate-specific antigen levels. Our results suggest that prostate tumors can be usefully classified according to their gene expression patterns, and these tumor subtypes may provide a basis for improved prognostication and treatment stratification." }, { "pmid": "17875689", "title": "Genomic profiling reveals alternative genetic pathways of prostate tumorigenesis.", "abstract": "Prostate cancer is clinically heterogeneous, ranging from indolent to lethal disease. Expression profiling previously defined three subtypes of prostate cancer, one (subtype-1) linked to clinically favorable behavior, and the others (subtypes-2 and -3) linked with a more aggressive form of the disease. To explore disease heterogeneity at the genomic level, we carried out array-based comparative genomic hybridization (array CGH) on 64 prostate tumor specimens, including 55 primary tumors and 9 pelvic lymph node metastases. Unsupervised cluster analysis of DNA copy number alterations (CNA) identified recurrent aberrations, including a 6q15-deletion group associated with subtype-1 gene expression patterns and decreased tumor recurrence. Supervised analysis further disclosed distinct patterns of CNA among gene-expression subtypes, where subtype-1 tumors exhibited characteristic deletions at 5q21 and 6q15, and subtype-2 cases harbored deletions at 8p21 (NKX3-1) and 21q22 (resulting in TMPRSS2-ERG fusion). Lymph node metastases, predominantly subtype-3, displayed overall higher frequencies of CNA, and in particular gains at 8q24 (MYC) and 16p13, and loss at 10q23 (PTEN) and 16q23. Our findings reveal that prostate cancers develop via a limited number of alternative preferred genetic pathways. The resultant molecular genetic subtypes provide a new framework for investigating prostate cancer biology and explain in part the clinical heterogeneity of the disease." 
}, { "pmid": "17208931", "title": "Phase I/II study of preoperative cetuximab, capecitabine, and external beam radiotherapy in patients with rectal cancer.", "abstract": "BACKGROUND\nTo assess the safety and preliminary efficacy of concurrent radiotherapy, capecitabine, and cetuximab in the preoperative treatment of patients with rectal cancer.\n\n\nPATIENTS AND METHODS\nForty patients with rectal cancer (T3-T4, and/or N+, endorectal ultrasound) received preoperative radiotherapy (1.8 Gy, 5 days/week for 5 weeks, total dose 45 Gy, three-dimensional conformal technique) in combination with cetuximab [initial dose 400 mg/m(2) intravenous given 1 week before the beginning of radiation followed by 250 mg/m(2)/week for 5 weeks] and capecitabine for the duration of radiotherapy (650 mg/m(2) orally twice daily, first dose level; 825 mg/m(2) twice daily, second dose level).\n\n\nRESULTS\nFour and six patients were treated at the first and second dose level of capecitabine, respectively. No dose-limiting toxicity occurred. Thirty additional patients were treated with capecitabine at 825 mg/m(2) twice daily. The most frequent grade 1/2 side-effects were acneiform rash (87%), diarrhea (65%), and fatigue (57%). Grade 3 diarrhea was found in 15%. Three grade 4 toxic effects were recorded: one myocardial infarction, one pulmonary embolism, and one pulmonary infection with sepsis. Two patients (5%) had a pathological complete response.\n\n\nCONCLUSIONS\nPreoperative radiotherapy in combination with capecitabine and cetuximab is feasible with some patients achieving pathological downstaging." }, { "pmid": "12925520", "title": "Exploration, normalization, and summaries of high density oligonucleotide array probe level data.", "abstract": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. 
We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities." }, { "pmid": "12195189", "title": "Quantification of histologic regression of rectal cancer after irradiation: a proposal for a modified staging system.", "abstract": "PURPOSE\nLong-course preoperative radiotherapy has been recommended for rectal carcinoma when there is concern about the ability to perform a curative resection, for example, in larger tethered tumors or those sited anteriorly or near the anal sphincter. \"Downstaging\" of the tumor may occur, and this is of importance when estimating the prognosis and selecting postoperative therapy for patients. We studied the effects of preoperative chemoradiotherapy on the pathology of rectal cancer, and we propose a simplified measurement of tumor regression, the Rectal Cancer Regression Grade.\n\n\nMETHODS\nWe have reviewed those patients who received preoperative chemoradiotherapy followed by surgical resection for carcinomas of the mid or distal third of the rectum found to be Stage T3/4 on transrectal ultrasound or CT between January 1995 and December 1998. Patients received 45 to 50 Gy irradiation and an infusion of 5-fluorouracil. The surgical specimens were examined by one pathologist, and the Rectal Cancer Regression Grade was quantified.\n\n\nRESULTS\nForty-two patients, mean age 60 (range, 42-86) years, underwent chemoradiotherapy before surgery for rectal carcinoma. There were 28 anterior resections (67 percent; 9 with a colonic pouch), 12 abdominoperineal resections (27 percent), and 2 Hartmann's procedures (5 percent). Comparison of preoperative and pathologic staging revealed that the depth of invasion was downstaged in 17 patients (38 percent), and the status of involved lymph nodes was downstaged in 13 (50 percent) of 26 patients. Tumor regression was more than 50 percent (Rectal Cancer Regression Grades 1 and 2) in 36 patients (86 percent), with 7 patients (17 percent) having complete regression with absence of residual cancer cells.\n\n\nCONCLUSION\nSignificant tumor regression was seen in 86 percent of cases after chemoradiotherapy, with 19 patients showing a \"good\" responsiveness. We propose a modified pathologic staging system for irradiated rectal cancer, the Rectal Cancer Regression Grade, which includes a measurement of tumor regression. The utility of the proposed Rectal Cancer Regression Grade must be tested against long-term outcomes before its value in predicting prognosis and survival can be determined." }, { "pmid": "7915774", "title": "Role of circumferential margin involvement in the local recurrence of rectal cancer.", "abstract": "Local recurrence after resection for rectal cancer remains common despite growing acceptance that inadequate local excision may be implicated. In a prospective study of 190 patients with rectal cancer, we examined the circumferential margin of excision of resected specimens for tumour presence, to examine its frequency and its relation to subsequent local recurrence. Tumour involvement of the circumferential margin was seen in 25% (35/141) of specimens for which the surgeon thought the resection was potentially curative, and in 36% (69/190) of all cases. After a median 5 years' follow-up (range 3.0-7.7 years), the frequency of local recurrence after potentially curative resection was 25% (95% CI 18-33%). 
The frequency of local recurrence was significantly higher for patients who had had tumour involvement of the circumferential margin than for those without such involvement (78 [95% CI 62-94] vs 10 [4-16]%). By Cox's regression analysis tumour involvement of the circumferential margin independently influenced both local recurrence (hazard ratio = 12.2 [4.4-34.6]) and survival (3.2 [1.6-6.53]). These results show the importance of wide local excision during resection for rectal cancer, and the need for routine assessment of the circumferential margin to assess prognosis." }, { "pmid": "2430152", "title": "Local recurrence of rectal adenocarcinoma due to inadequate surgical resection. Histopathological study of lateral tumour spread and surgical excision.", "abstract": "In 52 patients with rectal adenocarcinoma whole-mount sections of the entire operative specimen were examined by transverse slicing. There was spread to the lateral resection margin in 14 of 52 (27%) patients and 12 of these proceeded to local pelvic recurrence. The specificity, sensitivity, and positive predictive values were 92%, 95%, and 85%, respectively. In a retrospective stage-matched and grade-matched control group there was local recurrence in the same proportion of patients, but in this series no patient had been shown by routine sampling to have lateral spread. In rectal adenocarcinoma, local recurrence is mainly due to lateral spread of the tumour and has previously been underestimated." }, { "pmid": "11395428", "title": "Missing value estimation methods for DNA microarrays.", "abstract": "MOTIVATION\nGene expression microarray experiments can generate data sets with multiple missing expression values. Unfortunately, many algorithms for gene expression analysis require a complete matrix of gene array values as input. For example, methods such as hierarchical clustering and K-means clustering are not robust to missing data, and may lose effectiveness even with a few missing values. Methods for imputing missing data are needed, therefore, to minimize the effect of incomplete data sets on analyses, and to increase the range of data sets to which these algorithms can be applied. In this report, we investigate automated methods for estimating missing data.\n\n\nRESULTS\nWe present a comparative study of several methods for the estimation of missing values in gene microarray data. We implemented and evaluated three methods: a Singular Value Decomposition (SVD) based method (SVDimpute), weighted K-nearest neighbors (KNNimpute), and row average. We evaluated the methods using a variety of parameter settings and over different real data sets, and assessed the robustness of the imputation methods to the amount of missing data over the range of 1--20% missing values. We show that KNNimpute appears to provide a more robust and sensitive method for missing value estimation than SVDimpute, and both SVDimpute and KNNimpute surpass the commonly used row average method (as well as filling missing values with zeros). We report results of the comparative experiments and provide recommendations and tools for accurate estimation of missing microarray data under a variety of conditions." }, { "pmid": "15231531", "title": "Systematic benchmarking of microarray data classification: assessing the role of non-linearity and dimensionality reduction.", "abstract": "MOTIVATION\nMicroarrays are capable of determining the expression levels of thousands of genes simultaneously. 
In combination with classification methods, this technology can be useful to support clinical management decisions for individual patients, e.g. in oncology. The aim of this paper is to systematically benchmark the role of non-linear versus linear techniques and dimensionality reduction methods.\n\n\nRESULTS\nA systematic benchmarking study is performed by comparing linear versions of standard classification and dimensionality reduction techniques with their non-linear versions based on non-linear kernel functions with a radial basis function (RBF) kernel. A total of 9 binary cancer classification problems, derived from 7 publicly available microarray datasets, and 20 randomizations of each problem are examined.\n\n\nCONCLUSIONS\nThree main conclusions can be formulated based on the performances on independent test sets. (1) When performing classification with least squares support vector machines (LS-SVMs) (without dimensionality reduction), RBF kernels can be used without risking too much overfitting. The results obtained with well-tuned RBF kernels are never worse and sometimes even statistically significantly better compared to results obtained with a linear kernel in terms of test set receiver operating characteristic and test set accuracy performances. (2) Even for classification with linear classifiers like LS-SVM with linear kernel, using regularization is very important. (3) When performing kernel principal component analysis (kernel PCA) before classification, using an RBF kernel for kernel PCA tends to result in overfitting, especially when using supervised feature selection. It has been observed that an optimal selection of a large number of features is often an indication for overfitting. Kernel PCA with linear kernel gives better results." }, { "pmid": "16670007", "title": "A comparison of univariate and multivariate gene selection techniques for classification of cancer datasets.", "abstract": "BACKGROUND\nGene selection is an important step when building predictors of disease state based on gene expression data. Gene selection generally improves performance and identifies a relevant subset of genes. Many univariate and multivariate gene selection approaches have been proposed. Frequently the claim is made that genes are co-regulated (due to pathway dependencies) and that multivariate approaches are therefore per definition more desirable than univariate selection approaches. Based on the published performances of all these approaches a fair comparison of the available results can not be made. This mainly stems from two factors. First, the results are often biased, since the validation set is in one way or another involved in training the predictor, resulting in optimistically biased performance estimates. Second, the published results are often based on a small number of relatively simple datasets. Consequently no generally applicable conclusions can be drawn.\n\n\nRESULTS\nIn this study we adopted an unbiased protocol to perform a fair comparison of frequently used multivariate and univariate gene selection techniques, in combination with a ränge of classifiers. Our conclusions are based on seven gene expression datasets, across several cancer types.\n\n\nCONCLUSION\nOur experiments illustrate that, contrary to several previous studies, in five of the seven datasets univariate selection approaches yield consistently better results than multivariate approaches. 
The simplest multivariate selection approach, the Top Scoring method, achieves the best results on the remaining two datasets. We conclude that the correlation structures, if present, are difficult to extract due to the small number of samples, and that consequently, overly-complex gene selection algorithms that attempt to extract these structures are prone to overtraining." }, { "pmid": "15513985", "title": "Identifying differentially expressed genes from microarray experiments via statistic synthesis.", "abstract": "MOTIVATION\nA common objective of microarray experiments is the detection of differential gene expression between samples obtained under different conditions. The task of identifying differentially expressed genes consists of two aspects: ranking and selection. Numerous statistics have been proposed to rank genes in order of evidence for differential expression. However, no one statistic is universally optimal and there is seldom any basis or guidance that can direct toward a particular statistic of choice.\n\n\nRESULTS\nOur new approach, which addresses both ranking and selection of differentially expressed genes, integrates differing statistics via a distance synthesis scheme. Using a set of (Affymetrix) spike-in datasets, in which differentially expressed genes are known, we demonstrate that our method compares favorably with the best individual statistics, while achieving robustness properties lacked by the individual statistics. We further evaluate performance on one other microarray study." }, { "pmid": "6878708", "title": "A method of comparing the areas under receiver operating characteristic curves derived from the same cases.", "abstract": "Receiver operating characteristic (ROC) curves are used to describe and compare the performance of diagnostic technology and diagnostic algorithms. This paper refines the statistical comparison of the areas under two ROC curves derived from the same set of patients by taking into account the correlation between the areas that is induced by the paired nature of the data. The correspondence between the area under an ROC curve and the Wilcoxon statistic is used and underlying Gaussian distributions (binormal) are assumed to provide a table that converts the observed correlations in paired ratings of images into a correlation between the two ROC areas. This between-area correlation can be used to reduce the standard error (uncertainty) about the observed difference in areas. This correction for pairing, analogous to that used in the paired t-test, can produce a considerable increase in the statistical sensitivity (power) of the comparison. For studies involving multiple readers, this method provides a measure of a component of the sampling variation that is otherwise difficult to obtain." }, { "pmid": "15701846", "title": "Epidermal growth factor receptor gene polymorphisms predict pelvic recurrence in patients with rectal cancer treated with chemoradiation.", "abstract": "An association between epidermal growth factor receptor (EGFR) signaling pathway and response of cancer cells to ionizing radiation has been reported. Recently, a polymorphic variant in the EGFR gene that leads to an arginine-to-lysine substitution in the extracellular domain at codon 497 within subdomain IV of EGFR has been identified. The variant EGFR (HER-1 497K) may lead to attenuation in ligand binding, growth stimulation, tyrosine kinase activation, and induction of proto-oncogenes myc, fos, and jun. 
A (CA)(n) repeat polymorphism in intron 1 of the EGFR gene that alters EGFR expression in vitro and in vivo has also been described. In the current pilot study, we assessed both polymorphisms in 59 patients with locally advanced rectal cancer treated with adjuvant or neoadjuvant chemoradiation therapy using PCR-RFLP and a 5'-end [gamma-(33)P]ATP-labeled PCR protocol. We tested whether either polymorphism alone or in combination can be associated with local recurrence in the setting of chemoradiation treatment. We found that patients with HER-1 497 Arg/Arg genotype or lower number of CA repeats (both alleles <20) tended to have a higher risk of local recurrence (P = 0.24 and 0.31, respectively). Combined analysis showed the highest risk for local recurrence was seen in patients who possessed both a HER-1 497 Arg allele and <20 CA repeats (P = 0.05, log-rank test). Our data suggest that the HER-1 R497K and EGFR intron 1 (CA)(n) repeat polymorphisms may be potential indicators of radiosensitivity in patients with rectal cancer treated with chemoradiation." }, { "pmid": "12727794", "title": "Expression of cyclooxygenase-2 parallels expression of interleukin-1beta, interleukin-6 and NF-kappaB in human colorectal cancer.", "abstract": "Elevated expression of cyclooxygenase-2 (COX-2), the inducible isoform of prostaglandin H synthase, has been found in several human cancers, including colorectal cancer (CRC). This appears as a rationale for the chemopreventive effects of non-steroidal anti-inflammatory drugs in CRC. However, the reason for COX-2 overexpression is not fully understood. In cell culture experiments, COX-2 can be induced by proinflammatory cytokines, such as interleukin (IL)-1beta and IL-6. A crucial step in this signalling pathway is thought to be activation of transcription factor NF-kappaB. Based on these findings, we hypothesized an association between COX-2 overexpression and expression of IL-1beta, IL-6 and the NF-kappaB subunit p65 in human CRC. To test the hypothesis, we performed immunohistochemistry for the respective antigens on colorectal cancer specimens, obtained by surgical resections from 21 patients with CRC. Immunohistochemical results were confirmed by examination of protein levels in tissue lysates and nuclear extracts using western blotting. Non-neoplastic tissue specimens resected well outside the tumour border served as controls. COX-2 expression was found to be markedly enhanced in the neoplastic epithelium compared with controls. This was paralleled by a significantly higher expression of IL-1beta, IL-6 and p65. Serial sections revealed consistent cellular colocalizations of respective antigens in the neoplastic epithelium. Statistically, a significant correlation between expression of COX-2 and IL-1beta, IL-6 and p65 was found. Comparable results were obtained for stromal cells like macrophages and myofibroblasts. Further examination of nuclear extracts from CRC-specimens by western blotting confirmed a higher content of p65 protein compared with non-neoplastic control tissues. Therefore, our study provides evidence for an association between expression of COX-2 and IL-1beta, IL-6 and p65 in human CRC. The results are consistent with the thesis that proinflammatory cytokines such as IL-1beta and IL-6 may be accountable for the overexpression of COX-2 in CRC. Finally, the study corroborates a role for NF-kappaB in the control of COX-2 gene transcription in CRC. 
Given an antiapoptotic role for COX-2 in tumour cells, inhibition of NF-kappaB may offer an important strategy to interfere with the development and progression of CRC." }, { "pmid": "12657625", "title": "Integrin alpha2 and extracellular signal-regulated kinase are functionally linked in highly malignant autocrine transforming growth factor-alpha-driven colon cancer cells.", "abstract": "Recently, we have shown that autocrine transforming growth factor-alpha (TGF-alpha) controls the expression of integrin alpha2, cell adhesion to collagen IV and motility in highly progressed HCT116 colon cancer cells (Sawhney, R. S., Zhou, G-H. K., Humphrey, L. E., Ghosh, P., Kreisberg, J. I., and Brattain, M. G. (2002) J. Biol. Chem. 277, 75-86). We now report that expression of basal integrin alpha2 and its biological effects are controlled by constitutive activation of the extracellular signal-regulated/mitogen-activated protein kinase (ERK/MAPK) pathway. Treatment of cells with selective mitogen-activated protein kinase kinase (MEK) inhibitors PD098059 and U0126 showed that integrin alpha2 expression, cell adhesion, and activation of ERK are inhibited in a parallel concentration-dependent fashion. Moreover, autocrine TGF-alpha-mediated epidermal growth factor receptor activation was shown to control the constitutive activation of the ERK/MAPK pathway, since neutralizing antibody to the epidermal growth factor receptor was able to block basal ERK activity. TGF-alpha antisense-transfected cells also showed attenuated activation of ERK. Using a real time electric cell impedance sensing technique, it was shown that ERK-dependent integrin alpha2-mediated cell micromotion signaling is controlled by autocrine TGF-alpha. Thus, this study implicates ERK/MAPK signaling activated by endogenous TGF-alpha as one of the mechanistic features controlling metastatic spread." }, { "pmid": "17854143", "title": "Correlation of IL-8 with induction, progression and metastatic potential of colorectal cancer.", "abstract": "AIM\nTo investigate the expression profile of IL-8 in inflammatory and malignant colorectal diseases to evaluate its potential role in the regulation of colorectal cancer (CRC) and the development of colorectal liver metastases (CRLM).\n\n\nMETHODS\nIL-8 expression was assessed by quantitative real-time PCR (Q-RT-PCR) and the enzyme-linked immunosorbent assay (ELISA) in resected specimens from patients with ulcerative colitis (UC, n = 6) colorectal adenomas (CRA, n = 8), different stages of colorectal cancer (n = 48) as well as synchronous and metachronous CRLM along with their corresponding primary colorectal tumors (n = 16).\n\n\nRESULTS\nIL-8 mRNA and protein expression was significantly up-regulated in all pathological colorectal entities investigated compared with the corresponding neighboring tissues. However, in the CRC specimens IL-8 revealed a significantly more pronounced overexpression in relation to the CRA and UC tissues with an average 30-fold IL-8 protein up-regulation in the CRC specimens in comparison to the CRA tissues. Moreover, IL-8 expression revealed a close correlation with tumor grading. Most interestingly, IL-8 up-regulation was most enhanced in synchronous and metachronous CRLM, if compared with the corresponding primary CRC tissues. 
Herein, an up to 80-fold IL-8 overexpression in individual metachronous metastases compared to normal tumor neighbor tissues was found.\n\n\nCONCLUSION\nOur results strongly suggest an association between IL-8 expression, induction and progression of colorectal carcinoma and the development of colorectal liver metastases." }, { "pmid": "12237895", "title": "Serum HCG beta, CA 72-4 and CEA are independent prognostic factors in colorectal cancer.", "abstract": "In colorectal cancer, stage is considered to be the strongest prognostic factor, but also serum tumour markers have been reported to be of prognostic value. The aim of our study was to investigate the prognostic value of serum carcinoembryonic antigen (CEA), CA 19-9, CA 242, CA 72-4 and free beta subunit of human chorionic gonadotropin (hCG beta) in colorectal cancer. Preoperative serum samples were obtained from 204 colorectal cancer patients, including 31 patients with Dukes' A, 70 with Dukes' B, 49 with Dukes' C and 54 with Dukes' D cancer. The serum levels of CEA, CA 19-9, CA 242 and CA 72-4 were measured with commercial kits with cut-off values of 5 microg/L for CEA, 37 kU/L for CA 19-9, 20 kU/L for CA 242 and 6 kU/L for CA 72-4. The serum hCG beta was quantitated by an immunofluorometric assay (IFMA) with 2 pmol/L as a cut-off value. Survival analyses were performed with Kaplan-Meier life tables, log-rank test and Cox proportional hazards model. The sensitivity was 44% for CEA, 26% for CA 19-9, 36% for CA 242, 27% for CA 72-4 and 16% for hCG beta. The overall 5-year survival was 55%, and in Dukes' A, B, C and D cancers the survival was 89%, 77%, 52% and 3%, respectively. Elevated serum values of all markers correlated with worse survival (p < 0.001). In Cox multivariate analysis, the strongest prognostic factor was Dukes' stage (p < 0.001), followed by tumour location (p = 0.002) and preoperative serum markers hCG beta (p = 0.002), CA 72-4 (p = 0.003) and CEA (p = 0.005). In conclusion, elevated CEA, CA 19-9, CA 242, CA 72-4 and hCG beta relate to poor outcome in colorectal cancer. In multivariate analysis, independent prognostic significance was observed with hCG beta, CA 72-4 and CEA." }, { "pmid": "12704195", "title": "Subcellular localization and tumor-suppressive functions of 15-lipoxygenase 2 (15-LOX2) and its splice variants.", "abstract": "15-Lipoxygenase 2 (15-LOX2), the most abundant arachidonate (AA)-metabolizing enzyme expressed in adult human prostate, is a negative cell-cycle regulator in normal human prostate epithelial cells. Here we study the subcellular distribution of 15-LOX2 and report its tumor-suppressive functions. Immunocytochemistry and biochemical fractionation reveal that 15-LOX2 is expressed at multiple subcellular locations, including cytoplasm, cytoskeleton, cell-cell border, and nucleus. Surprisingly, the three splice variants of 15-LOX2 we previously cloned, i.e. 15-LOX2sv-a/b/c, are mostly excluded from the nucleus. A potential bi-partite nuclear localization signal (NLS),203RKGLWRSLNEMKRIFNFRR221, is identified in the N terminus of 15-LOX2, which is retained in all splice variants. Site-directed mutagenesis reveals that this putative NLS is only partially involved in the nuclear import of 15-LOX2. To elucidate the relationship between nuclear localization, enzymatic activity, and tumor suppressive functions, we established PCa cell clones stably expressing 15-LOX2 or 15-LOX2sv-b. 
The 15-LOX2 clones express 15-LOX2 in the nuclei and possess robust enzymatic activity, whereas 15-LOX2sv-b clones show neither nuclear protein localization nor AA-metabolizing activity. To our surprise, both 15-LOX2- and 15-LOX2sv-b-stable clones proliferate much slower in vitro when compared with control clones. More importantly, when orthotopically implanted in nude mouse prostate, both 15-LOX2 and 15-LOX2sv-b suppress PC3 tumor growth in vivo. Together, these results suggest that both 15-LOX2 and 15-LOX2sv-b suppress prostate tumor development, and the tumor-suppressive functions apparently do not necessarily depend on AA-metabolizing activity and nuclear localization." }, { "pmid": "17476687", "title": "Secreted frizzled-related protein 4 inhibits proliferation and metastatic potential in prostate cancer.", "abstract": "BACKGROUND\nSecreted frizzled-related proteins (sFRP4) inhibits Wnt signaling and thus cellular proliferation in androgen-independent prostate cancer cells in vitro. However, increased expression of membranous sFRP4 is associated with a good prognosis in human localized androgen-dependent prostate cancer, suggesting a role for sFRP4 in early stage disease. Here, we investigated the phenotype of sFRP4 overexpression in an androgen-dependent prostate cancer model.\n\n\nMETHODS\nAn sFRP4-overexpressing androgen-dependent (LNCaP) prostate cancer model was established to assess changes in cellular proliferation, the expression, and subcellular localization of adhesion molecules and cellular invasiveness, and compared with the findings in sFRP4-overexpressing androgen-independent cells (PC3).\n\n\nRESULTS\nsFRP4 overexpression in both cell line models resulted in a morphologic change to a more epithelioid cell type with increased localization of beta-catenin and cadherins (E-cadherin in LNCaP, N-cadherin in PC3) to the cell membrane. Functionally, sFRP4 overexpression was associated with a decreased rate of proliferation (P = 0.0005), decreased anchorage-independent growth (P < 0.001), and decreased invasiveness in PC3 cells (P < 0.0001). Furthermore, increased membranous sFRP4 expression was associated with increased membranous beta-catenin expression (P = 0.02) in a cohort of 224 localized human androgen-dependent prostate cancers.\n\n\nCONCLUSIONS\nThese data suggest that sFRP4 is an inhibitor of prostate cancer growth and invasion in vitro independent of androgen receptor (AR) signaling with correlative evidence in human androgen-dependent disease suggesting similar changes in the clinical setting. Consequently, potential therapeutic strategies to modulate Wnt signaling by sFRP4 will be relevant to both localized androgen-dependent prostate cancer and advanced metastatic disease." }, { "pmid": "15651028", "title": "Modulation of CXCL14 (BRAK) expression in prostate cancer.", "abstract": "BACKGROUND\nRecent studies suggest inflammatory processes may be involved in the development or progression of prostate cancer. Chemokines are a family of cytokines that can play several roles in cancer progression including angiogenesis, inflammation, cell recruitment, and migration.\n\n\nMETHODS\nReal-time quantitative RT-PCR, in situ RNA hybridization, laser capture microscopy, immunohistochemistry, and cDNA array based technologies were used to examine CXCL14 (BRAK) expression in paired normal and tumor prostate. 
To determine the role CXCL14 expression has on cancer progression, LAPC4 cells were engineered to overexpress mouse or human CXCL14, and xenograft studies were performed.\n\n\nRESULTS\nCXCL14 RNA expression was observed in normal and tumor prostate epithelium and focally in stromal cells adjacent to cancer. CXCL14 mRNA was significantly upregulated in localized prostate cancer and positively correlated with Gleason score. CXCL14 levels were unchanged in BPH specimens. LAPC4 cells expressing CXCL14 resulted in a 43% tumor growth inhibition (P = 0.019) in vivo compared to vector only xenografts.\n\n\nCONCLUSIONS\nCXCL14 mRNA upregulation is a common feature in prostate cancer. The finding that CXCL14 expression inhibits tumor growth suggests this gene has tumor suppressive functions." }, { "pmid": "18065961", "title": "Mapping of TMPRSS2-ERG fusions in the context of multi-focal prostate cancer.", "abstract": "TMPRSS2-ERG gene fusion leading to the androgenic induction of the ERG proto-oncogene expression is a highly prevalent oncogenic alteration in prostate tumor cells. Prostate cancer is a multi-focal disease, and the origins as well as biological contribution of multiple cancer foci remain unclear with respect to prostate cancer onset or progression. To assess the role of TMPRSS2-ERG alteration in prostate cancer onset and/or progression, we have evaluated the status of fusion transcripts in benign glands, prostatic intraepithelial neoplasia (PIN) and multiple cancer foci of each prostate. Quantitative expression of TMPRSS2-ERG fusion type A and C transcripts was analyzed in benign, tumor and PIN areas, selected from whole-mount radical prostatectomy slides. TMPRSS2-ERG expression was correlated with clinicopathological features. Overall, 30 of 45 (67%) patients exhibited TMPRSS2-ERG fusion transcripts in at least one tumor focus. Of 80 tumor foci analyzed, 39 had TMPRSS2-ERG fusion (type A only: 30, type C only: 2, both types A and C: 7), with predominant detection of the TMPRSS2-ERG fusion type A (27/30, 90%) in the index tumors. Of 14 PIN lesions, 2 were positive for type A fusion. Frequent presence of the TMPRSS2-ERG in index tumors suggests critical roles of ERG alterations in the onset and progression of a large subset of prostate cancer. However, heterogeneity of the TMPRSS2-ERG detection in the context of multiple cancer foci and its frequency in PIN also support the role of other genomic alterations in the origins of prostate cancer." }, { "pmid": "17971772", "title": "Expression of the TMPRSS2:ERG fusion gene predicts cancer recurrence after surgery for localised prostate cancer.", "abstract": "The prostate-specific gene, TMPRSS2 is fused with the gene for the transcription factor ERG in a large proportion of human prostate cancers. The prognostic significance of the presence of the TMPRSS2:ERG gene fusion product remains controversial. We examined prostate cancer specimens from 165 patients who underwent surgery for clinically localised prostate cancer between 1998 and 2006. We tested for the presence of TMPRSS2:ERG gene fusion product, using RT-PCR and direct sequencing. We conducted a survival analysis to determine the prognostic significance of the presence of the TMPRSS2:ERG fusion gene on the risk of prostate cancer recurrence, adjusting for the established prognostic factors. We discovered that the fusion gene was expressed within the prostate cancer cells in 81 of 165 (49.1%) patients. 
Of the 165 patients, 43 (26.1%) developed prostate-specific antigen (PSA) relapse after a mean follow-up of 28 months. The subgroup of patients with the fusion protein had a significantly higher risk of recurrence (58.4% at 5 years) than did patients who lacked the fusion protein (8.1%, P<0.0001). In a multivariable analysis, the presence of gene fusion was the single most important prognostic factor; the adjusted hazard ratio for disease recurrence for patients with the fusion protein was 8.6 (95% CI=3.6-20.6, P<0.0001) compared to patients without the fusion protein. Among prostate cancer patients treated with surgery, the expression of TMPRSS2:ERG fusion gene is a strong prognostic factor and is independent of grade, stage and PSA level." }, { "pmid": "16762975", "title": "Vav3 oncogene is overexpressed and regulates cell growth and androgen receptor activity in human prostate cancer.", "abstract": "The purpose of this research was to investigate the role of Vav3 oncogene in human prostate cancer. We found that expression of Vav3 was significantly elevated in androgen-independent LNCaP-AI cells in comparison with that in their androgen-dependent counterparts, LNCaP cells. Vav3 expression was also detected in other human prostate cancer cell lines (PC-3, DU145, and 22Rv1) and, by immunohistochemistry analysis, was detected in 32% (26 of 82) of surgical specimens of human prostate cancer. Knockdown expression of Vav3 by small interfering RNA inhibited growth of both androgen-dependent LNCaP and androgen-independent LNCaP-AI cells. In contrast, overexpression of Vav3 promoted androgen-independent growth of LNCaP cells induced by epidermal growth factor. Overexpression of Vav3 enhanced androgen receptor (AR) activity regardless of the presence or absence of androgen and stimulated the promoters of AR target genes. These effects of Vav3 could be attenuated by either phosphatidylinositol 3-kinase (PI3K) inhibitors or dominant-negative Akt and were enhanced by cotransfection of PI3K. Moreover, phosphorylation of Akt was elevated in LNCaP cells overexpressing Vav3, which could be blocked by PI3K inhibitors. Finally, we ascertained that the DH domain of Vav3 was responsible for activation of AR. Taken together, our data show that overexpression of Vav3, through the PI3K-Akt pathway, inappropriately activates AR signaling axis and stimulates cell growth in prostate cancer cells. These findings suggest that Vav3 overexpression may be involved in prostate cancer development and progression." }, { "pmid": "17003780", "title": "Prognostic relevance of Tiam1 protein expression in prostate carcinomas.", "abstract": "The Rac-specific guanine nucleotide exchange factor, Tiam1, plays a major role in oncogenicity, tumour invasion and metastasis but its usefulness as a prognostic marker in human cancer has not been tested yet. In the present study, Tiam1 expression was analysed in benign secretory epithelium, pre-neoplastic high-grade prostatic intraepithelium neoplasia (HG-PIN) and prostate carcinomas of 60 R0-resected radical prostatectomy specimens by semiquantitative immunohistochemistry. Tiam1 proved significantly overexpressed in both HG-PIN (P<0.001) and prostate carcinomas (P<0.001) when compared to benign secretory epithelium. Strong Tiam1 overexpression (i.e. 
> or =3.5-fold) in prostate carcinomas relative to the respective benign prostatic epithelium was statistically significantly associated with disease recurrence (P=0.016), the presence of lymph vessel invasion (P=0.031) and high Gleason scores (GS) (i.e. > or =7) (P=0.044). Univariate analysis showed a statistically significant association of strong Tiam1 overexpression with decreased disease-free survival (DFS) (P=0.03). This prognostic effect of strong Tiam1 overexpression remained significant in multivariate analysis including preoperative prostate-specific antigen levels, pT stage, and GS (relative risk= 3.75, 95% confidence interval=1.06-13.16; P=0.04). Together, our data suggest that strong Tiam1 overexpression relative to the corresponding benign epithelial cells is a new and independent predictor of decreased DFS for patients with prostate cancer." }, { "pmid": "15466172", "title": "JAGGED1 expression is associated with prostate cancer metastasis and recurrence.", "abstract": "Recent studies suggest that NOTCH signaling can promote epithelial-mesenchymal transitions and augment signaling through AKT, an important growth and survival pathway in epithelial cells and prostate cancer in particular. Here we show that JAGGED1, a NOTCH receptor ligand, is significantly more highly expressed in metastatic prostate cancer as compared with localized prostate cancer or benign prostatic tissues, based on immunohistochemical analysis of JAGGED1 expression in human tumor samples from 154 men. Furthermore, high JAGGED1 expression in a subset of clinically localized tumors was significantly associated with recurrence, independent of other clinical parameters. These findings support a model in which dysregulation of JAGGED1 protein levels plays a role in prostate cancer progression and metastasis and suggest that JAGGED1 may be a useful marker in distinguishing indolent and aggressive prostate cancers." }, { "pmid": "12590567", "title": "Implications for RNase L in prostate cancer biology.", "abstract": "Recently, the interferon (IFN) antiviral pathways and prostate cancer genetics and have surprisingly converged on a single-strand specific, regulated endoribonuclease. Genetics studies from several laboratories in the U.S., Finland, and Israel, support the recent identification of the RNase L gene, RNASEL, as a strong candidate for the long sought after hereditary prostate cancer 1 (HPC1) allele. Results from these studies suggest that mutations in RNASEL predispose men to an increased incidence of prostate cancer, which in some cases reflect more aggressive disease and/or decreased age of onset compared with non-RNASEL linked cases. RNase L is a uniquely regulated endoribonuclease that requires 5'-triphosphorylated, 2',5'-linked oligoadenylates (2-5A) for its activity. The presence of both germline mutations in RNASEL segregating with disease within HPC-affected families and loss of heterozygosity (LOH) in tumor tissues suggest a novel role for the regulated endoribonuclease in the pathogenesis of prostate cancer. The association of mutations in RNASEL with prostate cancer cases further suggests a relationship between innate immunity and tumor suppression. It is proposed here that RNase L functions in counteracting prostate cancer by virtue of its ability to degrade RNA, thus initiating a cellular stress response that leads to apoptosis. 
This monograph reviews the biochemistry and genetics of RNase L as it relates to the pathobiology of prostate cancer and considers implications for future screening and therapy of this disease." }, { "pmid": "17587088", "title": "What proportion of patients referred to secondary care with iron deficiency anemia have colon cancer?", "abstract": "PURPOSE\nIron deficiency anemia can be the first presentation of right-sided colon cancer. There is an impression that because this presentation is nonspecific it may be associated with a longer delay from referral to diagnosis compared with those patients with symptoms of change in bowel habit and/or rectal bleeding caused by more distal colorectal cancer. This study was designed to determine the incidence of colon cancers in patients referred to the hospital with iron deficiency anemia and to determine what proportion of these patients were referred and diagnosed urgently in line with cancer waiting time targets.\n\n\nMETHODS\nA retrospective study was performed, including all patients referred to one district general hospital in 2003 whose blood indices met the criteria for significant iron deficiency anemia as defined by the Referral Guidelines for Suspected Cancer issued by the Department of Health in 1999, which defined iron deficiency anemia in the \"target wait\" criterion as a low hemoglobin (<11 g/dl in males and < 10 g/dl in postmenopausal females) with a mean corpuscular volume < 78 fl and/or a serum ferritin < 12 ng/ml. Patients with hemoglobinopathy were excluded. The underlying diagnosis reached for each patient was determined by using ICD10 C18-21. Case note review confirmed the diagnoses and yielded information on urgency of referral and time to diagnosis.\n\n\nRESULTS\nOf 513 patients referred with iron deficiency anemia in 2003, 142 (28 percent) met the eligibility criteria. Nine (6.3 percent) of these had colon cancer, including one (1.2 percent) female and eight (14 percent) males. Eight of nine cancers were in the right colon. Other patients with iron deficiency anemia were found to have benign upper or lower gastrointestinal disease (n = 125) or upper gastrointestinal cancer (n = 1). In seven patients, no cause was found. Of the nine patients with iron deficiency anemia who were found to have colon cancer, five had been referred urgently and four as routine. The mean delay from referral to diagnosis for these was 31 days for those referred urgently but 60 days for those referred routinely.\n\n\nCONCLUSIONS\nMales referred with iron deficiency anemia have a significant risk of having colon cancer. The risk seems lower in females; this gender difference has been observed in other studies and further evidence should be sought before advising any change in referral practice." }, { "pmid": "12826036", "title": "Epidermal growth factor receptor (EGFR) as a target in cancer therapy: understanding the role of receptor expression and other molecular determinants that could influence the response to anti-EGFR drugs.", "abstract": "The epidermal growth factor receptor (EGFR) is a rational target for cancer therapy because it is commonly expressed at a high level in a variety of solid tumours and it has been implicated in the control of cell survival, proliferation, metastasis and angiogenesis. However, despite evidence to suggest that EGFR expression is associated with a poor prognosis in some tumours (e.g. breast, head and neck carcinomas), the situation is by no means clear-cut. 
A number of issues are worthy of particular consideration, including how EGFR is measured and whether these assays are sensitive and reproducible, which mechanisms other than increased EGFR expression might cause the EGFR signalling drive to be increased, and the relationship, if any, between EGFR expression and the response to EGFR-targeted agents." }, { "pmid": "16916471", "title": "Activity and expression of urokinase-type plasminogen activator and matrix metalloproteinases in human colorectal cancer.", "abstract": "BACKGROUND\nMatrix metalloproteinase-2 (MMP-2), matrix metalloproteinase-9 (MMP-9), and urokinase-type plasminogen activator (uPA) are involved in colorectal cancer invasion and metastasis. There is still debate whether the activity of MMP-2 and MMP-9 differs between tumors located in the colon and rectum. We designed this study to determine any differences in the expression of MMP-2, MMP-9 and uPA system between colon and rectal cancer tissues.\n\n\nMETHODS\nCancer tissue samples were obtained from colon carcinoma (n = 12) and rectal carcinomas (n = 10). MMP-2 and MMP-9 levels were examined using gelatin zymography and Western blotting; their endogenous inhibitors, tissue inhibitor of metalloproteinase-2 (TIMP-2) and tissue inhibitor of metalloproteinase-1 (TIMP-1), were assessed by Western blotting. uPA, uPAR and PAI-1 were examined using enzyme-linked immunosorbent assay (ELISA). The activity of uPA was assessed by casein-plasminogen zymography.\n\n\nRESULTS\nIn both colon and rectal tumors, MMP-2, MMP-9 and TIMP-1 protein levels were higher than in corresponding paired normal mucosa, while TIMP-2 level in tumors was significantly lower than in normal mucosa. The enzyme activities or protein levels of MMP-2, MMP-9 and their endogenous inhibitors did not reach a statistically significant difference between colon and rectal cancer compared with their normal mucosa. In rectal tumors, there was an increased activity of uPA compared with the activity in colon tumors (P = 0.0266), however urokinase-type plasminogen activator receptor (uPAR) and plasminogen activator inhibitor-1 (PAI-1) showed no significant difference between colon and rectal cancer tissues.\n\n\nCONCLUSION\nThese findings suggest that uPA may be expressed differentially in colon and rectal cancers, however, the activities or protein levels of MMP-2, MMP-9, TIMP-1, TIMP-2, PAI-1 and uPAR are not affected by tumor location in the colon or the rectum." }, { "pmid": "15254658", "title": "Serum levels of soluble E-selectin in colorectal cancer.", "abstract": "Adhesion molecules play an important role in tumor metastasis. E-selectin can support adhesion of colon cancer cells through the recognition of specific carbohydrate ligands. High levels of soluble E-selectin (sE-selectin) had been reported in melanoma and some epithelial tumors, especially in colorectal carcinoma. The concentrations of the sE-selectin were investigated in serum samples of 64 patients (32 men and 32 women) with colorectal cancer and 16 healthy subjects. Median age was 57 (range 20-75). Nineteen patients were staged as Dukes D, 9 of whom had liver metastasis. Serum levels of sE-selectin were determined by ELISA. In the study group, sE-selectin concentrations (mean+/-SE, ng/ml) were not significantly elevated, compared with the control group (41.09+/-4.57 in the control group and 43.80+/-1.88 in patients, p>0.05). Mean sE-selectin levels were 42.27+/-1.85 in non-metastatic and 47.42+/-4.57 in metastatic patients (p>0.05). 
Serum concentrations of sE-selectin were significantly elevated in patients with colorectal cancer metastatic to liver (59.07+/-7.52) in comparison to other patients without liver metastasis (p=0.013). There were no significant correlations between sE-selectin levels and other parameters such as age of patients, stage of disease, histopathological differentiation or localization of primary tumor. Elevated sE-selectin levels were confirmed as correlating with poor overall survival. In conclusion, sE-selectin concentrations may not be used as a predictive marker of metastasis in colorectal carcinoma, but high levels of sE-selectin may support diagnosis of liver metastasis." }, { "pmid": "17562355", "title": "GM-CSF promotes differentiation of human dendritic cells and T lymphocytes toward a predominantly type 1 proinflammatory response.", "abstract": "OBJECTIVE\nWe recently demonstrated that patients with high levels of circulating dendritic cells (DC) and interleukin (IL)-12 are associated with reduced cancer relapse after hematopoietic stem cell transplantation. Identifying a growth factor that can promote these immune functions may have beneficial anti-tumor effects. We investigated the hypothesis that granulocyte-macrophage colony-stimulating factor (GM-CSF) induces IL-12 production and polarizes T lymphocytes toward a proinflammatory response.\n\n\nMATERIALS AND METHODS\nPeripheral blood mononuclear cells (PBMC), T lymphocytes, and antigen-presenting cells (APC) were cultured with GM-CSF and compared with no growth factors (control), G-CSF, or both GM-CSF and G-CSF. Cells were matured with either lipopolysaccharide or lectin (phytohemagglutinin). Type 1 and type 2 cytokines were measured by enzyme-linked immunosorbent assay. Induction of allogeneic T-lymphocyte proliferation induced by GM-CSF-stimulated APC was measured by mixed lymphocyte reaction. DC were measured by flow cytometry.\n\n\nRESULTS\nLevels of type 1 (IL-12, interferon-gamma, tumor necrosis factor-alpha) cytokines increased while type 2 (IL-10 and IL-4) cytokines decreased after stimulation of PBMC, T lymphocytes, and APC with GM-CSF. APC treated with GM-CSF induced higher proliferation of allogeneic T cells. CD11c and CD123-positive DC proliferated after exposure to GM-CSF. Both subtypes of DC (DC1 and DC2) were increased by GM-CSF.\n\n\nCONCLUSIONS\nGM-CSF induces production of type 1 proinflammatory cytokines by human PBMC, T lymphocytes, and APC. Type 2 cytokines are downregulated by GM-CSF and proliferation of allogeneic T cells is increased. These results demonstrate the potential for GM-CSF as a clinical agent for immune stimulation." }, { "pmid": "15701845", "title": "Prognostic significance of MMP-1 and MMP-3 functional promoter polymorphisms in colorectal cancer.", "abstract": "PURPOSE\nMatrix metalloproteinase (MMP) belongs to a large group of proteases capable of breaking essentially all components of the extracellular matrix. They are implicated in all steps of tumorogenesis, cancer invasion, and metastasis. Among them, metalloproteinase type 1 (MMP-1) is implicated in tumor invasion and metastasis in different types of cancers including colorectal cancer in which its expression was correlated with poor prognosis. A polymorphism in the promoter region of the MMP-1 gene leads to a variation of its level of transcription.\n\n\nSTUDY DESIGN\nMMP-1 -1607ins/delG and MMP-3 - 1612 ins/delA promoter polymorphisms were genotyped by multiplex PCR from 201 colorectal cancer patients. 
The median follow-up of patients was 30 months. The MMP genotypes were correlated to clinical outcome.\n\n\nRESULTS\nPatients with the -1607insG/-1607insG MMP-1 genotype had significantly worse specific survival than the others in the whole series (P < 0.04), in stage I to III patients (P < 0.001), and in patients stage I and II (P < 0.01). In multivariate analysis, MMP-1-1607insG allele showed to be an independent poor prognostic factor after adjustment on stage, age, and the use of adjuvant chemotherapy. MMP-3 polymorphism was not associated with survival.\n\n\nCONCLUSIONS\nIn the subgroups of nondistant metatastic patients (stages I and II, and stages I-III), an inverse relation between the number of MMP-1-1607insG allele and survival was observed suggesting a gene dosage effect. Our results are consistent with the importance of MMP-1-1607ins/delG functional polymorphism in regulating transcription level and with the relationship between MMP-1 expression and cancer invasion, metastasis, and prognosis." }, { "pmid": "14550954", "title": "Overexpression of Reg IV in colorectal adenoma.", "abstract": "Identification of molecular markers associated with colorectal adenoma may uncover critical events involved in the initiation and progression of colorectal cancer. Our previous studies, mainly based on suppression subtractive hybridization, have identified Reg IV as a strong candidate for a gene that is highly expressed in colorectal adenoma when compared to normal mucosa. In this study, we sought to determine the mRNA expression of Reg IV in colorectal adenoma, in comparison with normal colorectal mucosa and carcinoma in multiple samples. Semi-quantitative RT-PCR was performed in 12 colorectal adenomas and 10 concurrent carcinomas. Reg IV mRNA level was higher in all adenomas (12/12) (p=0.001) and in 9/10 concurrent colorectal carcinoma (p=0.021) when compared to paired normal colorectal mucosa. Northern blot analysis further confirmed these results. In situ hybridization with digoxigenin (DIG)-labeled cRNA was performed in 32 colorectal adenomas with varying degree of dysplasia. Compared with paired normal tissues, Reg IV was overexpressed in 74% (14/19) adenomas with mild or moderate dysplasia and 100% (13/13) cases of adenoma with severe dysplasia. In addition, higher levels of Reg IV mRNA was consistently scored in regions with more severe dysplasia within the same adenoma sample displaying varying degree of dysplasia. The strongest staining was seen within carcinomoutous areas of the 12 adenoma cases (p=0.002). Our results support that overexpression of Reg IV may be an early event in colorectal carcinogenesis. Detection of Reg IV overexpression may be useful in the early diagnosis of carcinomatous transformation of adenoma." }, { "pmid": "15665513", "title": "TNF-alpha activates MUC2 transcription via NF-kappaB but inhibits via JNK activation.", "abstract": "The molecular mechanisms responsible for TNF-alpha-mediated MUC2 intestinal mucin up-regulation in HM3 colon adenocarcinoma cells were analyzed using promoter-reporter assays of the 5'-flanking region of the MUC2 gene. Chemical inhibitors, mutant reporter constructs, and EMSA confirmed I-kappaB/NF-kappaB pathway involvement. Wortmannin, LY294002 and dominant negative Akt, as well as dominant negative NF-kappaB-inducing kinase (NIK) inhibited MUC2 reporter transcription, indicating that both phosphatidylinositol-3-OH kinase (PI3K)/Akt signaling pathway and NIK pathways mediate the effects of TNF-alpha. 
Wortmannin inhibited NF-kappaB binding and transcriptional activity without inhibiting NF-kappaB translocation to the nucleus, indicating that PI3K/Akt signaling activates NF-kappaB transcriptional activity directly. Our results demonstrate that TNF-alpha up-regulates MUC2 in human colon epithelial cells via several signaling pathways, involving both NIK and PI3K/Akt, which converge at the common IKK/I-kappaB/NF-kappaB pathway. TNF-alpha activated JNK, but JNK inhibitor SP600125 and dominant negative cJun consistently activated transcription, revealing a negative role for this signaling pathway. Thus TNF-alpha causes a net up-regulation of MUC2 gene expression in cultured colon cancer cells because NF-kappaB transcriptional activation of this gene is able to counter-balance the suppressive effects of the JNK pathway. However, the existence of this inhibitory JNK pathways suggests a mechanism whereby--in the absence of NF-kappaB activation--TNF-alpha production during inflammation in vivo could actually inhibit MUC2 production, giving rise to the defective mucosal protection which characterizes inflammatory bowel disease." }, { "pmid": "15836783", "title": "Expression of a novel carbonic anhydrase, CA XIII, in normal and neoplastic colorectal mucosa.", "abstract": "BACKGROUND\nCarbonic anhydrase (CA) isozymes may have an important role in cancer development. Some isozymes control pH homeostasis in tumors that appears to modulate the behaviour of cancer cells. CA XIII is the newest member of the CA gene family. It is a cytosolic isozyme which is expressed in a number of normal tissues. The present study was designed to investigate CA XIII expression in prospectively collected colorectal tumor samples.\n\n\nMETHODS\nBoth neoplastic and normal tissue specimens were obtained from the same patients. The analyses were performed using CA XIII-specific antibodies and an immunohistochemical staining method. For comparison, the tissue sections were immunostained for other cytosolic isozymes, CA I and II.\n\n\nRESULTS\nThe results indicated that the expression of CA XIII is down-regulated in tumor cells compared to the normal tissue. The lowest signal was detected in carcinoma samples. This pattern of expression was quite parallel for CA I and II.\n\n\nCONCLUSION\nThe down-regulation of cytosolic CA I, II and XIII in colorectal cancer may result from reduced levels of a common transcription factor or loss of closely linked CA1, CA2 and CA13 alleles on chromosome 8. Their possible role as tumor suppressors should be further evaluated." }, { "pmid": "17047970", "title": "Differential expression of genes encoding tight junction proteins in colorectal cancer: frequent dysregulation of claudin-1, -8 and -12.", "abstract": "BACKGROUND AND AIMS\nAs integral membrane proteins, claudins form tight junctions together with occludin. Several claudins were shown to be up-regulated in various cancer types. We performed an expression analysis of genes encoding tight junction proteins to display differential gene expression on RNA and protein level and to identify and validate potential targets for colorectal cancer (CRC) therapy.\n\n\nPATIENTS AND METHODS\nAmplified and biotinylated cRNA from 30 microdissected CRC specimen and corresponding normal tissues was hybridized to Affymetrix U133set GeneChips. Quantification of differential protein expression of claudin-1, -8 and -12 between normal and corresponding tumour tissues was performed by Western blot analyses. 
Paraffin-embedded CRC tissue samples, colon cancer cell lines and normal tissue microarray were analysed for protein expression of claudin-1 by immunohistochemistry (IHC).\n\n\nRESULTS\nClaudin-1 (CLDN1) and -12 (CLDN12) are frequently overexpressed in CRC, whereas claudin-8 (CLDN8) shows down-regulation in tumour tissue on RNA level. Quantification of proteins confirmed the overexpression of claudin-1 in tumour tissues, whereas changes of claudin-8 and -12 were not significantly detectable on protein level. IHC confirmed the markedly elevated expression level of claudin-1 in the majority of CRC, showing membranous and intracellular vesicular staining.\n\n\nCONCLUSIONS\nDifferential expression of genes encoding claudins in CRC suggests that these tight junction proteins may be associated to and involved in tumorigenesis. CLDN1 is frequently up-regulated in large proportion of CRC and may represent potential target molecule for blocking studies in CRC." }, { "pmid": "16142351", "title": "Interleukin-1 receptor antagonist gene polymorphism in human colorectal cancer.", "abstract": "Several studies indicate that local immunoregulation and associated cytokines have a putative role in the development of cancer. There is evidence that pro-inflammatory cytokines such as interleukin-1 (IL-1) are critically involved with tumour progression. IL-1 receptor antagonist (IL-1Ra) is known to down-regulate and limit the inflammatory response. Therefore we attempted to examine the influence of the known polymorphism of the IL-1Ra gene on the development of human colorectal cancer (CRC). The study included 125 patients with CRC and 134 controls. Variable number tandem repeat (VNTR) polymorphism in intron 2 of the IL-Ra gene was analysed by the polymerase chain reaction method. There was a significant difference in genotype distribution between CRC patients and controls (P=0.025) and also in allelic frequencies (P=0.012). In detail the carriage rate of allele 3 in CRC patients was significantly increased compared with controls (P=0.007). We also found that the allelic distribution differs significantly between colon and rectum (P=0.041) and that allele 3 was overabundant in colon. The frequency of allele 1 in CRC patients with localized disease (Dukes A+B) was higher compared with disseminated disease (Dukes C+D), (P=0.035). These findings therefore suggest that the IL-1Ra polymorphism is associated with colorectal carcinogenesis." }, { "pmid": "17373663", "title": "Beta2-microglobulin mutations in microsatellite unstable colorectal tumors.", "abstract": "Defects of DNA mismatch repair (MMR) cause the high level microsatellite instability (MSI-H) phenotype. MSI-H cancers may develop either sporadically or in the context of the hereditary nonpolyposis colorectal cancer (HNPCC) syndrome that is caused by germline mutations of MMR genes. In colorectal cancer (CRC), MSI-H is characterized by a dense lymphocytic infiltration, reflecting a high immunogenicity of these cancers. As a consequence of immunoselection, MSI-H CRCs frequently display a loss of human leukocyte antigen (HLA) class I antigen presentation caused by mutations of the beta2-microglobulin (beta2m) gene. To examine the implications of beta2m mutations during MSI-H colorectal tumor development, we analyzed the prevalence of beta2m mutations in MSI-H colorectal adenomas (n=38) and carcinomas (n=104) of different stages. Mutations were observed in 6/38 (15.8%) MSI-H adenomas and 29/104 (27.9%) MSI-H CRCs. 
A higher frequency of beta2m mutations was observed in MSI-H CRC patients with germline mutations of MMR genes MLH1 or MSH2 (36.4%) compared with patients without germline mutations (15.4%). The high frequency of beta2m mutations in HNPCC-associated MSI-H CRCs is in line with the hypothesis that immunoselection may be particularly pronounced in HNPCC patients with inherited predisposition to develop MSI-H cancers. beta2m mutations were positively related to stage in tumors without distant metastases (UICC I-III), suggesting that loss of beta2m expression may promote local progression of colorectal MSI-H tumors. However, no beta2m mutations were observed in metastasized CRCs (UICC stage IV, p=0.04). These results suggest that functional beta2m may be necessary for distant metastasis formation in CRC patients." }, { "pmid": "15059893", "title": "Hypermethylation and silencing of the putative tumor suppressor Tazarotene-induced gene 1 in human cancers.", "abstract": "A variety of tumor suppressor genes are down-regulated by hypermethylation during carcinogenesis. Using methylated CpG amplification-representation difference analysis, we identified a DNA fragment corresponding to the Tazarotene-induced gene 1 (TIG1) promoter-associated CpG island as one of the genes hypermethylated in the leukemia cell line K562. Because TIG1 has been proposed to act as a tumor suppressor, we tested the hypothesis that cytosine methylation of the TIG1 promoter suppresses its expression and causes a loss of responsiveness to retinoic acid in some neoplastic cells. We examined TIG1 methylation and expression status in 53 human cancer cell lines and 74 primary tumors, including leukemia and head and neck, breast, colon, skin, brain, lung, and prostate cancer. Loss of TIG1 expression was strongly associated with TIG1 promoter hypermethylation (P < 0.001). There was no correlation between TIG1 promoter methylation and that of retinoid acid receptor beta2 (RARbeta2), another retinoic-induced putative tumor suppressor gene (P = 0.78). Treatment with the DNA methyltransferase inhibitor 5-aza-2'-deoxycytidine for 5 days restored TIG1 expression in all eight silenced cell lines tested. TIG1 expression was also inducible by treatment with 1 micro M all-trans-retinoic acid for 3 days except in densely methylated cell lines. Treatment of the K562 leukemia cells with demethylating agent combined with all-trans-retinoic acid induced apoptosis. These findings indicate that silencing of TIG1 promoter by hypermethylation is common in human cancers and may contribute to the loss of retinoic acid responsiveness in some neoplastic cells." }, { "pmid": "15743036", "title": "Genetic disregulation of gene coding tumor necrosis factor alpha receptors (TNFalpha Rs) in colorectal cancer cells.", "abstract": "The expression of TNF ligand by malignant cells might be a mechanism for tumour immune escape. Genetic disregulation of gene coding TNF receptors was observed in neoplastic disease by an increased number of receptors on tumour cells and ligand-receptor activity. It might cause tumour proliferation and metastatic potential. Structure of TNF receptors influences TNF activity in vivo and structure of TNF R2 gene may suggest post-transcription modification based on alternative splicing. 
The aim of the study was to analyse the expression of gene coding TNF receptors R2 and R2/R7 (without exon 7) by estimation of mRNA expression of colorectal cancer cells in comparison with surrounding tissue free from neoplastic infiltration and searched for differently spliced TNFalphaR2/R7 isoforms. The study included fifty four patients with histopathologically confirmed adenocarcinoma (Stage III according to the AJC TNM Classification). Tissue samples removed from the tumour region were obtained from colorectal cancer patients undergoing surgical treatment. The samples were divided into two parts. The first one--was routinely examined histopathologically, the second one--was used for RNA extraction and the number of TNF and its receptors mRNA copies were subsequently quantified. The TNF and TNFRII genes expression were estimated based on the number of mRNA copies on 1 microg total RNA. The presence of TNFR2 and TNFR2/R7 isoforms in tumour, normal and metastatic cells was observed. The highest number of mRNA TNF copies and over expressed TNF genes were investigated and significantly noticed in metastatic cells (lymph nodes). The decreased number of TNFR2/R7 mRNA copies in metastatic lymph nodes secondarily influenced the decreased TNF soluble receptors' concentration. In conclusion, the genetic disregulation observed in neoplastic disease usually concerns dysfunction of cytokines receptor genes." }, { "pmid": "11956618", "title": "Expression of intercellular adhesion molecule-1 and prognosis in colorectal cancer.", "abstract": "Intercellular adhesion molecule-1 (ICAM-1) is a 90-kDa cell surface glycoprotein and is known to be a member of the immunoglobulin gene superfamily of adhesion molecules. It has been suggested that ICAM-1 expression on cancer cells might have a role as a suppressor of tumor progression under the host immune surveillance system. We studied the correlation between the expression of ICAM-1 and clinicopathological factors, as well as infiltration of tumor infiltrating lymphocytes (TILs) in colorectal cancer. Resected specimens from 96 patients with colorectal carcinoma were investigated using immunohistochemical staining with a monoclonal antibody against ICAM-1. As a result, the incidence of lymph node or liver metastasis was significantly lower in patients with ICAM-1-positive tumors than in those with ICAM-1-negative tumors. Infiltration of TILs was more frequently observed in the ICAM-1-positive tumors than in the ICAM-1-negative tumors. The prognosis of the patients with ICAM-1-negative tumors was significantly poorer than that of those with ICAM-1-positive tumors. In conclusion, these findings suggested that ICAM-1 expression is closely associated with metastasis and may be a useful indicator of prognosis in patients with colorectal cancer." }, { "pmid": "17348431", "title": "Prognostic significance of adiponectin levels in non-metastatic colorectal cancer.", "abstract": "BACKGROUND\nCirculating adiponectin levels are inversely correlated with the risk of colorectal cancer (CRC). 
This study was designed to evaluate the association between adiponectin levels and the clinicopathological variables of CRC and to analyze the possible prognostic value of adiponectin in predicting relapse-free survival.\n\n\nPATIENTS AND METHODS\nBaseline adiponectin and serum tumor markers were analyzed in 60 patients with non-metastatic CRC followed-up from time of surgery for at least three years or until relapse.\n\n\nRESULTS\nThe median adiponectin levels were lower in CRC patients (8.3 microg/ml) than controls (13.1 microg/ml, p <0.001). Moreover, median adiponectin concentration gradually decreased with increase in tumor stage. Low pre-surgical adiponectin levels were found in 52% of the relapsing patients compared to 26% (p=0.037) of the non-relapsing patients. Logistic regression analysis demonstrated that stage of disease (OR (odds ratio)=15.9, p<0.01) and low adiponectin levels (OR=4.66, p<0.05) were independent predictors of recurrent disease.\n\n\nCONCLUSION\nLow serum adiponectin might represent an adjunctive tool in risk prediction for CRC recurrence." }, { "pmid": "12553016", "title": "Expression and role of thrombospondin-1 in colorectal cancer.", "abstract": "Thrombospondin-1 (TSP1) inhibits angiogenesis and activates transforming growth factor beta-1 (TGF beta-1). The expression and role of TSP1 remain controversial. On 132 colorectal cancer specimens, we performed immunohistochemical staining of TSP1, TGF beta-1, latency-associated peptide (LAP) and CD34, besides performing in situ hybridization (ISH) of TSP1. TSP1 was mainly localised in fibroblasts of the tumor stroma on ISH. The result revealed that 73 cases (55.3%) were evaluated as high-TSP1 while 84 cases (63.6%) were evaluated as high-TGF beta-1. The expression of TSP1 correlated significantly with vessel counts (p = 0.016) and TGF beta-1 expression (p = 0.043). Cox proportional hazard analysis showed that TSP1 expression was significantly correlated with independent prognostic factors. The present study furnishes evidence indicating that TSP1 is expressed in tumor stroma, inhibits tumor angiogenesis and suppresses tumor growth by activating TGF beta-1." }, { "pmid": "12408769", "title": "Relationship between tissue factor expression and hepatic metastasis and prognosis in rectal cancer.", "abstract": "OBJECTIVE\nTo investigate the correlation between tissue factor (TF) expression and hepatic metastasis and prognosis in rectal cancer.\n\n\nMETHODS\nTF expression was retrospectively studied by immunohistochemical method in specimens of 40 rectal cancer, 3 hepatic metastasis and 6 benign adenoma with relation to their clinicopathologic data.\n\n\nRESULTS\n1. TF expression was detected in 20 (50%) of the 40 primary rectal cancer specimens and all the 3 hepatic metastatic specimens, but not in the 6 benign adenoma or normal mucosa of rectum, 2. Significant correlation was observed between TF expression and synchronic hepatic metastasis (P = 0.002) and heterochronic hepatic metastasis (P = 0.001) and 3. TF was a risk factor for the prognosis of primary rectal cancer (P = 0.024).\n\n\nCONCLUSION\nTissue factor expression may play a role in the process of developing hepatic metastasis. It may be considered as a new clinical indicator for monitor of hepatic metastasis and prognosis of primary rectal cancer." 
}, { "pmid": "17615053", "title": "Polymorphisms in the cytochrome P450 genes CYP1A2, CYP1B1, CYP3A4, CYP3A5, CYP11A1, CYP17A1, CYP19A1 and colorectal cancer risk.", "abstract": "BACKGROUND\nCytochrome P450 (CYP) enzymes have the potential to affect colorectal cancer (CRC) risk by determining the genotoxic impact of exogenous carcinogens and levels of sex hormones.\n\n\nMETHODS\nTo investigate if common variants of CYP1A2, CYP1B1, CYP3A4, CYP3A5, CYP11A1, CYP17A1 and CYP19A1 influence CRC risk we genotyped 2,575 CRC cases and 2,707 controls for 20 single nucleotide polymorphisms (SNPs) that have not previously been shown to have functional consequence within these genes.\n\n\nRESULTS\nThere was a suggestion of increased risk, albeit insignificant after correction for multiple testing, of CRC for individuals homozygous for CYP1B1 rs162558 and heterozygous for CYP1A2 rs2069522 (odds ratio [OR] = 1.36, 95% confidence interval [CI]: 1.03-1.80 and OR = 1.34, 95% CI: 1.00-1.79 respectively).\n\n\nCONCLUSION\nThis study provides some support for polymorphic variation in CYP1A2 and CYP1B1 playing a role in CRC susceptibility." }, { "pmid": "15599946", "title": "The expression and regulation of ADAMTS-1, -4, -5, -9, and -15, and TIMP-3 by TGFbeta1 in prostate cells: relevance to the accumulation of versican.", "abstract": "BACKGROUND\nBenign prostatic hyperplasia (BPH) is characterized by a proportional increase in the size of the stromal compartment of the gland, involving alterations to extracellular matrix (ECM) components. Some of these changes have been associated with the activity and expression of transforming growth factor beta1 (TGFbeta1). Versican (chondroitin sulphate proteoglycan-2) is overexpressed in BPH and prostate cancer and potentially contributes to disease pathology. A sub-group of the ADAMTS lineage of metalloproteases possess versican-degrading properties and are potential regulators of proteoglycan accumulation associated with BPH. These enzymes have one major inhibitor in the ECM, tissue inhibitor of metalloproteinases (TIMP)-3.\n\n\nMETHODS\nThe effect of TGFbeta on mRNA expression in prostatic stromal cells was determined by real-time qRT-PCR using primers to ADAMTS-1, -4, -5, -9, -15, versican, and TIMP-3. MMP-inhibitory potential (TIMP activity) of conditioned medium was measured using a fluorometric peptide substrate.\n\n\nRESULTS\nProstatic stromal cell cultures consistently expressed ADAMTS-1, -4, -5, -9, -15 and TIMP-3, in contrast to PC3, DU145, and LNCaP cells which failed to express at least two ADAMTS transcripts. In stromal cells, TGFbeta1 decreased ADAMTS-1, -5, -9, and -15 transcripts and increased ADAMTS-4, versican, and TIMP-3. TGFbeta also increased TIMP activity in conditioned medium.\n\n\nCONCLUSIONS\nThe induction of versican expression by TGFbeta in BPH stromal cells is in agreement with histological studies. The negative effect of TGFbeta1 on ADAMTS-1, -5, -9, and -15 coupled with increases in their inhibitor, TIMP-3 may aid the accumulation of versican in the stromal compartment of the prostate in BPH and prostate cancer." 
}, { "pmid": "16114059", "title": "Immunohistochemical expression of tumor antigens MAGE-A1, MAGE-A3/4, and NY-ESO-1 in cancerous and benign prostatic tissue.", "abstract": "OBJECTIVE\nTo investigate immunohistochemical expression of MAGE-A and NY-ESO-1/LAGE-1, cancer testis antigens in prostate tissues showing evidence of malignant transformation or benign hyperplasia.\n\n\nMETHODS\n112 prostate samples from patients undergoing surgery at the Urology Clinic at the Zagreb Clinical Hospital Center from 1995 to 2003 were investigated in this study. Of these, 92 carcinoma samples were obtained by radical prostatectomy, and 20 benign prostatic hyperplasia samples by transvesical prostatectomy. Three monoclonal antibodies were used for immunohistochemical staining: 77B for MAGE-A1, 57B for multi-MAGE-A and D8.38 for NY-ESO-1 expression.\n\n\nRESULTS\nExpression of MAGE-A1 was observed in 10.8% of carcinoma samples, whereas multi-MAGE-A and NY-ESO-1/LAGE-1 stained 85.9% and 84.8% of samples. Immunohistochemical staining was only detectable in the cytoplasm. A significant heterogeneity could be observed within a same tissue sample where areas with strong positivities coexisted with cancer testis antigens negative areas. Interestingly, a majority of 57B positive cases were also found to be D8.38 positive (correlation coefficient r=0.727 (P<0.01)). Cancer testis antigens expression was neither significantly correlated with PSA values nor with Gleason score. In benign prostatic hyperplasia tissues MAGE-A1 expression was detected in 5%, while 57B and D8.38 staining was observed in 15% samples, and in all cases percentages of positive cells were always <10%.\n\n\nCONCLUSION\nOur data underline the peculiar relevance of cancer testis antigens expression in prostate cancers, with potential implications regarding both diagnosis and therapy." }, { "pmid": "11279605", "title": "Aminopeptidase N regulated by zinc in human prostate participates in tumor cell invasion.", "abstract": "Aminopeptidase N (AP-N) degrades collagen type IV and is proposed to play a role in tumor invasion. However, the precise functions of AP-N in tumor cells and the relationship of AP-N to prostate cancer remains unclear. In our study, we examined a possible role for zinc in the regulation of AP-N enzymatic activity in relation to tumor cell invasion in human prostate. AP-N purified from human prostate was irreversibly inhibited by low concentrations of zinc (Ki = 11.2 microM) and bestatin. AP-N, which has zinc in the active center, was also inhibited by the chelating agents, EDTA, o-phenanthroline and EGTA. EDTA was shown to remove zinc from the enzyme. When the effects of zinc and bestatin on invasion of PC-3 cells were investigated in vitro using a Transwell cell-culture chamber, zinc and bestatin effectively suppressed cell invasion into Matrigel at the concentration range of 50-100 microM. These results strongly suggest that the suppression of PC-3 cell invasion by zinc is based on the inhibition of AP-N activity by zinc. We also evaluated the expression of AP-N to investigate the relationship with the progression of prostate disease in human cancerous prostate. AP-N was found to be located at the cytoplasmic membranes of prostate gland epithelial cells and to be expressed more in prostate cancer, while the expression of prostate-specific antigen (PSA), which is a useful marker for prostate cancer, was shown in normal and cancer tissues, suggesting that AP-N is potentially a good histological marker of prostate cancer. 
Thus, highly expressed AP-N in human cancerous prostate probably plays an important role in the invasion and metastasis of prostate cancer cells." }, { "pmid": "16276351", "title": "Brn-3a neuronal transcription factor functional expression in human prostate cancer.", "abstract": "Neuroendocrine differentiation has been associated with prostate cancer (CaP). Brn-3a (short isoform) and Brn-3c, transcriptional controllers of neuronal differentiation, were readily detectable in human CaP both in vitro and in vivo. Brn-3a expression, but not Brn-3c, was significantly upregulated in >50% of tumours. Furthermore, overexpression of this transcription factor in vitro (i) potentiated CaP cell growth and (ii) regulated the expression of a neuronal gene, the Nav1.7 sodium channel, concomitantly upregulated in human CaP, in an isoform-specific manner. It is concluded that targeting Brn-3a could be a useful strategy for controlling the expression of multiple genes that promote CaP." }, { "pmid": "18188713", "title": "Functional characterization of the GDEP promoter and three enhancer elements in retinoblastoma and prostate cell lines.", "abstract": "GDEP (gene differentially expressed in prostate cancer aka. PCAN1), a newly discovered gene with remarkable tissue specificity, is a promising candidate for regulatory analysis because it exhibits a high level of expression that is limited to two tissues, the retina and the prostate. As these two tissues have different origins and disparate functions it is likely that the regulatory mechanisms responsible for expression are not shared in their entirety. In addition, both the retina and prostate are prime targets for gene therapy. To date there have been no functional studies of the GDEP promoter. Therefore to understand tissue-specific expression of GDEP we constructed promoter expression constructs. To further characterize functional regulatory regions within the GDEP gene, we investigated potential regulatory components for tissue-specific expression in the 40 kb intron of this gene. We have identified a 1.5 kb prostate-specific promoter from the proximal region of the GDEP gene. A smaller 0.5 kb promoter exhibited minimal activity in the retinoblastoma cell line Y79, but not in the prostate cells tested. In addition we have investigated three enhancer elements located in the 40 kb intron of the GDEP gene. We identified two enhancer elements that increase reporter gene expression in prostate cell line LNCaP and one additional enhancer element that increases expression in the Y79 cell line approximately 8-fold making it a strong retinal-specific enhancer." }, { "pmid": "11719440", "title": "T-cell receptor gamma chain alternate reading frame protein (TARP) expression in prostate cancer cells leads to an increased growth rate and induction of caveolins and amphiregulin.", "abstract": "Previously, we showed that prostate and prostate cancer cells express a truncated T-cell receptor gamma chain mRNA that uses an alternative reading frame to produce a novel nuclear T-cell receptor gamma chain alternate reading frame protein (TARP). TARP is expressed in the androgen-sensitive LNCaP prostate cancer cell line but not in the androgen-independent PC3 prostate cancer cell line, indicating that TARP may play a role in prostate cancer progression. To elucidate the function of TARP, we generated a stable PC3 cell line that expresses TARP in a constitutive manner. Expression of TARP in PC3 cells resulted in a more rapid growth rate with a 5-h decrease in doubling time. 
cDNA microarray analysis of 6538 genes revealed that caveolin 1, caveolin 2, amphiregulin, and melanoma growth stimulatory activity alpha were significantly up-regulated, whereas IL-1beta was significantly down-regulated in PC3 cells expressing TARP. We also demonstrated that TARP expression is up-regulated by testosterone in LNCaP cells that express a functional androgen receptor. These results suggest that TARP has a role in regulating growth and gene expression in prostate cancer cells." }, { "pmid": "16598739", "title": "Characterization of ZAG protein expression in prostate cancer using a semi-automated microscope system.", "abstract": "OBJECTIVE\nZinc-alpha-2-glycoprotein 1 (ZAG) is a 41-kD secreted protein that is known to stimulate lipid degradation in adipocytes. The aim of this study was to determine how ZAG protein expression is associated with prostate cancer (PCa).\n\n\nMATERIALS AND METHODS\nAn immunohistochemistry analysis was performed on a 227 PCa tissue microarray cases. ZAG protein expression was assessed using a semi-automated cellular image analysis system.\n\n\nRESULTS\nZAG expression was associated with tumor stage (pT2 > pT3 > metastasis cases, P < 0.001), and was inversely associated with Gleason score on pathology (P = 0.01). ZAG intensity was predictive of biochemical recurrence (P = 0.002). On multivariate analysis including pT2 patients, the predictive factors of biochemical recurrence were ZAG expression (P = 0.016), Gleason score (P = 0.011), and surgical margin status (P = 0.047).\n\n\nCONCLUSIONS\nThis study characterized ZAG protein expression in PCa using a semi-automated system. ZAG expression level found to have an independent prognostic value for pT2 patients." }, { "pmid": "17949478", "title": "Fibrinogen synthesized by cancer cells augments the proliferative effect of fibroblast growth factor-2 (FGF-2).", "abstract": "BACKGROUND\nFibroblast growth factor (FGF)-2 is a critical growth factor in normal and malignant cell proliferation and tumor-associated angiogenesis. Fibrinogen and fibrin bind to FGF-2 and modulate FGF-2 functions. Furthermore, we have shown that extrahepatic epithelial cells are capable of endogenous production of fibrinogen.\n\n\nOBJECTIVE\nHerein we examined the role of fibrinogen and FGF-2 interactions on prostate and lung adenocarcinoma cell growth in vitro.\n\n\nMETHODS\nCell proliferation was measured by (3)H-thymidine uptake and the specificity of FGF-2-fibrinogen interactions was measured using wild-type and mutant FGF-2s, fibrinogen gamma-chain (FGG) RNAi and co-immunoprecipitation. Metabolic labeling, immunopurification and fluorography demonstrated de novo fibrinogen production.\n\n\nRESULTS\nFGF-2 stimulated DU-145 cell proliferation, whereas neither FGF-2 nor fibrinogen affected the growth of PC-3 or A549 cells. Fibrinogen augmented the proliferative effect of FGF-2 on DU-145 cells. The role of fibrinogen in FGF-2-enhanced DNA synthesis was confirmed using an FGF-2 mutant that exhibits no binding affinity for fibrinogen. FGG transcripts were present in PC-3, A549 and DU-145 cells, but only PC-3 and A549 cells produced detectable levels of intact protein. RNAi-mediated knockdown of FGG expression resulted in decreased production of fibrinogen protein and inhibited (3)H-thymidine uptake in A549 and PC-3 cells by 60%, which was restored by exogenously added fibrinogen. 
FGF-2 and fibrinogen secreted by the cells were present in the medium as a soluble complex, as determined by coimmunoprecipitation studies.\n\n\nCONCLUSIONS\nThese data indicate that endogenously synthesized fibrinogen promotes the growth of lung and prostate cancer cells through interaction with FGF-2." }, { "pmid": "17178897", "title": "The tumor metastasis suppressor gene Drg-1 down-regulates the expression of activating transcription factor 3 in prostate cancer.", "abstract": "The tumor metastasis suppressor gene Drg-1 has been shown to suppress metastasis without affecting tumorigenicity in immunodeficient mouse models of prostate and colon cancer. Expression of Drg-1 has also been found to have a significant inverse correlation with metastasis or invasiveness in various types of human cancer. However, how Drg-1 exerts its metastasis suppressor function remains unknown. In the present study, to elucidate the mechanism of action of the Drg-1 gene, we did a microarray analysis and found that induction of Drg-1 significantly inhibited the expression of activating transcription factor (ATF) 3, a member of the ATF/cyclic AMP-responsive element binding protein family of transcription factors. We also showed that Drg-1 attenuated the endogenous level of ATF3 mRNA and protein in prostate cancer cells, whereas Drg-1 small interfering RNA up-regulated the ATF3 expression. Furthermore, Drg-1 suppressed the promoter activity of the ATF3 gene, indicating that Drg-1 regulates ATF3 expression at the transcriptional level. Our immunohistochemical analysis on prostate cancer specimens revealed that nuclear expression of ATF3 was inversely correlated to Drg-1 expression and positively correlated to metastases. Consistently, we have found that ATF3 overexpression promoted invasiveness of prostate tumor cells in vitro, whereas Drg-1 suppressed the invasive ability of these cells. More importantly, overexpression of ATF3 in prostate cancer cells significantly enhanced spontaneous lung metastasis of these cells without affecting primary tumorigenicity in a severe combined immunodeficient mouse model. Taken together, our results strongly suggest that Drg-1 suppresses metastasis of prostate tumor cells, at least in part, by inhibiting the invasive ability of the cells via down-regulation of the expression of the ATF3 gene." } ]
Bioinformatics and Biology Insights
19812791
PMC2735966
null
Using a Seed-Network to Query Multiple Large-Scale Gene Expression Datasets from the Developing Retina in Order to Identify and Prioritize Experimental Targets
Understanding the gene networks that orchestrate the differentiation of retinal progenitors into photoreceptors in the developing retina is important not only due to its therapeutic applications in treating retinal degeneration but also because the developing retina provides an excellent model for studying CNS development. Although several studies have profiled changes in gene expression during normal retinal development, these studies offer at best only a starting point for functional studies focused on a smaller subset of genes. The large number of genes profiled at comparatively few time points makes it extremely difficult to reliably infer gene networks from a gene expression dataset. We describe a novel approach to identify and prioritize, from multiple gene expression datasets, a small subset of the genes that are likely to be good candidates for further experimental investigation. We report progress on addressing this problem by querying multiple large-scale expression datasets using a ‘seed network’ consisting of a small set of genes that are implicated by published studies in rod photoreceptor differentiation. We use the seed network to identify and sort a list of genes whose expression levels are highly correlated with those of multiple seed network genes in at least two of the five gene expression datasets. The fact that several of the genes in this list have been demonstrated, through experimental studies reported in the literature, to be important in rod photoreceptor function provides support for the utility of this approach in prioritizing targets for further experimental investigation. Based on Gene Ontology and KEGG pathway annotations for the list of genes obtained, considered in the context of other information available in the literature, we identified seven genes or groups of genes for possible inclusion in the gene network involved in differentiation of retinal progenitor cells into rod photoreceptors. Our approach to querying multiple gene expression datasets using a seed network constructed from known interactions between specific genes of interest provides a promising strategy for focusing hypothesis-driven experiments using large-scale ‘omics’ data.
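To make the querying step described above concrete, the following is a minimal illustrative sketch, not the authors' code: it correlates candidate genes against a set of seed genes in each expression dataset and keeps genes that track multiple seed genes in at least two datasets. The dict-based dataset layout, the example gene names, the use of Pearson correlation via numpy, and the 0.9 cutoff are all assumptions made for illustration; the paper's actual data formats and thresholds may differ.

```python
# Sketch only: seed-network query over several expression datasets.
import numpy as np

def correlated_partners(dataset, seeds, cutoff=0.9):
    """dataset: dict mapping gene name -> 1-D expression profile
    (all profiles measured over the same time points).
    Returns dict seed -> set of non-seed genes with |Pearson r| >= cutoff."""
    hits = {}
    for seed in seeds:
        if seed not in dataset:
            continue
        partners = set()
        for gene, profile in dataset.items():
            if gene in seeds:
                continue
            r = np.corrcoef(dataset[seed], profile)[0, 1]
            if abs(r) >= cutoff:
                partners.add(gene)
        hits[seed] = partners
    return hits

def prioritize(datasets, seeds, cutoff=0.9, min_datasets=2, min_seeds=2):
    """Keep (gene, seed) links supported in >= min_datasets datasets, then
    rank candidate genes by how many seed genes they are linked to."""
    support = {}                                  # (gene, seed) -> n datasets
    for ds in datasets:
        for seed, partners in correlated_partners(ds, seeds, cutoff).items():
            for gene in partners:
                support[(gene, seed)] = support.get((gene, seed), 0) + 1
    seeds_per_gene = {}
    for (gene, seed), n in support.items():
        if n >= min_datasets:
            seeds_per_gene.setdefault(gene, set()).add(seed)
    ranked = [(g, s) for g, s in seeds_per_gene.items() if len(s) >= min_seeds]
    return sorted(ranked, key=lambda x: len(x[1]), reverse=True)

# Toy usage with made-up five-point profiles; real inputs would be the
# genome-wide expression matrices from the five profiling studies.
rng = np.random.default_rng(0)
genes = ["Nrl", "Crx", "Nr2e3", "geneA", "geneB"]
ds1 = {g: rng.normal(size=5) for g in genes}
ds1["geneA"] = ds1["Nrl"] + 0.01 * rng.normal(size=5)   # geneA tracks Nrl
ds2 = {g: v.copy() for g, v in ds1.items()}
print(prioritize([ds1, ds2], {"Nrl", "Crx", "Nr2e3"}, min_seeds=1))
```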
Related Work
Several previous studies have examined ways of extending a known seed network (Bader, 2003; Cabusora et al. 2005; Can et al. 2005; Dougherty et al. 2000; Hashimoto et al. 2004; Shmulevich et al. 2002). Most of these focus on filtering or selecting candidate links based on some criteria (Bader, 2003; Cabusora et al. 2005; Dougherty et al. 2000; Hashimoto et al. 2004; Shmulevich et al. 2002) or on producing a single ranking of all genes in terms of the degree to which they are "related" to the entire seed network (Can et al. 2005). In contrast, we focus on producing a ranking for each seed gene as well as a ranking of those genes that are correlated with multiple seed genes. The latter is especially useful in showing, at a glance, the specific genes in the seed network that are likely to be involved in interactions with a candidate gene. The resulting prioritized list can then be further examined by human experts in the broader context of related literature and biological knowledge.
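As a complement to the sketch above, the short hypothetical helper below shows one way to produce a ranked list per seed gene together with the at-a-glance view of which seed genes each candidate is correlated with. The function name per_seed_rankings, the scores input format (seed gene -> candidate gene -> |r|), and the 0.9 cutoff are assumptions for illustration, not anything specified by the paper.

```python
# Sketch only: per-seed rankings plus a candidate-centric summary.
from collections import defaultdict

def per_seed_rankings(scores, cutoff=0.9):
    """scores: dict seed gene -> dict candidate gene -> |Pearson r|."""
    rankings = {
        seed: sorted(((g, r) for g, r in cands.items() if r >= cutoff),
                     key=lambda x: x[1], reverse=True)
        for seed, cands in scores.items()
    }
    by_candidate = defaultdict(set)        # candidate -> seed genes it tracks
    for seed, ranked in rankings.items():
        for gene, _ in ranked:
            by_candidate[gene].add(seed)
    # Candidates linked to the most seed genes come first.
    multi = sorted(by_candidate.items(), key=lambda x: len(x[1]), reverse=True)
    return rankings, multi

scores = {"Nrl":   {"geneA": 0.97, "geneB": 0.91},
          "Crx":   {"geneA": 0.95, "geneC": 0.93},
          "Nr2e3": {"geneA": 0.92}}
rankings, multi = per_seed_rankings(scores)
print(rankings["Nrl"])   # [('geneA', 0.97), ('geneB', 0.91)]
print(multi[0])          # geneA, linked to all three seed genes (set order may vary)
```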
[ "10096043", "16505381", "14555618", "7943765", "11733058", "15226823", "12093740", "15840709", "15239836", "10640715", "9390516", "15190009", "16767693", "12702772", "14985324", "11404411", "8288222", "7838158", "11880494", "9390562", "12490560", "16098712", "16094371", "14871865", "15256402", "2031852", "17653270", "16595632", "16854989", "11934739", "11336497", "15964824", "15173114", "14659019", "16777074", "17093405", "17148475", "7724523", "9287318", "11694879", "9570804", "11812828", "14625556", "15277472", "12533605", "11959830", "14744875", "11847074", "7664341", "1459449", "14500831", "11431459", "7736585", "14519200", "15980575", "14991054", "17044933", "17117495" ]
[ { "pmid": "10096043", "title": "The role of NeuroD as a differentiation factor in the mammalian retina.", "abstract": "NeuroD, a vertebrate homolog of Drosophila atonal gene, plays an important role in the differentiation of neuronal precursors (Lee et al., 1995). We have investigated whether NeuroD subserves a similar function in mammalian retinal neurogenesis. Expression of NeuroD is detected in successive stages of retinal neurogenesis and is associated with a differentiating population of retinal cells. The association of NeuroD predominantly with postmitotic precursors in early as well as late neurogenesis suggests that NeuroD expression plays an important role in the terminal differentiation of retinal neurons. The notion is supported by observations that overexpression of NeuroD during late neurogenesis promotes premature differentiation of late-born neurons, rod photoreceptors, and bipolar cells, and that NeuroD can interact specifically with the E-box element in the proximal promoter of the phenotype-specific gene, opsin." }, { "pmid": "16505381", "title": "Targeting of GFP to newborn rods by Nrl promoter and temporal expression profiling of flow-sorted photoreceptors.", "abstract": "The Maf-family transcription factor Nrl is a key regulator of photoreceptor differentiation in mammals. Ablation of the Nrl gene in mice leads to functional cones at the expense of rods. We show that a 2.5-kb Nrl promoter segment directs the expression of enhanced GFP specifically to rod photoreceptors and the pineal gland of transgenic mice. GFP is detected shortly after terminal cell division, corresponding to the timing of rod genesis revealed by birthdating studies. In Nrl-/- retinas, the GFP+ photoreceptors express S-opsin, consistent with the transformation of rod precursors into cones. We report the gene profiles of freshly isolated flow-sorted GFP+ photoreceptors from wild-type and Nrl-/- retinas at five distinct developmental stages. Our results provide a framework for establishing gene regulatory networks that lead to mature functional photoreceptors from postmitotic precursors. Differentially expressed rod and cone genes are excellent candidates for retinopathies." }, { "pmid": "14555618", "title": "Greedily building protein networks with confidence.", "abstract": "MOTIVATION\nWith genome sequences complete for human and model organisms, it is essential to understand how individual genes and proteins are organized into biological networks. Much of the organization is revealed by proteomics experiments that now generate torrents of data. Extracting relevant complexes and pathways from high-throughput proteomics data sets has posed a challenge, however, and new methods to identify and extract networks are essential. We focus on the problem of building pathways starting from known proteins of interest.\n\n\nRESULTS\nWe have developed an efficient, greedy algorithm, SEEDY, that extracts biologically relevant biological networks from protein-protein interaction data, building out from selected seed proteins. The algorithm relies on our previous study establishing statistical confidence levels for interactions generated by two-hybrid screens and inferred from mass spectrometric identification of protein complexes. We demonstrate the ability to extract known yeast complexes from high-throughput protein interaction data with a tunable parameter that governs the trade-off between sensitivity and selectivity. DNA damage repair pathways are presented as a detailed example. 
We highlight the ability to join heterogeneous data sets, in this case protein-protein interactions and genetic interactions, and the appearance of cross-talk between pathways caused by re-use of shared components. SIGNIFICANCE AND COMPARISON: The significance of the SEEDY algorithm is that it is fast, running time O[(E + V) log V] for V proteins and E interactions, a single adjustable parameter controls the size of the pathways that are generated, and an associated P-value indicates the statistical confidence that the pathways are enriched for proteins with a coherent function. Previous approaches have focused on extracting sub-networks by identifying motifs enriched in known biological networks. SEEDY provides the complementary ability to perform a directed search based on proteins of interest.\n\n\nAVAILABILITY\nSEEDY software (Perl source), data tables and confidence score models (R source) are freely available from the author." }, { "pmid": "7943765", "title": "Expression of the cysteine proteinase inhibitor cystatin C mRNA in rat eye.", "abstract": "BACKGROUND\nCystatin C, a naturally occurring inhibitor of cysteine proteinases, belongs to family 2 of the cystatin superfamily. While cystatins in general, and cystatin C specifically, are expressed in various cell types and found in biological fluids, cystatins in ocular structures have not been investigated. In the present study, the expression of cystatin C mRNA in the eye of the rat was studied.\n\n\nMETHODS\nTotal RNA was extracted from eyes as well as from pooled corneae, retinas, lenses, sclerae, and corneae of adult rats. Cystatin C mRNA was detected in the RNA samples by reverse transcriptase--polymerase chain reaction and Northern blot hybridization. In addition, in situ hybridizations of formalin-fixed cryostat sections were carried out using a digoxigenin-labeled cystatin C probe.\n\n\nRESULTS\nCystatin C mRNA was demonstrated in total RNAs extracted from the eye, sclera, and retina, but not in RNAs isolated from the cornea and lens. In situ hybridizations revealed cystatin C mRNA in most of the stromal cells of the sclera. In the retina, a strong signal was localized in the outer nuclear layer. The distribution of the reaction product suggested that in the retina Müller cells and rod cells are the primary sites of expression of cystatin C. In addition, some glial cells in the inner nuclear and ganglion cell layers were stained. No specific signal for cystatin C mRNA was detected in the cornea, lens, iris, ciliary body, and choroid.\n\n\nCONCLUSIONS\nIn the eye of the rat, significant levels of cystatin C mRNA are detected in the sclera and retina. In the sclera cystatin C may play a role in modulating the activities of cysteine proteinases, mostly cathepsins, involved in the turnover and remodeling of the stroma. In the retina, cystatins synthesized and presumably released by Müller cells and rod cells may have a protective function against the harmful effects of cysteine proteinases released under physiologic and pathologic conditions." }, { "pmid": "11733058", "title": "Comprehensive analysis of photoreceptor gene expression and the identification of candidate retinal disease genes.", "abstract": "To identify the full set of genes expressed by mammalian rods, we conducted serial analysis of gene expression (SAGE) by using libraries generated from mature and developing mouse retina. We identified 264 uncharacterized genes that were specific to or highly enriched in rods. 
Nearly half of all cloned human retinal disease genes are selectively expressed in rod photoreceptors. In silico mapping of the human orthologs of genes identified in our screen revealed that 86 map within intervals containing uncloned retinal disease genes, representing 37 different loci. We expect these data will allow identification of many disease genes, and that this approach may be useful for cloning genes involved in classes of disease where cell type-specific expression of disease genes is observed." }, { "pmid": "15226823", "title": "Genomic analysis of mouse retinal development.", "abstract": "The vertebrate retina is comprised of seven major cell types that are generated in overlapping but well-defined intervals. To identify genes that might regulate retinal development, gene expression in the developing retina was profiled at multiple time points using serial analysis of gene expression (SAGE). The expression patterns of 1,051 genes that showed developmentally dynamic expression by SAGE were investigated using in situ hybridization. A molecular atlas of gene expression in the developing and mature retina was thereby constructed, along with a taxonomic classification of developmental gene expression patterns. Genes were identified that label both temporal and spatial subsets of mitotic progenitor cells. For each developing and mature major retinal cell type, genes selectively expressed in that cell type were identified. The gene expression profiles of retinal Müller glia and mitotic progenitor cells were found to be highly similar, suggesting that Müller glia might serve to produce multiple retinal cell types under the right conditions. In addition, multiple transcripts that were evolutionarily conserved that did not appear to encode open reading frames of more than 100 amino acids in length (\"noncoding RNAs\") were found to be dynamically and specifically expressed in developing and mature retinal cell types. Finally, many photoreceptor-enriched genes that mapped to chromosomal intervals containing retinal disease genes were identified. These data serve as a starting point for functional investigations of the roles of these genes in retinal development and physiology." }, { "pmid": "12093740", "title": "A growth factor-dependent nuclear kinase phosphorylates p27(Kip1) and regulates cell cycle progression.", "abstract": "The cyclin-dependent kinase inhibitor, p27(Kip1), which regulates cell cycle progression, is controlled by its subcellular localization and subsequent degradation. p27(Kip1) is phosphorylated on serine 10 (S10) and threonine 187 (T187). Although the role of T187 and its phosphorylation by Cdks is well-known, the kinase that phosphorylates S10 and its effect on cell proliferation has not been defined. Here, we identify the kinase responsible for S10 phosphorylation as human kinase interacting stathmin (hKIS) and show that it regulates cell cycle progression. hKIS is a nuclear protein that binds the C-terminal domain of p27(Kip1) and phosphorylates it on S10 in vitro and in vivo, promoting its nuclear export to the cytoplasm. hKIS is activated by mitogens during G(0)/G(1), and expression of hKIS overcomes growth arrest induced by p27(Kip1). Depletion of KIS using small interfering RNA (siRNA) inhibits S10 phosphorylation and enhances growth arrest. p27(-/-) cells treated with KIS siRNA grow and progress to S/G(2 )similar to control treated cells, implicating p27(Kip1) as the critical target for KIS. 
Through phosphorylation of p27(Kip1) on S10, hKIS regulates cell cycle progression in response to mitogens." }, { "pmid": "15840709", "title": "Differential network expression during drug and stress response.", "abstract": "MOTIVATION\nThe application of microarray chip technology has led to an explosion of data concerning the expression levels of the genes in an organism under a plethora of conditions. One of the major challenges of systems biology today is to devise generally applicable methods of interpreting this data in a way that will shed light on the complex relationships between multiple genes and their products. The importance of such information is clear, not only as an aid to areas of research like drug design, but also as a contribution to our understanding of the mechanisms behind an organism's ability to react to its environment.\n\n\nRESULTS\nWe detail one computational approach for using gene expression data to identify response networks in an organism. The method is based on the construction of biological networks given different sets of interaction information and the reduction of the said networks to important response sub-networks via the integration of the gene expression data. As an application, the expression data of known stress responders and DNA repair genes in Mycobacterium tuberculosis is used to construct a generic stress response sub-network. This is compared to similar networks constructed from data obtained from subjecting M.tuberculosis to various drugs; we are thus able to distinguish between generic stress response and specific drug response. We anticipate that this approach will be able to accelerate target identification and drug development for tuberculosis in the future.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary Figures 1 through 6 on drug response networks and differential network analyses on cerulenin, chlorpromazine, ethionamide, ofloxacin, thiolactomycin and triclosan. Supplementary Tables 1 to 3 on predicted protein interactions. http://www.santafe.edu/~chris/DifferentialNW." }, { "pmid": "15239836", "title": "Clustering analysis of SAGE data using a Poisson approach.", "abstract": "Serial analysis of gene expression (SAGE) data have been poorly exploited by clustering analysis owing to the lack of appropriate statistical methods that consider their specific properties. We modeled SAGE data by Poisson statistics and developed two Poisson-based distances. Their application to simulated and experimental mouse retina data show that the Poisson-based distances are more appropriate and reliable for analyzing SAGE data compared to other commonly used distances or similarity measures such as Pearson correlation or Euclidean distance." }, { "pmid": "10640715", "title": "Expression of Chx10 and Chx10-1 in the developing chicken retina.", "abstract": "We have isolated full-length cDNAs of chick Chx10 and Chx10-1, two members of the paired type homeobox/CVC gene family. A comparison of sequences suggests that Chx10 is closely related to Alx/Vsx-2 and Vsx-2 of zebrafish and goldfish, respectively; while Chx10-1 is closely related to Vsx-1 of zebrafish and goldfish. Chx10 and Chx10-1 are expressed in the early retinal neuroepithelium, but not in the pigment epithelium and lens. The expression of Chx10 is present in most retinal neuroblasts, while Chx10-1 exhibits a novel pattern along the nasotemporal border. 
In the differentiating retina, both Chx10 and Chx10-1 are restricted to bipolar cells and are maintained at a low level in bipolar cells of the mature retina." }, { "pmid": "9390516", "title": "Crx, a novel Otx-like paired-homeodomain protein, binds to and transactivates photoreceptor cell-specific genes.", "abstract": "The otd/Otx gene family encodes paired-like homeodomain proteins that are involved in the regulation of anterior head structure and sensory organ development. Using the yeast one-hybrid screen with a bait containing the Ret 4 site from the bovine rhodopsin promoter, we have cloned a new member of the family, Crx (Cone rod homeobox). Crx encodes a 299 amino acid residue protein with a paired-like homeodomain near its N terminus. In the adult, it is expressed predominantly in photoreceptors and pinealocytes. In the developing mouse retina, it is expressed by embryonic day 12.5 (E12.5). Recombinant Crx binds in vitro not only to the Ret 4 site but also to the Ret 1 and BAT-1 sites. In transient transfection studies, Crx transactivates rhodopsin promoter-reporter constructs. Its activity is synergistic with that of Nrl. Crx also binds to and transactivates the genes for several other photoreceptor cell-specific proteins (interphotoreceptor retinoid-binding protein, beta-phosphodiesterase, and arrestin). Human Crx maps to chromosome 19q13.3, the site of a cone rod dystrophy (CORDII). These studies implicate Crx as a potentially important regulator of photoreceptor cell development and gene expression and also identify it as a candidate gene for CORDII and other retinal diseases." }, { "pmid": "15190009", "title": "Photoreceptor-specific nuclear receptor NR2E3 functions as a transcriptional activator in rod photoreceptors.", "abstract": "NR2E3, a photoreceptor-specific orphan nuclear receptor, is believed to play a pivotal role in the differentiation of photoreceptors. Mutations in the human NR2E3 gene and its mouse ortholog are associated with enhanced S-cones and retinal degeneration. In order to gain insights into the NR2E3 function, we performed temporal and spatial expression analysis, yeast two-hybrid screening, promoter activity assays and co-immunoprecipitation studies. The Nr2e3 expression was localized preferentially to the rod, and not to the cone, photoreceptor nuclei in rodent retina. The yeast two-hybrid screening of a retinal cDNA library, using NR2E3 as the bait, identified another orphan nuclear receptor NR1D1 (Rev-erbalpha). The interaction of NR2E3 with NR1D1 was confirmed by glutathione S-transferase pulldown and co-immunoprecipitation experiments. In transient transfection studies using HEK 293 cells, both NR2E3 and NR1D1 activated the promoters of rod phototransduction genes synergistically with neural retina leucine zipper (NRL) and cone-rod homeobox (CRX). All four proteins, NR2E3, NR1D1, NRL and CRX, could be co-immunoprecipitated from the bovine retinal nuclear extract, suggesting their existence in a multi-protein transcriptional regulatory complex in vivo. Our results demonstrate that NR2E3 is involved in regulating the expression of rod photoreceptor-specific genes and support its proposed role in transcriptional regulatory network(s) during rod differentiation." }, { "pmid": "16767693", "title": "Heparan sulfate regulation of progenitor cell fate.", "abstract": "Currently there is an intense effort being made to elucidate the factors that control stem and progenitor cell fate. 
Developments in our understanding of the FGF/FGFR pathway and its role as an effector of stem cell pluripotency have heightened expectations that a therapeutic use for stem cells will move from a possibility to a probability. Mounting evidence is revealing the molecular mechanisms by which fibroblast growth factor (FGF) signaling, together with a large number of other growth and adhesive factors, is controlled by the extracellular sugar, heparan sulfate (HS). What has resulted is a novel means of augmenting and thus regulating the growth factor control of stem and progenitor cell fate. Here, we review the numerous bioactivities of HS, and the development of strategies to implement HS-induced control of cell fate decisions." }, { "pmid": "12702772", "title": "Analysis of gene expression in the developing mouse retina.", "abstract": "In the visual system, differential gene expression underlies development of the anterior-posterior and dorsal-ventral axes. Here we present the results of a microarray screen to identify genes differentially expressed in the developing retina. We assayed gene expression in nasal (anterior), temporal (posterior), dorsal, and ventral embryonic mouse retina. We used a statistical method to estimate gene expression between different retina regions. Genes were clustered according to their expression pattern and were ranked within each cluster. We identified groups of genes expressed in gradients or with restricted patterns of expression as verified by in situ hybridization. A common theme for the identified genes is the differential expression in the dorsal-ventral axis. By analyzing gene expression patterns, we provide insight into the molecular organization of the developing retina." }, { "pmid": "14985324", "title": "Global gene expression analysis of the developing postnatal mouse retina.", "abstract": "PURPOSE\nPostnatal mouse retinal development involves glial and neuronal differentiation, vascularization, and the onset of vision. In the current study, the gene expression profiles of thousands of genes in the developing postnatal mouse retina were analyzed and compared in a large-scale, unbiased microarray gene expression analysis.\n\n\nMETHODS\nFor each of eight different time points during postnatal mouse retinal development, two separate sets of 30 retinas were pooled for RNA isolation, and gene expression was analyzed by hybridization to gene chips in triplicate (Mu74Av2; Affymetrix, Santa Clara, CA). Genes were sorted into clusters based on their expression profiles and intensities. Validation was accomplished by comparing the microarray expression profiles with real-time RT-PCR analysis of selected genes and by comparing selected expression profiles with predicted profiles based on previous studies.\n\n\nRESULTS\nThe Mu74Av2 chip contains more than 6000 known genes and 6500 estimated sequence tags (ESTs) from the mouse Unigene database. Of these, 2635 known gene sequences and 2794 ESTs were expressed at least threefold above background levels during retinal development. Expressed genes were clustered based on expression profiles allowing potential functions for specific genes during retinal development to be inferred by comparison to developmental events occurring at each time point. Specific data and potential functions for genes with various profiles are discussed. 
All data can be viewed online at http://www.scripps.edu/cb/friedlander/gene_expression/.\n\n\nCONCLUSIONS\nExpression analysis of thousands of different genes during normal postnatal mouse retinal development as reported in this study demonstrates that such an approach can be used to correlate gene expression with known functional differentiation, presenting the opportunity to infer functional correlates between gene expression and specific postnatal developmental events." }, { "pmid": "11404411", "title": "p27Kip1 and p57Kip2 regulate proliferation in distinct retinal progenitor cell populations.", "abstract": "In the developing vertebrate retina, progenitor cell proliferation must be precisely regulated to ensure appropriate formation of the mature tissue. Cyclin kinase inhibitors have been implicated as important regulators of proliferation during development by blocking the activity of cyclin-cyclin-dependent kinase complexes. We have found that the p27(Kip1) cyclin kinase inhibitor regulates progenitor cell proliferation throughout retinal histogenesis. p27(Kip1) is upregulated during the late G(2)/early G(1) phase of the cell cycle in retinal progenitor cells, where it interacts with the major retinal D-type cyclin-cyclin D1. Mice deficient for p27(Kip1) exhibited an increase in the proportion of mitotic cells throughout development as well as extensive apoptosis, particularly during the later stages of retinal histogenesis. Retroviral-mediated overexpression of p27(Kip1) in mitotic retinal progenitor cells led to premature cell cycle exit yet had no dramatic effects on Müller glial or bipolar cell fate specification as seen with the Xenopus cyclin kinase inhibitor, p27(Xic1). Consistent with the overexpression of p27(Kip1), mice lacking one or both alleles of p27(Kip1) maintained the same relative ratios of each major retinal cell type as their wild-type littermates. During the embryonic stages of development, when both p27(Kip1) and p57(Kip2) are expressed in retinal progenitor cells, they were found in distinct populations, demonstrating directly that different retinal progenitor cells are heterogeneous with respect to their expression of cell cycle regulators." }, { "pmid": "8288222", "title": "Molecular characterization of the murine neural retina leucine zipper gene, Nrl.", "abstract": "The NRL gene (D14S46E) is expressed in cells of human retina and encodes a putative DNA-binding protein of the leucine zipper family. Here we describe the analysis of the murine homolog of the NRL gene, Nrl. Various cDNAs resulting from alternate polyadenylation are characterized. The deduced polypeptide sequence is highly conserved between mouse and human, with an identical basic motif and leucine zipper domain. The nucleotide sequences in the 5' and 3'-untranslated regions also show significant homology. The 3'-untranslated region contains a polymorphic AGG-trinucleotide repeat. The murine Nrl gene consists of three exons; of these, the first is untranslated. The 5'-upstream promoter region has no canonical TATA box, but contains consensus binding site sequences for several DNA-binding proteins. Analysis of RNA from adult mouse tissues confirms the retina-specific expression of Nrl. This study provides the basis for dissecting the cis-regulatory elements involved in the retina-specific expression and for the development of an experimental model to investigate the function or any diseases associated with this gene in humans." 
}, { "pmid": "7838158", "title": "Cross-talk among ROR alpha 1 and the Rev-erb family of orphan nuclear receptors.", "abstract": "We have cloned Rev-erb beta, a novel isoform of the Rev-erb alpha orphan nuclear receptor. The DNA binding domains of Rev-erb alpha and beta are highly related to each other and to the retinoic acid related orphan receptor (ROR)/RZR subfamily of nuclear receptors. Indeed, we find that all three receptors bind as monomers to the sequence AATGT-AGGTCA. Whereas ROR alpha 1 constitutively activates transcription through this sequence, both isoforms of Rev-erb are inactive. When coexpressed, both Rev-erb isoforms suppress the transcriptional activity of ROR alpha 1. Our data define Rev-erb and ROR/RZR as a family of related receptors with opposing activities on overlapping regulatory networks." }, { "pmid": "11880494", "title": "The mouse Crx 5'-upstream transgene sequence directs cell-specific and developmentally regulated expression in retinal photoreceptor cells.", "abstract": "Crx, an Otx-like homeobox gene, is expressed primarily in the photoreceptors of the retina and in the pinealocytes of the pineal gland. The CRX homeodomain protein is a transactivator of many photoreceptor/pineal-specific genes in vivo, such as rhodopsin and the cone opsins. Mutations in Crx are associated with the retinal diseases, cone-rod dystrophy-2, retinitis pigmentosa, and Leber's congenital amaurosis, which lead to loss of vision. We have generated transgenic mice, using 5'- and/or 3'-flanking sequences from the mouse Crx homeobox gene fused to the beta-galactosidase (lacZ) reporter gene, and we have investigated the promoter function of the cell-specific and developmentally regulated expression of Crx. All of the independent transgenic lines commonly showed lacZ expression in the photoreceptor cells of the retina and in the pinealocytes of the pineal gland. We characterized the transgenic lines in detail for cell-specific lacZ expression patterns by 5-bromo-4-chloro-3-indolyl beta-D-galactoside staining and lacZ immunostaining. The lacZ expression was observed in developing and developed photoreceptor cells. This observation was confirmed by coimmunostaining of dissociated retinal cells with the lacZ and opsin antibodies. The ontogeny analysis indicated that the lacZ expression completely agrees with a temporal expression pattern of Crx during retinal development. This study demonstrates that the mouse Crx 5'-upstream genomic sequence is capable of directing a cell-specific and developmentally regulated expression of Crx in photoreceptor cells." }, { "pmid": "9390562", "title": "Crx, a novel otx-like homeobox gene, shows photoreceptor-specific expression and regulates photoreceptor differentiation.", "abstract": "We have isolated a novel otx-like homeobox gene, Crx, from the mouse retina. Crx expression is restricted to developing and mature photoreceptor cells. CRX bound and transactivated the sequence TAATCC/A, which is found upstream of several photoreceptor-specific genes, including the opsin genes from many species. Overexpression of Crx using a retroviral vector increased the frequency of clones containing exclusively rod photoreceptors and reduced the frequency of clones containing amacrine interneurons and Müller glial cells. In addition, presumptive photoreceptor cells expressing a dominant-negative form of CRX failed to form proper photoreceptor outer segments and terminals. 
Crx is a novel photoreceptor-specific transcription factor and plays a crucial role in the differentiation of photoreceptor cells." }, { "pmid": "12490560", "title": "Genetic rescue of cell number in a mouse model of microphthalmia: interactions between Chx10 and G1-phase cell cycle regulators.", "abstract": "Insufficient cell number is a primary cause of failed retinal development in the Chx10 mutant mouse. To determine if Chx10 regulates cell number by antagonizing p27(Kip1) activity, we generated Chx10, p27(Kip1) double null mice. The severe hypocellular defect in Chx10 single null mice is alleviated in the double null, and while Chx10-null retinas lack lamination, double null retinas have near normal lamination. Bipolar cells are absent in the double null retina, a defect that is attributable to a requirement for Chx10 that is independent of p27(Kip1). We find that p27(Kip1) is abnormally present in progenitors of Chx10-null retinas, and that its ectopic localization is responsible for a significant amount of the proliferation defect in this microphthalmia model system. mRNA and protein expression patterns in these mice and in cyclin D1-null mice suggest that Chx10 influences p27(Kip1) at a post-transcriptional level, through a mechanism that is largely dependent on cyclin D1. This is the first report of rescue of retinal proliferation in a microphthalmia model by deletion of a cell cycle regulatory gene." }, { "pmid": "16098712", "title": "Assessment and integration of publicly available SAGE, cDNA microarray, and oligonucleotide microarray expression data for global coexpression analyses.", "abstract": "Large amounts of gene expression data from several different technologies are becoming available to the scientific community. A common practice is to use these data to calculate global gene coexpression for validation or integration of other \"omic\" data. To assess the utility of publicly available datasets for this purpose we have analyzed Homo sapiens data from 1202 cDNA microarray experiments, 242 SAGE libraries, and 667 Affymetrix oligonucleotide microarray experiments. The three datasets compared demonstrate significant but low levels of global concordance (rc<0.11). Assessment against Gene Ontology (GO) revealed that all three platforms identify more coexpressed gene pairs with common biological processes than expected by chance. As the Pearson correlation for a gene pair increased it was more likely to be confirmed by GO. The Affymetrix dataset performed best individually with gene pairs of correlation 0.9-1.0 confirmed by GO in 74% of cases. However, in all cases, gene pairs confirmed by multiple platforms were more likely to be confirmed by GO. We show that combining results from different expression platforms increases reliability of coexpression. A comparison with other recently published coexpression studies found similar results in terms of performance against GO but with each method producing distinctly different gene pair lists." }, { "pmid": "16094371", "title": "Predictive models of molecular machines involved in Caenorhabditis elegans early embryogenesis.", "abstract": "Although numerous fundamental aspects of development have been uncovered through the study of individual genes and proteins, system-level models are still missing for most developmental processes. The first two cell divisions of Caenorhabditis elegans embryogenesis constitute an ideal test bed for a system-level approach. 
Early embryogenesis, including processes such as cell division and establishment of cellular polarity, is readily amenable to large-scale functional analysis. A first step toward a system-level understanding is to provide 'first-draft' models both of the molecular assemblies involved and of the functional connections between them. Here we show that such models can be derived from an integrated gene/protein network generated from three different types of functional relationship: protein interaction, expression profiling similarity and phenotypic profiling similarity, as estimated from detailed early embryonic RNA interference phenotypes systematically recorded for hundreds of early embryogenesis genes. The topology of the integrated network suggests that C. elegans early embryogenesis is achieved through coordination of a limited set of molecular machines. We assessed the overall predictive value of such molecular machine models by dynamic localization of ten previously uncharacterized proteins within the living embryo." }, { "pmid": "14871865", "title": "Growing genetic regulatory networks from seed genes.", "abstract": "MOTIVATION\nA number of models have been proposed for genetic regulatory networks. In principle, a network may contain any number of genes, so long as data are available to make inferences about their relationships. Nevertheless, there are two important reasons why the size of a constructed network should be limited. Computationally and mathematically, it is more feasible to model and simulate a network with a small number of genes. In addition, it is more likely that a small set of genes maintains a specific core regulatory mechanism.\n\n\nRESULTS\nSubnetworks are constructed in the context of a directed graph by beginning with a seed consisting of one or more genes believed to participate in a viable subnetwork. Functionalities and regulatory relationships among seed genes may be partially known or they may simply be of interest. Given the seed, we iteratively adjoin new genes in a manner that enhances subnetwork autonomy. The algorithm is applied using both the coefficient of determination and the Boolean-function influence among genes, and it is illustrated using a glioma gene-expression dataset.\n\n\nAVAILABILITY\nSoftware for the seed-growing algorithm will be available at the website for Probabilistic Boolean Networks: http://www2.mdanderson.org/app/ilya/PBN/PBN.htm" }, { "pmid": "15256402", "title": "Limited agreement among three global gene expression methods highlights the requirement for non-global validation.", "abstract": "MOTIVATION\nDNA microarrays have revolutionized biological research, but their reliability and accuracy have not been extensively evaluated. Thorough testing of microarrays through comparison to dissimilar gene expression methods is necessary in order to determine their accuracy.\n\n\nRESULTS\nWe have systematically compared three global gene expression methods on all available histologically normal samples from five human organ types. The data included 25 Affymetrix high-density oligonucleotide array experiments, 23 expressed sequence tag based expression (EBE) experiments and 5 SAGE experiments. The reported gene-by-gene expression patterns showed a wide range of correlations between pairs of methods. 
This level of agreement was sufficient for accurate clustering of datasets from the same tissue and dissimilar methods, but highlights the need for thorough validation of individual gene expression measurements by alternate, non-global methods. Furthermore, analyses of mRNA abundance distributions indicate limitations in the EBE and SAGE methods at both high- and low-expression levels." }, { "pmid": "2031852", "title": "A comparative analysis of N-myc and c-myc expression and cellular proliferation in mouse organogenesis.", "abstract": "The distribution of c-myc and N-myc transcripts during mouse organogenesis was investigated by in situ hybridization and compared to proliferation in several tissues. Only c-myc expression was found during the formation of cartilage, brown adipose tissue, glandula submandibularis, thymus and liver. There was a temporally and spatially ordered expression of N-myc only during the organogenesis of brain, retina and eye lens. In some organs (e.g., in lung and tooth bud), c-myc and N-myc were expressed in a striking complementary pattern that reflected the ontogenic origins of different tissue components. Transcripts of both genes were found in the early gut epithelium, but as formation of villi began, the spatial expression pattern of N-myc and c-myc diverged. The results suggest a link between the proliferative state of cell types and the differential expression of N-myc vs. c-myc. Specifically, c-myc is only expressed in rapidly proliferating tissues, while N-myc expression often persists through cytodifferentiation, e.g., during development of eye lens, retina, telencephalon and gut epithelium. Thus, in spite of the structural similarities of N-myc and c-myc genes and proteins their developmental expression patterns suggest different functional roles." }, { "pmid": "17653270", "title": "The cis-regulatory logic of the mammalian photoreceptor transcriptional network.", "abstract": "The photoreceptor cells of the retina are subject to a greater number of genetic diseases than any other cell type in the human body. The majority of more than 120 cloned human blindness genes are highly expressed in photoreceptors. In order to establish an integrative framework in which to understand these diseases, we have undertaken an experimental and computational analysis of the network controlled by the mammalian photoreceptor transcription factors, Crx, Nrl, and Nr2e3. Using microarray and in situ hybridization datasets we have produced a model of this network which contains over 600 genes, including numerous retinal disease loci as well as previously uncharacterized photoreceptor transcription factors. To elucidate the connectivity of this network, we devised a computational algorithm to identify the photoreceptor-specific cis-regulatory elements (CREs) mediating the interactions between these transcription factors and their target genes. In vivo validation of our computational predictions resulted in the discovery of 19 novel photoreceptor-specific CREs near retinal disease genes. Examination of these CREs permitted the definition of a simple cis-regulatory grammar rule associated with high-level expression. To test the generality of this rule, we used an expanded form of it as a selection filter to evolve photoreceptor CREs from random DNA sequences in silico. When fused to fluorescent reporters, these evolved CREs drove strong, photoreceptor-specific expression in vivo. 
This study represents the first systematic identification and in vivo validation of CREs in a mammalian neuronal cell type and lays the groundwork for a systems biology of photoreceptor transcriptional regulation." }, { "pmid": "16595632", "title": "A neurosphere-derived factor, cystatin C, supports differentiation of ES cells into neural stem cells.", "abstract": "Although embryonic stem (ES) cells are capable of unlimited proliferation and pluripotent differentiation, effective preparation of neural stem cells from ES cells are not achieved. Here, we have directly generated under the coculture with dissociated primary neurosphere cells in serum-free medium and the same effect was observed when ES cells were cultured with conditioned medium of primary neurosphere culture (CMPNC). ES-neural stem cells (NSCs) could proliferate for more than seven times and differentiate into neurons, astrocytes, and oligodendrocytes in vitro and in vivo. The responsible molecule in CMPNC was confirmed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, which turned out to be cystatin C. Purified cystatin C in place of the CMPNC could generate ES-NSCs efficiently with self-renewal and multidifferentiation potentials. These results reveal the validity of cystatin C for generating NSCs from ES cells." }, { "pmid": "16854989", "title": "Retinoic acid regulates the expression of photoreceptor transcription factor NRL.", "abstract": "NRL (neural retina leucine zipper) is a key basic motif-leucine zipper (bZIP) transcription factor, which orchestrates rod photoreceptor differentiation by activating the expression of rod-specific genes. The deletion of Nrl in mice results in functional cones that are derived from rod precursors. However, signaling pathways modulating the expression or activity of NRL have not been elucidated. Here, we show that retinoic acid (RA), a diffusible factor implicated in rod development, activates the expression of NRL in serum-deprived Y79 human retinoblastoma cells and in primary cultures of rat and porcine photoreceptors. The effect of RA is mimicked by TTNPB, a RA receptor agonist, and requires new protein synthesis. DNaseI footprinting and electrophoretic mobility shift assays (EMSA) using bovine retinal nuclear extract demonstrate that RA response elements (RAREs) identified within the Nrl promoter bind to RA receptors. Furthermore, in transiently transfected Y79 and HEK293 cells the activity of Nrl-promoter driving a luciferase reporter gene is induced by RA, and this activation is mediated by RAREs. Our data suggest that signaling by RA via RA receptors regulates the expression of NRL, providing a framework for delineating early steps in photoreceptor cell fate determination." }, { "pmid": "11934739", "title": "Analysis of matched mRNA measurements from two different microarray technologies.", "abstract": "MOTIVATION\n[corrected] The existence of several technologies for measuring gene expression makes the question of cross-technology agreement of measurements an important issue. 
Cross-platform utilization of data from different technologies has the potential to reduce the need to duplicate experiments but requires corresponding measurements to be comparable.\n\n\nMETHODS\nA comparison of mRNA measurements of 2895 sequence-matched genes in 56 cell lines from the standard panel of 60 cancer cell lines from the National Cancer Institute (NCI 60) was carried out by calculating correlation between matched measurements and calculating concordance between cluster from two high-throughput DNA microarray technologies, Stanford type cDNA microarrays and Affymetrix oligonucleotide microarrays.\n\n\nRESULTS\nIn general, corresponding measurements from the two platforms showed poor correlation. Clusters of genes and cell lines were discordant between the two technologies, suggesting that relative intra-technology relationships were not preserved. GC-content, sequence length, average signal intensity, and an estimator of cross-hybridization were found to be associated with the degree of correlation. This suggests gene-specific, or more correctly probe-specific, factors influencing measurements differently in the two platforms, implying a poor prognosis for a broad utilization of gene expression measurements across platforms." }, { "pmid": "11336497", "title": "Developmental expression of mouse Krüppel-like transcription factor KLF7 suggests a potential role in neurogenesis.", "abstract": "To identify potential functions for the Krüppel-like transcription factor KLF7, we have determined the spatiotemporal pattern of gene expression during embryogenesis and in the adult organism. We show that the profile of Klf7 expression predominantly involves the central and peripheral nervous systems and is broadly identified by three separate phases. The first phase occurs early in embryogenesis with increasingly strong expression in the spinal cord, notably in motor neurons of the ventral horn, in dorsal root ganglia, and in sympathetic ganglia. The second robust phase of Klf7 expression is confined to the early postnatal cerebral cortex and is downregulated thereafter. The third phase is characterized by high and sustained expression in the adult cerebellum and dorsal root ganglia. Functionally, these three phases coincide with establishment of neuronal phenotype in embryonic spinal cord, with synaptogenesis and development of mature synaptic circuitry in the postnatal cerebral cortex, and with survival and/or maintenance of function of adult sensory neurons and cerebellar granule cells. Consistent with Klf7 expression in newly formed neuroblasts, overexpression of the gene in cultured fibroblasts and neuroblastoma cells repressed cyclin D1, activated p21, and led to G1 growth arrest. Based on these data, we argue for multiple potential functions for KLF7 in the developing and adult nervous system; they include participating in differentiation and maturation of several neuronal subtypes and in phenotypic maintenance of mature cerebellar granule cells and dorsal root ganglia." }, { "pmid": "15964824", "title": "Transcription factor KLF7 is important for neuronal morphogenesis in selected regions of the nervous system.", "abstract": "The Krüppel-like transcription factors (KLFs) are important regulators of cell proliferation and differentiation in several different organ systems. The mouse Klf7 gene is strongly active in postmitotic neuroblasts of the developing nervous system, and the corresponding protein stimulates transcription of the cyclin-dependent kinase inhibitor p21waf/cip gene. 
Here we report that loss of KLF7 activity in mice leads to neonatal lethality and a complex phenotype which is associated with deficits in neurite outgrowth and axonal misprojection at selected anatomical locations of the nervous system. Affected axon pathways include those of the olfactory and visual systems, the cerebral cortex, and the hippocampus. In situ hybridizations and immunoblots correlated loss of KLF7 activity in the olfactory epithelium with significant downregulation of the p21waf/cip and p27kip1 genes. Cotransfection experiments extended the last finding by documenting KLF7's ability to transactivate a reporter gene construct driven by the proximal promoter of p27kip1. Consistent with emerging evidence for a role of Cip/Kip proteins in cytoskeletal dynamics, we also documented p21waf/cip and p27kip1 accumulation in the cytoplasm of differentiating olfactory sensory neurons. KLF7 activity might therefore control neuronal morphogenesis in part by optimizing the levels of molecules that promote axon outgrowth." }, { "pmid": "15173114", "title": "Coexpression analysis of human genes across many microarray data sets.", "abstract": "We present a large-scale analysis of mRNA coexpression based on 60 large human data sets containing a total of 3924 microarrays. We sought pairs of genes that were reliably coexpressed (based on the correlation of their expression profiles) in multiple data sets, establishing a high-confidence network of 8805 genes connected by 220,649 \"coexpression links\" that are observed in at least three data sets. Confirmed positive correlations between genes were much more common than confirmed negative correlations. We show that confirmation of coexpression in multiple data sets is correlated with functional relatedness, and show how cluster analysis of the network can reveal functionally coherent groups of genes. Our findings demonstrate how the large body of accumulated microarray data can be exploited to increase the reliability of inferences about gene function." }, { "pmid": "14659019", "title": "Comparing cDNA and oligonucleotide array data: concordance of gene expression across platforms for the NCI-60 cancer cells.", "abstract": "Microarray gene-expression profiles are generally validated one gene at a time by real-time RT-PCR. We describe here a different approach based on simultaneous mutual validation of large numbers of genes using two different expression-profiling platforms. The result described here for the NCI-60 cancer cell lines is a consensus set of genes that give similar profiles on spotted cDNA arrays and Affymetrix oligonucleotide chips. Global concordance is parameterized by a 'correlation of correlations' coefficient." }, { "pmid": "16777074", "title": "Gene expression profiles of mouse retinas during the second and third postnatal weeks.", "abstract": "Mouse retina undergoes crucial changes during early postnatal development. By using Affymetrix microarrays, we analyzed gene expression profiles of wild-type 129SvEv/C57BL/6 mouse retinas at postnatal days (P) 7, 10, 14, 18, and 21 and found significantly altered expression of 355 genes. Characterization of these 355 genes provided insight into physiologic and pathologic processes of mouse retinal development during the second and third postnatal weeks, a period that corresponds to human embryogenesis between weeks 12 and 28. These genes formed 6 groups with similar change patterns. 
Among the genes, sixteen cause retinal diseases when mutated; most of these 16 genes were upregulated in retina during this period. Using the PathArt program, we identified the biological processes in which many of the 355 gene products function. Among the most active processes in the P7-P21 retina are those involved in neurogenesis, obesity, diabetes type II, apoptosis, growth and differentiation, and protein kinase activity. We examined the expression patterns of 58 genes in P7 and adult retinas by searching the Brain Gene Expression Map database. Although most genes were present in various cell types in retinas, many displayed high levels of expression specifically in the outer nuclear, inner nuclear, and/or ganglion cell layers. By combining our 3 analyses, we demonstrated that during this period of mouse retinal development, many genes play important roles in various cell types, multiple pathways are involved, and some genes in a pathway are expressed in coordinated patterns. Our results thus provide foundation for future detailed studies of specific genes and pathways in various genetic and environmental conditions during retinal development." }, { "pmid": "17093405", "title": "Retinal repair by transplantation of photoreceptor precursors.", "abstract": "Photoreceptor loss causes irreversible blindness in many retinal diseases. Repair of such damage by cell transplantation is one of the most feasible types of central nervous system repair; photoreceptor degeneration initially leaves the inner retinal circuitry intact and new photoreceptors need only make single, short synaptic connections to contribute to the retinotopic map. So far, brain- and retina-derived stem cells transplanted into adult retina have shown little evidence of being able to integrate into the outer nuclear layer and differentiate into new photoreceptors. Furthermore, there has been no demonstration that transplanted cells form functional synaptic connections with other neurons in the recipient retina or restore visual function. This might be because the mature mammalian retina lacks the ability to accept and incorporate stem cells or to promote photoreceptor differentiation. We hypothesized that committed progenitor or precursor cells at later ontogenetic stages might have a higher probability of success upon transplantation. Here we show that donor cells can integrate into the adult or degenerating retina if they are taken from the developing retina at a time coincident with the peak of rod genesis. These transplanted cells integrate, differentiate into rod photoreceptors, form synaptic connections and improve visual function. Furthermore, we use genetically tagged post-mitotic rod precursors expressing the transcription factor Nrl (ref. 6) (neural retina leucine zipper) to show that successfully integrated rod photoreceptors are derived only from immature post-mitotic rod precursors and not from proliferating progenitor or stem cells. These findings define the ontogenetic stage of donor cells for successful rod photoreceptor transplantation." }, { "pmid": "17148475", "title": "Entrez Gene: gene-centered information at NCBI.", "abstract": "Entrez Gene (www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=gene) is NCBI's database for gene-specific information. Entrez Gene includes records from genomes that have been completely sequenced, that have an active research community to contribute gene-specific information or that are scheduled for intense sequence analysis. 
The content of Entrez Gene represents the result of both curation and automated integration of data from NCBI's Reference Sequence project (RefSeq), from collaborating model organism databases and from other databases within NCBI. Records in Entrez Gene are assigned unique, stable and tracked integers as identifiers. The content (nomenclature, map location, gene products and their attributes, markers, phenotypes and links to citations, sequences, variation details, maps, expression, homologs, protein domains and external databases) is provided via interactive browsing through NCBI's Entrez system, via NCBI's Entrez programing utilities (E-Utilities), and for bulk transfer by ftp." }, { "pmid": "7724523", "title": "Stathmin interaction with a putative kinase and coiled-coil-forming protein domains.", "abstract": "Stathmin is a ubiquitous, cytosolic 19-kDa protein, which is phosphorylated on up to four sites in response to many regulatory signals within cells. Its molecular characterization indicates a functional organization including an N-terminal regulatory domain that bears the phosphorylation sites, linked to a putative alpha-helical binding domain predicted to participate in coiled-coil, protein-protein interactions. We therefore proposed that stathmin may play the role of a relay integrating diverse intracellular regulatory pathways; its action on various target proteins would be a function of its combined phosphorylation state. To search for such target proteins, we used the two-hybrid screen in yeast, with stathmin as a \"bait.\" We isolated and characterized four cDNAs encoding protein domains that interact with stathmin in vivo. One of the corresponding proteins was identified as BiP, a member of the hsp70 heat-shock protein family. Another is a previously unidentified, putative serine/threonine kinase, KIS, which might be regulated by stathmin or, more likely, be part of the kinases controlling its phosphorylation state. Finally, two clones code for subdomains of two proteins, CC1 and CC2, predicted to form alpha-helices participating in coiled-coil interacting structures. Their isolation by interaction screening further supports our model for the regulatory function of stathmin through coiled-coil interactions with diverse downstream targets via its presumed alpha-helical binding domain. The molecular and biological characterization of KIS, CC1, and CC2 proteins will give further insights into the molecular functions and mechanisms of action of stathmin as a relay of integrated intracellular regulatory pathways." }, { "pmid": "9287318", "title": "KIS is a protein kinase with an RNA recognition motif.", "abstract": "Protein phosphorylation is involved at multiple steps of RNA processing and in the regulation of protein expression. We present here the first identification of a serine/threonine kinase that possesses an RNP-type RNA recognition motif: KIS. We originally isolated KIS in a two-hybrid screen through its interaction with stathmin, a small phosphoprotein proposed to play a general role in the relay and integration of diverse intracellular signaling pathways. Determination of the primary sequence of KIS shows that it is formed by the juxtaposition of a kinase core with little homology to known kinases and a C-terminal domain that contains a characteristic RNA recognition motif with an intriguing homology to the C-terminal motif of the splicing factor U2AF. KIS produced in bacteria has an autophosphorylating activity and phosphorylates stathmin on serine residues. 
It also phosphorylates in vitro other classical substrates such as myelin basic protein and synapsin but not histones that inhibit its autophosphorylating activity. Immunofluorescence and biochemical analyses indicate that KIS overexpressed in HEK293 fibroblastic cells is partly targetted to the nucleus. Altogether, these results suggest the implication of KIS in the control of trafficking and/or splicing of RNAs probably through phosphorylation of associated factors." }, { "pmid": "11694879", "title": "Nrl is required for rod photoreceptor development.", "abstract": "The protein neural retina leucine zipper (Nrl) is a basic motif-leucine zipper transcription factor that is preferentially expressed in rod photoreceptors. It acts synergistically with Crx to regulate rhodopsin transcription. Missense mutations in human NRL have been associated with autosomal dominant retinitis pigmentosa. Here we report that deletion of Nrl in mice results in the complete loss of rod function and super-normal cone function, mediated by S cones. The photoreceptors in the Nrl-/- retina have cone-like nuclear morphology and short, sparse outer segments with abnormal disks. Analysis of retinal gene expression confirms the apparent functional transformation of rods into S cones in the Nrl-/- retina. On the basis of these findings, we postulate that Nrl acts as a 'molecular switch' during rod-cell development by directly modulating rod-specific genes while simultaneously inhibiting the S-cone pathway through the activation of Nr2e3." }, { "pmid": "9570804", "title": "Two phases of rod photoreceptor differentiation during rat retinal development.", "abstract": "We have conducted a comprehensive analysis of the relative timing of the terminal mitosis and the onset of rhodopsin expression in rod precursors in the rat retina in vivo. This analysis demonstrated that there are two distinct phases of rod development during retinal histogenesis. For the majority of rod precursors, those born on or after embryonic day 19 (E19), the onset of rhodopsin expression was strongly correlated temporally with cell cycle withdrawal. For these precursors, the lag between the terminal mitosis and rhodopsin expression was measured to be 5.5-6.5 d on average. By contrast, for rod precursors born before E19, the lag was measured to be significantly longer, averaging from 8.5 to 12.5 d. In addition, these early-born rod precursors seemed to initiate rhodopsin expression in a manner that was not correlated temporally with the terminal mitosis. In these cells, onset of rhodopsin expression appeared approximately synchronous with later-born cells, suggesting a synchronous recruitment to the rod cell fate induced by environmental signals. To examine this possibility, experiments in which the early-born precursors were exposed to a late environment were conducted, using a reaggregate culture system. In these experiments, the early-born precursors appeared remarkably uninfluenced by the late environment with respect to both rod determination and the kinetics of rhodopsin expression. These results support the idea that intrinsically distinct populations of rod precursors constitute the two phases of rod development and that the behavior exhibited by the early-born precursors is intrinsically programmed." 
}, { "pmid": "11812828", "title": "Gene expression in the developing mouse retina by EST sequencing and microarray analysis.", "abstract": "Retinal development occurs in mice between embryonic day E11.5 and post-natal day P8 as uncommitted neuroblasts assume retinal cell fates. The genetic pathways regulating retinal development are being identified but little is understood about the global networks that link these pathways together or the complexity of the expressed gene set required to form the retina. At E14.5, the retina contains mostly uncommitted neuroblasts and newly differentiated neurons. Here we report a sequence analysis of an E14.5 retinal cDNA library. To date, we have archived 15 268 ESTs and have annotated 9035, which represent 5288 genes. The fraction of singly occurring ESTs as a function of total EST accrual suggests that the total number of expressed genes in the library could approach 27 000. The 9035 ESTs were categorized by their known or putative functions. Representation of the genes involved in eye development was significantly higher in the retinal clone set compared with the NIA mouse 15K cDNA clone set. Screening with a microarray containing 864 cDNA clones using wild-type and brn-3b (-/-) retinal cDNA probes revealed a potential regulatory linkage between the transcription factor Brn-3b and expression of GAP-43, a protein associated with axon growth. The retinal EST database will be a valuable platform for gene expression profiling and a new source for gene discovery." }, { "pmid": "14625556", "title": "Otx2 homeobox gene controls retinal photoreceptor cell fate and pineal gland development.", "abstract": "Understanding the molecular mechanisms by which distinct cell fate is determined during organogenesis is a central issue in development and disease. Here, using conditional gene ablation in mice, we show that the transcription factor Otx2 is essential for retinal photoreceptor cell fate determination and development of the pineal gland. Otx2-deficiency converted differentiating photoreceptor cells to amacrine-like neurons and led to a total lack of pinealocytes in the pineal gland. We also found that Otx2 transactivates the cone-rod homeobox gene Crx, which is required for terminal differentiation and maintenance of photoreceptor cells. Furthermore, retroviral gene transfer of Otx2 steers retinal progenitor cells toward becoming photoreceptors. Thus, Otx2 is a key regulatory gene for the cell fate determination of retinal photoreceptor cells. Our results reveal the key molecular steps required for photoreceptor cell-fate determination and pinealocyte development." }, { "pmid": "15277472", "title": "Kruppel-like factor 15, a zinc-finger transcriptional regulator, represses the rhodopsin and interphotoreceptor retinoid-binding protein promoters.", "abstract": "PURPOSE\nTo identify novel transcriptional regulators of rhodopsin expression as a model for understanding photoreceptor-specific gene regulation.\n\n\nMETHODS\nA bovine retinal cDNA library was screened in a yeast one-hybrid assay, with a 29-bp bovine rhodopsin promoter fragment as bait. Expression studies used RT-PCR and beta-galactosidase (LacZ) histochemistry of retinas from transgenic mice heterozygous for a targeted LacZ replacement of KLF15. 
Promoter transactivation assays measured luciferase expression in HEK293 cells transiently transfected with bovine rhodopsin or IRBP promoter-reporter constructs and expression constructs containing cDNAs for full or truncated KLF15, Crx (cone rod homeobox), and/or Nrl (neural retina leucine zipper). Data were analyzed with general linear models.\n\n\nRESULTS\nThe zinc-finger transcription factor KLF15 was identified as a rhodopsin-promoter-binding protein in a yeast one-hybrid screen. Expression was detected by RT-PCR in multiple tissues, including the retina, where KLF15-LacZ was observed in the inner nuclear layer, ganglion cell layer, and pigmented epithelial cells, but not in photoreceptors. KLF15 repressed transactivation of rhodopsin and IRBP promoters alone and in combination with the transcriptional activators Crx and/or Nrl. Repressor activity required both a 198-amino-acid element in the N-terminal domain and the C-terminal zinc finger DNA-binding domains.\n\n\nCONCLUSIONS\nThe zinc finger containing transcription factor KLF15 is a transcriptional repressor of the rhodopsin and IRBP promoters in vitro and, in the retina, is a possible participant in repression of photoreceptor-specific gene expression in nonphotoreceptor cells." }, { "pmid": "12533605", "title": "BETA2/NeuroD1 null mice: a new model for transcription factor-dependent photoreceptor degeneration.", "abstract": "BETA2/NeuroD1 is a basic helix-loop-helix transcription factor that is expressed widely throughout the developing nervous system. Previous studies have shown that BETA2/NeuroD1 influences the fate of retinal cells in culture. To analyze the effect of BETA2/NeuroD1 on the structure and function of the retina, we examined a line of BETA2/NeuroD1 knock-out mice that survives until adulthood. At 2-3 months of age, homozygous null mice showed a 50% reduction in rod-driven electroretinograms (ERGs) and a 65% reduction in cone-driven ERGs. ERGs measured from knock-out mice that were >9 months of age were undetectable. At 2-3 months, the number of photoreceptors in the outer nuclear layer was reduced by 50%. In addition, electron microscopy showed that the surviving photoreceptors had shortened outer segments. The number of cones labeled by peanut agglutinin was decreased 50-60%. By 18 months, retinas from null mice were completely devoid of photoreceptors. There appeared to be few changes in the inner retina, although BETA2/NeuroD1 is expressed in this area. Terminal deoxynucleotidyl transferase-mediated biotinylated UTP nick end labeling staining revealed a dramatic increase in cell death, peaking at approximately postnatal day 3 and continuing into adulthood. No defects in cell birth were detected using bromodeoxyuridine staining. Our results reveal that BETA2/NeuroD1 not only plays an important role in terminal differentiation of photoreceptors but also serves as a potential survival factor. Loss of BETA2/NeuroD1 results in an age-related degeneration of both rods and cones." }, { "pmid": "11959830", "title": "Cerebellar proteoglycans regulate sonic hedgehog responses during development.", "abstract": "Sonic hedgehog promotes proliferation of developing cerebellar granule cells. As sonic hedgehog is expressed in the cerebellum throughout life it is not clear why proliferation occurs only in the early postnatal period and only in the external granule cell layer. 
We asked whether heparan sulfate proteoglycans might regulate sonic hedgehog-induced proliferation and thereby contribute to the specialized proliferative environment of the external granule cell layer. We identified a conserved sequence within sonic hedgehog that is essential for binding to heparan sulfate proteoglycans, but not for binding to the receptor patched. Sonic hedgehog interactions with heparan sulfate proteoglycans promote maximal proliferation of postnatal day 6 granule cells. By contrast, proliferation of less mature granule cells is not affected by sonic hedgehog-proteoglycan interactions. The importance of proteoglycans for proliferation increases during development in parallel with increasing expression of the glycosyltransferase genes, exostosin 1 and exostosin 2. These data suggest that heparan sulfate proteoglycans, synthesized by exostosins, may be critical determinants of granule cell proliferation." }, { "pmid": "14744875", "title": "Delayed expression of the Crx gene and photoreceptor development in the Chx10-deficient retina.", "abstract": "PURPOSE\nThe Chx10 homeobox gene is expressed in neural progenitor cells during retinal development. The absence of Chx10 causes microphthalmia in humans and in the mouse mutant ocular retardation. The purpose of this study was to examine how neuronal development is affected by absence of the Chx10 transcription factor in the mouse retina.\n\n\nMETHODS\nExpression of transcription factor genes, Crx, Pou4f2, and Pax6, that mark specific cell types as they begin to differentiate was analyzed by RNA in situ hybridization of retina from wild-type and Chx10-null ocular retardation mice (Chx10(or-J/or-J)). RT-PCR analysis was used to compare expression of these genes and putative targets of Crx regulation. Photoreceptor development was analyzed by using peanut agglutinin (PNA)-rhodamine and blue cone opsin antibody to label cones and rhodopsin antibody to label rods.\n\n\nRESULTS\nThe photoreceptor gene Crx, normally expressed during embryonic retinal development, was not detected in the embryonic mutant retina, but was expressed after birth. Expression of the targets of Crx regulation, rhodopsin, peripherin, rod phosphodiesterase beta (Pdeb), and arrestin, with the exception of interphotoreceptor retinoid binding protein (Irbp), was delayed in the Chx10(or-J/or-J) retina. Rhodopsin localization in rod outer segments was also delayed. By contrast, temporal and spatial expression of Pou4f2 and Pax6 in developing ganglion and amacrine cells and PNA and blue opsin in developing cone cells was relatively normal in the mutant.\n\n\nCONCLUSIONS\nDelay of the normal temporal expression of genes essential for photoreceptor disc morphogenesis leads to failure of correct rod and cone outer segment formation in the Chx10(or-J/or-J) mutant retina. In addition, the absence of Chx10 appears to affect the development of late-born cells more than that of early-born cells, in that a low number of rods develops, whereas formation of ganglion, amacrine, and cone cells is relatively unaffected." 
}, { "pmid": "11847074", "title": "Probabilistic Boolean Networks: a rule-based uncertainty model for gene regulatory networks.", "abstract": "MOTIVATION\nOur goal is to construct a model for genetic regulatory networks such that the model class: (i) incorporates rule-based dependencies between genes; (ii) allows the systematic study of global network dynamics; (iii) is able to cope with uncertainty, both in the data and the model selection; and (iv) permits the quantification of the relative influence and sensitivity of genes in their interactions with other genes.\n\n\nRESULTS\nWe introduce Probabilistic Boolean Networks (PBN) that share the appealing rule-based properties of Boolean networks, but are robust in the face of uncertainty. We show how the dynamics of these networks can be studied in the probabilistic context of Markov chains, with standard Boolean networks being special cases. Then, we discuss the relationship between PBNs and Bayesian networks--a family of graphical models that explicitly represent probabilistic relationships between variables. We show how probabilistic dependencies between a gene and its parent genes, constituting the basic building blocks of Bayesian networks, can be obtained from PBNs. Finally, we present methods for quantifying the influence of genes on other genes, within the context of PBNs. Examples illustrating the above concepts are presented throughout the paper." }, { "pmid": "7664341", "title": "Cyclin D1 provides a link between development and oncogenesis in the retina and breast.", "abstract": "Mice lacking cyclin D1 have been generated by gene targeting in embryonic stem cells. Cyclin D1-deficient animals develop to term but show reduced body size, reduced viability, and symptoms of neurological impairment. Their retinas display a striking reduction in cell number due to proliferative failure during embryonic development. In situ hybridization studies of normal mouse embryos revealed an extremely high level of cyclin D1 in the retina, suggesting a special dependence of this tissue on cyclin D1. In adult mutant females, the breast epithelial compartment fails to undergo the massive proliferative changes associated with pregnancy despite normal levels of ovarian steroid hormones. Thus, steroid-induced proliferation of mammary epithelium during pregnancy may be driven through cyclin D1." }, { "pmid": "1459449", "title": "Loss of N-myc function results in embryonic lethality and failure of the epithelial component of the embryo to develop.", "abstract": "myc genes are thought to function in the processes of cellular proliferation and differentiation. To gain insight into the role of the N-myc gene during embryogenesis, we examined its expression in embryos during postimplantation development using RNA in situ hybridization. Tissue- and cell-specific patterns of expression unique to N-myc as compared with the related c-myc gene were observed. N-myc transcripts become progressively restricted to specific cell types, primarily to epithelial tissues including those of the developing nervous system and those in developing organs characterized by epithelio-mesenchymal interaction. In contrast, c-myc transcripts were confined to the mesenchymal compartments. These data suggest that c-myc and N-myc proteins may interact with different substrates in performing their function during embryogenesis and suggest further that there are linked regulatory mechanisms for normal expression in the embryo. 
We have mutated the N-myc locus via homologous recombination in embryonic stem (ES) cells and introduced the mutated allele into the mouse germ line. Live-born heterozygotes are under-represented but appear normal. Homozygous mutant embryos die prenatally at approximately 11.5 days of gestation. Histologic examination of homozygous mutant embryos indicates that several developing organs are affected. These include the central and peripheral nervous systems, mesonephros, lung, and gut. Thus, N-myc function is required during embryogenesis, and the pathology observed is consistent with the normal pattern of N-myc expression. Examination of c-myc expression in mutant embryos indicates the existence of coordinate regulation of myc genes during mouse embryogenesis." }, { "pmid": "14500831", "title": "Evaluation of gene expression measurements from commercial microarray platforms.", "abstract": "Multiple commercial microarrays for measuring genome-wide gene expression levels are currently available, including oligonucleotide and cDNA, single- and two-channel formats. This study reports on the results of gene expression measurements generated from identical RNA preparations that were obtained using three commercially available microarray platforms. RNA was collected from PANC-1 cells grown in serum-rich medium and at 24 h following the removal of serum. Three biological replicates were prepared for each condition, and three experimental replicates were produced for the first biological replicate. RNA was labeled and hybridized to microarrays from three major suppliers according to manufacturers' protocols, and gene expression measurements were obtained using each platform's standard software. For each platform, gene targets from a subset of 2009 common genes were compared. Correlations in gene expression levels and comparisons for significant gene expression changes in this subset were calculated, and showed considerable divergence across the different platforms, suggesting the need for establishing industrial manufacturing standards, and further independent and thorough validation of the technology." }, { "pmid": "11431459", "title": "Identification and localization of retinal cystatin C.", "abstract": "PURPOSE\nCystatin C is a mammalian cysteine protease inhibitor, synthesized in various amounts by many kinds of cells and appearing in most body fluids. There are reports that it may be synthesized in the mammalian retina and that a cysteine protease inhibitor may influence the degradation of photoreceptor outer segment proteins. In the current study cystatin C was identified, quantitated, and localized in mouse, rat, and human retinas.\n\n\nMETHODS\nEnzyme-linked immunosorbent assay (ELISA), reverse transcription-polymerase chain reaction (RT-PCR), DNA sequencing, Western blot analysis, and immunohistochemistry have been used on mouse, rat, and human retinas (pigment epithelium included).\n\n\nRESULTS\nCystatin C is present in high concentrations in the normal adult rat retina, as it is throughout its postnatal development. Its concentration increases to a peak at the time when rat pups open their eyes and then remains at a high level. It is mainly localized to the pigment epithelium, but also to some few neurons of varying types in the inner retina. Cystatin C is similarly expressed in normal mouse and human retinas.\n\n\nCONCLUSIONS\nCystatin C was identified and the localization described in the retinas of rat, mouse, and human using several techniques. 
Cystatin C is known to efficiently inactivate certain cysteine proteases. One of them, cathepsin S, is present in the retinal pigment epithelium and affects the proteolytic processing by cathepsin D of diurnally shed photoreceptor outer segments. Hypothetically, it appears possible that retinal cystatin C, given its localization to the pigment epithelium and its ability to inhibit cathepsin S, could be involved in the regulation of photoreceptor degradation." }, { "pmid": "14519200", "title": "Annotation and analysis of 10,000 expressed sequence tags from developing mouse eye and adult retina.", "abstract": "BACKGROUND\nAs a biomarker of cellular activities, the transcriptome of a specific tissue or cell type during development and disease is of great biomedical interest. We have generated and analyzed 10,000 expressed sequence tags (ESTs) from three mouse eye tissue cDNA libraries: embryonic day 15.5 (M15E) eye, postnatal day 2 (M2PN) eye and adult retina (MRA).\n\n\nRESULTS\nAnnotation of 8,633 non-mitochondrial and non-ribosomal high-quality ESTs revealed that 57% of the sequences represent known genes and 43% are unknown or novel ESTs, with M15E having the highest percentage of novel ESTs. Of these, 2,361 ESTs correspond to 747 unique genes and the remaining 6,272 are represented only once. Phototransduction genes are preferentially identified in MRA, whereas transcripts for cell structure and regulatory proteins are highly expressed in the developing eye. Map locations of human orthologs of known genes uncovered a high density of ocular genes on chromosome 17, and identified 277 genes in the critical regions of 37 retinal disease loci. In silico expression profiling identified 210 genes and/or ESTs over-expressed in the eye; of these, more than 26 are known to have vital retinal function. Comparisons between libraries provided a list of temporally regulated genes and/or ESTs. A few of these were validated by qRT-PCR analysis.\n\n\nCONCLUSIONS\nOur studies present a large number of potentially interesting genes for biological investigation, and the annotated EST set provides a useful resource for microarray and functional genomic studies." }, { "pmid": "15980575", "title": "WebGestalt: an integrated system for exploring gene sets in various biological contexts.", "abstract": "High-throughput technologies have led to the rapid generation of large-scale datasets about genes and gene products. These technologies have also shifted our research focus from 'single genes' to 'gene sets'. We have developed a web-based integrated data mining system, WebGestalt (http://genereg.ornl.gov/webgestalt/), to help biologists in exploring large sets of genes. WebGestalt is composed of four modules: gene set management, information retrieval, organization/visualization, and statistics. The management module uploads, saves, retrieves and deletes gene sets, as well as performs Boolean operations to generate the unions, intersections or differences between different gene sets. The information retrieval module currently retrieves information for up to 20 attributes for all genes in a gene set. The organization/visualization module organizes and visualizes gene sets in various biological contexts, including Gene Ontology, tissue expression pattern, chromosome distribution, metabolic and signaling pathways, protein domain information and publications. The statistics module recommends and performs statistical tests to suggest biological areas that are important to a gene set and warrant further investigation. 
In order to demonstrate the use of WebGestalt, we have generated 48 gene sets with genes over-represented in various human tissue types. Exploration of all the 48 gene sets using WebGestalt is available for the public at http://genereg.ornl.gov/webgestalt/wg_enrich.php." }, { "pmid": "14991054", "title": "Rb regulates proliferation and rod photoreceptor development in the mouse retina.", "abstract": "The retinoblastoma protein (Rb) regulates proliferation, cell fate specification and differentiation in the developing central nervous system (CNS), but the role of Rb in the developing mouse retina has not been studied, because Rb-deficient embryos die before the retinas are fully formed. We combined several genetic approaches to explore the role of Rb in the mouse retina. During postnatal development, Rb is expressed in proliferating retinal progenitor cells and differentiating rod photoreceptors. In the absence of Rb, progenitor cells continue to divide, and rods do not mature. To determine whether Rb functions in these processes in a cell-autonomous manner, we used a replication-incompetent retrovirus encoding Cre recombinase to inactivate the Rb1(lox) allele in individual retinal progenitor cells in vivo. Combined with data from studies of conditional inactivation of Rb1 using a combination of Cre transgenic mouse lines, these results show that Rb is required in a cell-autonomous manner for appropriate exit from the cell cycle of retinal progenitor cells and for rod development." }, { "pmid": "17044933", "title": "A biphasic pattern of gene expression during mouse retina development.", "abstract": "BACKGROUND\nBetween embryonic day 12 and postnatal day 21, six major neuronal and one glia cell type are generated from multipotential progenitors in a characteristic sequence during mouse retina development. We investigated expression patterns of retina transcripts during the major embryonic and postnatal developmental stages to provide a systematic view of normal mouse retina development,\n\n\nRESULTS\nA tissue-specific cDNA microarray was generated using a set of sequence non-redundant EST clones collected from mouse retina. Eleven stages of mouse retina, from embryonic day 12.5 (El2.5) to postnatal day 21 (PN21), were collected for RNA isolation. Non-amplified RNAs were labeled for microarray experiments and three sets of data were analyzed for significance, hierarchical relationships, and functional clustering. Six individual gene expression clusters were identified based on expression patterns of transcripts through retina development. Two developmental phases were clearly divided with postnatal day 5 (PN5) as a separate cluster. Among 4,180 transcripts that changed significantly during development, approximately 2/3 of the genes were expressed at high levels up until PN5 and then declined whereas the other 1/3 of the genes increased expression from PN5 and remained at the higher levels until at least PN21. Less than 1% of the genes observed showed a peak of expression between the two phases. Among the later increased population, only about 40% genes are correlated with rod photoreceptors, indicating that multiple cell types contributed to gene expression in this phase. Within the same functional classes, however, different gene populations were expressed in distinct developmental phases. 
A correlation coefficient analysis of gene expression during retina development between previous SAGE studies and this study was also carried out.\n\n\nCONCLUSION\nThis study provides a complementary genome-wide view of common gene dynamics and a broad molecular classification of mouse retina development. Different genes in the same functional clusters are expressed in the different developmental stages, suggesting that cells might change gene expression profiles from differentiation to maturation stages. We propose that large-scale changes in gene regulation during development are necessary for the final maturation and function of the retina." }, { "pmid": "17117495", "title": "Regularization network-based gene selection for microarray data analysis.", "abstract": "Microarray data contains a large number of genes (usually more than 1000) and a relatively small number of samples (usually fewer than 100). This presents problems to discriminant analysis of microarray data. One way to alleviate the problem is to reduce dimensionality of data by selecting important genes to the discriminant problem. Gene selection can be cast as a feature selection problem in the context of pattern classification. Feature selection approaches are broadly grouped into filter methods and wrapper methods. The wrapper method outperforms the filter method but at the cost of more intensive computation. In the present study, we proposed a wrapper-like gene selection algorithm based on the Regularization Network. Compared with classical wrapper method, the computational costs in our gene selection algorithm is significantly reduced, because the evaluation criterion we proposed does not demand repeated training in the leave-one-out procedure." } ]
BMC Medical Informatics and Decision Making
19706187
PMC2753305
10.1186/1472-6947-9-41
Privacy-preserving record linkage using Bloom filters
BackgroundCombining multiple databases with disjunctive or additional information on the same person is occurring increasingly throughout research. If unique identification numbers for these individuals are not available, probabilistic record linkage is used for the identification of matching record pairs. In many applications, identifiers have to be encrypted due to privacy concerns.MethodsA new protocol for privacy-preserving record linkage with encrypted identifiers allowing for errors in identifiers has been developed. The protocol is based on Bloom filters on q-grams of identifiers.ResultsTests on simulated and actual databases yield linkage results comparable to non-encrypted identifiers and superior to results from phonetic encodings.ConclusionWe proposed a protocol for privacy-preserving record linkage with encrypted identifiers allowing for errors in identifiers. Since the protocol can be easily enhanced and has a low computational burden, the protocol might be useful for many applications requiring privacy-preserving record linkage.
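To make the proposed encoding concrete, below is a minimal sketch — not the authors' reference implementation — of how an identifier could be split into q-grams, mapped into a Bloom filter with keyed hash functions, and compared with a Dice coefficient. The filter length, the number of hash functions per q-gram, and the shared HMAC secret are illustrative assumptions.

```python
import hmac
import hashlib

BITS = 1000                # Bloom filter length (assumed)
NUM_HASHES = 15            # hash functions per q-gram (assumed)
SECRET = b"shared-secret"  # key agreed on by the database holders (assumed)

def qgrams(name, q=2):
    """Split a padded, lower-cased identifier into overlapping q-grams."""
    padded = f"_{name.lower().strip()}_"
    return {padded[i:i + q] for i in range(len(padded) - q + 1)}

def bloom_encode(name):
    """Set NUM_HASHES bit positions per q-gram using keyed (HMAC) hashes."""
    bits = [0] * BITS
    for gram in qgrams(name):
        for k in range(NUM_HASHES):
            digest = hmac.new(SECRET, f"{k}:{gram}".encode(), hashlib.sha256).hexdigest()
            bits[int(digest, 16) % BITS] = 1
    return bits

def dice(a, b):
    """Dice coefficient of two bit vectors: 2*|A intersect B| / (|A| + |B|)."""
    common = sum(x & y for x, y in zip(a, b))
    return 2.0 * common / (sum(a) + sum(b))

print(dice(bloom_encode("Smith"), bloom_encode("Smyth")))  # higher: most q-grams shared
print(dice(bloom_encode("Smith"), bloom_encode("Jones")))  # lower: few q-grams shared
```

Because similar names share most of their q-grams, their Bloom filters share most of their set bits, so small spelling errors still yield high similarity scores — the property that allows the protocol to tolerate errors in encrypted identifiers.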
Related workSeveral methods for approximate string matching in privacy-preserving record linkage have been proposed (for reviews see [12,17,18]). The protocols can be classified into protocols with or without a trusted third party.Three-party protocolsSome protocols rely on exact matching of encrypted keys based on phonetically transformed identifiers by a third party. Such protocols are used for cancer registries [19,20] and information exchange between hospitals. In the proposal of [15,16], identifiers are transformed according to phonetic rules and subsequently encrypted with a one-way hash function. To prevent some cryptographic attacks on this protocol, the identifiers are combined with a common pad before hashing. The hash values are transferred to a third party, who hashes them again using another pad. Then the third party performs exact matching on the resulting hash values. Despite exact matching, the linkage allows for some errors in identifiers, because hash values of phonetic encodings are matched. Provided the database owners do not collude with the third party, the protocol is secure. However, string comparison using phonetic encodings usually yields more false positive links than string similarity functions [21-23].[11] suggested a protocol based on hashed values of sets of consecutive letters (q-grams, see below). For each record, the database holders A and B create the power set of the q-grams of its identifiers. Each subset of the power set is hashed by an HMAC algorithm using a common secret key of the database owners. A and B send tuples containing the hash value, the number of q-grams in the hashed subset, the total number of q-grams, and an encryption of the identifiers to a third party C. The number of tuples is much larger than the number of records. To calculate the string similarity between two strings a and b, C computes a similarity measure based on the information in the tuples. As [11] shows, C is able to determine a similarity measure for a and b by selecting the highest similarity coefficient of the tuples associated with a and b. To prevent frequency attacks, [11] propose to use an additional trusted party, thereby extending the number of parties involved to four. Furthermore, they recommend hiding the tuples among tuples created from dummy strings using Rivest's "chaffing and winnowing" technique [24]. Apart from an increase in computational and communication costs [25,26], the protocol is prone to frequency attacks on the hashes of the q-gram subsets containing just one q-gram [17,18].[27] used the value of a second identifier for padding every single character of a string before encryption. Subsequently, a third party is able to compare strings on the character level and to compute a string similarity. This elegant protocol requires a completely error-free second identifier. However, a second identifier with few different values is open to a frequency attack.In the protocol of [28], two data holders, holding lists of names, build an embedding space from random strings and embed their respective strings therein using the SparseMap method [29,30]. Then, each data holder sends the embedded strings to a third party, which determines their similarity. To create the embedding space, data holder A generates n random strings and builds z reference sets from them. Next, A reduces the number of reference sets to the best k < z reference sets using the greedy resampling heuristic of SparseMap. These k reference sets are used to embed the names in a k-dimensional space.
The coordinates for a given name are approximations of the distances between the name and the closest random string in each of the k reference sets, in terms of the edit distance. As a result, for each name, A receives a k-dimensional vector. After receiving the k reference sets from A, B embeds his names in the same way. Finally, both data holders send their vectors to a third party, C, who compares them using the standard Euclidean distance between them. Using SparseMap allows the mapping of strings into the vector space while avoiding prohibitive computational costs. This is accomplished by the reduction of dimensions using the greedy resampling method and by the distance approximations. However, the experiments in [28] indicate that the linkage quality is significantly affected by applying the greedy resampling heuristic.Pang and Hansen [31] suggested a protocol based on a set of reference strings common to A and B. For a given identifier, both database holders compute the distances, d, between each identifier string and all reference strings in the set. If d is less than a threshold δ, the respective reference string is encrypted using a key previously agreed on by A and B. For each identifier string, the resulting set of encrypted reference strings, along with their distances, d, and an ID number, form a tuple. Both database holders send their tuples to a third party C. For every pair of ID numbers where the encrypted reference strings agree, C sums the distances, d, and finds the minimum of this sum. If this minimum lies below a second threshold δsim, the two original identifier strings are classified as a match. The performance of the protocol depends crucially on the set of reference strings. Unless this is a superset of the original strings, the performance is rather discouraging.A different approach to solving the privacy-preserving record linkage problem for numerical keys is taken by [32]. They suggest using anonymized versions of the data sets for a first linkage step that is capable of classifying a large portion of record pairs correctly as matches or mismatches. Only those pairs which cannot be classified as matches or mismatches will be used in a costly secure multi-party protocol for computing similarities.Two-party protocols[33] suggested a protocol that allows two parties to compute the distance between two strings without exchanging them. Due to the large amount of communication necessary to compare two strings, such a protocol is unsuited for tasks with large lists of strings, as required by privacy-preserving record linkage [28]. The protocol suggested by [34] uses a secure set intersection protocol described in [35]. However, this protocol requires extensive computations and is therefore also regarded as impractical for linking large databases [4,28].The protocol of Yakout et al. [36] assumes that the data holders have already transformed their names into vectors as described by Scannapieco et al. [28] and is designed to compare them without resorting to a third party. In the first phase, the two data holders reduce the number of candidate string pairs by omitting pairs which are unlikely to be similar. In the second phase of the protocol, the standard Euclidean distance between the remaining candidate vector pairs is computed using a secure scalar product protocol. Yakout et al. demonstrate that neither party has to reveal its vectors in the computations. Although more parsimonious, this protocol cannot outperform the protocol of Scannapieco et al. [28].
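For contrast with the Bloom-filter approach, the sketch below illustrates the data volume generated by the q-gram power-set protocol attributed to [11] above: for a single record, every non-empty subset of its q-grams is hashed with a keyed HMAC and shipped as a tuple. The HMAC key and the exact tuple layout are assumptions made for this example, not the original specification.

```python
import hmac
import hashlib
from itertools import combinations

SECRET = b"shared-secret"  # key known only to database holders A and B (assumed)

def qgrams(name, q=2):
    padded = f"_{name.lower().strip()}_"
    return sorted({padded[i:i + q] for i in range(len(padded) - q + 1)})

def hashed_subset_tuples(record_id, name):
    """For one record, emit (hash, |subset|, |q-grams|, record id) tuples
    for every non-empty subset of the record's q-grams."""
    grams = qgrams(name)
    tuples = []
    for size in range(1, len(grams) + 1):
        for subset in combinations(grams, size):
            digest = hmac.new(SECRET, " ".join(subset).encode(), hashlib.sha256).hexdigest()
            tuples.append((digest, size, len(grams), record_id))
    return tuples

tuples = hashed_subset_tuples("A17", "smith")
print(len(qgrams("smith")), "q-grams ->", len(tuples), "tuples for one record")
# 6 q-grams -> 63 tuples: the power set grows exponentially with name length.
```

Even a five-letter name yields 63 tuples per record, which illustrates why the communication and computational costs of this protocol grow so quickly compared with sending a single Bloom filter per record.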
[ "14987147", "15778797", "1395524" ]
[ { "pmid": "14987147", "title": "Zero-check: a zero-knowledge protocol for reconciling patient identities across institutions.", "abstract": "CONTEXT\nLarge, multi-institutional studies often involve merging data records that have been de-identified to protect patient privacy. Unless patient identities can be reconciled across institutions, individuals with records held in different institutions will be falsely \"counted\" as multiple persons when databases are merged.\n\n\nOBJECTIVE\nThe purpose of this article is to describe a protocol that can reconcile individuals with records in multiple institutions.\n\n\nDESIGN\nInstitution A and Institution B each create a random character string and send it to the other institution. Each institution receives the random string from the other institution and sums it with their own random string, producing a random string common to both institutions (RandA+B). Each institution takes a unique patient identifier and sums it with RandA+B. The product is a random character string that is identical across institutions when the patient is identical in both institutions. A comparison protocol can be implemented as a zero-knowledge transaction, ensuring that neither institution obtains any knowledge of its own patient or of the patient compared at another institution.\n\n\nRESULTS\nThe protocol can be executed at high computational speed. No encryption algorithm or 1-way hash algorithm is employed, and there is no need to protect the protocol from discovery.\n\n\nCONCLUSION\nA zero-knowledge protocol for reconciling patients across institutions is described. This protocol is one of many computational tools that permit pathologists to safely share clinical and research data." }, { "pmid": "15778797", "title": "Decision analysis for the assessment of a record linkage procedure: application to a perinatal network.", "abstract": "OBJECTIVES\nAccording to European legislation, we must develop computer software allowing the linkage of medical records previously rendered anonymous. Some of them, like AUTOMATCH, are used in daily practice either to gather medical files in epidemiologic studies or for clinical purpose. In the first situation, the aim is to avoid homonymous errors, and in the second one, synonymous errors. The objective of this work is to study the effect of different parameters (number of identification variables, phonetic treatments of names, direct or probabilistic linkage procedure) on the reliability of the linkage in order to determine which strategy is the best according to the purpose of the linkage.\n\n\nMETHODS\nThe assessment of the Burgundy Perinatal Network requires the linking of discharge abstracts of mothers and neonates, collected in all the hospitals of the region. Those data are used to compare direct and probabilistic linkage, using different parameterization strategies.\n\n\nRESULTS\nIf the linkage has to be performed in real time, so that no validation of indecisions generated by probabilistic linkage is possible, probabilistic linkage using three variables without any phonetic treatment seems to be the most appropriate approach, combined with a direct linkage using four variables applied to non-conclusive links. If a validation of indecisions is possible in an epidemiological study, probabilistic linkage using five variables, with a phonetic treatment adapted to the local language has to be preferred. 
For medical purpose, it should be combined with a direct linkage with four or five variables.\n\n\nCONCLUSION\nThis paper reveals that the time and money available to manage indecision as well as the purpose of the linkage are of paramount importance for choosing a linkage strategy." }, { "pmid": "1395524", "title": "Tolerating spelling errors during patient validation.", "abstract": "Misspellings, typographical errors, and variant name forms present a considerable problem for a Clinical Information System when validating patient data. Algorithms to correct these types of errors are being used, but they are based either on a study of frequent types of errors associated with general words in an English text rather than types of errors associated with the spelling of names, or on errors that are phonologically based. This paper investigates the types of errors that are specifically associated with the spelling of patient names, and proposes an algorithm that effectively handles such types of errors. This paper also studies the effectiveness of several relaxation techniques and compares them with the one that is being proposed." } ]
PLoS Computational Biology
19956744
PMC2775131
10.1371/journal.pcbi.1000576
Attention Increases the Temporal Precision of Conscious Perception: Verifying the Neural-ST2 Model
What role does attention play in ensuring the temporal precision of visual perception? Behavioural studies have investigated feature selection and binding in time using fleeting sequences of stimuli in the Rapid Serial Visual Presentation (RSVP) paradigm, and found that temporal accuracy is reduced when attentional control is diminished. To reduce the efficacy of attentional deployment, these studies have employed the Attentional Blink (AB) phenomenon. In this article, we use electroencephalography (EEG) to directly investigate the temporal dynamics of conscious perception. Specifically, employing a combination of experimental analysis and neural network modelling, we test the hypothesis that the availability of attention reduces temporal jitter in the latency between a target's visual onset and its consolidation into working memory. We perform time-frequency analysis on data from an AB study to compare the EEG trials underlying the P3 ERPs (Event-related Potential) evoked by targets seen outside vs. inside the AB time window. We find visual differences in phase-sorted ERPimages and statistical differences in the variance of the P3 phase distributions. These results argue for increased variation in the latency of conscious perception during the AB. This experimental analysis is complemented by a theoretical exploration of temporal attention and target processing. Using activation traces from the Neural-ST2 model, we generate virtual ERPs and virtual ERPimages. These are compared to their human counterparts to propose an explanation of how target consolidation in the context of the AB influences the temporal variability of selective attention. The AB provides us with a suitable phenomenon with which to investigate the interplay between attention and perception. The combination of experimental and theoretical elucidation in this article contributes to converging evidence for the notion that the AB reflects a reduction in the temporal acuity of selective attention and the timeliness of perception.
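As a rough illustration of the phase-based analysis described above, the sketch below band-pass filters single-trial epochs, estimates the instantaneous phase at a nominal P3 latency with a Hilbert transform, sorts trials by that phase to build an ERPimage, and quantifies the spread of the phase distribution with a circular variance. The sampling rate, frequency band, latency sample, and simulated data are assumptions for illustration and do not reproduce the exact pipeline used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 250         # sampling rate in Hz (assumed)
P3_SAMPLE = 100  # sample index of the nominal P3 latency within the epoch (assumed)

def p3_phase(trials, low=1.0, high=8.0):
    """Instantaneous phase at the P3 latency for each trial.
    trials: array of shape (n_trials, n_samples)."""
    sos = butter(2, [low, high], btype="band", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, trials, axis=1)
    return np.angle(hilbert(filtered, axis=1)[:, P3_SAMPLE])

def circular_variance(phases):
    """1 - |mean resultant vector|: 0 = phases aligned, 1 = phases uniform."""
    return 1.0 - np.abs(np.mean(np.exp(1j * phases)))

def phase_sorted_image(trials, phases):
    """Rows of a phase-sorted ERPimage: trials re-ordered by their P3 phase."""
    return trials[np.argsort(phases)]

# Hypothetical single-trial epochs: a 2 Hz component with small (outside-AB)
# versus large (inside-AB) trial-to-trial phase jitter.
rng = np.random.default_rng(0)
t = np.arange(250) / FS
outside = np.cos(2 * np.pi * 2 * t + rng.normal(0.0, 0.3, (80, 1)))
inside = np.cos(2 * np.pi * 2 * t + rng.normal(0.0, 1.2, (80, 1)))
image = phase_sorted_image(inside, p3_phase(inside))  # rows now ordered by phase
print(circular_variance(p3_phase(outside)))  # smaller: tight phase locking
print(circular_variance(p3_phase(inside)))   # larger: more latency jitter
```

A larger circular variance of the P3 phase for trials inside the AB would correspond to the increased latency jitter of conscious perception argued for in the article.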
Related workOur experimental results and theoretical explorations complement and inform previous research on temporal selection and the AB. We now discuss these findings and propose interpretations in terms of the model.Chun (1997), Popple and Levi (2007)Chun [32] provided initial evidence regarding the effect of the AB on temporal binding. Employing an RSVP paradigm consisting of letters enclosed in coloured boxes and target letters marked by a distinctively coloured box, he investigated the distribution of responses made by participants when either one or two targets were presented. He calculated the centre of mass of this distribution for targets outside and inside the AB, and found that for targets outside the AB, the distribution was roughly symmetrical around the target position. But for targets inside the AB, he observed a significant shift in the response distribution toward items presented after the target. In addition, behavioural data presented in [32] show that the variance of the response distribution for T2 report increases when it is presented inside the AB. Popple and Levi [15] presented additional behavioural evidence consistent with Chun's findings [32]. Using a colour-marked RSVP paradigm where each item had two features (colour and identity), they found that incorrect responses mostly came from the distractor items that were presented close to, and generally following, the T2. In addition, they observed that this distribution of responses for T2 showed a pronounced increase in its spread compared to T1.These findings are well explained by the model: the inhibition of the blaster delays the deployment of attention to a T2 presented during the AB. Consequently, non-targets presented right after the T2 are more likely to be tokenised when the second stage becomes available, resulting in the observed shift in the response distribution. Also, as explained in the previous section, due to a combination of factors influenced by T1 and T2 strengths, there is increased temporal variability in T2's encoding process. This in turn leads to increased variation in the behavioural response for T2s presented inside the AB.Vul, Nieuwenstein and Kanwisher (2008)Vul, Nieuwenstein and Kanwisher [16] propose that temporal selection is modulated along multiple dimensions by the AB. They employed an RSVP paradigm consisting of letters, with targets delineated by simultaneously presented annular cues. Their behavioural analysis suggests that target selection is affected by the AB in one or more of three externally dissociable dimensions discussed below: suppression, delay, diffusion. However, with the model, we demonstrate that all three can result from the suppression of attention. Suppression refers to the reduction in the effectiveness of temporal selection during the AB, and a concomitant increase in random guesses. Vul et al. [16] measured this effect in the form of a decrease in the mean probability of selecting a proximal response (from item positions) around the target, when it occurs during the AB. In contrast to results in [15], they found a significant decrease in this value for T2s during the AB. In the model, suppression can be explained by a reduction in the probability of a target triggering the blaster. During the AB, a relatively large percentage of T2s fail to fire the blaster and do not have enough bottom-up strength to be tokenised. 
The model would hence predict the suppression observed by [16], because the percentage of trials in which the blaster fires in response to a T2 would be reduced during the AB. Furthermore, as participants were forced to indicate a response for both targets [16], this reduction would translate to an increase in the number of random guesses for the T2. Finally, as one would expect, the time course of suppression follows the time course of the AB as simulated by the model. Delay refers to a systematic post-target shift in the locus of responses chosen for T2 when compared to T1. Vul et al. [16] quantified delay as the centre of mass of the distribution of responses for each target, calculated similarly to the API (Average Position of Intrusions) measure in [14] and the intrusion index score in [32]. This notion of an increase in the latency of attentional selection is reflected in the model. Specifically, suppression of the blaster during T1 encoding results in an increase in the latency of its response to a T2 during the AB (see [29] for more details on delayed T2 consolidation in the model). As a result, in an RSVP paradigm like that used by [16], items presented after T2 are more likely to get the benefit of the blaster and get chosen as responses, resulting in the observed shift in the response distribution. However, this shift in the locus of responses observed by [16] seems to persist at late T2 lag positions well beyond the duration of the AB, and is somewhat more puzzling. This finding could perhaps be attributed to the cognitive load associated with holding T1 in working memory. Diffusion refers to a decrease in the precision of temporal selection, corresponding to an increase in the overall spread in the distribution of responses during the AB. Vul et al. [16] estimated diffusion by comparing the variance around the centre of mass of the response distributions for T1 and T2, and found that it is significantly increased for T2s during the AB. This observation is explained by the model as follows: in the context of the paradigm in [16], there would be increased temporal variation in T2 encoding because of the influence of T1 processing. Hence, due to the influence of both T1 and T2 strengths on response selection, erroneous responses further away from the target position would get selected for tokenization, producing increased variance in the distribution of responses. Again, the time course of diffusion is similar to that of suppression, and is in keeping with the window of the AB predicted by the model.In summary, we think that a single underlying mechanism of variation in the temporal dynamics of attention from trial to trial could potentially explain the three effects observed in [16]. An explicit computational account of these three dimensions in terms of the model is beyond the scope of this article (and would require it to be extended to simulate the conjunction of multiple stimulus features). Nevertheless, the explanation proposed above highlights the role that the temporal dynamics of transient attention would play in explaining these effects.Sergent, Baillet and Dehaene (2005)Sergent, Baillet and Dehaene [33] combined behaviour and EEG to investigate the timing of brain events underlying access to consciousness during the AB. They analysed early and late ERP components evoked by a pair of targets, a T1 followed by a T2 either at a short lag (equivalent to our inside the AB condition) or at a long lag (equivalent to our outside the AB condition). 
They plotted unsorted ERPimages to visualise the inter-trial variation in the EEG activity, and found that when T2 was presented within the AB, T1's P3 influenced the temporal dynamics of the ERP components correlated with conscious access to T2. In particular, the ERPimage depicting their T1 and T2 P3s clearly shows that even when T2 is seen during the AB window, it evokes a more ‘smeared out’ P3 as compared to the T1. However, the analysis of single-trial data in [33] presents ERPimages that are not sorted (unlike the phase sorting we have performed in this article), thus limiting their interpretation. Further, they did not compare temporal variability of targets seen outside and inside the AB. Despite these differences, their data agree well with ours, and provide qualitative support for our hypothesis of reduced temporal precision during the AB. This is because we would expect increased inter-trial variability in the P3 evoked by a T2 inside the AB to result in a ‘smearing out’ effect in its ERPimage, when trials are plotted after smoothing, but without sorting by phase.
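To keep the three selection measures discussed above distinct, here is a small illustrative sketch of how suppression, delay, and diffusion could be computed from the distribution of reported item positions relative to the target. The ±4-item proximal window and the simulated response counts are hypothetical and are not taken from [16].

```python
import numpy as np

# Hypothetical relative response positions (reported item minus target item)
# for T2 outside vs. inside the AB; real data would come from RSVP reports.
outside = np.array([-1] * 5 + [0] * 70 + [1] * 15 + [2] * 5 + [5] * 5)
inside = np.array([-1] * 5 + [0] * 35 + [1] * 25 + [2] * 15 + [3] * 5 + [6] * 15)

def selection_measures(rel_pos, window=4):
    """Suppression, delay and diffusion of temporal selection."""
    proximal = rel_pos[np.abs(rel_pos) <= window]
    suppression = 1.0 - len(proximal) / len(rel_pos)  # share of far/guess responses
    delay = proximal.mean()                           # centre of mass (post-target shift)
    diffusion = proximal.var()                        # spread around the centre of mass
    return suppression, delay, diffusion

print(selection_measures(outside))  # low suppression, small delay, small diffusion
print(selection_measures(inside))   # all three measures increase inside the AB
```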
[ "12467584", "3672124", "17227181", "1500880", "21223931", "9861716", "8857535", "12542127", "11766936", "17888482", "18181792", "18564042", "3691023", "9180042", "2525600", "19485692", "11352145", "2270192", "15102499", "17258178", "12763203", "17662259", "12613677", "9104007", "9401454", "16989545", "9176952", "11222977", "15808977" ]
[ { "pmid": "12467584", "title": "View from the top: hierarchies and reverse hierarchies in the visual system.", "abstract": "We propose that explicit vision advances in reverse hierarchical direction, as shown for perceptual learning. Processing along the feedforward hierarchy of areas, leading to increasingly complex representations, is automatic and implicit, while conscious perception begins at the hierarchy's top, gradually returning downward as needed. Thus, our initial conscious percept--vision at a glance--matches a high-level, generalized, categorical scene interpretation, identifying \"forest before trees.\" For later vision with scrutiny, reverse hierarchy routines focus attention to specific, active, low-level units, incorporating into conscious perception detailed information available there. Reverse Hierarchy Theory dissociates between early explicit perception and implicit low-level vision, explaining a variety of phenomena. Feature search \"pop-out\" is attributed to high areas, where large receptive fields underlie spread attention detecting categorical differences. Search for conjunctions or fine discriminations depends on reentry to low-level specific receptive fields using serial focused attention, consistent with recently reported primary visual cortex effects." }, { "pmid": "3672124", "title": "Dynamics of automatic and controlled visual attention.", "abstract": "The time course of attention was experimentally observed using two kinds of stimuli: a cue to begin attending or to shift attention, and a stimulus to be attended. Precise measurements of the time course of attention show that it consists of two partially concurrent processes: a fast, effortless, automatic process that records the cue and its neighboring events; and a slower, effortful, controlled process that records the stimulus to be attended and its neighboring events." }, { "pmid": "17227181", "title": "The simultaneous type, serial token model of temporal attention and working memory.", "abstract": "A detailed description of the simultaneous type, serial token (ST2) model is presented. ST2 is a model of temporal attention and working memory that encapsulates 5 principles: (a) M. M. Chun and M. C. Potter's (1995) 2-stage model, (b) a Stage 1 salience filter, (c) N. G. Kanwisher's (1987, 1991) types-tokens distinction, (d) a transient attentional enhancement, and (e) a mechanism for associating types with tokens called the binding pool. The authors instantiate this theoretical position in a connectionist implementation, called neural-ST2, which they illustrate by modeling temporal attention results focused on the attentional blink (AB). They demonstrate that the ST2 model explains a spectrum of AB findings. Furthermore, they highlight a number of new temporal attention predictions arising from the ST2 theory, which are tested in a series of behavioral experiments. Finally, the authors review major AB models and theories and compare them with ST2." }, { "pmid": "1500880", "title": "Temporary suppression of visual processing in an RSVP task: an attentional blink? .", "abstract": "Through rapid serial visual presentation (RSVP), we asked Ss to identify a partially specified letter (target) and then to detect the presence or absence of a fully specified letter (probe). Whereas targets are accurately identified, probes are poorly detected when they are presented during a 270-ms interval beginning 180 ms after the target. 
Probes presented immediately after the target or later in the RSVP stream are accurately detected. This temporary reduction in probe detection was not found in conditions in which a brief blank interval followed the target or Ss were not required to identify the target. The data suggest that the presentation of stimuli after the target but before target-identification processes are complete produces interference at a letter-recognition stage. This interference may cause the temporary suppression of visual attention mechanisms observed in the present study." }, { "pmid": "21223931", "title": "The attentional blink.", "abstract": "When two masked targets (T1 and T2) are presented within approximately 500 ms of each other, subjects are often unable to report the second of the two targets (T2) accurately, even though the first has been reported correctly. In contrast, subjects can report T2 accurately when instructed to ignore T1, or when T1 and T2 are separated by more than 500 ms. The above pattern of results has been labelled the attentional blink (AB). Experiments have revealed that the AB is not the result of perceptual, memory or response output limitations. In general, the various theories advanced to account for the AB, although they differ in the specific mechanisms purported to be responsible, assume that allocating attention to T1 leaves less attention for T2, rendering T2 vulnerable to decay or substitution. The present report attempts to bring together these various accounts by proposing a unifying theory. This report also highlights recent attempts to determine if the AB exists across stimulus modalities and points to applications of AB methods in understanding deficits of visual neglect. We conclude by suggesting that investigations of the AB argue in favour of the view that attention may be thought of as a necessary (but not sufficient) condition for enabling consciousness." }, { "pmid": "9861716", "title": "Electrophysiological evidence for a postperceptual locus of suppression during the attentional blink.", "abstract": "When an observer detects a target in a rapid stream of visual stimuli, there is a brief period of time during which the detection of subsequent targets is impaired. In this study, event-related potentials (ERPs) were recorded from normal adult observers to determine whether this \"attentional blink\" reflects a suppression of perceptual processes or an impairment in postperceptual processes. No suppression was observed during the attentional blink interval for ERP components corresponding to sensory processing (the P1 and N1 components) or semantic analysis (the N400 component). However, complete suppression was observed for an ERP component that has been hypothesized to reflect the updating of working memory (the P3 component). Results indicate that the attentional blink reflects an impairment in a postperceptual stage of processing." }, { "pmid": "8857535", "title": "Word meanings can be accessed but not reported during the attentional blink.", "abstract": "After the detection of a target item in a rapid stream of visual stimuli, there is a period of 400-600 ms during which subsequent targets are missed. This impairment has been labelled the 'attentional blink'. It has been suggested that, unlike an eye blink, the additional blink does not reflect a suppression of perceptual processing, but instead reflects a loss of information at a postperceptual stage, such as visual short-term memory. 
Here we provide electrophysiological evidence that words presented during the attentional blink period are analysed to the point of meaning extraction, even though these extracted meanings cannot be reported 1-2s later. This shows that the attentional blink does indeed reflect a loss of information at a postperceptual stage of processing, and provides a demonstration of the modularity of human brain function." }, { "pmid": "12542127", "title": "Blinks of the mind: memory effects of attentional processes.", "abstract": "If 2 words are presented successively within 500 ms, subjects often miss the 2nd word. This attentional blink reflects a limited capacity to attend to incoming information. Memory effects were studied for words that fell within an attentional blink. Unrelated words were presented in a modified rapid serial visual presentation task at varying stimulus-onset asynchronies, and attention was systematically manipulated. Subsequently, recognition, repetition priming, and semantic priming were measured separately in 3 experiments. Unidentified words showed no recognition and no repetition priming. However, blinked (i.e., unidentified) words did produce semantic priming in related words. When, for instance, ring was blinked, it was easier to subsequently identify wedding than apple. In contrast, when the blinked word itself was presented again, it was not easier to identify than an unrelated word. Possible interpretations of this paradoxical finding are discussed." }, { "pmid": "11766936", "title": "A model of the formation of illusory conjunctions in the time domain.", "abstract": "The authors present a model to account for the miscombination of features when stimuli are presented using the rapid serial visual presentation (RSVP) technique (illusory conjunctions in the time domain). It explains the distributions of responses through a mixture of trial outcomes. In some trials, attention is successfully focused on the target, whereas in others, the responses are based on partial information. Two experiments are presented that manipulated the mean processing time of the target-defining dimension and of the to-be-reported dimension, respectively. As predicted, the average origin of the responses is delayed when lengthening the target-defining dimension, whereas it is earlier when lengthening the to-be-reported dimension; in the first case the number of correct responses is dramatically reduced, whereas in the second it does not change. The results, a review of other research, and simulations carried out with a formal version of the model are all in close accordance with the predictions." }, { "pmid": "17888482", "title": "Attentional blinks as errors in temporal binding.", "abstract": "In the attentional blink [Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18(3), 849-860.], the second of two targets in a Rapid Serial Visual Presentation (RSVP) stream is difficult to detect and identify when it is presented soon but not immediately after the first target. We varied the Stimulus Onset Asynchrony (SOA) of the items in the stream and the color of the targets (red from gray or vice versa), and looked at the responses to the second target. Exact responses to the second target (zero positional error) showed a typical attentional blink profile, with a drop in performance for an interval of 200-500 ms after the first target. 
Approximate responses (positional error no greater than 3 frames) showed no such drop in performance, although results were still dependent on color (better for red) and increased with increasing SOA. These findings are consistent with a two-stage model of visual working memory, where encoding of the first target disrupts attention to (and temporal binding of) the second target. We suggest that this disruption occurs within a certain time (approximately 0.5 s) after the first target, during which period salient distractors are as likely as the second target to enter working memory." }, { "pmid": "18181792", "title": "Temporal selection is suppressed, delayed, and diffused during the attentional blink.", "abstract": "How does temporal selection work, and along what dimensions does it vary from one instance to the next? We explored these questions using a phenomenon in which temporal selection goes awry. In the attentional blink, subjects fail to report the second of a pair of targets (T1 and T2) when they are presented at stimulus onset asynchronies (SOAs) of roughly 200 to 500 ms. We directly tested the properties of temporal selection during the blink by analyzing distractor intrusions at a fast rate of item presentation. Our analysis shows that attentional selection is (a) suppressed, (b) delayed, and (c) diffused in time during the attentional blink. These effects are dissociated by their time course: The measure of each effect returns to the baseline value at a different SOA. Our results constrain theories of the attentional blink and indicate that temporal selection varies along at least three dissociable dimensions: efficacy, latency, and precision." }, { "pmid": "18564042", "title": "The attentional blink reveals serial working memory encoding: evidence from virtual and human event-related potentials.", "abstract": "Observers often miss a second target (T2) if it follows an identified first target item (T1) within half a second in rapid serial visual presentation (RSVP), a finding termed the attentional blink. If two targets are presented in immediate succession, however, accuracy is excellent (Lag 1 sparing). The resource sharing hypothesis proposes a dynamic distribution of resources over a time span of up to 600 msec during the attentional blink. In contrast, the ST(2) model argues that working memory encoding is serial during the attentional blink and that, due to joint consolidation, Lag 1 is the only case where resources are shared. Experiment 1 investigates the P3 ERP component evoked by targets in RSVP. The results suggest that, in this context, P3 amplitude is an indication of bottom-up strength rather than a measure of cognitive resource allocation. Experiment 2, employing a two-target paradigm, suggests that T1 consolidation is not affected by the presentation of T2 during the attentional blink. However, if targets are presented in immediate succession (Lag 1 sparing), they are jointly encoded into working memory. We use the ST(2) model's neural network implementation, which replicates a range of behavioral results related to the attentional blink, to generate \"virtual ERPs\" by summing across activation traces. We compare virtual to human ERPs and show how the results suggest a serial nature of working memory encoding as implied by the ST(2) model." 
}, { "pmid": "9180042", "title": "Types and tokens in visual processing: a double dissociation between the attentional blink and repetition blindness.", "abstract": "In rapid serial visual presentation tasks, correct identification of a target triggers a deficit for reporting a 2nd target appearing within 500 ms: an attentional blink (AB). A different phenomenon, termed repetition blindness (RB), refers to a deficit for the 2nd of 2 stimuli that are identical. What is the relationship between these 2 deficits? The present study obtained a double dissociation between AB and RB. AB and RB followed different time courses (Experiments 1 and 4A), increased target-distractor discriminability alleviated AB but not RB (Experiments 2 and 4A), and enhanced episodic distinctiveness of the two targets eliminated RB but not AB (Experiments 3 and 4B). The implications of the double dissociation between AB and RB for theories of visual processing are discussed." }, { "pmid": "2525600", "title": "Types and tokens in visual letter perception.", "abstract": "Five experiments demonstrate that in briefly presented displays, subjects have difficulty distinguishing repeated instances of a letter or digit (multiple tokens of the same type). When subjects were asked to estimate the numerosity of a display, reports were lower for displays containing repeated letters, for example, DDDD, than for displays containing distinct letters, for example, NRVT. This homogeneity effect depends on the common visual form of adjacent letters. A distinct homogeneity effect, one that depends on the repetition of abstract letter identities, was also found: When subjects were asked to report the number of As and Es in a display, performance was poorer on displays containing two instances of a target letter, one appearing in uppercase and the other in lowercase, than on displays containing one of each target letter. This effect must be due to the repetition of identities, because visual form is not repeated in these mixed-case displays. Further experiments showed that this effect was not influenced by the context surrounding the target letters, and that it can be tied to limitations in attentional processing. The results are interpreted in terms of a model in which parallel encoding processes are capable of automatically analyzing information from several regions of the visual field simultaneously, but fail to accurately encode location information. The resulting representation is thus insufficient to distinguish one token from another because two tokens of a given type differ only in location. However, with serial attentional processing multiple tokens can be kept distinct, pointing to yet another limit on the ability to process visual information in parallel." }, { "pmid": "19485692", "title": "The attentional blink provides episodic distinctiveness: sparing at a cost.", "abstract": "The attentional blink (J. E. Raymond, K. L. Shapiro, & K. M. Arnell, 1992) refers to an apparent gap in perception observed when a second target follows a first within several hundred milliseconds. Theoretical and computational work have provided explanations for early sets of blink data, but more recent data have challenged these accounts by showing that the blink is attenuated when subjects encode strings of stimuli (J. Kawahara, T. Kumada, & V. Di Lollo, 2006; M. R. Nieuwenstein & M. C. Potter, 2006; C. N. Olivers, 2007) or are distracted (C. N. Olivers & S. Nieuwenhuis, 2005) while viewing the rapid serial visual presentation stream. 
The authors describe the episodic simultaneous type, serial token model, a computational account of encoding visual stimuli into working memory that suggests that the attentional blink is a cognitive strategy rather than a resource limitation. This model is composed of neurobiologically plausible elements and simulates the attentional blink with a competitive attentional mechanism that facilitates the formation of episodically distinct representations within working memory. In addition to addressing the blink, the model addresses the phenomena of repetition blindness and whole report superiority, producing predictions that are supported by experimental work." }, { "pmid": "11352145", "title": "On the utility of P3 amplitude as a measure of processing capacity.", "abstract": "The present review focuses on the utility of the amplitude of P3 of as a measure of processing capacity and mental workload. The paper starts with a brief outline of the conceptual framework underlying the relationship between P3 amplitude and task demands, and the cognitive task manipulations that determine demands on capacity. P3 amplitude results are then discussed on the basis of an extensive review of the relevant literature. It is concluded that although it has often been assumed that P3 amplitude depends on the capacity for processing task relevant stimuli, the utility of P3 amplitude as a sensitive and diagnostic measure of processing capacity remains limited. The major factor that prompts this conclusion is that the two principal task variables that have been used to manipulate capacity allocation, namely task difficulty and task emphasis, have opposite effects on the amplitude of P3. I suggest that this is because, in many tasks, an increase in difficulty transforms the structure or actual content of the flow of information in the processing systems, thereby interfering with the very processes that underlie P3 generation. Finally, in an attempt to theoretically integrate the results of the reviewed studies, it is proposed that P3 amplitude reflects activation of elements in a event-categorization network that is controlled by the joint operation of attention and working memory." }, { "pmid": "2270192", "title": "Electrophysiological evidence for parallel and serial processing during visual search.", "abstract": "Event-related potentials were recorded from young adults during a visual search task in order to evaluate parallel and serial models of visual processing in the context of Treisman's feature integration theory. Parallel and serial search strategies were produced by the use of feature-present and feature-absent targets, respectively. In the feature-absent condition, the slopes of the functions relating reaction time and latency of the P3 component to set size were essentially identical, indicating that the longer reaction times observed for larger set sizes can be accounted for solely by changes in stimulus identification and classification time, rather than changes in post-perceptual processing stages. In addition, the amplitude of the P3 wave on target-present trials in this condition increased with set size and was greater when the preceding trial contained a target, whereas P3 activity was minimal on target-absent trials. These effects are consistent with the serial self-terminating search model and appear to contradict parallel processing accounts of attention-demanding visual search performance, at least for a subset of search paradigms. 
Differences in ERP scalp distributions further suggested that different physiological processes are utilized for the detection of feature presence and absence." }, { "pmid": "15102499", "title": "EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis.", "abstract": "We have developed a toolbox and graphic user interface, EEGLAB, running under the crossplatform MATLAB environment (The Mathworks, Inc.) for processing collections of single-trial and/or averaged EEG data of any number of channels. Available functions include EEG data, channel and event information importing, data visualization (scrolling, scalp map and dipole model plotting, plus multi-trial ERP-image plots), preprocessing (including artifact rejection, filtering, epoch selection, and averaging), independent component analysis (ICA) and time/frequency decompositions including channel and component cross-coherence supported by bootstrap statistical methods based on data resampling. EEGLAB functions are organized into three layers. Top-layer functions allow users to interact with the data through the graphic interface without needing to use MATLAB syntax. Menu options allow users to tune the behavior of EEGLAB to available memory. Middle-layer functions allow users to customize data processing using command history and interactive 'pop' functions. Experienced MATLAB users can use EEGLAB data structures and stand-alone signal processing functions to write custom and/or batch analysis scripts. Extensive function help and tutorial information are included. A 'plug-in' facility allows easy incorporation of new EEG modules into the main menu. EEGLAB is freely available (http://www.sccn.ucsd.edu/eeglab/) under the GNU public license for noncommercial use and open source development, together with sample data, user tutorial and extensive documentation." }, { "pmid": "17258178", "title": "P3 latency shifts in the attentional blink: further evidence for second target processing postponement.", "abstract": "A rapid serial visual presentation technique was used to display sequentially two targets, T1 and T2, and monitor P3 amplitude and latency variations associated with the attentional blink (AB) effect. A red T1 digit was embedded on each trial in a sequence of black letters. T2 was either masked by a trailing stimulus or not masked. T1 had to be identified on a proportion of trials, or ignored in other trials. T2 was the black letter 'E' on 20% of the trials, or any other non-'E' black letter in the other 80% of the trials. A delayed 'E' detection task was required at the end of each trial. An AB was observed when T1 had to be reported and T2 was masked. The AB effect was associated with a sizable amplitude reduction of the P3 component time locked to T2 onset. When T2 was not masked, no AB or P3 amplitude variations were observed. When T1 had to be reported, a delayed P3 peak latency was observed at short compared to long T1-T2 intervals. No effect of T1-T2 interval was observed on the T2-locked P3 peak latency when T1 could be ignored. Taken together these findings provide converging evidence in support of temporal attention models bridging behavior and electrophysiology that postulate a direct link between the cause of the AB effect and the sources of both amplitude and latency variations in the T2-locked P3 component." 
}, { "pmid": "12763203", "title": "Event-related potential correlates of the attentional blink phenomenon.", "abstract": "The attentional blink phenomenon results from a transitory impairment of attention that can occur during rapid serial stimulus presentation. A previous study on the physiological correlates of the attentional blink employing event-related potentials (ERPs) suggested that the P3 ERP component for target items presented during this impairment is completely suppressed. This has been taken to indicate that the target-related information does not reach working memory. To reevaluate this hypothesis, we compared ERPs evoked by detected and missed targets in the attentional blink paradigm. Eighteen subjects performed a rapid serial visual presentation (RSVP) task in which either one target (control condition) or two targets had to be detected. ERPs elicited by the second target were analyzed separately for trials in which the target had been detected and missed, respectively. As predicted, detected targets did elicit a P3 during and after the attentional blink period. No clear P3 was found for detected targets presented before the attentional blink, that is, at lag 1. In contrast, missed targets generally did not evoke a P3. Our results provide evidence that targets presented during the attentional blink period can reach working memory. Thus, these findings contribute to evaluating theories of the attentional blink phenomenon." }, { "pmid": "17662259", "title": "A reciprocal relationship between bottom-up trace strength and the attentional blink bottleneck: relating the LC-NE and ST(2) models.", "abstract": "There is considerable current interest in neural modeling of the attentional blink phenomenon. Two prominent models of this task are the Simultaneous Type Serial Token (ST(2)) model and the Locus Coeruleus-Norepinephrine (LC-NE) model. The former of these generates a broad spectrum of behavioral data, while the latter provides a neurophysiologically detailed account. This paper explores the relationship between these two approaches. Specifically, we consider the spectrum of empirical phenomena that the two models generate, particularly emphasizing the need to generate a reciprocal relationship between bottom-up trace strength and the blink bottleneck. Then we discuss the implications of using ST(2) token mechanisms in the LC-NE setting." }, { "pmid": "12613677", "title": "Delayed working memory consolidation during the attentional blink.", "abstract": "After the detection of a target (T1) in a rapid stream of visual stimuli, there is a period of 400-600 msec during which a subsequent target (T2) is missed. This impairment in performance has been labeled the attentional blink. Recent theories propose that the attentional blink reflects a bottleneck in working memory consolidation such that T2 cannot be consolidated until after T1 is consolidated, and T2 is therefore masked by subsequent stimuli if it is presented while T1 is being consolidated. In support of this explanation, Giesbrecht & Di Lollo (1998) found that when T2 is the final item in the stimulus stream, no attentional blink is observed, because there are no subsequent stimuli that might mask T2. To provide a direct test of this explanation of the attentional blink, in the present study we used the P3 component of the event-related potential waveform to track the processing of T2. 
When T2 was followed by a masking item, we found that the P3 wave was completely suppressed during the attentional blink period, indicating that T2 was not consolidated in working memory. When T2 was the last item in the stimulus stream, however, we found that the P3 wave was delayed but not suppressed, indicating that T2 consolidation was not eliminated but simply delayed. These results are consistent with a fundamental limit on the consolidation of information in working memory." }, { "pmid": "9104007", "title": "Personal names and the attentional blink: a visual \"cocktail party\" effect.", "abstract": "Four experiments were carried out to investigate an early- versus late-selection explanation for the attentional blink (AB). In both Experiments 1 and 2, 3 groups of participants were required to identify a noun (Experiment 1) or a name (Experiment 2) target (experimental conditions) and then to identify the presence or absence of a 2nd target (probe), which was their own name, another name, or a specified noun from among a noun distractor stream (Experiment 1) or a name distractor stream (Experiment 2). The conclusions drawn are that individuals do not experience an AB for their own names but do for either other names or nouns. In Experiments 3 and 4, either the participant's own name or another name was presented, as the target and as the item that immediately followed the target, respectively. An AB effect was revealed in both experimental conditions. The results of these experiments are interpreted as support for a late-selection interference account of the AB." }, { "pmid": "9401454", "title": "Temporal binding errors are redistributed by the attentional blink.", "abstract": "When one searches for a target among nontargets appearing in rapid serial visual presentation (RSVP), one's errors in performance typically involve the misreporting of neighboring nontargets. Such illusory conjunctions or intrusion errors are distributed differently around the target, depending on task or stimulus variables. It is shown here that shifts in intrusion error patterns can be produced by the manipulation of attention alone. In a dual-task paradigm, the magnitude and distribution of intrusion errors changed systematically as a function of available attentional resources. Intrusion errors in RSVP tasks reflect internal capacity limitations for binding independent features. The present results support a two-stage model of RSVP target processing." }, { "pmid": "16989545", "title": "Quick minds don't blink: electrophysiological correlates of individual differences in attentional selection.", "abstract": "A well-established phenomenon in the study of attention is the attentional blink-a deficit in reporting the second of two targets when it occurs 200-500 msec after the first. Although the effect has been shown to be robust in a variety of task conditions, not every individual participant shows the effect. We measured electroencephalographic activity for \"nonblinkers\" and \"blinkers\" during execution of a task in which two letters had to be detected in an sequential stream of digit distractors. Nonblinkers showed an earlier P3 peak, suggesting that they are quicker to consolidate information than are blinkers. Differences in frontal selection positivity were also found, such that nonblinkers showed a larger difference between target and distractor activation than did blinkers. 
Nonblinkers seem to extract target information better than blinkers do, allowing them to reject distractors more easily and leaving sufficient resources available to report both targets." }, { "pmid": "9176952", "title": "The Psychophysics Toolbox.", "abstract": "The Psychophysics Toolbox is a software package that supports visual psychophysics. Its routines provide an interface between a high-level interpreted language (MATLAB on the Macintosh) and the video display hardware. A set of example programs is included with the Toolbox distribution." }, { "pmid": "11222977", "title": "Scalp electrode impedance, infection risk, and EEG data quality.", "abstract": "OBJECTIVES\nBreaking the skin when applying scalp electroencephalographic (EEG) electrodes creates the risk of infection from blood-born pathogens such as HIV, Hepatitis-C, and Creutzfeldt-Jacob Disease. Modern engineering principles suggest that excellent EEG signals can be collected with high scalp impedance ( approximately 40 kOmega) without scalp abrasion. The present study was designed to evaluate the effect of electrode-scalp impedance on EEG data quality.\n\n\nMETHODS\nThe first section of the paper reviews electrophysiological recording with modern high input-impedance differential amplifiers and subject isolation, and explains how scalp-electrode impedance influences EEG signal amplitude and power line noise. The second section of the paper presents an experimental study of EEG data quality as a function of scalp-electrode impedance for the standard frequency bands in EEG and event-related potential (ERP) recordings and for 60 Hz noise.\n\n\nRESULTS\nThere was no significant amplitude change in any EEG frequency bands as scalp-electrode impedance increased from less than 10 kOmega (abraded skin) to 40 kOmega (intact skin). 60 Hz was nearly independent of impedance mismatch, suggesting that capacitively coupled noise appearing differentially across mismatched electrode impedances did not contribute substantially to the observed 60 Hz noise levels.\n\n\nCONCLUSIONS\nWith modern high input-impedance amplifiers and accurate digital filters for power line noise, high-quality EEG can be recorded without skin abrasion." }, { "pmid": "15808977", "title": "Modelling event-related responses in the brain.", "abstract": "The aim of this work was to investigate the mechanisms that shape evoked electroencephalographic (EEG) and magneto-encephalographic (MEG) responses. We used a neuronally plausible model to characterise the dependency of response components on the models parameters. This generative model was a neural mass model of hierarchically arranged areas using three kinds of inter-area connections (forward, backward and lateral). We investigated how responses, at each level of a cortical hierarchy, depended on the strength of connections or coupling. Our strategy was to systematically add connections and examine the responses of each successive architecture. We did this in the context of deterministic responses and then with stochastic spontaneous activity. Our aim was to show, in a simple way, how event-related dynamics depend on extrinsic connectivity. To emphasise the importance of nonlinear interactions, we tried to disambiguate the components of event-related potentials (ERPs) or event-related fields (ERFs) that can be explained by a linear superposition of trial-specific responses and those engendered nonlinearly (e.g., by phase-resetting). 
Our key conclusions were; (i) when forward connections, mediating bottom-up or extrinsic inputs, are sufficiently strong, nonlinear mechanisms cause a saturation of excitatory interneuron responses. This endows the system with an inherent stability that precludes nondissipative population dynamics. (ii) The duration of evoked transients increases with the hierarchical depth or level of processing. (iii) When backward connections are added, evoked transients become more protracted, exhibiting damped oscillations. These are formally identical to late or endogenous components seen empirically. This suggests that late components are mediated by reentrant dynamics within cortical hierarchies. (iv) Bilateral connections produce similar effects to backward connections but can also mediate zero-lag phase-locking among areas. (v) Finally, with spontaneous activity, ERPs/ERFs can arise from two distinct mechanisms: For low levels of (stimulus related and ongoing) activity, the systems response conforms to a quasi-linear superposition of separable responses to the fixed and stochastic inputs. This is consistent with classical assumptions that motivate trial averaging to suppress spontaneous activity and disclose the ERP/ERF. However, when activity is sufficiently high, there are nonlinear interactions between the fixed and stochastic inputs. This interaction is expressed as a phase-resetting and represents a qualitatively different explanation for the ERP/ERF." } ]
PLoS Computational Biology
19997483
PMC2777313
10.1371/journal.pcbi.1000585
Predicting Protein Ligand Binding Sites by Combining Evolutionary Sequence Conservation and 3D Structure
Identifying a protein's functional sites is an important step towards characterizing its molecular function. Numerous structure- and sequence-based methods have been developed for this problem. Here we introduce ConCavity, a small molecule binding site prediction algorithm that integrates evolutionary sequence conservation estimates with structure-based methods for identifying protein surface cavities. In large-scale testing on a diverse set of single- and multi-chain protein structures, we show that ConCavity substantially outperforms existing methods for identifying both 3D ligand binding pockets and individual ligand binding residues. As part of our testing, we perform one of the first direct comparisons of conservation-based and structure-based methods. We find that the two approaches provide largely complementary information, which can be combined to improve upon either approach alone. We also demonstrate that ConCavity has state-of-the-art performance in predicting catalytic sites and drug binding pockets. Overall, the algorithms and analysis presented here significantly improve our ability to identify ligand binding sites and further advance our understanding of the relationship between evolutionary sequence conservation and structural and functional attributes of proteins. Data, source code, and prediction visualizations are available on the ConCavity web site (http://compbio.cs.princeton.edu/concavity/).
Further related workSequence-based functional site prediction has been dominated by the search for residue positions that show evidence of evolutionary constraint. Amino acid conservation in the columns of a multiple sequence alignment of homologs is the most common source of such estimates (see [22] for a review). Recent approaches that compare alignment column amino acid distributions to a background amino acid distribution outperform many existing conservation measures [2],[27]. However, the success of conservation-based prediction varies based on the type of functional residue sought; sequence conservation has been shown to be strongly correlated with ligand binding and catalytic sites, but less so with residues in protein-protein interfaces (PPIs) [2]. A variety of techniques have been used to incorporate phylogenetic information into sequence-based functional site prediction, e.g., traversing phylogenetic trees [28],[29], statistical rate inference [26], analysis of functional subfamilies [9],[12], and phylogenetic motifs [30]. Recently, evolutionary conservation has been combined with other properties predicted from sequence, e.g., secondary structure and relative solvent accessibility, to identify functional sites [31].
Structure-based methods for functional site prediction seek to identify protein surface regions favorable for interactions. Ligand binding pockets and residues have been a major focus of these methods [1], [13]–[21]. Ligsite [16] and Surfnet [14] identify pockets by seeking points near the protein surface that are surrounded in most directions by the protein. CASTp [17],[19] applies alpha shape theory from computational geometry to detect and measure cavities. In contrast to these geometric approaches, other methods use models of energetics to identify potential binding sites [23], [25], [32]–[34]. Recent algorithms have focused on van der Waals energetics to create grid potential maps around the surface of the protein. PocketFinder [23] uses an aliphatic carbon as the probe, and Q-SiteFinder [25] uses a methyl group. Our work builds upon geometry- and energetics-based approaches to ligand binding pocket prediction, but it should be noted that there are other structure-based approaches that do not fit in these categories (e.g., Theoretical Microscopic Titration Curves (THEMATICS) [35], binding site similarity [36], phage display libraries [37], and residue interaction graphs [38]). In contrast to sequence-based predictions, structure-based methods can often make predictions both at the level of residues and of regions in space that are likely to contain ligands.
Several previous binding site prediction algorithms have considered both sequence and structure. ConSurf [39] provides a visualization of sequence conservation values on the surface of a protein structure, and the recent PatchFinder [40] method automates the prediction of functional surface patches from ConSurf. Spatially clustered residues with high Evolutionary Trace values were found to overlap with functional sites [41], and Panchenko et al. [42] found that averaging sequence conservation across spatially clustered positions provides improvement in functional site identification in certain settings. Several groups have attempted to identify and separate structural and functional constraints on residues [43],[44]. Wang et al. [45] perform logistic regression on three sequence-based properties and predict functional sites by estimating the effect on structural stability of mutations at each position. Though these approaches make use of protein structures, they do not explicitly consider the surface geometry of the protein in prediction. Geometric, chemical, and evolutionary criteria have been used together to define motifs that represent known binding sites for use in protein function prediction [46]. Machine learning algorithms have been applied to features based on sequence and structure [47],[48] to predict catalytic sites [5], [49]–[51] and recently to predict drug targets [52] and a limited set of ligand and ion binding sites [53]–[55]. Sequence conservation has been found to be a dominant predictor in these contexts.
Most similar to ConCavity are two recent approaches to ligand binding site identification that have used evolutionary conservation in a post-processing step to rerank [1] or refine [56] geometry-based pocket predictions. In contrast, ConCavity integrates conservation directly into the search for pockets. This allows it to identify pockets that are not found when considering structure alone, and enables straightforward analysis of the relationship between sequence conservation, structural patterns, and functional importance.
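To make the two ingredients discussed above concrete, the sketch below is a minimal, illustrative toy, not the ConCavity, LIGSITE, or Q-SiteFinder implementation. All function names, input formats, and parameters (grid spacing, probe radius, ray length, distance cutoff) are assumptions made for the example. It shows (i) a divergence-from-background conservation score for one alignment column and (ii) a LIGSITE-style protein-solvent-protein grid scan whose burial counts are weighted by the conservation of nearby residues.

```python
# Illustrative sketch only (assumed parameters and invented names; not the
# ConCavity/LIGSITE/Q-SiteFinder code): a simple conservation score plus a
# grid-based burial scan, with pocket points weighted by nearby conservation.
import numpy as np

DIRECTIONS = np.array([
    (1, 0, 0), (0, 1, 0), (0, 0, 1),
    (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1),
])

def js_conservation(column_freqs, background):
    """Jensen-Shannon divergence of an alignment column's amino acid
    frequencies from a background distribution (higher = more conserved),
    in the spirit of the divergence-based measures cited above."""
    p = np.asarray(column_freqs, dtype=float)
    q = np.asarray(background, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def occupancy_grid(coords, spacing, probe=2.0, padding=4.0):
    """Rasterise atom coordinates (N x 3, Angstroms) onto a boolean grid that
    is True wherever a voxel lies within `probe` of any atom."""
    origin = coords.min(axis=0) - padding
    shape = np.ceil((coords.max(axis=0) + padding - origin) / spacing).astype(int)
    grid = np.zeros(shape, dtype=bool)
    r = int(np.ceil(probe / spacing))
    for atom in coords:
        idx = np.round((atom - origin) / spacing).astype(int)
        lo, hi = np.maximum(idx - r, 0), np.minimum(idx + r + 1, shape)
        grid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True
    return grid, origin

def ray_hits_protein(grid, voxel, direction, max_steps):
    """Walk from a solvent voxel along one direction; True if protein is met."""
    pos = np.array(voxel)
    for _ in range(max_steps):
        pos = pos + direction
        if np.any(pos < 0) or np.any(pos >= grid.shape):
            return False
        if grid[tuple(pos)]:
            return True
    return False

def burial_count(grid, voxel, max_steps=8):
    """Number of scan axes with protein on both sides of this solvent voxel
    (a protein-solvent-protein event, as in the LIGSITE-style scans above)."""
    return sum(
        ray_hits_protein(grid, voxel, d, max_steps)
        and ray_hits_protein(grid, voxel, -d, max_steps)
        for d in DIRECTIONS
    )

def conservation_weighted_pockets(coords, residue_conservation, spacing=1.5,
                                  min_buried=5, radius=5.0):
    """Score solvent voxels by burial count times the mean conservation of
    atoms within `radius`; `residue_conservation` holds one value per atom
    (e.g., copied from the atom's parent residue). Returns best-first list."""
    cons = np.asarray(residue_conservation, dtype=float)
    grid, origin = occupancy_grid(coords, spacing)
    scored = []
    for voxel in zip(*np.nonzero(~grid)):
        buried = burial_count(grid, voxel)
        if buried < min_buried:
            continue
        centre = origin + np.array(voxel) * spacing
        near = np.linalg.norm(coords - centre, axis=1) < radius
        if near.any():
            scored.append((voxel, buried * cons[near].mean()))
    return sorted(scored, key=lambda item: -item[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    atoms = rng.normal(scale=4.0, size=(80, 3))   # toy "protein" coordinates
    cons = rng.uniform(size=80)                   # toy per-atom conservation
    print(conservation_weighted_pockets(atoms, cons)[:5])
    print(js_conservation([0, 0, 18, 2] + [0] * 16, [1] * 20))
```

In a real pipeline the conservation values would come from a multiple sequence alignment of homologs and the occupancy grid from PDB coordinates; the toy main block only exercises the functions on random data. The point of the design is the one made in the text: conservation enters the pocket score directly rather than as a post-hoc reranking step.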
[ "16995956", "17584799", "17630824", "11021970", "12589769", "17868687", "8609611", "9704298", "16844972", "17933762", "15201400", "15037084", "11575940", "15201051", "18165317", "15544817", "15980475", "19081051", "12547207", "15364576", "16224101", "12850142", "15290784", "9787643", "12421562", "18207163", "9485303", "12391332", "14681376", "9399862" ]
[ { "pmid": "16995956", "title": "LIGSITEcsc: predicting ligand binding sites using the Connolly surface and degree of conservation.", "abstract": "BACKGROUND\nIdentifying pockets on protein surfaces is of great importance for many structure-based drug design applications and protein-ligand docking algorithms. Over the last ten years, many geometric methods for the prediction of ligand-binding sites have been developed.\n\n\nRESULTS\nWe present LIGSITEcsc, an extension and implementation of the LIGSITE algorithm. LIGSITEcsc is based on the notion of surface-solvent-surface events and the degree of conservation of the involved surface residues. We compare our algorithm to four other approaches, LIGSITE, CAST, PASS, and SURFNET, and evaluate all on a dataset of 48 unbound/bound structures and 210 bound-structures. LIGSITEcsc performs slightly better than the other tools and achieves a success rate of 71% and 75%, respectively.\n\n\nCONCLUSION\nThe use of the Connolly surface leads to slight improvements, the prediction re-ranking by conservation to significant improvements of the binding site predictions. A web server for LIGSITEcsc and its source code is available at scoppi.biotec.tu-dresden.de/pocket" }, { "pmid": "17584799", "title": "firestar--prediction of functionally important residues using structural templates and alignment reliability.", "abstract": "UNLABELLED\nHere we present firestar, an expert system for predicting ligand-binding residues in protein structures. The server provides a method for extrapolating from the large inventory of functionally important residues organized in the FireDB database and adds information about the local conservation of potential-binding residues. The interface allows users to make queries by protein sequence or structure. The user can access pairwise and multiple alignments with structures that have relevant functionally important binding sites. The results are presented in a series of easy to read displays that allow users to compare binding residue conservation across homologous proteins. The binding site residues can also be viewed with molecular visualization tools. One feature of firestar is that it can be used to evaluate the biological relevance of small molecule ligands present in PDB structures. With the server it is easy to discern whether small molecule binding is conserved in homologous structures. We found this facility particularly useful during the recent assessment of CASP7 function prediction.\n\n\nAVAILABILITY\nhttp://firedb.bioinfo.cnio.es/Php/FireStar.php." }, { "pmid": "17630824", "title": "Protein-protein interaction hotspots carved into sequences.", "abstract": "Protein-protein interactions, a key to almost any biological process, are mediated by molecular mechanisms that are not entirely clear. The study of these mechanisms often focuses on all residues at protein-protein interfaces. However, only a small subset of all interface residues is actually essential for recognition or binding. Commonly referred to as \"hotspots,\" these essential residues are defined as residues that impede protein-protein interactions if mutated. While no in silico tool identifies hotspots in unbound chains, numerous prediction methods were designed to identify all the residues in a protein that are likely to be a part of protein-protein interfaces. These methods typically identify successfully only a small fraction of all interface residues. 
Here, we analyzed the hypothesis that the two subsets correspond (i.e., that in silico methods may predict few residues because they preferentially predict hotspots). We demonstrate that this is indeed the case and that we can therefore predict directly from the sequence of a single protein which residues are interaction hotspots (without knowledge of the interaction partner). Our results suggested that most protein complexes are stabilized by similar basic principles. The ability to accurately and efficiently identify hotspots from sequence enables the annotation and analysis of protein-protein interaction hotspots in entire organisms and thus may benefit function prediction and drug development. The server for prediction is available at http://www.rostlab.org/services/isis." }, { "pmid": "11021970", "title": "Analysis and prediction of functional sub-types from protein sequence alignments.", "abstract": "The increasing number and diversity of protein sequence families requires new methods to define and predict details regarding function. Here, we present a method for analysis and prediction of functional sub-types from multiple protein sequence alignments. Given an alignment and set of proteins grouped into sub-types according to some definition of function, such as enzymatic specificity, the method identifies positions that are indicative of functional differences by comparison of sub-type specific sequence profiles, and analysis of positional entropy in the alignment. Alignment positions with significantly high positional relative entropy correlate with those known to be involved in defining sub-types for nucleotidyl cyclases, protein kinases, lactate/malate dehydrogenases and trypsin-like serine proteases. We highlight new positions for these proteins that suggest additional experiments to elucidate the basis of specificity. The method is also able to predict sub-type for unclassified sequences. We assess several variations on a prediction method, and compare them to simple sequence comparisons. For assessment, we remove close homologues to the sequence for which a prediction is to be made (by a sequence identity above a threshold). This simulates situations where a protein is known to belong to a protein family, but is not a close relative of another protein of known sub-type. Considering the four families above, and a sequence identity threshold of 30 %, our best method gives an accuracy of 96 % compared to 80 % obtained for sequence similarity and 74 % for BLAST. We describe the derivation of a set of sub-type groupings derived from an automated parsing of alignments from PFAM and the SWISSPROT database, and use this to perform a large-scale assessment. The best method gives an average accuracy of 94 % compared to 68 % for sequence similarity and 79 % for BLAST. We discuss implications for experimental design, genome annotation and the prediction of protein function and protein intra-residue distances." }, { "pmid": "12589769", "title": "Automatic methods for predicting functionally important residues.", "abstract": "Sequence analysis is often the first guide for the prediction of residues in a protein family that may have functional significance. A few methods have been proposed which use the division of protein families into subfamilies in the search for those positions that could have some functional significance for the whole family, but at the same time which exhibit the specificity of each subfamily (\"Tree-determinant residues\"). 
However, there are still many unsolved questions like the best division of a protein family into subfamilies, or the accurate detection of sequence variation patterns characteristic of different subfamilies. Here we present a systematic study in a significant number of protein families, testing the statistical meaning of the Tree-determinant residues predicted by three different methods that represent the range of available approaches. The first method takes as a starting point a phylogenetic representation of a protein family and, following the principle of Relative Entropy from Information Theory, automatically searches for the optimal division of the family into subfamilies. The second method looks for positions whose mutational behavior is reminiscent of the mutational behavior of the full-length proteins, by directly comparing the corresponding distance matrices. The third method is an automation of the analysis of distribution of sequences and amino acid positions in the corresponding multidimensional spaces using a vector-based principal component analysis. These three methods have been tested on two non-redundant lists of protein families: one composed by proteins that bind a variety of ligand groups, and the other composed by proteins with annotated functionally relevant sites. In most cases, the residues predicted by the three methods show a clear tendency to be close to bound ligands of biological relevance and to those amino acids described as participants in key aspects of protein function. These three automatic methods provide a wide range of possibilities for biologists to analyze their families of interest, in a similar way to the one presented here for the family of proteins related with ras-p21." }, { "pmid": "17868687", "title": "Functional specificity lies within the properties and evolutionary changes of amino acids.", "abstract": "The rapid increase in the amount of protein sequence data has created a need for automated identification of sites that determine functional specificity among related subfamilies of proteins. A significant fraction of subfamily specific sites are only marginally conserved, which makes it extremely challenging to detect those amino acid changes that lead to functional diversification. To address this critical problem we developed a method named SPEER (specificity prediction using amino acids' properties, entropy and evolution rate) to distinguish specificity determining sites from others. SPEER encodes the conservation patterns of amino acid types using their physico-chemical properties and the heterogeneity of evolutionary changes between and within the subfamilies. To test the method, we compiled a test set containing 13 protein families with known specificity determining sites. Extensive benchmarking by comparing the performance of SPEER with other specificity site prediction algorithms has shown that it performs better in predicting several categories of subfamily specific sites." }, { "pmid": "8609611", "title": "The automatic search for ligand binding sites in proteins of known three-dimensional structure using only geometric criteria.", "abstract": "The biological function of a protein typically depends on the structure of specific binding sites. These sites are located at the surface of the protein molecule and are determined by geometrical arrangements and physico-chemical properties of tens of non-hydrogen atoms. 
In this paper we describe a new algorithm called APROPOS, based purely on geometric criteria for identifying such binding sites using atomic co-ordinates. For the description of the protein shape we use an alpha-shape algorithm which generates a whole family of shapes with different levels of detail. Comparing shapes of different resolution we find cavities on the surface of the protein responsible for ligand binding. The algorithm correctly locates more than 95% of all binding sites for ligands and prosthetic groups of molecular mass between about 100 and 2000 Da in a representative set of proteins. Only in very few proteins does the method find binding sites of single ions outside the active site of enzymes. With one exception, we observe that interfaces between subunits show different geometric features compared to binding sites of ligands. Our results clearly support the view that protein-protein interactions occur between flat areas of protein surface whereas specific interactions of smaller ligands take place in pockets in the surface." }, { "pmid": "9704298", "title": "LIGSITE: automatic and efficient detection of potential small molecule-binding sites in proteins.", "abstract": "LIGSITE is a new program for the automatic and time-efficient detection of pockets on the surface of proteins that may act as binding sites for small molecule ligands. Pockets are identified with a series of simple operations on a cubic grid. Using a set of receptor-ligand complexes we show that LIGSITE is able to identify the binding sites of small molecule ligands with high precision. The main advantage of LIGSITE is its speed. Typical search times are in the range of 5 to 20 s for medium-sized proteins. LIGSITE is therefore well suited for identification of pockets in large sets of proteins (e.g., protein families) for comparative studies. For graphical display LIGSITE produces VRML representations of the protein-ligand complex and the binding site for display with a VRML viewer such as WebSpace from SGI." }, { "pmid": "16844972", "title": "CASTp: computed atlas of surface topography of proteins with structural and topographical mapping of functionally annotated residues.", "abstract": "Cavities on a proteins surface as well as specific amino acid positioning within it create the physicochemical properties needed for a protein to perform its function. CASTp (http://cast.engr.uic.edu) is an online tool that locates and measures pockets and voids on 3D protein structures. This new version of CASTp includes annotated functional information of specific residues on the protein structure. The annotations are derived from the Protein Data Bank (PDB), Swiss-Prot, as well as Online Mendelian Inheritance in Man (OMIM), the latter contains information on the variant single nucleotide polymorphisms (SNPs) that are known to cause disease. These annotated residues are mapped to surface pockets, interior voids or other regions of the PDB structures. We use a semi-global pair-wise sequence alignment method to obtain sequence mapping between entries in Swiss-Prot, OMIM and entries in PDB. The updated CASTp web server can be used to study surface features, functional regions and specific roles of key residues of proteins." 
}, { "pmid": "17933762", "title": "LigASite--a database of biologically relevant binding sites in proteins with known apo-structures.", "abstract": "Better characterization of binding sites in proteins and the ability to accurately predict their location and energetic properties are major challenges which, if addressed, would have many valuable practical applications. Unfortunately, reliable benchmark datasets of binding sites in proteins are still sorely lacking. Here, we present LigASite ('LIGand Attachment SITE'), a gold-standard dataset of binding sites in 550 proteins of known structures. LigASite consists exclusively of biologically relevant binding sites in proteins for which at least one apo- and one holo-structure are available. In defining the binding sites for each protein, information from all holo-structures is combined, considering in each case the quaternary structure defined by the PQS server. LigASite is built using simple criteria and is automatically updated as new structures become available in the PDB, thereby guaranteeing optimal data coverage over time. Both a redundant and a culled non-redundant version of the dataset is available at http://www.scmbb.ulb.ac.be/Users/benoit/LigASite. The website interface allows users to search the dataset by PDB identifiers, ligand identifiers, protein names or sequence, and to look for structural matches as defined by the CATH homologous superfamilies. The datasets can be downloaded from the website as Schema-validated XML files or comma-separated flat files." }, { "pmid": "15201400", "title": "Comparison of site-specific rate-inference methods for protein sequences: empirical Bayesian methods are superior.", "abstract": "The degree to which an amino acid site is free to vary is strongly dependent on its structural and functional importance. An amino acid that plays an essential role is unlikely to change over evolutionary time. Hence, the evolutionary rate at an amino acid site is indicative of how conserved this site is and, in turn, allows evaluation of its importance in maintaining the structure/function of the protein. When using probabilistic methods for site-specific rate inference, few alternatives are possible. In this study we use simulations to compare the maximum-likelihood and Bayesian paradigms. We study the dependence of inference accuracy on such parameters as number of sequences, branch lengths, the shape of the rate distribution, and sequence length. We also study the possibility of simultaneously estimating branch lengths and site-specific rates. Our results show that a Bayesian approach is superior to maximum-likelihood under a wide range of conditions, indicating that the prior that is incorporated into the Bayesian computation significantly improves performance. We show that when branch lengths are unknown, it is better first to estimate branch lengths and then to estimate site-specific rates. This procedure was found to be superior to estimating both the branch lengths and site-specific rates simultaneously. Finally, we illustrate the difference between maximum-likelihood and Bayesian methods when analyzing site-conservation for the apoptosis regulator protein Bcl-x(L)." }, { "pmid": "15037084", "title": "A family of evolution-entropy hybrid methods for ranking protein residues by importance.", "abstract": "In order to identify the amino acids that determine protein structure and function it is useful to rank them by their relative importance. 
Previous approaches belong to two groups; those that rely on statistical inference, and those that focus on phylogenetic analysis. Here, we introduce a class of hybrid methods that combine evolutionary and entropic information from multiple sequence alignments. A detailed analysis in insulin receptor kinase domain and tests on proteins that are well-characterized experimentally show the hybrids' greater robustness with respect to the input choice of sequences, as well as improved sensitivity and specificity of prediction. This is a further step toward proteome scale analysis of protein structure and function." }, { "pmid": "11575940", "title": "Prediction of functionally important residues based solely on the computed energetics of protein structure.", "abstract": "Catalytic and other functionally important residues in proteins can often be mutated to yield more stable proteins. Many of these residues are charged residues that are located in electrostatically unfavorable environments. Here it is demonstrated that because continuum electrostatics methods can identify these destabilizing residues, the same methods can also be used to identify functionally important residues in otherwise uncharacterized proteins. To establish this point, detailed calculations are performed on six proteins for which good structural and mutational data are available from experiments. In all cases it is shown that functionally important residues known to be destabilizing experimentally are among the most destabilizing residues found in the calculations. A larger scale analysis performed on 216 different proteins demonstrates the existence of a general relationship between the calculated electrostatic energy of a charged residue and its degree of evolutionary conservation. This relationship becomes obscured when electrostatic energies are calculated using Coulomb's law instead of the more complete continuum electrostatics method. Finally, in a first predictive application of the method, calculations are performed on three proteins whose structures have recently been reported by a structural genomics consortium." }, { "pmid": "15201051", "title": "Enzyme/non-enzyme discrimination and prediction of enzyme active site location using charge-based methods.", "abstract": "Calculations of charge interactions complement analysis of a characterised active site, rationalising pH-dependence of activity and transition state stabilisation. Prediction of active site location through large DeltapK(a)s or electrostatic strain is relevant for structural genomics. We report a study of ionisable groups in a set of 20 enzymes, finding that false positives obscure predictive potential. In a larger set of 156 enzymes, peaks in solvent-space electrostatic properties are calculated. Both electric field and potential match well to active site location. The best correlation is found with electrostatic potential calculated from uniform charge density over enzyme volume, rather than from assignment of a standard atom-specific charge set. Studying a shell around each molecule, for 77% of enzymes the potential peak is within that 5% of the shell closest to the active site centre, and 86% within 10%. Active site identification by largest cleft, also with projection onto a shell, gives 58% of enzymes for which the centre of the largest cleft lies within 5% of the active site, and 70% within 10%. 
Dielectric boundary conditions emphasise clefts in the uniform charge density method, which is suited to recognition of binding pockets embedded within larger clefts. The variation of peak potential with distance from active site, and comparison between enzyme and non-enzyme sets, gives an optimal threshold distinguishing enzyme from non-enzyme. We find that 87% of the enzyme set exceeds the threshold as compared to 29% of the non-enzyme set. Enzyme/non-enzyme homologues, \"structural genomics\" annotated proteins and catalytic/non-catalytic RNAs are studied in this context." }, { "pmid": "18165317", "title": "A threading-based method (FINDSITE) for ligand-binding site prediction and functional annotation.", "abstract": "The detection of ligand-binding sites is often the starting point for protein function identification and drug discovery. Because of inaccuracies in predicted protein structures, extant binding pocket-detection methods are limited to experimentally solved structures. Here, FINDSITE, a method for ligand-binding site prediction and functional annotation based on binding-site similarity across groups of weakly homologous template structures identified from threading, is described. For crystal structures, considering a cutoff distance of 4 A as the hit criterion, the success rate is 70.9% for identifying the best of top five predicted ligand-binding sites with a ranking accuracy of 76.0%. Both high prediction accuracy and ability to correctly rank identified binding sites are sustained when approximate protein models (<35% sequence identity to the closest template structure) are used, showing a 67.3% success rate with 75.5% ranking accuracy. In practice, FINDSITE tolerates structural inaccuracies in protein models up to a rmsd from the crystal structure of 8-10 A. This is because analysis of weakly homologous protein models reveals that about half have a rmsd from the native binding site <2 A. Furthermore, the chemical properties of template-bound ligands can be used to select ligand templates associated with the binding site. In most cases, FINDSITE can accurately assign a molecular function to the protein model." }, { "pmid": "15544817", "title": "Network analysis of protein structures identifies functional residues.", "abstract": "Identifying active site residues strictly from protein three-dimensional structure is a difficult task, especially for proteins that have few or no homologues. We transformed protein structures into residue interaction graphs (RIGs), where amino acid residues are graph nodes and their interactions with each other are the graph edges. We found that active site, ligand-binding and evolutionary conserved residues, typically have high closeness values. Residues with high closeness values interact directly or by a few intermediates with all other residues of the protein. Combining closeness and surface accessibility identified active site residues in 70% of 178 representative structures. Detailed structural analysis of specific enzymes also located other types of functional residues. These include the substrate binding sites of acetylcholinesterases and subtilisin, and the regions whose structural changes activate MAP kinase and glycogen phosphorylase. Our approach uses single protein structures, and does not rely on sequence conservation, comparison to other similar structures or any prior knowledge. Residue closeness is distinct from various sequence and structure measures and can thus complement them in identifying key protein residues. 
Closeness integrates the effect of the entire protein on single residues. Such natural structural design may be evolutionary maintained to preserve interaction redundancy and contribute to optimal setting of functional sites." }, { "pmid": "15980475", "title": "ConSurf 2005: the projection of evolutionary conservation scores of residues on protein structures.", "abstract": "Key amino acid positions that are important for maintaining the 3D structure of a protein and/or its function(s), e.g. catalytic activity, binding to ligand, DNA or other proteins, are often under strong evolutionary constraints. Thus, the biological importance of a residue often correlates with its level of evolutionary conservation within the protein family. ConSurf (http://consurf.tau.ac.il/) is a web-based tool that automatically calculates evolutionary conservation scores and maps them on protein structures via a user-friendly interface. Structurally and functionally important regions in the protein typically appear as patches of evolutionarily conserved residues that are spatially close to each other. We present here version 3.0 of ConSurf. This new version includes an empirical Bayesian method for scoring conservation, which is more accurate than the maximum-likelihood method that was used in the earlier release. Various additional steps in the calculation can now be controlled by a number of advanced options, thus further improving the accuracy of the calculation. Moreover, ConSurf version 3.0 also includes a measure of confidence for the inferred amino acid conservation scores." }, { "pmid": "19081051", "title": "Detection of functionally important regions in \"hypothetical proteins\" of known structure.", "abstract": "Structural genomics initiatives provide ample structures of \"hypothetical proteins\" (i.e., proteins of unknown function) at an ever increasing rate. However, without function annotation, this structural goldmine is of little use to biologists who are interested in particular molecular systems. To this end, we used (an improved version of) the PatchFinder algorithm for the detection of functional regions on the protein surface, which could mediate its interactions with, e.g., substrates, ligands, and other proteins. Examination, using a data set of annotated proteins, showed that PatchFinder outperforms similar methods. We collected 757 structures of hypothetical proteins and their predicted functional regions in the N-Func database. Inspection of several of these regions demonstrated that they are useful for function prediction. For example, we suggested an interprotein interface and a putative nucleotide-binding site. A web-server implementation of PatchFinder and the N-Func database are available at http://patchfinder.tau.ac.il/." }, { "pmid": "12547207", "title": "An accurate, sensitive, and scalable method to identify functional sites in protein structures.", "abstract": "Functional sites determine the activity and interactions of proteins and as such constitute the targets of most drugs. However, the exponential growth of sequence and structure data far exceeds the ability of experimental techniques to identify their locations and key amino acids. To fill this gap we developed a computational Evolutionary Trace method that ranks the evolutionary importance of amino acids in protein sequences. 
Studies show that the best-ranked residues form fewer and larger structural clusters than expected by chance and overlap with functional sites, but until now the significance of this overlap has remained qualitative. Here, we use 86 diverse protein structures, including 20 determined by the structural genomics initiative, to show that this overlap is a recurrent and statistically significant feature. An automated ET correctly identifies seven of ten functional sites by the least favorable statistical measure, and nine of ten by the most favorable one. These results quantitatively demonstrate that a large fraction of functional sites in the proteome may be accurately identified from sequence and structure. This should help focus structure-function studies, rational drug design, protein engineering, and functional annotation to the relevant regions of a protein." }, { "pmid": "15364576", "title": "Distinguishing structural and functional restraints in evolution in order to identify interaction sites.", "abstract": "Structural genomics projects are producing many three-dimensional structures of proteins that have been identified only from their gene sequences. It is therefore important to develop computational methods that will predict sites involved in productive intermolecular interactions that might give clues about functions. Techniques based on evolutionary conservation of amino acids have the advantage over physiochemical methods in that they are more general. However, the majority of techniques neither use all available structural and sequence information, nor are able to distinguish between evolutionary restraints that arise from the need to maintain structure and those that arise from function. Three methods to identify evolutionary restraints on protein sequence and structure are described here. The first identifies those residues that have a higher degree of conservation than expected: this is achieved by comparing for each amino acid position the sequence conservation observed in the homologous family of proteins with the degree of conservation predicted on the basis of amino acid type and local environment. The second uses information theory to identify those positions where environment-specific substitution tables make poor predictions of the overall amino acid substitution pattern. The third method identifies those residues that have highly conserved positions when three-dimensional structures of proteins in a homologous family are superposed. The scores derived from these methods are mapped onto the protein three-dimensional structures and contoured, allowing identification clusters of residues with strong evolutionary restraints that are sites of interaction in proteins involved in a variety of functions. Our method differs from other published techniques by making use of structural information to identify restraints that arise from the structure of the protein and differentiating these restraints from others that derive from intermolecular interactions that mediate functions in the whole organism." }, { "pmid": "16224101", "title": "Improvement in protein functional site prediction by distinguishing structural and functional constraints on protein family evolution using computational design.", "abstract": "The prediction of functional sites in newly solved protein structures is a challenge for computational structural biology. Most methods for approaching this problem use evolutionary conservation as the primary indicator of the location of functional sites. 
However, sequence conservation reflects not only evolutionary selection at functional sites to maintain protein function, but also selection throughout the protein to maintain the stability of the folded state. To disentangle sequence conservation due to protein functional constraints from sequence conservation due to protein structural constraints, we use all atom computational protein design methodology to predict sequence profiles expected under solely structural constraints, and to compute the free energy difference between the naturally occurring amino acid and the lowest free energy amino acid at each position. We show that functional sites are more likely than non-functional sites to have computed sequence profiles which differ significantly from the naturally occurring sequence profiles and to have residues with sub-optimal free energies, and that incorporation of these two measures improves sequence based prediction of protein functional sites. The combined sequence and structure based functional site prediction method has been implemented in a publicly available web server." }, { "pmid": "12850142", "title": "Using a neural network and spatial clustering to predict the location of active sites in enzymes.", "abstract": "Structural genomics projects aim to provide a sharp increase in the number of structures of functionally unannotated, and largely unstudied, proteins. Algorithms and tools capable of deriving information about the nature, and location, of functional sites within a structure are increasingly useful therefore. Here, a neural network is trained to identify the catalytic residues found in enzymes, based on an analysis of the structure and sequence. The neural network output, and spatial clustering of the highly scoring residues are then used to predict the location of the active site.A comparison of the performance of differently trained neural networks is presented that shows how information from sequence and structure come together to improve the prediction accuracy of the network. Spatial clustering of the network results provides a reliable way of finding likely active sites. In over 69% of the test cases the active site is correctly predicted, and a further 25% are partially correctly predicted. The failures are generally due to the poor quality of the automatically generated sequence alignments. We also present predictions identifying the active site, and potential functional residues in five recently solved enzyme structures, not used in developing the method. The method correctly identifies the putative active site in each case. In most cases the likely functional residues are identified correctly, as well as some potentially novel functional groups." }, { "pmid": "15290784", "title": "Recognizing complex, asymmetric functional sites in protein structures using a Bayesian scoring function.", "abstract": "The increase in known three-dimensional protein structures enables us to build statistical profiles of important functional sites in protein molecules. These profiles can then be used to recognize sites in large-scale automated annotations of new protein structures. We report an improved FEATURE system which recognizes functional sites in protein structures. FEATURE defines multi-level physico-chemical properties and recognizes sites based on the spatial distribution of these properties in the sites' microenvironments. 
It uses a Bayesian scoring function to compare a query region with the statistical profile built from known examples of sites and control nonsites. We have previously shown that FEATURE can accurately recognize calcium-binding sites and have reported interesting results scanning for calcium-binding sites in the entire Protein Data Bank. Here we report the ability of the improved FEATURE to characterize and recognize geometrically complex and asymmetric sites such as ATP-binding sites and disulfide bond-forming sites. FEATURE does not rely on conserved residues or conserved residue geometry of the sites. We also demonstrate that, in the absence of a statistical profile of the sites, FEATURE can use an artificially constructed profile based on a priori knowledge to recognize the sites in new structures, using redoxin active sites as an example." }, { "pmid": "12421562", "title": "Analysis of catalytic residues in enzyme active sites.", "abstract": "We present an analysis of the residues directly involved in catalysis in 178 enzyme active sites. Specific criteria were derived to define a catalytic residue, and used to create a catalytic residue dataset, which was then analysed in terms of properties including secondary structure, solvent accessibility, flexibility, conservation, quaternary structure and function. The results indicate the dominance of a small set of amino acid residues in catalysis and give a picture of a general active site environment. It is hoped that this information will provide a better understanding of the molecular mechanisms involved in catalysis and a heuristic basis for predicting catalytic residues in enzymes of unknown function." }, { "pmid": "18207163", "title": "Crystal structures of the Streptomyces coelicolor TetR-like protein ActR alone and in complex with actinorhodin or the actinorhodin biosynthetic precursor (S)-DNPA.", "abstract": "Actinorhodin, an antibiotic produced by Streptomyces coelicolor, is exported from the cell by the ActA efflux pump. actA is divergently transcribed from actR, which encodes a TetR-like transcriptional repressor. We showed previously that ActR represses transcription by binding to an operator from the actA/actR intergenic region. Importantly, actinorhodin itself or various actinorhodin biosynthetic intermediates can cause ActR to dissociate from its operator, leading to derepression. This suggests that ActR may mediate timely self-resistance to an endogenously produced antibiotic by responding to one of its biosynthetic precursors. Here, we report the structural basis for this precursor-mediated derepression with crystal structures of homodimeric ActR by itself and in complex with either actinorhodin or the actinorhodin biosynthetic intermediate (S)-DNPA [4-dihydro-9-hydroxy-1-methyl-10-oxo-3-H-naphtho-[2,3-c]-pyran-3-(S)-acetic acid]. The ligand-binding tunnel in each ActR monomer has a striking hydrophilic/hydrophobic/hydrophilic arrangement of surface residues that accommodate either one hexacyclic actinorhodin molecule or two back-to-back tricyclic (S)-DNPA molecules. Moreover, our work also reveals the strongest structural evidence to date that TetR-mediated antibiotic resistance may have been acquired from an antibiotic-producer organism." }, { "pmid": "9485303", "title": "Structure of the shiga-like toxin I B-pentamer complexed with an analogue of its receptor Gb3.", "abstract": "Shiga-like toxin I (SLT-I) is a virulence factor of Escherichia coli strains that cause disease in humans. 
Like other members of the Shiga toxin family, it consists of an enzymatic (A) subunit and five copies of a binding subunit (the B-pentamer). The B-pentamer binds to a specific glycolipid, globotriaosylceramide (Gb3), on the surface of target cells and thereby plays a crucial role in the entry of the toxin. Here we present the crystal structure at 2.8 A resolution of the SLT-I B-pentamer complexed with an analogue of the Gb3 trisaccharide. The structure reveals a surprising density of binding sites, with three trisaccharide molecules bound to each B-subunit monomer of 69 residues. All 15 trisaccharides bind to one side of the B-pentamer, providing further evidence that this side faces the cell membrane. The structural model is consistent with data from site-directed mutagenesis and binding of carbohydrate analogues, and allows the rational design of therapeutic Gb3 analogues that block the attachment of toxin to cells." }, { "pmid": "12391332", "title": "Promiscuity in ligand-binding: The three-dimensional structure of a Piromyces carbohydrate-binding module, CBM29-2, in complex with cello- and mannohexaose.", "abstract": "Carbohydrate-protein recognition is central to many biological processes. Enzymes that act on polysaccharide substrates frequently contain noncatalytic domains, \"carbohydrate-binding modules\" (CBMs), that target the enzyme to the appropriate substrate. CBMs that recognize specific plant structural polysaccharides are often able to accommodate both the variable backbone and the side-chain decorations of heterogeneous ligands. \"CBM29\" modules, derived from a noncatalytic component of the Piromyces equi cellulase/hemicellulase complex, provide an example of this selective yet flexible recognition. They discriminate strongly against some polysaccharides while remaining relatively promiscuous toward both beta-1,4-linked manno- and cello-oligosaccharides. This feature may reflect preferential, but flexible, targeting toward glucomannans in the plant cell wall. The three-dimensional structure of CBM29-2 and its complexes with cello- and mannohexaose reveal a beta-jelly-roll topology, with an extended binding groove on the concave surface. The orientation of the aromatic residues complements the conformation of the target sugar polymer while accommodation of both manno- and gluco-configured oligo- and polysaccharides is conferred by virtue of the plasticity of the direct interactions from their axial and equatorial 2-hydroxyls, respectively. Such flexible ligand recognition targets the anaerobic fungal complex to a range of different components in the plant cell wall and thus plays a pivotal role in the highly efficient degradation of this composite structure by the microbial eukaryote." }, { "pmid": "14681376", "title": "The Catalytic Site Atlas: a resource of catalytic sites and residues identified in enzymes using structural data.", "abstract": "The Catalytic Site Atlas (CSA) provides catalytic residue annotation for enzymes in the Protein Data Bank. It is available online at http://www.ebi.ac.uk/thornton-srv/databases/CSA. The database consists of two types of annotated site: an original hand-annotated set containing information extracted from the primary literature, using defined criteria to assign catalytic residues, and an additional homologous set, containing annotations inferred by PSI-BLAST and sequence alignment to one of the original set. The CSA can be queried via Swiss-Prot identifier and EC number, as well as by PDB code. 
CSA Version 1.0 contains 177 original hand-annotated entries and 2608 homologous entries, and covers approximately 30% of all EC numbers found in PDB. The CSA will be updated on a monthly basis to include homologous sites found in new PDBs, and new hand-annotated enzymes as and when their annotation is completed." }, { "pmid": "9399862", "title": "The HSSP database of protein structure-sequence alignments and family profiles.", "abstract": "HSSP (http://www.sander.embl-ebi.ac.uk/hssp/) is a derived database merging structure (3-D) and sequence (1-D) information. For each protein of known 3D structure from the Protein Data Bank (PDB), we provide a multiple sequence alignment of putative homologues and a sequence profile characteristic of the protein family, centered on the known structure. The list of homologues is the result of an iterative database search in SWISS-PROT using a position-weighted dynamic programming method for sequence profile alignment (MaxHom). The database is updated frequently. The listed putative homologues are very likely to have the same 3D structure as the PDB protein to which they have been aligned. As a result, the database not only provides aligned sequence families, but also implies secondary and tertiary structures covering 33% of all sequences in SWISS-PROT." }
International Journal of Biomedical Imaging
20119490
PMC2810460
10.1155/2009/767805
Bayesian Classifier with Simplified Learning Phase for Detecting Microcalcifications in Digital Mammograms
Detection of clustered microcalcifications (MCs) in mammograms represents a significant step towards successful detection of breast cancer since their existence is one of the early signs of cancer. In this paper, a new framework that integrates Bayesian classifier and a pattern synthesizing scheme for detecting microcalcification clusters is proposed. This proposed work extracts textural, spectral, and statistical features of each input mammogram and generates models of real MCs to be used as training samples through a simplified learning phase of the Bayesian classifier. Followed by an estimation of the classifier's decision function parameters, a mammogram is segmented into the identified targets (MCs) against background (healthy tissue). The proposed algorithm has been tested using 23 mammograms from the mini-MIAS database. Experimental results achieved MCs detection with average true positive (sensitivity) and false positive (specificity) of 91.3% and 98.6%, respectively. Results also indicate that the modeling of the real MCs plays a significant role in the performance of the classifier and thus should be given further investigation.
2.3. Related Work
The Gaussian nature of the MCs' gray-level distribution and of their wavelet representation [21, 31] has been used, along with other textural features and a Bayesian classifier, to identify true MCs [19, 22]. This Gaussian nature justifies the use of a Bayesian classifier for microcalcification detection. Yu et al. [19] proposed a two-stage scheme for detecting MCs in which wavelet-based filtering and global thresholding are first used to identify suspicious MC regions. They then employed Bayes and back-propagation neural network (BPNN) classifiers to identify the MCs, that is, to reduce the number of false signals resulting from the wavelet filtering [14]. They also used statistical Markov random field (MRF) modeling, along with other image processing techniques, to extract primary and secondary features of the suspicious MCs that serve as inputs to the classifiers. Moreover, Caputo et al. [22] used a Bayesian classifier and another MRF-based method, the statistical spin-glass MRF (SGMRF), to model the different regions within a mammogram. They followed this with a maximum a posteriori probability Bayesian classifier and demonstrated that their approach outperformed both the back-propagation artificial neural network and the nearest neighbor classifier detection systems.
Several wavelet-based MC detection [14, 17–19, 21, 31] and enhancement [12, 13] schemes have been proposed in the literature. Strickland and Hahn [21], for example, concluded that, with an appropriate wavelet filter, MCs can be detected and segmented in the wavelet domain by thresholding the wavelet coefficients before the reconstruction process. Modeling MCs as high-pass local anomalies, Wang and Karayiannis [14] also applied wavelet filtering, in which the approximation wavelet subband is eliminated and the image is reconstructed from the detail subbands only, and were thereby able to detect MCs. Following [14], several studies used this wavelet filtering method for detecting suspected MCs [31] and for reducing false results [17]. Some studies demonstrated that least asymmetric Daubechies wavelets are more suitable for the enhancement of mammogram images, such as in microcalcification detection [30], while other works showed that a spatial wavelet filter designed with high regularity is more successful in detecting microcalcifications than conventional wavelet filters such as the orthogonal Daubechies db4 [11]. Moreover, the non-stationary nature of mammogram image texture has motivated many researchers to design wavelet transforms using adaptive filters, which have been reported to be more efficient than fixed or non-adaptive FIR filters for detecting low-contrast MCs in denser breast tissue [31].
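To make the wavelet filtering scheme summarized above concrete, the following minimal Python sketch suppresses the approximation subband, reconstructs the image from the detail subbands only, and applies a simple global threshold to flag candidate MC pixels. The wavelet choice (db4), decomposition level, and threshold rule are illustrative assumptions rather than parameters taken from the cited works, and the classification stage (Bayesian or BPNN) used in those works to prune false signals is not shown.

```python
# Minimal sketch of wavelet high-pass filtering for microcalcification (MC)
# candidate detection. Assumes a grayscale mammogram as a 2-D NumPy array.
import numpy as np
import pywt  # PyWavelets

def wavelet_highpass_candidates(image: np.ndarray, wavelet: str = "db4",
                                level: int = 3, k: float = 3.0) -> np.ndarray:
    """Return a boolean map of candidate MC pixels."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])               # suppress approximation subband
    detail = pywt.waverec2(coeffs, wavelet)             # reconstruct high-pass content only
    detail = detail[:image.shape[0], :image.shape[1]]   # crop possible padding
    threshold = detail.mean() + k * detail.std()        # crude global threshold
    return detail > threshold

# Toy usage: smooth noisy background plus one small, bright MC-like spot.
img = np.random.normal(loc=100.0, scale=2.0, size=(256, 256))
img[120:122, 120:122] += 60.0
print(int(wavelet_highpass_candidates(img).sum()))      # a handful of flagged pixels
```

In the two-stage schemes cited above, a binary map of this kind would only mark suspicious regions; a classifier trained on textural, spectral, and statistical features would then be applied to discard the false signals.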
[ "11526282", "18977103", "6336871", "18204600", "18222882", "9845306", "16002263", "16723208", "18215904", "12588039", "9735907" ]
[ { "pmid": "11526282", "title": "Screening mammography with computer-aided detection: prospective study of 12,860 patients in a community breast center.", "abstract": "PURPOSE\nTo prospectively assess the effect of computer-aided detection (CAD) on the interpretation of screening mammograms in a community breast center.\n\n\nMATERIALS AND METHODS\nOver a 12-month period, 12,860 screening mammograms were interpreted with the assistance of a CAD system. Each mammogram was initially interpreted without the assistance of CAD, followed immediately by a reevaluation of areas marked by the CAD system. Data were recorded to measure the effect of CAD on the recall rate, positive predictive value for biopsy, cancer detection rate, and stage of malignancies at detection.\n\n\nRESULTS\nWhen comparing the radiologist's performance without CAD with that when CAD was used, the authors observed the following: (a) an increase in recall rate from 6.5% to 7.7%, (b) no change in the positive predictive value for biopsy at 38%, (c) a 19.5% increase in the number of cancers detected, and (d) an increase in the proportion of early-stage (0 and I) malignancies detected from 73% to 78%.\n\n\nCONCLUSION\nThe use of CAD in the interpretation of screening mammograms can increase the detection of early-stage malignancies without undue effect on the recall rate or positive predictive value for biopsy." }, { "pmid": "18977103", "title": "CAD in questions/answers Review of the literature.", "abstract": "Generalization of breast screening programs requires an efficient double reading of the mammograms, which allows reduction of false-negative rate, but might be difficult to organize. CAD (Computed Assisted Diagnosis) is dramatically improving and is able to detect suspicious mammographic lesions, either suspicious microcalcifications, masses or architectural distorsions. CAD mammography might complete or substitute to \"human\" double reading. The aim of this review is to describe major CAD systems commercially available, working of CAD and to present principal results of CAD mammography. Specially, place of CAD within breast screening program, according to the results of recent prospective studies will be discussed." }, { "pmid": "6336871", "title": "Enhanced image mammography.", "abstract": "A blurred mass subtraction technique has been developed for mammography that will enhance small object contrast and visibility throughout the breast area. The procedure is easy to implement and requires no additional exposure. Perception of low-contrast objects is improved by eliminating extreme light and dark image areas. Contrast of structures within certain parts of the breast is increased by compression into the high-contrast part of the film characteristic curve. Detail visibility is also increased by the edge enhancement produced by this process. This paper describes the enhancement process and gives an analysis of its capabilities and limitations." }, { "pmid": "18222882", "title": "Region-based contrast enhancement of mammograms.", "abstract": "Diagnostic features in mammograms vary widely in size and shape. Classical image enhancement techniques cannot adapt to the varying characteristics of such features. An adaptive method for enhancing the contrast of mammographic features of varying size and shape is presented. The method uses each pixel in the image as a seed to grow a region. The extent and shape of the region adapt to local image gray-level variations, corresponding to an image feature. 
The contrast of each region is calculated with respect to its individual background. Contrast is then enhanced by applying an empirical transformation based on each region's seed pixel value, its contrast, and its background. A quantitative measure of image contrast improvement is also defined based on a histogram of region contrast and used for comparison of results. Using mammogram images digitized at high resolution (less than 0.1 mm pixel size), it is shown that the validity of microcalcification clusters and anatomic details is considerably improved in the processed images." }, { "pmid": "9845306", "title": "Detection of microcalcifications in digital mammograms using wavelets.", "abstract": "This paper presents an approach for detecting microcalcifications in digital mammograms employing wavelet-based subband image decomposition. The microcalcifications appear in small clusters of few pixels with relatively high intensity compared with their neighboring pixels. These image features can be preserved by a detection system that employs a suitable image transform which can localize the signal characteristics in the original and the transform domain. Given that the microcalcifications correspond to high-frequency components of the image spectrum, detection of microcalcifications is achieved by decomposing the mammograms into different frequency subbands, suppressing the low-frequency subband, and, finally, reconstructing the mammogram from the subbands containing only high frequencies. Preliminary experiments indicate that further studies are needed to investigate the potential of wavelet-based subband image decomposition as a tool for detecting microcalcifications in digital mammograms." }, { "pmid": "16002263", "title": "Image segmentation feature selection and pattern classification for mammographic microcalcifications.", "abstract": "Since microcalcifications in X-ray mammograms are the primary indicator of breast cancer, detection of microcalcifications is central to the development of an effective diagnostic system. This paper proposes a two-stage detection procedure. In the first stage, a data driven, closed form mathematical model is used to calculate the location and shape of suspected microcalcifications. When tested on the Nijmegen University Hospital (Netherlands) database, data analysis shows that the proposed model can effectively detect the occurrence of microcalcifications. The proposed mathematical model not only eliminates the need for system training, but also provides information on the borders of suspected microcalcifications for further feature extraction. In the second stage, 61 features are extracted for each suspected microcalcification, representing texture, the spatial domain and the spectral domain. From these features, a sequential forward search (SFS) algorithm selects the classification input vector, which consists of features sensitive only to microcalcifications. Two types of classifiers-a general regression neural network (GRNN) and a support vector machine (SVM)--are applied, and their classification performance is compared using the Az value of the Receiver Operating Characteristic curve. For all 61 features used as input vectors, the test data set yielded Az values of 97.01% for the SVM and 96.00% for the GRNN. With input features selected by SFS, the corresponding Az values were 98.00% for the SVM and 97.80% for the GRNN. The SVM outperformed the GRNN, whether or not the input vectors first underwent SFS feature selection. 
In both cases, feature selection dramatically reduced the dimension of the input vectors (82% for the SVM and 59% for the GRNN). Moreover, SFS feature selection improved the classification performance, increasing the Az value from 97.01 to 98.00% for the SVM and from 96.00 to 97.80% for the GRNN." }, { "pmid": "16723208", "title": "Detection of microcalcifications in digital mammograms using wavelet filter and Markov random field model.", "abstract": "Clustered microcalcifcations (MCs) in digitized mammograms has been widely recognized as an early sign of breast cancer in women. This work is devoted to developing a computer-aided diagnosis (CAD) system for the detection of MCs in digital mammograms. Such a task actually involves two key issues: detection of suspicious MCs and recognition of true MCs. Accordingly, our approach is divided into two stages. At first, all suspicious MCs are preserved by thresholding a filtered mammogram via a wavelet filter according to the MPV (mean pixel value) of that image. Subsequently, Markov random field parameters based on the Derin-Elliott model are extracted from the neighborhood of every suspicious MCs as the primary texture features. The primary features combined with three auxiliary texture quantities serve as inputs to classifiers for the recognition of true MCs so as to decrease the false positive rate. Both Bayes classifier and back-propagation neural network were used for computer experiments. The data used to test this method were 20 mammograms containing 25 areas of clustered MCs marked by radiologists. Our method can readily remove 1341 false positives out of 1356, namely, 98.9% false positives were removed. Additionally, the sensitivity (true positives rate) is 92%, with only 0.75 false positives per image. From our experiments, we conclude that, with a proper choice of classifier, the texture feature based on Markov random field parameters combined with properly designed auxiliary features extracted from the texture context of the MCs can work outstandingly in the recognition of MCs in digital mammograms." }, { "pmid": "18215904", "title": "Wavelet transforms for detecting microcalcifications in mammograms.", "abstract": "Clusters of fine, granular microcalcifications in mammograms may be an early sign of disease. Individual grains are difficult to detect and segment due to size and shape variability and because the background mammogram texture is typically inhomogeneous. The authors develop a 2-stage method based on wavelet transforms for detecting and segmenting calcifications. The first stage is based on an undecimated wavelet transform, which is simply the conventional filter bank implementation without downsampling, so that the low-low (LL), low-high (LH), high-low (HL), and high-high (HH) sub-bands remain at full size. Detection takes place in HH and the combination LH+HL. Four octaves are computed with 2 inter-octave voices for finer scale resolution. By appropriate selection of the wavelet basis the detection of microcalcifications in the relevant size range can be nearly optimized. In fact, the filters which transform the input image into HH and LH+HL are closely related to prewhitening matched filters for detecting Gaussian objects (idealized microcalcifications) in 2 common forms of Markov (background) noise. The second stage is designed to overcome the limitations of the simplistic Gaussian assumption and provides an accurate segmentation of calcification boundaries. 
Detected pixel sites in HH and LH+HL are dilated then weighted before computing the inverse wavelet transform. Individual microcalcifications are greatly enhanced in the output image, to the point where straightforward thresholding can be applied to segment them. FROC curves are computed from tests using a freely distributed database of digitized mammograms." }, { "pmid": "12588039", "title": "A support vector machine approach for detection of microcalcifications.", "abstract": "In this paper, we investigate an approach based on support vector machines (SVMs) for detection of microcalcification (MC) clusters in digital mammograms, and propose a successive enhancement learning scheme for improved performance. SVM is a machine-learning method, based on the principle of structural risk minimization, which performs well when applied to data outside the training set. We formulate MC detection as a supervised-learning problem and apply SVM to develop the detection algorithm. We use the SVM to detect at each location in the image whether an MC is present or not. We tested the proposed method using a database of 76 clinical mammograms containing 1120 MCs. We use free-response receiver operating characteristic curves to evaluate detection performance, and compare the proposed algorithm with several existing methods. In our experiments, the proposed SVM framework outperformed all the other methods tested. In particular, a sensitivity as high as 94% was achieved by the SVM method at an error rate of one false-positive cluster per image. The ability of SVM to outperform several well-known methods developed for the widely studied problem of MC detection suggests that SVM is a promising technique for object detection in a medical imaging application." }, { "pmid": "9735907", "title": "A novel approach to microcalcification detection using fuzzy logic technique.", "abstract": "Breast cancer continues to be a significant public health problem in the United States. Approximately, 182,000 new cases of breast cancer are diagnosed and 46,000 women die of breast cancer each year. Even more disturbing is the fact that one out of eight women in the United States will develop breast cancer at some point during her lifetime. Since the cause of breast cancer remains unknown, primary prevention becomes impossible. Computer-aided mammography is an important and challenging task in automated diagnosis. It has great potential over traditional interpretation of film-screen mammography in terms of efficiency and accuracy. Microcalcifications are the earliest sign of breast carcinomas and their detection is one of the key issues for breast cancer control. In this study, a novel approach to microcalcification detection based on fuzzy logic technique is presented. Microcalcifications are first enhanced based on their brightness and nonuniformity. Then, the irrelevant breast structures are excluded by a curve detector. Finally, microcalcifications are located using an iterative threshold selection method. The shapes of microcalcifications are reconstructed and the isolated pixels are removed by employing the mathematical morphology technique. The essential idea of the proposed approach is to apply a fuzzified image of a mammogram to locate the suspicious regions and to interact the fuzzified image with the original image to preserve fidelity. The major advantage of the proposed method is its ability to detect microcalcifications even in very dense breast mammograms. 
A series of clinical mammograms are employed to test the proposed algorithm and the performance is evaluated by the free-response receiver operating characteristic curve. The experiments aptly show that the microcalcifications can be accurately detected even in very dense mammograms using the proposed approach." } ]
BMC Medical Informatics and Decision Making
20082700
PMC2823596
10.1186/1472-6947-10-3
Towards computerizing intensive care sedation guidelines: design of a rule-based architecture for automated execution of clinical guidelines
BackgroundComputerized ICUs rely on software services to convey the medical condition of their patients as well as assisting the staff in taking treatment decisions. Such services are useful for following clinical guidelines quickly and accurately. However, the development of services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted based on manually constructed guidelines. These are often ambiguous, which leads to unnecessary variations in treatments and costs.The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while decreasing their disadvantages. The aim is to close the gap in communication between the IT and the medical domain. This leads to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase.MethodsA framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given and all the technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA).ResultsThe applicability of the Drools Rule language to express clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, to a Drools Rule Flow and executing and deploying this Rule-based application as a part of a SOA. The results show that the performance of Drools is comparable to other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows.ConclusionsThe framework is an effective solution for computerizing clinical guidelines as it allows for quick development, evaluation and human-readable visualization of the Rules and has a good performance. By monitoring the parameters of the patient to automatically detect exceptional situations and problems and by notifying the medical staff of tasks that need to be performed, the computerized sedation guideline improves the execution of the guideline.
Related Work
In this section we present some of the research literature related to computerizing clinical guidelines, Rule-based systems, ontologies and the ICSP platform.

Computerizing clinical guidelines
Many standardization efforts and formalisms for representing clinical guidelines have been proposed in the literature [18-21]. The most prevalent formats are the Arden Syntax [22,23], PROforma [24,25], EON [26], GLIF [27,28], PRODIGY [29], Asbru [30] and Guide [31]. More information about these formats can be found in Additional file 1.
Some research has also been done on automatically translating manually constructed guidelines into computer programs. Kaiser et al. [32] propose a multi-step approach that uses information extraction and transformation to extract process information from clinical guidelines. Heuristics were applied to perform this extraction; by exploiting patterns in the structure of the document and in its expressions, the need for natural language processing (NLP) was eliminated. This approach differs from ours in that it only works on very structured documents, and no interaction with a domain expert occurs, which increases the chance of errors in the translation.

Rule-based systems
Rule-based systems [33] are used in the field of Artificial Intelligence (AI) to represent and manipulate knowledge in a declarative manner. The domain-specific knowledge is described by a set of (production) Rules in the production memory. A Rule can be seen as a simple mathematical implication of the form A → C, where A is the set of conditions, or antecedent, and C is the set of actions to be taken, or consequent. The general idea is that a Rule-based system holds a predefined set of Rules in its memory; a large number of Facts, which represent the data in the Working Memory, can then be given to the engine. The engine checks the conditions of each Rule against the Facts, and if all the conditions of a Rule are satisfied, the Rule is fired and its predefined consequent is executed. Rule Engines require extensive pattern matching during their execution: it has been estimated that up to 90% of a Rule Engine's run time is spent on repetitive pattern matching between the Rule set and the Working Memory elements [34]. Originally, string comparison algorithms such as Boyer-Moore, Knuth-Morris-Pratt and Rabin-Karp were used for this. In 1974, the Rete algorithm was published by Dr. Charles L. Forgy [35]; it makes the Rule-to-Fact matching process much faster than the previously mentioned algorithms. When Rules are added, the Rete algorithm constructs a network of nodes, each representing a pattern from the conditions of the Rules. These nodes are connected with each other whenever the corresponding patterns occur in the same antecedent of one of the Rules. The constructed network looks like a tree, with the leaves being the consequents of the Rules, so that a path traced from the root node all the way down to a leaf describes a complete Rule. When Facts are added, they are placed in memory next to each node whose pattern matches the Fact. Once a full path from root to leaf is matched, a Rule is fired and its consequent is executed.
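As a concrete illustration of the production-rule cycle described above, the sketch below shows Facts in a Working Memory, Rules of the form A → C, and a loop that fires the consequent of every Rule whose antecedent is satisfied. It uses a naive matcher rather than a Rete network, and the toy sedation rule and its threshold are invented for this example; they are not taken from the sedation guideline discussed in this paper or from Drools.

```python
# Naive forward-chaining rule engine sketch (real engines such as Drools build
# a Rete network instead of re-matching every Rule against every Fact).
from dataclasses import dataclass
from typing import Callable, Dict, List

Fact = Dict[str, object]

@dataclass
class Rule:
    name: str
    condition: Callable[[Fact], bool]            # antecedent A, folded into one predicate
    action: Callable[[Fact, List[Fact]], None]   # consequent C, may assert new Facts

def run(rules: List[Rule], working_memory: List[Fact]) -> None:
    fired, changed = set(), True
    while changed:                               # cycle until no Rule fires anymore
        changed = False
        for rule in rules:
            for i, fact in enumerate(list(working_memory)):
                key = (rule.name, i)
                if key not in fired and rule.condition(fact):
                    rule.action(fact, working_memory)   # fire the Rule
                    fired.add(key)
                    changed = True

# Hypothetical rule: raise an alert when a (toy) sedation score is too high.
def raise_alert(fact: Fact, memory: List[Fact]) -> None:
    memory.append({"type": "alert", "text": f"Check sedation of {fact['patient']}"})

rules = [Rule("deep-sedation",
              lambda f: f.get("type") == "sedation_score" and f["value"] >= 5,
              raise_alert)]

memory: List[Fact] = [{"type": "sedation_score", "patient": "bed 3", "value": 6}]
run(rules, memory)
print([f for f in memory if f["type"] == "alert"])
```

Firing a Rule here simply asserts a new Fact into the Working Memory, mirroring how a fired consequent can in turn trigger further Rules in a production system.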
Ontologies
Ontologies [16] can structure and represent knowledge about a certain domain in a formal way. This knowledge can then easily be shared and reused. The Web Ontology Language (OWL) is the leading language for encoding these ontologies. Because of OWL's foundations in first-order logic, statements about the models and about the data described in them can be formally proven. Reasoning over an OWL model can also detect inconsistencies in the model and infer new information from the correlations in the data. This proving and classification process is referred to as Reasoning, and Reasoners are implemented as generic software modules, independent of the domain-specific problem. For this research, the Pellet Reasoner [36] was used. Existing medical and natural-language ontologies can be used to support the translation process. Cyc [37,38] and WordNet [39] are two well-known ontologies that model general knowledge about the English language, such as synonyms and generally true statements. More information about these ontologies can be found in Additional file 2. A wide range of ontologies exists for the eHealth domain. Additional file 2 gives an overview of the most relevant, well-known and well-developed eHealth ontologies that are available in OWL and could be (partially) reused to support the semi-automatic translation, such as LinkBase [40,41], SNOMED CT [42-44], the Galen Common Reference Model [45,46], the NCI Cancer Ontology [47,48], the Foundational Model of Anatomy Ontology (FMA) [49,50], the Gene Ontology (GO) [51,52] and the Ontology for Biomedical Investigations (OBI) [53,54].

Intensive Care Service Platform (ICSP)
The successful use of CDSS requires structured and standardised information in the Electronic Health Record (EHR). However, EHRs have some limitations, such as clinical data limitations (different meanings of words), technological limitations (usage on PDAs, interoperability) and a lack of standardization [55,56]. Another problem is that fewer than 20% of hospitals are completely digital, whereas access through an electronic platform is necessary to adopt CDSS [57,58].
The computerization of the Intensive Care Unit of Ghent University Hospital was started in 2003 by implementing a system, the ICSP platform, which gathers all generated patient data and stores it in a large database called IZIS (Intensive Care Information System). The ICSP platform consists of a number of services. A bedside PC allows the medical staff to enter clinical observations and to record the prescription and administration of medication. Monitor parameters, administrative data and results of medical tests are automatically gathered from monitoring equipment and other databases. Services monitor the condition of the patient and suggest medical decisions, or produce new data which is stored in the database and can be used by other services.
The ICSP platform is an example of a Service-Oriented Architecture (SOA) [59]. The main idea behind SOA is the separation of the functions of the system into well-defined, independent, reusable and distributable components, referred to as services. These services communicate with each other by passing data from one service to another or by coordinating an activity between two or more services. Web Services [60] are often used to implement this architecture.
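As a small, self-contained illustration of the OWL modelling and Reasoning described in the Ontologies subsection above, the sketch below uses the owlready2 Python library, which can invoke the Pellet reasoner mentioned in this section. The library choice, the ontology IRI, and all class and property names are assumptions made for this example only; the framework described in this paper is built on Java-based technologies such as Drools rather than on this library.

```python
# Toy OWL ontology with a defined class, classified by the Pellet reasoner.
# Requires the owlready2 package and a Java runtime for the bundled reasoner.
from owlready2 import get_ontology, Thing, sync_reasoner_pellet

onto = get_ontology("http://example.org/icu-demo.owl")   # hypothetical IRI, not loaded

with onto:
    class Patient(Thing): pass
    class Sedative(Thing): pass
    class receives(Patient >> Sedative): pass             # object property
    class SedatedPatient(Patient):                         # defined ("equivalent") class
        equivalent_to = [Patient & receives.some(Sedative)]

    propofol = Sedative("propofol")
    patient_1 = Patient("patient_1")
    patient_1.receives = [propofol]

# Classification: the reasoner should recognise patient_1 as a SedatedPatient,
# because it receives an individual asserted to be a Sedative.
sync_reasoner_pellet()
print(SedatedPatient.instances())
```

The same pattern of asserting facts about individuals and letting a generic Reasoner derive their classification is what makes ontologies attractive for supporting the semi-automatic guideline translation described in this paper.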
[ "10691588", "15767266", "15187064", "15187061", "10024268", "9292844", "7496881", "7617792", "18690370", "12467791", "12807812", "16597341", "9670133", "11185420", "16770974", "14759820", "10732935", "12799407", "9735080", "12475855", "17855817", "19515252" ]
[ { "pmid": "10691588", "title": "Developing and implementing computerized protocols for standardization of clinical decisions.", "abstract": "Humans have only a limited ability to incorporate information in decision making. In certain situations, the mismatch between this limitation and the availability of extensive information contributes to the varying performance and high error rate of clinical decision makers. Variation in clinical practice is due in part to clinicians' poor compliance with guidelines and recommended therapies. The use of decision-support tools is a response to both the information revolution and poor compliance. Computerized protocols used to deliver decision support can be configured to contain much more detail than textual guidelines or paper-based flow diagrams. Such protocols can generate patient-specific instructions for therapy that can be carried out with little interclinician variability; however, clinicians must be willing to modify personal styles of clinical management. Protocols need not be perfect. Several defensible and reasonable approaches are available for clinical problems. However, one of these reasonable approaches must be chosen and incorporated into the protocol to promote consistent clinical decisions. This reasoning is the basis of an explicit method of decision support that allows the rigorous evaluation of interventions, including use of the protocols themselves. Computerized protocols for mechanical ventilation and management of intravenous fluid and hemodynamic factors in patients with the acute respiratory distress syndrome provide case studies for this discussion." }, { "pmid": "15767266", "title": "Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success.", "abstract": "OBJECTIVE\nTo identify features of clinical decision support systems critical for improving clinical practice.\n\n\nDESIGN\nSystematic review of randomised controlled trials.\n\n\nDATA SOURCES\nLiterature searches via Medline, CINAHL, and the Cochrane Controlled Trials Register up to 2003; and searches of reference lists of included studies and relevant reviews.\n\n\nSTUDY SELECTION\nStudies had to evaluate the ability of decision support systems to improve clinical practice.\n\n\nDATA EXTRACTION\nStudies were assessed for statistically and clinically significant improvement in clinical practice and for the presence of 15 decision support system features whose importance had been repeatedly suggested in the literature.\n\n\nRESULTS\nSeventy studies were included. Decision support systems significantly improved clinical practice in 68% of trials. Univariate analyses revealed that, for five of the system features, interventions possessing the feature were significantly more likely to improve clinical practice than interventions lacking the feature. Multiple logistic regression analysis identified four features as independent predictors of improved clinical practice: automatic provision of decision support as part of clinician workflow (P < 0.00001), provision of recommendations rather than just assessments (P = 0.0187), provision of decision support at the time and location of decision making (P = 0.0263), and computer based decision support (P = 0.0294). Of 32 systems possessing all four features, 30 (94%) significantly improved clinical practice. 
Furthermore, direct experimental justification was found for providing periodic performance feedback, sharing recommendations with patients, and requesting documentation of reasons for not following recommendations.\n\n\nCONCLUSIONS\nSeveral features were closely correlated with decision support systems' ability to improve patient care significantly. Clinicians and other stakeholders should implement clinical decision support systems that incorporate these features whenever feasible and appropriate." }, { "pmid": "15187064", "title": "Translating research into practice: organizational issues in implementing automated decision support for hypertension in three medical centers.", "abstract": "Information technology can support the implementation of clinical research findings in practice settings. Technology can address the quality gap in health care by providing automated decision support to clinicians that integrates guideline knowledge with electronic patient data to present real-time, patient-specific recommendations. However, technical success in implementing decision support systems may not translate directly into system use by clinicians. Successful technology integration into clinical work settings requires explicit attention to the organizational context. We describe the application of a \"sociotechnical\" approach to integration of ATHENA DSS, a decision support system for the treatment of hypertension, into geographically dispersed primary care clinics. We applied an iterative technical design in response to organizational input and obtained ongoing endorsements of the project by the organization's administrative and clinical leadership. Conscious attention to organizational context at the time of development, deployment, and maintenance of the system was associated with extensive clinician use of the system." }, { "pmid": "15187061", "title": "Bridging the guideline implementation gap: a systematic, document-centered approach to guideline implementation.", "abstract": "OBJECTIVE\nA gap exists between the information contained in published clinical practice guidelines and the knowledge and information that are necessary to implement them. This work describes a process to systematize and make explicit the translation of document-based knowledge into workflow-integrated clinical decision support systems.\n\n\nDESIGN\nThis approach uses the Guideline Elements Model (GEM) to represent the guideline knowledge. Implementation requires a number of steps to translate the knowledge contained in guideline text into a computable format and to integrate the information into clinical workflow. 
The steps include: (1) selection of a guideline and specific recommendations for implementation, (2) markup of the guideline text, (3) atomization, (4) deabstraction and (5) disambiguation of recommendation concepts, (6) verification of rule set completeness, (7) addition of explanations, (8) building executable statements, (9) specification of origins of decision variables and insertions of recommended actions, (10) definition of action types and selection of associated beneficial services, (11) choice of interface components, and (12) creation of requirement specification.\n\n\nRESULTS\nThe authors illustrate these component processes using examples drawn from recent experience translating recommendations from the National Heart, Lung, and Blood Institute's guideline on management of chronic asthma into a workflow-integrated decision support system that operates within the Logician electronic health record system.\n\n\nCONCLUSION\nUsing the guideline document as a knowledge source promotes authentic translation of domain knowledge and reduces the overall complexity of the implementation task. From this framework, we believe that a better understanding of activities involved in guideline implementation will emerge." }, { "pmid": "9292844", "title": "Representation of clinical practice guidelines in conventional and augmented decision tables.", "abstract": "OBJECTIVE\nTo develop a knowledge representation model for clinical practice guidelines that is linguistically adequate, comprehensible, reusable, and maintainable.\n\n\nDESIGN\nDecision tables provide the basic framework for the proposed knowledge representation model. Guideline logic is represented as rules in conventional decision tables. These tables are augmented by layers where collateral information is recorded in slots beneath the logic.\n\n\nRESULTS\nDecision tables organize rules into cohesive rule sets wherein complex logic is clarified. Decision table rule sets may be verified to assure completeness and consistency. Optimization and display of rule sets as sequential decision trees may enhance the comprehensibility of the logic. The modularity of the rule formats may facilitate maintenance. The augmentation layers provide links to descriptive language, information sources, decision variable characteristics, costs and expected values of policies, and evidence sources and quality.\n\n\nCONCLUSION\nAugmented decision tables can serve as a unifying knowledge representation for developers and implementers of clinical practice guidelines." }, { "pmid": "7496881", "title": "Computerizing guidelines to improve care and patient outcomes: the example of heart failure.", "abstract": "Increasing amounts of medical knowledge, clinical data, and patient expectations have created a fertile environment for developing and using clinical practice guidelines. Electronic medical records have provided an opportunity to invoke guidelines during the everyday practice of clinical medicine to improve health care quality and control costs. In this paper, efforts to incorporate complex guidelines [those for heart failure from the Agency for Health Care Policy and Research (AHCPR)] into a network of physicians' interactive microcomputer workstations are reported. The task proved difficult because the guidelines often lack explicit definitions (e.g., for symptom severity and adverse events) that are necessary to navigate the AHCPR algorithm. 
They also focus more on errors of omission (not doing the right thing) than on errors of commission (doing the wrong thing) and do not account for comorbid conditions, concurrent drug therapy, or the timing of most interventions and follow-up. As they stand, the heart failure guidelines give good general guidance to individual practitioners, but cannot be used to assess quality or care without extensive \"translation\" into the local environment. Specific recommendations are made so that future guidelines will prove useful to a wide range of prospective users." }, { "pmid": "7617792", "title": "Computerized decision support systems in primary care.", "abstract": "Computerized decision support can be passive or active. Passive decision support occurs when a computer facilitates access to relevant patient data or clinical knowledge for interpretation by the physician. Examples include CPR systems and reference texts or literature databases on CD-ROM. Effective passive decision support may ultimately prove to have a significant impact on physician decision making, but its potential to do so has been largely unexplored. Active decision support implies some higher level of information processing, or inference, by the computer. Examples include reminder / alert systems and diagnostic decision support systems. Inference processing in active decision support systems is generally rule-based, but probabilistic inference has been successfully used as well. Reminder systems have been consistently demonstrated to improve dramatically physician guideline compliance, generally by reducing oversight or error. The same potential for large-scale, systematic impact on physician decision-making by diagnostic decision support systems probably does not exist, but these systems may prove to be extremely useful in individual cases. Current applicability of diagnostic decision support systems to primary care is limited by the incompleteness and inaccuracies of the knowledge bases of these systems with respect to primary care. The applicability of computerized decision support in general to primary care is limited by more practical considerations. Widespread computerized decision support will not occur without CPR systems coupled with appropriate data standards and nomenclatures that will permit decision support tools to be accessed effortlessly during the routine process of patient care." }, { "pmid": "18690370", "title": "Service-oriented subscription management of medical decision data in the intensive care unit.", "abstract": "OBJECTIVES\nThis paper addresses the design of a platform for the management of medical decision data in the ICU. Whenever new medical data from laboratories or monitors is available or at fixed times, the appropriate medical support services are activated and generate a medical alert or suggestion to the bedside terminal, the physician's PDA, smart phone or mailbox. Since future ICU systems will rely ever more on medical decision support, a generic and flexible subscription platform is of high importance.\n\n\nMETHODS\nOur platform is designed based on the principles of service-oriented architectures, and is fundamental for service deployment since the medical support services only need to implement their algorithm and can rely on the platform for general functionalities. 
A secure communication and execution environment are also provided.\n\n\nRESULTS\nA prototype, where medical support services can be easily plugged in, has been implemented using Web service technology and is currently being evaluated by the Department of Intensive Care of the Ghent University Hospital. To illustrate the platform operation and performance, two prototype medical support services are used, showing that the extra response time introduced by the platform is less than 150 ms.\n\n\nCONCLUSIONS\nThe platform allows for easy integration with hospital information systems. The platform is generic and offers user-friendly patient/service subscription, transparent data and service resource management and priority-based filtering of messages. The performance has been evaluated and it was shown that the response time of platform components is negligible compared to the execution time of the medical support services." }, { "pmid": "12467791", "title": "Representation primitives, process models and patient data in computer-interpretable clinical practice guidelines: a literature review of guideline representation models.", "abstract": "Representation of clinical practice guidelines in a computer-interpretable format is a critical issue for guideline development, implementation, and evaluation. We studied 11 types of guideline representation models that can be used to encode guidelines in computer-interpretable formats. We have consistently found in all reviewed models that primitives for representation of actions and decisions are necessary components of a guideline representation model. Patient states and execution states are important concepts that closely relate to each other. Scheduling constraints on representation primitives can be modeled as sequences, concurrences, alternatives, and loops in a guideline's application process. Nesting of guidelines provides multiple views to a guideline with different granularities. Integration of guidelines with electronic medical records can be facilitated by the introduction of a formal model for patient data. Data collection, decision, patient state, and intervention constitute four basic types of primitives in a guideline's logic flow. Decisions clarify our understanding on a patient's clinical state, while interventions lead to the change from one patient state to another." }, { "pmid": "12807812", "title": "The syntax and semantics of the PROforma guideline modeling language.", "abstract": "PROforma is an executable process modeling language that has been used successfully to build and deploy a range of decision support systems, guidelines, and other clinical applications. It is one of a number of recent proposals for representing clinical protocols and guidelines in a machine-executable format (see <www.openclinical.org>). In this report, the authors outline the task model for the language and provide an operational semantics for process enactment together with a semantics for expressions, which may be used to query the state of a task during enactment. The operational semantics includes a number of public operations that may be performed on an application by an external agent, including operations that change the values of data items, recommend or make decisions, manage tasks that have been performed, and perform any task state changes that are implied by the current state of the application. 
Disclosure: PROforma has been used as the basis of a commercial decision support and guideline technology Arezzo (Infermed, London, UK; details in text)." }, { "pmid": "16597341", "title": "Evaluation of PROforma as a language for implementing medical guidelines in a practical context.", "abstract": "BACKGROUND\nPROforma is one of several languages that allow clinical guidelines to be expressed in a computer-interpretable manner. How these languages should be compared, and what requirements they should meet, are questions that are being actively addressed by a community of interested researchers.\n\n\nMETHODS\nWe have developed a system to allow hypertensive patients to be monitored and assessed without visiting their GPs (except in the most urgent cases). Blood pressure measurements are performed at the patients' pharmacies and a web-based system, created using PROforma, makes recommendations for continued monitoring, and/or changes in medication. The recommendations and measurements are transmitted electronically to a practitioner with authority to issue and change prescriptions. We evaluated the use of PROforma during the knowledge acquisition, analysis, design and implementation of this system. The analysis focuses on the logical adequacy, heuristic power, notational convenience, and explanation support provided by the PROforma language.\n\n\nRESULTS\nPROforma proved adequate as a language for the implementation of the clinical reasoning required by this project. However a lack of notational convenience led us to use UML activity diagrams, rather than PROforma process descriptions, to create the models that were used during the knowledge acquisition and analysis phases of the project. These UML diagrams were translated into PROforma during the implementation of the project.\n\n\nCONCLUSION\nThe experience accumulated during this study highlighted the importance of structure preserving design, that is to say that the models used in the design and implementation of a knowledge-based system should be structurally similar to those created during knowledge acquisition and analysis. Ideally the same language should be used for all of these models. This means that great importance has to be attached to the notational convenience of these languages, by which we mean the ease with which they can be read, written, and understood by human beings. The importance of notational convenience arises from the fact that a language used during knowledge acquisition and analysis must be intelligible to the potential users of a system, and to the domain experts who provide the knowledge that will be used in its construction." }, { "pmid": "9670133", "title": "The guideline interchange format: a model for representing guidelines.", "abstract": "OBJECTIVE\nTo allow exchange of clinical practice guidelines among institutions and computer-based applications.\n\n\nDESIGN\nThe GuideLine Interchange Format (GLIF) specification consists of GLIF model and the GLIF syntax. The GLIF model is an object-oriented representation that consists of a set of classes for guideline entities, attributes for those classes, and data types for the attribute values. 
The GLIF syntax specifies the format of the test file that contains the encoding.\n\n\nMETHODS\nResearchers from the InterMed Collaboratory at Columbia University, Harvard University (Brigham and Women's Hospital and Massachusetts General Hospital), and Stanford University analyzed four existing guideline systems to derive a set of requirements for guideline representation. The GLIF specification is a consensus representation developed through a brainstorming process. Four clinical guidelines were encoded in GLIF to assess its expressivity and to study the variability that occurs when two people from different sites encode the same guideline.\n\n\nRESULTS\nThe encoders reported that GLIF was adequately expressive. A comparison of the encodings revealed substantial variability.\n\n\nCONCLUSION\nGLIF was sufficient to model the guidelines for the four conditions that were examined. GLIF needs improvement in standard representation of medical concepts, criterion logic, temporal information, and uncertainty." }, { "pmid": "11185420", "title": "Guideline-based careflow systems.", "abstract": "This paper describes a methodology for achieving an efficient implementation of clinical practice guidelines. Three main steps are illustrated: knowledge representation, model simulation and implementation within a health care organisation. The resulting system can be classified as a 'guideline-based careflow management system'. It is based on computational formalisms representing both medical and health care organisational knowledge. This aggregation allows the implementation of a guideline, not only as a simple reminder, but also as an 'organiser' that facilitates health care processes. As a matter of fact, the system not only suggests the tasks to be performed, but also the resource allocation. The methodology initially comprehends a graphical editor, that allows an unambiguous representation of the guideline. Then the guideline is translated into a high-level Petri net. The resources, both human and technological necessary for performing guideline-based activities, are also represented by means of an organisational model. This allows the running of the Petri net for simulating the implementation of the guideline in the clinical setting. The purpose of the simulation is to validate the careflow model and to suggest the optimal resource allocation before the careflow system is installed. The final step is the careflow implementation. In this phase, we show that the 'workflow management' technology, widely used in business process automation, may be transferred to the health care setting. This requires augmenting the typical workflow management systems with the flexibility and the uncertainty management, typical of the health care processes. For illustrating the proposed methodology, we consider a guideline for the management of patients with acute ischemic stroke." }, { "pmid": "16770974", "title": "Evaluation of the content coverage of SNOMED CT: ability of SNOMED clinical terms to represent clinical problem lists.", "abstract": "OBJECTIVE\nTo evaluate the ability of SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms) version 1.0 to represent the most common problems seen at the Mayo Clinic in Rochester, Minn.\n\n\nMATERIAL AND METHODS\nWe selected the 4996 most common nonduplicated text strings from the Mayo Master Sheet Index that describe patient problems associated with inpatient and outpatient episodes of care. 
From July 2003 through January 2004, 2 physician reviewers compared the Master Sheet Index text with the SNOMED CT terms that were automatically mapped by a vocabulary server or that they identified using a vocabulary browser and rated the \"correctness\" of the match. If the 2 reviewers disagreed, a third reviewer adjudicated. We evaluated the specificity, sensitivity, and positive predictive value of SNOMED CT.\n\n\nRESULTS\nOf the 4996 problems in the test set, SNOMED CT correctly identified 4568 terms (true-positive results); 36 terms were true negatives, 9 terms were false positives, and 383 terms were false negatives. SNOMED CT had a sensitivity of 92.3%, a specificity of 80.0%, and a positive predictive value of 99.8%.\n\n\nCONCLUSION\nSNOMED CT, when used as a compositional terminology, can exactly represent most (92.3%) of the terms used commonly in medical problem lists. Improvements to synonymy and adding missing modifiers would lead to greater coverage of common problem statements. Health care organizations should be encouraged and provided incentives to begin adopting SNOMED CT to drive their decision-support applications." }, { "pmid": "14759820", "title": "A reference ontology for biomedical informatics: the Foundational Model of Anatomy.", "abstract": "The Foundational Model of Anatomy (FMA), initially developed as an enhancement of the anatomical content of UMLS, is a domain ontology of the concepts and relationships that pertain to the structural organization of the human body. It encompasses the material objects from the molecular to the macroscopic levels that constitute the body and associates with them non-material entities (spaces, surfaces, lines, and points) required for describing structural relationships. The disciplined modeling approach employed for the development of the FMA relies on a set of declared principles, high level schemes, Aristotelian definitions and a frame-based authoring environment. We propose the FMA as a reference ontology in biomedical informatics for correlating different views of anatomy, aligning existing and emerging ontologies in bioinformatics ontologies and providing a structure-based template for representing biological functions." }, { "pmid": "10732935", "title": "Sedation in the intensive care unit: a systematic review.", "abstract": "CONTEXT\nSedation has become an integral part of critical care practice in minimizing patient discomfort; however, sedatives have adverse effects and the potential to prolong mechanical ventilation, which may increase health care costs.\n\n\nOBJECTIVE\nTo determine which form of sedation is associated with optimal sedation, the shortest time to extubation, and length of intensive care unit (ICU) stay.\n\n\nDATA SOURCES\nA key word search of MEDLINE, EMBASE, and the Cochrane Collaboration databases and hand searches of 6 anesthesiology journals from 1980 to June 1998. Experts and industry representatives were contacted, personal files were searched, and reference lists of relevant primary and review articles were reviewed.\n\n\nSTUDY SELECTION\nStudies included were randomized controlled trials enrolling adult patients receiving mechanical ventilation and requiring short-term or long-term sedation. 
At least 2 sedative agents had to be compared and the quality of sedation, time to extubation, or length of ICU stay analyzed.\n\n\nDATA EXTRACTION\nData on population, intervention, outcome, and methodological quality were extracted in duplicate by 2 of 3 investigators using 8 validity criteria.\n\n\nDATA SYNTHESIS\nOf 49 identified randomized controlled trials, 32 met our selection criteria; 20 studied short-term sedation and 14, long-term sedation. Of these, 20 compared propofol with midazolam. Most trials were not double-blind and did not report or standardize important cointerventions. Propofol provides at least as effective sedation as midazolam and results in a faster time to extubation, with an increased risk of hypotension and higher cost. Insufficient data exist to determine effect on length of stay in the ICU. Isoflurane demonstrated some advantages over midazolam, and ketamine had a more favorable hemodynamic profile than fentanyl in patients with head injuries.\n\n\nCONCLUSION\nConsidering the widespread use of sedation for critically ill patients, more large, high-quality, randomized controlled trials of the effectiveness of different agents for short-term and long-term sedation are warranted." }, { "pmid": "12799407", "title": "Monitoring sedation status over time in ICU patients: reliability and validity of the Richmond Agitation-Sedation Scale (RASS).", "abstract": "CONTEXT\nGoal-directed delivery of sedative and analgesic medications is recommended as standard care in intensive care units (ICUs) because of the impact these medications have on ventilator weaning and ICU length of stay, but few of the available sedation scales have been appropriately tested for reliability and validity.\n\n\nOBJECTIVE\nTo test the reliability and validity of the Richmond Agitation-Sedation Scale (RASS).\n\n\nDESIGN\nProspective cohort study.\n\n\nSETTING\nAdult medical and coronary ICUs of a university-based medical center.\n\n\nPARTICIPANTS\nThirty-eight medical ICU patients enrolled for reliability testing (46% receiving mechanical ventilation) from July 21, 1999, to September 7, 1999, and an independent cohort of 275 patients receiving mechanical ventilation were enrolled for validity testing from February 1, 2000, to May 3, 2001.\n\n\nMAIN OUTCOME MEASURES\nInterrater reliability of the RASS, Glasgow Coma Scale (GCS), and Ramsay Scale (RS); validity of the RASS correlated with reference standard ratings, assessments of content of consciousness, GCS scores, doses of sedatives and analgesics, and bispectral electroencephalography.\n\n\nRESULTS\nIn 290-paired observations by nurses, results of both the RASS and RS demonstrated excellent interrater reliability (weighted kappa, 0.91 and 0.94, respectively), which were both superior to the GCS (weighted kappa, 0.64; P<.001 for both comparisons). Criterion validity was tested in 411-paired observations in the first 96 patients of the validation cohort, in whom the RASS showed significant differences between levels of consciousness (P<.001 for all) and correctly identified fluctuations within patients over time (P<.001). In addition, 5 methods were used to test the construct validity of the RASS, including correlation with an attention screening examination (r = 0.78, P<.001), GCS scores (r = 0.91, P<.001), quantity of different psychoactive medication dosages 8 hours prior to assessment (eg, lorazepam: r = - 0.31, P<.001), successful extubation (P =.07), and bispectral electroencephalography (r = 0.63, P<.001). 
Face validity was demonstrated via a survey of 26 critical care nurses, which the results showed that 92% agreed or strongly agreed with the RASS scoring scheme, and 81% agreed or strongly agreed that the instrument provided a consensus for goal-directed delivery of medications.\n\n\nCONCLUSIONS\nThe RASS demonstrated excellent interrater reliability and criterion, construct, and face validity. This is the first sedation scale to be validated for its ability to detect changes in sedation status over consecutive days of ICU care, against constructs of level of consciousness and delirium, and correlated with the administered dose of sedative and analgesic medications." }, { "pmid": "9735080", "title": "Incidence of and risk factors for ventilator-associated pneumonia in critically ill patients.", "abstract": "BACKGROUND\nUnderstanding the risk factors for ventilator-associated pneumonia can help to assess prognosis and devise and test preventive strategies.\n\n\nOBJECTIVE\nTo examine the baseline and time-dependent risk factors for ventilator-associated pneumonia and to determine the conditional probability and cumulative risk over the duration of stay in the intensive care unit.\n\n\nDESIGN\nProspective cohort study.\n\n\nSETTING\n16 intensive care units in Canada.\n\n\nPATIENTS\n1014 mechanically ventilated patients.\n\n\nMEASUREMENTS\nDemographic and time-dependent variables reflecting illness severity, ventilation, nutrition, and drug exposure. Pneumonia was classified by using five methods: adjudication committee, bedside clinician's diagnosis, Centers for Disease Control and Prevention definition, Clinical Pulmonary Infection score, and positive culture from bronchoalveolar lavage or protected specimen brush.\n\n\nRESULTS\n177 of 1014 patients (17.5%) developed ventilator-associated pneumonia 9.0 +/- 5.9 days (median, 7 days [interquartile range, 5 to 10 days]) after admission to the intensive care unit. Although the cumulative risk increased over time, the daily hazard rate decreased after day 5 (3.3% at day 5, 2.3% at day 10, and 1.3% at day 15). Independent predictors of ventilator-associated pneumonia in multivariable analysis were a primary admitting diagnosis of burns (risk ratio, 5.09 [95% CI, 1.52 to 17.03]), trauma (risk ratio, 5.00 [CI, 1.91 to 13.11]), central nervous system disease (risk ratio, 3.40 [CI, 1.31 to 8.81]), respiratory disease (risk ratio, 2.79 [CI, 1.04 to 7.51]), cardiac disease (risk ratio, 2.72 [CI, 1.05 to 7.01]), mechanical ventilation in the previous 24 hours (risk ratio, 2.28 [CI, 1.11 to 4.68]), witnessed aspiration (risk ratio, 3.25 [CI, 1.62 to 6.50]), and paralytic agents (risk ratio, 1.57 [CI, 1.03 to 2.39]). Exposure to antibiotics conferred protection (risk ratio, 0.37 [CI, 0.27 to 0.51]). Independent risk factors were the same regardless of the pneumonia definition used.\n\n\nCONCLUSIONS\nThe daily risk for pneumonia decreases with increasing duration of stay in the intensive care unit. Witnessed aspiration and exposure to paralytic agents are potentially modifiable independent risk factors. Exposure to antibiotics was associated with low rates of early ventilator-associated pneumonia, but this effect attenuates over time." 
}, { "pmid": "12475855", "title": "Epidemiology and outcomes of ventilator-associated pneumonia in a large US database.", "abstract": "OBJECTIVES\nTo evaluate risk factors for ventilator-associated pneumonia (VAP), as well as its influence on in-hospital mortality, resource utilization, and hospital charges.\n\n\nDESIGN\nRetrospective matched cohort study using data from a large US inpatient database.\n\n\nPATIENTS\nPatients admitted to an ICU between January 1998 and June 1999 who received mechanical ventilation for > 24 h.\n\n\nMEASUREMENTS\nRisk factors for VAP were examined using crude and adjusted odds ratios (AORs). Cases of VAP were matched on duration of mechanical ventilation, severity of illness on admission (predicted mortality), type of admission (medical, surgical, trauma), and age with up to three control subjects. Mortality, resource utilization, and billed hospital charges were then compared between cases and control subjects.\n\n\nRESULTS\nOf the 9,080 patients meeting study entry criteria, VAP developed in 842 patients (9.3%). The mean interval between intubation, admission to the ICU, hospital admission, and the identification of VAP was 3.3 days, 4.5 days, and 5.4 days, respectively. Identified independent risk factors for the development of VAP were male gender, trauma admission, and intermediate deciles of underlying illness severity (on admission) [AOR, 1.58, 1.75, and 1.47 to 1.70, respectively]. Patients with VAP were matched with 2,243 control subjects without VAP. Hospital mortality did not differ significantly between cases and matched control subjects (30.5% vs 30.4%, p = 0.713). Nevertheless, patients with VAP had a significantly longer duration of mechanical ventilation (14.3 +/- 15.5 days vs 4.7 +/- 7.0 days, p < 0.001), ICU stay (11.7 +/- 11.0 days vs 5.6 +/- 6.1 days, p < 0.001), and hospital stay (25.5 +/- 22.8 days vs 14.0 +/- 14.6 days, p < 0.001). Development of VAP was also associated with an increase of > $40,000 USD in mean hospital charges per patient ($104,983 USD +/- $91,080 USD vs $63,689 USD+/- $75,030 USD, p < 0.001).\n\n\nCONCLUSIONS\nThis retrospective matched cohort study, the largest of its kind, demonstrates that VAP is a common nosocomial infection that is associated with poor clinical and economic outcomes. While strategies to prevent the occurrence of VAP may not reduce mortality, they may yield other important benefits to patients, their families, and hospital systems." }, { "pmid": "17855817", "title": "Effect of a nurse-implemented sedation protocol on the incidence of ventilator-associated pneumonia.", "abstract": "OBJECTIVE\nTo determine whether the use of a nurse-implemented sedation protocol could reduce the incidence of ventilator-associated pneumonia in critically ill patients.\n\n\nDESIGN\nTwo-phase (before-after), prospective, controlled study.\n\n\nSETTING\nUniversity-affiliated, 11-bed medical intensive care unit.\n\n\nPATIENTS\nPatients requiring mechanical ventilation for >or=48 hrs and sedative infusion with midazolam or propofol alone.\n\n\nINTERVENTIONS\nDuring the control phase, sedatives were adjusted according to the physician's decision. During the protocol phase, sedatives were adjusted according to a protocol developed by a multidisciplinary team including nurses and physicians. The protocol was based on the Cambridge scale, and sedation level was adjusted every 3 hrs by the nurses. 
Standard practices, including weaning from the ventilator and diagnosis of VAP, were the same during both study phases.\n\n\nMEASUREMENTS AND MAIN RESULTS\nA total of 423 patients were enrolled (control group, n = 226; protocol group, n = 197). The incidence of VAP was significantly lower in the protocol group compared with the control group (6% and 15%, respectively, p = .005). By univariate analysis (log-rank test), only use of a nurse-implemented protocol was significantly associated with a decrease of incidence of VAP (p < .01). A nurse-implemented protocol was found to be independently associated with a lower incidence of VAP after adjustment on Simplified Acute Physiology Score II in the multivariate Cox proportional hazards model (hazard rate, 0.81; 95% confidence interval, 0.62-0.95; p = .03). The median duration of mechanical ventilation was significantly shorter in the protocol group (4.2 days; interquartile range, 2.1-9.5) compared with the control group (8 days; interquartile range, 2.2-22.0; p = .001), representing a 52% relative reduction. Extubation failure was more frequently observed in the control group compared with the protocol group (13% and 6%, respectively, p = .01). There was no significant difference in in-hospital mortality (38% vs. 45% in the protocol vs. control group, respectively, p = .22).\n\n\nCONCLUSIONS\nIn patients receiving mechanical ventilation and requiring sedative infusions with midazolam or propofol, the use of a nurse-implemented sedation protocol decreases the rate of VAP and the duration of mechanical ventilation." }, { "pmid": "19515252", "title": "Using data mining techniques to explore physicians' therapeutic decisions when clinical guidelines do not provide recommendations: methods and example for type 2 diabetes.", "abstract": "BACKGROUND\nClinical guidelines carry medical evidence to the point of practice. As evidence is not always available, many guidelines do not provide recommendations for all clinical situations encountered in practice. We propose an approach for identifying knowledge gaps in guidelines and for exploring physicians' therapeutic decisions with data mining techniques to fill these knowledge gaps. We demonstrate our method by an example in the domain of type 2 diabetes.\n\n\nMETHODS\nWe analyzed the French national guidelines for the management of type 2 diabetes to identify clinical conditions that are not covered or those for which the guidelines do not provide recommendations. We extracted patient records corresponding to each clinical condition from a database of type 2 diabetic patients treated at Avicenne University Hospital of Bobigny, France. We explored physicians' prescriptions for each of these profiles using C5.0 decision-tree learning algorithm. We developed decision-trees for different levels of detail of the therapeutic decision, namely the type of treatment, the pharmaco-therapeutic class, the international non proprietary name, and the dose of each medication. We compared the rules generated with those added to the guidelines in a newer version, to examine their similarity.\n\n\nRESULTS\nWe extracted 27 rules from the analysis of a database of 463 patient records. Eleven rules were about the choice of the type of treatment and thirteen rules about the choice of the pharmaco-therapeutic class of each drug. For the choice of the international non proprietary name and the dose, we could extract only a few rules because the number of patient records was too low for these factors. 
The extracted rules showed similarities with those added to the newer version of the guidelines.\n\n\nCONCLUSION\nOur method showed its usefulness for completing guidelines recommendations with rules learnt automatically from physicians' prescriptions. It could be used during the development of guidelines as a complementary source from practice-based knowledge. It can also be used as an evaluation tool for comparing a physician's therapeutic decisions with those recommended by a given set of clinical guidelines. The example we described showed that physician practice was in some ways ahead of the guideline." } ]
International Journal of Biomedical Imaging
20414352
PMC2856016
10.1155/2010/923780
Molecular Surface Mesh Generation by Filtering Electron Density Map
Bioinformatics applied to macromolecules is now widespread and in continuous expansion. In this context, representing external molecular surfaces such as the Van der Waals surface or the Solvent Excluded Surface can be useful for several applications. We propose a fast and parameterizable algorithm that produces good visual quality meshes representing molecular surfaces. The mesh is obtained by isosurfacing a filtered electron density map. The density map is computed as the maximum of Gaussian functions placed around the atom centers. This map is filtered by an ideal low-pass filter applied to the Fourier transform of the density map. Applying the marching cubes algorithm to the inverse transform provides a mesh representation of the molecular surface.
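The abstract above describes the pipeline in words: a maximum-of-Gaussians density map, an ideal low-pass filter applied in Fourier space, and marching cubes on the filtered map. The following is a minimal Python sketch of that pipeline, assuming numpy and scikit-image are available; the atom positions, radii, grid resolution, cutoff frequency, and isolevel are placeholder values chosen for illustration, not taken from the paper.

# Illustrative sketch (not the authors' code) of the filtered-density-map approach.
import numpy as np
from skimage import measure  # provides marching_cubes

def density_map(centers, radii, grid_min, grid_max, n=64):
    """Maximum-of-Gaussians density-like map on an n^3 grid."""
    axes = [np.linspace(grid_min[d], grid_max[d], n) for d in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    rho = np.zeros((n, n, n))
    for (cx, cy, cz), r in zip(centers, radii):
        d2 = (X - cx) ** 2 + (Y - cy) ** 2 + (Z - cz) ** 2
        rho = np.maximum(rho, np.exp(-d2 / (2.0 * r * r)))  # one Gaussian per atom
    return rho

def ideal_low_pass(rho, cutoff=0.2):
    """Zero out all Fourier components above `cutoff` (cycles per voxel)."""
    F = np.fft.fftn(rho)
    freqs = [np.fft.fftfreq(s) for s in rho.shape]
    FX, FY, FZ = np.meshgrid(*freqs, indexing="ij")
    mask = np.sqrt(FX ** 2 + FY ** 2 + FZ ** 2) <= cutoff  # ideal (brick-wall) filter
    return np.real(np.fft.ifftn(F * mask))

# Toy "molecule": three atoms with van der Waals-like radii (placeholders).
centers = np.array([[0.0, 0.0, 0.0], [2.0, 0.5, 0.0], [1.0, 1.8, 0.7]])
radii = np.array([1.5, 1.2, 1.4])
rho = density_map(centers, radii, grid_min=(-4, -4, -4), grid_max=(6, 6, 6))
rho_filtered = ideal_low_pass(rho, cutoff=0.2)
verts, faces, normals, values = measure.marching_cubes(rho_filtered, level=0.3)
print(len(verts), "vertices,", len(faces), "triangles")

The sketch only shows the order of operations; the paper's own filter design, isovalue choice, and mesh post-processing are what determine the visual quality it reports.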
2. Related Work
In the last few years, many methods have been developed for the generation of molecular surface meshes. In 1983, Connolly [23] proposed an analytical algorithm in which points were strategically placed around the molecule with a specific analytical role (maximum, minimum, or saddle point) depending on the number of atoms present in the neighborhood. In 2003, Bajaj et al. [24] introduced another analytical method, based on NURBS, that offers the advantage of being parameterizable without recalculation. In 2002, Laug and Borouchaki [25] used a parametric representation of intersecting spheres to create the surface mesh. MSMS, developed by Sanner et al. [26], is based on alpha-shapes [27] of molecules. This algorithm is widely used because it is time efficient; however, the generated mesh is not a manifold and is composed of very irregular triangles. The beta-shapes [28] are a generalization of the alpha-shapes and were used by Ryu et al. [29] in 2007 to design a similar algorithm. Another vertex-based method was proposed by Cheng and Shi [30], in which molecular surfaces are generated with the help of a restricted union of balls. Finally, some methods based on volumetric computation exist, such as that of Zhang et al. [31], in which the solvent-accessible surface is seen as the isosurface of a Gaussian-shaped electron density map, and the algorithm of Can et al. [32] (the LSMS), which is based on a front propagation from the atom centers and on level sets.

Comparisons between the FDM method and the methods mentioned in this section are shown in Section 4.2.
[ "9719646", "12547423", "15963890", "1509259", "15993809", "15809198", "7739053", "9299341", "12421562", "15201051", "15998733", "16919296", "17451744", "7150574", "8906967", "19809581", "16621636", "15264254" ]
[ { "pmid": "9719646", "title": "Method for prediction of protein function from sequence using the sequence-to-structure-to-function paradigm with application to glutaredoxins/thioredoxins and T1 ribonucleases.", "abstract": "The practical exploitation of the vast numbers of sequences in the genome sequence databases is crucially dependent on the ability to identify the function of each sequence. Unfortunately, current methods, including global sequence alignment and local sequence motif identification, are limited by the extent of sequence similarity between sequences of unknown and known function; these methods increasingly fail as the sequence identity diverges into and beyond the twilight zone of sequence identity. To address this problem, a novel method for identification of protein function based directly on the sequence-to-structure-to-function paradigm is described. Descriptors of protein active sites, termed \"fuzzy functional forms\" or FFFs, are created based on the geometry and conformation of the active site. By way of illustration, the active sites responsible for the disulfide oxidoreductase activity of the glutaredoxin/thioredoxin family and the RNA hydrolytic activity of the T1 ribonuclease family are presented. First, the FFFs are shown to correctly identify their corresponding active sites in a library of exact protein models produced by crystallography or NMR spectroscopy, most of which lack the specified activity. Next, these FFFs are used to screen for active sites in low-to-moderate resolution models produced by ab initio folding or threading prediction algorithms. Again, the FFFs can specifically identify the functional sites of these proteins from their predicted structures. The results demonstrate that low-to-moderate resolution models as produced by state-of-the-art tertiary structure prediction algorithms are sufficient to identify protein active sites. Prediction of a novel function for the gamma subunit of a yeast glycosyl transferase and prediction of the function of two hypothetical yeast proteins whose models were produced via threading are presented. This work suggests a means for the large-scale functional screening of genomic sequence databases based on the prediction of structure from sequence, then on the identification of functional active sites in the predicted structure." }, { "pmid": "12547423", "title": "Overview of structural genomics: from structure to function.", "abstract": "The unprecedented increase in the number of new protein sequences arising from genomics and proteomics highlights directly the need for methods to rapidly and reliably determine the molecular and cellular functions of these proteins. One such approach, structural genomics, aims to delineate the total repertoire of protein folds, thereby providing three-dimensional portraits for all proteins in a living organism and to infer molecular functions of the proteins. The goal of obtaining protein structures on a genomic scale has motivated the development of high-throughput technologies for macromolecular structure determination, which have begun to produce structures at a greater rate than previously possible. These new structures have revealed many unexpected functional and evolution relationships that were hidden at the sequence level." }, { "pmid": "15963890", "title": "Predicting protein function from sequence and structural data.", "abstract": "When a protein's function cannot be experimentally determined, it can often be inferred from sequence similarity. 
Should this process fail, analysis of the protein structure can provide functional clues or confirm tentative functional assignments inferred from the sequence. Many structure-based approaches exist (e.g. fold similarity, three-dimensional templates), but as no single method can be expected to be successful in all cases, a more prudent approach involves combining multiple methods. Several automated servers that integrate evidence from multiple sources have been released this year and particular improvements have been seen with methods utilizing the Gene Ontology functional annotation schema." }, { "pmid": "1509259", "title": "Structure-based strategies for drug design and discovery.", "abstract": "Most drugs have been discovered in random screens or by exploiting information about macromolecular receptors. One source of this information is in the structures of critical proteins and nucleic acids. The structure-based approach to design couples this information with specialized computer programs to propose novel enzyme inhibitors and other therapeutic agents. Iterated design cycles have produced compounds now in clinical trials. The combination of molecular structure determination and computation is emerging as an important tool for drug development. These ideas will be applied to acquired immunodeficiency syndrome (AIDS) and bacterial drug resistance." }, { "pmid": "15993809", "title": "Structural biology and drug discovery.", "abstract": "It has long been recognized that knowledge of the 3D structures of proteins has the potential to accelerate drug discovery, but recent developments in genome sequencing, robotics and bioinformatics have radically transformed the opportunities. Many new protein targets have been identified from genome analyses and studied by X-ray analysis or NMR spectroscopy. Structural biology has been instrumental in directing not only lead optimization and target identification, where it has well-established roles, but also lead discovery, now that high-throughput methods of structure determination can provide powerful approaches to screening." }, { "pmid": "15809198", "title": "Computer prediction of drug resistance mutations in proteins.", "abstract": "Drug resistance is of increasing concern in the treatment of infectious diseases and cancer. Mutation in drug-interacting disease proteins is one of the primary causes for resistance particularly against anti-infectious drugs. Prediction of resistance mutations in these proteins is valuable both for the molecular dissection of drug resistance mechanisms and for predicting features that guide the design of new agents to counter resistant strains. Several protein structure- and sequence-based computer methods have been explored for mechanistic study and prediction of resistance mutations. These methods and their usefulness are reviewed here." }, { "pmid": "7739053", "title": "A geometry-based suite of molecular docking processes.", "abstract": "We have developed a geometry-based suite of processes for molecular docking. The suite consists of a molecular surface representation, a docking algorithm, and a surface inter-penetration and contact filter. The surface representation is composed of a sparse set of critical points (with their associated normals) positioned at the face centers of the molecular surface, providing a concise yet representative set. 
The docking algorithm is based on the Geometric Hashing technique, which indexes the critical points with their normals in a transformation invariant fashion preserving the multi-element geometric constraints. The inter-penetration and surface contact filter features a three-layer scoring system, through which docked models with high contact area and low clashes are funneled. This suite of processes enables a pipelined operation of molecular docking with high efficacy. Accurate and fast docking has been achieved with a rich collection of complexes and unbound molecules, including protein-protein and protein-small molecule associations. An energy evaluation routine assesses the intermolecular interactions of the funneled models obtained from the docking of the bound molecules by pairwise van der Waals and Coulombic potentials. Applications of this routine demonstrate the goodness of the high scoring, geometrically docked conformations of the bound crystal complexes." }, { "pmid": "9299341", "title": "Modelling protein docking using shape complementarity, electrostatics and biochemical information.", "abstract": "A protein docking study was performed for two classes of biomolecular complexes: six enzyme/inhibitor and four antibody/antigen. Biomolecular complexes for which crystal structures of both the complexed and uncomplexed proteins are available were used for eight of the ten test systems. Our docking experiments consist of a global search of translational and rotational space followed by refinement of the best predictions. Potential complexes are scored on the basis of shape complementarity and favourable electrostatic interactions using Fourier correlation theory. Since proteins undergo conformational changes upon binding, the scoring function must be sufficiently soft to dock unbound structures successfully. Some degree of surface overlap is tolerated to account for side-chain flexibility. Similarly for electrostatics, the interaction of the dispersed point charges of one protein with the Coulombic field of the other is measured rather than precise atomic interactions. We tested our docking protocol using the native rather than the complexed forms of the proteins to address the more scientifically interesting problem of predictive docking. In all but one of our test cases, correctly docked geometries (interface Calpha RMS deviation </=2 A from the experimental structure) are found during a global search of translational and rotational space in a list that was always less than 250 complexes and often less than 30. Varying degrees of biochemical information are still necessary to remove most of the incorrectly docked complexes." }, { "pmid": "12421562", "title": "Analysis of catalytic residues in enzyme active sites.", "abstract": "We present an analysis of the residues directly involved in catalysis in 178 enzyme active sites. Specific criteria were derived to define a catalytic residue, and used to create a catalytic residue dataset, which was then analysed in terms of properties including secondary structure, solvent accessibility, flexibility, conservation, quaternary structure and function. The results indicate the dominance of a small set of amino acid residues in catalysis and give a picture of a general active site environment. It is hoped that this information will provide a better understanding of the molecular mechanisms involved in catalysis and a heuristic basis for predicting catalytic residues in enzymes of unknown function." 
}, { "pmid": "15201051", "title": "Enzyme/non-enzyme discrimination and prediction of enzyme active site location using charge-based methods.", "abstract": "Calculations of charge interactions complement analysis of a characterised active site, rationalising pH-dependence of activity and transition state stabilisation. Prediction of active site location through large DeltapK(a)s or electrostatic strain is relevant for structural genomics. We report a study of ionisable groups in a set of 20 enzymes, finding that false positives obscure predictive potential. In a larger set of 156 enzymes, peaks in solvent-space electrostatic properties are calculated. Both electric field and potential match well to active site location. The best correlation is found with electrostatic potential calculated from uniform charge density over enzyme volume, rather than from assignment of a standard atom-specific charge set. Studying a shell around each molecule, for 77% of enzymes the potential peak is within that 5% of the shell closest to the active site centre, and 86% within 10%. Active site identification by largest cleft, also with projection onto a shell, gives 58% of enzymes for which the centre of the largest cleft lies within 5% of the active site, and 70% within 10%. Dielectric boundary conditions emphasise clefts in the uniform charge density method, which is suited to recognition of binding pockets embedded within larger clefts. The variation of peak potential with distance from active site, and comparison between enzyme and non-enzyme sets, gives an optimal threshold distinguishing enzyme from non-enzyme. We find that 87% of the enzyme set exceeds the threshold as compared to 29% of the non-enzyme set. Enzyme/non-enzyme homologues, \"structural genomics\" annotated proteins and catalytic/non-catalytic RNAs are studied in this context." }, { "pmid": "15998733", "title": "Computational prediction of native protein ligand-binding and enzyme active site sequences.", "abstract": "Recent studies reveal that the core sequences of many proteins were nearly optimized for stability by natural evolution. Surface residues, by contrast, are not so optimized, presumably because protein function is mediated through surface interactions with other molecules. Here, we sought to determine the extent to which the sequences of protein ligand-binding and enzyme active sites could be predicted by optimization of scoring functions based on protein ligand-binding affinity rather than structural stability. Optimization of binding affinity under constraints on the folding free energy correctly predicted 83% of amino acid residues (94% similar) in the binding sites of two model receptor-ligand complexes, streptavidin-biotin and glucose-binding protein. To explore the applicability of this methodology to enzymes, we applied an identical algorithm to the active sites of diverse enzymes from the peptidase, beta-gal, and nucleotide synthase families. Although simple optimization of binding affinity reproduced the sequences of some enzyme active sites with high precision, imposition of additional, geometric constraints on side-chain conformations based on the catalytic mechanism was required in other cases. With these modifications, our sequence optimization algorithm correctly predicted 78% of residues from all of the enzymes, with 83% similar to native (90% correct, with 95% similar, excluding residues with high variability in multiple sequence alignments). 
Furthermore, the conformations of the selected side chains were often correctly predicted within crystallographic error. These findings suggest that simple selection pressures may have played a predominant role in determining the sequences of ligand-binding and active sites in proteins." }, { "pmid": "16919296", "title": "Insights into protein-protein interfaces using a Bayesian network prediction method.", "abstract": "Identifying the interface between two interacting proteins provides important clues to the function of a protein, and is becoming increasing relevant to drug discovery. Here, surface patch analysis was combined with a Bayesian network to predict protein-protein binding sites with a success rate of 82% on a benchmark dataset of 180 proteins, improving by 6% on previous work and well above the 36% that would be achieved by a random method. A comparable success rate was achieved even when evolutionary information was missing, a further improvement on our previous method which was unable to handle incomplete data automatically. In a case study of the Mog1p family, we showed that our Bayesian network method can aid the prediction of previously uncharacterised binding sites and provide important clues to protein function. On Mog1p itself a putative binding site involved in the SLN1-SKN7 signal transduction pathway was detected, as was a Ran binding site, previously characterized solely by conservation studies, even though our automated method operated without using homologous proteins. On the remaining members of the family (two structural genomics targets, and a protein involved in the photosystem II complex in higher plants) we identified novel binding sites with little correspondence to those on Mog1p. These results suggest that members of the Mog1p family bind to different proteins and probably have different functions despite sharing the same overall fold. We also demonstrated the applicability of our method to drug discovery efforts by successfully locating a number of binding sites involved in the protein-protein interaction network of papilloma virus infection. In a separate study, we attempted to distinguish between the two types of binding site, obligate and non-obligate, within our dataset using a second Bayesian network. This proved difficult although some separation was achieved on the basis of patch size, electrostatic potential and conservation. Such was the similarity between the two interacting patch types, we were able to use obligate binding site properties to predict the location of non-obligate binding sites and vice versa." }, { "pmid": "17451744", "title": "HotPatch: a statistical approach to finding biologically relevant features on protein surfaces.", "abstract": "We describe a fully automated algorithm for finding functional sites on protein structures. Our method finds surface patches of unusual physicochemical properties on protein structures, and estimates the patches' probability of overlapping functional sites. Other methods for predicting the locations of specific types of functional sites exist, but in previous analyses, it has been difficult to compare methods when they are applied to different types of sites. Thus, we introduce a new statistical framework that enables rigorous comparisons of the usefulness of different physicochemical properties for predicting virtually any kind of functional site. The program's statistical models were trained for 11 individual properties (electrostatics, concavity, hydrophobicity, etc.) 
and for 15 neural network combination properties, all optimized and tested on 15 diverse protein functions. To simulate what to expect if the program were run on proteins of unknown function, as might arise from structural genomics, we tested it on 618 proteins of diverse mixed functions. In the higher-scoring top half of all predictions, a functional residue could typically be found within the first 1.7 residues chosen at random. The program may or may not use partial information about the protein's function type as an input, depending on which statistical model the user chooses to employ. If function type is used as an additional constraint, prediction accuracy usually increases, and is particularly good for enzymes, DNA-interacting sites, and oligomeric interfaces. The program can be accessed online (at http://hotpatch.mbi.ucla.edu)." }, { "pmid": "7150574", "title": "Stabilization of protein structure by sugars.", "abstract": "The preferential interaction of proteins with solvent components was measured in aqueous lactose and glucose systems by using a high precision densimeter. In all cases, the protein was preferentially hydrated; i.e., addition of these sugars to an aqueous solution of the protein resulted in an unfavorable free-energy change. This effect was shown to increase with an increase in protein surface area, explaining the protein stabilizing action of these sugars and their enhancing effect of protein associations. Correlation of the preferential interaction parameter with the effect of the sugars on the surface tension of water, i.e., their positive surface tension increment, has led to the conclusion that the surface free energy perturbation by sugars plays a predominant role in their preferential interaction with proteins. Other contributing factors are the exclusion volume of the sugars and the chemical nature of the protein surface." }, { "pmid": "8906967", "title": "Reduced surface: an efficient way to compute molecular surfaces.", "abstract": "Because of their wide use in molecular modeling, methods to compute molecular surfaces have received a lot of interest in recent years. However, most of the proposed algorithms compute the analytical representation of only the solvent-accessible surface. There are a few programs that compute the analytical representation of the solvent-excluded surface, but they often have problems handling singular cases of self-intersecting surfaces and tend to fail on large molecules (more than 10,000 atoms). We describe here a program called MSMS, which is shown to be fast and reliable in computing molecular surfaces. It relies on the use of the reduced surface that is briefly defined here and from which the solvent-accessible and solvent-excluded surfaces are computed. The four algorithms composing MSMS are described and their complexity is analyzed. Special attention is given to the handling of self-intersecting parts of the solvent-excluded surface called singularities. The program has been compared with Connolly's program PQMS [M.L. Connolly (1993) Journal of Molecular Graphics, Vol. 11, pp. 139-141] on a set of 709 molecules taken from the Brookhaven Data Base. MSMS was able to compute topologically correct surfaces for each molecule in the set. Moreover, the actual time spent to compute surfaces is in agreement with the theoretical complexity of the program, which is shown to be O[n log(n)] for n atoms. 
On a Hewlett-Packard 9000/735 workstation, MSMS takes 0.73 s to produce a triangulated solvent-excluded surface for crambin (1 crn, 46 residues, 327 atoms, 4772 triangles), 4.6 s for thermolysin (3tln, 316 residues, 2437 atoms, 26462 triangles), and 104.53 s for glutamine synthetase (2gls, 5676 residues, 43632 atoms, 476665 triangles)." }, { "pmid": "19809581", "title": "Quality Meshing of Implicit Solvation Models of Biomolecular Structures.", "abstract": "This paper describes a comprehensive approach to construct quality meshes for implicit solvation models of biomolecular structures starting from atomic resolution data in the Protein Data Bank (PDB). First, a smooth volumetric electron density map is constructed from atomic data using weighted Gaussian isotropic kernel functions and a two-level clustering technique. This enables the selection of a smooth implicit solvation surface approximation to the Lee-Richards molecular surface. Next, a modified dual contouring method is used to extract triangular meshes for the surface, and tetrahedral meshes for the volume inside or outside the molecule within a bounding sphere/box of influence. Finally, geometric flow techniques are used to improve the surface and volume mesh quality. Several examples are presented, including generated meshes for biomolecules that have been successfully used in finite element simulations involving solvation energetics and binding rate constants." }, { "pmid": "16621636", "title": "Efficient molecular surface generation using level-set methods.", "abstract": "Molecules interact through their surface residues. Calculation of the molecular surface of a protein structure is thus an important step for a detailed functional analysis. One of the main considerations in comparing existing methods for molecular surface computations is their speed. Most of the methods that produce satisfying results for small molecules fail to do so for large complexes. In this article, we present a level-set-based approach to compute and visualize a molecular surface at a desired resolution. The emerging level-set methods have been used for computing evolving boundaries in several application areas from fluid mechanics to computer vision. Our method provides a uniform framework for computing solvent-accessible, solvent-excluded surfaces and interior cavities. The computation is carried out very efficiently even for very large molecular complexes with tens of thousands of atoms. We compared our method to some of the most widely used molecular visualization tools (Swiss-PDBViewer, PyMol, and Chimera) and our results show that we can calculate and display a molecular surface 1.5-3.14 times faster on average than all three of the compared programs. Furthermore, we demonstrate that our method is able to detect all of the interior inaccessible cavities that can accommodate one or more water molecules." }, { "pmid": "15264254", "title": "UCSF Chimera--a visualization system for exploratory research and analysis.", "abstract": "The design, implementation, and capabilities of an extensible visualization system, UCSF Chimera, are discussed. Chimera is segmented into a core that provides basic services and visualization, and extensions that provide most higher level functionality. This architecture ensures that the extension mechanism satisfies the demands of outside developers who wish to incorporate new features. 
Two unusual extensions are presented: Multiscale, which adds the ability to visualize large-scale molecular assemblies such as viral coats, and Collaboratory, which allows researchers to share a Chimera session interactively despite being at separate locales. Other extensions include Multalign Viewer, for showing multiple sequence alignments and associated structures; ViewDock, for screening docked ligand orientations; Movie, for replaying molecular dynamics trajectories; and Volume Viewer, for display and analysis of volumetric data. A discussion of the usage of Chimera in real-world situations is given, along with anticipated future directions. Chimera includes full user documentation, is free to academic and nonprofit users, and is available for Microsoft Windows, Linux, Apple Mac OS X, SGI IRIX, and HP Tru64 Unix from http://www.cgl.ucsf.edu/chimera/." } ]
International Journal of Telemedicine and Applications
20467560
PMC2868183
10.1155/2010/536237
Arogyasree: An Enhanced Grid-Based Approach to Mobile Telemedicine
A typical telemedicine system involves a small set of hospitals providing remote healthcare services to a small section of society using dedicated nodal centers. However, in developing nations like India, where the majority live in rural areas that lack specialist care, we envision the need for much larger Internet-based telemedicine systems that would enable a large pool of doctors and hospitals to collectively provide healthcare services to entire populations. We propose a scalable, Internet-based P2P architecture for telemedicine integrating multiple hospitals, mobile medical specialists, and rural mobile units. This system, based on the store-and-forward model, features a distributed context-aware scheduler for providing timely and location-aware telemedicine services. Other features, such as a zone-based overlay structure and a persistent object space abstraction, make the system efficient and easy to use. Lastly, the system uses the existing Internet infrastructure and supports mobility at both the doctor and patient ends.
2. Related Work
We compare Arogyasree with existing telemedicine solutions with respect to scalability, context-aware scheduling of patient requests, coverage of the system (mobility enhances coverage), telemedicine services, infrastructure (dedicated versus nondedicated), and so forth. Telemedicine solutions such as [5, 6, 10, 11] use dedicated hospital nodes as centers for providing telemedicine service. They require the specialist to be available at the center whenever the data is received, to ensure that the advice is sent within a few minutes. But with the widespread use of mobile communication devices, reports can be delivered to the handheld of a specialist, who may be located anywhere. Arogyasree supports mobility on the part of the medical practitioner, as in [12]. It also supports mobility on the patient side, thereby improving coverage in rural areas. Solutions like [13–15] also support mobility on the patient side. In [14], wireless ad hoc networks and grids are used at the patient side. However, we have modeled the resource-constrained patient-side mobile devices as external entities (not as peers in the grid) to improve grid stability. Certain solutions address the needs of a single hospital and its associated patients, as in [12]. Our solution, in contrast, incorporates multiple hospitals working collaboratively within the grid, providing a wider reach to the target community. Solutions like [16, 17] also use interconnected hospitals to provide medical service. Neither of these solutions, however, supports mobility on the doctor side, as they use video conferencing as the primary mode of patient-doctor interaction. Additionally, [17] requires dedicated communication infrastructure linking the hospitals, whereas our solution is built on top of the Internet.
In [11, 15], a central server is used to handle the incoming requests. This makes them inherently nonscalable, restricting their scope as solutions for a large-scale global health grid. Also, the central server becomes a single point of failure. Our solution uses multiple zonal servers from various zones to distribute the handling of requests, thereby providing scalability. The solution in [10] takes a different approach to this problem by using a dedicated server farm for request handling, but this leads to underutilization of the computing resources available at the various hospital nodes and limits scalability. The solutions also differ in the manner in which patient requests are forwarded to doctors. In [14], a dedicated medical call center comprising medical practitioners attends to patient requests and forwards them to appropriate doctors, but the manual intervention in forwarding the requests makes the system expensive as well as less scalable. In another approach [18], the system accepts symptoms from the patient and uses artificial intelligence to route the request to the appropriate specialist. In our solution, we use simple automatic forwarding of requests through the zonal servers, based on the specialization specified in the request. Our solution also allows internal forwarding of patient requests among multiple specialists. Thus we have achieved a simple, scalable, and cost-effective solution. The solution in [18] does not address the need for a patient-doctor meeting, as its primary aim (a military application) does not require it. In our case, however, the patient may need to consult the doctor in person, depending on the ailment.
Hence scheduling is implemented so that, as far as possible, patient requests are forwarded to nearby hospitals, depending on availability. For this, our solution uses geographic proximity-based zones and local tuple spaces for handling surplus requests. Many of the existing solutions utilize medical grids for computational purposes only, as in [19, 20], whereas we also use the grid for storage of patient records, request forwarding, and load balancing. Further, none of the above solutions effectively addresses the possibility of a group of hospitals occasionally becoming overloaded. Handling this requires a request to be stored temporarily to preserve proximity and then forwarded to the global grid after a specified time-out to ensure timely service; we address it by incorporating location-wise and global tuple spaces.
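To make the scheduling behaviour just described concrete, here is a minimal Python sketch of how a zonal server might assign a patient request to an available specialist in its own zone, park surplus requests in a local tuple space, and escalate them to a global space after a time-out. This is only an illustration of the idea, not the Arogyasree implementation; the class and names (ZonalServer, local_space, GLOBAL_SPACE, rebalance) are invented for the example.

import time
from collections import deque

# Hypothetical tuple spaces: a local per-zone queue and a shared global queue.
GLOBAL_SPACE = deque()

class ZonalServer:
    def __init__(self, zone, doctors, timeout_s=3600):
        self.zone = zone                    # geographic zone handled by this server
        self.doctors = doctors              # {doctor_id: {"specialization": str, "available": bool}}
        self.local_space = deque()          # surplus requests awaiting a local doctor
        self.timeout_s = timeout_s          # after this, requests escalate to the global grid

    def submit(self, request):
        """Try to assign the request in-zone; otherwise park it locally."""
        doctor = self._find_available(request["specialization"])
        if doctor is not None:
            return self._assign(request, doctor)
        request["queued_at"] = time.time()
        self.local_space.append(request)    # keep it near the patient for now
        return None

    def rebalance(self):
        """Periodic task: escalate requests that waited too long to the global space."""
        now = time.time()
        still_local = deque()
        while self.local_space:
            req = self.local_space.popleft()
            if now - req["queued_at"] > self.timeout_s:
                GLOBAL_SPACE.append(req)    # any zone may now pick it up
            else:
                still_local.append(req)
        self.local_space = still_local

    def _find_available(self, specialization):
        for doc_id, info in self.doctors.items():
            if info["specialization"] == specialization and info["available"]:
                return doc_id
        return None

    def _assign(self, request, doctor_id):
        self.doctors[doctor_id]["available"] = False
        return {"request": request["id"], "doctor": doctor_id, "zone": self.zone}

A periodic call to rebalance() is what trades geographic proximity for timeliness once the time-out expires, mirroring the location-wise and global tuple space design described above.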
[ "15787008", "16221939" ]
[ { "pmid": "15787008", "title": "Telemedicine diffusion in a developing country: the case of India (March 2004).", "abstract": "Telemedicine (health-care delivery where physicians examine distant patients using telecommunications technologies) has been heralded as one of several possible solutions to some of the medical dilemmas that face many developing countries. In this study, we examine the current state of telemedicine in a developing country, India. Telemedicine has brought a plethora of benefits to the populace of India, especially those living in rural and remote areas (constituting about 70% of India's population). We discuss three Indian telemedicine implementation cases, consolidate lessons learned from the cases, and culminate with potential researchable critical success factors that account for the growth and modest successes of telemedicine in India." }, { "pmid": "16221939", "title": "HL7 Clinical Document Architecture, Release 2.", "abstract": "Clinical Document Architecture, Release One (CDA R1), became an American National Standards Institute (ANSI)-approved HL7 Standard in November 2000, representing the first specification derived from the Health Level 7 (HL7) Reference Information Model (RIM). CDA, Release Two (CDA R2), became an ANSI-approved HL7 Standard in May 2005 and is the subject of this article, where the focus is primarily on how the standard has evolved since CDA R1, particularly in the area of semantic representation of clinical events. CDA is a document markup standard that specifies the structure and semantics of a clinical document (such as a discharge summary or progress note) for the purpose of exchange. A CDA document is a defined and complete information object that can include text, images, sounds, and other multimedia content. It can be transferred within a message and can exist independently, outside the transferring message. CDA documents are encoded in Extensible Markup Language (XML), and they derive their machine processable meaning from the RIM, coupled with terminology. The CDA R2 model is richly expressive, enabling the formal representation of clinical statements (such as observations, medication administrations, and adverse events) such that they can be interpreted and acted upon by a computer. On the other hand, CDA R2 offers a low bar for adoption, providing a mechanism for simply wrapping a non-XML document with the CDA header or for creating a document with a structured header and sections containing only narrative content. The intent is to facilitate widespread adoption, while providing a mechanism for incremental semantic interoperability." } ]
PLoS Computational Biology
20617200
PMC2895635
10.1371/journal.pcbi.1000837
A Comprehensive Benchmark of Kernel Methods to Extract Protein–Protein Interactions from Literature
The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein–protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study shows that three kernels are clearly superior to the other methods.
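As an illustration of the cross-learning/cross-corpus protocol referred to above, the following Python sketch trains on one corpus and reports F-scores on every other corpus, with no test-side tuning. It assumes each corpus has already been reduced to feature vectors and binary labels for candidate protein pairs, and it uses scikit-learn's LinearSVC merely as a stand-in for the kernel methods actually benchmarked.

from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def cross_corpus_evaluation(corpora):
    """Train on each corpus and test on every other one, returning an F-score
    per (train, test) pair.

    `corpora` maps a corpus name to (X, y), where X is a feature matrix for
    candidate protein pairs and y holds binary interaction labels.
    """
    scores = {}
    for train_name, (X_train, y_train) in corpora.items():
        clf = LinearSVC().fit(X_train, y_train)        # stand-in for a kernel method
        for test_name, (X_test, y_test) in corpora.items():
            if test_name == train_name:
                continue                               # in-corpus cross-validation handled separately
            y_pred = clf.predict(X_test)
            scores[(train_name, test_name)] = f1_score(y_test, y_pred)
    return scores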
Related Work
A number of different techniques have been proposed to solve the problem of extracting interactions between proteins in natural language text. These can be roughly sorted into one of three classes: co-occurrence, pattern matching, and machine learning. We briefly review these methods here for completeness; see [39] for a recent survey. We describe kernel-based methods in more detail in Methods.
A common baseline method for relationship extraction is to assume a relationship between each pair of entities that co-occur in the same piece of text (e.g., [36]). This “piece of text” is usually restricted to single sentences, but can also be a phrase, a paragraph, or a whole document. The underlying assumption is that whenever (two or more) entities are mentioned together, a semantic relation holds between them. However, the semantic relation does not necessarily mean that the entities interact; consequently, the kind of relation might not match what is sought. In the case of co-occurring proteins, only a fraction of sentences will discuss actual interactions between them. As an example, in the AIMed corpus (see Corpora), only 17% of all sentence-level protein pairs describe protein-protein interactions. Accordingly, precision is often low, but it can be improved by additional filtering steps, such as aggregation of single PPIs at the corpus level [19], removal of sentences matching certain lexico-syntactic patterns [40], or requiring the occurrence of an additional “interaction word” from a fixed list between the two proteins [15].
The second common approach is pattern matching. SUISEKI was one of the first systems to use hand-crafted regular expressions to encode phrases that typically express protein-protein interactions, using part-of-speech information and word lists [41]. Overall, the authors found that a set of about 40 manually derived patterns yields high precision but achieves only low recall. [42] proposed OpenDMAP, a framework for template matching, which is backed by ontological resources to represent slots, potential slot fillers, etc. With 78 hand-crafted templates, they achieve an F-score of 29% on the BioCreative 2 IPS test set [43], which was the best result at the time of the competition. [44] showed that patterns can be generated automatically from manually annotated sentences that are abstracted into patterns. AliBaba goes a step further in deriving patterns from automatically generated training data [45]. The fact that automatically generated patterns usually yield high precision but low individual recall is compensated for in this method by generating thousands of patterns. On the BioCreative 2 IPS test set, this method achieves an F-score of around 24% without any corpus-specific tuning [45]. The third category of approaches uses machine learning, for instance, Bayesian network approaches [46] or maximum-entropy-based methods [47]. The latter can be set up as a two-step classification scenario, first judging sentences for relevance to discussing protein-protein interactions, and then classifying each candidate pair of proteins in such sentences. Using half of the BioCreative 1 PPI corpus each for training and testing, the approach yields an accuracy of 81.9% when using both steps, and 81.2% when using the second step only. As ML-based methods are the focus of our paper, we discuss closely related work in more detail in the next sections.
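The sentence-level co-occurrence baseline with an interaction-word filter described above amounts to only a few lines of code. The sketch below assumes sentences are already split and protein mentions tagged; the token structure and the small word list are placeholders rather than the resources used by any of the cited systems.

from itertools import combinations

# Placeholder list; real systems use longer, curated interaction-word lists.
INTERACTION_WORDS = {"interacts", "binds", "phosphorylates", "activates", "inhibits"}

def cooccurrence_pairs(sentences, require_interaction_word=True):
    """Predict a PPI for every pair of proteins tagged in the same sentence,
    optionally requiring an interaction word between the two mentions.

    `sentences` is a list of dicts: {"tokens": [...], "proteins": [(name, index), ...]}.
    """
    predictions = set()
    for sent in sentences:
        for (p1, i1), (p2, i2) in combinations(sent["proteins"], 2):
            if p1 == p2:
                continue
            if require_interaction_word:
                lo, hi = sorted((i1, i2))
                between = {t.lower() for t in sent["tokens"][lo + 1:hi]}
                if not (between & INTERACTION_WORDS):
                    continue
            predictions.add(tuple(sorted((p1, p2))))
    return predictions

Calling cooccurrence_pairs(sentences, require_interaction_word=False) gives the plain co-occurrence baseline, whose low precision the filtering step is meant to improve.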
[ "18721473", "14517352", "18381899", "18269572", "12662919", "18269571", "17135203", "19060303", "18586725", "17344885", "10977089", "19635172", "15811782", "18003645", "19850753", "17254351", "19073593", "19909518", "18207462", "18834491", "18237434", "15890744", "18834492", "19369495", "17291334", "17142812", "11928487", "16046493", "15814565" ]
[ { "pmid": "18721473", "title": "Integration of relational and hierarchical network information for protein function prediction.", "abstract": "BACKGROUND\nIn the current climate of high-throughput computational biology, the inference of a protein's function from related measurements, such as protein-protein interaction relations, has become a canonical task. Most existing technologies pursue this task as a classification problem, on a term-by-term basis, for each term in a database, such as the Gene Ontology (GO) database, a popular rigorous vocabulary for biological functions. However, ontology structures are essentially hierarchies, with certain top to bottom annotation rules which protein function predictions should in principle follow. Currently, the most common approach to imposing these hierarchical constraints on network-based classifiers is through the use of transitive closure to predictions.\n\n\nRESULTS\nWe propose a probabilistic framework to integrate information in relational data, in the form of a protein-protein interaction network, and a hierarchically structured database of terms, in the form of the GO database, for the purpose of protein function prediction. At the heart of our framework is a factorization of local neighborhood information in the protein-protein interaction network across successive ancestral terms in the GO hierarchy. We introduce a classifier within this framework, with computationally efficient implementation, that produces GO-term predictions that naturally obey a hierarchical 'true-path' consistency from root to leaves, without the need for further post-processing.\n\n\nCONCLUSION\nA cross-validation study, using data from the yeast Saccharomyces cerevisiae, shows our method offers substantial improvements over both standard 'guilt-by-association' (i.e., Nearest-Neighbor) and more refined Markov random field methods, whether in their original form or when post-processed to artificially impose 'true-path' consistency. Further analysis of the results indicates that these improvements are associated with increased predictive capabilities (i.e., increased positive predictive value), and that this increase is consistent uniformly with GO-term depth. Additional in silico validation on a collection of new annotations recently added to GO confirms the advantages suggested by the cross-validation study. Taken as a whole, our results show that a hierarchical approach to network-based protein function prediction, that exploits the ontological structure of protein annotation databases in a principled manner, can offer substantial advantages over the successive application of 'flat' network-based methods." }, { "pmid": "14517352", "title": "Protein complexes and functional modules in molecular networks.", "abstract": "Proteins, nucleic acids, and small molecules form a dense network of molecular interactions in a cell. Molecules are nodes of this network, and the interactions between them are edges. The architecture of molecular networks can reveal important principles of cellular organization and function, similarly to the way that protein structure tells us about the function and organization of a protein. Computational analysis of molecular networks has been primarily concerned with node degree [Wagner, A. & Fell, D. A. (2001) Proc. R. Soc. London Ser. B 268, 1803-1810; Jeong, H., Tombor, B., Albert, R., Oltvai, Z. N. & Barabasi, A. L. (2000) Nature 407, 651-654] or degree correlation [Maslov, S. & Sneppen, K. 
(2002) Science 296, 910-913], and hence focused on single/two-body properties of these networks. Here, by analyzing the multibody structure of the network of protein-protein interactions, we discovered molecular modules that are densely connected within themselves but sparsely connected with the rest of the network. Comparison with experimental data and functional annotation of genes showed two types of modules: (i) protein complexes (splicing machinery, transcription factors, etc.) and (ii) dynamic functional units (signaling cascades, cell-cycle regulation, etc.). Discovered modules are highly statistically significant, as is evident from comparison with random graphs, and are robust to noise in the data. Our results provide strong support for the network modularity principle introduced by Hartwell et al. [Hartwell, L. H., Hopfield, J. J., Leibler, S. & Murray, A. W. (1999) Nature 402, C47-C52], suggesting that found modules constitute the \"building blocks\" of molecular networks." }, { "pmid": "18381899", "title": "Protein networks in disease.", "abstract": "During a decade of proof-of-principle analysis in model organisms, protein networks have been used to further the study of molecular evolution, to gain insight into the robustness of cells to perturbation, and for assignment of new protein functions. Following these analyses, and with the recent rise of protein interaction measurements in mammals, protein networks are increasingly serving as tools to unravel the molecular basis of disease. We review promising applications of protein networks to disease in four major areas: identifying new disease genes; the study of their network properties; identifying disease-related subnetworks; and network-based disease classification. Applications in infectious disease, personalized medicine, and pharmacology are also forthcoming as the available protein network information improves in quality and coverage." }, { "pmid": "18269572", "title": "Molecular and cellular approaches for the detection of protein-protein interactions: latest techniques and current limitations.", "abstract": "Homotypic and heterotypic protein interactions are crucial for all levels of cellular function, including architecture, regulation, metabolism, and signaling. Therefore, protein interaction maps represent essential components of post-genomic toolkits needed for understanding biological processes at a systems level. Over the past decade, a wide variety of methods have been developed to detect, analyze, and quantify protein interactions, including surface plasmon resonance spectroscopy, NMR, yeast two-hybrid screens, peptide tagging combined with mass spectrometry and fluorescence-based technologies. Fluorescence techniques range from co-localization of tags, which may be limited by the optical resolution of the microscope, to fluorescence resonance energy transfer-based methods that have molecular resolution and can also report on the dynamics and localization of the interactions within a cell. Proteins interact via highly evolved complementary surfaces with affinities that can vary over many orders of magnitude. Some of the techniques described in this review, such as surface plasmon resonance, provide detailed information on physical properties of these interactions, while others, such as two-hybrid techniques and mass spectrometry, are amenable to high-throughput analysis using robotics. 
In addition to providing an overview of these methods, this review emphasizes techniques that can be applied to determine interactions involving membrane proteins, including the split ubiquitin system and fluorescence-based technologies for characterizing hits obtained with high-throughput approaches. Mass spectrometry-based methods are covered by a review by Miernyk and Thelen (2008; this issue, pp. 597-609). In addition, we discuss the use of interaction data to construct interaction networks and as the basis for the exciting possibility of using to predict interaction surfaces." }, { "pmid": "12662919", "title": "How reliable are experimental protein-protein interaction data?", "abstract": "Data of protein-protein interactions provide valuable insight into the molecular networks underlying a living cell. However, their accuracy is often questioned, calling for a rigorous assessment of their reliability. The computation offered here provides an intelligible mean to assess directly the rate of true positives in a data set of experimentally determined interacting protein pairs. We show that the reliability of high-throughput yeast two-hybrid assays is about 50%, and that the size of the yeast interactome is estimated to be 10,000-16,600 interactions." }, { "pmid": "18269571", "title": "Biochemical approaches for discovering protein-protein interactions.", "abstract": "Protein-protein interactions or protein complexes are integral in nearly all cellular processes, ranging from metabolism to structure. Elucidating both individual protein associations and complex protein interaction networks, while challenging, is an essential goal of functional genomics. For example, discovering interacting partners for a 'protein of unknown function' can provide insight into actual function far beyond what is possible with sequence-based predictions, and provide a platform for future research. Synthetic genetic approaches such as two-hybrid screening often reveal a perplexing array of potential interacting partners for any given target protein. It is now known, however, that this type of anonymous screening approach can yield high levels of false-positive results, and therefore putative interactors must be confirmed by independent methods. In vitro biochemical strategies for identifying interacting proteins are varied and time-honored, some being as old as the field of protein chemistry itself. Herein we discuss five biochemical approaches for isolating and characterizing protein-protein interactions in vitro: co-immunoprecipitation, blue native gel electrophoresis, in vitro binding assays, protein cross-linking, and rate-zonal centrifugation. A perspective is provided for each method, and where appropriate specific, trial-tested methods are included." }, { "pmid": "17135203", "title": "MINT: the Molecular INTeraction database.", "abstract": "The Molecular INTeraction database (MINT, http://mint.bio.uniroma2.it/mint/) aims at storing, in a structured format, information about molecular interactions (MIs) by extracting experimental details from work published in peer-reviewed journals. At present the MINT team focuses the curation work on physical interactions between proteins. Genetic or computationally inferred interactions are not included in the database. Over the past four years MINT has undergone extensive revision. 
The new version of MINT is based on a completely remodeled database structure, which offers more efficient data exploration and analysis, and is characterized by entries with a richer annotation. Over the past few years the number of curated physical interactions has soared to over 95 000. The whole dataset can be freely accessed online in both interactive and batch modes through web-based interfaces and an FTP server. MINT now includes, as an integrated addition, HomoMINT, a database of interactions between human proteins inferred from experiments with ortholog proteins in model organisms (http://mint.bio.uniroma2.it/mint/)." }, { "pmid": "19060303", "title": "Facts from text: can text mining help to scale-up high-quality manual curation of gene products with ontologies?", "abstract": "The biomedical literature can be seen as a large integrated, but unstructured data repository. Extracting facts from literature and making them accessible is approached from two directions: manual curation efforts develop ontologies and vocabularies to annotate gene products based on statements in papers. Text mining aims to automatically identify entities and their relationships in text using information retrieval and natural language processing techniques. Manual curation is highly accurate but time consuming, and does not scale with the ever increasing growth of literature. Text mining as a high-throughput computational technique scales well, but is error-prone due to the complexity of natural language. How can both be married to combine scalability and accuracy? Here, we review the state-of-the-art text mining approaches that are relevant to annotation and discuss available online services analysing biomedical literature by means of text mining techniques, which could also be utilised by annotation projects. We then examine how far text mining has already been utilised in existing annotation projects and conclude how these techniques could be tightly integrated into the manual annotation process through novel authoring systems to scale-up high-quality manual curation." }, { "pmid": "18586725", "title": "Identifying gene-disease associations using centrality on a literature mined gene-interaction network.", "abstract": "MOTIVATION\nUnderstanding the role of genetics in diseases is one of the most important aims of the biological sciences. The completion of the Human Genome Project has led to a rapid increase in the number of publications in this area. However, the coverage of curated databases that provide information manually extracted from the literature is limited. Another challenge is that determining disease-related genes requires laborious experiments. Therefore, predicting good candidate genes before experimental analysis will save time and effort. We introduce an automatic approach based on text mining and network analysis to predict gene-disease associations. We collected an initial set of known disease-related genes and built an interaction network by automatic literature mining based on dependency parsing and support vector machines. Our hypothesis is that the central genes in this disease-specific network are likely to be related to the disease. We used the degree, eigenvector, betweenness and closeness centrality metrics to rank the genes in the network.\n\n\nRESULTS\nThe proposed approach can be used to extract known and to infer unknown gene-disease associations. We evaluated the approach for prostate cancer. Eigenvector and degree centrality achieved high accuracy. 
A total of 95% of the top 20 genes ranked by these methods are confirmed to be related to prostate cancer. On the other hand, betweenness and closeness centrality predicted more genes whose relation to the disease is currently unknown and are candidates for experimental study.\n\n\nAVAILABILITY\nA web-based system for browsing the disease-specific gene-interaction networks is available at: http://gin.ncibi.org." }, { "pmid": "17344885", "title": "A human phenome-interactome network of protein complexes implicated in genetic disorders.", "abstract": "We performed a systematic, large-scale analysis of human protein complexes comprising gene products implicated in many different categories of human disease to create a phenome-interactome network. This was done by integrating quality-controlled interactions of human proteins with a validated, computationally derived phenotype similarity score, permitting identification of previously unknown complexes likely to be associated with disease. Using a phenomic ranking of protein complexes linked to human disease, we developed a Bayesian predictor that in 298 of 669 linkage intervals correctly ranks the known disease-causing protein as the top candidate, and in 870 intervals with no identified disease-causing gene, provides novel candidates implicated in disorders such as retinitis pigmentosa, epithelial ovarian cancer, inflammatory bowel disease, amyotrophic lateral sclerosis, Alzheimer disease, type 2 diabetes and coronary heart disease. Our publicly available draft of protein complexes associated with pathology comprises 506 complexes, which reveal functional relationships between disease-promoting genes that will inform future experimentation." }, { "pmid": "10977089", "title": "A pragmatic information extraction strategy for gathering data on genetic interactions.", "abstract": "We present in this paper a pragmatic strategy to perform information extraction from biologic texts. Since the emergence of the information extraction field, techniques have evolved, become more robust and proved their efficiency on specific domains. We are using a combination of existing linguistic and knowledge processing tools to automatically extract information about gene interactions in the literature. Our ultimate goal is to build a network of gene interactions. The methodologies used and the current results are discussed in this paper." }, { "pmid": "19635172", "title": "A realistic assessment of methods for extracting gene/protein interactions from free text.", "abstract": "BACKGROUND\nThe automated extraction of gene and/or protein interactions from the literature is one of the most important targets of biomedical text mining research. In this paper we present a realistic evaluation of gene/protein interaction mining relevant to potential non-specialist users. 
Hence we have specifically avoided methods that are complex to install or require reimplementation, and we coupled our chosen extraction methods with a state-of-the-art biomedical named entity tagger.\n\n\nRESULTS\nOur results show: that performance across different evaluation corpora is extremely variable; that the use of tagged (as opposed to gold standard) gene and protein names has a significant impact on performance, with a drop in F-score of over 20 percentage points being commonplace; and that a simple keyword-based benchmark algorithm when coupled with a named entity tagger outperforms two of the tools most widely used to extract gene/protein interactions.\n\n\nCONCLUSION\nIn terms of availability, ease of use and performance, the potential non-specialist user community interested in automatically extracting gene and/or protein interactions from free text is poorly served by current tools and systems. The public release of extraction tools that are easy to install and use, and that achieve state-of-art levels of performance should be treated as a high priority by the biomedical text mining community." }, { "pmid": "15811782", "title": "Comparative experiments on learning information extractors for proteins and their interactions.", "abstract": "OBJECTIVE\nAutomatically extracting information from biomedical text holds the promise of easily consolidating large amounts of biological knowledge in computer-accessible form. This strategy is particularly attractive for extracting data relevant to genes of the human genome from the 11 million abstracts in Medline. However, extraction efforts have been frustrated by the lack of conventions for describing human genes and proteins. We have developed and evaluated a variety of learned information extraction systems for identifying human protein names in Medline abstracts and subsequently extracting information on interactions between the proteins.\n\n\nMETHODS AND MATERIAL\nWe used a variety of machine learning methods to automatically develop information extraction systems for extracting information on gene/protein name, function and interactions from Medline abstracts. We present cross-validated results on identifying human proteins and their interactions by training and testing on a set of approximately 1000 manually-annotated Medline abstracts that discuss human genes/proteins.\n\n\nRESULTS\nWe demonstrate that machine learning approaches using support vector machines and maximum entropy are able to identify human proteins with higher accuracy than several previous approaches. We also demonstrate that various rule induction methods are able to identify protein interactions with higher precision than manually-developed rules.\n\n\nCONCLUSION\nOur results show that it is promising to use machine learning to automatically build systems for extracting information from biomedical text. The results also give a broad picture of the relative strengths of a wide variety of methods when tested on a reasonably large human-annotated corpus." }, { "pmid": "18003645", "title": "Kernel approaches for genic interaction extraction.", "abstract": "MOTIVATION\nAutomatic knowledge discovery and efficient information access such as named entity recognition and relation extraction between entities have recently become critical issues in the biomedical literature. 
However, the inherent difficulty of the relation extraction task, mainly caused by the diversity of natural language, is further compounded in the biomedical domain because biomedical sentences are commonly long and complex. In addition, relation extraction often involves modeling long range dependencies, discontiguous word patterns and semantic relations for which the pattern-based methodology is not directly applicable.\n\n\nRESULTS\nIn this article, we shift the focus of biomedical relation extraction from the problem of pattern extraction to the problem of kernel construction. We suggest four kernels: predicate, walk, dependency and hybrid kernels to adequately encapsulate information required for a relation prediction based on the sentential structures involved in two entities. For this purpose, we view the dependency structure of a sentence as a graph, which allows the system to deal with an essential one from the complex syntactic structure by finding the shortest path between entities. The kernels we suggest are augmented gradually from the flat features descriptions to the structural descriptions of the shortest paths. As a result, we obtain a very promising result, a 77.5 F-score with the walk kernel on the Language Learning in Logic (LLL) 05 genic interaction shared task.\n\n\nAVAILABILITY\nThe used algorithms are free for use for academic research and are available from our Web site http://mllab.sogang.ac.kr/ approximately shkim/LLL05.tar.gz." }, { "pmid": "19850753", "title": "Evaluation of linguistic features useful in extraction of interactions from PubMed; application to annotating known, high-throughput and predicted interactions in I2D.", "abstract": "MOTIVATION\nIdentification and characterization of protein-protein interactions (PPIs) is one of the key aims in biological research. While previous research in text mining has made substantial progress in automatic PPI detection from literature, the need to improve the precision and recall of the process remains. More accurate PPI detection will also improve the ability to extract experimental data related to PPIs and provide multiple evidence for each interaction.\n\n\nRESULTS\nWe developed an interaction detection method and explored the usefulness of various features in automatically identifying PPIs in text. The results show that our approach outperforms other systems using the AImed dataset. In the tests where our system achieves better precision with reduced recall, we discuss possible approaches for improvement. In addition to test datasets, we evaluated the performance on interactions from five human-curated databases-BIND, DIP, HPRD, IntAct and MINT-where our system consistently identified evidence for approximately 60% of interactions when both proteins appear in at least one sentence in the PubMed abstract. We then applied the system to extract articles from PubMed to annotate known, high-throughput and interologous interactions in I(2)D.\n\n\nAVAILABILITY\nThe data and software are available at: http://www.cs.utoronto.ca/ approximately juris/data/BI09/." }, { "pmid": "17254351", "title": "Benchmarking natural-language parsers for biological applications using dependency graphs.", "abstract": "BACKGROUND\nInterest is growing in the application of syntactic parsers to natural language processing problems in biology, but assessing their performance is difficult because differences in linguistic convention can falsely appear to be errors. 
We present a method for evaluating their accuracy using an intermediate representation based on dependency graphs, in which the semantic relationships important in most information extraction tasks are closer to the surface. We also demonstrate how this method can be easily tailored to various application-driven criteria.\n\n\nRESULTS\nUsing the GENIA corpus as a gold standard, we tested four open-source parsers which have been used in bioinformatics projects. We first present overall performance measures, and test the two leading tools, the Charniak-Lease and Bikel parsers, on subtasks tailored to reflect the requirements of a system for extracting gene expression relationships. These two tools clearly outperform the other parsers in the evaluation, and achieve accuracy levels comparable to or exceeding native dependency parsers on similar tasks in previous biological evaluations.\n\n\nCONCLUSION\nEvaluating using dependency graphs allows parsers to be tested easily on criteria chosen according to the semantics of particular biological applications, drawing attention to important mistakes and soaking up many insignificant differences that would otherwise be reported as errors. Generating high-accuracy dependency graphs from the output of phrase-structure parsers also provides access to the more detailed syntax trees that are used in several natural-language processing techniques." }, { "pmid": "19073593", "title": "Evaluating contributions of natural language parsers to protein-protein interaction extraction.", "abstract": "MOTIVATION\nWhile text mining technologies for biomedical research have gained popularity as a way to take advantage of the explosive growth of information in text form in biomedical papers, selecting appropriate natural language processing (NLP) tools is still difficult for researchers who are not familiar with recent advances in NLP. This article provides a comparative evaluation of several state-of-the-art natural language parsers, focusing on the task of extracting protein-protein interaction (PPI) from biomedical papers. We measure how each parser, and its output representation, contributes to accuracy improvement when the parser is used as a component in a PPI system.\n\n\nRESULTS\nAll the parsers attained improvements in accuracy of PPI extraction. The levels of accuracy obtained with these different parsers vary slightly, while differences in parsing speed are larger. The best accuracy in this work was obtained when we combined Miyao and Tsujii's Enju parser and Charniak and Johnson's reranking parser, and the accuracy is better than the state-of-the-art results on the same data.\n\n\nAVAILABILITY\nThe PPI extraction system used in this work (AkanePPI) is available online at http://www-tsujii.is.s.u-tokyo.ac.jp/downloads/downloads.cgi. The evaluated parsers are also available online from each developer's site." }, { "pmid": "19909518", "title": "Linguistic feature analysis for protein interaction extraction.", "abstract": "BACKGROUND\nThe rapid growth of the amount of publicly available reports on biomedical experimental results has recently caused a boost of text mining approaches for protein interaction extraction. Most approaches rely implicitly or explicitly on linguistic, i.e., lexical and syntactic, data extracted from text. However, only few attempts have been made to evaluate the contribution of the different feature types. 
In this work, we contribute to this evaluation by studying the relative importance of deep syntactic features, i.e., grammatical relations, shallow syntactic features (part-of-speech information) and lexical features. For this purpose, we use a recently proposed approach that uses support vector machines with structured kernels.\n\n\nRESULTS\nOur results reveal that the contribution of the different feature types varies for the different data sets on which the experiments were conducted. The smaller the training corpus compared to the test data, the more important the role of grammatical relations becomes. Moreover, deep syntactic information based classifiers prove to be more robust on heterogeneous texts where no or only limited common vocabulary is shared.\n\n\nCONCLUSION\nOur findings suggest that grammatical relations play an important role in the interaction extraction task. Moreover, the net advantage of adding lexical and shallow syntactic features is small related to the number of added features. This implies that efficient classifiers can be built by using only a small fraction of the features that are typically being used in recent approaches." }, { "pmid": "18207462", "title": "Extracting interactions between proteins from the literature.", "abstract": "During the last decade, biomedicine has witnessed a tremendous development. Large amounts of experimental and computational biomedical data have been generated along with new discoveries, which are accompanied by an exponential increase in the number of biomedical publications describing these discoveries. In the meantime, there has been a great interest with scientific communities in text mining tools to find knowledge such as protein-protein interactions, which is most relevant and useful for specific analysis tasks. This paper provides a outline of the various information extraction methods in biomedical domain, especially for discovery of protein-protein interactions. It surveys methodologies involved in plain texts analyzing and processing, categorizes current work in biomedical information extraction, and provides examples of these methods. Challenges in the field are also presented and possible solutions are discussed." }, { "pmid": "18834491", "title": "OntoGene in BioCreative II.", "abstract": "BACKGROUND\nResearch scientists and companies working in the domains of biomedicine and genomics are increasingly faced with the problem of efficiently locating, within the vast body of published scientific findings, the critical pieces of information that are needed to direct current and future research investment.\n\n\nRESULTS\nIn this report we describe approaches taken within the scope of the second BioCreative competition in order to solve two aspects of this problem: detection of novel protein interactions reported in scientific articles, and detection of the experimental method that was used to confirm the interaction. Our approach to the former problem is based on a high-recall protein annotation step, followed by two strict disambiguation steps. The remaining proteins are then combined according to a number of lexico-syntactic filters, which deliver high-precision results while maintaining reasonable recall. 
The detection of the experimental methods is tackled by a pattern matching approach, which has delivered the best results in the official BioCreative evaluation.\n\n\nCONCLUSION\nAlthough the results of BioCreative clearly show that no tool is sufficiently reliable for fully automated annotations, a few of the proposed approaches (including our own) already perform at a competitive level. This makes them interesting either as standalone tools for preliminary document inspection, or as modules within an environment aimed at supporting the process of curation of biomedical literature." }, { "pmid": "18237434", "title": "OpenDMAP: an open source, ontology-driven concept analysis engine, with applications to capturing knowledge regarding protein transport, protein interactions and cell-type-specific gene expression.", "abstract": "BACKGROUND\nInformation extraction (IE) efforts are widely acknowledged to be important in harnessing the rapid advance of biomedical knowledge, particularly in areas where important factual information is published in a diverse literature. Here we report on the design, implementation and several evaluations of OpenDMAP, an ontology-driven, integrated concept analysis system. It significantly advances the state of the art in information extraction by leveraging knowledge in ontological resources, integrating diverse text processing applications, and using an expanded pattern language that allows the mixing of syntactic and semantic elements and variable ordering.\n\n\nRESULTS\nOpenDMAP information extraction systems were produced for extracting protein transport assertions (transport), protein-protein interaction assertions (interaction) and assertions that a gene is expressed in a cell type (expression). Evaluations were performed on each system, resulting in F-scores ranging from .26-.72 (precision .39-.85, recall .16-.85). Additionally, each of these systems was run over all abstracts in MEDLINE, producing a total of 72,460 transport instances, 265,795 interaction instances and 176,153 expression instances.\n\n\nCONCLUSION\nOpenDMAP advances the performance standards for extracting protein-protein interaction predications from the full texts of biomedical research articles. Furthermore, this level of performance appears to generalize to other information extraction tasks, including extracting information about predicates of more than two arguments. The output of the information extraction system is always constructed from elements of an ontology, ensuring that the knowledge representation is grounded with respect to a carefully constructed model of reality. The results of these efforts can be used to increase the efficiency of manual curation efforts and to provide additional features in systems that integrate multiple sources for information extraction. The open source OpenDMAP code library is freely available at http://bionlp.sourceforge.net/" }, { "pmid": "15890744", "title": "Discovering patterns to extract protein-protein interactions from the literature: Part II.", "abstract": "MOTIVATION\nAn enormous number of protein-protein interaction relationships are buried in millions of research articles published over the years, and the number is growing. Rediscovering them automatically is a challenging bioinformatics task. Solutions to this problem also reach far beyond bioinformatics.\n\n\nRESULTS\nWe study a new approach that involves automatically discovering English expression patterns, optimizing them and using them to extract protein-protein interactions. 
In a sister paper, we described how to generate English expression patterns related to protein-protein interactions, and this approach alone has already achieved precision and recall rates significantly higher than those of other automatic systems. This paper continues to present our theory, focusing on how to improve the patterns. A minimum description length (MDL)-based pattern-optimization algorithm is designed to reduce and merge patterns. This has significantly increased generalization power, and hence the recall and precision rates, as confirmed by our experiments.\n\n\nAVAILABILITY\nhttp://spies.cs.tsinghua.edu.cn." }, { "pmid": "18834492", "title": "Gene mention normalization and interaction extraction with context models and sentence motifs.", "abstract": "BACKGROUND\nThe goal of text mining is to make the information conveyed in scientific publications accessible to structured search and automatic analysis. Two important subtasks of text mining are entity mention normalization - to identify biomedical objects in text - and extraction of qualified relationships between those objects. We describe a method for identifying genes and relationships between proteins.\n\n\nRESULTS\nWe present solutions to gene mention normalization and extraction of protein-protein interactions. For the first task, we identify genes by using background knowledge on each gene, namely annotations related to function, location, disease, and so on. Our approach currently achieves an f-measure of 86.4% on the BioCreative II gene normalization data. For the extraction of protein-protein interactions, we pursue an approach that builds on classical sequence analysis: motifs derived from multiple sequence alignments. The method achieves an f-measure of 24.4% (micro-average) in the BioCreative II interaction pair subtask.\n\n\nCONCLUSION\nFor gene mention normalization, our approach outperforms strategies that utilize only the matching of genes names against dictionaries, without invoking further knowledge on each gene. Motifs derived from alignments of sentences are successful at identifying protein interactions in text; the approach we present in this report is fully automated and performs similarly to systems that require human intervention at one or more stages.\n\n\nAVAILABILITY\nOur methods for gene, protein, and species identification, and extraction of protein-protein are available as part of the BioCreative Meta Services (BCMS), see http://bcms.bioinfo.cnio.es/." }, { "pmid": "19369495", "title": "Bayesian inference of protein-protein interactions from biological literature.", "abstract": "MOTIVATION\nProtein-protein interaction (PPI) extraction from published biological articles has attracted much attention because of the importance of protein interactions in biological processes. Despite significant progress, mining PPIs from literatures still rely heavily on time- and resource-consuming manual annotations.\n\n\nRESULTS\nIn this study, we developed a novel methodology based on Bayesian networks (BNs) for extracting PPI triplets (a PPI triplet consists of two protein names and the corresponding interaction word) from unstructured text. The method achieved an overall accuracy of 87% on a cross-validation test using manually annotated dataset. 
We also showed, through extracting PPI triplets from a large number of PubMed abstracts, that our method was able to complement human annotations to extract large number of new PPIs from literature.\n\n\nAVAILABILITY\nPrograms/scripts we developed/used in the study are available at http://stat.fsu.edu/~jinfeng/datasets/Bio-SI-programs-Bayesian-chowdhary-zhang-liu.zip.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "17291334", "title": "BioInfer: a corpus for information extraction in the biomedical domain.", "abstract": "BACKGROUND\nLately, there has been a great interest in the application of information extraction methods to the biomedical domain, in particular, to the extraction of relationships of genes, proteins, and RNA from scientific publications. The development and evaluation of such methods requires annotated domain corpora.\n\n\nRESULTS\nWe present BioInfer (Bio Information Extraction Resource), a new public resource providing an annotated corpus of biomedical English. We describe an annotation scheme capturing named entities and their relationships along with a dependency analysis of sentence syntax. We further present ontologies defining the types of entities and relationships annotated in the corpus. Currently, the corpus contains 1100 sentences from abstracts of biomedical research articles annotated for relationships, named entities, as well as syntactic dependencies. Supporting software is provided with the corpus. The corpus is unique in the domain in combining these annotation types for a single set of sentences, and in the level of detail of the relationship annotation.\n\n\nCONCLUSION\nWe introduce a corpus targeted at protein, gene, and RNA relationships which serves as a resource for the development of information extraction systems and their components such as parsers and domain analyzers. The corpus will be maintained and further developed with a current version being available at http://www.it.utu.fi/BioInfer." }, { "pmid": "17142812", "title": "RelEx--relation extraction using dependency parse trees.", "abstract": "MOTIVATION\nThe discovery of regulatory pathways, signal cascades, metabolic processes or disease models requires knowledge on individual relations like e.g. physical or regulatory interactions between genes and proteins. Most interactions mentioned in the free text of biomedical publications are not yet contained in structured databases.\n\n\nRESULTS\nWe developed RelEx, an approach for relation extraction from free text. It is based on natural language preprocessing producing dependency parse trees and applying a small number of simple rules to these trees. We applied RelEx on a comprehensive set of one million MEDLINE abstracts dealing with gene and protein relations and extracted approximately 150,000 relations with an estimated performance of both 80% precision and 80% recall.\n\n\nAVAILABILITY\nThe used natural language preprocessing tools are free for use for academic research. Test sets and relation term lists are available from our website (http://www.bio.ifi.lmu.de/publications/RelEx/)." }, { "pmid": "11928487", "title": "Mining MEDLINE: abstracts, sentences, or phrases?", "abstract": "A growing body of works address automated mining of biochemical knowledge from digital repositories of scientific literature, such as MEDLINE. Some of these works use abstracts as the unit of text from which to extract facts. Others use sentences for this purpose, while still others use phrases. 
Here we compare abstracts, sentences, and phrases in MEDLINE using the standard information retrieval performance measures of recall, precision, and effectiveness, for the task of mining interactions among biochemical terms based on term co-occurrence. Results show statistically significant differences that can impact the choice of text unit." }, { "pmid": "16046493", "title": "Extraction of regulatory gene/protein networks from Medline.", "abstract": "MOTIVATION\nWe have previously developed a rule-based approach for extracting information on the regulation of gene expression in yeast. The biomedical literature, however, contains information on several other equally important regulatory mechanisms, in particular phosphorylation, which we now expanded for our rule-based system also to extract.\n\n\nRESULTS\nThis paper presents new results for extraction of relational information from biomedical text. We have improved our system, STRING-IE, to capture both new types of linguistic constructs as well as new types of biological information [i.e. (de-)phosphorylation]. The precision remains stable with a slight increase in recall. From almost one million PubMed abstracts related to four model organisms, we manage to extract regulatory networks and binary phosphorylations comprising 3,319 relation chunks. The accuracy is 83-90% and 86-95% for gene expression and (de-)phosphorylation relations, respectively. To achieve this, we made use of an organism-specific resource of gene/protein names considerably larger than those used in most other biology related information extraction approaches. These names were included in the lexicon when retraining the part-of-speech (POS) tagger on the GENIA corpus. For the domain in question, an accuracy of 96.4% was attained on POS tags. It should be noted that the rules were developed for yeast and successfully applied to both abstracts and full-text articles related to other organisms with comparable accuracy.\n\n\nAVAILABILITY\nThe revised GENIA corpus, the POS tagger, the extraction rules and the full sets of extracted relations are available from http://www.bork.embl.de/Docu/STRING-IE" }, { "pmid": "15814565", "title": "Literature mining and database annotation of protein phosphorylation using a rule-based system.", "abstract": "MOTIVATION\nA large volume of experimental data on protein phosphorylation is buried in the fast-growing PubMed literature. While of great value, such information is limited in databases owing to the laborious process of literature-based curation. Computational literature mining holds promise to facilitate database curation.\n\n\nRESULTS\nA rule-based system, RLIMS-P (Rule-based LIterature Mining System for Protein Phosphorylation), was used to extract protein phosphorylation information from MEDLINE abstracts. An annotation-tagged literature corpus developed at PIR was used to evaluate the system for finding phosphorylation papers and extracting phosphorylation objects (kinases, substrates and sites) from abstracts. RLIMS-P achieved a precision and recall of 91.4 and 96.4% for paper retrieval, and of 97.9 and 88.0% for extraction of substrates and sites. Coupling the high recall for paper retrieval and high precision for information extraction, RLIMS-P facilitates literature mining and database annotation of protein phosphorylation." } ]
Journal of NeuroEngineering and Rehabilitation
20840786
PMC2946295
10.1186/1743-0003-7-45
Autonomous indoor wayfinding for individuals with cognitive impairments
BackgroundA challenge to individuals with cognitive impairments in wayfinding is how to remain oriented, recall routines, and travel in unfamiliar areas in a way that relies on limited cognitive capacity. While people without disabilities often use maps or written directions as navigation tools or for remaining oriented, this cognitively-impaired population is very sensitive to issues of abstraction (e.g. icons on maps or signage) and presents the designer with a challenge to tailor navigation information specific to each user and context.MethodsThis paper describes an approach to providing distributed cognition support of travel guidance for persons with cognitive disabilities. A solution is proposed based on passive near-field RFID tags and scanning PDAs. A prototype is built and tested in field experiments with real subjects. The unique strength of the system is the ability to provide unique-to-the-user prompts that are triggered by context. The key to the approach is to spread the context awareness across the system, with the context being flagged by the RFID tags and the appropriate response being evoked by displaying the appropriate path guidance image indexed by the intersection of the specific end-user and the context ID embedded in the RFID tag.ResultsWe found that passive RFIDs generally served as good context for triggering navigation prompts, although effectiveness varied across individuals. The results of controlled experiments provided further evidence regarding the applicability of the proposed autonomous indoor wayfinding method.ConclusionsOur findings suggest that the ability to adapt indoor wayfinding devices for appropriate timing of directions and standing orientation will be particularly important.
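The abstract above describes prompts indexed by the intersection of a specific end-user and the context ID embedded in an RFID tag. A minimal sketch of what such an index might look like, with hypothetical user IDs, tag IDs, and image paths (the prototype's actual data model is not specified here):

```python
from typing import Dict, Optional, Tuple

# Hypothetical prompt table: (user_id, rfid_tag_id) -> path-guidance image to display.
PROMPTS: Dict[Tuple[str, str], str] = {
    ("user_01", "tag_lobby"): "images/user01/turn_left_at_lobby.png",
    ("user_01", "tag_elevator"): "images/user01/press_floor_3.png",
    ("user_02", "tag_lobby"): "images/user02/go_straight_past_desk.png",
}

def prompt_for_scan(user_id: str, tag_id: str) -> Optional[str]:
    """Return the user-specific guidance image for a scanned tag, if any."""
    return PROMPTS.get((user_id, tag_id))

if __name__ == "__main__":
    # Simulate the PDA scanning a passive tag in the lobby.
    print(prompt_for_scan("user_01", "tag_lobby"))
```

A scan event then reduces to a single lookup, which is one way of spreading context awareness across the tags rather than computing it on the device.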
2. Related WorkThe growing recognition that assistive technology can be developed for cognitive as well as physical impairments has led several research groups to prototype wayfinding systems. A resource-adaptive mobile navigation system [7,8] was studied for both indoor and outdoor environments, although it was not specifically designed for people with disabilities. Cognitive models were built to study human wayfinding behaviors in unfamiliar buildings, and salient features of route directions were identified for outdoor pedestrians [9,10]. Kray [11] proposed situational context for navigational assistance.Baus et al. [12] developed auditorily perceptible landmarks for visually impaired and elderly people in pedestrian navigation and conducted a field experiment on a university campus. Goodman, Brewster, and Gray [13] showed that an electronic, photo-based pedestrian navigation aid built around landmarks was more effective for older people than an analogous paper version. Opportunity Knocks (OK) [14] and other similar work from the University of Washington [15] provided text-based routing directions for users with GPS-enabled cellular phones. The system can alert the user to errors when a deviation from the route is detected. The Opportunity Knocks experiment was based on a single outdoor user. Furthermore, Opportunity Knocks used a hierarchical Dynamic Bayesian Network model in the inference engine to continuously extract important positions from GPS data streams in outdoor navigation.Sohlberg, Fickas, Hung, and Fortier [5] at the University of Oregon compared four prompt modes for route finding by cognitively impaired community travelers. In their field study, the auditory modality was found to be better than the text or image modalities for outdoor use of PDAs, because images and text on the PDA screen are difficult to read in sunlight, especially for subjects with poor vision. A "Wizard of Oz" approach, rather than a context-aware implementation, was used for sending navigation information. Researchers at the University of Colorado have implemented a system for delivering just-in-time transit directions to a PDA carried by bus users, using GPS and wireless technology installed on the buses [16].The Assisted Cognition Project at the University of Washington has developed artificial intelligence models that learn a user's behavior in order to assist users who need help [15]. The system was tested with success in a metropolitan area. Later, a user-interface feasibility study [17] was conducted by the same team, who found that photos are the preferred media type, compared with speech and text, for giving directions to cognitively impaired persons navigating indoors. They also used a "Wizard of Oz" approach, in which a shadow team decided when to send photos.
[ "12724689", "17522993", "16795582" ]
[ { "pmid": "12724689", "title": "Cognitive vision, its disorders and differential diagnosis in adults and children: knowing where and what things are.", "abstract": "As ophthalmologists we need a basic model of how the higher visual system works and its common disorders. This presentation aims to provide an outline of such a model. Our ability to survey a visual scene, locate and recognise an object of interest, move towards it and pick it up, recruits a number of complex cognitive higher visual pathways, all of which are susceptible to damage. The visual map in the mind needs to be co-located with reality and is primarily plotted by the posterior parietal lobes, which interact with the frontal lobes to choose the object of interest. Neck and extraocular muscle proprioceptors are probably responsible for maintaining this co-location when the head and eyes move with respect to the body, and synchronous input from both eyes is needed for correct localisation of moving targets. Recognition of what is being looked at is brought about by comparing the visual input with the \"image libraries\" in the temporal lobes. Once an object is recognised, its choice is mediated by parietal and frontal lobe tissue. The parietal lobes determine the visual coordinates and plan the visually guided movement of the limbs to pick it up, and the frontal lobes participate in making the choice. The connection between the occipital lobes and the parietal lobes is known as the dorsal stream, and the connection between the occipital lobes and the temporal lobes, comprises the ventral stream. Both disorders of neck and extraocular muscle proprioception, and disorders leading to asynchronous input along the two optic nerves are \"peripheral\" causes of impaired visually guided movement, while bilateral damage to the parietal lobes can result in central impairment of visually guided movement, or optic ataxia. Damage to the temporal lobes can result in impaired recognition, problems with route finding and poor visual memory. Spontaneous activity in the temporal lobes can result in formed visual hallucinations, in patients with impaired central visual function, particularly the elderly. Deficits in cognitive visual function can occur in different combinations in both children and adults depending on the nature and distribution of the underlying brain damage. In young children the potential for recovery can lead to significant improvement in parietal lobe function with time. Patients with these disorders need an understanding of their deficits and a structured positive approach to their rehabilitation." }, { "pmid": "17522993", "title": "A comparison of four prompt modes for route finding for community travellers with severe cognitive impairments.", "abstract": "PRIMARY OBJECTIVE\nNavigational skills are fundamental to community travel and, hence, personal independence and are often disrupted in people with cognitive impairments. Navigation devices are being developed that can support community navigation by delivering directional information. Selecting an effective mode to provide route-prompts is a critical design issue. 
This study evaluated the differential effects on pedestrian route finding using different modes of prompting delivered via a handheld electronic device for travellers with severe cognitive impairments.\n\n\nRESEARCH DESIGN\nA within-subject comparison study was used to evaluate potential differences in route navigation performance when travellers received directions using four different prompt modes: (1) aerial map image, (2) point of view map image, (3) text based instructions/no image and (4) audio direction/no image.\n\n\nMETHODS AND PROCEDURES\nTwenty travellers with severe cognitive impairments due to acquired brain injury walked four equivalent routes using four different prompting modes delivered via a wrist-worn navigation device. Navigation scores were computed that captured accuracy and confidence during navigation.\n\n\nMAIN OUTCOME\nResults of the repeated measures Analysis of Variance suggested that participants performed best when given prompts via speech-based audio directions. The majority of the participants also preferred this prompting mode. Findings are interpreted in the context of cognitive resource allocation theory." }, { "pmid": "16795582", "title": "Multiple-probe technique: a variation on the multiple baseline.", "abstract": "Multiple-baseline and probe procedures are combined into a \"multiple-probe\" technique. The technique is designed to provide a thorough analysis of the relationship between an independent variable and the acquisition of a successive-approximation or chain sequence. It provides answers to the following questions: (1) What is the initial level of performance on each step in the training sequence? (2) What happens if sequential opportunities to perform each next step in the sequence are provided before training on that step? (3) What happens when training is applied? (4) What happens to the performance of remaining steps in the sequence as criterion is reached in the course of training each prior step? The technique features: (1) one initial probe of each step in the training sequence, (2) an additional probe of every step after criterion is reached on any training step, and (3) a series of \"true\" baseline sessions conducted just before the introduction of the independent variable to each training step. Intermittent probes also provide an alternative to continuous baseline measurement, when such measurement during extended multiple baselines (1) may prove reactive, (2) is impractical, and/or (3) a strong a priori assumption of stability can be made." } ]
BMC Medical Informatics and Decision Making
20946670
PMC2972239
10.1186/1472-6947-10-59
Data-driven approach for creating synthetic electronic medical records
BackgroundNew algorithms for disease outbreak detection are being developed to take advantage of full electronic medical records (EMRs) that contain a wealth of patient information. However, due to privacy concerns, even anonymized EMRs cannot be shared among researchers, resulting in great difficulty in comparing the effectiveness of these algorithms. To bridge the gap between novel bio-surveillance algorithms operating on full EMRs and the lack of non-identifiable EMR data, a method for generating complete and synthetic EMRs was developed.MethodsThis paper describes a novel methodology for generating complete synthetic EMRs both for an outbreak illness of interest (tularemia) and for background records. The method developed has three major steps: 1) synthetic patient identity and basic information generation; 2) identification of care patterns that the synthetic patients would receive based on the information present in real EMR data for similar health problems; 3) adaptation of these care patterns to the synthetic patient population.ResultsWe generated EMRs, including visit records, clinical activity, laboratory orders/results and radiology orders/results for 203 synthetic tularemia outbreak patients. Validation of the records by a medical expert revealed problems in 19% of the records; these were subsequently corrected. We also generated background EMRs for over 3000 patients in the 4-11 yr age group. Validation of those records by a medical expert revealed problems in fewer than 3% of these background patient EMRs and the errors were subsequently rectified.ConclusionsA data-driven method was developed for generating fully synthetic EMRs. The method is general and can be applied to any data set that has similar data elements (such as laboratory and radiology orders and results, clinical activity, prescription orders). The pilot synthetic outbreak records were for tularemia but our approach may be adapted to other infectious diseases. The pilot synthetic background records were in the 4-11 year old age group. The adaptations that must be made to the algorithms to produce synthetic background EMRs for other age groups are indicated.
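The abstract above lists three major steps: generating a synthetic identity, selecting a care pattern observed in real EMRs for a similar health problem, and adapting that pattern to the synthetic patient. A schematic sketch of that pipeline, with hypothetical function names and toy record fields (the paper's actual schema and care-pattern models are not reproduced here):

```python
import random

def generate_identity(patient_id):
    """Step 1: synthetic identity and basic demographics (illustrative fields only)."""
    return {
        "id": patient_id,
        "age": random.randint(4, 11),          # e.g. the background 4-11 yr age group
        "sex": random.choice(["F", "M"]),
    }

def select_care_pattern(patient, care_patterns):
    """Step 2: pick a care pattern observed in real EMRs for a similar health problem."""
    return random.choice(care_patterns)

def adapt_care_pattern(patient, pattern):
    """Step 3: adapt the pattern (visits, labs, radiology) to the synthetic patient."""
    return [{"patient_id": patient["id"], **event} for event in pattern]

if __name__ == "__main__":
    toy_patterns = [
        [{"day": 0, "event": "ED visit"}, {"day": 0, "event": "chest x-ray order"}],
        [{"day": 0, "event": "clinic visit"}, {"day": 1, "event": "CBC result"}],
    ]
    patient = generate_identity("synthetic-0001")
    record = adapt_care_pattern(patient, select_care_pattern(patient, toy_patterns))
    print(patient)
    print(record)
```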
I.b Related WorkThere has been considerable research utilizing the information contained in EMRs. Recent results include applications in bio-surveillance [9-13], screening for reportable diseases [14-16], and pharmacovigilance [17,18]. However, there has not been a corresponding increase in the availability of non-identifiable complete EMRs for the development or testing of algorithms that operate on them.There have been various efforts at de-identification of medical record information, including the Realistic But Not Real (RBNR) project [6] and recent work on de-identification of clinical notes [19]. Algorithms to de-identify visit records are an active area of research (see, e.g., [20] and references therein), but these algorithms in general do not operate on the entire EMR. As mentioned before, there is some danger in the widespread dissemination of even anonymized or de-identified data. The main risk is that an individual patient or a particular facility or practitioner could potentially be uniquely identified from the conjunction of different types of information within the EMR, such as age group, gender, ethnic group, race, time of medical encounter, or a rare or unusual-for-age diagnosis. Because the medical information itself is not altered by anonymization and date-shuffling algorithms, it is possible that some information in the EMR could be used to identify a patient, a practitioner, or a facility. Proprietary as well as privacy concerns, then, motivate the need to produce synthetic patients that display enough variation in demographic and sanitized medical record information to make identification of the patient and the facility ambiguous.In a recent effort to model the progression of chronic disease in an individual [21], the Archimedes project models the entire clinical timeline of a fictitious patient, including test and radiology results. Its innovation is in realistically and verifiably modeling the progression of a particular chronic disease, the disease manifestation in test and radiology results, and the outcome of clinical interventions in the individual. It does not model the incidence of the chronic diseases in the population and does not, at this time, model the clinical timelines of patients with infectious disease or injury.There are many different models for the spread of infectious disease through a population. From a simple lognormal curve [22-24], through the injection of artificial disease outbreaks into simulated time series [25], to the recent population model for the Models of Infectious Disease Agent Study (MIDAS) [26], these models vary in complexity, scope, and purpose. The focus in Project Mimic [25] is on the generation of realistic but totally synthetic time series of case counts. The MIDAS project [26] models the incidence and spread of disease in a completely synthetic population in a geographic area. While it does generate time series of healthcare encounters with verifiably accurate timelines for disease spread, it does not produce EMRs for the synthetic population.The injection of artificial disease outbreaks into real time-series data, producing so-called hybrid data, is commonly used to test the effectiveness of disease outbreak detection or clustering algorithms. Typically, a series of outbreaks or events is calculated according to an epidemic model, and case counts are added to the real data to simulate the additional cases that would occur as a result of the outbreak. Algorithms can then be tested with and without this outbreak data to gauge their sensitivity (for a general reference see, e.g., [27]). However, injected data are generally limited to time series or, in some cases, to injected chief complaint or syndromic data (see, e.g., [28] and references therein) and do not include the complete EMR.The addition of EMRs to the available data sources has fueled recent advances in surveillance methods [29-31]. The ability to engineer outbreaks into synthetic data would allow the development of other surveillance algorithms that can improve the sensitivity and specificity of outbreak detection.The rest of this paper is organized as follows: Section II presents an overview of the method developed, a description of the data set used, and details of the major steps of the methodology: Synthetic Patient Identities and basic information generation, Identification of Closest Patient Care Models and Descriptors, and Adaptation of Patient Care Models. Section III presents the results, and we finish with discussion and conclusions in Section IV.
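As described above, hybrid data sets are built by computing an outbreak signal from an epidemic model and adding the resulting case counts to real baseline counts. A minimal sketch using a lognormal-shaped outbreak curve; the curve parameters, outbreak size, and Poisson baseline are all made up for illustration:

```python
import numpy as np

def lognormal_curve(n_days, total_cases, mu=2.0, sigma=0.5):
    """Spread `total_cases` over `n_days` following a lognormal epidemic curve."""
    t = np.arange(1, n_days + 1, dtype=float)
    pdf = np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (t * sigma * np.sqrt(2 * np.pi))
    return np.round(total_cases * pdf / pdf.sum()).astype(int)

def inject_outbreak(baseline_counts, start_day, outbreak_counts):
    """Add the simulated outbreak cases to a copy of the real daily counts."""
    hybrid = np.array(baseline_counts, dtype=int)
    end = min(len(hybrid), start_day + len(outbreak_counts))
    hybrid[start_day:end] += outbreak_counts[: end - start_day]
    return hybrid

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.poisson(lam=20, size=60)        # stand-in for real daily ED counts
    outbreak = lognormal_curve(n_days=14, total_cases=80)
    print(inject_outbreak(baseline, start_day=30, outbreak_counts=outbreak))
```

A detection algorithm can then be run on both the baseline and hybrid series to estimate its sensitivity to an outbreak of that size and shape.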
[ "18560122", "19717809", "15360805", "18436898", "18612462", "18952940", "16231953", "19261932", "19567795", "19383138", "19331728", "11386933", "3892222", "12364373", "4041193", "10219948", "16178999" ]
[ { "pmid": "19717809", "title": "Bayesian information fusion networks for biosurveillance applications.", "abstract": "This study introduces new information fusion algorithms to enhance disease surveillance systems with Bayesian decision support capabilities. A detection system was built and tested using chief complaints from emergency department visits, International Classification of Diseases Revision 9 (ICD-9) codes from records of outpatient visits to civilian and military facilities, and influenza surveillance data from health departments in the National Capital Region (NCR). Data anomalies were identified and distribution of time offsets between events in the multiple data streams were established. The Bayesian Network was built to fuse data from multiple sources and identify influenza-like epidemiologically relevant events. Results showed increased specificity compared with the alerts generated by temporal anomaly detection algorithms currently deployed by NCR health departments. Further research should be done to investigate correlations between data sources for efficient fusion of the collected data." }, { "pmid": "15360805", "title": "System-wide surveillance for clinical encounters by patients previously identified with MRSA and VRE.", "abstract": "Methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococci (VRE) have emerged as major infection control problems worldwide. Patients previously infected or colonized with MRSA or VRE need to be identified and often isolated as soon as they visit a health care facility. Infection control personnel usually are not aware when these patients enter their facilities. We developed a system-wide surveillance system to alert infection control personnel when patients with previous MRSA or VRE cultures from LDS Hospital have subsequent clinical encounters at any inpatient or outpatient facility at Intermountain Health Care (IHC). This paper describes this system and includes the results from an initial study on the potential epidemiological benefits provided to help improve patient care. The study found that patients with previous MRSA and VRE had subsequent encounters at 62 different IHC facilities up to 304 miles away from 1 day to over 5 years later. In addition, the new surveillance system was able to alert infection control personnel when ever these patients visited any IHC inpatient or outpatient facility." }, { "pmid": "18436898", "title": "Rapid identification of hospitalized patients at high risk for MRSA carriage.", "abstract": "Patients who are asymptomatic carriers of methicillin-resistant Staphylococcus aureus (MRSA) are major reservoirs for transmission of MRSA to other patients. Medical personnel are usually not aware when these high-risk patients are hospitalized. We developed and tested an enterprise-wide electronic surveillance system to identify patients at high risk for MRSA carriage at hospital admission and during hospitalization. During a two-month study, nasal swabs from 153 high-risk patients were tested for MRSA carriage using polymerase chain reaction (PCR) of which 31 (20.3%) were positive compared to 12 of 293 (4.1%, p < 0.001) low-risk patients. The mean interval from admission to availability of PCR test results was 19.2 hours. Computer alerts for patients at high-risk of MRSA carriage were found to be reliable, timely and offer the potential to replace testing all patients. 
Previous MRSA colonization was the best predictor but other risk factors were needed to increase the sensitivity of the algorithm." }, { "pmid": "18612462", "title": "Automated identification of acute hepatitis B using electronic medical record data to facilitate public health surveillance.", "abstract": "BACKGROUND\nAutomatic identification of notifiable diseases from electronic medical records can potentially improve the timeliness and completeness of public health surveillance. We describe the development and implementation of an algorithm for prospective surveillance of patients with acute hepatitis B using electronic medical record data.\n\n\nMETHODS\nInitial algorithms were created by adapting Centers for Disease Control and Prevention diagnostic criteria for acute hepatitis B into electronic terms. The algorithms were tested by applying them to ambulatory electronic medical record data spanning 1990 to May 2006. A physician reviewer classified each case identified as acute or chronic infection. Additional criteria were added to algorithms in serial fashion to improve accuracy. The best algorithm was validated by applying it to prospective electronic medical record data from June 2006 through April 2008. Completeness of case capture was assessed by comparison with state health department records.\n\n\nFINDINGS\nA final algorithm including a positive hepatitis B specific test, elevated transaminases and bilirubin, absence of prior positive hepatitis B tests, and absence of an ICD9 code for chronic hepatitis B identified 112/113 patients with acute hepatitis B (sensitivity 97.4%, 95% confidence interval 94-100%; specificity 93.8%, 95% confidence interval 87-100%). Application of this algorithm to prospective electronic medical record data identified 8 cases without false positives. These included 4 patients that had not been reported to the health department. There were no known cases of acute hepatitis B missed by the algorithm.\n\n\nCONCLUSIONS\nAn algorithm using codified electronic medical record data can reliably detect acute hepatitis B. The completeness of public health surveillance may be improved by automatically identifying notifiable diseases from electronic medical record data." }, { "pmid": "18952940", "title": "Electronic Support for Public Health: validated case finding and reporting for notifiable diseases using electronic medical data.", "abstract": "Health care providers are legally obliged to report cases of specified diseases to public health authorities, but existing manual, provider-initiated reporting systems generally result in incomplete, error-prone, and tardy information flow. Automated laboratory-based reports are more likely accurate and timely, but lack clinical information and treatment details. Here, we describe the Electronic Support for Public Health (ESP) application, a robust, automated, secure, portable public health detection and messaging system for cases of notifiable diseases. The ESP application applies disease specific logic to any complete source of electronic medical data in a fully automated process, and supports an optional case management workflow system for case notification control. All relevant clinical, laboratory and demographic details are securely transferred to the local health authority as an HL7 message. The ESP application has operated continuously in production mode since January 2007, applying rigorously validated case identification logic to ambulatory EMR data from more than 600,000 patients. 
Source code for this highly interoperable application is freely available under an approved open-source license at http://esphealth.org." }, { "pmid": "16231953", "title": "Perspectives on the use of data mining in pharmaco-vigilance.", "abstract": "In the last 5 years, regulatory agencies and drug monitoring centres have been developing computerised data-mining methods to better identify reporting relationships in spontaneous reporting databases that could signal possible adverse drug reactions. At present, there are no guidelines or standards for the use of these methods in routine pharmaco-vigilance. In 2003, a group of statisticians, pharmaco-epidemiologists and pharmaco-vigilance professionals from the pharmaceutical industry and the US FDA formed the Pharmaceutical Research and Manufacturers of America-FDA Collaborative Working Group on Safety Evaluation Tools to review best practices for the use of these methods.In this paper, we provide an overview of: (i) the statistical and operational attributes of several currently used methods and their strengths and limitations; (ii) information about the characteristics of various postmarketing safety databases with which these tools can be deployed; (iii) analytical considerations for using safety data-mining methods and interpreting the results; and (iv) points to consider in integration of safety data mining with traditional pharmaco-vigilance methods. Perspectives from both the FDA and the industry are provided. Data mining is a potentially useful adjunct to traditional pharmaco-vigilance methods. The results of data mining should be viewed as hypothesis generating and should be evaluated in the context of other relevant data. The availability of a publicly accessible global safety database, which is updated on a frequent basis, would further enhance detection and communication about safety issues." }, { "pmid": "19261932", "title": "Active computerized pharmacovigilance using natural language processing, statistics, and electronic health records: a feasibility study.", "abstract": "OBJECTIVE It is vital to detect the full safety profile of a drug throughout its market life. Current pharmacovigilance systems still have substantial limitations, however. The objective of our work is to demonstrate the feasibility of using natural language processing (NLP), the comprehensive Electronic Health Record (EHR), and association statistics for pharmacovigilance purposes. DESIGN Narrative discharge summaries were collected from the Clinical Information System at New York Presbyterian Hospital (NYPH). MedLEE, an NLP system, was applied to the collection to identify medication events and entities which could be potential adverse drug events (ADEs). Co-occurrence statistics with adjusted volume tests were used to detect associations between the two types of entities, to calculate the strengths of the associations, and to determine their cutoff thresholds. Seven drugs/drug classes (ibuprofen, morphine, warfarin, bupropion, paroxetine, rosiglitazone, ACE inhibitors) with known ADEs were selected to evaluate the system. RESULTS One hundred thirty-two potential ADEs were found to be associated with the 7 drugs. Overall recall and precision were 0.75 and 0.31 for known ADEs respectively. Importantly, qualitative evaluation using historic roll back design suggested that novel ADEs could be detected using our system. 
CONCLUSIONS This study provides a framework for the development of active, high-throughput and prospective systems which could potentially unveil drug safety profiles throughout their entire market life. Our results demonstrate that the framework is feasible although there are some challenging issues. To the best of our knowledge, this is the first study using comprehensive unstructured data from the EHR for pharmacovigilance." }, { "pmid": "19567795", "title": "A globally optimal k-anonymity method for the de-identification of health data.", "abstract": "BACKGROUND\nExplicit patient consent requirements in privacy laws can have a negative impact on health research, leading to selection bias and reduced recruitment. Often legislative requirements to obtain consent are waived if the information collected or disclosed is de-identified.\n\n\nOBJECTIVE\nThe authors developed and empirically evaluated a new globally optimal de-identification algorithm that satisfies the k-anonymity criterion and that is suitable for health datasets.\n\n\nDESIGN\nAuthors compared OLA (Optimal Lattice Anonymization) empirically to three existing k-anonymity algorithms, Datafly, Samarati, and Incognito, on six public, hospital, and registry datasets for different values of k and suppression limits. Measurement Three information loss metrics were used for the comparison: precision, discernability metric, and non-uniform entropy. Each algorithm's performance speed was also evaluated.\n\n\nRESULTS\nThe Datafly and Samarati algorithms had higher information loss than OLA and Incognito; OLA was consistently faster than Incognito in finding the globally optimal de-identification solution.\n\n\nCONCLUSIONS\nFor the de-identification of health datasets, OLA is an improvement on existing k-anonymity algorithms in terms of information loss and performance." }, { "pmid": "19383138", "title": "Syndromic surveillance: STL for modeling, visualizing, and monitoring disease counts.", "abstract": "BACKGROUND\nPublic health surveillance is the monitoring of data to detect and quantify unusual health events. Monitoring pre-diagnostic data, such as emergency department (ED) patient chief complaints, enables rapid detection of disease outbreaks. There are many sources of variation in such data; statistical methods need to accurately model them as a basis for timely and accurate disease outbreak methods.\n\n\nMETHODS\nOur new methods for modeling daily chief complaint counts are based on a seasonal-trend decomposition procedure based on loess (STL) and were developed using data from the 76 EDs of the Indiana surveillance program from 2004 to 2008. Square root counts are decomposed into inter-annual, yearly-seasonal, day-of-the-week, and random-error components. Using this decomposition method, we develop a new synoptic-scale (days to weeks) outbreak detection method and carry out a simulation study to compare detection performance to four well-known methods for nine outbreak scenarios.\n\n\nRESULT\nThe components of the STL decomposition reveal insights into the variability of the Indiana ED data. Day-of-the-week components tend to peak Sunday or Monday, fall steadily to a minimum Thursday or Friday, and then rise to the peak. Yearly-seasonal components show seasonal influenza, some with bimodal peaks.Some inter-annual components increase slightly due to increasing patient populations. A new outbreak detection method based on the decomposition modeling performs well with 90 days or more of data. 
Control limits were set empirically so that all methods had a specificity of 97%. STL had the largest sensitivity in all nine outbreak scenarios. The STL method also exhibited a well-behaved false positive rate when run on the data with no outbreaks injected.\n\n\nCONCLUSION\nThe STL decomposition method for chief complaint counts leads to a rapid and accurate detection method for disease outbreaks, and requires only 90 days of historical data to be put into operation. The visualization tools that accompany the decomposition and outbreak methods provide much insight into patterns in the data, which is useful for surveillance operations." }, { "pmid": "19331728", "title": "Enhancing time-series detection algorithms for automated biosurveillance.", "abstract": "BioSense is a US national system that uses data from health information systems for automated disease surveillance. We studied 4 time-series algorithm modifications designed to improve sensitivity for detecting artificially added data. To test these modified algorithms, we used reports of daily syndrome visits from 308 Department of Defense (DoD) facilities and 340 hospital emergency departments (EDs). At a constant alert rate of 1%, sensitivity was improved for both datasets by using a minimum standard deviation (SD) of 1.0, a 14-28 day baseline duration for calculating mean and SD, and an adjustment for total clinic visits as a surrogate denominator. Stratifying baseline days into weekdays versus weekends to account for day-of-week effects increased sensitivity for the DoD data but not for the ED data. These enhanced methods may increase sensitivity without increasing the alert rate and may improve the ability to detect outbreaks by using automated surveillance system data." }, { "pmid": "11386933", "title": "Tularemia as a biological weapon: medical and public health management.", "abstract": "OBJECTIVE\nThe Working Group on Civilian Biodefense has developed consensus-based recommendations for measures to be taken by medical and public health professionals if tularemia is used as a biological weapon against a civilian population.\n\n\nPARTICIPANTS\nThe working group included 25 representatives from academic medical centers, civilian and military governmental agencies, and other public health and emergency management institutions and agencies.\n\n\nEVIDENCE\nMEDLINE databases were searched from January 1966 to October 2000, using the Medical Subject Headings Francisella tularensis, Pasteurella tularensis, biological weapon, biological terrorism, bioterrorism, biological warfare, and biowarfare. Review of these references led to identification of relevant materials published prior to 1966. In addition, participants identified other references and sources.\n\n\nCONSENSUS PROCESS\nThree formal drafts of the statement that synthesized information obtained in the formal evidence-gathering process were reviewed by members of the working group. Consensus was achieved on the final draft.\n\n\nCONCLUSIONS\nA weapon using airborne tularemia would likely result 3 to 5 days later in an outbreak of acute, undifferentiated febrile illness with incipient pneumonia, pleuritis, and hilar lymphadenopathy. Specific epidemiological, clinical, and microbiological findings should lead to early suspicion of intentional tularemia in an alert health system; laboratory confirmation of agent could be delayed. Without treatment, the clinical course could progress to respiratory failure, shock, and death. 
Prompt treatment with streptomycin, gentamicin, doxycycline, or ciprofloxacin is recommended. Prophylactic use of doxycycline or ciprofloxacin may be useful in the early postexposure period." }, { "pmid": "3892222", "title": "Tularemia: a 30-year experience with 88 cases.", "abstract": "Drawing upon our experience with 88 cases and a survey of the English literature, we reviewed the clinical, pathophysiological, and epidemiological aspects of tularemia. Tularemia can be thought of as two syndromes--ulceroglandular and typhoidal. This dichotomy simplifies earlier nomenclature and emphasizes the obscure typhoidal presentation. Clinical manifestations suggest that the two syndromes reflect differences in host response. In ulceroglandular tularemia the pathogen appears to be well contained by a vigorous inflammatory reaction. Pneumonia is less common and the patient's prognosis is good. In typhoidal disease there are few localizing signs; pneumonia is more common; and the mortality without therapy is much higher, suggesting that the host response is somehow deficient. Francisella tularensis is an extremely virulent pathogen capable of initiating infection with as few as 10 organisms inoculated subcutaneously. During an incubation period of 3 to 6 days the host responds first with polymorphonuclear leukocytes and then macrophages. Granulocytes are unable to kill the pathogen without opsonizing antibody leaving cellular immunity to play the major role in host defense. One to 2 weeks after infection, a vigorous T-lymphocyte response can be detected in vitro with lymphocyte blast transformation assays and in vivo with an intradermal skin test, which, unfortunately, is not commercially available. Humoral immunity, often used as a diagnostic modality, appears 2 to 3 weeks into the illness. Cellular immunity is long-lasting, accounting for the common reoccurrence of localized disease upon repeated exposures to the pathogen. There are no symptoms that distinguish the ulceroglandular from the typhoidal syndrome. A pulse-temperature dissociation is seen in less than half of the patients. The location of ulcers and enlarged lymph nodes give a clue to the likely vector since lesions located on the upper extremities are more commonly associated with mammalian, and those of the head and neck and lower extremities with arthropod, vectors. Pharyngitis, pericarditis, and pneumonia can complicate both syndromes, although the latter is much more common in typhoidal disease. Hepatitis, usually of a mild degree, is common and occasionally erythema nodosum is seen. No specific laboratory tests characterize tularemia, and cultures of the pathogen are often difficult to obtain because of the special growth requirements of Francisella tularesis and the inability of many clinical laboratories to handle the dangerous pathogen.(ABSTRACT TRUNCATED AT 400 WORDS)" }, { "pmid": "12364373", "title": "Tularemia.", "abstract": "Francisella tularensis is the etiological agent of tularemia, a serious and occasionally fatal disease of humans and animals. In humans, ulceroglandular tularemia is the most common form of the disease and is usually a consequence of a bite from an arthropod vector which has previously fed on an infected animal. The pneumonic form of the disease occurs rarely but is the likely form of the disease should this bacterium be used as a bioterrorism agent. The diagnosis of disease is not straightforward. F. 
tularensis is difficult to culture, and the handling of this bacterium poses a significant risk of infection to laboratory personnel. Enzyme-linked immunosorbent assay- and PCR-based methods have been used to detect bacteria in clinical samples, but these methods have not been adequately evaluated for the diagnosis of pneumonic tularemia. Little is known about the virulence mechanisms of F. tularensis, though there is a large body of evidence indicating that it is an intracellular pathogen, surviving mainly in macrophages. An unlicensed live attenuated vaccine is available, which does appear to offer protection against ulceroglandular and pneumonic tularemia. Although an improved vaccine against tularemia is highly desirable, attempts to devise such a vaccine have been limited by the inability to construct defined allelic replacement mutants and by the lack of information on the mechanisms of virulence of F. tularensis. In the absence of a licensed vaccine, aminoglycoside antibiotics play a key role in the prevention and treatment of tularemia." }, { "pmid": "4041193", "title": "Tularemia: emergency department presentation of an infrequently recognized disease.", "abstract": "Tularemia is an uncommon, highly communicable disease occurring with seasonal regularity in endemic parts of the United States. The varied signs and symptoms may confound the unwary physician. Two cases are reported illustrating the ulceroglandular and ingestion forms of the disease. Septic (typhoidal), oculoglandular, pleuropulmonary, glandular, and oropharyngeal forms also are described. Knowledge of the epidemiology and a high index of suspicion should lead the examining physician to ask revealing questions. The diagnosis is presumed upon clinical grounds and confirmed by serological testing. According to published reports delayed diagnosis can result in an overall mortality rate of 7% of cases; however, early diagnosis will lead to uncomplicated recovery in most cases." }, { "pmid": "10219948", "title": "Questions on validity of International Classification of Diseases-coded diagnoses.", "abstract": "International Classification of Diseases (ICD) codes are used for indexing medical diagnoses for various purposes and in various contexts. According to the literature and our personal experience, the validity of the coded information is unsatisfactory in general, however the 'correctness' is purpose and environment dependent. For detecting potential error sources, this paper gives a general framework of the coding process. The key elements of this framework are: (1) the formulation of the established diagnoses in medical language; (2) the induction from diagnoses to diseases; (3) indexing the diseases to ICD categories; (4) labelling of the coded entries (e.g. principal disease, complications, etc.). Each step is a potential source of errors. The most typical types of error are: (1) overlooking of diagnoses; (2) incorrect or skipped induction; (3) indexing errors; (4) violation of ICD rules and external regulations. The main reasons of the errors are the physician's errors in the primary documentation, the insufficient knowledge of the encoders (different steps of the coding process require different kind of knowledge), the internal inconsistency of the ICD, and some psychological factors. Computer systems can facilitate the coding process, but attention has to be paid to the entire coding process, not only to the indexing phase." 
}, { "pmid": "16178999", "title": "Measuring diagnoses: ICD code accuracy.", "abstract": "OBJECTIVE\nTo examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process.\n\n\nDATA SOURCES/STUDY SETTING\nThe use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications.\n\n\nSTUDY DESIGN/METHODS\nWe summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy.\n\n\nPRINCIPLE FINDINGS\nMain error sources along the \"patient trajectory\" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the \"paper trail\" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding.\n\n\nCONCLUSIONS\nBy clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways." } ]
Scientific Reports
29915325
PMC6006367
10.1038/s41598-018-27160-3
A distributed algorithm to maintain and repair the trail networks of arboreal ants
We study how the arboreal turtle ant (Cephalotes goniodontus) solves a fundamental computing problem: maintaining a trail network and finding alternative paths to route around broken links in the network. Turtle ants form a routing backbone of foraging trails linking several nests and temporary food sources. This species travels only in the trees, so their foraging trails are constrained to lie on a natural graph formed by overlapping branches and vines in the tangled canopy. Links between branches, however, can be ephemeral, easily destroyed by wind, rain, or animal movements. Here we report a biologically feasible distributed algorithm, parameterized using field data, that can plausibly describe how turtle ants maintain the routing backbone and find alternative paths to circumvent broken links in the backbone. We validate the ability of this probabilistic algorithm to circumvent simulated breaks in synthetic and real-world networks, and we derive an analytic explanation for why certain features are crucial to improve the algorithm’s success. Our proposed algorithm uses fewer computational resources than common distributed graph search algorithms, and thus may be useful in other domains, such as for swarm computing or for coordinating molecular robots.
Related workTo our knowledge, this is the first computational analysis of trail networks of an arboreal ant species, whose movements are constrained to a discrete graph structure rather than continuous space. Compared to previous work, we attempt to solve the network repair problem using different constraints and fewer assumptions about the computational abilities of individual ants.Species-specific modeling of ant behaviorPrevious studies of ant trail networks have largely examined species that forage on a continuous 2D surface [14], including Pharaoh's ants [15], Argentine ants [4,16,17], leaf-cutter ants [18], army ants [19], and red wood ants [20]. These species can define nodes and edges at any location on the surface, and form trails using techniques such as random amplification [19,21,22] or using their own bodies to form living bridges [23]. Experimental work on these species sometimes uses discrete mazes or Y-junctions to impose a graph structure; however, these species have evolved to create graph structures in continuous space, not to solve problems on a fixed graph structure, as turtle ants have evolved to do. Turtle ant movements are entirely constrained by the vegetation in which they travel. They cannot form trails with nodes and edges at arbitrary locations; instead, they can use only the nodes and edges that are available to them.Further, to provide the simplest possible algorithm that is biologically realistic, we assume that turtle ants use only one type of pheromone. There are more than 14,000 species of ants, and they differ in their use of chemical cues. For example, Monomorium pharaonis uses several different trail pheromones [24–28]. There is, however, no evidence that turtle ants lay more than one type of trail pheromone.Ant colony optimizationModels of ant colony optimization (ACO), first proposed in 1991, loosely mimic ant behavior to solve combinatorial optimization problems, such as the traveling salesman problem [29–31] and the shortest path problem [32]. In ACO, individual ants each use a heuristic to construct candidate solutions, and then use pheromone to lead other ants towards higher quality solutions. Recent advances improve ACO through techniques such as local search [33], cunning ants [34], and iterated ants [35]. ACO, however, gives simulated ants more computational power than turtle ants actually possess; in particular, ACO-simulated ants have sufficient memory to remember, retrace, and reinforce entire paths or solutions, and they can choose how much pheromone to lay in retrospect, based on the optimality of the solution.Prior work inspired by ants provides solutions to graph search problems [36,37], such as the Hamiltonian path problem [38] or the Ants Nearby Treasure Search (ANTS) problem. The latter investigates how simulated ants collaboratively search the integer plane for a treasure source. These models afford the simulated ants various computational abilities, including searching exhaustively around a fixed radius [39], sending constant-sized messages [40], or laying pheromone to mark an edge as explored [41]. Our work involves a similar model of distributed computation, but our problem requires not only that the ants find an alternative path to a nest (a “treasure”), but also that all the ants commit to using the same alternative path. This requires a fundamentally different strategy from that required for just one ant to find a treasure.Graph algorithms and reinforced random walksCommon algorithms used to solve the general network search and repair problem, including Dijkstra's algorithm, breadth-first search, depth-first search, and A* search [11], all require substantial communication or memory resources. For example, agents must maintain a large routing table, store and query a list of all previously visited nodes, or pre-compute a topology-dependent heuristic to compute node-to-node distances [42]. These abilities are all unlikely for turtle ants.Distributed graph algorithms, in which nodes are treated as fixed agents capable of passing messages to neighbors, have also been proposed to find shortest paths in a graph [43,44], to construct minimum spanning trees [45,46], and to approximate various NP-hard problems [47,48]. In contrast, our work uses a more restrictive model of distributed computation, where agents communicate only through pheromone, which does not have a specific targeted recipient.Finally, the limited assumptions about the memory of turtle ants invite comparison to a Markov process. Edge-reinforced random walks [49], first introduced by Diaconis and others [50,51], proceed as follows: an agent, or random walker, traverses a graph by choosing amongst adjacent edges with a probability proportional to their edge weight; then the agent augments the weight (or pheromone) of each edge chosen. Our model expands edge-reinforced random walks in two ways: first, we allow many agents to walk the graph concurrently, and second, we decrease edge weights over time. Our work is similar to previous models of the gliding behavior of myxobacteria [52] that consider synchronous, node (rather than edge)-reinforced random walks with decay. These models seek to determine when bacteria aggregate on adjacent points or instead walk freely on the grid. By contrast, here we ask whether the random walkers converge to a single consensus path between two points on the grid that are not necessarily adjacent.
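A minimal sketch of the process described above: several agents walk a graph concurrently, each choosing its next edge with probability proportional to the edge's weight and reinforcing the edge it crosses, while all edge weights decay each round. The toy graph, parameters, and update rule are illustrative assumptions, not the paper's calibrated algorithm:

```python
import random
from collections import defaultdict

# Toy undirected graph as an adjacency list.
GRAPH = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def edge(u, v):
    """Canonical key for an undirected edge."""
    return tuple(sorted((u, v)))

def walk(graph, n_agents=10, n_steps=50, deposit=1.0, decay=0.02, seed=0):
    rng = random.Random(seed)
    weights = defaultdict(lambda: 1.0)           # start with uniform edge weights
    positions = ["A"] * n_agents                 # all agents start at node A
    for _ in range(n_steps):
        for i, node in enumerate(positions):
            neighbors = graph[node]
            w = [weights[edge(node, nbr)] for nbr in neighbors]
            nxt = rng.choices(neighbors, weights=w, k=1)[0]
            weights[edge(node, nxt)] += deposit  # reinforce the chosen edge
            positions[i] = nxt
        for e in list(weights):                  # evaporation: weights decay each round
            weights[e] *= (1.0 - decay)
    return dict(weights)

if __name__ == "__main__":
    for e, w in sorted(walk(GRAPH).items(), key=lambda kv: -kv[1]):
        print(e, round(w, 2))
```

Whether such concurrent, decaying reinforcement drives all agents onto one consensus path is the question the paper studies analytically and in simulation.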
[ "20093467", "21288958", "23209749", "29166159", "28009263", "23967129", "14999281", "15351134", "16306981", "18778716", "21490001", "23599264", "22829756", "27815944", "20463735", "24531967", "26151903", "22927811", "25386724" ]
[ { "pmid": "20093467", "title": "Rules for biologically inspired adaptive network design.", "abstract": "Transport networks are ubiquitous in both social and biological systems. Robust network performance involves a complex trade-off involving cost, transport efficiency, and fault tolerance. Biological networks have been honed by many cycles of evolutionary selection pressure and are likely to yield reasonable solutions to such combinatorial optimization problems. Furthermore, they develop without centralized control and may represent a readily scalable solution for growing networks in general. We show that the slime mold Physarum polycephalum forms networks with comparable efficiency, fault tolerance, and cost to those of real-world infrastructure networks--in this case, the Tokyo rail system. The core mechanisms needed for adaptive network formation can be captured in a biologically inspired mathematical model that may be useful to guide network construction in other domains." }, { "pmid": "21288958", "title": "Structure and formation of ant transportation networks.", "abstract": "Many biological systems use extensive networks for the transport of resources and information. Ants are no exception. How do biological systems achieve efficient transportation networks in the absence of centralized control and without global knowledge of the environment? Here, we address this question by studying the formation and properties of inter-nest transportation networks in the Argentine ant (Linepithema humile). We find that the formation of inter-nest networks depends on the number of ants involved in the construction process. When the number of ants is sufficient and networks do form, they tend to have short total length but a low level of robustness. These networks are topologically similar to either minimum spanning trees or Steiner networks. The process of network formation involves an initial construction of multiple links followed by a pruning process that reduces the number of trails. Our study thus illuminates the conditions under and the process by which minimal biological transport networks can be constructed." }, { "pmid": "23209749", "title": "The dynamics of foraging trails in the tropical arboreal ant Cephalotes goniodontus.", "abstract": "The foraging behavior of the arboreal turtle ant, Cephalotes goniodontus, was studied in the tropical dry forest of western Mexico. The ants collected mostly plant-derived food, including nectar and fluids collected from the edges of wounds on leaves, as well as caterpillar frass and lichen. Foraging trails are on small pieces of ephemeral vegetation, and persist in exactly the same place for 4-8 days, indicating that food sources may be used until they are depleted. The species is polydomous, occupying many nests which are abandoned cavities or ends of broken branches in dead wood. Foraging trails extend from trees with nests to trees with food sources. Observations of marked individuals show that each trail is travelled by a distinct group of foragers. This makes the entire foraging circuit more resilient if a path becomes impassable, since foraging in one trail can continue while a different group of ants forms a new trail. The colony's trails move around the forest from month to month; from one year to the next, only one colony out of five was found in the same location. There is continual searching in the vicinity of trails: ants recruited to bait within 3 bifurcations of a main foraging trail within 4 hours. 
When bait was offered on one trail, to which ants recruited, foraging activity increased on a different trail, with no bait, connected to the same nest. This suggests that the allocation of foragers to different trails is regulated by interactions at the nest." }, { "pmid": "29166159", "title": "Local Regulation of Trail Networks of the Arboreal Turtle Ant, Cephalotes goniodontus.", "abstract": "This study examines how an arboreal ant colony maintains, extends, and repairs its network of foraging trails and nests, built on a network of vegetation. Nodes are junctions where a branch forks off from another or where a branch of one plant touching another provides a new edge on which ants could travel. The ants' choice of edge at a node appears to be reinforced by trail pheromone. Ongoing pruning of the network tends to eliminate cycles and minimize the number of nodes and thus decision points, but not the distance traveled. At junctions, trails tend to stay on the same plant. In combination with the long internode lengths of the branches of vines in the tropical dry forest, this facilitates travel to food sources at the canopy edge. Exploration, when ants leave the trail on an edge that is not being used, makes both search and repair possible. The fewer the junctions between a location and the main trail, the more likely the ants are to arrive there. Ruptured trails are rapidly repaired with a new path, apparently using breadth-first search. The regulation of the network promotes its resilience and continuity." }, { "pmid": "28009263", "title": "The Evolution of the Algorithms for Collective Behavior.", "abstract": "Collective behavior is the outcome of a network of local interactions. Here, I consider collective behavior as the result of algorithms that have evolved to operate in response to a particular environment and physiological context. I discuss how algorithms are shaped by the costs of operating under the constraints that the environment imposes, the extent to which the environment is stable, and the distribution, in space and time, of resources. I suggest that a focus on the dynamics of the environment may provide new hypotheses for elucidating the algorithms that produce the collective behavior of cellular systems." }, { "pmid": "23967129", "title": "Fast and flexible: argentine ants recruit from nearby trails.", "abstract": "Argentine ants (Linepithema humile) live in groups of nests connected by trails to each other and to stable food sources. In a field study, we investigated whether some ants recruit directly from established, persistent trails to food sources, thus accelerating food collection. Our results indicate that Argentine ants recruit nestmates to food directly from persistent trails, and that the exponential increase in the arrival rate of ants at baits is faster than would be possible if recruited ants traveled from distant nests. Once ants find a new food source, they walk back and forth between the bait and sometimes share food by trophallaxis with nestmates on the trail. Recruiting ants from nearby persistent trails creates a dynamic circuit, like those found in other distributed systems, which facilitates a quick response to changes in available resources." }, { "pmid": "14999281", "title": "Optimal traffic organization in ants under crowded conditions.", "abstract": "Efficient transportation, a hot topic in nonlinear science, is essential for modern societies and the survival of biological species. 
Biological evolution has generated a rich variety of successful solutions, which have inspired engineers to design optimized artificial systems. Foraging ants, for example, form attractive trails that support the exploitation of initially unknown food sources in almost the minimum possible time. However, can this strategy cope with bottleneck situations, when interactions cause delays that reduce the overall flow? Here, we present an experimental study of ants confronted with two alternative routes. We find that pheromone-based attraction generates one trail at low densities, whereas at a high level of crowding, another trail is established before traffic volume is affected, which guarantees that an optimal rate of food return is maintained. This bifurcation phenomenon is explained by a nonlinear modelling approach. Surprisingly, the underlying mechanism is based on inhibitory interactions. It points to capacity reserves, a limitation of the density-induced speed reduction, and a sufficient pheromone concentration for reliable trail perception. The balancing mechanism between cohesive and dispersive forces appears to be generic in natural, urban and transportation systems." }, { "pmid": "15351134", "title": "Coupled computational simulation and empirical research into the foraging system of Pharaoh's ant (Monomorium pharaonis).", "abstract": "The Pharaoh's ant (Monomorium pharaonis), a significant pest in many human environments, is phenomenally successful at locating and exploiting available food resources. Several pheromones are utilized in the self-organized foraging of this ant but most aspects of the overall system are poorly characterised. Agent-based modelling of ants as individual complex X-machines facilitates study of the mechanisms underlying the emergence of trails and aids understanding of the process. Conducting simultaneous modelling, and simulation, alongside empirical biological studies is shown to drive the research by formulating hypotheses that must be tested before the model can be verified and extended. Integration of newly characterised behavioural processes into the overall model will enable testing of general theories giving insight into division of labour within insect societies. This study aims to establish a new paradigm in computational modelling applicable to all types of multi-agent biological systems, from tissues to animal societies, as a powerful tool to accelerate basic research." }, { "pmid": "16306981", "title": "Insect communication: 'no entry' signal in ant foraging.", "abstract": "Forager ants lay attractive trail pheromones to guide nestmates to food, but the effectiveness of foraging networks might be improved if pheromones could also be used to repel foragers from unrewarding routes. Here we present empirical evidence for such a negative trail pheromone, deployed by Pharaoh's ants (Monomorium pharaonis) as a 'no entry' signal to mark an unrewarding foraging path. This finding constitutes another example of the sophisticated control mechanisms used in self-organized ant colonies." }, { "pmid": "18778716", "title": "An agent-based model to investigate the roles of attractive and repellent pheromones in ant decision making during foraging.", "abstract": "Pharaoh's ants organise their foraging system using three types of trail pheromone. All previous foraging models based on specific ant foraging systems have assumed that only a single attractive pheromone is used. 
Here we present an agent-based model based on trail choice at a trail bifurcation within the foraging trail network of a Pharaoh's ant colony which includes both attractive (positive) and repellent (negative) trail pheromones. Experiments have previously shown that Pharaoh's ants use both types of pheromone. We investigate how the repellent pheromone affects trail choice and foraging success in our simulated foraging system. We find that both the repellent and attractive pheromones have a role in trail choice, and that the repellent pheromone prevents random fluctuations which could otherwise lead to a positive feedback loop causing the colony to concentrate its foraging on the unrewarding trail. An emergent feature of the model is a high level of variability in the level of repellent pheromone on the unrewarding branch. This is caused by the repellent pheromone exerting negative feedback on its own deposition. We also investigate the dynamic situation where the location of the food is changed after foraging trails are established. We find that the repellent pheromone has a key role in enabling the colony to refocus the foraging effort to the new location. Our results show that having a repellent pheromone is adaptive, as it increases the robustness and flexibility of the colony's overall foraging response." }, { "pmid": "21490001", "title": "The effect of individual variation on the structure and function of interaction networks in harvester ants.", "abstract": "Social insects exhibit coordinated behaviour without central control. Local interactions among individuals determine their behaviour and regulate the activity of the colony. Harvester ants are recruited for outside work, using networks of brief antennal contacts, in the nest chamber closest to the nest exit: the entrance chamber. Here, we combine empirical observations, image analysis and computer simulations to investigate the structure and function of the interaction network in the entrance chamber. Ant interactions were distributed heterogeneously in the chamber, with an interaction hot-spot at the entrance leading further into the nest. The distribution of the total interactions per ant followed a right-skewed distribution, indicating the presence of highly connected individuals. Numbers of ant encounters observed positively correlated with the duration of observation. Individuals varied in interaction frequency, even after accounting for the duration of observation. An ant's interaction frequency was explained by its path shape and location within the entrance chamber. Computer simulations demonstrate that variation among individuals in connectivity accelerates information flow to an extent equivalent to an increase in the total number of interactions. Individual variation in connectivity, arising from variation among ants in location and spatial behaviour, creates interaction centres, which may expedite information flow." }, { "pmid": "23599264", "title": "Tracking individuals shows spatial fidelity is a key regulator of ant social organization.", "abstract": "Ants live in organized societies with a marked division of labor among workers, but little is known about how this division of labor is generated. We used a tracking system to continuously monitor individually tagged workers in six colonies of the ant Camponotus fellah over 41 days. Network analyses of more than 9 million interactions revealed three distinct groups that differ in behavioral repertoires. 
Each group represents a functional behavioral unit with workers moving from one group to the next as they age. The rate of interactions was much higher within groups than between groups. The precise information on spatial and temporal distribution of all individuals allowed us to calculate the expected rates of within- and between-group interactions. These values suggest that the network of interaction within colonies is primarily mediated by age-induced changes in the spatial location of workers." }, { "pmid": "22829756", "title": "Individual rules for trail pattern formation in Argentine ants (Linepithema humile).", "abstract": "We studied the formation of trail patterns by Argentine ants exploring an empty arena. Using a novel imaging and analysis technique we estimated pheromone concentrations at all spatial positions in the experimental arena and at different times. Then we derived the response function of individual ants to pheromone concentrations by looking at correlations between concentrations and changes in speed or direction of the ants. Ants were found to turn in response to local pheromone concentrations, while their speed was largely unaffected by these concentrations. Ants did not integrate pheromone concentrations over time, with the concentration of pheromone in a 1 cm radius in front of the ant determining the turning angle. The response to pheromone was found to follow a Weber's Law, such that the difference between quantities of pheromone on the two sides of the ant divided by their sum determines the magnitude of the turning angle. This proportional response is in apparent contradiction with the well-established non-linear choice function used in the literature to model the results of binary bridge experiments in ant colonies (Deneubourg et al. 1990). However, agent based simulations implementing the Weber's Law response function led to the formation of trails and reproduced results reported in the literature. We show analytically that a sigmoidal response, analogous to that in the classical Deneubourg model for collective decision making, can be derived from the individual Weber-type response to pheromone concentrations that we have established in our experiments when directional noise around the preferred direction of movement of the ants is assumed." }, { "pmid": "27815944", "title": "A locally-blazed ant trail achieves efficient collective navigation despite limited information.", "abstract": "Any organism faces sensory and cognitive limitations which may result in maladaptive decisions. Such limitations are prominent in the context of groups where the relevant information at the individual level may not coincide with collective requirements. Here, we study the navigational decisions exhibited by Paratrechina longicornis ants as they cooperatively transport a large food item. These decisions hinge on the perception of individuals which often restricts them from providing the group with reliable directional information. We find that, to achieve efficient navigation despite partial and even misleading information, these ants employ a locally-blazed trail. This trail significantly deviates from the classical notion of an ant trail: First, instead of systematically marking the full path, ants mark short segments originating at the load. Second, the carrying team constantly loses the guiding trail. 
We experimentally and theoretically show that the locally-blazed trail optimally and robustly exploits useful knowledge while avoiding the pitfalls of misleading information." }, { "pmid": "20463735", "title": "Molecular robots guided by prescriptive landscapes.", "abstract": "Traditional robots rely for their function on computing, to store internal representations of their goals and environment and to coordinate sensing and any actuation of components required in response. Moving robotics to the single-molecule level is possible in principle, but requires facing the limited ability of individual molecules to store complex information and programs. One strategy to overcome this problem is to use systems that can obtain complex behaviour from the interaction of simple robots with their environment. A first step in this direction was the development of DNA walkers, which have developed from being non-autonomous to being capable of directed but brief motion on one-dimensional tracks. Here we demonstrate that previously developed random walkers-so-called molecular spiders that comprise a streptavidin molecule as an inert 'body' and three deoxyribozymes as catalytic 'legs'-show elementary robotic behaviour when interacting with a precisely defined environment. Single-molecule microscopy observations confirm that such walkers achieve directional movement by sensing and modifying tracks of substrate molecules laid out on a two-dimensional DNA origami landscape. When using appropriately designed DNA origami, the molecular spiders autonomously carry out sequences of actions such as 'start', 'follow', 'turn' and 'stop'. We anticipate that this strategy will result in more complex robotic behaviour at the molecular level if additional control mechanisms are incorporated. One example might be interactions between multiple molecular robots leading to collective behaviour; another might be the ability to read and transform secondary cues on the DNA origami landscape as a means of implementing Turing-universal algorithmic behaviour." }, { "pmid": "24531967", "title": "Designing collective behavior in a termite-inspired robot construction team.", "abstract": "Complex systems are characterized by many independent components whose low-level actions produce collective high-level results. Predicting high-level results given low-level rules is a key open challenge; the inverse problem, finding low-level rules that give specific outcomes, is in general still less understood. We present a multi-agent construction system inspired by mound-building termites, solving such an inverse problem. A user specifies a desired structure, and the system automatically generates low-level rules for independent climbing robots that guarantee production of that structure. Robots use only local sensing and coordinate their activity via the shared environment. We demonstrate the approach via a physical realization with three autonomous climbing robots limited to onboard sensing. This work advances the aim of engineering complex systems that achieve specific human-designed goals." }, { "pmid": "26151903", "title": "Mechanisms of Vessel Pruning and Regression.", "abstract": "The field of angiogenesis research has primarily focused on the mechanisms of sprouting angiogenesis. Yet vascular networks formed by vessel sprouting subsequently undergo extensive vascular remodeling to form a functional and mature vasculature. This \"trimming\" includes distinct processes of vascular pruning, the regression of selected vascular branches. 
In some situations complete vascular networks may undergo physiological regression. Vessel regression is an understudied yet emerging field of research. This review summarizes the state-of-the-art of vessel pruning and regression with a focus on the cellular processes and the molecular regulators of vessel maintenance and regression." }, { "pmid": "22927811", "title": "The regulation of ant colony foraging activity without spatial information.", "abstract": "Many dynamical networks, such as the ones that produce the collective behavior of social insects, operate without any central control, instead arising from local interactions among individuals. A well-studied example is the formation of recruitment trails in ant colonies, but many ant species do not use pheromone trails. We present a model of the regulation of foraging by harvester ant (Pogonomyrmex barbatus) colonies. This species forages for scattered seeds that one ant can retrieve on its own, so there is no need for spatial information such as pheromone trails that lead ants to specific locations. Previous work shows that colony foraging activity, the rate at which ants go out to search individually for seeds, is regulated in response to current food availability throughout the colony's foraging area. Ants use the rate of brief antennal contacts inside the nest between foragers returning with food and outgoing foragers available to leave the nest on the next foraging trip. Here we present a feedback-based algorithm that captures the main features of data from field experiments in which the rate of returning foragers was manipulated. The algorithm draws on our finding that the distribution of intervals between successive ants returning to the nest is a Poisson process. We fitted the parameter that estimates the effect of each returning forager on the rate at which outgoing foragers leave the nest. We found that correlations between observed rates of returning foragers and simulated rates of outgoing foragers, using our model, were similar to those in the data. Our simple stochastic model shows how the regulation of ant colony foraging can operate without spatial information, describing a process at the level of individual ants that predicts the overall foraging activity of the colony." }, { "pmid": "25386724", "title": "Trail pheromones: an integrative view of their role in social insect colony organization.", "abstract": "Trail pheromones do more than simply guide social insect workers from point A to point B. Recent research has revealed additional ways in which they help to regulate colony foraging, often via positive and negative feedback processes that influence the exploitation of the different resources that a colony has knowledge of. Trail pheromones are often complementary or synergistic with other information sources, such as individual memory. Pheromone trails can be composed of two or more pheromones with different functions, and information may be embedded in the trail network geometry. These findings indicate remarkable sophistication in how trail pheromones are used to regulate colony-level behavior, and how trail pheromones are used and deployed at the individual level." } ]
GigaScience
29718199
PMC6007562
10.1093/gigascience/giy016
Boutiques: a flexible framework to integrate command-line applications in computing platforms
AbstractWe present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science.
Related workSeveral frameworks have been developed to describe and integrate applications in various types of platforms. Boutiques focuses on (1) fully automatic integration of applications, including deployment on heterogeneous computing resources through containers, (2) comprehensive input validation through a strict JSON schema, and (3) flexible application description through a rich JSON schema.Common Workflow LanguageThe Common Workflow Language (CWL13) is the work most closely related to Boutiques as it provides a formal way to describe containerized applications. In particular, CWL's Command Line Tool Description overlaps with the Boutiques descriptor. This section highlights the main differences between CWL and Boutiques, based on version 1.0 of the CWL Command Line Tool Description14. According to GitHub, CWL started 6 months before Boutiques (September 2014 vs. May 2015).Conceptual differencesThe following differences are conceptual in the sense that they may not be easily addressed in CWL or Boutiques without deeply refactoring the frameworks.First, CWL has a workflow language whereas Boutiques does not. In Boutiques, workflows are integrated as any other applications, except that they may submit other invocations to enable workflow parallelism. This fundamental difference has consequences on the complexity of CWL application descriptions and on the possibility to reuse existing workflows in Boutiques. The adoption of ontologies in CWL may also be another consequence (see below).CWL imposes a strict command-line format, while Boutiques is more flexible. CWL specifies command lines using an array containing an executable and a set of arguments, whereas Boutiques only uses a string template. Boutiques’ template approach may create issues in some cases, but it also allows developers to add simple operations to an application without having to write a specific wrapper. For instance, a Boutiques command line may easily include input decompression using the tar command in addition to the main application command. Importantly, Boutiques’ template system allows supporting configuration files.CWL uses ontologies, while Boutiques does not. Ontologies allow for richer definitions but they also have an overhead. The main consequences are the following: CWL uses a specific framework for validation, called SALAD (Semantic Annotations for Linked Avro Data), whereas Boutiques uses plain JSON schema. The main goal of SALAD is to allow “working with complex data structures and document formats, such as schemas, object references, and namespaces.” Boutiques only relies on the basic types required to describe and validate a command line syntactically. While the use of SALAD certainly allows for higher-level validation and may simplify the composition and validation of complex workflows, it also introduces a substantial overhead in the specification, and platforms have to use the validator provided by CWL. On the contrary, a regular JSON validator can be used in Boutiques.CWL has a rich set of types, whereas Boutiques only has simple types. This may again be seen as a feature or as an overhead depending on the context. 
Boutiques tries to limit the complexity of the specification to facilitate its support by platforms where applications will be integrated.Major differencesThe following differences are major but they may be addressed by the CWL and Boutiques developers as they do not undermine the application description model: CWL applications have to write in a specific set of directories called “designated output directory,” “designated temporary directory,” and “system temporary directory.” Applications are informed of the location of such directories through environment variables. Having to write in specific directories is problematic because applications have to be modified to enable that. In Boutiques, the path of output files is defined using a dedicated property.CWL types are richer, not only semantically but also syntactically. For instance, files have properties for basename, dirname, location, path, checksum, etc.Boutiques supports various types of containers (Docker, Singularity, rootfs), while CWL supports only Docker. Both tools have rich requirements: for instance, they may include RAM, disk usage, and walltime estimate. CWL has hints, i.e., recommendations that only lead to warnings when not respected.In Boutiques, dependencies can be defined among inputs, e.g., to specify that an input may be used only when a particular flag is activated. This is a very useful feature to improve validation, in particular for applications with a lot of options.In Boutiques, named groups of inputs can be defined, which improves the presentation of long parameter lists for the user and enables the definition of more constraints within groups (e.g., mutually exclusive inputs).BIDS appsBIDS apps [5] specify a framework for neuroimaging applications to process datasets complying with the Brain Imaging Data Structure (BIDS). They share common goals with Boutiques, in particular, reusability across platforms through containerization. Conceptually, however, BIDS apps and Boutiques are different since BIDS apps intend to standardize application interfaces, while Boutiques intends to describe them as flexibly as possible. BIDS apps have a specific set of inputs and outputs, for instance, the input dataset, that have to be present in a specific order on the command line for the application to be valid. The specification adopted by BIDS apps simplifies the integration of applications in platforms as they all comply with the same interface. However, it is also limited to the subset of neuroimaging applications that process BIDS datasets and it does not formally describe application-specific inputs. All in all, BIDS apps and Boutiques complement each other. BIDS apps provide a practical way to integrate neuroimaging applications, while Boutiques offers a formal description of their specific parameters. Boutiques descriptors can be generated from BIDS apps using the bosh importer.Other frameworksSeveral other frameworks have been created to facilitate the integration of command-line applications in platforms. In neuroinformatics, many platforms define a formal interface to embed command-line applications. Among them, the Common Toolkit15 interoperates with several platforms such as 3D Slicer [23], NiftyView [24], GIMIAS [25], MedInria [25], MeVisLab [26], and MITK workbench [27]. 
The framework, however, remains tightly bound to the Common Toolkit's C++ implementation, which limits its adoption, e.g., in web platforms.In the distributed computing community, systems were also proposed to facilitate the embedding of applications in platforms. The Grid Execution Management for Legacy Code Architecture [28] was used to wrap applications in grid computing systems. Interestingly, it has been used to embed workflow engines in the SHIWA platform [29], in a way similar to, but distinct from, the approach proposed by Boutiques.The recent advent of software containers requires a new generation of application description frameworks that are independent of any programming language and that expose a rich set of properties to describe command lines, as intended by Boutiques.
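As a rough illustration of the string-template and plain-JSON-validation approach contrasted with CWL above, the sketch below substitutes invocation values into a simplified descriptor. It is a minimal, assumed example only: the field names used here ("command-line", "value-key", "command-line-flag") and the substitution logic are simplifications for illustration and are not guaranteed to match the actual Boutiques schema or tooling.

```python
# Hedged sketch only: a simplified descriptor and a substitution routine
# illustrating the string-template idea discussed above. Field names are
# assumptions for this example, not the authoritative Boutiques schema.

descriptor = {
    "name": "example-tool",
    # The whole command line is a single template string, so extra steps
    # (here: decompressing the input with tar) can be added without a wrapper.
    "command-line": "tar xzf [ARCHIVE] && mytool [THRESHOLD] [ARCHIVE]",
    "inputs": [
        {"id": "archive", "value-key": "[ARCHIVE]", "optional": False},
        {"id": "threshold", "value-key": "[THRESHOLD]",
         "command-line-flag": "--threshold", "optional": True},
    ],
}

def build_command(desc, invocation):
    """Substitute invocation values into the command-line template."""
    cmd = desc["command-line"]
    for inp in desc["inputs"]:
        if inp["id"] in invocation:
            value = str(invocation[inp["id"]])
            flag = inp.get("command-line-flag")
            cmd = cmd.replace(inp["value-key"], f"{flag} {value}" if flag else value)
        elif inp.get("optional"):
            cmd = cmd.replace(inp["value-key"], "")  # drop unused optional inputs
        else:
            raise ValueError(f"missing required input: {inp['id']}")
    return " ".join(cmd.split())  # tidy whitespace left by empty substitutions

print(build_command(descriptor, {"archive": "data.tar.gz", "threshold": 0.5}))
# -> tar xzf data.tar.gz && mytool --threshold 0.5 data.tar.gz
```

In a real platform the invocation would additionally be checked against the descriptor with an ordinary JSON-schema validator, which is the validation route the text above contrasts with CWL's SALAD-based approach.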
[ "22144613", "27940837", "28494014", "28278228", "15461798", "26364860", "23014715", "15324759", "23588509" ]
[ { "pmid": "22144613", "title": "Reproducible research in computational science.", "abstract": "Computational science has led to exciting new developments, but the nature of the work has exposed limitations in our ability to evaluate published findings. Reproducibility has the potential to serve as a minimum standard for judging scientific claims when full independent replication of a study is not possible." }, { "pmid": "28494014", "title": "Singularity: Scientific containers for mobility of compute.", "abstract": "Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science." }, { "pmid": "28278228", "title": "BIDS apps: Improving ease of use, accessibility, and reproducibility of neuroimaging data analysis methods.", "abstract": "The rate of progress in human neurosciences is limited by the inability to easily apply a wide range of analysis methods to the plethora of different datasets acquired in labs around the world. In this work, we introduce a framework for creating, testing, versioning and archiving portable applications for analyzing neuroimaging data organized and described in compliance with the Brain Imaging Data Structure (BIDS). The portability of these applications (BIDS Apps) is achieved by using container technologies that encapsulate all binary and other dependencies in one convenient package. BIDS Apps run on all three major operating systems with no need for complex setup and configuration and thanks to the comprehensiveness of the BIDS standard they require little manual user input. Previous containerized data processing solutions were limited to single user environments and not compatible with most multi-tenant High Performance Computing systems. BIDS Apps overcome this limitation by taking advantage of the Singularity container technology. As a proof of concept, this work is accompanied by 22 ready to use BIDS Apps, packaging a diverse set of commonly used neuroimaging algorithms." }, { "pmid": "15461798", "title": "Bioconductor: open software development for computational biology and bioinformatics.", "abstract": "The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples." 
}, { "pmid": "26364860", "title": "The MNI data-sharing and processing ecosystem.", "abstract": "Neuroimaging has been facing a data deluge characterized by the exponential growth of both raw and processed data. As a result, mining the massive quantities of digital data collected in these studies offers unprecedented opportunities and has become paramount for today's research. As the neuroimaging community enters the world of \"Big Data\", there has been a concerted push for enhanced sharing initiatives, whether within a multisite study, across studies, or federated and shared publicly. This article will focus on the database and processing ecosystem developed at the Montreal Neurological Institute (MNI) to support multicenter data acquisition both nationally and internationally, create database repositories, facilitate data-sharing initiatives, and leverage existing software toolkits for large-scale data processing." }, { "pmid": "23014715", "title": "A virtual imaging platform for multi-modality medical image simulation.", "abstract": "This paper presents the Virtual Imaging Platform (VIP), a platform accessible at http://vip.creatis.insa-lyon.fr to facilitate the sharing of object models and medical image simulators, and to provide access to distributed computing and storage resources. A complete overview is presented, describing the ontologies designed to share models in a common repository, the workflow template used to integrate simulators, and the tools and strategies used to exploit computing and storage resources. Simulation results obtained in four image modalities and with different models show that VIP is versatile and robust enough to support large simulations. The platform currently has 200 registered users who consumed 33 years of CPU time in 2011." }, { "pmid": "15324759", "title": "ODIN-object-oriented development interface for NMR.", "abstract": "A cross-platform development environment for nuclear magnetic resonance (NMR) experiments is presented. It allows rapid prototyping of new pulse sequences and provides a common programming interface for different system types. With this object-oriented interface implemented in C++, the programmer is capable of writing applications to control an experiment that can be executed on different measurement devices, even from different manufacturers, without the need to modify the source code. Due to the clear design of the software, new pulse sequences can be created, tested, and executed within a short time. To post-process the acquired data, an interface to well-known numerical libraries is part of the framework. This allows a transparent integration of the data processing instructions into the measurement module. The software focuses mainly on NMR imaging, but can also be used with limitations for spectroscopic experiments. To demonstrate the capabilities of the framework, results of the same experiment, carried out on two NMR imaging systems from different manufacturers are shown and compared with the results of a simulation." }, { "pmid": "23588509", "title": "The Medical Imaging Interaction Toolkit: challenges and advances : 10 years of open-source development.", "abstract": "PURPOSE\nThe Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. 
The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control.\n\n\nMETHODS\nMITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams.\n\n\nRESULTS\nMITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process.\n\n\nCONCLUSIONS\nMITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today's and tomorrow's clinically motivated research." } ]
Scientific Reports
29921933
PMC6008477
10.1038/s41598-018-27683-9
Multiclass Classifier based Cardiovascular Condition Detection Using Smartphone Mechanocardiography
Cardiac translational and rotational vibrations induced by left ventricular motions are measurable using joint seismocardiography (SCG) and gyrocardiography (GCG) techniques. Multi-dimensional non-invasive monitoring of the heart reveals relative information of cardiac wall motion. A single inertial measurement unit (IMU) allows capturing cardiac vibrations in sufficient detail and enables us to perform patient screening for various heart conditions. We envision smartphone mechanocardiography (MCG) for use in e-health or telemonitoring, which uses a multi-class classifier to detect various types of cardiovascular diseases (CVD) using only the smartphone’s built-in sensor data. Such a smartphone app/solution could be used by a healthcare professional and/or by patients themselves to take recordings from their heart. We suggest that a smartphone could be used to separate heart conditions such as normal sinus rhythm (SR), atrial fibrillation (AFib), coronary artery disease (CAD), and possibly ST-segment elevated myocardial infarction (STEMI) in multiclass settings. An application could run the disease screening and immediately inform the user about the results. Widespread availability of IMUs within smartphones could enable the screening of patients globally in the future; however, we also discuss the possible challenges raised by the utilization of such self-monitoring systems.
Related Works with ECG and MCG for Cardiovascular MonitoringAtrial fibrillation (AFib)Atrial Fibrillation is a very common cardiac rhythm abnormality, where the atria fail to contract in a coordinated manner, instead vibrating approximately 400 to 600 times (atrial activity) per minute. In this case, contraction of the chambers is irregular and may vary from 40 to 180 times per minute39. ECG is the gold standard method for AFib detection. However, AFib can be detected with other techniques as well. A systematic review and meta-analysis on the accuracy of methods for diagnosing AFib using electrocardiography is available in7. Another recent review on advances in screening of AF using smartphones has been given in40. For instance, Lee et al.9 primarily used an iPhone 4S to measure a pulsatile photoplethysmogram (PPG) signal in order to detect AFib episodes; the signal was obtained from a recording made with the smartphone’s own video camera. Recently, we presented a primary solution based on time-frequency analysis of seismocardiograms to detect AFib episodes36. The proposed method relies on linear classification of the spectral entropy and a heart rate variability index computed from the SCG signals. In continuation of that study, we developed an extensive machine learning solution37 to detect AFib by extracting various features from GCG and SCG signals obtained by only smartphone inertial sensors. This smartphone-only solution for AFib detection showed an accuracy of 97.4%.Coronary artery disease (CAD)Coronary artery disease refers to accumulation and inflammation of plaque in coronary arteries that could lead to heart attack. With ischemic disease, the blood flow to the heart’s muscle is decreased as the coronary arteries are gradually narrowed due to plaque formation within the walls. The majority of myocardial infarctions and strokes result from sudden rupture of atherosclerotic plaques41.The editorial6 has mentioned numerous approaches to CAD diagnosis by analysis of ECG depolarization. For example, Abboud et al.42 proposed high-frequency analysis of the electrocardiogram to assess electrophysiological changes due to CAD. As such, high-frequency changes in ECG QRS complex components, also known as Hyper-QRS, have been considered a sensitive indicator of acute coronary artery occlusion43,44. Many other techniques have also been developed to detect acute ischemia using ECG8,18,19,21,22. ECG QT-wave dispersion was investigated as a measure of variability in ventricular recovery time and a possible measure for identifying patients at risk of arrhythmias and sudden death after infarction8. Myocardial dispersion, also known as strain rate variations, is measured by echocardiography and reflects the heterogeneity of myocardial systolic contraction and can be used as an indicator for susceptibility to arrhythmias in different heart disease groups such as heart failure, ischemia, and infarction45–47. In recent years, machine learning algorithms based on wavelet transform feature engineering, pattern recognition, and support vector machine classifiers have also been suggested to diagnose CAD conditions24,48.Ischemia can be classified into two major categories according to the presence of ST-segment elevation in ECG. If the heart’s major arteries are completely obstructed, the amplitude of the observed elevation is directly linked to the severity of acute or threatening damage to the heart muscle. 
This type of heart attack is called ST-elevation myocardial infarction (STEMI). For patients with suspected myocardial infarction, but without ST-segment elevation in ECG (only partially blocked coronary arteries), the ECG findings are non-specific and investigation of cardiac markers (e.g. troponin) is required to confirm the diagnosis49. In the other category, so-called NSTEMI (non-ST-elevation myocardial infarction), the symptoms might be milder or vague, so other advanced diagnostic methods are considered.In this paper, we consider multi-class classification of various heart conditions using a smartphone-only solution based on SCG and GCG. We believe abnormal morphological changes in cardiogenic vibrations – possibly due to hypoxic myocardium tissue – are recognizable and can therefore improve detection of heart arrhythmias and ischemic diseases. A potential impact of this research is efficient prevention and follow-up of patients with various heart conditions, enabled by mobile technology.Figure 1 shows ECG-SCG-GCG cardiac waveform characteristics in normal, AFib, and CAD conditions. As shown, in the normal condition both electrical and mechanical signals follow a regular rhythm with monomorphic repeating patterns, while in the AFib condition the cardiac signals appear irregular in terms of rhythm and morphology. More precisely, due to the atrial failure in mechanical function, the left and right ventricles may respond with abnormal systolic-diastolic functioning. In the CAD condition, although a regular rhythm is visible in SCG-GCG, the cardiac motion pattern shows considerable changes such as poor contractility (amplitude reduction), larger diastolic activity, and a widened systolic complex (as shown in panel D, multiple wide wavelets are visible at the onset of systole), potentially due to artery blockage.Figure 1: Overall waveform characteristics of normal (A), atrial fibrillation (B), and coronary artery disease with ischemic changes: T-wave inversion (C) and ST-segment depression (D) conditions, shown in ECG (lead I), GCG, and SCG signals.
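The RR-interval statistics recurring in the smartphone AFib work cited above (RMSSD, Shannon entropy, sample entropy) all reduce to simple measures of beat-to-beat irregularity, and they apply whether the beat times come from ECG, PPG, or SCG/GCG peak detection. The sketch below is illustrative only: it is not the authors' classifier, and the toy interval series and function choices are assumptions made for demonstration.

```python
# Illustrative only: two simple rhythm-irregularity features of the kind the
# cited AFib detectors compute on beat-to-beat (RR) interval series. This is
# not the authors' pipeline; the toy interval series below are assumptions.
import numpy as np

def rmssd(rr):
    """Root mean square of successive differences of RR intervals (seconds)."""
    rr = np.asarray(rr, dtype=float)
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))

def shannon_entropy(rr, bins=16):
    """Shannon entropy (bits) of the RR-interval histogram; higher = more irregular."""
    counts, _ = np.histogram(rr, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Toy beat-interval series: a quasi-regular sinus-like rhythm vs. an erratic one.
sinus_like = 0.85 + 0.02 * np.sin(np.linspace(0.0, 3.0, 60))   # ~70 bpm, mild variation
afib_like = np.random.default_rng(0).uniform(0.45, 1.10, 60)   # erratic intervals

for name, rr in (("sinus-like", sinus_like), ("afib-like", afib_like)):
    print(f"{name}: RMSSD={rmssd(rr):.3f} s, ShE={shannon_entropy(rr):.2f} bits")
# Feeding such irregularity features (together with waveform morphology
# descriptors) to a multi-class classifier is the general approach the
# related-work section above builds on.
```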
[ "25801714", "27502855", "23174501", "25705010", "22868524", "26847074", "19147897", "23465249", "24690488", "24825766", "24384108", "25114186", "25212546", "21377149", "26236081", "13872234", "24497157", "19114197", "27834656", "22236222", "15843671", "2957111", "8456174", "11092652", "20223421", "26025592", "18940888", "22922414", "9264491", "10843903", "9735569", "27567408", "27181187" ]
[ { "pmid": "25801714", "title": "The mobile revolution--using smartphone apps to prevent cardiovascular disease.", "abstract": "Cardiovascular disease (CVD) is the leading cause of morbidity and mortality globally. Mobile technology might enable increased access to effective prevention of CVDs. Given the high penetration of smartphones into groups with low socioeconomic status, health-related mobile applications might provide an opportunity to overcome traditional barriers to cardiac rehabilitation access. The huge increase in low-cost health-related apps that are not regulated by health-care policy makers raises three important areas of interest. Are apps developed according to evidenced-based guidelines or on any evidence at all? Is there any evidence that apps are of benefit to people with CVD? What are the components of apps that are likely to facilitate changes in behaviour and enable individuals to adhere to medical advice? In this Review, we assess the current literature and content of existing apps that target patients with CVD risk factors and that can facilitate behaviour change. We present an overview of the current literature on mobile technology as it relates to prevention and management of CVD. We also evaluate how apps can be used throughout all age groups with different CVD prevention needs." }, { "pmid": "27502855", "title": "Effects of interactive patient smartphone support app on drug adherence and lifestyle changes in myocardial infarction patients: A randomized study.", "abstract": "BACKGROUND\nPatients with myocardial infarction (MI) seldom reach recommended targets for secondary prevention. This study evaluated a smartphone application (\"app\") aimed at improving treatment adherence and cardiovascular lifestyle in MI patients.\n\n\nDESIGN\nMulticenter, randomized trial.\n\n\nMETHODS\nA total of 174 ticagrelor-treated MI patients were randomized to either an interactive patient support tool (active group) or a simplified tool (control group) in addition to usual post-MI care. Primary end point was a composite nonadherence score measuring patient-registered ticagrelor adherence, defined as a combination of adherence failure events (2 missed doses registered in 7-day cycles) and treatment gaps (4 consecutive missed doses). Secondary end points included change in cardiovascular risk factors, quality of life (European Quality of Life-5 Dimensions), and patient device satisfaction (System Usability Scale).\n\n\nRESULTS\nPatient mean age was 58 years, 81% were men, and 21% were current smokers. At 6 months, greater patient-registered drug adherence was achieved in the active vs the control group (nonadherence score: 16.6 vs 22.8 [P = .025]). Numerically, the active group was associated with higher degree of smoking cessation, increased physical activity, and change in quality of life; however, this did not reach statistical significance. Patient satisfaction was significantly higher in the active vs the control group (system usability score: 87.3 vs 78.1 [P = .001]).\n\n\nCONCLUSIONS\nIn MI patients, use of an interactive patient support tool improved patient self-reported drug adherence and may be associated with a trend toward improved cardiovascular lifestyle changes and quality of life. Use of a disease-specific interactive patient support tool may be an appreciated, simple, and promising complement to standard secondary prevention." 
}, { "pmid": "25705010", "title": "Accuracy of methods for diagnosing atrial fibrillation using 12-lead ECG: A systematic review and meta-analysis.", "abstract": "BACKGROUND\nScreening for atrial fibrillation (AF) using 12-lead-electrocardiograms (ECGs) has been recommended; however, the best method for interpreting ECGs to diagnose AF is not known. We compared accuracy of methods for diagnosing AF from ECGs.\n\n\nMETHODS\nWe searched MEDLINE, EMBASE, CINAHL and LILACS until March 24, 2014. Two reviewers identified eligible studies, extracted data and appraised quality using the QUADAS-2 instrument. Meta-analysis, using the bivariate hierarchical random effects method, determined average operating points for sensitivities, specificities, positive and negative likelihood ratios (PLR, NLR) and enabled construction of Summary Receiver Operating Characteristic (SROC) plots.\n\n\nRESULTS\n10 studies investigated 16 methods for interpreting ECGs (n=55,376 participant ECGs). The sensitivity and specificity of automated software (8 studies; 9 methods) were 0.89 (95% C.I. 0.82-0.93) and 0.99 (95% C.I. 0.99-0.99), respectively; PLR 96.6 (95% C.I. 64.2-145.6); NLR 0.11 (95% C.I. 0.07-0.18). Indirect comparisons with software found healthcare professionals (5 studies; 7 methods) had similar sensitivity for diagnosing AF but lower specificity [sensitivity 0.92 (95% C.I. 0.81-0.97), specificity 0.93 (95% C.I. 0.76-0.98), PLR 13.9 (95% C.I. 3.5-55.3), NLR 0.09 (95% C.I. 0.03-0.22)]. Sub-group analyses of primary care professionals found greater specificity for GPs than nurses [GPs: sensitivity 0.91 (95% C.I. 0.68-1.00); specificity 0.96 (95% C.I. 0.89-1.00). Nurses: sensitivity 0.88 (95% C.I. 0.63-1.00); specificity 0.85 (95% C.I. 0.83-0.87)].\n\n\nCONCLUSIONS\nAutomated ECG-interpreting software most accurately excluded AF, although its ability to diagnose this was similar to all healthcare professionals. Within primary care, the specificity of AF diagnosis from ECG was greater for GPs than nurses." }, { "pmid": "22868524", "title": "Atrial fibrillation detection using an iPhone 4S.", "abstract": "Atrial fibrillation (AF) affects three to five million Americans and is associated with significant morbidity and mortality. Existing methods to diagnose this paroxysmal arrhythmia are cumbersome and/or expensive. We hypothesized that an iPhone 4S can be used to detect AF based on its ability to record a pulsatile photoplethysmogram signal from a fingertip using the built-in camera lens. To investigate the capability of the iPhone 4S for AF detection, we first used two databases, the MIT-BIH AF and normal sinus rhythm (NSR) to derive discriminatory threshold values between two rhythms. Both databases include RR time series originating from 250 Hz sampled ECG recordings. We rescaled the RR time series to 30 Hz so that the RR time series resolution is 1/30 (s) which is equivalent to the resolution from an iPhone 4S. We investigated three statistical methods consisting of the root mean square of successive differences (RMSSD), the Shannon entropy (ShE) and the sample entropy (SampE), which have been proved to be useful tools for AF assessment. Using 64-beat segments from the MIT-BIH databases, we found the beat-to-beat accuracy value of 0.9405, 0.9300, and 0.9614 for RMSSD, ShE, and SampE, respectively. Using an iPhone 4S, we collected 2-min pulsatile time series from 25 prospectively recruited subjects with AF pre- and postelectrical cardioversion. 
Using derived threshold values of RMSSD, ShE and SampE from the MIT-BIH databases, we found the beat-to-beat accuracy of 0.9844, 0.8494, and 0.9522, respectively. It should be recognized that for clinical applications, the most relevant objective is to detect the presence of AF in the data. Using this criterion, we achieved an accuracy of 100% for both the MIT-BIH AF and iPhone 4S databases." }, { "pmid": "26847074", "title": "Smart-watches: a potential challenger to the implantable loop recorder?", "abstract": "The newest generation of smart-watches offer heart rate monitoring technology via photoplethysmography, a technology shown to demonstrate impressive ability in diagnosing arrhythmias including atrial fibrillation. Combining such technology with the portability, connectivity and other location and activity tracking features smart-watches could represent a powerful new tool in extended non-invasive arrhythmia detection. The technology itself, including potential uses and limitations, is discussed. There is a need for further software development but crucially, further work into clarifying the diagnostic accuracy of such technology." }, { "pmid": "19147897", "title": "Robust ballistocardiogram acquisition for home monitoring.", "abstract": "The ballistocardiogram (BCG) measures the reaction of the body to cardiac ejection forces, and is an effective, non-invasive means of evaluating cardiovascular function. A simple, robust method is presented for acquiring high-quality, repeatable BCG signals from a modified, commercially available scale. The measured BCG waveforms for all subjects qualitatively matched values in the existing literature and physiologic expectations in terms of timing and IJ amplitude. Additionally, the BCG IJ amplitude was shown to be correlated with diastolic filling time for a subject with premature atrial contractions, demonstrating the sensitivity of the apparatus to beat-by-beat hemodynamic changes. The signal-to-noise ratio (SNR) of the BCG was estimated using two methods, and the average SNR over all subjects was greater than 12 for both estimates. The BCG measurement was shown to be repeatable over 50 recordings taken from the same subject over a three week period. This approach could allow patients at home to monitor trends in cardiovascular health." }, { "pmid": "24690488", "title": "Intermittent short ECG recording is more effective than 24-hour Holter ECG in detection of arrhythmias.", "abstract": "BACKGROUND\nMany patients report symptoms of palpitations or dizziness/presyncope. These patients are often referred for 24-hour Holter ECG, although the sensitivity for detecting relevant arrhythmias is comparatively low. Intermittent short ECG recording over a longer time period might be a convenient and more sensitive alternative. 
The objective of this study is to compare the efficacy of 24-hour Holter ECG with intermittent short ECG recording over four weeks to detect relevant arrhythmias in patients with palpitations or dizziness/presyncope.\n\n\nMETHODS\n\n\n\nDESIGN\nprospective, observational, cross-sectional study.\n\n\nSETTING\nClinical Physiology, University Hospital.\n\n\nPATIENTS\n108 consecutive patients referred for ambiguous palpitations or dizziness/presyncope.\n\n\nINTERVENTIONS\nAll individuals underwent a 24-hour Holter ECG and additionally registered 30-second handheld ECG (Zenicor EKG® thumb) recordings at home, twice daily and when having cardiac symptoms, during 28 days.\n\n\nMAIN OUTCOME MEASURES\nSignificant arrhythmias: atrial fibrillation (AF), paroxysmal supraventricular tachycardia (PSVT), atrioventricular (AV) block II-III, sinus arrest (SA), wide complex tachycardia (WCT).\n\n\nRESULTS\n95 patients, 42 men and 53 women with a mean age of 54.1 years, completed registrations. Analysis of Holter registrations showed atrial fibrillation (AF) in two patients and atrioventricular (AV) block II in one patient (= 3.2% relevant arrhythmias [95% CI 1.1-8.9]). Intermittent handheld ECG detected nine patients with AF, three with paroxysmal supraventricular tachycardia (PSVT) and one with AV-block-II (= 13.7% relevant arrhythmias [95% CI 8.2-22.0]). There was a significant difference between the two methods in favour of intermittent ECG with regard to the ability to detect relevant arrhythmias (P = 0.0094). With Holter ECG, no symptoms were registered during any of the detected arrhythmias. With intermittent ECG, symptoms were registered during half of the arrhythmia episodes.\n\n\nCONCLUSIONS\nIntermittent short ECG recording during four weeks is more effective in detecting AF and PSVT in patients with ambiguous symptoms arousing suspicions of arrhythmia than 24-hour Holter ECG." }, { "pmid": "24825766", "title": "Validation and clinical use of a novel diagnostic device for screening of atrial fibrillation.", "abstract": "AIMS\nPatients with asymptomatic and undiagnosed atrial fibrillation (AF) are at increased risk of heart failure and ischaemic stroke. In this study, we validated a new diagnostic device, the MyDiagnostick, for detection of AF by general practitioners and patients. It records and stores a Lead I electrocardiogram (ECG) which is automatically analysed for the presence of AF.\n\n\nMETHODS AND RESULTS\nIn total, 192 patients (age 69.4 ± 12.6 years) were asked to hold the MyDiagnostick for 1 min, immediately before a routine 12-lead ECG was recorded. Atrial fibrillation detection and ECGs stored by the MyDiagnostick were compared with the cardiac rhythm on the 12-lead ECG. In a second part of the study, the MyDiagnostick was used to screen for AF during influenza vaccination in the general practitioner's office. Atrial fibrillation was present in 53 out of the 192 patients (27.6%). All AF patients were correctly detected by the MyDiagnostick (sensitivity 100%; 95% confidence interval 93-100%). MyDiagnostick AF classification in 6 out of 139 patients in sinus rhythm was considered false positive (specificity 95.9%; 95% confidence interval 91.3-98.1%). During 4 h of influenza vaccination in 676 patients (age 74 ± 7.1 years), the MyDiagnostick correctly diagnosed AF in all 55 patients (prevalence 8.1%). 
In 11 patients (1.6%), AF was not diagnosed before, all with a CHA2DS2VASc score of >1.\n\n\nCONCLUSION\nThe high AF detection performance of the MyDiagnostick, combined with the ease of use of the device, enables large screening programmes for detection of undiagnosed AF." }, { "pmid": "24384108", "title": "Comparison of 24-hour Holter monitoring with 14-day novel adhesive patch electrocardiographic monitoring.", "abstract": "BACKGROUND\nCardiac arrhythmias are remarkably common and routinely go undiagnosed because they are often transient and asymptomatic. Effective diagnosis and treatment can substantially reduce the morbidity and mortality associated with cardiac arrhythmias. The Zio Patch (iRhythm Technologies, Inc, San Francisco, Calif) is a novel, single-lead electrocardiographic (ECG), lightweight, Food and Drug Administration-cleared, continuously recording ambulatory adhesive patch monitor suitable for detecting cardiac arrhythmias in patients referred for ambulatory ECG monitoring.\n\n\nMETHODS\nA total of 146 patients referred for evaluation of cardiac arrhythmia underwent simultaneous ambulatory ECG recording with a conventional 24-hour Holter monitor and a 14-day adhesive patch monitor. The primary outcome of the study was to compare the detection arrhythmia events over total wear time for both devices. Arrhythmia events were defined as detection of any 1 of 6 arrhythmias, including supraventricular tachycardia, atrial fibrillation/flutter, pause greater than 3 seconds, atrioventricular block, ventricular tachycardia, or polymorphic ventricular tachycardia/ventricular fibrillation. McNemar's tests were used to compare the matched pairs of data from the Holter and the adhesive patch monitor.\n\n\nRESULTS\nOver the total wear time of both devices, the adhesive patch monitor detected 96 arrhythmia events compared with 61 arrhythmia events by the Holter monitor (P < .001).\n\n\nCONCLUSIONS\nOver the total wear time of both devices, the adhesive patch monitor detected more events than the Holter monitor. Prolonged duration monitoring for detection of arrhythmia events using single-lead, less-obtrusive, adhesive-patch monitoring platforms could replace conventional Holter monitoring in patients referred for ambulatory ECG monitoring." }, { "pmid": "25212546", "title": "Comparison of the Microlife blood pressure monitor with the Omron blood pressure monitor for detecting atrial fibrillation.", "abstract": "Screening for atrial fibrillation (AF) by assessing the pulse is recommended in high-risk patients. Some clinical trials demonstrated that the Microlife blood pressure monitor (BPM) with AF detection is more accurate than pulse palpation. This led to a change in practice guidelines in the United Kingdom where AF screening with the Microlife device is recommended instead of pulse palpation. Many BPMs have irregular heart beat detection, but they have not been shown to detect AF reliably. Recently, one study, in a highly select population, suggested that the Omron BPM with irregular heart beat detection has a higher sensitivity for AF than the Microlife BPM. We compared the Microlife and Omron BPMs to electrocardiographic readings for AF detection in general cardiology patients. Inclusion criteria were age≥50 years without a pacemaker or defibrillator. A total of 199 subjects were enrolled, 30 with AF. Each subject had a 12-lead electrocardiography, 1 Omron BPM reading, and 3 Microlife BPM readings as per device instructions. 
The Omron device had a sensitivity of 30% (95% confidence interval [CI] 15.4% to 49.1%) with the sensitivity for the first Microlife reading of 97% (95% CI 81.4% to 100%) and the Microlife readings using the majority rule (AF positive if at least 2 of 3 individual readings were positive for AF) of 100% (95% CI 85.9% to 100%). Specificity for the Omron device was 97% (95% CI 92.5% to 99.2%) and for the first Microlife reading of 90% (95% CI 83.8% to 94.2%) and for the majority rule Microlife device of 92% (95% CI 86.2% to 95.7%; p<0.0001). The specificity of both devices is acceptable, but only the Microlife BPM has a sensitivity value that is high enough to be used for AF screening in clinical practice." }, { "pmid": "21377149", "title": "HeartSaver: a mobile cardiac monitoring system for auto-detection of atrial fibrillation, myocardial infarction, and atrio-ventricular block.", "abstract": "A mobile medical device, dubbed HeartSaver, is developed for real-time monitoring of a patient's electrocardiogram (ECG) and automatic detection of several cardiac pathologies, including atrial fibrillation, myocardial infarction and atrio-ventricular block. HeartSaver is based on adroit integration of four different modern technologies: electronics, wireless communication, computer, and information technologies in the service of medicine. The physical device consists of four modules: sensor and ECG processing unit, a microcontroller, a link between the microcontroller and the cell phone, and mobile software associated with the system. HeartSaver includes automated cardiac pathology detection algorithms. These algorithms are simple enough to be implemented on a low-cost, limited-power microcontroller but powerful enough to detect the relevant cardiac pathologies. When an abnormality is detected, the microcontroller sends a signal to a cell phone. This operation triggers an application software on the cell phone that sends a text message transmitting information about patient's physiological condition and location promptly to a physician or a guardian. HeartSaver can be used by millions of cardiac patients with the potential to transform the cardiac diagnosis, care, and treatment and save thousands of lives." }, { "pmid": "26236081", "title": "Real Time Recognition of Heart Attack in a Smart Phone.", "abstract": "BACKGROUND\nIn many countries, including our own, cardiovascular disease is the most common cause of mortality and morbidity. Myocardial infarction (heart attack) is of particular importance in heart disease as well as time and type of reaction to acute myocardial infarction and these can be a determining factor in patients' outcome.\n\n\nMETHODS\nIn order to reduce physician attendance time and keep patients informed about their condition, the smart phone as a common communication device has been used to process data and determine patients' ECG signals. For ECG signal analysis, we used time domain methods for extracting the ST-segment as the most important feature of the signal to detect myocardial infarction and the thresholding methods and linear classifiers by LabVIEW Mobile Module were used to determine signal risk.\n\n\nRESULTS\nThe sensitivity and specificity as criteria to evaluate the algorithm were 98% and 93.3% respectively in real time.\n\n\nCONCLUSIONS\nThis algorithm, because of the low computational load and high speed, makes it possible to run in a smart phone. 
Using Bluetooth to send the data from a portable monitoring system to a smart phone facilitates the real time applications. By using this program on the patient's mobile, timely detection of infarction so to inform patients is possible and mobile services such as SMS and calling for a physician's consultation can be done." }, { "pmid": "24497157", "title": "Measuring and influencing physical activity with smartphone technology: a systematic review.", "abstract": "BACKGROUND\nRapid developments in technology have encouraged the use of smartphones in physical activity research, although little is known regarding their effectiveness as measurement and intervention tools.\n\n\nOBJECTIVE\nThis study systematically reviewed evidence on smartphones and their viability for measuring and influencing physical activity.\n\n\nDATA SOURCES\nResearch articles were identified in September 2013 by literature searches in Web of Knowledge, PubMed, PsycINFO, EBSCO, and ScienceDirect.\n\n\nSTUDY SELECTION\nThe search was restricted using the terms (physical activity OR exercise OR fitness) AND (smartphone* OR mobile phone* OR cell phone*) AND (measurement OR intervention). Reviewed articles were required to be published in international academic peer-reviewed journals, or in full text from international scientific conferences, and focused on measuring physical activity through smartphone processing data and influencing people to be more active through smartphone applications.\n\n\nSTUDY APPRAISAL AND SYNTHESIS METHODS\nTwo reviewers independently performed the selection of articles and examined titles and abstracts to exclude those out of scope. Data on study characteristics, technologies used to objectively measure physical activity, strategies applied to influence activity; and the main study findings were extracted and reported.\n\n\nRESULTS\nA total of 26 articles (with the first published in 2007) met inclusion criteria. All studies were conducted in highly economically advantaged countries; 12 articles focused on special populations (e.g. obese patients). Studies measured physical activity using native mobile features, and/or an external device linked to an application. Measurement accuracy ranged from 52 to 100% (n = 10 studies). A total of 17 articles implemented and evaluated an intervention. Smartphone strategies to influence physical activity tended to be ad hoc, rather than theory-based approaches; physical activity profiles, goal setting, real-time feedback, social support networking, and online expert consultation were identified as the most useful strategies to encourage physical activity change. Only five studies assessed physical activity intervention effects; all used step counts as the outcome measure. Four studies (three pre-post and one comparative) reported physical activity increases (12-42 participants, 800-1,104 steps/day, 2 weeks-6 months), and one case-control study reported physical activity maintenance (n = 200 participants; >10,000 steps/day) over 3 months.\n\n\nLIMITATIONS\nSmartphone use is a relatively new field of study in physical activity research, and consequently the evidence base is emerging.\n\n\nCONCLUSIONS\nFew studies identified in this review considered the validity of phone-based assessment of physical activity. Those that did report on measurement properties found average-to-excellent levels of accuracy for different behaviors. 
The range of novel and engaging intervention strategies used by smartphones, and user perceptions on their usefulness and viability, highlights the potential such technology has for physical activity promotion. However, intervention effects reported in the extant literature are modest at best, and future studies need to utilize randomized controlled trial research designs, larger sample sizes, and longer study periods to better explore the physical activity measurement and intervention capabilities of smartphones." }, { "pmid": "19114197", "title": "Feasibility of a three-axis epicardial accelerometer in detecting myocardial ischemia in cardiac surgical patients.", "abstract": "OBJECTIVE\nWe investigated the feasibility of continuous detection of myocardial ischemia during cardiac surgery with a 3-axis accelerometer.\n\n\nMETHODS\nTen patients with significant left anterior descending coronary artery stenosis underwent off-pump coronary artery bypass grafting. A 3-axis accelerometer (11 x 14 x 5 mm) was sutured onto the left anterior descending coronary artery-perfused region of left ventricle. Twenty episodes of ischemia were studied, with 3-minute occlusion of left anterior descending coronary artery at start of surgery and 3-minute occlusion of left internal thoracic artery at end of surgery. Longitudinal, circumferential, and radial accelerations were continuously measured, with epicardial velocities calculated from the signals. During occlusion, accelerometer velocities were compared with anterior left ventricular longitudinal, circumferential, and radial strains obtained by echocardiography. Ischemia was defined by change in strain greater than 30%.\n\n\nRESULTS\nIschemia was observed echocardiographically during 7 of 10 left anterior descending coronary artery occlusions but not during left internal thoracic artery occlusion. During ischemia, there were no significant electrocardiographic or hemodynamic changes, whereas large and significant changes in accelerometer circumferential peak systolic (P < .01) and isovolumic (P < .01) velocities were observed. During 13 occlusions, no ischemia was demonstrated by strain, nor was any change demonstrated by the accelerometer. A strong correlation was found between circumferential strain and accelerometer circumferential peak systolic velocity during occlusion (r = -0.76, P < .001).\n\n\nCONCLUSIONS\nThe epicardial accelerometer detects myocardial ischemia with great accuracy. This novel technique has potential to improve monitoring of myocardial ischemia during cardiac surgery." }, { "pmid": "27834656", "title": "Automated Detection of Atrial Fibrillation Based on Time-Frequency Analysis of Seismocardiograms.", "abstract": "In this paper, a novel method to detect atrial fibrillation (AFib) from a seismocardiogram (SCG) is presented. The proposed method is based on linear classification of the spectral entropy and a heart rate variability index computed from the SCG. The performance of the developed algorithm is demonstrated on data gathered from 13 patients in clinical setting. After motion artifact removal, in total 119 min of AFib data and 126 min of sinus rhythm data were considered for automated AFib detection. No other arrhythmias were considered in this study. The proposed algorithm requires no direct heartbeat peak detection from the SCG data, which makes it tolerant against interpersonal variations in the SCG morphology, and noise. 
Furthermore, the proposed method relies solely on the SCG and needs no complementary electrocardiography to be functional. For the considered data, the detection method performs well even on relatively low quality SCG signals. Using a majority voting scheme that takes five randomly selected segments from a signal and classifies these segments using the proposed algorithm, we obtained an average true positive rate of [Formula: see text] and an average true negative rate of [Formula: see text] for detecting AFib in leave-one-out cross-validation. This paper facilitates adoption of microelectromechanical sensor based heart monitoring devices for arrhythmia detection." }, { "pmid": "22236222", "title": "Subclinical atrial fibrillation and the risk of stroke.", "abstract": "BACKGROUND\nOne quarter of strokes are of unknown cause, and subclinical atrial fibrillation may be a common etiologic factor. Pacemakers can detect subclinical episodes of rapid atrial rate, which correlate with electrocardiographically documented atrial fibrillation. We evaluated whether subclinical episodes of rapid atrial rate detected by implanted devices were associated with an increased risk of ischemic stroke in patients who did not have other evidence of atrial fibrillation.\n\n\nMETHODS\nWe enrolled 2580 patients, 65 years of age or older, with hypertension and no history of atrial fibrillation, in whom a pacemaker or defibrillator had recently been implanted. We monitored the patients for 3 months to detect subclinical atrial tachyarrhythmias (episodes of atrial rate >190 beats per minute for more than 6 minutes) and followed them for a mean of 2.5 years for the primary outcome of ischemic stroke or systemic embolism. Patients with pacemakers were randomly assigned to receive or not to receive continuous atrial overdrive pacing.\n\n\nRESULTS\nBy 3 months, subclinical atrial tachyarrhythmias detected by implanted devices had occurred in 261 patients (10.1%). Subclinical atrial tachyarrhythmias were associated with an increased risk of clinical atrial fibrillation (hazard ratio, 5.56; 95% confidence interval [CI], 3.78 to 8.17; P<0.001) and of ischemic stroke or systemic embolism (hazard ratio, 2.49; 95% CI, 1.28 to 4.85; P=0.007). Of 51 patients who had a primary outcome event, 11 had had subclinical atrial tachyarrhythmias detected by 3 months, and none had had clinical atrial fibrillation by 3 months. The population attributable risk of stroke or systemic embolism associated with subclinical atrial tachyarrhythmias was 13%. Subclinical atrial tachyarrhythmias remained predictive of the primary outcome after adjustment for predictors of stroke (hazard ratio, 2.50; 95% CI, 1.28 to 4.89; P=0.008). Continuous atrial overdrive pacing did not prevent atrial fibrillation.\n\n\nCONCLUSIONS\nSubclinical atrial tachyarrhythmias, without clinical atrial fibrillation, occurred frequently in patients with pacemakers and were associated with a significantly increased risk of ischemic stroke or systemic embolism. (Funded by St. Jude Medical; ASSERT ClinicalTrials.gov number, NCT00256152.)." 
}, { "pmid": "2957111", "title": "Detection of transient myocardial ischemia by computer analysis of standard and signal-averaged high-frequency electrocardiograms in patients undergoing percutaneous transluminal coronary angioplasty.", "abstract": "Electrocardiographic manifestations of transient myocardial ischemia were studied, in 11 patients undergoing angioplasty (PTCA) of a left anterior descending coronary artery stenosis, by the visual inspection of the standard surface electrocardiogram (S-ECG) and the intracoronary ECG (IC-ECG) as well as computer-assisted analysis of the S-ECG. Cross-correlation analysis (CCA) performed by computer was used to compare beat-to-beat variability in ST-T morphology of the S-ECG during different stages of PTCA. CCA was also applied to the signal-averaged high-frequency QRS (SA-HFQ). All patients developed angina during balloon inflation, accompanied by transient marked ST-T changes in IC-ECG in 10 of 11 patients (90%). Visual inspection of S-ECG revealed transient ST-T changes in only 6 of 11 (54%). In contrast, CCA of the S-ECG revealed transient ST-T changes in 9 of 11 (82%). Analysis of SA-HFQ revealed that balloon inflation was associated with a marked reduction in the calculated root-mean-square (RMS) voltage for such signals (2.31 +/- 1.04 microV) as compared with RMS values before (3.27 +/- 1.12 microV, p less than .05) PTCA or after conclusion of PTCA (3.79 +/- 1.39 microV, p less than .01). Balloon inflation was also accompanied by changes in waveform morphology of the SA-HFQ, including the development of new or more prominent time zones of reduced amplitude in 10 of 11 individuals (90%). Such zones may represent slow conduction in regions of the heart rendered ischemic during PTCA. CCA of the S-ECG and of SA-HFQ appears to detect evidence of transient ischemia with greater sensitivity than simple visual inspection of S-ECG, and may therefore prove to be of use in the evaluation of patients with chest pain of uncertain origin." }, { "pmid": "11092652", "title": "Changes in high-frequency QRS components are more sensitive than ST-segment deviation for detecting acute coronary artery occlusion.", "abstract": "OBJECTIVES\nThis study describes changes in high-frequency QRS components (HF-QRS) during percutaneous transluminal coronary angioplasty (PTCA) and compares the ability of these changes in HF-QRS and ST-segment deviation in the standard 12-lead electrocardiogram (ECG) to detect acute coronary artery occlusion.\n\n\nBACKGROUND\nPrevious studies have shown decreased HF-QRS in the frequency range of 150-250 Hz during acute myocardial ischemia. It would be important to know whether the high-frequency analysis could add information to that available from the ST segments in the standard ECG.\n\n\nMETHODS\nThe study population consisted of 52 patients undergoing prolonged balloon occlusion during PTCA. Signal-averaged electrocardiograms (SAECG) were recorded prior to and during the balloon inflation. The HF-QRS were determined within a bandwidth of 150-250 Hz in the preinflation and inflation SAECGs. The ST-segment deviation during inflation was determined in the standard frequency range.\n\n\nRESULTS\nThe sensitivity for detecting acute coronary artery occlusion was 88% using the high-frequency method. In 71% of the patients there was ST elevation during inflation. If both ST elevation and depression were considered, the sensitivity was 79%. 
The sensitivity was significantly higher using the high-frequency method, p<0.002, compared with the assessment of ST elevation.\n\n\nCONCLUSIONS\nAcute coronary artery occlusion is detected with higher sensitivity using high-frequency QRS analysis compared with conventional assessment of ST segments. This result suggests that analysis of HF-QRS could provide an adjunctive tool with high sensitivity for detecting acute myocardial ischemia." }, { "pmid": "20223421", "title": "Mechanical dispersion assessed by myocardial strain in patients after myocardial infarction for risk prediction of ventricular arrhythmia.", "abstract": "OBJECTIVES\nThe aim of this study was to investigate whether myocardial strain echocardiography can predict ventricular arrhythmias in patients after myocardial infarction (MI).\n\n\nBACKGROUND\nLeft ventricular (LV) ejection fraction (EF) is insufficient for selecting patients for implantable cardioverter-defibrillator (ICD) therapy after MI. Electrical dispersion in infarcted myocardium facilitates malignant arrhythmia. Myocardial strain by echocardiography can quantify detailed regional and global myocardial function and timing. We hypothesized that electrical abnormalities in patients after MI will lead to LV mechanical dispersion, which can be measured as regional heterogeneity of contraction by myocardial strain.\n\n\nMETHODS\nWe prospectively included 85 post-MI patients, 44 meeting primary and 41 meeting secondary ICD prevention criteria. After 2.3 years (range 0.6 to 5.5 years) of follow-up, 47 patients had no and 38 patients had 1 or more recorded arrhythmias requiring appropriate ICD therapy. Longitudinal strain was measured by speckle tracking echocardiography. The SD of time to maximum myocardial shortening in a 16-segment LV model was calculated as a parameter of mechanical dispersion. Global strain was calculated as average strain in a 16-segment LV model.\n\n\nRESULTS\nThe EF did not differ between ICD patients with and without arrhythmias occurring during follow-up (34 +/- 11% vs. 35 +/- 9%, p = 0.70). Mechanical dispersion was greater in ICD patients with recorded ventricular arrhythmias compared with those without (85 +/- 29 ms vs. 56 +/- 13 ms, p < 0.001). By Cox regression, mechanical dispersion was a strong and independent predictor of arrhythmias requiring ICD therapy (hazard ratio: 1.25 per 10-ms increase, 95% confidence interval: 1.1 to 1.4, p < 0.001). In patients with an EF >35%, global strain showed better LV function in those without recorded arrhythmias (-14.0% +/- 4.0% vs. -12.0 +/- 3.0%, p = 0.05), whereas the EF did not differ (44 +/- 8% vs. 41 +/- 5%, p = 0.23).\n\n\nCONCLUSIONS\nMechanical dispersion was more pronounced in post-MI patients with recurrent arrhythmias. Global strain was a marker of arrhythmias in post-MI patients with relatively preserved ventricular function. These novel parameters assessed by myocardial strain may add important information about susceptibility for ventricular arrhythmias after MI." 
}, { "pmid": "26025592", "title": "Association between ventricular arrhythmias and myocardial mechanical dispersion assessed by strain analysis in patients with nonischemic cardiomyopathy.", "abstract": "BACKGROUND\nMechanical dispersion (MD), defined as the standard deviation of time to maximum myocardial shortening assessed by 2D speckle tracking echocardiographic strain imaging (2DS), has been recently proposed as a predictor for ventricular tachycardia or fibrillation (VT/VF) in patients with ischemic cardiomyopathy and long QT syndrome. However, the role of MD in patients with non-ischemic cardiomyopathy (NICM) has not yet been studied.\n\n\nMETHODS AND RESULTS\nIn 20 patients with NICM (mean age 62 ± 11 years, 75 % male, mean EF 32 ± 6 %, mean QRS duration 102 ± 14 ms), we measured longitudinal strain by 2DS in a 16-segment left ventricular model and calculated the MD. Patients were divided into two groups, defined by the presence or absence of documented VT/VF. In 11 patients (55 %), VT/VF was documented. The median time from VT/VF to echocardiographic examination was 26 (IQR 15-58) months. There were no significant differences in baseline characteristics between patients with and without index events. MD was significantly greater in patients with VT/VF as compared to those without arrhythmias (84 ± 31 ms vs. 53 ± 16 ms, p = 0.017). The analysis of the ROC curve (AUC 0.81, 95 % CI 0.63-1.00, p = 0.017) revealed that dispersion >50 ms is associated with twelve times higher risk of VT/VF in patients with NICM (OR 12.5, 95 % CI 1.1-143.4, p = 0.024).\n\n\nCONCLUSIONS\nIn this small cohort of NICM patients, greater MD was associated with a higher incidence of VT/VF." }, { "pmid": "18940888", "title": "Left ventricular mechanical dispersion by tissue Doppler imaging: a novel approach for identifying high-risk individuals with long QT syndrome.", "abstract": "AIMS\nThe aim of this study was to investigate whether prolonged and dispersed myocardial contraction duration assessed by tissue Doppler imaging (TDI) may serve as risk markers for cardiac events (documented arrhythmia, syncope, and cardiac arrest) in patients with long QT syndrome (LQTS).\n\n\nMETHODS AND RESULTS\nSeventy-three patients with genetically confirmed LQTS (nine double- and 33 single-mutation carriers with previous cardiac events and 31 single-mutation carriers without events) were studied. Myocardial contraction duration was prolonged in each group of LQTS patients compared with 20 healthy controls (P < 0.001). Contraction duration was longer in single-mutation carriers with previous cardiac events compared with those without (0.46 +/- 0.06 vs. 0.40 +/- 0.06 s, P = 0.001). Prolonged contraction duration could better identify cardiac events compared with corrected QT (QTc) interval in single-mutation carriers [area under curve by receiver-operating characteristic analysis 0.77 [95% confidence interval (95% CI) 0.65-0.89] vs. 0.66 (95% CI 0.52-0.79)]. Dispersion of contraction was more pronounced in single-mutation carriers with cardiac events compared with those without (0.048 +/- 0.018 vs. 0.031 +/- 0.019 s, P = 0.001).\n\n\nCONCLUSION\nDispersion of myocardial contraction assessed by TDI was increased in LQTS patients. Prolonged contraction duration was superior to QTc for risk assessment. These new methods can easily be implemented in clinical routine and may improve clinical management of LQTS patients." 
}, { "pmid": "9264491", "title": "Predicting survival in heart failure case and control subjects by use of fully automated methods for deriving nonlinear and conventional indices of heart rate dynamics.", "abstract": "BACKGROUND\nDespite much recent interest in quantification of heart rate variability (HRV), the prognostic value of conventional measures of HRV and of newer indices based on nonlinear dynamics is not universally accepted.\n\n\nMETHODS AND RESULTS\nWe have designed algorithms for analyzing ambulatory ECG recordings and measuring HRV without human intervention, using robust methods for obtaining time-domain measures (mean and SD of heart rate), frequency-domain measures (power in the bands of 0.001 to 0.01 Hz [VLF], 0.01 to 0.15 Hz [LF], and 0.15 to 0.5 Hz [HF] and total spectral power [TP] over all three of these bands), and measures based on nonlinear dynamics (approximate entropy [ApEn], a measure of complexity, and detrended fluctuation analysis [DFA], a measure of long-term correlations). The study population consisted of chronic congestive heart failure (CHF) case patients and sex- and age-matched control subjects in the Framingham Heart Study. After exclusion of technically inadequate studies and those with atrial fibrillation, we used these algorithms to study HRV in 2-hour ambulatory ECG recordings of 69 participants (mean age, 71.7+/-8.1 years). By use of separate Cox proportional-hazards models, the conventional measures SD (P<.01), LF (P<.01), VLF (P<.05), and TP (P<.01) and the nonlinear measure DFA (P<.05) were predictors of survival over a mean follow-up period of 1.9 years; other measures, including ApEn (P>.3), were not. In multivariable models, DFA was of borderline predictive significance (P=.06) after adjustment for the diagnosis of CHF and SD.\n\n\nCONCLUSIONS\nThese results demonstrate that HRV analysis of ambulatory ECG recordings based on fully automated methods can have prognostic value in a population-based study and that nonlinear HRV indices may contribute prognostic value to complement traditional HRV measures." }, { "pmid": "10843903", "title": "Physiological time-series analysis using approximate entropy and sample entropy.", "abstract": "Entropy, as it relates to dynamical systems, is the rate of information production. Methods for estimation of the entropy of a system represented by a time series are not, however, well suited to analysis of the short and noisy data sets encountered in cardiovascular and other biological studies. Pincus introduced approximate entropy (ApEn), a set of measures of system complexity closely related to entropy, which is easily applied to clinical cardiovascular and other time series. ApEn statistics, however, lead to inconsistent results. We have developed a new and related complexity measure, sample entropy (SampEn), and have compared ApEn and SampEn by using them to analyze sets of random numbers with known probabilistic character. We have also evaluated cross-ApEn and cross-SampEn, which use cardiovascular data sets to measure the similarity of two distinct time series. SampEn agreed with theory much more closely than ApEn over a broad range of conditions. The improved accuracy of SampEn statistics should make them useful in the study of experimental clinical cardiovascular and other biological time series." }, { "pmid": "9735569", "title": "Stochastic complexity measures for physiological signal analysis.", "abstract": "Traditional feature extraction methods describe signals in terms of amplitude and frequency. 
This paper takes a paradigm shift and investigates four stochastic-complexity features. Their advantages are demonstrated on synthetic and physiological signals; the latter recorded during periods of Cheyne-Stokes respiration, anesthesia, sleep, and motor-cortex investigation." }, { "pmid": "27181187", "title": "Electrocardiographic diagnosis of ST segment elevation myocardial infarction: An evaluation of three automated interpretation algorithms.", "abstract": "OBJECTIVE\nTo assess the validity of three different computerized electrocardiogram (ECG) interpretation algorithms in correctly identifying STEMI patients in the prehospital environment who require emergent cardiac intervention.\n\n\nMETHODS\nThis retrospective study validated three diagnostic algorithms (AG) against the presence of a culprit coronary artery upon cardiac catheterization. Two patient groups were enrolled in this study: those with verified prehospital ST-elevation myocardial infarction (STEMI) activation (cases) and those with a prehospital impression of chest pain due to ACS (controls).\n\n\nRESULTS\nThere were 500 records analyzed resulting in a case group with 151 patients and a control group with 349 patients. Sensitivities differed between AGs (AG1=0.69 vs AG2=0.68 vs AG3=0.62), with statistical differences in sensitivity found when comparing AG1 to AG3 and AG1 to AG2. Specificities also differed between AGs (AG1=0.89 vs AG2=0.91 vs AG3=0.95), with AG1 and AG2 significantly less specific than AG3.\n\n\nCONCLUSIONS\nSTEMI diagnostic algorithms vary in regards to their validity in identifying patients with culprit artery lesions. This suggests that systems could apply more sensitive or specific algorithms depending on the needs in their community." } ]
BMC Medical Informatics and Decision Making
29940927
PMC6019216
10.1186/s12911-018-0639-1
Identification of research hypotheses and new knowledge from scientific literature
BackgroundText mining (TM) methods have been used extensively to extract relations and events from the literature. In addition, TM techniques have been used to extract various types or dimensions of interpretative information, known as Meta-Knowledge (MK), from the context of relations and events, e.g. negation, speculation, certainty and knowledge type. However, most existing methods have focussed on the extraction of individual dimensions of MK, without investigating how they can be combined to obtain even richer contextual information. In this paper, we describe a novel, supervised method to extract new MK dimensions that encode Research Hypotheses (an author’s intended knowledge gain) and New Knowledge (an author’s findings). The method incorporates various features, including a combination of simple MK dimensions.MethodsWe identify previously explored dimensions and then use a random forest to combine these with linguistic features into a classification model. To facilitate evaluation of the model, we have enriched two existing corpora annotated with relations and events, i.e., a subset of the GENIA-MK corpus and the EU-ADR corpus, by adding attributes to encode whether each relation or event corresponds to Research Hypothesis or New Knowledge. In the GENIA-MK corpus, these new attributes complement simpler MK dimensions that had previously been annotated.ResultsWe show that our approach is able to assign different types of MK dimensions to relations and events with a high degree of accuracy. Firstly, our method is able to improve upon the previously reported state of the art performance for an existing dimension, i.e., Knowledge Type. Secondly, we also demonstrate high F1-score in predicting the new dimensions of Research Hypothesis (GENIA: 0.914, EU-ADR 0.802) and New Knowledge (GENIA: 0.829, EU-ADR 0.836).ConclusionWe have presented a novel approach for predicting New Knowledge and Research Hypothesis, which combines simple MK dimensions to achieve high F1-scores. The extraction of such information is valuable for a number of practical TM applications.Electronic supplementary materialThe online version of this article (10.1186/s12911-018-0639-1) contains supplementary material, which is available to authorized users.
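The abstract states that previously explored MK dimensions are combined with linguistic features in a random forest classifier, but gives no implementation details. The following is a minimal sketch of that idea only; the feature names, toy annotations, label set and scikit-learn pipeline are illustrative assumptions, not the authors' actual code or feature set.

```python
# Sketch: predicting a New Knowledge label for an event by combining simple
# Meta-Knowledge (MK) dimension values with shallow linguistic features.
# All feature names and toy data below are assumptions for illustration.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import Pipeline

def event_features(event):
    """Combine categorical MK dimensions with simple linguistic cues."""
    return {
        "kt": event["knowledge_type"],        # e.g. Observation, Analysis, ...
        "ks": event["knowledge_source"],      # Current / Other
        "certainty": event["certainty"],      # L1 / L2 / L3
        "polarity": event["polarity"],        # Positive / Negative
        "trigger": event["trigger"].lower(),  # lexical trigger of the event
        "in_results": event["section"] == "Results",
    }

# Toy training events (hypothetical annotations, for illustration only).
train = [
    ({"knowledge_type": "Observation", "knowledge_source": "Current",
      "certainty": "L3", "polarity": "Positive", "trigger": "activates",
      "section": "Results"}, "NewKnowledge"),
    ({"knowledge_type": "Fact", "knowledge_source": "Other",
      "certainty": "L3", "polarity": "Positive", "trigger": "regulates",
      "section": "Introduction"}, "Other"),
    ({"knowledge_type": "Analysis", "knowledge_source": "Current",
      "certainty": "L2", "polarity": "Positive", "trigger": "suggests",
      "section": "Discussion"}, "NewKnowledge"),
    ({"knowledge_type": "Investigation", "knowledge_source": "Current",
      "certainty": "L3", "polarity": "Positive", "trigger": "examined",
      "section": "Methods"}, "Other"),
]

X = [event_features(e) for e, _ in train]
y = [label for _, label in train]

model = Pipeline([
    ("vec", DictVectorizer(sparse=False)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
model.fit(X, y)

new_event = {"knowledge_type": "Observation", "knowledge_source": "Current",
             "certainty": "L3", "polarity": "Positive", "trigger": "induces",
             "section": "Results"}
print(model.predict([event_features(new_event)])[0])
```

In practice the real system would draw its MK features from corpus annotations (or upstream classifiers) rather than hand-written dictionaries; the sketch only shows how categorical MK values and surface features can be vectorised into one model.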
Related workThe task of automatically classifying knowledge contained within scientific literature according to its intended interpretation has long been recognised as an important step towards helping researchers to make sense of the information reported, and to allow important details to be located in an efficient manner. Previous work, focussing either on general scientific text or biomedical text, has aimed to assign interpretative information to continuous textual units, varying in granularity from segments of sentences to complete paragraphs, but most frequently concerning complete sentences. Specific aspects of interpretation addressed have included negation [5], speculation [6–8], general information content/rhetorical intent, e.g., background, methods, results, insights, etc. [9–12] and the distinction between novel information and background knowledge [13, 14].

Despite the demonstrated utility of approaches such as the above, performing such classifications at the level of continuous text spans is not straightforward. For example, a single sentence or clause can introduce multiple types of information (e.g., several interactions or associations), each of which may have a different interpretation, in terms of speculation, negation, research novelty, etc. As can be seen from Fig. 1, events and relations can structure and categorise the potentially complex information that is described in a continuous text span. Following on from the successful development of IE systems that are able to extract both gene-disease relations [15–17] and biomolecular events [18, 19], there has been a growing interest in the task of assigning interpretative information to relations and events. However, given that a single sentence may contain multiple events or relations, the challenge is to determine whether and how the interpretation of each of these structures is affected by the presence of particular words or phrases in the sentence that denote negation or speculation, etc.

IE systems are typically developed by applying supervised or semi-supervised methods to annotated corpora marked up with relations and events. There have been several efforts to manually enrich corpora with interpretative information, such that it is possible to train models to determine automatically how particular types of contextual information in a sentence affect the interpretation of different events and relations. Most work on enriching relations and events has been focussed on one or two specific aspects of interpretation (e.g., negation [20, 21] and/or speculation [22, 23]). Subsequent work has shown that these types of information can be detected automatically [24, 25].

In contrast, work on Meta-Knowledge (MK) captures a wider range of contextual information, integrating and building upon various aspects of the above-mentioned schemes to create a number of separate ‘dimensions’ of information, which are aimed at capturing subtle differences in the interpretation of relations and events. Domain-specific versions of the MK scheme have been created to enrich complex event structures in two different domain corpora, i.e., the ACE-MK corpus [26], which enriches the general domain news-related events of the ACE2005 corpus [27], and the GENIA-MK corpus [28], which adds MK to the biomolecular interactions captured as events in the GENIA event corpus [22]. Recent work has focussed on the detection of uncertainty around events in the GENIA-MK Corpus. Uncertainty was detected using a hybrid approach of rules and machine learning.
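The scoping problem described above, deciding which of the events in a sentence a given negation or speculation cue actually affects, can be illustrated with a deliberately simple token-window heuristic. The cue lists, window size and example sentence below are assumptions chosen for illustration only; the cited systems, including the hybrid rule-plus-classifier approach just mentioned, rely on syntactic scope and learned models rather than this kind of heuristic.

```python
# Sketch of cue scoping: for each event trigger, decide whether a negation or
# speculation cue in the same sentence plausibly affects that event.
# Cue lists and the token-distance heuristic are illustrative assumptions.
NEGATION_CUES = {"not", "no", "failed", "unable", "absence"}
SPECULATION_CUES = {"may", "might", "suggest", "suggests", "possibly", "putative"}

def interpret_events(tokens, trigger_indices, window=4):
    """Return a per-event dict of negation/speculation flags for one sentence."""
    results = {}
    for idx in trigger_indices:
        nearby = [t.lower().strip(".,;") for t in tokens[max(0, idx - window): idx + window + 1]]
        results[idx] = {
            "trigger": tokens[idx],
            "negated": any(t in NEGATION_CUES for t in nearby),
            "speculated": any(t in SPECULATION_CUES for t in nearby),
        }
    return results

sentence = "TNF-alpha may induce NF-kB activation but did not affect IL-2 expression".split()
# Hypothetical event triggers: "induce" (index 2) and "affect" (index 8).
print(interpret_events(sentence, [2, 8]))
# "induce" is flagged as speculated only; "affect" as negated only.
```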
The authors of that study were able to show that incorporating uncertainty into a pathway modelling task led to an improvement in curator performance [3].

The GENIA-MK annotation scheme defines five distinct core dimensions of MK for events, each of which has a number of possible values, as shown in Fig. 2 (figure caption: The GENIA-MK annotation scheme. There are five Meta-Knowledge dimensions introduced by Thompson et al., as well as two further hyperdimensions):
- Knowledge Type, which categorises the knowledge that the author wishes to express into one of: Observation, Investigation, Analysis, Method, Fact or Other.
- Knowledge Source, which encodes whether the author presents the knowledge as part of their own work (Current), or whether it is referring to previous work (Other).
- Polarity, which is set to Positive if the event took place, and to Negative if it is negated, i.e., it did not take place.
- Manner, which denotes the event’s intensity, i.e., High, Low or Neutral.
- Certainty Level or Uncertainty, which indicates how certain an event is: it may be certain (L3), probable (L2) or possible (L1).

These five dimensions are considered to be independent of one another, in that the value of one dimension does not affect the value of any other dimension. There may, however, be emergent correlations between the dimensions (e.g., an event with the MK value ’Knowledge Source=Other’ is more frequently negated), which arise from the characteristics of the events themselves. Previous work using the GENIA-MK corpus has demonstrated the feasibility of automatically recognising one or more of the MK dimensions [29–31]. In addition to the five core dimensions, Thompson et al. [28] introduced the notion of hyperdimensions (i.e., New Knowledge and Hypothesis), which represent higher-level dimensions of information whose values are determined according to specific combinations of values that are assigned to different core MK dimensions. These hyperdimensions are also represented in Fig. 2. We build upon these approaches in our own work to develop novel techniques for the recognition of New Knowledge and Hypothesis, which take into account several of the core MK dimensions described above, as well as other features pertaining to the structure of the event and sentence.
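The notion of hyperdimensions as combinations of core dimension values can be sketched as a simple mapping over an annotated event. The exact value combinations are defined in the GENIA-MK scheme by Thompson et al. and are not reproduced in the text above, so the rules below are an illustrative assumption only, not the scheme's actual definition.

```python
# Illustrative sketch: deriving hyperdimension labels (Hypothesis, New Knowledge)
# from the five core MK dimensions. The specific rules are assumptions chosen
# only to show the idea of hyperdimensions as combinations of core values.
from dataclasses import dataclass

@dataclass
class MetaKnowledge:
    knowledge_type: str    # Observation, Investigation, Analysis, Method, Fact, Other
    knowledge_source: str  # Current, Other
    polarity: str          # Positive, Negative
    manner: str            # High, Low, Neutral
    certainty: str         # L1, L2, L3

def hyperdimensions(mk: MetaKnowledge) -> dict:
    """Map one event's core MK values onto the two hyperdimensions."""
    hypothesis = (mk.knowledge_type == "Investigation"
                  or (mk.knowledge_type == "Analysis" and mk.certainty in {"L1", "L2"}))
    new_knowledge = (mk.knowledge_source == "Current"
                     and mk.knowledge_type in {"Observation", "Analysis"}
                     and mk.certainty == "L3")
    return {"Hypothesis": hypothesis, "NewKnowledge": new_knowledge}

print(hyperdimensions(MetaKnowledge("Analysis", "Current", "Positive", "Neutral", "L3")))
# -> {'Hypothesis': False, 'NewKnowledge': True} under this illustrative mapping
```

A rule-based mapping like this can also be contrasted with the supervised approach taken in the paper, where the hyperdimension labels are predicted directly from a feature set that includes the core dimensions.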
[ "24190659", "22032181", "18173834", "16815739", "22321698", "18433469", "25886734", "17291334", "18173834", "22554700", "22226192", "22233443", "21199577", "23323936", "22621266", "23092060", "15130936" ]
[ { "pmid": "24190659", "title": "Reduction of CD18 promotes expansion of inflammatory γδ T cells collaborating with CD4+ T cells in chronic murine psoriasiform dermatitis.", "abstract": "IL-17 is a critical factor in the pathogenesis of psoriasis and other inflammatory diseases. The impact of γδ T cells, accounting for an important source of IL-17 in acute murine IL-23- and imiquimod-induced skin inflammation, in human psoriasis is still unclear. Using the polygenic CD18(hypo) PL/J psoriasis mouse model spontaneously developing chronic psoriasiform dermatitis due to reduced CD18/β2 integrin expression to 2-16% of wild-type levels, we investigated in this study the influence of adhesion molecule expression on generation of inflammatory γδ T cells and analyzed the occurrence of IL-17-producing γδ and CD4(+) T cells at different disease stages. Severity of CD18(hypo) PL/J psoriasiform dermatitis correlated with a loss of skin-resident Vγ5(+) T cells and concurrent skin infiltration with IL-17(+), IL-22(+), and TNF-α(+) γδTCR(low) cells preceded by increases in Vγ4(+) T cells in local lymph nodes. In vitro, reduced CD18 levels promoted expansion of inflammatory memory-type γδ T cells in response to IL-7. Similar to IL-17 or IL-23/p19 depletion, injection of diseased CD18(hypo) PL/J mice with anti-γδTCR Abs significantly reduced skin inflammation and largely eliminated pathological γδ and CD4(+) T cells. Moreover, CD18(hypo) γδ T cells induced allogeneic CD4(+) T cell responses more potently than CD18(wt) counterparts and, upon adoptive transfer, triggered psoriasiform dermatitis in susceptible hosts. These results demonstrate a novel function of reduced CD18 levels in generation of pathological γδ T cells that was confirmed by detection of increases in CD18(low) γδ T cells in psoriasis patients and may also have implications for other inflammatory diseases." }, { "pmid": "22032181", "title": "BioNØT: a searchable database of biomedical negated sentences.", "abstract": "BACKGROUND\nNegated biomedical events are often ignored by text-mining applications; however, such events carry scientific significance. We report on the development of BioNØT, a database of negated sentences that can be used to extract such negated events.\n\n\nDESCRIPTION\nCurrently BioNØT incorporates ≈32 million negated sentences, extracted from over 336 million biomedical sentences from three resources: ≈2 million full-text biomedical articles in Elsevier and the PubMed Central, as well as ≈20 million abstracts in PubMed. We evaluated BioNØT on three important genetic disorders: autism, Alzheimer's disease and Parkinson's disease, and found that BioNØT is able to capture negated events that may be ignored by experts.\n\n\nCONCLUSIONS\nThe BioNØT database can be a useful resource for biomedical researchers. BioNØT is freely available at http://bionot.askhermes.org/. In future work, we will develop semantic web related technologies to enrich BioNØT." }, { "pmid": "18173834", "title": "Evaluation of time profile reconstruction from complex two-color microarray designs.", "abstract": "BACKGROUND\nAs an alternative to the frequently used \"reference design\" for two-channel microarrays, other designs have been proposed. These designs have been shown to be more profitable from a theoretical point of view (more replicates of the conditions of interest for the same number of arrays). 
However, the interpretation of the measurements is less straightforward and a reconstruction method is needed to convert the observed ratios into the genuine profile of interest (e.g. a time profile). The potential advantages of using these alternative designs thus largely depend on the success of the profile reconstruction. Therefore, we compared to what extent different linear models agree with each other in reconstructing expression ratios and corresponding time profiles from a complex design.\n\n\nRESULTS\nOn average the correlation between the estimated ratios was high, and all methods agreed with each other in predicting the same profile, especially for genes of which the expression profile showed a large variance across the different time points. Assessing the similarity in profile shape, it appears that, the more similar the underlying principles of the methods (model and input data), the more similar their results. Methods with a dye effect seemed more robust against array failure. The influence of a different normalization was not drastic and independent of the method used.\n\n\nCONCLUSION\nIncluding a dye effect such as in the methods lmbr_dye, anovaFix and anovaMix compensates for residual dye related inconsistencies in the data and renders the results more robust against array failure. Including random effects requires more parameters to be estimated and is only advised when a design is used with a sufficient number of replicates. Because of this, we believe lmbr_dye, anovaFix and anovaMix are most appropriate for practical use." }, { "pmid": "16815739", "title": "Using argumentation to extract key sentences from biomedical abstracts.", "abstract": "PROBLEM\nkey word assignment has been largely used in MEDLINE to provide an indicative \"gist\" of the content of articles and to help retrieving biomedical articles. Abstracts are also used for this purpose. However with usually more than 300 words, MEDLINE abstracts can still be regarded as long documents; therefore we design a system to select a unique key sentence. This key sentence must be indicative of the article's content and we assume that abstract's conclusions are good candidates. We design and assess the performance of an automatic key sentence selector, which classifies sentences into four argumentative moves: PURPOSE, METHODS, RESULTS and\n\n\nCONCLUSION\n\n\n\nMETHODS\nwe rely on Bayesian classifiers trained on automatically acquired data. Features representation, selection and weighting are reported and classification effectiveness is evaluated on the four classes using confusion matrices. We also explore the use of simple heuristics to take the position of sentences into account. Recall, precision and F-scores are computed for the CONCLUSION class. For the CONCLUSION class, the F-score reaches 84%. Automatic argumentative classification using Bayesian learners is feasible on MEDLINE abstracts and should help user navigation in such repositories." }, { "pmid": "22321698", "title": "Automatic recognition of conceptualization zones in scientific articles and two life science applications.", "abstract": "MOTIVATION\nScholarly biomedical publications report on the findings of a research investigation. Scientists use a well-established discourse structure to relate their work to the state of the art, express their own motivation and hypotheses and report on their methods, results and conclusions. 
In previous work, we have proposed ways to explicitly annotate the structure of scientific investigations in scholarly publications. Here we present the means to facilitate automatic access to the scientific discourse of articles by automating the recognition of 11 categories at the sentence level, which we call Core Scientific Concepts (CoreSCs). These include: Hypothesis, Motivation, Goal, Object, Background, Method, Experiment, Model, Observation, Result and Conclusion. CoreSCs provide the structure and context to all statements and relations within an article and their automatic recognition can greatly facilitate biomedical information extraction by characterizing the different types of facts, hypotheses and evidence available in a scientific publication.\n\n\nRESULTS\nWe have trained and compared machine learning classifiers (support vector machines and conditional random fields) on a corpus of 265 full articles in biochemistry and chemistry to automatically recognize CoreSCs. We have evaluated our automatic classifications against a manually annotated gold standard, and have achieved promising accuracies with 'Experiment', 'Background' and 'Model' being the categories with the highest F1-scores (76%, 62% and 53%, respectively). We have analysed the task of CoreSC annotation both from a sentence classification as well as sequence labelling perspective and we present a detailed feature evaluation. The most discriminative features are local sentence features such as unigrams, bigrams and grammatical dependencies while features encoding the document structure, such as section headings, also play an important role for some of the categories. We discuss the usefulness of automatically generated CoreSCs in two biomedical applications as well as work in progress.\n\n\nAVAILABILITY\nA web-based tool for the automatic annotation of articles with CoreSCs and corresponding documentation is available online at http://www.sapientaproject.com/software http://www.sapientaproject.com also contains detailed information pertaining to CoreSC annotation and links to annotation guidelines as well as a corpus of manually annotated articles, which served as our training data.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "18433469", "title": "Extraction of semantic biomedical relations from text using conditional random fields.", "abstract": "BACKGROUND\nThe increasing amount of published literature in biomedicine represents an immense source of knowledge, which can only efficiently be accessed by a new generation of automated information extraction tools. Named entity recognition of well-defined objects, such as genes or proteins, has achieved a sufficient level of maturity such that it can form the basis for the next step: the extraction of relations that exist between the recognized entities. Whereas most early work focused on the mere detection of relations, the classification of the type of relation is also of great importance and this is the focus of this work. In this paper we describe an approach that extracts both the existence of a relation and its type. Our work is based on Conditional Random Fields, which have been applied with much success to the task of named entity recognition.\n\n\nRESULTS\nWe benchmark our approach on two different tasks. The first task is the identification of semantic relations between diseases and treatments. The available data set consists of manually annotated PubMed abstracts. 
The second task is the identification of relations between genes and diseases from a set of concise phrases, so-called GeneRIF (Gene Reference Into Function) phrases. In our experimental setting, we do not assume that the entities are given, as is often the case in previous relation extraction work. Rather the extraction of the entities is solved as a subproblem. Compared with other state-of-the-art approaches, we achieve very competitive results on both data sets. To demonstrate the scalability of our solution, we apply our approach to the complete human GeneRIF database. The resulting gene-disease network contains 34758 semantic associations between 4939 genes and 1745 diseases. The gene-disease network is publicly available as a machine-readable RDF graph.\n\n\nCONCLUSION\nWe extend the framework of Conditional Random Fields towards the annotation of semantic relations from text and apply it to the biomedical domain. Our approach is based on a rich set of textual features and achieves a performance that is competitive to leading approaches. The model is quite general and can be extended to handle arbitrary biological entities and relation types. The resulting gene-disease network shows that the GeneRIF database provides a rich knowledge source for text mining. Current work is focused on improving the accuracy of detection of entities as well as entity boundaries, which will also greatly improve the relation extraction performance." }, { "pmid": "25886734", "title": "Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research.", "abstract": "BACKGROUND\nCurrent biomedical research needs to leverage and exploit the large amount of information reported in scientific publications. Automated text mining approaches, in particular those aimed at finding relationships between entities, are key for identification of actionable knowledge from free text repositories. We present the BeFree system aimed at identifying relationships between biomedical entities with a special focus on genes and their associated diseases.\n\n\nRESULTS\nBy exploiting morpho-syntactic information of the text, BeFree is able to identify gene-disease, drug-disease and drug-target associations with state-of-the-art performance. The application of BeFree to real-case scenarios shows its effectiveness in extracting information relevant for translational research. We show the value of the gene-disease associations extracted by BeFree through a number of analyses and integration with other data sources. BeFree succeeds in identifying genes associated to a major cause of morbidity worldwide, depression, which are not present in other public resources. Moreover, large-scale extraction and analysis of gene-disease associations, and integration with current biomedical knowledge, provided interesting insights on the kind of information that can be found in the literature, and raised challenges regarding data prioritization and curation. We found that only a small proportion of the gene-disease associations discovered by using BeFree is collected in expert-curated databases. Thus, there is a pressing need to find alternative strategies to manual curation, in order to review, prioritize and curate text-mining data and incorporate it into domain-specific databases. 
We present our strategy for data prioritization and discuss its implications for supporting biomedical research and applications.\n\n\nCONCLUSIONS\nBeFree is a novel text mining system that performs competitively for the identification of gene-disease, drug-disease and drug-target associations. Our analyses show that mining only a small fraction of MEDLINE results in a large dataset of gene-disease associations, and only a small proportion of this dataset is actually recorded in curated resources (2%), raising several issues on data prioritization and curation. We propose that joint analysis of text mined data with data curated by experts appears as a suitable approach to both assess data quality and highlight novel and interesting information." }, { "pmid": "17291334", "title": "BioInfer: a corpus for information extraction in the biomedical domain.", "abstract": "BACKGROUND\nLately, there has been a great interest in the application of information extraction methods to the biomedical domain, in particular, to the extraction of relationships of genes, proteins, and RNA from scientific publications. The development and evaluation of such methods requires annotated domain corpora.\n\n\nRESULTS\nWe present BioInfer (Bio Information Extraction Resource), a new public resource providing an annotated corpus of biomedical English. We describe an annotation scheme capturing named entities and their relationships along with a dependency analysis of sentence syntax. We further present ontologies defining the types of entities and relationships annotated in the corpus. Currently, the corpus contains 1100 sentences from abstracts of biomedical research articles annotated for relationships, named entities, as well as syntactic dependencies. Supporting software is provided with the corpus. The corpus is unique in the domain in combining these annotation types for a single set of sentences, and in the level of detail of the relationship annotation.\n\n\nCONCLUSION\nWe introduce a corpus targeted at protein, gene, and RNA relationships which serves as a resource for the development of information extraction systems and their components such as parsers and domain analyzers. The corpus will be maintained and further developed with a current version being available at http://www.it.utu.fi/BioInfer." }, { "pmid": "18173834", "title": "Evaluation of time profile reconstruction from complex two-color microarray designs.", "abstract": "BACKGROUND\nAs an alternative to the frequently used \"reference design\" for two-channel microarrays, other designs have been proposed. These designs have been shown to be more profitable from a theoretical point of view (more replicates of the conditions of interest for the same number of arrays). However, the interpretation of the measurements is less straightforward and a reconstruction method is needed to convert the observed ratios into the genuine profile of interest (e.g. a time profile). The potential advantages of using these alternative designs thus largely depend on the success of the profile reconstruction. Therefore, we compared to what extent different linear models agree with each other in reconstructing expression ratios and corresponding time profiles from a complex design.\n\n\nRESULTS\nOn average the correlation between the estimated ratios was high, and all methods agreed with each other in predicting the same profile, especially for genes of which the expression profile showed a large variance across the different time points. 
Assessing the similarity in profile shape, it appears that, the more similar the underlying principles of the methods (model and input data), the more similar their results. Methods with a dye effect seemed more robust against array failure. The influence of a different normalization was not drastic and independent of the method used.\n\n\nCONCLUSION\nIncluding a dye effect such as in the methods lmbr_dye, anovaFix and anovaMix compensates for residual dye related inconsistencies in the data and renders the results more robust against array failure. Including random effects requires more parameters to be estimated and is only advised when a design is used with a sufficient number of replicates. Because of this, we believe lmbr_dye, anovaFix and anovaMix are most appropriate for practical use." }, { "pmid": "22554700", "title": "The EU-ADR corpus: annotated drugs, diseases, targets, and their relationships.", "abstract": "Corpora with specific entities and relationships annotated are essential to train and evaluate text-mining systems that are developed to extract specific structured information from a large corpus. In this paper we describe an approach where a named-entity recognition system produces a first annotation and annotators revise this annotation using a web-based interface. The agreement figures achieved show that the inter-annotator agreement is much better than the agreement with the system provided annotations. The corpus has been annotated for drugs, disorders, genes and their inter-relationships. For each of the drug-disorder, drug-target, and target-disorder relations three experts have annotated a set of 100 abstracts. These annotated relationships will be used to train and evaluate text-mining software to capture these relationships in texts." }, { "pmid": "22226192", "title": "A comparison study on feature selection of DNA structural properties for promoter prediction.", "abstract": "BACKGROUND\nPromoter prediction is an integrant step for understanding gene regulation and annotating genomes. Traditional promoter analysis is mainly based on sequence compositional features. Recently, many kinds of structural features have been employed in promoter prediction. However, considering the high-dimensionality and overfitting problems, it is unfeasible to utilize all available features for promoter prediction. Thus it is necessary to choose some appropriate features for the prediction task.\n\n\nRESULTS\nThis paper conducts an extensive comparison study on feature selection of DNA structural properties for promoter prediction. Firstly, to examine whether promoters possess some special structures, we carry out a systematical comparison among the profiles of thirteen structural features on promoter and non-promoter sequences. Secondly, we investigate the correlations between these structural features and promoter sequences. Thirdly, both filter and wrapper methods are utilized to select appropriate feature subsets from thirteen different kinds of structural features for promoter prediction, and the predictive power of the selected feature subsets is evaluated. Finally, we compare the prediction performance of the feature subsets selected in this paper with nine existing promoter prediction approaches.\n\n\nCONCLUSIONS\nExperimental results show that the structural features are differentially correlated to promoters. Specifically, DNA-bending stiffness, DNA denaturation and energy-related features are highly correlated with promoters. 
The predictive power for promoter sequences differentiates greatly among different structural features. Selecting the relevant features can significantly improve the accuracy of promoter prediction." }, { "pmid": "22233443", "title": "Protein docking prediction using predicted protein-protein interface.", "abstract": "BACKGROUND\nMany important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations.\n\n\nRESULTS\nWe present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pair wise protein docking prediction algorithm, LZerD, which we have developed earlier. PI-LZerD starts from performing docking prediction using the provided protein-protein binding interface prediction as constraints, which is followed by the second round of docking with updated docking interface information to further improve docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves the docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering.\n\n\nCONCLUSION\nWe have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy over alternative methods in the series of benchmark experiments including docking using actual docking interface site predictions as well as unbound docking cases." }, { "pmid": "21199577", "title": "MTML-msBayes: approximate Bayesian comparative phylogeographic inference from multiple taxa and multiple loci with rate heterogeneity.", "abstract": "BACKGROUND\nMTML-msBayes uses hierarchical approximate Bayesian computation (HABC) under a coalescent model to infer temporal patterns of divergence and gene flow across codistributed taxon-pairs. Under a model of multiple codistributed taxa that diverge into taxon-pairs with subsequent gene flow or isolation, one can estimate hyper-parameters that quantify the mean and variability in divergence times or test models of migration and isolation. The software uses multi-locus DNA sequence data collected from multiple taxon-pairs and allows variation across taxa in demographic parameters as well as heterogeneity in DNA mutation rates across loci. The method also allows a flexible sampling scheme: different numbers of loci of varying length can be sampled from different taxon-pairs.\n\n\nRESULTS\nSimulation tests reveal increasing power with increasing numbers of loci when attempting to distinguish temporal congruence from incongruence in divergence times across taxon-pairs. These results are robust to DNA mutation rate heterogeneity. 
Estimating mean divergence times and testing simultaneous divergence was less accurate with migration, but improved if one specified the correct migration model. Simulation validation tests demonstrated that one can detect the correct migration or isolation model with high probability, and that this HABC model testing procedure was greatly improved by incorporating a summary statistic originally developed for this task (Wakeley's ΨW). The method is applied to an empirical data set of three Australian avian taxon-pairs and a result of simultaneous divergence with some subsequent gene flow is inferred.\n\n\nCONCLUSIONS\nTo retain flexibility and compatibility with existing bioinformatics tools, MTML-msBayes is a pipeline software package consisting of Perl, C and R programs that are executed via the command line. Source code and binaries are available for download at http://msbayes.sourceforge.net/ under an open source license (GNU Public License)." }, { "pmid": "23323936", "title": "Negated bio-events: analysis and identification.", "abstract": "BACKGROUND\nNegation occurs frequently in scientific literature, especially in biomedical literature. It has previously been reported that around 13% of sentences found in biomedical research articles contain negation. Historically, the main motivation for identifying negated events has been to ensure their exclusion from lists of extracted interactions. However, recently, there has been a growing interest in negative results, which has resulted in negation detection being identified as a key challenge in biomedical relation extraction. In this article, we focus on the problem of identifying negated bio-events, given gold standard event annotations.\n\n\nRESULTS\nWe have conducted a detailed analysis of three open access bio-event corpora containing negation information (i.e., GENIA Event, BioInfer and BioNLP'09 ST), and have identified the main types of negated bio-events. We have analysed the key aspects of a machine learning solution to the problem of detecting negated events, including selection of negation cues, feature engineering and the choice of learning algorithm. Combining the best solutions for each aspect of the problem, we propose a novel framework for the identification of negated bio-events. We have evaluated our system on each of the three open access corpora mentioned above. The performance of the system significantly surpasses the best results previously reported on the BioNLP'09 ST corpus, and achieves even better results on the GENIA Event and BioInfer corpora, both of which contain more varied and complex events.\n\n\nCONCLUSIONS\nRecently, in the field of biomedical text mining, the development and enhancement of event-based systems has received significant interest. The ability to identify negated events is a key performance element for these systems. We have conducted the first detailed study on the analysis and identification of negated bio-events. Our proposed framework can be integrated with state-of-the-art event extraction systems. The resulting systems will be able to extract bio-events with attached polarities from textual documents, which can serve as the foundation for more elaborate systems that are able to detect mutually contradicting bio-events." 
}, { "pmid": "22621266", "title": "Extracting semantically enriched events from biomedical literature.", "abstract": "BACKGROUND\nResearch into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them.\n\n\nRESULTS\nBased on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP'09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP'09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task.\n\n\nCONCLUSIONS\nWe have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare." }, { "pmid": "23092060", "title": "Interrater reliability: the kappa statistic.", "abstract": "The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued use of percent agreement due to its inability to account for chance agreement. 
He introduced the Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from -1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested." }, { "pmid": "15130936", "title": "Distribution of information in biomedical abstracts and full-text publications.", "abstract": "MOTIVATION\nFull-text documents potentially hold more information than their abstracts, but require more resources for processing. We investigated the added value of full text over abstracts in terms of information content and occurrences of gene symbol--gene name combinations that can resolve gene-symbol ambiguity.\n\n\nRESULTS\nWe analyzed a set of 3902 biomedical full-text articles. Different keyword measures indicate that information density is highest in abstracts, but that the information coverage in full texts is much greater than in abstracts. Analysis of five different standard sections of articles shows that the highest information coverage is located in the results section. Still, 30-40% of the information mentioned in each section is unique to that section. Only 30% of the gene symbols in the abstract are accompanied by their corresponding names, and a further 8% of the gene names are found in the full text. In the full text, only 18% of the gene symbols are accompanied by their gene names." } ]
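The interrater-reliability entry above walks through percent agreement (agreement scores divided by total scores) and Cohen's kappa. As a minimal illustration of both computations, the sketch below uses two invented raters' labels; all values are hypothetical and serve only to show how the chance correction changes the result.

```python
import numpy as np

def percent_agreement(rater_a, rater_b):
    """Fraction of items on which the two raters assign the same label."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    return np.mean(rater_a == rater_b)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement corrected for chance agreement."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    labels = np.union1d(rater_a, rater_b)
    p_obs = np.mean(rater_a == rater_b)
    # Expected agreement if the two raters labelled items independently.
    p_exp = sum(np.mean(rater_a == l) * np.mean(rater_b == l) for l in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical ratings of 10 items by two raters (binary labels).
a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(percent_agreement(a, b))  # 0.8
print(cohens_kappa(a, b))       # ~0.58
```

With these invented ratings, raw agreement is 0.8 while kappa drops to roughly 0.58, illustrating why the entry argues that percent agreement alone can overstate reliability.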
Food Science & Nutrition
29983947
PMC6021740
10.1002/fsn3.613
Hierarchical network modeling with multidimensional information for aquatic safety management in the cold chain
Cold-chain information is characterized by loss and dispersion owing to the different collection methods used along the chain. The quality-decay factors of aquatic products can be described as multidimensional information. A series of nodes carrying multidimensional information is assembled into hierarchies that describe the environmental conditions and locations in the supply chain. Each hierarchy level constitutes a sequence of node information in a network, which is used for internal information analysis. The cross-layer information structure is defined as "bridge" information, which records information transmission between hierarchies from the viewpoint of the whole chain. This study establishes a novel structured model that describes the cold chain of aquatic products based on a network-hierarchy framework. An organized and sustainable transmission process can be built and recorded through the multidimensional attributes over the whole course of the cold chain. In addition, seamless connections between hierarchies are attained through continuous environmental information records that monitor the quality of aquatic products. Quality assessments and shelf-life predictions serve as risk controls to monitor and trace the safety of aquatic products from a supply-chain perspective.
2. RELATED WORK. In most related research, information modeling is studied from the perspective of information management, including information collection, information transmission, and information processing. Several well-known methodologies exist: network management modeling, information-record methodologies based on the workflow diagram and the Petri net, and multidimensional information description modeling. The data cube and the OLAP data warehouse are commonly applied to describe the node information of a given aquatic product as multidimensional information (Cui & Wang, 2016; Huang et al., 2015; Imani & Ghassemian, 2016; Lughofer & Sayed-Mouchaweh, 2015; Rice, 2016; Uzam, Gelen, & Saleh, 2017; Wang, Zeshui, Fujita, & Liu, 2016). Wang, Li, Luo, and Fujita (2016) focused on dynamic RSA for the multidimensional variation of an ordered information system and proposed a novel incremental simplified algorithm. Sifer and Potter (2013) combined distributional and correlational views of hierarchical multidimensional data to explore data distribution and correlation. Ang and Wang (2015) used energy data with multiple attributes to analyze changes in energy consumption over time. Kołacz and Grzegorzewski (2016) proposed an axiomatic definition of a dispersion measure that can be applied to any finite sample of k-dimensional real observations. Boulila, Le Ber, Bimonte, Grac, and Cernesson (2014) presented the application of data warehouse (DW) and online analytical processing (OLAP) technologies to the field of water quality assessment. Santos, Castro, and Velasco (2016) presented the automation of the mapping between XBRL and the multidimensional data model and included a formalization of the validation rules. The data cube and the OLAP data warehouse are also the most widely applied means of describing the cold-chain node information of a single aquatic product as multidimensional information. Usman, Pears, and Fong (2013) presented a novel methodology for the discovery of cubes of interest in large multidimensional datasets. Kaya and Alhajj (2014) proposed and developed three different academic networks with a novel data-cube-based modeling method. Kapelko and Kranakis (2016) considered n sensors placed randomly and independently with the uniform distribution in a d-dimensional unit cube. Aligon, Gallinucci, Golfarelli, Marcel, and Rizzi (2015) proposed a recommendation approach stemming from collaborative filtering for multidimensional cubes. Blanco, de Guzmán, Fernández-Medina, and Trujillo (2015) defined a model-driven approach for developing a secure DW repository by following a relational approach based on multidimensional modeling. Do (2014) applied online analytical processing (OLAP) to a product data management (PDM) database to evaluate the performance of in-progress product development. Mansmann, Rehman, Weiler, and Scholl (2014) introduced a data enrichment layer responsible for detecting new structural elements in the data using data mining and other techniques. Dehne, Kong, Rau-Chaplin, Zaboli, and Zhou (2015) introduced CR-OLAP, a scalable cloud-based real-time OLAP system, based on a new distributed index structure, the distributed PDCR tree. Network modeling, by contrast, is mainly a topological structure for describing how information is constituted and transmitted (Sookhak, Gani, Khan, & Buyya, 2017); it links the elements of an information description tightly and clearly.
Demirci, Yardimci, Muge Sayit, Tunali, and Bulut (2017) proposed a novel overlay architecture for constructing hierarchical and scalable clustering of peer-to-peer (P2P) networks. Alam, Dobbie, and Rehman (2015) used a hierarchical agglomerative approach with HPSO clustering and measured the performance of their proposed techniques by execution time. Wang, Yang, and Bin (2016) advanced a new hierarchical representation learning (HRL)-based spatiotemporal data redundancy reduction approach. The Petri net and the workflow diagram are concurrent event-recording formalisms for describing computing processes (Cheng, Fan, Jia, & Zhang, 2013; Long & Zhang, 2014; Ribas et al., 2015; Wu, Wu, Zhang, & Olson, 2013). Applying the Petri net is one solution for workflow diagramming because of its excellent support for process recording and information collection (Li, Wang, Zhao, & Liu, 2016; Nývlt, Haugen, & Ferkl, 2015; Zegordi & Davarzani, 2012). It is capable of recording and tracing information thoroughly; in particular, its transition-firing mechanism makes the connections among pieces of information explicit. Gamboa Quintanilla, Cardin, L'Anton, and Castagna (2016) presented a methodology to increase planning flexibility in service-oriented manufacturing systems (SOHMS). Liu and Barkaoui (2016) surveyed the state-of-the-art siphon theory of Petri nets, including basic concepts, computation of siphons, controllability conditions, and deadlock control policies based on siphons. Vatani and Doustmohammadi (2015) proposed a new method for the decomposition of first-order hybrid Petri nets (FOHPNs) and introduced hierarchical control of the subnets through a coordinator. Motallebi et al. defined parametric multisingular hybrid Petri nets (P-MSHPNs) as a parametric extension of MSHPNs (Motallebi & Azgomi, 2015). A workflow diagram is a chart that uses standard symbols to indicate the logical relationships of all work records across an organization and to present workflow connections, integrity, and the information-flow sequence in the cold chain. Ghafarian and Javadi (2015) proposed a workflow scheduling system that partitions a workflow into subworkflows to minimize data dependencies among them. Kranjc, Orač, Podpečan, Lavrač, and Robnik-Šikonja (2017) presented a platform for distributed computing, developed using the latest software technologies and computing paradigms, to enable workflow-based big data mining. Liu, Fan, Wang, and Leon Zhao (2017) proposed a novel approach, the data-centric workflow model reuse (DWMR) framework, as a solution to workflow model reuse. Park, Ahn, and Kim (2016) formalized a theoretical framework covering the discovery and analysis phases and conceived a series of formalisms and algorithms for representing, discovering, and analyzing workflow-supported social networks. Hsieh and Lin (2014) applied PNML to develop context-aware workflow systems using Petri nets. Ribas et al. (2015) proposed a place/transition (Petri net)-based multicriteria decision-making (MCDM) framework to assess a cloud service in comparison with a similar on-premise service. An advantage of the Petri net is that information can be classified with different colors according to the time-coordinate records.
So far, however, there has been little research on a modeling methodology that merges the Petri net with the workflow diagram for information definition, information transmission, and supply-chain structure analysis. Each of these descriptions and organizational structures is practical in its own right. Multidimensional information is suitable for describing the node information of aquatic products in detail; the Petri net and the workflow diagram are applied to describe procedures of information transmission, while network modeling is built for hierarchical and structural descriptions. Currently, however, none of these methodologies is comprehensive enough for both description and application. Furthermore, the scope of present research lacks the depth and breadth needed to study the cold chain of aquatic products. In-depth information collection for the cold chain is deficient; for instance, distribution-node and location information, full-dimensional environmental information, and processing information of aquatic products are often missing. Even worse, these fragments of information are not bound together tightly. Meanwhile, the shortage of cold-chain information records can cause data loss and fragmentation, because few data interface adapters are established between vendors during information delivery. Addressing the information deficiencies mentioned above is decisive for reliable quality and safety assessment and validity estimation of the cold chain. In this research, hierarchical network information modeling incorporating multidimensional information is applied to deliver a full-dimensional, whole-chain description of aquatic products in the cold chain.
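To make the hierarchy-of-nodes idea above concrete, here is a minimal sketch assuming a simple Python data model: nodes carry multidimensional environmental records, levels represent hierarchy layers, and cross-level "bridge" links allow whole-chain tracing. All class names, fields, and example values are invented for illustration and are not the data model used in the study.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class EnvRecord:
    """One multidimensional environmental observation at a cold-chain node."""
    timestamp: str
    temperature_c: float
    humidity_pct: float
    location: str

@dataclass
class ChainNode:
    """A node (e.g., vessel, cold store, truck, retailer) in one hierarchy level."""
    node_id: str
    level: int                                          # 0 = catch, 1 = processing, ...
    records: List[EnvRecord] = field(default_factory=list)
    bridges: List[str] = field(default_factory=list)    # cross-level "bridge" links

class ColdChainModel:
    """Hierarchical network: nodes grouped by level, joined by bridge edges."""
    def __init__(self):
        self.nodes: Dict[str, ChainNode] = {}

    def add_node(self, node: ChainNode):
        self.nodes[node.node_id] = node

    def add_bridge(self, upstream_id: str, downstream_id: str):
        # Record the hand-over of a product batch between hierarchy levels.
        self.nodes[upstream_id].bridges.append(downstream_id)

    def trace(self, start_id: str) -> List[ChainNode]:
        """Follow bridge links downstream to reconstruct the chain for traceability."""
        path: List[ChainNode] = []
        current: Optional[str] = start_id
        while current is not None:
            node = self.nodes[current]
            path.append(node)
            current = node.bridges[0] if node.bridges else None
        return path

# Hypothetical usage: two levels joined by one bridge, with one environmental record.
model = ColdChainModel()
model.add_node(ChainNode("vessel-01", level=0))
model.add_node(ChainNode("coldstore-07", level=1))
model.add_bridge("vessel-01", "coldstore-07")
model.nodes["coldstore-07"].records.append(
    EnvRecord("2016-05-01T08:00", temperature_c=-0.5, humidity_pct=85.0,
              location="port cold store"))
print([n.node_id for n in model.trace("vessel-01")])
```

The trace method is deliberately simplistic (it follows only the first bridge of each node); a fuller model would handle branching distribution paths, but the sketch shows how level, node, and bridge information can be kept together.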
[ "7873331", "23973840", "27041314" ]
[ { "pmid": "7873331", "title": "A dynamic approach to predicting bacterial growth in food.", "abstract": "A new member of the family of growth models described by Baranyi et al. (1993a) is introduced in which the physiological state of the cells is represented by a single variable. The duration of lag is determined by the value of that variable at inoculation and by the post-inoculation environment. When the subculturing procedure is standardized, as occurs in laboratory experiments leading to models, the physiological state of the inoculum is relatively constant and independent of subsequent growth conditions. It is shown that, with cells with the same pre-inoculation history, the product of the lag parameter and the maximum specific growth rate is a simple transformation of the initial physiological state. An important consequence is that it is sufficient to estimate this constant product and to determine how the environmental factors define the specific growth rate without modelling the environment dependence of the lag separately. Assuming that the specific growth rate follows the environmental changes instantaneously, the new model can also describe the bacterial growth in an environment where the factors, such as temperature, pH and aw, change with time." }, { "pmid": "23973840", "title": "Modeling the influence of temperature, water activity and water mobility on the persistence of Salmonella in low-moisture foods.", "abstract": "Salmonella can survive in low-moisture foods for long periods of time. Reduced microbial inactivation during heating is believed to be due to the interaction of cells and water, and is thought to be related to water activity (a(w)). Little is known about the role of water mobility in influencing the survival of Salmonella in low-moisture foods. The aim of this study was to determine how the physical state of water in low-moisture foods influences the survival of Salmonella and to use this information to develop mathematical models that predict the behavior of Salmonella in these foods. Whey protein powder of differing water mobilities was produced by pH adjustment and heat denaturation, and then equilibrated to aw levels between 0.19±0.03 and 0.54±0.02. Water mobility was determined by wide-line proton-NMR. Powders were inoculated with a four-strain cocktail of Salmonella, vacuum-sealed and stored at 21, 36, 50, 60, 70 and 80°C. Survival data was fitted to the log-linear, the Geeraerd-tail, the Weibull, the biphasic-linear and the Baranyi models. The model with the best ability to describe the data over all temperatures, water activities and water mobilities (f(test)<F(table)) was selected for secondary modeling. The Weibull model provided the best description of survival kinetics for Salmonella. The influence of temperature, aw and water mobility on the survival of Salmonella was evaluated using multiple linear regression. Secondary models were developed and then validated in dry non-fat dairy and grain, and low-fat peanut and cocoa products within the range of the modeled data. Water activity significantly influenced the survival of Salmonella at all temperatures, survival increasing with decreasing a(w). Water mobility did not significantly influence survival independent of a(w). Secondary models were useful in predicting the survival of Salmonella in various low-moisture foods providing a correlation of R=0.94 and an acceptable prediction performance of 81%. 
The % bias and % discrepancy results showed that the models were more accurate in predicting survival in non-fat food systems as compared to foods containing low-fat levels (12% fat). The models developed in this study represent the first predictive models for survival of Salmonella in low-moisture foods. These models provide baseline information to be used for research on risk mitigation strategies for low-moisture foods." }, { "pmid": "27041314", "title": "Super-chilling (-0.7°C) with high-CO2 packaging inhibits biochemical changes of microbial origin in catfish (Clarias gariepinus) muscle during storage.", "abstract": "Controlled freezing-point storage (CFPS) is an emerging preservative technique desirable for fish. In the present study, catfish fillets were stored at -0.7°C under different packaging atmospheres: air (AP), vacuum (VP), and 60% CO2/40% N2 (MAP). Chemical, microbiological, and sensory analyses were performed during storage. Results showed the following descending order of chemical changes (degradation of nucleotides, conversion of protein to volatile-based nitrogen and biogenic amines, and production of trimethylamine nitrogen), as well as loss of sensory properties: 4°C AP>-0.7°C AP≈4°C VP>-0.7°C VP≈4°C MAP>-0.7°C MAP. The chemical changes were well-correlated with microbial growth suggesting the microbiological pathways. Hence, CFPS at -0.7°C in combination with high-CO2 MAP can effectively maintain the quality of fresh catfish meat compared to traditional preservation methods." } ]
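The Salmonella survival entry above identifies the Weibull model as the best primary model; a commonly used form is log10(N(t)/N0) = -(t/δ)^p. The sketch below fits this generic form to synthetic survival data with SciPy; the data, initial guesses, and exact parameterization are assumptions and may differ from those used in the cited study.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, delta, p):
    """Weibull inactivation model: log10(N(t)/N0) = -(t/delta)**p."""
    return -(t / delta) ** p

# Synthetic storage time (days) vs. log-reduction data, for illustration only.
t = np.array([0, 5, 10, 20, 40, 80], dtype=float)
log_reduction = np.array([0.0, -0.3, -0.6, -1.1, -2.0, -3.4])

(delta_hat, p_hat), _ = curve_fit(weibull_log_survival, t, log_reduction, p0=(20.0, 1.0))
print(f"delta = {delta_hat:.1f} d, p = {p_hat:.2f}")

# Predicted time to a 2-log reduction under the fitted parameters.
t_2log = delta_hat * 2 ** (1 / p_hat)
print(f"time to 2-log reduction ~ {t_2log:.1f} d")
```

In a secondary-modeling step of the kind described in the abstract, the fitted delta and p values would then be regressed against temperature, water activity, and other factors.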
Frontiers in Neurorobotics
29977200
PMC6022201
10.3389/fnbot.2018.00032
Experience Replay Using Transition Sequences
Experience replay is one of the most commonly used approaches to improve the sample efficiency of reinforcement learning algorithms. In this work, we propose an approach to select and replay sequences of transitions in order to accelerate the learning of a reinforcement learning agent in an off-policy setting. In addition to selecting appropriate sequences, we also artificially construct transition sequences using information gathered from previous agent-environment interactions. These sequences, when replayed, allow value function information to trickle down to larger sections of the state/state-action space, thereby making the most of the agent's experience. We demonstrate our approach on modified versions of standard reinforcement learning tasks such as the mountain car and puddle world problems and empirically show that it enables faster, and more accurate learning of value functions as compared to other forms of experience replay. Further, we briefly discuss some of the possible extensions to this work, as well as applications and situations where this approach could be particularly useful.
2. Related work. The problem of learning from limited experience is not new in the field of RL (Thrun, 1992; Thomas and Brunskill, 2016). Generally, learning speed and sample efficiency are critical factors that determine the feasibility of deploying learning algorithms in the real world. Particularly for robotics applications, these factors are even more important, as exploration of the environment is typically time and energy expensive (Bakker et al., 2006; Kober et al., 2013). It is thus important for a learning agent to be able to gather as much relevant knowledge as possible from whatever exploratory actions occur. Off-policy algorithms are well suited to this need, as they enable multiple value functions to be learned together in parallel. When the behavior and target policies vary considerably from each other, importance sampling (Sutton and Barto, 1998; Rubinstein and Kroese, 2016) is commonly used in order to obtain more accurate estimates of the value functions. Importance sampling reduces the variance of the estimate by taking into account the distributions associated with the behavior and target policies, and modifying the off-policy update equations accordingly. However, the estimates are still unlikely to be close to their optimal values if the agent receives very little experience relevant to a particular task. This issue is partially addressed with experience replay, in which information contained in the replay memory is used from time to time to update the value functions. As a result, the agent is able to learn from uncorrelated historical data, and the sample efficiency of learning is greatly improved. This approach has received a lot of attention in recent years due to its utility in deep RL applications (Adam et al., 2012; Mnih et al., 2013, 2015, 2016; de Bruin et al., 2015). Recent works (Narasimhan et al., 2015; Schaul et al., 2016) have revealed that certain transitions are more useful than others. Schaul et al. (2016) prioritized transitions on the basis of their associated TD errors. They also briefly mentioned the possibility of replaying transitions in a sequential manner. The experience replay framework developed by Adam et al. (2012) involved some variants that replayed sequences of experiences, but these sequences were drawn randomly from the replay memory. More recently, Isele and Cosgun (2018) reported a selective experience replay approach aimed at performing well in the context of lifelong learning (Thrun, 1996). The authors of this work proposed a long-term replay memory in addition to the conventionally used one. Certain bases for designing this long-term replay memory, such as favoring transitions associated with high rewards and high absolute TD errors, are similar to the ones described in the present work. However, the approach does not explore the replay of sequences, and its fundamental purpose is to shield against catastrophic forgetting (Goodfellow et al., 2013) when multiple tasks are learned in sequence. The replay approach described in the present work focuses on enabling more sample-efficient learning in situations where positive rewards occur rarely. Apart from this, Andrychowicz et al. (2017) proposed a hindsight experience replay approach, directed at addressing this problem, in which each episode is replayed with a goal that is different from the original goal of the agent. The authors reported significant improvements in learning performance on problems with sparse and binary rewards.
These improvements were essentially brought about by allowing the learned value/Q values (which would otherwise remain mostly unchanged due to the sparsity of rewards) to undergo significant change under the influence of an arbitrary goal. The underlying idea behind our approach also involves modification of the Q-values in reward-sparse regions of the state-action space. The modifications, however, are not based on arbitrary goals; they are selectively performed on state-action pairs belonging to successful transition sequences associated with high absolute TD errors. Nevertheless, the hindsight replay approach is orthogonal to our proposed approach, and hence could be used in conjunction with it. Much like in Schaul et al. (2016), TD errors have been frequently used as a basis for prioritization in other RL problems (Thrun, 1992; White et al., 2014; Schaul et al., 2016). In particular, the model-based approach of prioritized sweeping (Moore and Atkeson, 1993; van Seijen and Sutton, 2013) prioritizes backups that are expected to result in a significant change in the value function. The algorithm we propose here uses a model-free architecture, and it is based on the idea of selectively reusing previous experience. However, we describe the reuse of sequences of transitions based on the TD errors observed when these transitions take place. Replaying sequences of experiences also seems to be biologically plausible (Buhry et al., 2011; Ólafsdóttir et al., 2015). In addition, it is known that animals tend to remember experiences that lead to high rewards (Singer and Frank, 2009). This idea is reflected in our work, as only those transition sequences that lead to high rewards are considered for storage in the replay memory. By filtering transition sequences in this manner, we simultaneously address the issue of determining which experiences are to be stored. In addition to selecting transition sequences, we also generate virtual sequences of transitions which the agent could have experienced, but in reality did not. This virtual experience is then replayed to improve the agent's learning. Some early approaches in RL, such as the Dyna architecture (Sutton, 1990), also made use of simulated experience to improve the value function estimates. However, unlike the approach proposed here, the simulated experience was generated from models of the reward function and transition probabilities that were continuously updated based on the agent's interactions with the environment. In this sense, the virtual experience generated in our approach is more grounded in reality, as it is based directly on the data collected through the agent-environment interaction. In more recent work, Fonteneau et al. describe an approach that generates artificial trajectories and uses them to find policies with acceptable performance guarantees (Fonteneau et al., 2013). However, this approach is designed for batch RL, and the generated artificial trajectories are not constructed on a TD error basis. Our approach also recognizes the real-world limitations of replay memory (de Bruin et al., 2015), and stores only a certain amount of information at a time, specified by memory parameters. The selected and generated sequences are stored in the replay memory in the form of libraries, which are continuously updated so that the agent is equipped with the transition sequences most relevant to the task at hand.
Figure 1. Structure of the proposed algorithm in contrast to the traditional off-policy structure. Q and R denote the action-value function and reward, respectively.
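To make the selective sequence-replay idea concrete, here is a minimal sketch of a buffer that keeps only reward-terminated transition sequences and replays the one with the largest accumulated absolute TD error through tabular Q-learning updates. It is an illustrative simplification, not the authors' implementation; all names, defaults, and the reward threshold are assumptions.

```python
from collections import defaultdict, deque

class SequenceReplayBuffer:
    """Keeps transition sequences that ended in a positive reward and replays the
    sequence with the largest accumulated |TD error| via Q-learning updates."""

    def __init__(self, max_sequences=20, gamma=0.99, alpha=0.1):
        self.sequences = deque(maxlen=max_sequences)  # items: (priority, transitions)
        self.current = []                             # transitions of the ongoing episode
        self.gamma, self.alpha = gamma, alpha

    def observe(self, s, a, r, s_next, td_error, done):
        self.current.append((s, a, r, s_next, td_error))
        if done:
            if r > 0:  # only reward-terminated sequences are worth remembering here
                priority = sum(abs(t[-1]) for t in self.current)
                self.sequences.append((priority, list(self.current)))
            self.current.clear()

    def replay(self, Q, n_actions):
        """Replay the highest-priority sequence backwards so that value information
        propagates from the rewarding state toward earlier states."""
        if not self.sequences:
            return
        _, seq = max(self.sequences, key=lambda item: item[0])
        for s, a, r, s_next, _ in reversed(seq):
            target = r + self.gamma * max(Q[(s_next, b)] for b in range(n_actions))
            Q[(s, a)] += self.alpha * (target - Q[(s, a)])

# Hypothetical usage with a tabular value dictionary:
Q = defaultdict(float)
buffer = SequenceReplayBuffer()
# Inside the agent-environment loop the agent would call, per step:
#   buffer.observe(s, a, r, s_next, td_error, done)
# and periodically:
#   buffer.replay(Q, n_actions=3)
```

Replaying the selected sequence in reverse order is one simple way to let the sparse reward influence the values of the states that preceded it, which is the effect the discussion above aims for.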
[ "21918724", "24049244", "25719670", "26112828", "20064396" ]
[ { "pmid": "21918724", "title": "Reactivation, replay, and preplay: how it might all fit together.", "abstract": "Sequential activation of neurons that occurs during \"offline\" states, such as sleep or awake rest, is correlated with neural sequences recorded during preceding exploration phases. This so-called reactivation, or replay, has been observed in a number of different brain regions such as the striatum, prefrontal cortex, primary visual cortex and, most prominently, the hippocampus. Reactivation largely co-occurs together with hippocampal sharp-waves/ripples, brief high-frequency bursts in the local field potential. Here, we first review the mounting evidence for the hypothesis that reactivation is the neural mechanism for memory consolidation during sleep. We then discuss recent results that suggest that offline sequential activity in the waking state might not be simple repetitions of previously experienced sequences. Some offline sequential activity occurs before animals are exposed to a novel environment for the first time, and some sequences activated offline correspond to trajectories never experienced by the animal. We propose a conceptual framework for the dynamics of offline sequential activity that can parsimoniously describe a broad spectrum of experimental results. These results point to a potentially broader role of offline sequential activity in cognitive functions such as maintenance of spatial representation, learning, or planning." }, { "pmid": "24049244", "title": "Batch Mode Reinforcement Learning based on the Synthesis of Artificial Trajectories.", "abstract": "In this paper, we consider the batch mode reinforcement learning setting, where the central problem is to learn from a sample of trajectories a policy that satisfies or optimizes a performance criterion. We focus on the continuous state space case for which usual resolution schemes rely on function approximators either to represent the underlying control problem or to represent its value function. As an alternative to the use of function approximators, we rely on the synthesis of \"artificial trajectories\" from the given sample of trajectories, and show that this idea opens new avenues for designing and analyzing algorithms for batch mode reinforcement learning." }, { "pmid": "25719670", "title": "Human-level control through deep reinforcement learning.", "abstract": "The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. 
Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks." }, { "pmid": "26112828", "title": "Hippocampal place cells construct reward related sequences through unexplored space.", "abstract": "Dominant theories of hippocampal function propose that place cell representations are formed during an animal's first encounter with a novel environment and are subsequently replayed during off-line states to support consolidation and future behaviour. Here we report that viewing the delivery of food to an unvisited portion of an environment leads to off-line pre-activation of place cells sequences corresponding to that space. Such 'preplay' was not observed for an unrewarded but otherwise similar portion of the environment. These results suggest that a hippocampal representation of a visible, yet unexplored environment can be formed if the environment is of motivational relevance to the animal. We hypothesise such goal-biased preplay may support preparation for future experiences in novel environments." }, { "pmid": "20064396", "title": "Rewarded outcomes enhance reactivation of experience in the hippocampus.", "abstract": "Remembering experiences that lead to reward is essential for survival. The hippocampus is required for forming and storing memories of events and places, but the mechanisms that associate specific experiences with rewarding outcomes are not understood. Event memory storage is thought to depend on the reactivation of previous experiences during hippocampal sharp wave ripples (SWRs). We used a sequence switching task that allowed us to examine the interaction between SWRs and reward. We compared SWR activity after animals traversed spatial trajectories and either received or did not receive a reward. Here, we show that rat hippocampal CA3 principal cells are significantly more active during SWRs following receipt of reward. This SWR activity was further enhanced during learning and reactivated coherent elements of the paths associated with the reward location. This enhanced reactivation in response to reward could be a mechanism to bind rewarding outcomes to the experiences that precede them." } ]
Genes
29914084
PMC6027449
10.3390/genes9060301
Decision Variants for the Automatic Determination of Optimal Feature Subset in RF-RFE
Feature selection, which identifies a set of the most informative features from the original feature space, is widely used to simplify predictors. Recursive feature elimination (RFE), one of the most popular feature selection approaches, is effective for reducing data dimensionality and increasing efficiency. RFE produces a ranking of features as well as candidate subsets with their corresponding accuracies. The subset with the highest accuracy (HA) or a preset number of features (PreNum) is often used as the final subset. However, the former may lead to a large number of features being selected, and if there is no prior knowledge about the preset number, the final subset selection is often ambiguous and subjective. A proper decision variant is therefore needed to determine the optimal subset automatically. In this study, we conduct pioneering work exploring decision variants applied after a list of candidate subsets has been obtained from RFE. We provide a detailed analysis and comparison of several decision variants for automatically selecting the optimal feature subset. A random forest (RF)-recursive feature elimination (RF-RFE) algorithm and a voting strategy are introduced. We validated the variants on two very different molecular biology datasets, one from a toxicogenomic study and the other from protein sequence analysis. The study provides an automated way to determine the optimal feature subset when using RF-RFE.
2. Related Works. Over the past few years, a number of feature selection algorithms have been proposed, such as exhaustive search, forward selection, and backward elimination. They can be roughly divided into three categories: filter methods, wrapper methods, and embedded methods [8,9,10]. A filter method uses an indicator to evaluate the features, ranks them according to the index values, and picks the features at the top of the ranking; compared with the other two approaches, it takes the least time. A wrapper method evaluates a feature according to the final performance of the model after that feature is added. Filter and wrapper methods can be combined with various algorithms, while embedded methods select features as part of the model construction process and are closely integrated with the algorithm itself; thus, feature selection is completed during model training. Among the feature selection algorithms in the literature, RFE is one of the most popular. It was introduced by Guyon et al. for the selection of optimal gene subsets in cancer classification [11], and was later widely used in other fields, such as DNA microarray studies [12,13], toxicity studies [4], and image classification studies [14,15]. Recursive feature elimination is commonly used together with many classification algorithms (e.g., support vector machine, RF) to build more efficient classifiers. A ranking of features as well as candidate subsets is produced through RFE, and a list of accuracy values corresponding to each subset is also generated in this procedure. A support vector machine with recursive feature elimination (SVM-RFE) selects features using the SVM weights and has shown good feature selection ability; it combines the excellent performance of SVM with the advantage of RFE [11]. Yang et al. used SVM-RFE to maximize the classification accuracy of fault detection by selecting the best combination of variables [16]; Duan et al. used SVM-RFE to select genes in cancer classification [8]. However, SVM-RFE has intrinsic limitations in data analysis; for instance, it performs better on small datasets [17]. Random forest is a widely used machine learning model introduced by Breiman [18]. It has several advantages over other algorithms: for instance, it handles high-dimensional data well and provides a ranking of feature importance that represents each feature's contribution to classification. Compared with other methods, RF-RFE has proven more effective, using fewer features to reach higher classification accuracy [19]. Granitto et al. used the RF-RFE algorithm for feature selection in a proton transfer reaction-mass spectrometry (PTR-MS) study [17]; Chen et al. proposed an enhanced recursive feature elimination method to classify small training samples [20]. The combination of RFE with classification algorithms leads to lower data dimensionality and higher computational efficiency. However, problems in selecting the optimal subset arise in the RFE procedure. Usually, a number N that determines how many features are selected is set in advance, and the top N features from the ranking are selected as the final subset. If N is not known in advance, the choice of variant for deciding the optimal subset is often ambiguous and subjective.
Besides a preset number, most studies used the subset corresponding to the HA, or related variants, to determine the optimal subset. To gain an overall view of the variants currently in use, we analyzed the 30 most recent publications that used RFE for classification or regression; a statistical summary is given (see Figure 1). In these papers, the features were sorted according to their importance; the least important features were removed, and the features used for classification were updated iteratively. Meanwhile, the classification accuracy of each feature subset was also reported in this procedure. Among these 30 studies, we found that the most commonly used selection variant is the highest accuracy (HA): 11 of them used HA as the selection variant [16,21,22,23,24,25,26,27,28,29]. In this method, the optimal feature subset is determined when the classification accuracy reaches its maximum or a certain percentage of the HA, e.g., 90%. For instance, in Yang et al.'s study, five features were selected when the accuracy was highest [27]. Six studies selected the subsets according to a predefined number [19,21,30,31,32,33]. In this method, within a certain accuracy range, the number of selected features differs across applications; Tiwari et al. selected the top 50 features to compare the classification accuracy, while others selected fewer than ten features [32]. Besides these, four studies used other selection variants [34,35,36,37]. Qian et al. used a least squares support vector machine and RFE to select the optimal feature subset [35]; they claimed that, compared with other methods, they could reach the same accuracy using fewer features, which shortened the execution time and increased computational efficiency. Furthermore, nine studies listed the accuracies or importance values but did not make a choice [38,39,40,41,42,43,44,45]. Song, for example, listed the classification accuracy for different numbers of features and plotted the curve for analysis, but gave no choice of an optimal feature subset [43].
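As a concrete illustration of the surveyed decision variants, the sketch below runs a simple RF-based RFE loop with scikit-learn and then applies the two most common rules, highest accuracy (HA) and a preset number of features (PreNum). The synthetic dataset, elimination step, and value of N are placeholders, not the settings used in the reviewed studies.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=50, n_informative=8, random_state=0)
features = list(range(X.shape[1]))
history = []  # (feature subset, cross-validated accuracy) at each RFE iteration

while features:
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(rf, X[:, features], y, cv=5).mean()
    history.append((list(features), acc))
    rf.fit(X[:, features], y)
    # Recursively eliminate the least important remaining feature (step size 1).
    weakest = int(np.argmin(rf.feature_importances_))
    features.pop(weakest)

# Decision variant 1: highest accuracy (HA).
ha_subset, ha_acc = max(history, key=lambda h: h[1])

# Decision variant 2: preset number of features (PreNum), e.g., N = 10.
pre_num = 10
pre_subset, pre_acc = next(h for h in history if len(h[0]) == pre_num)

print(f"HA: {len(ha_subset)} features, accuracy = {ha_acc:.3f}")
print(f"PreNum: {len(pre_subset)} features, accuracy = {pre_acc:.3f}")
```

The HA rule tends to return larger subsets whenever accuracy plateaus, while PreNum depends entirely on prior knowledge of N, which is exactly the ambiguity that motivates the decision variants examined in this study.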
[ "26612367", "17720704", "25313160", "23517638", "21566255", "25946884", "26861308", "26484910", "26945722", "26497657", "26460680", "26403299", "26518718", "26903567", "26807332", "26872146", "26297890", "26318777", "26793434", "24048354", "14960456" ]
[ { "pmid": "26612367", "title": "High-throughput imaging-based nephrotoxicity prediction for xenobiotics with diverse chemical structures.", "abstract": "The kidney is a major target for xenobiotics, which include drugs, industrial chemicals, environmental toxicants and other compounds. Accurate methods for screening large numbers of potentially nephrotoxic xenobiotics with diverse chemical structures are currently not available. Here, we describe an approach for nephrotoxicity prediction that combines high-throughput imaging of cultured human renal proximal tubular cells (PTCs), quantitative phenotypic profiling, and machine learning methods. We automatically quantified 129 image-based phenotypic features, and identified chromatin and cytoskeletal features that can predict the human in vivo PTC toxicity of 44 reference compounds with ~82 % (primary PTCs) or 89 % (immortalized PTCs) test balanced accuracies. Surprisingly, our results also revealed that a DNA damage response is commonly induced by different PTC toxicants that have diverse chemical structures and injury mechanisms. Together, our results show that human nephrotoxicity can be predicted with high efficiency and accuracy by combining cell-based and computational methods that are suitable for automation." }, { "pmid": "17720704", "title": "A review of feature selection techniques in bioinformatics.", "abstract": "Feature selection techniques have become an apparent need in many bioinformatics applications. In addition to the large pool of techniques that have already been developed in the machine learning and data mining fields, specific applications in bioinformatics have led to a wealth of newly proposed techniques. In this article, we make the interested reader aware of the possibilities of feature selection, providing a basic taxonomy of feature selection techniques, and discussing their use, variety and potential in a number of both common as well as upcoming bioinformatics applications." }, { "pmid": "25313160", "title": "Open TG-GATEs: a large-scale toxicogenomics database.", "abstract": "Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html." }, { "pmid": "23517638", "title": "In silico approaches for designing highly effective cell penetrating peptides.", "abstract": "BACKGROUND\nCell penetrating peptides have gained much recognition as a versatile transport vehicle for the intracellular delivery of wide range of cargoes (i.e. 
oligonucelotides, small molecules, proteins, etc.), that otherwise lack bioavailability, thus offering great potential as future therapeutics. Keeping in mind the therapeutic importance of these peptides, we have developed in silico methods for the prediction of cell penetrating peptides, which can be used for rapid screening of such peptides prior to their synthesis.\n\n\nMETHODS\nIn the present study, support vector machine (SVM)-based models have been developed for predicting and designing highly effective cell penetrating peptides. Various features like amino acid composition, dipeptide composition, binary profile of patterns, and physicochemical properties have been used as input features. The main dataset used in this study consists of 708 peptides. In addition, we have identified various motifs in cell penetrating peptides, and used these motifs for developing a hybrid prediction model. Performance of our method was evaluated on an independent dataset and also compared with that of the existing methods.\n\n\nRESULTS\nIn cell penetrating peptides, certain residues (e.g. Arg, Lys, Pro, Trp, Leu, and Ala) are preferred at specific locations. Thus, it was possible to discriminate cell-penetrating peptides from non-cell penetrating peptides based on amino acid composition. All models were evaluated using five-fold cross-validation technique. We have achieved a maximum accuracy of 97.40% using the hybrid model that combines motif information and binary profile of the peptides. On independent dataset, we achieved maximum accuracy of 81.31% with MCC of 0.63.\n\n\nCONCLUSION\nThe present study demonstrates that features like amino acid composition, binary profile of patterns and motifs, can be used to train an SVM classifier that can predict cell penetrating peptides with higher accuracy. The hybrid model described in this study achieved more accuracy than the previous methods and thus may complement the existing methods. Based on the above study, a user-friendly web server CellPPD has been developed to help the biologists, where a user can predict and design CPPs with much ease. CellPPD web server is freely accessible at http://crdd.osdd.net/raghava/cellppd/." }, { "pmid": "21566255", "title": "Robust feature selection for microarray data based on multicriterion fusion.", "abstract": "Feature selection often aims to select a compact feature subset to build a pattern classifier with reduced complexity, so as to achieve improved classification performance. From the perspective of pattern analysis, producing stable or robust solution is also a desired property of a feature selection algorithm. However, the issue of robustness is often overlooked in feature selection. In this study, we analyze the robustness issue existing in feature selection for high-dimensional and small-sized gene-expression data, and propose to improve robustness of feature selection algorithm by using multiple feature selection evaluation criteria. Based on this idea, a multicriterion fusion-based recursive feature elimination (MCF-RFE) algorithm is developed with the goal of improving both classification performance and stability of feature selection results. Experimental studies on five gene-expression data sets show that the MCF-RFE algorithm outperforms the commonly used benchmark feature selection algorithm SVM-RFE." 
}, { "pmid": "25946884", "title": "Margin-maximised redundancy-minimised SVM-RFE for diagnostic classification of mammograms.", "abstract": "Classification techniques function as a main component in digital mammography for breast cancer treatment. While many classification techniques currently exist, recent developments in the derivatives of Support Vector Machines (SVM) with feature selection have shown to yield superior classification accuracy rates in comparison with other competing techniques. In this paper, we propose a new classification technique that is derived from SVM in which margin is maximised and redundancy is minimised during the feature selection process. We have conducted experiments on the largest publicly available data set of mammograms. The empirical results indicate that our proposed classification technique performs superior to other previously proposed SVM-based techniques." }, { "pmid": "26861308", "title": "A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data.", "abstract": "The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthew's Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions." }, { "pmid": "26484910", "title": "A Pathway Based Classification Method for Analyzing Gene Expression for Alzheimer's Disease Diagnosis.", "abstract": "BACKGROUND\nRecent studies indicate that gene expression levels in blood may be able to differentiate subjects with Alzheimer's disease (AD) from normal elderly controls and mild cognitively impaired (MCI) subjects. However, there is limited replicability at the single marker level. A pathway-based interpretation of gene expression may prove more robust.\n\n\nOBJECTIVES\nThis study aimed to investigate whether a case/control classification model built on pathway level data was more robust than a gene level model and may consequently perform better in test data. 
The study used two batches of gene expression data from the AddNeuroMed (ANM) and Dementia Case Registry (DCR) cohorts.\n\n\nMETHODS\nOur study used Illumina Human HT-12 Expression BeadChips to collect gene expression from blood samples. Random forest modeling with recursive feature elimination was used to predict case/control status. Age and APOE ɛ4 status were used as covariates for all analysis.\n\n\nRESULTS\nGene and pathway level models performed similarly to each other and to a model based on demographic information only.\n\n\nCONCLUSIONS\nAny potential increase in concordance from the novel pathway level approach used here has not lead to a greater predictive ability in these datasets. However, we have only tested one method for creating pathway level scores. Further, we have been able to benchmark pathways against genes in datasets that had been extensively harmonized. Further work should focus on the use of alternative methods for creating pathway level scores, in particular those that incorporate pathway topology, and the use of an endophenotype based approach." }, { "pmid": "26945722", "title": "Computer aided analysis of gait patterns in patients with acute anterior cruciate ligament injury.", "abstract": "BACKGROUND\nGait analysis is a useful tool to evaluate the functional status of patients with anterior cruciate ligament injury. Pattern recognition methods can be used to automatically assess walking patterns and objectively support clinical decisions. This study aimed to test a pattern recognition system for analyzing kinematic gait patterns of recently anterior cruciate ligament injured patients and for evaluating the effects of a therapeutic treatment.\n\n\nMETHODS\nGait kinematics of seven male patients with an acute unilateral anterior cruciate ligament rupture and seven healthy males were recorded. A support vector machine was trained to distinguish the groups. Principal component analysis and recursive feature elimination were used to extract features from 3D marker trajectories. A Classifier Oriented Gait Score was defined as a measure of gait quality. Visualizations were used to allow functional interpretations of characteristic group differences. The injured group was evaluated by the system after a therapeutic treatment. The results were compared against a clinical rating of the patients' gait.\n\n\nFINDINGS\nCross validation yielded 100% accuracy. After the treatment the score improved significantly (P<0.01) as well as the clinical rating (P<0.05). The visualizations revealed characteristic kinematic features, which differentiated between the groups.\n\n\nINTERPRETATION\nThe results show that gait alterations in the early phase after anterior cruciate ligament injury can be detected automatically. The results of the automatic analysis are comparable with the clinical rating and support the validity of the system. The visualizations allow interpretations on discriminatory features and can facilitate the integration of the results into the diagnostic process." }, { "pmid": "26497657", "title": "Multivariate classification of smokers and nonsmokers using SVM-RFE on structural MRI images.", "abstract": "Voxel-based morphometry (VBM) studies have revealed gray matter alterations in smokers, but this type of analysis has poor predictive value for individual cases, which limits its applicability in clinical diagnoses and treatment. A predictive model would essentially embody a complex biomarker that could be used to evaluate treatment efficacy. 
In this study, we applied VBM along with a multivariate classification method consisting of a support vector machine with recursive feature elimination to discriminate smokers from nonsmokers using their structural MRI data. Mean gray matter volumes in 1,024 cerebral cortical regions of interest created using a subparcellated version of the Automated Anatomical Labeling template were calculated from 60 smokers and 60 nonsmokers, and served as input features to the classification procedure. The classifier achieved the highest accuracy of 69.6% when taking the 139 highest ranked features via 10-fold cross-validation. Critically, these features were later validated on an independent testing set that consisted of 28 smokers and 28 nonsmokers, yielding a 64.04% accuracy level (binomial P = 0.01). Following classification, exploratory post hoc regression analyses were performed, which revealed that gray matter volumes in the putamen, hippocampus, prefrontal cortex, cingulate cortex, caudate, thalamus, pre-/postcentral gyrus, precuneus, and the parahippocampal gyrus, were inversely related to smoking behavioral characteristics. These results not only indicate that smoking related gray matter alterations can provide predictive power for group membership, but also suggest that machine learning techniques can reveal underlying smoking-related neurobiology." }, { "pmid": "26460680", "title": "A highly accurate protein structural class prediction approach using auto cross covariance transformation and recursive feature elimination.", "abstract": "Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it is still a challenge for the low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select top K features according to their importance and these features are input to a support vector machine (SVM) to conduct the prediction. Performance evaluation of the proposed method is performed using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, the overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class especially for low-similarity datasets." }, { "pmid": "26403299", "title": "An automatic method for arterial pulse waveform recognition using KNN and SVM classifiers.", "abstract": "The measurement and analysis of the arterial pulse waveform (APW) are the means for cardiovascular risk assessment. Optical sensors represent an attractive instrumental solution to APW assessment due to their truly non-contact nature that makes the measurement of the skin surface displacement possible, especially at the carotid artery site. In this work, an automatic method to extract and classify the acquired data of APW signals and noise segments was proposed. Two classifiers were implemented: k-nearest neighbours and support vector machine (SVM), and a comparative study was made, considering widely used performance metrics. This work represents a wide study in feature creation for APW. 
A pool of 37 features was extracted and split in different subsets: amplitude features, time domain statistics, wavelet features, cross-correlation features and frequency domain statistics. The support vector machine recursive feature elimination was implemented for feature selection in order to identify the most relevant feature. The best result (0.952 accuracy) in discrimination between signals and noise was obtained for the SVM classifier with an optimal feature subset ." }, { "pmid": "26518718", "title": "Identification of gene markers in the development of smoking-induced lung cancer.", "abstract": "Lung cancer is a malignant tumor with high mortality in both women and men. To study the mechanisms of smoking-induced lung cancer, we analyzed microarray of GSE4115. GSE4115 was downloaded from Gene Expression Omnibus including 78 and 85 bronchial epithelium tissue samples separately from smokers with and without lung cancer. Limma package in R was used to screen differentially expressed genes (DEGs). Hierarchical cluster analysis for DEGs was conducted using orange software and visualized by distance map. Using DAVID software, functional and pathway enrichment analyses separately were conducted for the DEGs. And protein-protein interaction (PPI) network was constructed using Cytoscape software. Then, the pathscores of enriched pathways were calculated. Besides, functional features were screened and optimized using the recursive feature elimination (RFE) method. Additionally, the support vector machine (SVM) method was used to train model. Total 1923 DEGs were identified between the two groups. Hierarchical cluster analysis indicated that there were differences in gene level between the two groups. And SVM analysis indicated that the five features had potential diagnostic value. Importantly, MAPK1 (degree=30), SRC (degree=29), SMAD4 (degree=23), EEF1A1 (degree=21), TRAF2 (degree=21) and PLCG1 (degree=20) had higher degrees in the PPI network of the DEGs. They might be involved in smoking-induced lung cancer by interacting with each other (e.g. MAPK1-SMAD4, SMAD4-EEF1A1 and SRC-PLCG1). MAPK1, SRC, SMAD4, EEF1A1, TRAF2 and PLCG1 might be responsible for the development of smoking-induced lung cancer." }, { "pmid": "26903567", "title": "Random Forest (RF) Wrappers for Waveband Selection and Classification of Hyperspectral Data.", "abstract": "Hyperspectral data collected using a field spectroradiometer was used to model asymptomatic stress in Pinus radiata and Pinus patula seedlings infected with the pathogen Fusarium circinatum. Spectral data were analyzed using the random forest algorithm. To improve the classification accuracy of the model, subsets of wavebands were selected using three feature selection algorithms: (1) Boruta; (2) recursive feature elimination (RFE); and (3) area under the receiver operating characteristic curve of the random forest (AUC-RF). Results highlighted the robustness of the above feature selection methods when used in conjunction with the random forest algorithm for analyzing hyperspectral data. Overall, the Boruta feature selection algorithm provided the best results. When discriminating F. circinatum stress in Pinus radiata seedlings, Boruta selected wavebands (n = 69) yielded the best overall classification accuracies (training error of 17.00%, independent test error of 17.00% and an AUC value of 0.91). Classification results were, however, significantly lower for P. 
patula seedlings, with a training error of 24.00%, independent test error of 38.00%, and an AUC value of 0.65. A hybrid selection method that utilizes combinations of wavebands selected from the three feature selection algorithms was also tested. The hybrid method showed an improvement in classification accuracies for P. patula, and no improvement for P. radiata. The results of this study provide impetus towards implementing a hyperspectral framework for detecting stress within nursery environments." }, { "pmid": "26807332", "title": "A semi-supervised Support Vector Machine model for predicting the language outcomes following cochlear implantation based on pre-implant brain fMRI imaging.", "abstract": "INTRODUCTION\nWe developed a machine learning model to predict whether or not a cochlear implant (CI) candidate will develop effective language skills within 2 years after the CI surgery by using the pre-implant brain fMRI data from the candidate.\n\n\nMETHODS\nThe language performance was measured 2 years after the CI surgery by the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2). Based on the CELF-P2 scores, the CI recipients were designated as either effective or ineffective CI users. For feature extraction from the fMRI data, we constructed contrast maps using the general linear model, and then utilized the Bag-of-Words (BoW) approach that we previously published to convert the contrast maps into feature vectors. We trained both supervised models and semi-supervised models to classify CI users as effective or ineffective.\n\n\nRESULTS\nCompared with the conventional feature extraction approach, which used each single voxel as a feature, our BoW approach gave rise to much better performance for the classification of effective versus ineffective CI users. The semi-supervised model with the feature set extracted by the BoW approach from the contrast of speech versus silence achieved a leave-one-out cross-validation AUC as high as 0.97. Recursive feature elimination unexpectedly revealed that two features were sufficient to provide highly accurate classification of effective versus ineffective CI users based on our current dataset.\n\n\nCONCLUSION\nWe have validated the hypothesis that pre-implant cortical activation patterns revealed by fMRI during infancy correlate with language performance 2 years after cochlear implantation. The two brain regions highlighted by our classifier are potential biomarkers for the prediction of CI outcomes. Our study also demonstrated the superiority of the semi-supervised model over the supervised model. It is always worthwhile to try a semi-supervised model when unlabeled data are available." }, { "pmid": "26872146", "title": "A Feature Selection Algorithm to Compute Gene Centric Methylation from Probe Level Methylation Data.", "abstract": "DNA methylation is an important epigenetic event that effects gene expression during development and various diseases such as cancer. Understanding the mechanism of action of DNA methylation is important for downstream analysis. In the Illumina Infinium HumanMethylation 450K array, there are tens of probes associated with each gene. Given methylation intensities of all these probes, it is necessary to compute which of these probes are most representative of the gene centric methylation level. 
In this study, we developed a feature selection algorithm based on sequential forward selection that utilized different classification methods to compute gene centric DNA methylation using probe level DNA methylation data. We compared our algorithm to other feature selection algorithms such as support vector machines with recursive feature elimination, genetic algorithms and ReliefF. We evaluated all methods based on the predictive power of selected probes on their mRNA expression levels and found that a K-Nearest Neighbors classification using the sequential forward selection algorithm performed better than other algorithms based on all metrics. We also observed that transcriptional activities of certain genes were more sensitive to DNA methylation changes than transcriptional activities of other genes. Our algorithm was able to predict the expression of those genes with high accuracy using only DNA methylation data. Our results also showed that those DNA methylation-sensitive genes were enriched in Gene Ontology terms related to the regulation of various biological processes." }, { "pmid": "26297890", "title": "Classification of signaling proteins based on molecular star graph descriptors using Machine Learning models.", "abstract": "Signaling proteins are an important topic in drug development due to the increased importance of finding fast, accurate and cheap methods to evaluate new molecular targets involved in specific diseases. The complexity of the protein structure hinders the direct association of the signaling activity with the molecular structure. Therefore, the proposed solution involves the use of protein star graphs for the peptide sequence information encoding into specific topological indices calculated with S2SNet tool. The Quantitative Structure-Activity Relationship classification model obtained with Machine Learning techniques is able to predict new signaling peptides. The best classification model is the first signaling prediction model, which is based on eleven descriptors and it was obtained using the Support Vector Machines-Recursive Feature Elimination (SVM-RFE) technique with the Laplacian kernel (RFE-LAP) and an AUROC of 0.961. Testing a set of 3114 proteins of unknown function from the PDB database assessed the prediction performance of the model. Important signaling pathways are presented for three UniprotIDs (34 PDBs) with a signaling prediction greater than 98.0%." }, { "pmid": "26318777", "title": "Effects of imaging modalities, brain atlases and feature selection on prediction of Alzheimer's disease.", "abstract": "BACKGROUND\nThe choice of biomarkers for early detection of Alzheimer's disease (AD) is important for improving the accuracy of imaging-based prediction of conversion from mild cognitive impairment (MCI) to AD. The primary goal of this study was to assess the effects of imaging modalities and brain atlases on prediction. We also investigated the influence of support vector machine recursive feature elimination (SVM-RFE) on predictive performance.\n\n\nMETHODS\nEighty individuals with amnestic MCI [40 developed AD within 3 years] underwent structural magnetic resonance imaging (MRI) and (18)F-fluorodeoxyglucose positron emission tomography (FDG-PET) scans at baseline. Using Automated Anatomical Labeling (AAL) and LONI Probabilistic Brain Atlas (LPBA40), we extracted features representing gray matter density and relative cerebral metabolic rate for glucose in each region of interest from the baseline MRI and FDG-PET data, respectively. 
We used linear SVM ensemble with bagging and computed the area under the receiver operating characteristic curve (AUC) as a measure of classification performance. We performed multiple SVM-RFE to compute feature ranking. We performed analysis of variance on the mean AUCs for eight feature sets.\n\n\nRESULTS\nThe interactions between atlas and modality choices were significant. The main effect of SVM-RFE was significant, but the interactions with the other factors were not significant.\n\n\nCOMPARISON WITH EXISTING METHOD\nMultimodal features were found to be better than unimodal features to predict AD. FDG-PET was found to be better than MRI.\n\n\nCONCLUSIONS\nImaging modalities and brain atlases interact with each other and affect prediction. SVM-RFE can improve the predictive accuracy when using atlas-based features." }, { "pmid": "26793434", "title": "Classification of autistic individuals and controls using cross-task characterization of fMRI activity.", "abstract": "Multivariate pattern analysis (MVPA) has been applied successfully to task-based and resting-based fMRI recordings to investigate which neural markers distinguish individuals with autistic spectrum disorders (ASD) from controls. While most studies have focused on brain connectivity during resting state episodes and regions of interest approaches (ROI), a wealth of task-based fMRI datasets have been acquired in these populations in the last decade. This calls for techniques that can leverage information not only from a single dataset, but from several existing datasets that might share some common features and biomarkers. We propose a fully data-driven (voxel-based) approach that we apply to two different fMRI experiments with social stimuli (faces and bodies). The method, based on Support Vector Machines (SVMs) and Recursive Feature Elimination (RFE), is first trained for each experiment independently and each output is then combined to obtain a final classification output. Second, this RFE output is used to determine which voxels are most often selected for classification to generate maps of significant discriminative activity. Finally, to further explore the clinical validity of the approach, we correlate phenotypic information with obtained classifier scores. The results reveal good classification accuracy (range between 69% and 92.3%). Moreover, we were able to identify discriminative activity patterns pertaining to the social brain without relying on a priori ROI definitions. Finally, social motivation was the only dimension which correlated with classifier scores, suggesting that it is the main dimension captured by the classifiers. Altogether, we believe that the present RFE method proves to be efficient and may help identifying relevant biomarkers by taking advantage of acquired task-based fMRI datasets in psychiatric populations." }, { "pmid": "24048354", "title": "Toxygates: interactive toxicity analysis on a hybrid microarray and linked data platform.", "abstract": "MOTIVATION\nIn early stage drug development, it is desirable to assess the toxicity of compounds as quickly as possible. Biomarker genes can help predict whether a candidate drug will adversely affect a given individual, but they are often difficult to discover. In addition, the mechanism of toxicity of many drugs and common compounds is not yet well understood. 
The Japanese Toxicogenomics Project provides a large database of systematically collected microarray samples from rats (liver, kidney and primary hepatocytes) and human cells (primary hepatocytes) after exposure to 170 different compounds in different dosages and at different time intervals. However, until now, no intuitive user interface has been publically available, making it time consuming and difficult for individual researchers to explore the data.\n\n\nRESULTS\nWe present Toxygates, a user-friendly integrated analysis platform for this database. Toxygates combines a large microarray dataset with the ability to fetch semantic linked data, such as pathways, compound-protein interactions and orthologs, on demand. It can also perform pattern-based compound ranking with respect to the expression values of a set of relevant candidate genes. By using Toxygates, users can freely interrogate the transcriptome's response to particular compounds and conditions, which enables deep exploration of toxicity mechanisms." }, { "pmid": "14960456", "title": "affy--analysis of Affymetrix GeneChip data at the probe level.", "abstract": "MOTIVATION\nThe processing of the Affymetrix GeneChip data has been a recent focus for data analysts. Alternatives to the original procedure have been proposed and some of these new methods are widely used.\n\n\nRESULTS\nThe affy package is an R package of functions and classes for the analysis of oligonucleotide arrays manufactured by Affymetrix. The package is currently in its second release, affy provides the user with extreme flexibility when carrying out an analysis and make it possible to access and manipulate probe intensity data. In this paper, we present the main classes and functions in the package and demonstrate how they can be used to process probe-level data. We also demonstrate the importance of probe-level analysis when using the Affymetrix GeneChip platform." } ]
Scientific Reports
29967326
PMC6028651
10.1038/s41598-018-28243-x
The Role of PET-Based Radiomic Features in Predicting Local Control of Esophageal Cancer Treated with Concurrent Chemoradiotherapy
This study was designed to evaluate the predictive performance of 18F-fluorodeoxyglucose positron emission tomography (PET)-based radiomic features for local control of esophageal cancer treated with concurrent chemoradiotherapy (CRT). For each of the 30 patients enrolled, 440 radiomic features were extracted from both pre-CRT and mid-CRT PET images. The top 25 features with the highest areas under the receiver operating characteristic curve for identifying local control status were selected as discriminative features. Four machine-learning methods, random forest (RF), support vector machine, logistic regression, and extreme learning machine, were used to build predictive models with clinical features, radiomic features, or a combination of both. An RF model incorporating both clinical and radiomic features achieved the best predictive performance, with an accuracy of 93.3%, a specificity of 95.7%, and a sensitivity of 85.7%. Based on risk scores of local failure predicted by this model, the 2-year local control rate and progression-free survival (PFS) rate were 100.0% (95% CI 100.0–100.0%) and 52.2% (31.8–72.6%) in the low-risk group and 14.3% (0.0–40.2%) and 0.0% (0.0–40.2%) in the high-risk group, respectively. This model may have the potential to stratify patients with different risks of local failure after CRT for esophageal cancer, which may facilitate the delivery of personalized treatment.
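As a rough illustration of the workflow summarized above — ranking 440 candidate features by univariate ROC AUC, keeping the top 25, combining them with clinical features, and deriving random-forest risk scores for low-/high-risk stratification — the following Python sketch uses synthetic placeholder arrays. The variable names, class balance, and hyperparameters are assumptions for illustration only; this is not the authors' implementation.

```python
# Rough sketch of the described workflow on synthetic placeholder data
# (30 patients, 440 radiomic features, 7 local failures); NOT the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict

rng = np.random.default_rng(0)
n_patients, n_features = 30, 440
X_radiomic = rng.normal(size=(n_patients, n_features))    # placeholder radiomic features
X_clinical = rng.normal(size=(n_patients, 3))             # placeholder clinical features
y = np.array([1] * 7 + [0] * 23)                          # 1 = local failure (toy labels)

# Rank each radiomic feature by its univariate ROC AUC and keep the 25 most discriminative.
aucs = []
for j in range(n_features):
    auc = roc_auc_score(y, X_radiomic[:, j])
    aucs.append(max(auc, 1.0 - auc))                      # direction-agnostic AUC
top25 = np.argsort(aucs)[::-1][:25]

# Combine the selected radiomic features with the clinical features and derive
# cross-validated random-forest risk scores (probability of local failure).
X = np.hstack([X_clinical, X_radiomic[:, top25]])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
risk_score = cross_val_predict(rf, X, y, cv=cv, method="predict_proba")[:, 1]

# Dichotomize patients at the median risk score for a low-/high-risk comparison.
high_risk = risk_score >= np.median(risk_score)
print("cross-validated AUC:", round(roc_auc_score(y, risk_score), 3))
print("patients flagged high risk:", int(high_risk.sum()))
```

Note that the sketch selects features on the full dataset purely to mirror the described procedure; in a full analysis the AUC-based selection would be nested inside the cross-validation (or performed on an independent set) to avoid optimistic bias.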
Related Works

Recently, radiomics analysis has been widely used in diagnosis and in the prediction of treatment response. Lambin et al. [11] described radiomics as a bridge between medical imaging and personalized medicine. Fehr et al. [12] proposed using MRI-based texture features to automatically classify prostate cancer by Gleason score. Coroller et al. [13] found that radiomic phenotype features were predictive of pathological response in non-small cell lung cancer. Maforo et al. [16] investigated computer-extracted tumor phenotypes using radiomic features derived from diffusion-weighted imaging. Aerts et al. [19] used the radiomics approach to decode tumor phenotypes with noninvasive imaging. Zhao et al. [5] reported that the intratumoral 18F-FDG distribution corresponds well to the expression levels of Glut-1, Glut-3, and HK-II. Tixier et al. [6] proposed several textural features of the tumor metabolic distribution to predict therapy response in esophageal cancer and showed that these features allowed the best stratification of esophageal carcinoma patients in the context of therapy response prediction. Tan et al. [8] used spatial-temporal 18F-FDG PET features to predict the pathologic response of esophageal cancer to neoadjuvant chemoradiation therapy. Moreover, many studies have focused on changes in radiomic features over the course of treatment (delta-radiomics features) and found them to have prognostic value in cancer. Fave et al. [14] reported that delta-radiomics features calculated from CT images can be used to predict patient outcomes in non-small cell lung cancer, and Cunliffe et al. [15] used delta-radiomics features to identify esophageal cancer patients who developed radiation pneumonitis after radiation therapy.

Many machine-learning methods can be used to build models that predict local control status from radiomic features, such as RF, SVM, LR, and ELM [23,25,26]. The RF classifier is an ensemble of decision trees and is robust against overfitting; each tree generates a prediction, and the final result is determined by accumulating the votes of all trees. The SVM model uses an RBF kernel to map training samples into a high-dimensional space and seeks the hyperplane that separates the two classes (local control and local failure) with the widest margin. The LR model applies the logistic function to a linear combination of the input features, so that the output falls within [0, 1] and can be interpreted as the probability of local failure. ELMs are feedforward neural networks for classification and regression with one or more layers of hidden nodes; the hidden-node parameters are assigned randomly and do not need to be tuned, so only the output weights are learned.
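To make the classifier comparison above concrete, the sketch below trains the four model families named in this section (RF, an RBF-kernel SVM, LR, and a minimal single-hidden-layer ELM) with cross-validation on synthetic placeholder data. The data, hyperparameters, and the SimpleELM class are illustrative assumptions — scikit-learn ships no ELM, so a minimal one is written by hand — and none of this is the study's implementation.

```python
# Illustrative comparison of the four classifier families named above (RF, SVM with
# an RBF kernel, logistic regression, and a minimal single-hidden-layer ELM).
# The data here are synthetic placeholders, not the study's features.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


class SimpleELM(BaseEstimator, ClassifierMixin):
    """Minimal extreme learning machine: random fixed hidden layer, least-squares output weights."""

    def __init__(self, n_hidden=50, alpha=1e-3, random_state=0):
        self.n_hidden = n_hidden
        self.alpha = alpha
        self.random_state = random_state

    def _hidden(self, X):
        return np.tanh(X @ self.W_ + self.b_)

    def fit(self, X, y):
        rng = np.random.default_rng(self.random_state)
        self.classes_ = np.unique(y)
        # Hidden-node weights and biases are drawn at random and never tuned.
        self.W_ = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b_ = rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Only the output weights are learned, via ridge-regularised least squares.
        T = (y == self.classes_[1]).astype(float) * 2 - 1
        self.beta_ = np.linalg.solve(H.T @ H + self.alpha * np.eye(self.n_hidden), H.T @ T)
        return self

    def predict(self, X):
        scores = self._hidden(X) @ self.beta_
        return np.where(scores >= 0, self.classes_[1], self.classes_[0])


# Placeholder data roughly matching the study's scale (30 patients, 28 model inputs).
X, y = make_classification(n_samples=30, n_features=28, n_informative=5, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "ELM": make_pipeline(StandardScaler(), SimpleELM(n_hidden=50)),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {acc.mean():.3f}")
```

Wrapping each model in a pipeline with standardization keeps the comparison fair, since the SVM, LR, and ELM are scale-sensitive while the random forest is not.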
[ "22565611", "20177086", "11831394", "15809491", "21321270", "19403881", "25695610", "28975929", "25733904", "27085484", "28373718", "25670540", "26242589", "22257792", "22898692", "25586952", "26176655" ]
[ { "pmid": "22565611", "title": "Failure patterns in patients with esophageal cancer treated with definitive chemoradiation.", "abstract": "BACKGROUND\nLocal failure after definitive chemoradiation therapy for unresectable esophageal cancer remains problematic. Little is known about the failure pattern based on modern-day radiation treatment volumes. We hypothesized that most local failures would be within the gross tumor volume (GTV), where the bulk of the tumor burden resides.\n\n\nMETHODS\nWe reviewed treatment volumes for 239 patients who underwent definitive chemoradiation therapy and compared this information with failure patterns on follow-up positron emission tomography (PET). Failures were categorized as within the GTV, the larger clinical target volume (CTV, which encompasses microscopic disease), or the still larger planning target volume (PTV, which encompasses setup variability) or outside the radiation field.\n\n\nRESULTS\nAt a median follow-up time of 52.6 months (95% confidence interval, 46.1-56.7 months), 119 patients (50%) had experienced local failure, 114 (48%) had distant failure, and 74 (31%) had no evidence of failure. Of all local failures, 107 (90%) were within the GTV, 27 (23%) were within the CTV, and 14 (12%) were within in the PTV. On multivariate analysis, GTV failure was associated with tumor status (T3/T4 vs T1/T2; odds ratio, 6.35; P = .002), change in standardized uptake value on PET before and after treatment (decrease >52%: odds ratio, 0.368; P = .003), and tumor size (>8 cm, 4.08; P = .009).\n\n\nCONCLUSIONS\nMost local failures after definitive chemoradiation for unresectable esophageal cancer occur in the GTV. Future therapeutic strategies should focus on enhancing local control." }, { "pmid": "20177086", "title": "Prediction of tumor response to neoadjuvant therapy in patients with esophageal cancer with use of 18F FDG PET: a systematic review.", "abstract": "PURPOSE\nTo systematically review the accuracy of fluorine 18 ((18)F) fluorodeoxyglucose (FDG) positron emission tomography (PET) in the prediction of tumor response to neoadjuvant therapy in patients with esophageal cancer.\n\n\nMATERIALS AND METHODS\nThe MEDLINE and EMBASE databases were systematically searched for relevant studies. Methodologic quality of the included studies was assessed. Sensitivities and specificities of (18)F FDG PET in individual studies were calculated and underwent meta-analysis with a random effects model. A summary receiver operating characteristic curve (sROC) was constructed with the Moses-Shapiro-Littenberg method. A chi(2) test was performed to test for heterogeneity (defined as P < .10). Potential sources for heterogeneity were explored by assessing whether certain covariates significantly (P < .05) influenced the relative diagnostic odds ratio.\n\n\nRESULTS\nTwenty reports, comprising a total of 849 patients with esophageal cancer, were included. Overall, the studies were of moderate methodologic quality. Sensitivity and specificity of (18)F FDG PET ranged from 33% to 100% and from 30% to 100%, respectively, with pooled estimates of 67% (95% confidence interval: 62%, 72%) and 68% (95% confidence interval: 64%, 73%), respectively. The area under the sROC curve was 0.7815. There was significant heterogeneity in both the sensitivity and specificity of the included studies (P < .0001). Spearman rho between the logit of sensitivity and the logit of 1-specificity was 0.086 (P = .719), which suggested that there was no threshold effect. 
Studies performed outside of the United States and studies of higher methodologic quality yielded significantly higher overall accuracy.\n\n\nCONCLUSION\nOn the basis of current evidence, (18)F FDG PET should not yet be used in routine clinical practice to guide neoadjuvant therapy decisions in patients with esophageal cancer.\n\n\nSUPPLEMENTAL MATERIAL\nhttp://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.09091324/-/DC1." }, { "pmid": "11831394", "title": "From tumor biology to clinical Pet: a review of positron emission tomography (PET) in oncology.", "abstract": "Cancer cells show increased metabolism of both glucose and amino acids, which can be monitored with 18F-2-deoxy-2-fluoro-D-glucose (FDG), a glucose analogue, and 11C-L-methionine (Met), respectively. FDG uptake is higher in fast-growing than in slow-growing tumors. FDG uptake is considered to be a good marker of the grade of malignancy. Several studies have indicated that the degree of FDG uptake in primary lung cancer can be used as a prognostic indicator. Differential diagnosis of lung tumors has been studied extensively with both computed tomography (CT) and positron emission tomography (PET). It has been established that FDG-PET is clinically very useful and that its diagnostic accuracy is higher than that of CT. Detection of lymph node or distant metastases in known cancer patients using a whole-body imaging technique with FDG-PET has become a good indication for PET. FDG uptake may be seen in a variety of tissues due to physiological glucose consumption. Also FDG uptake is not specific for cancer. Various types of active inflammation showed FDG uptake to a certain high level. Understanding of the physiological and benign causes of FDG uptake is important for accurate interpretation of FDG-PET. In monitoring radio/chemotherapy, changes in FDG uptake correlate with the number of viable cancer cells, whereas Met is a marker of proliferation. Reduction of FDG uptake is a sensitive marker of viable tissue, preceding necrotic extension and volumetric shrinkage. FDG-PET is useful for the detection of recurrence and for monitoring the therapeutic response of tumor tissues in various cancers, including those of the lung, colon, and head and neck. Thus, PET, particularly with FDG, is effective in monitoring cancer cell viability, and is clinically very useful for the diagnosis and detection of recurrence of lung and other cancers." }, { "pmid": "15809491", "title": "Biologic correlates of intratumoral heterogeneity in 18F-FDG distribution with regional expression of glucose transporters and hexokinase-II in experimental tumor.", "abstract": "UNLABELLED\nThe biologic mechanisms involved in the intratumoral heterogeneous distribution of 18F-FDG have not been fully investigated. To clarify factors inducing heterogeneous 18F-FDG distribution, we determined the intratumoral distribution of 18F-FDG by autoradiography (ARG) and compared it with the regional expression levels of glucose transporters Glut-1 and Glut-3 and hexokinase-II (HK-II) in a rat model of malignant tumor.\n\n\nMETHODS\nRats were inoculated with allogenic hepatoma cells (KDH-8) into the left calf muscle (n = 7). Tumor tissues were excised 1 h after the intravenous injection of 18F-FDG and sectioned to obtain 2 adjacent slices for ARG and histochemical studies. The regions of interest (ROIs) were placed on ARG images to cover mainly the central (CT) and peripheral (PT) regions of viable tumor tissues and necrotic/apoptotic (NA) regions. 
The radioactivity in each ROI was analyzed quantitatively using a computerized imaging analysis system. The expression levels of Glut-1, Glut-3, and HK-II were determined by immunostaining and semiquantitative evaluation. The hypoxia-inducible factor 1 (HIF-1) was also immunostained.\n\n\nRESULTS\nARG images showed that intratumoral 18F-FDG distribution was heterogeneous. The accumulation of 18F-FDG in the CT region was the highest, which was 1.6 and 2.3 times higher than those in the PT and NA regions, respectively (P < 0.001). The expression levels of Glut-1, Glut-3, and HK-II were markedly higher in the CT region (P < 0.001) compared with those in the PT region. The intratumoral distribution of 18F-FDG significantly correlated with the expression levels of Glut-1, Glut-3, and HK-II (r = 0.923, P < 0.001 for Glut-1; r = 0.829, P < 0.001 for Glut-3; and r = 0.764, P < 0.01 for HK-II). The positive staining of HIF-1 was observed in the CT region.\n\n\nCONCLUSION\nThese results demonstrate that intratumoral 18F-FDG distribution corresponds well to the expression levels of Glut-1, Glut-3, and HK-II. The elevated expression levels of Glut-1, Glut-3, and HK-II, induced by hypoxia (HIF-1), may be contributing factors to the higher 18F-FDG accumulation in the CT region." }, { "pmid": "21321270", "title": "Intratumor heterogeneity characterized by textural features on baseline 18F-FDG PET images predicts response to concomitant radiochemotherapy in esophageal cancer.", "abstract": "UNLABELLED\n(18)F-FDG PET is often used in clinical routine for diagnosis, staging, and response to therapy assessment or prediction. The standardized uptake value (SUV) in the primary or regional area is the most common quantitative measurement derived from PET images used for those purposes. The aim of this study was to propose and evaluate new parameters obtained by textural analysis of baseline PET scans for the prediction of therapy response in esophageal cancer.\n\n\nMETHODS\nForty-one patients with newly diagnosed esophageal cancer treated with combined radiochemotherapy were included in this study. All patients underwent pretreatment whole-body (18)F-FDG PET. Patients were treated with radiotherapy and alkylatinlike agents (5-fluorouracil-cisplatin or 5-fluorouracil-carboplatin). Patients were classified as nonresponders (progressive or stable disease), partial responders, or complete responders according to the Response Evaluation Criteria in Solid Tumors. Different image-derived indices obtained from the pretreatment PET tumor images were considered. These included usual indices such as maximum SUV, peak SUV, and mean SUV and a total of 38 features (such as entropy, size, and magnitude of local and global heterogeneous and homogeneous tumor regions) extracted from the 5 different textures considered. The capacity of each parameter to classify patients with respect to response to therapy was assessed using the Kruskal-Wallis test (P < 0.05). Specificity and sensitivity (including 95% confidence intervals) for each of the studied parameters were derived using receiver-operating-characteristic curves.\n\n\nRESULTS\nRelationships between pairs of voxels, characterizing local tumor metabolic nonuniformities, were able to significantly differentiate all 3 patient groups (P < 0.0006). 
Regional measures of tumor characteristics, such as size of nonuniform metabolic regions and corresponding intensity nonuniformities within these regions, were also significant factors for prediction of response to therapy (P = 0.0002). Receiver-operating-characteristic curve analysis showed that tumor textural analysis can provide nonresponder, partial-responder, and complete-responder patient identification with higher sensitivity (76%-92%) than any SUV measurement.\n\n\nCONCLUSION\nTextural features of tumor metabolic distribution extracted from baseline (18)F-FDG PET images allow for the best stratification of esophageal carcinoma patients in the context of therapy-response prediction." }, { "pmid": "19403881", "title": "From RECIST to PERCIST: Evolving Considerations for PET response criteria in solid tumors.", "abstract": "UNLABELLED\nThe purpose of this article is to review the status and limitations of anatomic tumor response metrics including the World Health Organization (WHO) criteria, the Response Evaluation Criteria in Solid Tumors (RECIST), and RECIST 1.1. This article also reviews qualitative and quantitative approaches to metabolic tumor response assessment with (18)F-FDG PET and proposes a draft framework for PET Response Criteria in Solid Tumors (PERCIST), version 1.0.\n\n\nMETHODS\nPubMed searches, including searches for the terms RECIST, positron, WHO, FDG, cancer (including specific types), treatment response, region of interest, and derivative references, were performed. Abstracts and articles judged most relevant to the goals of this report were reviewed with emphasis on limitations and strengths of the anatomic and PET approaches to treatment response assessment. On the basis of these data and the authors' experience, draft criteria were formulated for PET tumor response to treatment.\n\n\nRESULTS\nApproximately 3,000 potentially relevant references were screened. Anatomic imaging alone using standard WHO, RECIST, and RECIST 1.1 criteria is widely applied but still has limitations in response assessments. For example, despite effective treatment, changes in tumor size can be minimal in tumors such as lymphomas, sarcoma, hepatomas, mesothelioma, and gastrointestinal stromal tumor. CT tumor density, contrast enhancement, or MRI characteristics appear more informative than size but are not yet routinely applied. RECIST criteria may show progression of tumor more slowly than WHO criteria. RECIST 1.1 criteria (assessing a maximum of 5 tumor foci, vs. 10 in RECIST) result in a higher complete response rate than the original RECIST criteria, at least in lymph nodes. Variability appears greater in assessing progression than in assessing response. Qualitative and quantitative approaches to (18)F-FDG PET response assessment have been applied and require a consistent PET methodology to allow quantitative assessments. Statistically significant changes in tumor standardized uptake value (SUV) occur in careful test-retest studies of high-SUV tumors, with a change of 20% in SUV of a region 1 cm or larger in diameter; however, medically relevant beneficial changes are often associated with a 30% or greater decline. The more extensive the therapy, the greater the decline in SUV with most effective treatments. 
Important components of the proposed PERCIST criteria include assessing normal reference tissue values in a 3-cm-diameter region of interest in the liver, using a consistent PET protocol, using a fixed small region of interest about 1 cm(3) in volume (1.2-cm diameter) in the most active region of metabolically active tumors to minimize statistical variability, assessing tumor size, treating SUV lean measurements in the 1 (up to 5 optional) most metabolically active tumor focus as a continuous variable, requiring a 30% decline in SUV for \"response,\" and deferring to RECIST 1.1 in cases that do not have (18)F-FDG avidity or are technically unsuitable. Criteria to define progression of tumor-absent new lesions are uncertain but are proposed.\n\n\nCONCLUSION\nAnatomic imaging alone using standard WHO, RECIST, and RECIST 1.1 criteria have limitations, particularly in assessing the activity of newer cancer therapies that stabilize disease, whereas (18)F-FDG PET appears particularly valuable in such cases. The proposed PERCIST 1.0 criteria should serve as a starting point for use in clinical trials and in structured quantitative clinical reporting. Undoubtedly, subsequent revisions and enhancements will be required as validation studies are undertaken in varying diseases and treatments." }, { "pmid": "25695610", "title": "Quantitative assessment of cardiac mechanical dyssynchrony and prediction of response to cardiac resynchronization therapy in patients with nonischaemic dilated cardiomyopathy using gated myocardial perfusion SPECT.", "abstract": "OBJECTIVE\nThe aim of the study was to evaluate gated myocardial perfusion SPECT (GMPS) in the prediction of response to cardiac resynchronization therapy (CRT) in nonischaemic dilated cardiomyopathy patients.\n\n\nPATIENTS AND METHODS\nThirty-two patients (23 men, mean age 57.5±12.1 years) with severe heart failure, who were selected for CRT implantation, were prospectively included in this study. Patients with coronary heart disease and structural heart diseases were excluded. ⁹⁹mTc-MIBI GMPS and clinical evaluation were performed at baseline and 3 months after CRT implantation. In GMPS, first-harmonic fast Fourier transform was used to extract a phase array using commercially available software. Phase standard deviation (PSD) and phase histogram bandwidth (PHB) were used to quantify cardiac mechanical dyssynchrony (CMD). Left ventricular ejection fraction was evaluated.\n\n\nRESULTS\nAt baseline evaluation the mean NYHA class was 3.3±0.5, left ventricular ejection fraction was 23.2±5.3% and mean QRS duration was 150.3±18.2 ms. PSD was 55.8±19.2° and PHB was 182.1±75.8°. At 3-month follow-up, 22 patients responded to CRT with improvement in NYHA class by more than 1 grade and in ejection fraction by more than 5%. Responders had significantly larger PSD (63.6±16.6 vs. 38.7±12.7°) and PHB (214.8±63.9 vs. 110.2±43.5°) compared with nonresponders. Receiver-operating characteristic curve analysis demonstrated 86% sensitivity and 80% specificity at a cutoff value of 43° for PSD and 86% sensitivity and 80% specificity at a cutoff value of 128° for PHB in the prediction of response to CRT.\n\n\nCONCLUSION\nBaseline PSD and PHB derived from GMPS are useful for prediction of response to CRT in nonischaemic dilated cardiomyopathy patients." 
}, { "pmid": "28975929", "title": "Radiomics: the bridge between medical imaging and personalized medicine.", "abstract": "Radiomics, the high-throughput mining of quantitative image features from standard-of-care medical imaging that enables data to be extracted and applied within clinical-decision support systems to improve diagnostic, prognostic, and predictive accuracy, is gaining importance in cancer research. Radiomic analysis exploits sophisticated image analysis tools and the rapid development and validation of medical imaging data that uses image-based signatures for precision diagnosis and treatment, providing a powerful tool in modern medicine. Herein, we describe the process of radiomics, its pitfalls, challenges, opportunities, and its capacity to improve clinical decision making, emphasizing the utility for patients with cancer. Currently, the field of radiomics lacks standardized evaluation of both the scientific integrity and the clinical relevance of the numerous published radiomics investigations resulting from the rapid growth of this area. Rigorous evaluation criteria and reporting guidelines need to be established in order for radiomics to mature as a discipline. Herein, we provide guidance for investigations to meet this urgent need in the field of radiomics." }, { "pmid": "25733904", "title": "In-use product stocks link manufactured capital to natural capital.", "abstract": "In-use stock of a product is the amount of the product in active use. In-use product stocks provide various functions or services on which we rely in our daily work and lives, and the concept of in-use product stock for industrial ecologists is similar to the concept of net manufactured capital stock for economists. This study estimates historical physical in-use stocks of 91 products and 9 product groups and uses monetary data on net capital stocks of 56 products to either approximate or compare with in-use stocks of the corresponding products in the United States. Findings include the following: (i) The development of new products and the buildup of their in-use stocks result in the increase in variety of in-use product stocks and of manufactured capital; (ii) substitution among products providing similar or identical functions reflects the improvement in quality of in-use product stocks and of manufactured capital; and (iii) the historical evolution of stocks of the 156 products or product groups in absolute, per capita, or per-household terms shows that stocks of most products have reached or are approaching an upper limit. Because the buildup, renewal, renovation, maintenance, and operation of in-use product stocks drive the anthropogenic cycles of materials that are used to produce products and that originate from natural capital, the determination of in-use product stocks together with modeling of anthropogenic material cycles provides an analytic perspective on the material linkage between manufactured capital and natural capital." }, { "pmid": "27085484", "title": "Radiomic phenotype features predict pathological response in non-small cell lung cancer.", "abstract": "BACKGROUND AND PURPOSE\nRadiomics can quantify tumor phenotype characteristics non-invasively by applying advanced imaging feature algorithms. In this study we assessed if pre-treatment radiomics data are able to predict pathological response after neoadjuvant chemoradiation in patients with locally advanced non-small cell lung cancer (NSCLC).\n\n\nMATERIALS AND METHODS\n127 NSCLC patients were included in this study. 
Fifteen radiomic features selected based on stability and variance were evaluated for its power to predict pathological response. Predictive power was evaluated using area under the curve (AUC). Conventional imaging features (tumor volume and diameter) were used for comparison.\n\n\nRESULTS\nSeven features were predictive for pathologic gross residual disease (AUC>0.6, p-value<0.05), and one for pathologic complete response (AUC=0.63, p-value=0.01). No conventional imaging features were predictive (range AUC=0.51-0.59, p-value>0.05). Tumors that did not respond well to neoadjuvant chemoradiation were more likely to present a rounder shape (spherical disproportionality, AUC=0.63, p-value=0.009) and heterogeneous texture (LoG 5mm 3D - GLCM entropy, AUC=0.61, p-value=0.03).\n\n\nCONCLUSION\nWe identified predictive radiomic features for pathological response, although no conventional features were significantly predictive. This study demonstrates that radiomics can provide valuable clinical information, and performed better than conventional imaging features." }, { "pmid": "28373718", "title": "Delta-radiomics features for the prediction of patient outcomes in non-small cell lung cancer.", "abstract": "Radiomics is the use of quantitative imaging features extracted from medical images to characterize tumor pathology or heterogeneity. Features measured at pretreatment have successfully predicted patient outcomes in numerous cancer sites. This project was designed to determine whether radiomics features measured from non-small cell lung cancer (NSCLC) change during therapy and whether those features (delta-radiomics features) can improve prognostic models. Features were calculated from pretreatment and weekly intra-treatment computed tomography images for 107 patients with stage III NSCLC. Pretreatment images were used to determine feature-specific image preprocessing. Linear mixed-effects models were used to identify features that changed significantly with dose-fraction. Multivariate models were built for overall survival, distant metastases, and local recurrence using only clinical factors, clinical factors and pretreatment radiomics features, and clinical factors, pretreatment radiomics features, and delta-radiomics features. All of the radiomics features changed significantly during radiation therapy. For overall survival and distant metastases, pretreatment compactness improved the c-index. For local recurrence, pretreatment imaging features were not prognostic, while texture-strength measured at the end of treatment significantly stratified high- and low-risk patients. These results suggest radiomics features change due to radiation therapy and their values at the end of treatment may be indicators of tumor response." }, { "pmid": "25670540", "title": "Lung texture in serial thoracic computed tomography scans: correlation of radiomics-based features with radiation therapy dose and radiation pneumonitis development.", "abstract": "PURPOSE\nTo assess the relationship between radiation dose and change in a set of mathematical intensity- and texture-based features and to determine the ability of texture analysis to identify patients who develop radiation pneumonitis (RP).\n\n\nMETHODS AND MATERIALS\nA total of 106 patients who received radiation therapy (RT) for esophageal cancer were retrospectively identified under institutional review board approval. 
For each patient, diagnostic computed tomography (CT) scans were acquired before (0-168 days) and after (5-120 days) RT, and a treatment planning CT scan with an associated dose map was obtained. 32- × 32-pixel regions of interest (ROIs) were randomly identified in the lungs of each pre-RT scan. ROIs were subsequently mapped to the post-RT scan and the planning scan dose map by using deformable image registration. The changes in 20 feature values (ΔFV) between pre- and post-RT scan ROIs were calculated. Regression modeling and analysis of variance were used to test the relationships between ΔFV, mean ROI dose, and development of grade ≥2 RP. Area under the receiver operating characteristic curve (AUC) was calculated to determine each feature's ability to distinguish between patients with and those without RP. A classifier was constructed to determine whether 2- or 3-feature combinations could improve RP distinction.\n\n\nRESULTS\nFor all 20 features, a significant ΔFV was observed with increasing radiation dose. Twelve features changed significantly for patients with RP. Individual texture features could discriminate between patients with and those without RP with moderate performance (AUCs from 0.49 to 0.78). Using multiple features in a classifier, AUC increased significantly (0.59-0.84).\n\n\nCONCLUSIONS\nA relationship between dose and change in a set of image-based features was observed. For 12 features, ΔFV was significantly related to RP development. This study demonstrated the ability of radiomics to provide a quantitative, individualized measurement of patient lung tissue reaction to RT and assess RP development." }, { "pmid": "26242589", "title": "Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models.", "abstract": "BACKGROUND\nProstate cancer is the most common form of cancer and the second leading cause of cancer death in North America. Auto-detection of prostate cancer can play a major role in early detection of prostate cancer, which has a significant impact on patient survival rates. While multi-parametric magnetic resonance imaging (MP-MRI) has shown promise in diagnosis of prostate cancer, the existing auto-detection algorithms do not take advantage of abundance of data available in MP-MRI to improve detection accuracy. The goal of this research was to design a radiomics-based auto-detection method for prostate cancer via utilizing MP-MRI data.\n\n\nMETHODS\nIn this work, we present new MP-MRI texture feature models for radiomics-driven detection of prostate cancer. In addition to commonly used non-invasive imaging sequences in conventional MP-MRI, namely T2-weighted MRI (T2w) and diffusion-weighted imaging (DWI), our proposed MP-MRI texture feature models incorporate computed high-b DWI (CHB-DWI) and a new diffusion imaging modality called correlated diffusion imaging (CDI). Moreover, the proposed texture feature models incorporate features from individual b-value images. A comprehensive set of texture features was calculated for both the conventional MP-MRI and new MP-MRI texture feature models. 
We performed feature selection analysis for each individual modality and then combined best features from each modality to construct the optimized texture feature models.\n\n\nRESULTS\nThe performance of the proposed MP-MRI texture feature models was evaluated via leave-one-patient-out cross-validation using a support vector machine (SVM) classifier trained on 40,975 cancerous and healthy tissue samples obtained from real clinical MP-MRI datasets. The proposed MP-MRI texture feature models outperformed the conventional model (i.e., T2w+DWI) with regard to cancer detection accuracy.\n\n\nCONCLUSIONS\nComprehensive texture feature models were developed for improved radiomics-driven detection of prostate cancer using MP-MRI. Using a comprehensive set of texture features and a feature selection method, optimal texture feature models were constructed that improved the prostate cancer auto-detection significantly compared to conventional MP-MRI texture feature models." }, { "pmid": "22257792", "title": "Radiomics: extracting more information from medical images using advanced feature analysis.", "abstract": "Solid cancers are spatially and temporally heterogeneous. This limits the use of invasive biopsy based molecular assays but gives huge potential for medical imaging, which has the ability to capture intra-tumoural heterogeneity in a non-invasive way. During the past decades, medical imaging innovations with new hardware, new imaging agents and standardised protocols, allows the field to move towards quantitative imaging. Therefore, also the development of automated and reproducible analysis methodologies to extract more information from image-based features is a requirement. Radiomics--the high-throughput extraction of large amounts of image features from radiographic images--addresses this problem and is one of the approaches that hold great promises but need further validation in multi-centric settings and in the laboratory." }, { "pmid": "22898692", "title": "Radiomics: the process and the challenges.", "abstract": "\"Radiomics\" refers to the extraction and analysis of large amounts of advanced quantitative imaging features with high throughput from medical images obtained with computed tomography, positron emission tomography or magnetic resonance imaging. Importantly, these data are designed to be extracted from standard-of-care images, leading to a very large potential subject pool. Radiomics data are in a mineable form that can be used to build descriptive and predictive models relating image features to phenotypes or gene-protein signatures. The core hypothesis of radiomics is that these models, which can include biological or medical data, can provide valuable diagnostic, prognostic or predictive information. The radiomics enterprise can be divided into distinct processes, each with its own challenges that need to be overcome: (a) image acquisition and reconstruction, (b) image segmentation and rendering, (c) feature extraction and feature qualification and (d) databases and data sharing for eventual (e) ad hoc informatics analyses. Each of these individual processes poses unique challenges. For example, optimum protocols for image acquisition and reconstruction have to be identified and harmonized. Also, segmentations have to be robust and involve minimal operator input. Features have to be generated that robustly reflect the complexity of the individual volumes, but cannot be overly complex or redundant. 
Furthermore, informatics databases that allow incorporation of image features and image annotations, along with medical and genetic data, have to be generated. Finally, the statistical approaches to analyze these data have to be optimized, as radiomics is not a mature field of study. Each of these processes will be discussed in turn, as well as some of their unique challenges and proposed approaches to solve them. The focus of this article will be on images of non-small-cell lung cancer." }, { "pmid": "25586952", "title": "Safety of dose escalation by simultaneous integrated boosting radiation dose within the primary tumor guided by (18)FDG-PET/CT for esophageal cancer.", "abstract": "PURPOSE\nTo observe the safety of selective dose boost to the pre-treatment high (18)F-deoxyglucose (FDG) uptake areas of the esophageal GTV.\n\n\nMETHODS\nPatients with esophageal squamous cell carcinoma were treated with escalating radiation dose of 4 levels, with a simultaneous integrated boost (SIB) to the pre-treatment 50% SUVmax area of the primary tumor. Patients received 4 monthly cycles of cisplatin and fluorouracil. Dose-limiting toxicity (DLT) was defined as any Grade 3 or higher acute toxicities causing continuous interruption of radiation for over 1 week.\n\n\nRESULTS\nFrom April 2012 to February 2014, dose has been escalated up to LEVEL 4 (70Gy). All of the 25 patients finished the prescribed dose without DLT, and 10 of them developed Grade 3 acute esophagitis. One patient of LEVEL 2 died of esophageal hemorrhage within 1 month after completion of radiotherapy, which was not definitely correlated with treatment yet. Late toxicities remained under observation. With median follow up of 8.9months, one-year overall survival and local control was 69.2% and 77.4%, respectively.\n\n\nCONCLUSIONS\nDose escalation in esophageal cancer based on (18)FDG-PET/CT has been safely achieved up to 70Gy using the SIB technique. Acute toxicities were well tolerated, whereas late toxicities and long-term outcomes deserved further observation." }, { "pmid": "26176655", "title": "Stage III Non-Small Cell Lung Cancer: Prognostic Value of FDG PET Quantitative Imaging Features Combined with Clinical Prognostic Factors.", "abstract": "PURPOSE\nTo determine whether quantitative imaging features from pretreatment positron emission tomography (PET) can enhance patient overall survival risk stratification beyond what can be achieved with conventional prognostic factors in patients with stage III non-small cell lung cancer (NSCLC).\n\n\nMATERIALS AND METHODS\nThe institutional review board approved this retrospective chart review study and waived the requirement to obtain informed consent. The authors retrospectively identified 195 patients with stage III NSCLC treated definitively with radiation therapy between January 2008 and January 2013. All patients underwent pretreatment PET/computed tomography before treatment. Conventional PET metrics, along with histogram, shape and volume, and co-occurrence matrix features, were extracted. Linear predictors of overall survival were developed from leave-one-out cross-validation. Predictive Kaplan-Meier curves were used to compare the linear predictors with both quantitative imaging features and conventional prognostic factors to those generated with conventional prognostic factors alone. The Harrell concordance index was used to quantify the discriminatory power of the linear predictors for survival differences of at least 0, 6, 12, 18, and 24 months. 
Models were generated with features present in more than 50% of the cross-validation folds.\n\n\nRESULTS\nLinear predictors of overall survival generated with both quantitative imaging features and conventional prognostic factors demonstrated improved risk stratification compared with those generated with conventional prognostic factors alone in terms of log-rank statistic (P = .18 vs P = .0001, respectively) and concordance index (0.62 vs 0.58, respectively). The use of quantitative imaging features selected during cross-validation improved the model using conventional prognostic factors alone (P = .007). Disease solidity and primary tumor energy from the co-occurrence matrix were found to be selected in all folds of cross-validation.\n\n\nCONCLUSION\nPretreatment PET features were associated with overall survival when adjusting for conventional prognostic factors in patients with stage III NSCLC." } ]
Frontiers in Pharmacology
29997499
PMC6028717
10.3389/fphar.2018.00609
OpenPVSignal: Advancing Information Search, Sharing and Reuse on Pharmacovigilance Signals via FAIR Principles and Semantic Web Technologies
Signal detection and management is a key activity in pharmacovigilance (PV). When a new PV signal is identified, the respective information is publicly communicated in the form of periodic newsletters or reports by organizations that monitor and investigate PV-related information (such as the World Health Organization and national PV centers). However, this type of communication does not allow for systematic access, discovery and explicit data interlinking and, therefore, does not facilitate automated data sharing and reuse. In this paper, we present OpenPVSignal, a novel ontology aiming to support the semantic enrichment and rigorous communication of PV signal information in a systematic way, focusing on two key aspects: (a) publishing signal information according to the FAIR (Findable, Accessible, Interoperable, and Re-usable) data principles, and (b) exploiting automatic reasoning capabilities upon the interlinked PV signal report data. OpenPVSignal is developed as a reusable, extendable and machine-understandable model based on Semantic Web standards/recommendations. In particular, it can be used to model PV signal report data focusing on: (a) heterogeneous data interlinking, (b) semantic and syntactic interoperability, (c) provenance tracking and (d) knowledge expressiveness. OpenPVSignal is built upon widely-accepted semantic models, namely, the provenance ontology (PROV-O), the Micropublications semantic model, the Web Annotation Data Model (WADM), the Ontology of Adverse Events (OAE) and the Time ontology. To this end, we describe the design of OpenPVSignal and demonstrate its applicability as well as the reasoning capabilities enabled by its use. We also provide an evaluation of the model against the FAIR data principles. The applicability of OpenPVSignal is demonstrated by using PV signal information published in: (a) the World Health Organization's Pharmaceuticals Newsletter, (b) the Netherlands Pharmacovigilance Centre Lareb Web site and (c) the U.S. Food and Drug Administration (FDA) Drug Safety Communications, also available on the FDA Web site.
Related work: ADR representation formalisms and frameworks
Representation formalisms for ADRs, as well as Linked Data models and ontologies focused on PV, have been employed or proposed in various studies.
For example, the Observational Health Data Sciences and Informatics collaborative (OHDSI) developed an evidence base that links evidence items (e.g., MEDLINE abstracts, drug product labels, spontaneous reports, etc.) to health outcomes of interest (Knowledge Base workgroup of the Observational Health Data Sciences Informatics (OHDSI) collaborative, 2017) using Web Annotation Data Model (WADM) graphs (Sanderson et al., 2017). Each graph represents the drug and health outcome concepts mentioned in an evidence item as the Body of the annotation, while the evidence item itself is summarized using metadata in the Target of the annotation (a minimal sketch of this annotation pattern is given after this related-work overview). The concepts in the body of the annotation are mapped to the standard vocabulary used by the OHDSI collaborative. This arrangement supports two use cases important to the collaborative: (1) quantifying the evidence that supports a drug-health outcome of interest association, and (2) enabling users to review the context of the association in the original evidence sources. Investigators used the evidence base to develop machine learning algorithms that infer positive and negative drug-health outcome of interest associations (Voss et al., 2017).
ADEpedia (Jiang et al., 2013) encodes Adverse Drug Event (ADE) knowledge in a Linked Data serialization format, exploiting several data sources (e.g., FDA Structured Product Labels (SPLs), reports from the FDA Adverse Event Reporting System (FAERS) and Electronic Medical Records). Biomedical ontologies, thesauri, and vocabularies, such as RxNorm, NDF-RT, and the Unified Medical Language System (UMLS), are used to specify concepts and normalize the interlinked data. The ADEpedia ontology consists of a rather lean concept schema, comprising two main concepts, namely "Medication" and "ADE," and does not include provenance information or statistical information on ADEs (Jiang et al., 2011).
OntoADR (Souvignet et al., 2016) is an OWL ontology that addresses the difficulty of expressing the inherent semantics of MedDRA in OWL, in order to support automatic reasoning over MedDRA terms via well-defined OWL semantics. Similar to ADEpedia, OntoADR does not include statistical or provenance information regarding PV signals (Bousquet et al., 2014).
The Ontology of Adverse Events (OAE) aims to standardize and integrate medical adverse events (including ADRs), as well as to support computer-assisted reasoning (He et al., 2014). The two key OAE concepts are the intervention and the adverse event. OAE focuses on semantically categorizing interventions and on separating them with respect to causality. However, OAE is oriented neither toward provenance nor toward modeling the information contained in free-text PV signal reports communicated by PV monitoring organizations.
Probably the most relevant ADR representation formalism compared to OpenPVSignal is the Adverse Event Reporting Ontology (AERO) (Courtot et al., 2014). AERO aims to support clinicians during data entry when reporting adverse events. It can also automate the classification of adverse event reports and improve the efficiency of discovering potential risks, with the ultimate goal of increasing the quality and accuracy of the reported information.
However, AERO was not designed around the content of the PV reports that PV monitoring organizations make publicly available; it focuses on vaccine adverse effects (Adverse Events Following Immunization, AEFIs) and on a specific ADR signal analysis pipeline based on the Brighton guidelines. Apart from restricting its domain of application to vaccines and to that specific ADR analysis workflow, AERO does not provide an explicit way to represent provenance or time-related information.
Compared to the above representation models, OpenPVSignal focuses on representing evidence-based PV signal information as communicated through the signal reports released by drug safety authorities. As mentioned in the "Introduction" section, these reports include supporting data originating from various sources, statistical measures (e.g., from disproportionality analysis of SRS data), as well as descriptions of the respective biochemical ADR mechanisms. Therefore, a dedicated ontology had to be defined in order to integrate all these information types into one cohesive knowledge representation structure. Nevertheless, the above-mentioned models informed the current work with respect to their concept definitions and their use of the Linked Data paradigm.
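As a minimal sketch of the annotation pattern described above (an evidence item annotated with a drug and a health outcome of interest, plus provenance), the snippet below uses the generic WADM (oa:) and PROV-O (prov:) vocabularies with the Python rdflib library. The http://example.org/ IRIs, the concept identifiers, and the issuing agent are placeholders, and the properties shown are generic WADM/PROV-O terms rather than OpenPVSignal's own classes and properties.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

OA = Namespace("http://www.w3.org/ns/oa#")      # Web Annotation Data Model
PROV = Namespace("http://www.w3.org/ns/prov#")  # PROV-O
EX = Namespace("http://example.org/pvsignal/")  # placeholder namespace

g = Graph()
g.bind("oa", OA)
g.bind("prov", PROV)
g.bind("ex", EX)

signal = EX["annotation/signal-001"]
g.add((signal, RDF.type, OA.Annotation))

# Body: the drug and the health outcome of interest (placeholder concept IRIs)
g.add((signal, OA.hasBody, EX["drug/rxnorm-1234"]))
g.add((signal, OA.hasBody, EX["hoi/meddra-10019211"]))

# Target: the evidence item in which the signal was communicated
g.add((signal, OA.hasTarget, EX["report/who-pharmaceuticals-newsletter-2017-2"]))

# Provenance: which organization issued the report, and when
g.add((signal, PROV.wasAttributedTo, EX["agent/national-pv-centre"]))
g.add((signal, PROV.generatedAtTime, Literal("2017-04-01", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```

Keeping the drug, the outcome, the evidence item, and the provenance as separate, dereferenceable resources is what allows such signal reports to be interlinked and queried across sources, in line with the FAIR-oriented interlinking emphasized above.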
[ "11234503", "24680984", "24985530", "26261718", "24667848", "28270198", "25093068", "28936969", "24561327", "24303245", "25749722", "27813420", "26481350", "24203711", "27239556", "15726901", "21575203", "27369567", "24347988", "27993747", "26978244", "28469412" ]
[ { "pmid": "24680984", "title": "Formalizing MedDRA to support semantic reasoning on adverse drug reaction terms.", "abstract": "Although MedDRA has obvious advantages over previous terminologies for coding adverse drug reactions and discovering potential signals using data mining techniques, its terminological organization constrains users to search terms according to predefined categories. Adding formal definitions to MedDRA would allow retrieval of terms according to a case definition that may correspond to novel categories that are not currently available in the terminology. To achieve semantic reasoning with MedDRA, we have associated formal definitions to MedDRA terms in an OWL file named OntoADR that is the result of our first step for providing an \"ontologized\" version of MedDRA. MedDRA five-levels original hierarchy was converted into a subsumption tree and formal definitions of MedDRA terms were designed using several methods: mappings to SNOMED-CT, semi-automatic definition algorithms or a fully manual way. This article presents the main steps of OntoADR conception process, its structure and content, and discusses problems and limits raised by this attempt to \"ontologize\" MedDRA." }, { "pmid": "24985530", "title": "Bridging islands of information to establish an integrated knowledge base of drugs and health outcomes of interest.", "abstract": "The entire drug safety enterprise has a need to search, retrieve, evaluate, and synthesize scientific evidence more efficiently. This discovery and synthesis process would be greatly accelerated through access to a common framework that brings all relevant information sources together within a standardized structure. This presents an opportunity to establish an open-source community effort to develop a global knowledge base, one that brings together and standardizes all available information for all drugs and all health outcomes of interest (HOIs) from all electronic sources pertinent to drug safety. To make this vision a reality, we have established a workgroup within the Observational Health Data Sciences and Informatics (OHDSI, http://ohdsi.org) collaborative. The workgroup's mission is to develop an open-source standardized knowledge base for the effects of medical products and an efficient procedure for maintaining and expanding it. The knowledge base will make it simpler for practitioners to access, retrieve, and synthesize evidence so that they can reach a rigorous and accurate assessment of causal relationships between a given drug and HOI. Development of the knowledge base will proceed with the measureable goal of supporting an efficient and thorough evidence-based assessment of the effects of 1,000 active ingredients across 100 HOIs. This non-trivial task will result in a high-quality and generally applicable drug safety knowledge base. It will also yield a reference standard of drug-HOI pairs that will enable more advanced methodological research that empirically evaluates the performance of drug safety analysis methods." }, { "pmid": "26261718", "title": "Micropublications: a semantic model for claims, evidence, arguments and annotations in biomedical communications.", "abstract": "BACKGROUND\nScientific publications are documentary representations of defeasible arguments, supported by data and repeatable methods. They are the essential mediating artifacts in the ecosystem of scientific communications. The institutional \"goal\" of science is publishing results. 
The linear document publication format, dating from 1665, has survived transition to the Web. Intractable publication volumes; the difficulty of verifying evidence; and observed problems in evidence and citation chains suggest a need for a web-friendly and machine-tractable model of scientific publications. This model should support: digital summarization, evidence examination, challenge, verification and remix, and incremental adoption. Such a model must be capable of expressing a broad spectrum of representational complexity, ranging from minimal to maximal forms.\n\n\nRESULTS\nThe micropublications semantic model of scientific argument and evidence provides these features. Micropublications support natural language statements; data; methods and materials specifications; discussion and commentary; challenge and disagreement; as well as allowing many kinds of statement formalization. The minimal form of a micropublication is a statement with its attribution. The maximal form is a statement with its complete supporting argument, consisting of all relevant evidence, interpretations, discussion and challenges brought forward in support of or opposition to it. Micropublications may be formalized and serialized in multiple ways, including in RDF. They may be added to publications as stand-off metadata. An OWL 2 vocabulary for micropublications is available at http://purl.org/mp. A discussion of this vocabulary along with RDF examples from the case studies, appears as OWL Vocabulary and RDF Examples in Additional file 1.\n\n\nCONCLUSION\nMicropublications, because they model evidence and allow qualified, nuanced assertions, can play essential roles in the scientific communications ecosystem in places where simpler, formalized and purely statement-based models, such as the nanopublications model, will not be sufficient. At the same time they will add significant value to, and are intentionally compatible with, statement-based formalizations. We suggest that micropublications, generated by useful software tools supporting such activities as writing, editing, reviewing, and discussion, will be of great value in improving the quality and tractability of biomedical communications." }, { "pmid": "24667848", "title": "The logic of surveillance guidelines: an analysis of vaccine adverse event reports from an ontological perspective.", "abstract": "BACKGROUND\nWhen increased rates of adverse events following immunization are detected, regulatory action can be taken by public health agencies. However to be interpreted reports of adverse events must be encoded in a consistent way. Regulatory agencies rely on guidelines to help determine the diagnosis of the adverse events. Manual application of these guidelines is expensive, time consuming, and open to logical errors. Representing these guidelines in a format amenable to automated processing can make this process more efficient.\n\n\nMETHODS AND FINDINGS\nUsing the Brighton anaphylaxis case definition, we show that existing clinical guidelines used as standards in pharmacovigilance can be logically encoded using a formal representation such as the Adverse Event Reporting Ontology we developed. We validated the classification of vaccine adverse event reports using the ontology against existing rule-based systems and a manually curated subset of the Vaccine Adverse Event Reporting System. However, we encountered a number of critical issues in the formulation and application of the clinical guidelines. 
We report these issues and the steps being taken to address them in current surveillance systems, and in the terminological standards in use.\n\n\nCONCLUSIONS\nBy standardizing and improving the reporting process, we were able to automate diagnosis confirmation. By allowing medical experts to prioritize reports such a system can accelerate the identification of adverse reactions to vaccines and the response of regulatory agencies. This approach of combining ontology and semantic technologies can be used to improve other areas of vaccine adverse event reports analysis and should inform both the design of clinical guidelines and how they are used in the future.\n\n\nAVAILABILITY\nSufficient material to reproduce our results is available, including documentation, ontology, code and datasets, at http://purl.obolibrary.org/obo/aero." }, { "pmid": "28270198", "title": "Large-scale adverse effects related to treatment evidence standardization (LAERTES): an open scalable system for linking pharmacovigilance evidence sources with clinical data.", "abstract": "BACKGROUND\nIntegrating multiple sources of pharmacovigilance evidence has the potential to advance the science of safety signal detection and evaluation. In this regard, there is a need for more research on how to integrate multiple disparate evidence sources while making the evidence computable from a knowledge representation perspective (i.e., semantic enrichment). Existing frameworks suggest well-promising outcomes for such integration but employ a rather limited number of sources. In particular, none have been specifically designed to support both regulatory and clinical use cases, nor have any been designed to add new resources and use cases through an open architecture. This paper discusses the architecture and functionality of a system called Large-scale Adverse Effects Related to Treatment Evidence Standardization (LAERTES) that aims to address these shortcomings.\n\n\nRESULTS\nLAERTES provides a standardized, open, and scalable architecture for linking evidence sources relevant to the association of drugs with health outcomes of interest (HOIs). Standard terminologies are used to represent different entities. For example, drugs and HOIs are represented in RxNorm and Systematized Nomenclature of Medicine -- Clinical Terms respectively. At the time of this writing, six evidence sources have been loaded into the LAERTES evidence base and are accessible through prototype evidence exploration user interface and a set of Web application programming interface services. This system operates within a larger software stack provided by the Observational Health Data Sciences and Informatics clinical research framework, including the relational Common Data Model for observational patient data created by the Observational Medical Outcomes Partnership. Elements of the Linked Data paradigm facilitate the systematic and scalable integration of relevant evidence sources.\n\n\nCONCLUSIONS\nThe prototype LAERTES system provides useful functionality while creating opportunities for further research. Future work will involve improving the method for normalizing drug and HOI concepts across the integrated sources, aggregated evidence at different levels of a hierarchy of HOI concepts, and developing more advanced user interface for drug-HOI investigations." 
}, { "pmid": "25093068", "title": "OAE: The Ontology of Adverse Events.", "abstract": "BACKGROUND\nA medical intervention is a medical procedure or application intended to relieve or prevent illness or injury. Examples of medical interventions include vaccination and drug administration. After a medical intervention, adverse events (AEs) may occur which lie outside the intended consequences of the intervention. The representation and analysis of AEs are critical to the improvement of public health.\n\n\nDESCRIPTION\nThe Ontology of Adverse Events (OAE), previously named Adverse Event Ontology (AEO), is a community-driven ontology developed to standardize and integrate data relating to AEs arising subsequent to medical interventions, as well as to support computer-assisted reasoning. OAE has over 3,000 terms with unique identifiers, including terms imported from existing ontologies and more than 1,800 OAE-specific terms. In OAE, the term 'adverse event' denotes a pathological bodily process in a patient that occurs after a medical intervention. Causal adverse events are defined by OAE as those events that are causal consequences of a medical intervention. OAE represents various adverse events based on patient anatomic regions and clinical outcomes, including symptoms, signs, and abnormal processes. OAE has been used in the analysis of several different sorts of vaccine and drug adverse event data. For example, using the data extracted from the Vaccine Adverse Event Reporting System (VAERS), OAE was used to analyse vaccine adverse events associated with the administrations of different types of influenza vaccines. OAE has also been used to represent and classify the vaccine adverse events cited in package inserts of FDA-licensed human vaccines in the USA.\n\n\nCONCLUSION\nOAE is a biomedical ontology that logically defines and classifies various adverse events occurring after medical interventions. OAE has successfully been applied in several adverse event studies. The OAE ontological framework provides a platform for systematic representation and analysis of adverse events and of the factors (e.g., vaccinee age) important for determining their clinical outcomes." }, { "pmid": "28936969", "title": "Systematic integration of biomedical knowledge prioritizes drugs for repurposing.", "abstract": "The ability to computationally predict whether a compound treats a disease would improve the economy and success rate of drug approval. This study describes Project Rephetio to systematically model drug efficacy based on 755 existing treatments. First, we constructed Hetionet (neo4j.het.io), an integrative network encoding knowledge from millions of biomedical studies. Hetionet v1.0 consists of 47,031 nodes of 11 types and 2,250,197 relationships of 24 types. Data were integrated from 29 public resources to connect compounds, diseases, genes, anatomies, pathways, biological processes, molecular functions, cellular components, pharmacologic classes, side effects, and symptoms. Next, we identified network patterns that distinguish treatments from non-treatments. Then, we predicted the probability of treatment for 209,168 compound-disease pairs (het.io/repurpose). Our predictions validated on two external sets of treatment and provided pharmacological insights on epilepsy, suggesting they will help prioritize drug repurposing candidates. This study was entirely open and received realtime feedback from 40 community members." 
}, { "pmid": "24303245", "title": "ADEpedia 2.0: Integration of Normalized Adverse Drug Events (ADEs) Knowledge from the UMLS.", "abstract": "A standardized Adverse Drug Events (ADEs) knowledge base that encodes known ADE knowledge can be very useful in improving ADE detection for drug safety surveillance. In our previous study, we developed the ADEpedia that is a standardized knowledge base of ADEs based on drug product labels. The objectives of the present study are 1) to integrate normalized ADE knowledge from the Unified Medical Language System (UMLS) into the ADEpedia; and 2) to enrich the knowledge base with the drug-disorder co-occurrence data from a 51-million-document electronic medical records (EMRs) system. We extracted 266,832 drug-disorder concept pairs from the UMLS, covering 14,256 (1.69%) distinct drug concepts and 19,006 (3.53%) distinct disorder concepts. Of them, 71,626 (26.8%) concept pairs from UMLS co-occurred in the EMRs. We performed a preliminary evaluation on the utility of the UMLS ADE data. In conclusion, we have built an ADEpedia 2.0 framework that intends to integrate known ADE knowledge from disparate sources. The UMLS is a useful source for providing standardized ADE knowledge relevant to indications, contraindications and adverse effects, and complementary to the ADE data from drug product labels. The statistics from EMRs would enable the meaningful use of ADE data for drug safety surveillance." }, { "pmid": "25749722", "title": "Computational approaches for pharmacovigilance signal detection: toward integrated and semantically-enriched frameworks.", "abstract": "Computational signal detection constitutes a key element of postmarketing drug monitoring and surveillance. Diverse data sources are considered within the 'search space' of pharmacovigilance scientists, and respective data analysis methods are employed, all with their qualities and shortcomings, towards more timely and accurate signal detection. Recent systematic comparative studies highlighted not only event-based and data-source-based differential performance across methods but also their complementarity. These findings reinforce the arguments for exploiting all possible information sources for drug safety and the parallel use of multiple signal detection methods. Combinatorial signal detection has been pursued in few studies up to now, employing a rather limited number of methods and data sources but illustrating well-promising outcomes. However, the large-scale realization of this approach requires systematic frameworks to address the challenges of the concurrent analysis setting. In this paper, we argue that semantic technologies provide the means to address some of these challenges, and we particularly highlight their contribution in (a) annotating data sources and analysis methods with quality attributes to facilitate their selection given the analysis scope; (b) consistently defining study parameters such as health outcomes and drugs of interest, and providing guidance for study setup; (c) expressing analysis outcomes in a common format enabling data sharing and systematic comparisons; and (d) assessing/supporting the novelty of the aggregated outcomes through access to reference knowledge sources related to drug safety. A semantically-enriched framework can facilitate seamless access and use of different data sources and computational methods in an integrated fashion, bringing a new perspective for large-scale, knowledge-intensive signal detection." 
}, { "pmid": "27813420", "title": "Exploiting heterogeneous publicly available data sources for drug safety surveillance: computational framework and case studies.", "abstract": "OBJECTIVE\nDriven by the need of pharmacovigilance centres and companies to routinely collect and review all available data about adverse drug reactions (ADRs) and adverse events of interest, we introduce and validate a computational framework exploiting dominant as well as emerging publicly available data sources for drug safety surveillance.\n\n\nMETHODS\nOur approach relies on appropriate query formulation for data acquisition and subsequent filtering, transformation and joint visualization of the obtained data. We acquired data from the FDA Adverse Event Reporting System (FAERS), PubMed and Twitter. In order to assess the validity and the robustness of the approach, we elaborated on two important case studies, namely, clozapine-induced cardiomyopathy/myocarditis versus haloperidol-induced cardiomyopathy/myocarditis, and apixaban-induced cerebral hemorrhage.\n\n\nRESULTS\nThe analysis of the obtained data provided interesting insights (identification of potential patient and health-care professional experiences regarding ADRs in Twitter, information/arguments against an ADR existence across all sources), while illustrating the benefits (complementing data from multiple sources to strengthen/confirm evidence) and the underlying challenges (selecting search terms, data presentation) of exploiting heterogeneous information sources, thereby advocating the need for the proposed framework.\n\n\nCONCLUSIONS\nThis work contributes in establishing a continuous learning system for drug safety surveillance by exploiting heterogeneous publicly available data sources via appropriate support tools." }, { "pmid": "26481350", "title": "The SIDER database of drugs and side effects.", "abstract": "Unwanted side effects of drugs are a burden on patients and a severe impediment in the development of new drugs. At the same time, adverse drug reactions (ADRs) recorded during clinical trials are an important source of human phenotypic data. It is therefore essential to combine data on drugs, targets and side effects into a more complete picture of the therapeutic mechanism of actions of drugs and the ways in which they cause adverse reactions. To this end, we have created the SIDER ('Side Effect Resource', http://sideeffects.embl.de) database of drugs and ADRs. The current release, SIDER 4, contains data on 1430 drugs, 5880 ADRs and 140 064 drug-ADR pairs, which is an increase of 40% compared to the previous version. For more fine-grained analyses, we extracted the frequency with which side effects occur from the package inserts. This information is available for 39% of drug-ADR pairs, 19% of which can be compared to the frequency under placebo treatment. SIDER furthermore contains a data set of drug indications, extracted from the package inserts using Natural Language Processing. These drug indications are used to reduce the rate of false positives by identifying medical terms that do not correspond to ADRs." }, { "pmid": "24203711", "title": "DrugBank 4.0: shedding new light on drug metabolism.", "abstract": "DrugBank (http://www.drugbank.ca) is a comprehensive online database containing extensive biochemical and pharmacological information about drugs, their mechanisms and their targets. 
Since it was first described in 2006, DrugBank has rapidly evolved, both in response to user requests and in response to changing trends in drug research and development. Previous versions of DrugBank have been widely used to facilitate drug and in silico drug target discovery. The latest update, DrugBank 4.0, has been further expanded to contain data on drug metabolism, absorption, distribution, metabolism, excretion and toxicity (ADMET) and other kinds of quantitative structure activity relationships (QSAR) information. These enhancements are intended to facilitate research in xenobiotic metabolism (both prediction and characterization), pharmacokinetics, pharmacodynamics and drug design/discovery. For this release, >1200 drug metabolites (including their structures, names, activity, abundance and other detailed data) have been added along with >1300 drug metabolism reactions (including metabolizing enzymes and reaction types) and dozens of drug metabolism pathways. Another 30 predicted or measured ADMET parameters have been added to each DrugCard, bringing the average number of quantitative ADMET values for Food and Drug Administration-approved drugs close to 40. Referential nuclear magnetic resonance and MS spectra have been added for almost 400 drugs as well as spectral and mass matching tools to facilitate compound identification. This expanded collection of drug information is complemented by a number of new or improved search tools, including one that provides a simple analyses of drug-target, -enzyme and -transporter associations to provide insight on drug-drug interactions." }, { "pmid": "15726901", "title": "Changes in gene expression in the lungs of Mg-deficient mice are related to an inflammatory process.", "abstract": "It has been well documented that experimental hypomagnesemia in rodents evokes, as an early consequence, an inflammatory response. This also leads to the activation of cells producing reactive species of oxygen and, as a result, to the oxidative damage of tissues. Several studies have shown that lungs might be a specific target of Mg deficiency. Here, we report that 3 weeks of Mg deficiency in mice resulted in inflammatory processes in the lungs, including interstitial and perivascular pneumonia, manifested by the infiltration of leukocytes, plasmocytes and histiocytes, as well as the phenomenon of disseminated intravascular coagulation (DIC). These phenomena were accompanied by changes in gene expression assessed by cDNA array. In this study we identified 26 genes significantly changed by Mg deficiency, mostly involved in the anti-oxidative response, regulation of cell cycle and growth, apoptosis as well as cell-cell and cell-matrix interactions. We conclude that these changes are related to the phenomena of inflammatory and oxidative processes and consecutive remodeling occurring in the tissues as a result of Mg deficiency. This may have implications for at least several lung pathologies, including allergies, asthma, SIDS (Sudden Infant Death Syndrome) or facilitate formation of lung metastases." }, { "pmid": "21575203", "title": "Linked open drug data for pharmaceutical research and development.", "abstract": "There is an abundance of information about drugs available on the Web. Data sources range from medicinal chemistry results, over the impact of drugs on gene expression, to the outcomes of drugs in clinical trials. These data are typically not connected together, which reduces the ease with which insights can be gained. 
Linking Open Drug Data (LODD) is a task force within the World Wide Web Consortium's (W3C) Health Care and Life Sciences Interest Group (HCLS IG). LODD has surveyed publicly available data about drugs, created Linked Data representations of the data sets, and identified interesting scientific and business questions that can be answered once the data sets are connected. The task force provides recommendations for the best practices of exposing data in a Linked Data representation. In this paper, we present past and ongoing work of LODD and discuss the growing importance of Linked Data as a foundation for pharmaceutical R&D data sharing." }, { "pmid": "27369567", "title": "OntoADR a semantic resource describing adverse drug reactions to support searching, coding, and information retrieval.", "abstract": "INTRODUCTION\nEfficient searching and coding in databases that use terminological resources requires that they support efficient data retrieval. The Medical Dictionary for Regulatory Activities (MedDRA) is a reference terminology for several countries and organizations to code adverse drug reactions (ADRs) for pharmacovigilance. Ontologies that are available in the medical domain provide several advantages such as reasoning to improve data retrieval. The field of pharmacovigilance does not yet benefit from a fully operational ontology to formally represent the MedDRA terms. Our objective was to build a semantic resource based on formal description logic to improve MedDRA term retrieval and aid the generation of on-demand custom groupings by appropriately and efficiently selecting terms: OntoADR.\n\n\nMETHODS\nThe method consists of the following steps: (1) mapping between MedDRA terms and SNOMED-CT, (2) generation of semantic definitions using semi-automatic methods, (3) storage of the resource and (4) manual curation by pharmacovigilance experts.\n\n\nRESULTS\nWe built a semantic resource for ADRs enabling a new type of semantics-based term search. OntoADR adds new search capabilities relative to previous approaches, overcoming the usual limitations of computation using lightweight description logic, such as the intractability of unions or negation queries, bringing it closer to user needs. Our automated approach for defining MedDRA terms enabled the association of at least one defining relationship with 67% of preferred terms. The curation work performed on our sample showed an error level of 14% for this automated approach. We tested OntoADR in practice, which allowed us to build custom groupings for several medical topics of interest.\n\n\nDISCUSSION\nThe methods we describe in this article could be adapted and extended to other terminologies which do not benefit from a formal semantic representation, thus enabling better data retrieval performance. Our custom groupings of MedDRA terms were used while performing signal detection, which suggests that the graphical user interface we are currently implementing to process OntoADR could be usefully integrated into specialized pharmacovigilance software that rely on MedDRA." }, { "pmid": "24347988", "title": "Clinical and economic burden of adverse drug reactions.", "abstract": "Adverse drug reactions (ADRs) are unwanted drug effects that have considerable economic as well as clinical costs as they often lead to hospital admission, prolongation of hospital stay and emergency department visits. 
Randomized controlled trials (RCTs) are the main premarketing methods used to detect and quantify ADRs but these have several limitations, such as limited study sample size and limited heterogeneity due to the exclusion of the frailest patients. In addition, ADRs due to inappropriate medication use occur often in the real world of clinical practice but not in RCTs. Postmarketing drug safety monitoring through pharmacovigilance activities, including mining of spontaneous reporting and carrying out observational prospective cohort or retrospective database studies, allow longer follow-up periods of patients with a much wider range of characteristics, providing valuable means for ADR detection, quantification and where possible reduction, reducing healthcare costs in the process. Overall, pharmacovigilance is aimed at identifying drug safety signals as early as possible, thus minimizing potential clinical and economic consequences of ADRs. The goal of this review is to explore the epidemiology and the costs of ADRs in routine care." }, { "pmid": "27993747", "title": "Accuracy of an automated knowledge base for identifying drug adverse reactions.", "abstract": "INTRODUCTION\nDrug safety researchers seek to know the degree of certainty with which a particular drug is associated with an adverse drug reaction. There are different sources of information used in pharmacovigilance to identify, evaluate, and disseminate medical product safety evidence including spontaneous reports, published peer-reviewed literature, and product labels. Automated data processing and classification using these evidence sources can greatly reduce the manual curation currently required to develop reference sets of positive and negative controls (i.e. drugs that cause adverse drug events and those that do not) to be used in drug safety research.\n\n\nMETHODS\nIn this paper we explore a method for automatically aggregating disparate sources of information together into a single repository, developing a predictive model to classify drug-adverse event relationships, and applying those predictions to a real world problem of identifying negative controls for statistical method calibration.\n\n\nRESULTS\nOur results showed high predictive accuracy for the models combining all available evidence, with an area under the receiver-operator curve of ⩾0.92 when tested on three manually generated lists of drugs and conditions that are known to either have or not have an association with an adverse drug event.\n\n\nCONCLUSIONS\nResults from a pilot implementation of the method suggests that it is feasible to develop a scalable alternative to the time-and-resource-intensive, manual curation exercise previously applied to develop reference sets of positive and negative controls to be used in drug safety research." }, { "pmid": "26978244", "title": "The FAIR Guiding Principles for scientific data management and stewardship.", "abstract": "There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders-representing academia, industry, funding agencies, and scholarly publishers-have come together to design and jointly endorse a concise and measureable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. 
Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community." }, { "pmid": "28469412", "title": "Use of Biomedical Ontologies for Integration of Biological Knowledge for Learning and Prediction of Adverse Drug Reactions.", "abstract": "Drug-induced toxicity is a major public health concern that leads to patient morbidity and mortality. To address this problem, the Food and Drug Administration is working on the PredicTox initiative, a pilot research program on tyrosine kinase inhibitors, to build mechanistic and predictive models for drug-induced toxicity. This program involves integrating data acquired during preclinical studies and clinical trials within pharmaceutical company development programs that they have agreed to put in the public domain and in publicly available biological, pharmacological, and chemical databases. The integration process is accommodated by biomedical ontologies, a set of standardized vocabularies that define terms and logical relationships between them in each vocabulary. We describe a few programs that have used ontologies to address biomedical questions. The PredicTox effort is leveraging the experience gathered from these early initiatives to develop an infrastructure that allows evaluation of the hypothesis that having a mechanistic understanding underlying adverse drug reactions will improve the capacity to understand drug-induced clinical adverse drug reactions." } ]
BioData Mining
29988723
PMC6029133
10.1186/s13040-018-0175-7
PathCORE-T: identifying and visualizing globally co-occurring pathways in large transcriptomic compendia
Background: Investigators often interpret genome-wide data by analyzing the expression levels of genes within pathways. While this within-pathway analysis is routine, the products of any one pathway can affect the activity of other pathways. Past efforts to identify relationships between biological processes have evaluated overlap in knowledge bases or evaluated changes that occur after specific treatments. Individual experiments can highlight condition-specific pathway-pathway relationships; however, constructing a complete network of such relationships across many conditions requires analyzing results from many studies.
Results: We developed the PathCORE-T framework by implementing existing methods to identify pathway-pathway transcriptional relationships evident across a broad data compendium. PathCORE-T is applied to the output of feature construction algorithms; it identifies pairs of pathways observed in features more often than expected by chance as functionally co-occurring. We demonstrate PathCORE-T by analyzing an existing eADAGE model of a microbial compendium and by building and analyzing NMF features from the TCGA dataset of 33 cancer types. The PathCORE-T framework includes a demonstration web interface, with source code, that users can launch to (1) visualize the network and (2) review the expression levels of associated genes in the original data. PathCORE-T creates and displays the network of globally co-occurring pathways based on features observed in a machine learning analysis of gene expression data.
Conclusions: The PathCORE-T framework identifies transcriptionally co-occurring pathways from the results of unsupervised analysis of gene expression data and visualizes the relationships between pathways as a network. PathCORE-T recapitulated previously described pathway-pathway relationships and suggested experimentally testable additional hypotheses that remain to be explored.
Electronic supplementary material: The online version of this article (10.1186/s13040-018-0175-7) contains supplementary material, which is available to authorized users.
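For a concrete sense of the feature-construction step the abstract refers to, the sketch below (a rough illustration, not code from PathCORE-T; the expression matrix, gene names, component count, and top-gene cutoff are all invented) builds NMF features from a non-negative expression matrix and keeps each feature's highest-weight genes, i.e., the per-feature gene sets on which pathway co-occurrence would then be assessed.

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in expression data: samples x genes, non-negative (e.g., normalized counts).
rng = np.random.default_rng(0)
genes = [f"gene_{i}" for i in range(200)]
expression = rng.random((60, len(genes)))

# Construct k features; each row of H weights every gene's contribution to one feature.
k = 10
model = NMF(n_components=k, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(expression)  # samples x k (feature activity per sample)
H = model.components_                # k x genes (gene weights per feature)

# Keep the highest-weight genes of each feature (the cutoff here is arbitrary).
def top_genes(weights, names, top_n=15):
    order = np.argsort(weights)[::-1][:top_n]
    return {names[i] for i in order}

feature_gene_sets = [top_genes(H[f], genes) for f in range(k)]
print(len(feature_gene_sets), "features,", len(feature_gene_sets[0]), "genes each")
```

In practice the features would come from a model of a real compendium (e.g., eADAGE signatures or NMF on TCGA), and the rule for selecting each feature's genes is whatever the upstream feature-construction method defines.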
Related work
Our approach diverges from other algorithms that we identified in the literature in its intent: PathCORE-T finds pathway pairs within a biological system that are overrepresented in features constructed from diverse transcriptomic data. This complements other work that developed models specific to a single condition or disease. Approaches designed to capture pathway-pathway interactions from gene expression experiments for disease-specific, case-control studies have been published [15, 16]. For example, Pham et al. developed Latent Pathway Identification Analysis to find pathways that exert latent influences on transcriptionally altered genes [17]. Under this approach, the transcriptional response profiles for a binary condition (disease/normal), in conjunction with pathways specified in KEGG and functions in the Gene Ontology (GO) [18], are used to construct a pathway-pathway network in which key pathways are identified by their network centrality scores [17]. Similarly, Pan et al. measured the betweenness centrality of pathways in disease-specific genetic interaction and coexpression networks to identify those most likely to be associated with bladder cancer risk [19]. These methods captured pathway relationships associated with a particular disease state.
Global networks identify relationships between pathways that are not disease- or condition-specific. One such network, detailed by Li et al., relied on publicly available protein interaction data to determine pathway-pathway interactions [20]. Two pathways were connected in the network if the number of protein interactions between the pair was significant with respect to the computed background distribution. Such approaches rely on databases of interactions, though the interactions identified can subsequently be used for pathway-centric analyses of transcriptomic data [20, 21]. Pita-Juárez et al. created the Pathway Coexpression Network (PCxN) as a tool to discover pathways correlated with a pathway of interest [22]. They estimated correlations between pathways based on the expression of their underlying genes (as annotated in MSigDB) across a curated compendium of microarray data [22]. Software like PathCORE-T that generates global networks of pathway relationships from unsupervised feature analysis models built on transcriptomic data has not yet been published.
The intention of PathCORE-T is to work from transcriptomic data in ways that do not give undue preference to combinations of pathways that share genes. Other methods have sought to consider shared genes between gene sets, protein-protein interactions, or other curated knowledge bases to define pathway-pathway interactions [20, 21, 23–25]. For example, Glass and Girvan described another network structure that relates functional terms in GO based on shared gene annotations [26]. In contrast with this approach, PathCORE-T specifically removes gene overlap in pathway definitions before they are used to build a network. Our software reports pathway-pathway connections that are overrepresented in gene expression patterns extracted from a large transcriptomic compendium while controlling for the fact that some pathways share genes.
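Taking the points above together (pathway pairs observed in the same features more often than expected by chance, with overlap between pathway definitions resolved before counting), a schematic version of the co-occurrence test could look like the sketch below. It is only an illustrative reading of the approach: the pathway definitions are toy placeholders, and the permuted-membership null used here is not necessarily the null model implemented in the released PathCORE-T software.

```python
import itertools
from collections import Counter
import numpy as np

def pathway_pair_counts(feature_gene_sets, pathways):
    """Count how often two pathways are hit by the same feature.
    `pathways` maps pathway name -> set of member genes; overlap between
    definitions is assumed to have been removed beforehand."""
    counts = Counter()
    for gene_set in feature_gene_sets:
        hit = sorted(p for p, members in pathways.items() if gene_set & members)
        counts.update(itertools.combinations(hit, 2))
    return counts

def cooccurrence_pvalue(feature_gene_sets, pathways, pair, n_perm=1000, seed=0):
    """Empirical p-value for one pathway pair against a permuted-membership null."""
    rng = np.random.default_rng(seed)
    observed = pathway_pair_counts(feature_gene_sets, pathways)[pair]
    all_genes = sorted(set().union(*pathways.values()))
    exceed = 0
    for _ in range(n_perm):
        it = iter(rng.permutation(all_genes))
        permuted = {p: {next(it) for _ in members} for p, members in pathways.items()}
        if pathway_pair_counts(feature_gene_sets, permuted)[pair] >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Toy example: three non-overlapping "pathways"; two features hit A and B together.
pathways = {"A": {"g1", "g2"}, "B": {"g3", "g4"}, "C": {"g5", "g6"}}
features = [{"g1", "g3"}, {"g2", "g4"}, {"g5"}]
print(pathway_pair_counts(features, pathways))
print(cooccurrence_pvalue(features, pathways, ("A", "B"), n_perm=200))
```

Pairs whose empirical p-values fall below a chosen threshold would become the edges of the co-occurrence network; how edges are weighted and aggregated in the actual software may differ from this sketch.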
[ "26776218", "23193258", "28711280", "20619355", "10592173", "23934932", "15016911", "12840046", "24071849", "27454244", "27816680", "21788508", "10802651", "18434343", "26345254", "29554099", "21738677", "19811689", "26588252", "27664720", "11985723", "22835944", "7551033", "24086521", "10350474", "23057863", "26909576", "24952901", "21376230", "19686080", "23842645", "23723247", "19544585", "19619488", "19000839", "20010874", "25380750", "21295686", "21537463", "20889557", "11150298", "16325811", "11937535", "8097338", "27280403", "22685333", "20725108" ]
[ { "pmid": "26776218", "title": "COMPUTATIONAL APPROACHES TO STUDY MICROBES AND MICROBIOMES.", "abstract": "Technological advances are making large-scale measurements of microbial communities commonplace. These newly acquired datasets are allowing researchers to ask and answer questions about the composition of microbial communities, the roles of members in these communities, and how genes and molecular pathways are regulated in individual community members and communities as a whole to effectively respond to diverse and changing environments. In addition to providing a more comprehensive survey of the microbial world, this new information allows for the development of computational approaches to model the processes underlying microbial systems. We anticipate that the field of computational microbiology will continue to grow rapidly in the coming years. In this manuscript we highlight both areas of particular interest in microbiology as well as computational approaches that begin to address these challenges." }, { "pmid": "23193258", "title": "NCBI GEO: archive for functional genomics data sets--update.", "abstract": "The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data." }, { "pmid": "28711280", "title": "Unsupervised Extraction of Stable Expression Signatures from Public Compendia with an Ensemble of Neural Networks.", "abstract": "Cross-experiment comparisons in public data compendia are challenged by unmatched conditions and technical noise. The ADAGE method, which performs unsupervised integration with denoising autoencoder neural networks, can identify biological patterns, but because ADAGE models, like many neural networks, are over-parameterized, different ADAGE models perform equally well. To enhance model robustness and better build signatures consistent with biological pathways, we developed an ensemble ADAGE (eADAGE) that integrated stable signatures across models. We applied eADAGE to a compendium of Pseudomonas aeruginosa gene expression profiling experiments performed in 78 media. eADAGE revealed a phosphate starvation response controlled by PhoB in media with moderate phosphate and predicted that a second stimulus provided by the sensor kinase, KinB, is required for this PhoB activation. We validated this relationship using both targeted and unbiased genetic approaches. eADAGE, which captures stable biological patterns, enables cross-experiment comparisons that can highlight measured but undiscovered relationships." }, { "pmid": "20619355", "title": "Independent component analysis: mining microarray data for fundamental human gene expression modules.", "abstract": "As public microarray repositories rapidly accumulate gene expression data, these resources contain increasingly valuable information about cellular processes in human biology. 
This presents a unique opportunity for intelligent data mining methods to extract information about the transcriptional modules underlying these biological processes. Modeling cellular gene expression as a combination of functional modules, we use independent component analysis (ICA) to derive 423 fundamental components of human biology from a 9395-array compendium of heterogeneous expression data. Annotation using the Gene Ontology (GO) suggests that while some of these components represent known biological modules, others may describe biology not well characterized by existing manually-curated ontologies. In order to understand the biological functions represented by these modules, we investigate the mechanism of the preclinical anti-cancer drug parthenolide (PTL) by analyzing the differential expression of our fundamental components. Our method correctly identifies known pathways and predicts that N-glycan biosynthesis and T-cell receptor signaling may contribute to PTL response. The fundamental gene modules we describe have the potential to provide pathway-level insight into new gene expression datasets." }, { "pmid": "10592173", "title": "KEGG: kyoto encyclopedia of genes and genomes.", "abstract": "KEGG (Kyoto Encyclopedia of Genes and Genomes) is a knowledge base for systematic analysis of gene functions, linking genomic information with higher order functional information. The genomic information is stored in the GENES database, which is a collection of gene catalogs for all the completely sequenced genomes and some partial genomes with up-to-date annotation of gene functions. The higher order functional information is stored in the PATHWAY database, which contains graphical representations of cellular processes, such as metabolism, membrane transport, signal transduction and cell cycle. The PATHWAY database is supplemented by a set of ortholog group tables for the information about conserved subpathways (pathway motifs), which are often encoded by positionally coupled genes on the chromosome and which are especially useful in predicting gene functions. A third database in KEGG is LIGAND for the information about chemical compounds, enzyme molecules and enzymatic reactions. KEGG provides Java graphics tools for browsing genome maps, comparing two genome maps and manipulating expression maps, as well as computational tools for sequence comparison, graph comparison and path computation. The KEGG databases are daily updated and made freely available (http://www. genome.ad.jp/kegg/)." }, { "pmid": "23934932", "title": "Analysis and correction of crosstalk effects in pathway analysis.", "abstract": "Identifying the pathways that are significantly impacted in a given condition is a crucial step in understanding the underlying biological phenomena. All approaches currently available for this purpose calculate a P-value that aims to quantify the significance of the involvement of each pathway in the given phenotype. These P-values were previously thought to be independent. Here we show that this is not the case, and that many pathways can considerably affect each other's P-values through a \"crosstalk\" phenomenon. Although it is intuitive that various pathways could influence each other, the presence and extent of this phenomenon have not been rigorously studied and, most importantly, there is no currently available technique able to quantify the amount of such crosstalk. 
Here, we show that all three major categories of pathway analysis methods (enrichment analysis, functional class scoring, and topology-based methods) are severely influenced by crosstalk phenomena. Using real pathways and data, we show that in some cases pathways with significant P-values are not biologically meaningful, and that some biologically meaningful pathways with nonsignificant P-values become statistically significant when the crosstalk effects of other pathways are removed. We describe a technique able to detect, quantify, and correct crosstalk effects, as well as identify independent functional modules. We assessed this novel approach on data from four experiments involving three phenotypes and two species. This method is expected to allow a better understanding of individual experiment results, as well as a more refined definition of the existing signaling pathways for specific phenotypes." }, { "pmid": "15016911", "title": "Metagenes and molecular pattern discovery using matrix factorization.", "abstract": "We describe here the use of nonnegative matrix factorization (NMF), an algorithm based on decomposition by parts that can reduce the dimension of expression data from thousands of genes to a handful of metagenes. Coupled with a model selection mechanism, adapted to work for any stochastic clustering algorithm, NMF is an efficient method for identification of distinct molecular patterns and provides a powerful method for class discovery. We demonstrate the ability of NMF to recover meaningful biological information from cancer-related microarray data. NMF appears to have advantages over other methods such as hierarchical clustering or self-organizing maps. We found it less sensitive to a priori selection of genes or initial conditions and able to detect alternative or context-dependent patterns of gene expression in complex biological systems. This ability, similar to semantic polysemy in text, provides a general method for robust molecular pattern discovery." }, { "pmid": "12840046", "title": "Subsystem identification through dimensionality reduction of large-scale gene expression data.", "abstract": "The availability of parallel, high-throughput biological experiments that simultaneously monitor thousands of cellular observables provides an opportunity for investigating cellular behavior in a highly quantitative manner at multiple levels of resolution. One challenge to more fully exploit new experimental advances is the need to develop algorithms to provide an analysis at each of the relevant levels of detail. Here, the data analysis method non-negative matrix factorization (NMF) has been applied to the analysis of gene array experiments. Whereas current algorithms identify relationships on the basis of large-scale similarity between expression patterns, NMF is a recently developed machine learning technique capable of recognizing similarity between subportions of the data corresponding to localized features in expression space. A large data set consisting of 300 genome-wide expression measurements of yeast was used as sample data to illustrate the performance of the new approach. Local features detected are shown to map well to functional cellular subsystems. Functional relationships predicted by the new analysis are compared with those predicted using standard approaches; validation using bioinformatic databases suggests predictions using the new approach may be up to twice as accurate as some conventional approaches." 
}, { "pmid": "24071849", "title": "The Cancer Genome Atlas Pan-Cancer analysis project.", "abstract": "The Cancer Genome Atlas (TCGA) Research Network has profiled and analyzed large numbers of human tumors to discover molecular aberrations at the DNA, RNA, protein and epigenetic levels. The resulting rich data provide a major opportunity to develop an integrated picture of commonalities, differences and emergent themes across tumor lineages. The Pan-Cancer initiative compares the first 12 tumor types profiled by TCGA. Analysis of the molecular aberrations and their functional roles across tumor types will teach us how to extend therapies effective in one cancer type to others with a similar genomic profile." }, { "pmid": "27454244", "title": "Identifying epigenetically dysregulated pathways from pathway-pathway interaction networks.", "abstract": "BACKGROUND\nIdentification of pathways that show significant difference in activity between disease and control samples have been an interesting topic of research for over a decade. Pathways so identified serve as potential indicators of aberrations in phenotype or a disease condition. Recently, epigenetic mechanisms such as DNA methylation are known to play an important role in altering the regulatory mechanism of biological pathways. It is reasonable to think that a set of genes that show significant difference in expression and methylation interact together to form a network of pathways. Existing pathway identification methods fail to capture the complex interplay between interacting pathways.\n\n\nRESULTS\nThis paper proposes a novel framework to identify biological pathways that are dysregulated by epigenetic mechanisms. Experiments on four benchmark cancer datasets and comparison with state-of-the-art pathway identification methods reveal the effectiveness of the proposed approach.\n\n\nCONCLUSION\nThe proposed framework incorporates both topology and biological relationships of pathways. Comparison with state-of-the-art techniques reveals promising results. Epigenetic signatures identified from pathway interaction networks can help to advance Molecular Pathological Epidemiology (MPE) research efforts by predicting tumor molecular changes." }, { "pmid": "27816680", "title": "Differential pathway network analysis used to identify key pathways associated with pediatric pneumonia.", "abstract": "We aimed to identify key pathways to further explore the molecular mechanism underlying pediatric pneumonia using differential pathway network which integrated protein-protein interactions (PPI) data and pathway information. PPI data and pathway information were obtained from STRING and Reactome database, respectively. Next, pathway interactions were identified on the basis of constructing gene-gene interactions randomly, and a weight value computed using Spearman correlation coefficient was assigned to each pathway-pathway interaction, thereby to further detect differential pathway interactions. Subsequently, construction of differential pathway network was implemented using Cytoscope, following by network clustering analysis using ClusterONE. Finally, topological analysis for differential pathway network was performed to identify hub pathway which had top 5% degree distribution. Significantly, 901 pathways were identified to construct pathway interactions. After discarding the pathway interactions with weight value < 1.2, a differential pathway network was constructed, which contained 499 interactions and 347 pathways. 
Topological analysis showed 17 hub pathways (FGFR1 fusion mutants, molecules associated with elastic fibres, FGFR1 mutant receptor activation, and so on) were identified. Significantly, signaling by FGFR1 fusion mutants and FGFR1 mutant receptor activation simultaneously appeared in two clusters. Molecules associated with elastic fibres existed in one cluster. Accordingly, differential pathway network method might serve as a predictive tool to help us to further understand the development of pediatric pneumonia. FGFR1 fusion mutants, FGFR1 mutant receptor activation, and molecules associated with elastic fibres might play important roles in the progression of pediatric pneumonia." }, { "pmid": "21788508", "title": "Network-based prediction for sources of transcriptional dysregulation using latent pathway identification analysis.", "abstract": "Understanding the systemic biological pathways and the key cellular mechanisms that dictate disease states, drug response, and altered cellular function poses a significant challenge. Although high-throughput measurement techniques, such as transcriptional profiling, give some insight into the altered state of a cell, they fall far short of providing by themselves a complete picture. Some improvement can be made by using enrichment-based methods to, for example, organize biological data of this sort into collections of dysregulated pathways. However, such methods arguably are still limited to primarily a transcriptional view of the cell. Augmenting these methods still further with networks and additional -omics data has been found to yield pathways that play more fundamental roles. We propose a previously undescribed method for identification of such pathways that takes a more direct approach to the problem than any published to date. Our method, called latent pathway identification analysis (LPIA), looks for statistically significant evidence of dysregulation in a network of pathways constructed in a manner that implicitly links pathways through their common function in the cell. We describe the LPIA methodology and illustrate its effectiveness through analysis of data on (i) metastatic cancer progression, (ii) drug treatment in human lung carcinoma cells, and (iii) diagnosis of type 2 diabetes. With these analyses, we show that LPIA can successfully identify pathways whose perturbations have latent influences on the transcriptionally altered genes." }, { "pmid": "18434343", "title": "A global pathway crosstalk network.", "abstract": "MOTIVATION\nGiven the complex nature of biological systems, pathways often need to function in a coordinated fashion in order to produce appropriate physiological responses to both internal and external stimuli. Therefore, understanding the interaction and crosstalk between pathways is important for understanding the function of both cells and more complex systems.\n\n\nRESULTS\nWe have developed a computational approach to detect crosstalk among pathways based on protein interactions between the pathway components. We built a global mammalian pathway crosstalk network that includes 580 pathways (covering 4753 genes) with 1815 edges between pathways. This crosstalk network follows a power-law distribution: P(k) approximately k(-)(gamma), gamma = 1.45, where P(k) is the number of pathways with k neighbors, thus pathway interactions may exhibit the same scale-free phenomenon that has been documented for protein interaction networks. 
We further used this network to understand colorectal cancer progression to metastasis based on transcriptomic data.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "26345254", "title": "Crosstalk events in the estrogen signaling pathway may affect tamoxifen efficacy in breast cancer molecular subtypes.", "abstract": "Steroid hormones are involved on cell growth, development and differentiation. Such effects are often mediated by steroid receptors. One paradigmatic example of this coupling is the estrogen signaling pathway. Its dysregulation is involved in most tumors of the mammary gland. It is thus an important pharmacological target in breast cancer. This pathway, however, crosstalks with several other molecular pathways, a fact that may have consequences for the effectiveness of hormone modulating drug therapies, such as tamoxifen. For this work, we performed a systematic analysis of the major routes involved in crosstalk phenomena with the estrogen pathway - based on gene expression experiments (819 samples) and pathway analysis (493 samples) - for biopsy-captured tissue and contrasted in two independent datasets with in vivo and in vitro pharmacological stimulation. Our results confirm the presence of a number of crosstalk events across the estrogen signaling pathway with others that are dysregulated in different molecular subtypes of breast cancer. These may be involved in proliferation, invasiveness and apoptosis-evasion in patients. The results presented may open the way to new designs of adjuvant and neoadjuvant therapies for breast cancer treatment." }, { "pmid": "29554099", "title": "The Pathway Coexpression Network: Revealing pathway relationships.", "abstract": "A goal of genomics is to understand the relationships between biological processes. Pathways contribute to functional interplay within biological processes through complex but poorly understood interactions. However, limited functional references for global pathway relationships exist. Pathways from databases such as KEGG and Reactome provide discrete annotations of biological processes. Their relationships are currently either inferred from gene set enrichment within specific experiments, or by simple overlap, linking pathway annotations that have genes in common. Here, we provide a unifying interpretation of functional interaction between pathways by systematically quantifying coexpression between 1,330 canonical pathways from the Molecular Signatures Database (MSigDB) to establish the Pathway Coexpression Network (PCxN). We estimated the correlation between canonical pathways valid in a broad context using a curated collection of 3,207 microarrays from 72 normal human tissues. PCxN accounts for shared genes between annotations to estimate significant correlations between pathways with related functions rather than with similar annotations. We demonstrate that PCxN provides novel insight into mechanisms of complex diseases using an Alzheimer's Disease (AD) case study. PCxN retrieved pathways significantly correlated with an expert curated AD gene list. These pathways have known associations with AD and were significantly enriched for genes independently associated with AD. As a further step, we show how PCxN complements the results of gene set enrichment methods by revealing relationships between enriched pathways, and by identifying additional highly correlated pathways. 
PCxN revealed that correlated pathways from an AD expression profiling study include functional clusters involved in cell adhesion and oxidative stress. PCxN provides expanded connections to pathways from the extracellular matrix. PCxN provides a powerful new framework for interrogation of global pathway relationships. Comprehensive exploration of PCxN can be performed at http://pcxn.org/." }, { "pmid": "21738677", "title": "Integrated bio-entity network: a system for biological knowledge discovery.", "abstract": "A significant part of our biological knowledge is centered on relationships between biological entities (bio-entities) such as proteins, genes, small molecules, pathways, gene ontology (GO) terms and diseases. Accumulated at an increasing speed, the information on bio-entity relationships is archived in different forms at scattered places. Most of such information is buried in scientific literature as unstructured text. Organizing heterogeneous information in a structured form not only facilitates study of biological systems using integrative approaches, but also allows discovery of new knowledge in an automatic and systematic way. In this study, we performed a large scale integration of bio-entity relationship information from both databases containing manually annotated, structured information and automatic information extraction of unstructured text in scientific literature. The relationship information we integrated in this study includes protein-protein interactions, protein/gene regulations, protein-small molecule interactions, protein-GO relationships, protein-pathway relationships, and pathway-disease relationships. The relationship information is organized in a graph data structure, named integrated bio-entity network (IBN), where the vertices are the bio-entities and edges represent their relationships. Under this framework, graph theoretic algorithms can be designed to perform various knowledge discovery tasks. We designed breadth-first search with pruning (BFSP) and most probable path (MPP) algorithms to automatically generate hypotheses--the indirect relationships with high probabilities in the network. We show that IBN can be used to generate plausible hypotheses, which not only help to better understand the complex interactions in biological systems, but also provide guidance for experimental designs." }, { "pmid": "19811689", "title": "HPD: an online integrated human pathway database enabling systems biology studies.", "abstract": "BACKGROUND\nPathway-oriented experimental and computational studies have led to a significant accumulation of biological knowledge concerning three major types of biological pathway events: molecular signaling events, gene regulation events, and metabolic reaction events. A pathway consists of a series of molecular pathway events that link molecular entities such as proteins, genes, and metabolites. There are approximately 300 biological pathway resources as of April 2009 according to the Pathguide database; however, these pathway databases generally have poor coverage or poor quality, and are difficult to integrate, due to syntactic-level and semantic-level data incompatibilities.\n\n\nRESULTS\nWe developed the Human Pathway Database (HPD) by integrating heterogeneous human pathway data that are either curated at the NCI Pathway Interaction Database (PID), Reactome, BioCarta, KEGG or indexed from the Protein Lounge Web sites. 
Integration of pathway data at syntactic, semantic, and schematic levels was based on a unified pathway data model and data warehousing-based integration techniques. HPD provides a comprehensive online view that connects human proteins, genes, RNA transcripts, enzymes, signaling events, metabolic reaction events, and gene regulatory events. At the time of this writing HPD includes 999 human pathways and more than 59,341 human molecular entities. The HPD software provides both a user-friendly Web interface for online use and a robust relational database backend for advanced pathway querying. This pathway tool enables users to 1) search for human pathways from different resources by simply entering genes/proteins involved in pathways or words appearing in pathway names, 2) analyze pathway-protein association, 3) study pathway-pathway similarity, and 4) build integrated pathway networks. We demonstrated the usage and characteristics of the new HPD through three breast cancer case studies.\n\n\nCONCLUSION\nHPD http://bio.informatics.iupui.edu/HPD is a new resource for searching, managing, and studying human biological pathways. Users of HPD can search against large collections of human biological pathways, compare related pathways and their molecular entity compositions, and build high-quality, expanded-scope disease pathway models. The current HPD software can help users address a wide range of pathway-related questions in human disease biology studies." }, { "pmid": "26588252", "title": "Finding New Order in Biological Functions from the Network Structure of Gene Annotations.", "abstract": "The Gene Ontology (GO) provides biologists with a controlled terminology that describes how genes are associated with functions and how functional terms are related to one another. These term-term relationships encode how scientists conceive the organization of biological functions, and they take the form of a directed acyclic graph (DAG). Here, we propose that the network structure of gene-term annotations made using GO can be employed to establish an alternative approach for grouping functional terms that captures intrinsic functional relationships that are not evident in the hierarchical structure established in the GO DAG. Instead of relying on an externally defined organization for biological functions, our approach connects biological functions together if they are performed by the same genes, as indicated in a compendium of gene annotation data from numerous different sources. We show that grouping terms by this alternate scheme provides a new framework with which to describe and predict the functions of experimentally identified sets of genes." }, { "pmid": "27664720", "title": "Phosphatidylcholine affects the secretion of the alkaline phosphatase PhoA in Pseudomonas strains.", "abstract": "Pseudomonas aeruginosa ATCC 27853 and Pseudomonas sp. 593 use the phosphatidylcholine synthase pathway (Pcs-pathway) for the biosynthesis of phosphatidylcholine (PC). Both bacterial strains contain the phoA and lapA genes encoding alkaline phosphatases (ALP) and display strong ALP activities. The PhoA and LapA enzymes are thought to be independently secreted via the Xcp and Hxc type II secretion system (T2SS) subtypes, in which the Hxc system may act as a complementary mechanism when the Xcp pathway becomes limiting. Inactivation of the pcs gene in both bacteria abolished PC synthesis and resulted in approximately 50% less ALP activity in the cell-free culture. 
Analysis by western blotting showed that LapA protein content in the wild type and the pcs- mutant was unchanged in the cytoplasmic, periplasmic or extracellular protein fractions. In contrast, the PhoA protein in the pcs- mutant was less prevalent among extracellular proteins but was more abundant in the periplasmic protein fraction compared to the wild type. Semi- quantitative reverse transcriptase PCR showed that phoA, lapA and 12 xcp genes were equally expressed at the transcriptional level in both the wild types and the pcs- mutants. Our results demonstrate that the absence of PC in bacterial membrane phospholipids does not interfere with the transcription of the phoA and lapA genes but primarily affects the export of PhoA from the cytoplasm to the extracellular environment via the Xcp T2SS." }, { "pmid": "11985723", "title": "A novel type II secretion system in Pseudomonas aeruginosa.", "abstract": "The genome sequence of Pseudomonas aeruginosa strain PAO1 has been determined to facilitate postgenomic studies aimed at understanding the capacity of adaptation of this ubiquitous opportunistic pathogen. P. aeruginosa produces toxins and hydrolytic enzymes that are secreted via the type II secretory pathway using the Xcp machinery or 'secreton'. In this study, we characterized a novel gene cluster, called hxc for homologous to xcp. Characterization of an hxcR mutant, grown in phosphate-limiting medium, revealed the absence of a 40 kDa protein found in the culture supernatant of wild-type or xcp derivative mutant strains. The protein corresponded to the alkaline phosphatase L-AP, renamed LapA, which is secreted in an xcp-independent but hxc-dependent manner. Finally, we showed that expression of the hxc gene cluster is under phosphate regulation. This is the first report of the existence of two functional type II secretory pathways within the same organism, which could be related to the high adaptation potential of P. aeruginosa." }, { "pmid": "22835944", "title": "Type II-dependent secretion of a Pseudomonas aeruginosa DING protein.", "abstract": "Pseudomonas aeruginosa is an opportunistic bacterial pathogen that uses a wide range of protein secretion systems to interact with its host. Genes encoding the PAO1 Hxc type II secretion system are linked to genes encoding phosphatases (LapA/LapB). Microarray genotyping suggested that Pseudomonas aeruginosa clinical isolates, including urinary tract (JJ692) and blood (X13273) isolates, lacked the lapA/lapB genes. Instead, we show that they carry a gene encoding a protein of the PstS family. This protein, which we call LapC, also has significant similarities with LapA/LapB. LapC belongs to the family of DING proteins and displays the canonical DINGGG motif within its N terminus. DING proteins are members of a prokaryotic phosphate binding protein superfamily. We show that LapC is secreted in an Hxc-dependent manner and is under the control of the PhoB response regulator. The genetic organization hxc-lapC found in JJ692 and X13273 is similar to PA14, which is the most frequent P. aeruginosa genotype. While the role of LapA, LapB and LapC proteins remains unclear in P. aeruginosa pathogenesis, they are likely to be part of a phosphate scavenging or sensing system needed to survive and thrive when low phosphate environments are encountered within the host." 
}, { "pmid": "7551033", "title": "Escherichia coli periplasmic protein FepB binds ferrienterobactin.", "abstract": "Most high-affinity systems for iron uptake in Gram-negative bacteria are thought to employ periplasmic-binding-protein-dependent transport. In Escherichia coli, FepB is a periplasmic protein required for uptake of iron complexed to its endogenously-synthesized siderophore enterobactin (Ent). Direct evidence that ferrienterobactin (FeEnt) binds to FepB is lacking because high background binding by FeEnt prevents use of the usual binding protein assays. Here the membrane localization vehicle LppOmpA [Francisco, J.A., Earhart, C.F. & Georgiou, G. (1992). Proc Natl Acad Sci USA 89, 2713-2717] was employed to place FepB in the E. coli outer membrane. Plasmid pTX700 was constructed and shown to encode, under lac operator control, the 'tribrid' protein LppOmpAFepB; the carboxy-terminal FepB portion lacks at most two amino acids of mature FepB. After short induction periods, most of the tribrid was in the outer membrane. A number of LppOmpAFepB species could be detected; some were degradation products and some may be related to the multiplicity of FepB forms previously observed in minicells and maxicells. Outer membrane harbouring the tribrid and lacking FepA, the normal outer membrane receptor for FeEnt, bound approximately four times more FeEnt than outer membrane from uninduced cells, from cells lacking pTX700 and from cells expressing only an LppOmpA 'dibrid'. Similarly, whole UT5600(fepA)/pTX700 cells induced for tribrid synthesis bound FeEnt and this binding was not affected by energy poisons. The results demonstrated that FepB can bind FeEnt, thereby definitely placing FeEnt transport in the periplasmic permease category of transport systems, and that the LppOmpA localization vehicle can be used with periplasmic binding proteins." }, { "pmid": "24086521", "title": "The transcriptional regulator Np20 is the zinc uptake regulator in Pseudomonas aeruginosa.", "abstract": "Zinc is essential for all bacteria, but excess amounts of the metal can have toxic effects. To address this, bacteria have developed tightly regulated zinc uptake systems, such as the ZnuABC zinc transporter which is regulated by the Fur-like zinc uptake regulator (Zur). In Pseudomonas aeruginosa, a Zur protein has yet to be identified experimentally, however, sequence alignment revealed that the zinc-responsive transcriptional regulator Np20, encoded by np20 (PA5499), shares high sequence identity with Zur found in other bacteria. In this study, we set out to determine whether Np20 was functioning as Zur in P. aeruginosa. Using RT-PCR, we determined that np20 (hereafter known as zur) formed a polycistronic operon with znuC and znuB. Mutant strains, lacking the putative znuA, znuB, or znuC genes were found to grow poorly in zinc deplete conditions as compared to wild-type strain PAO1. Intracellular zinc concentrations in strain PAO-Zur (Δzur) were found to be higher than those for strain PAO1, further implicating the zur as the zinc uptake regulator. Reporter gene fusions and real time RT-PCR revealed that transcription of znuA was repressed in a zinc-dependent manner in strain PAO1, however zinc-dependent transcriptional repression was alleviated in strain PAO-Zur, suggesting that the P. aeruginosa Zur homolog (ZurPA) directly regulates expression of znuA. 
Electrophoretic mobility shift assays also revealed that recombinant ZurPA specifically binds to the promoter region of znuA and does not bind in the presence of the zinc chelator N,N',N-tetrakis(2-pyridylmethyl) ethylenediamine (TPEN). Taken together, these data support the notion that Np20 is the P. aeruginosa Zur, which regulates the transcription of the genes encoding the high affinity ZnuABC zinc transport system." }, { "pmid": "10350474", "title": "The ferric uptake regulation (Fur) repressor is a zinc metalloprotein.", "abstract": "The Fur protein regulates the expression of a wide variety of iron-responsive genes; however, the interaction of this repressor with its cognate metal ion remains controversial. The iron-bound form of Fur has proved difficult to obtain, and conflicting results have been published using Mn(II) as a probe for in vitro DNA-binding studies. We report here that the purified protein contains tightly bound zinc and propose that Zn(II) is bound to the protein in vivo. Upon purification, Fur retains ca. 2.1 mol of Zn(II)/mol of Fur monomer (Zn2Fur). One zinc is easily removed by treatment of Zn2Fur with zinc chelating agents, resulting in Zn1Fur with ca. 0.9 mol of Zn(II)/mol of protein. The remaining zinc in Zn1Fur can only be removed under denaturing conditions to yield apo-Fur with ca. 0.1 mol of Zn(II)/mol of protein. Our results suggest that many literature descriptions of purified Fur protein do not correspond to the apo-protein, but to Zn1Fur or Zn2Fur. Dissociation constants (Kd) of protein-DNA complexes are ca. 20 nM for both Zn2Fur and Zn1Fur as determined by electrophoretic mobility shift assays and DNase I footprinting assays. The two metalated forms, however, show qualitative differences in the footprinting assays while apo-Fur does not bind specifically to the operator. The existence of these Zn(II) binding sites in Fur may resolve some discrepancies in the literature and have implications concerning Zur, a Fur homologue in E. coli that regulates zinc-responsive genes." }, { "pmid": "23057863", "title": "Origins of specificity and cross-talk in metal ion sensing by Bacillus subtilis Fur.", "abstract": "Fur (ferric uptake regulator) is the master regulator of iron homeostasis in many bacteria, but how it responds specifically to Fe(II) in vivo is not clear. Biochemical analyses of Bacillus subtilis Fur (BsFur) reveal that in addition to Fe(II), both Zn(II) and Mn(II) allosterically activate BsFur-DNA binding. Dimeric BsFur co-purifies with site 1 structural Zn(II) (Fur(2) Zn(2) ) and can bind four additional Zn(II) or Mn(II) ions per dimer. Metal ion binding at previously described site 3 occurs with highest affinity, but the Fur(2) Zn(2) :Me(2) form has only a modest increase in DNA binding affinity (approximately sevenfold). Metallation of site 2 (Fur(2) Zn(2) :Me(4) ) leads to a ~ 150-fold further enhancement in DNA binding affinity. Fe(II) binding studies indicate that BsFur buffers the intracellular Fe(II) concentration at ~ 1 μM. Both Mn(II) and Zn(II) are normally buffered at levels insufficient for metallation of BsFur site 2, thereby accounting for the lack of cross-talk observed in vivo. However, in a perR mutant, where the BsFur concentration is elevated, BsFur may now use Mn(II) as a co-repressor and inappropriately repress iron uptake. Since PerR repression of fur is enhanced by Mn(II), and antagonized by Fe(II), PerR may co-regulate Fe(II) homeostasis by modulating BsFur levels in response to the Mn(II)/Fe(II) ratio." 
}, { "pmid": "26909576", "title": "Genomic analyses identify molecular subtypes of pancreatic cancer.", "abstract": "Integrated genomic analysis of 456 pancreatic ductal adenocarcinomas identified 32 recurrently mutated genes that aggregate into 10 pathways: KRAS, TGF-β, WNT, NOTCH, ROBO/SLIT signalling, G1/S transition, SWI-SNF, chromatin modification, DNA repair and RNA processing. Expression analysis defined 4 subtypes: (1) squamous; (2) pancreatic progenitor; (3) immunogenic; and (4) aberrantly differentiated endocrine exocrine (ADEX) that correlate with histopathological characteristics. Squamous tumours are enriched for TP53 and KDM6A mutations, upregulation of the TP63∆N transcriptional network, hypermethylation of pancreatic endodermal cell-fate determining genes and have a poor prognosis. Pancreatic progenitor tumours preferentially express genes involved in early pancreatic development (FOXA2/3, PDX1 and MNX1). ADEX tumours displayed upregulation of genes that regulate networks involved in KRAS activation, exocrine (NR5A2 and RBPJL), and endocrine differentiation (NEUROD1 and NKX2-2). Immunogenic tumours contained upregulated immune networks including pathways involved in acquired immune suppression. These data infer differences in the molecular evolution of pancreatic cancer subtypes and identify opportunities for therapeutic development." }, { "pmid": "24952901", "title": "Assessing the clinical utility of cancer genomic and proteomic data across tumor types.", "abstract": "Molecular profiling of tumors promises to advance the clinical management of cancer, but the benefits of integrating molecular data with traditional clinical variables have not been systematically studied. Here we retrospectively predict patient survival using diverse molecular data (somatic copy-number alteration, DNA methylation and mRNA, microRNA and protein expression) from 953 samples of four cancer types from The Cancer Genome Atlas project. We find that incorporating molecular data with clinical variables yields statistically significantly improved predictions (FDR < 0.05) for three cancers but those quantitative gains were limited (2.2-23.9%). Additional analyses revealed little predictive power across tumor types except for one case. In clinically relevant genes, we identified 10,281 somatic alterations across 12 cancer types in 2,928 of 3,277 patients (89.4%), many of which would not be revealed in single-tumor analyses. Our study provides a starting point and resources, including an open-access model evaluation platform, for building reliable prognostic and therapeutic strategies that incorporate molecular data." }, { "pmid": "21376230", "title": "Hallmarks of cancer: the next generation.", "abstract": "The hallmarks of cancer comprise six biological capabilities acquired during the multistep development of human tumors. The hallmarks constitute an organizing principle for rationalizing the complexities of neoplastic disease. They include sustaining proliferative signaling, evading growth suppressors, resisting cell death, enabling replicative immortality, inducing angiogenesis, and activating invasion and metastasis. Underlying these hallmarks are genome instability, which generates the genetic diversity that expedites their acquisition, and inflammation, which fosters multiple hallmark functions. Conceptual progress in the last decade has added two emerging hallmarks of potential generality to this list-reprogramming of energy metabolism and evading immune destruction. 
In addition to cancer cells, tumors exhibit another dimension of complexity: they contain a repertoire of recruited, ostensibly normal cells that contribute to the acquisition of hallmark traits by creating the \"tumor microenvironment.\" Recognition of the widespread applicability of these concepts will increasingly affect the development of new means to treat human cancer." }, { "pmid": "19686080", "title": "How the fanconi anemia pathway guards the genome.", "abstract": "Fanconi Anemia (FA) is an inherited genomic instability disorder, caused by mutations in genes regulating replication-dependent removal of interstrand DNA crosslinks. The Fanconi Anemia pathway is thought to coordinate a complex mechanism that enlists elements of three classic DNA repair pathways, namely homologous recombination, nucleotide excision repair, and mutagenic translesion synthesis, in response to genotoxic insults. To this end, the Fanconi Anemia pathway employs a unique nuclear protein complex that ubiquitinates FANCD2 and FANCI, leading to formation of DNA repair structures. Lack of obvious enzymatic activities among most FA members has made it challenging to unravel its precise modus operandi. Here we review the current understanding of how the Fanconi Anemia pathway components participate in DNA repair and discuss the mechanisms that regulate this pathway to ensure timely, efficient, and correct restoration of chromosomal integrity." }, { "pmid": "23842645", "title": "The DREAM complex: master coordinator of cell cycle-dependent gene expression.", "abstract": "The dimerization partner, RB-like, E2F and multi-vulval class B (DREAM) complex provides a previously unsuspected unifying role in the cell cycle by directly linking p130, p107, E2F, BMYB and forkhead box protein M1. DREAM mediates gene repression during the G0 phase and coordinates periodic gene expression with peaks during the G1/S and G2/M phases. Perturbations in DREAM complex regulation shift the balance from quiescence towards proliferation and contribute to the increased mitotic gene expression levels that are frequently observed in cancers with a poor prognosis." }, { "pmid": "23723247", "title": "A novel interplay between the Fanconi anemia core complex and ATR-ATRIP kinase during DNA cross-link repair.", "abstract": "When DNA replication is stalled at sites of DNA damage, a cascade of responses is activated in the cell to halt cell cycle progression and promote DNA repair. A pathway initiated by the kinase Ataxia teleangiectasia and Rad3 related (ATR) and its partner ATR interacting protein (ATRIP) plays an important role in this response. The Fanconi anemia (FA) pathway is also activated following genomic stress, and defects in this pathway cause a cancer-prone hematologic disorder in humans. Little is known about how these two pathways are coordinated. We report here that following cellular exposure to DNA cross-linking damage, the FA core complex enhances binding and localization of ATRIP within damaged chromatin. In cells lacking the core complex, ATR-mediated phosphorylation of two functional response targets, ATRIP and FANCI, is defective. We also provide evidence that the canonical ATR activation pathway involving RAD17 and TOPBP1 is largely dispensable for the FA pathway activation. Indeed DT40 mutant cells lacking both RAD17 and FANCD2 were synergistically more sensitive to cisplatin compared with either single mutant. 
Collectively, these data reveal new aspects of the interplay between regulation of ATR-ATRIP kinase and activation of the FA pathway." }, { "pmid": "19544585", "title": "Crosstalk between Wnt and bone morphogenic protein signaling: a turbulent relationship.", "abstract": "The Wnt and the bone morphogenic protein (BMP) pathways are evolutionarily conserved and essentially independent signaling mechanisms, which, however, often regulate similar biological processes. Wnt and BMP signaling are functionally integrated in many biological processes, such as embryonic patterning in Drosophila and vertebrates, formation of kidney, limb, teeth and bones, maintenance of stem cells, and cancer progression. Detailed inspection of regulation in these and other tissues reveals that Wnt and BMP signaling are functionally integrated in four fundamentally different ways. The molecular mechanism evolved to mediate this integration can also be summarized in four different ways. However, a fundamental aspect of functional and mechanistic interaction between these pathways relies on tissue-specific mechanisms, which are often not conserved and cannot be extrapolated to other tissues. Integration of the two pathways contributes toward the sophisticated means necessary for creating the complexity of our bodies and the reliable and healthy function of its tissues and organs." }, { "pmid": "19619488", "title": "Wnt/beta-catenin signaling: components, mechanisms, and diseases.", "abstract": "Signaling by the Wnt family of secreted glycolipoproteins via the transcriptional coactivator beta-catenin controls embryonic development and adult homeostasis. Here we review recent progress in this so-called canonical Wnt signaling pathway. We discuss Wnt ligands, agonists, and antagonists, and their interactions with Wnt receptors. We also dissect critical events that regulate beta-catenin stability, from Wnt receptors to the cytoplasmic beta-catenin destruction complex, and nuclear machinery that mediates beta-catenin-dependent transcription. Finally, we highlight some key aspects of Wnt/beta-catenin signaling in human diseases including congenital malformations, cancer, and osteoporosis, and discuss potential therapeutic implications." }, { "pmid": "19000839", "title": "Wnt/beta-catenin and Fgf signaling control collective cell migration by restricting chemokine receptor expression.", "abstract": "Collective cell migration is a hallmark of embryonic morphogenesis and cancer metastases. However, the molecular mechanisms regulating coordinated cell migration remain poorly understood. A genetic dissection of this problem is afforded by the migrating lateral line primordium of the zebrafish. We report that interactions between Wnt/beta-catenin and Fgf signaling maintain primordium polarity by differential regulation of gene expression in the leading versus the trailing zone. Wnt/beta-catenin signaling in leader cells informs coordinated migration via differential regulation of the two chemokine receptors, cxcr4b and cxcr7b. These findings uncover a molecular mechanism whereby a migrating tissue maintains stable, polarized gene expression domains despite periodic loss of whole groups of cells. Our findings also bear significance for cancer biology. Although the Fgf, Wnt/beta-catenin, and chemokine signaling pathways are well known to be involved in cancer progression, these studies provide in vivo evidence that these pathways are functionally linked." 
}, { "pmid": "20010874", "title": "Smad2 and Smad3 have opposing roles in breast cancer bone metastasis by differentially affecting tumor angiogenesis.", "abstract": "Transforming growth factor (TGF)-beta can suppress and promote breast cancer progression. How TGF-beta elicits these dichotomous functions and which roles the principle intracellular effector proteins Smad2 and Smad3 have therein, is unclear. Here, we investigated the specific functions of Smad2 and Smad3 in TGF-beta-induced responses in breast cancer cells in vitro and in a mouse model for breast cancer metastasis. We stably knocked down Smad2 or Smad3 expression in MDA-MB-231 breast cancer cells. The TGF-beta-induced Smad3-mediated transcriptional response was mitigated and enhanced by Smad3 and Smad2 knockdown, respectively. This response was also seen for TGF-beta-induced vascular endothelial growth factor (VEGF) expression. TGF-beta induction of key target genes involved in bone metastasis, were found to be dependent on Smad3 but not Smad2. Strikingly, whereas knockdown of Smad3 in MDA-MB-231 resulted in prolonged latency and delayed growth of bone metastasis, Smad2 knockdown resulted in a more aggressive phenotype compared with control MDA-MB-231 cells. Consistent with differential effects of Smad knockdown on TGF-beta-induced VEGF expression, these opposing effects of Smad2 versus Smad3 could be directly correlated with divergence in the regulation of tumor angiogenesis in vivo. Thus, Smad2 and Smad3 differentially affect breast cancer bone metastasis formation in vivo." }, { "pmid": "25380750", "title": "Understanding the roles of FAK in cancer: inhibitors, genetic models, and new insights.", "abstract": "Focal adhesion kinase (FAK) is a protein tyrosine kinase that regulates cellular adhesion, motility, proliferation and survival in various types of cells. Interestingly, FAK is activated and/or overexpressed in advanced cancers, and promotes cancer progression and metastasis. For this reason, FAK became a potential therapeutic target in cancer, and small molecule FAK inhibitors have been developed and are being tested in clinical phase trials. These inhibitors have demonstrated to be effective by inducing tumor cell apoptosis in addition to reducing metastasis and angiogenesis. Furthermore, several genetic FAK mouse models have made advancements in understanding the specific role of FAK both in tumors and in the tumor environment. In this review, we discuss FAK inhibitors as well as genetic mouse models to provide mechanistic insights into FAK signaling and its potential in cancer therapy." }, { "pmid": "21295686", "title": "RB1, development, and cancer.", "abstract": "The RB1 gene is the first tumor suppressor gene identified whose mutational inactivation is the cause of a human cancer, the pediatric cancer retinoblastoma. The 25 years of research since its discovery has not only illuminated a general role for RB1 in human cancer, but also its critical importance in normal development. Understanding the molecular function of the RB1 encoded protein, pRb, is a long-standing goal that promises to inform our understanding of cancer, its relationship to normal development, and possible therapeutic strategies to combat this disease. Achieving this goal has been difficult, complicated by the complexity of pRb and related proteins. 
The goal of this review is to explore the hypothesis that, at its core, the molecular function of pRb is to dynamically regulate the location-specific assembly or disassembly of protein complexes on the DNA in response to the output of various signaling pathways. These protein complexes participate in a variety of molecular processes relevant to DNA including gene transcription, DNA replication, DNA repair, and mitosis. Through regulation of these processes, RB1 plays a uniquely prominent role in normal development and cancer." }, { "pmid": "21537463", "title": "Roles of sphingosine-1-phosphate signaling in angiogenesis.", "abstract": "Sphingosine-1-phosphate (S1P) is a blood-borne lipid mediator with pleiotropic biological activities. S1P acts via the specific cell surface G-protein-coupled receptors, S1P(1-5). S1P(1) and S1P(2) were originally identified from vascular endothelial cells (ECs) and smooth muscle cells, respectively. Emerging evidence shows that S1P plays crucial roles in the regulation of vascular functions, including vascular formation, barrier protection and vascular tone via S1P(1), S1P(2) and S1P(3). In particular, S1P regulates vascular formation through multiple mechanisms; S1P exerts both positive and negative effects on angiogenesis and vascular maturation. The positive and negative effects of S1P are mediated by S1P(1) and S1P(2), respectively. These effects of S1P(1) and S1P(2) are probably mediated by the S1P receptors expressed in multiple cell types including ECs and bone-marrow-derived cells. The receptor-subtype-specific, distinct effects of S1P favor the development of novel therapeutic tactics for antitumor angiogenesis in cancer and therapeutic angiogenesis in ischemic diseases." }, { "pmid": "20889557", "title": "High expression of sphingosine 1-phosphate receptors, S1P1 and S1P3, sphingosine kinase 1, and extracellular signal-regulated kinase-1/2 is associated with development of tamoxifen resistance in estrogen receptor-positive breast cancer patients.", "abstract": "Various studies in cell lines have previously demonstrated that sphingosine kinase 1 (SK1) and extracellular signal-regulated kinase 1/2 (ERK-1/2) interact in an estrogen receptor (ER)-dependent manner to influence both breast cancer cell growth and migration. A cohort of 304 ER-positive breast cancer patients was used to investigate the prognostic significance of sphingosine 1-phosphate (S1P) receptors 1, 2, and 3 (ie, S1P1, S1P2, and S1P3), SK1, and ERK-1/2 expression levels. Expression levels of both SK1 and ERK-1/2 were already available for the cohort, and S1P1, S1P2, and S1P3 levels were established by immunohistochemical analysis. High membrane S1P1 expression was associated with shorter time to recurrence (P=0.008). High cytoplasmic S1P1 and S1P3 expression levels were also associated with shorter disease-specific survival times (P=0.036 and P=0.019, respectively). Those patients with tumors that expressed high levels of both cytoplasmic SK1 and ERK-1/2 had significantly shorter recurrence times than those that expressed low levels of cytoplasmic SK1 and cytoplasmic ERK-1/2 (P=0.00008), with a difference in recurrence time of 10.5 years. Similarly, high cytoplasmic S1P1 and cytoplasmic ERK-1/2 expression levels (P=0.004) and high cytoplasmic S1P3 expression and cytoplasmic ERK-1/2 expression levels (P=0.004) were associated with shorter recurrence times. 
These results support a model in which the interaction between SK1, S1P1, and/or S1P3 and ERK-1/2 might drive breast cancer progression, and these findings, therefore, warrant further investigation." }, { "pmid": "11150298", "title": "Sphingosine 1-phosphate-induced endothelial cell migration requires the expression of EDG-1 and EDG-3 receptors and Rho-dependent activation of alpha vbeta3- and beta1-containing integrins.", "abstract": "Sphingosine 1-phosphate (SPP), a platelet-derived bioactive lysophospholipid, is a regulator of angiogenesis. However, molecular mechanisms involved in SPP-induced angiogenic responses are not fully defined. Here we report the molecular mechanisms involved in SPP-induced human umbilical vein endothelial cell (HUVEC) adhesion and migration. SPP-induced HUVEC migration is potently inhibited by antisense phosphothioate oligonucleotides against EDG-1 as well as EDG-3 receptors. In addition, C3 exotoxin blocked SPP-induced cell attachment, spreading and migration on fibronectin-, vitronectin- and Matrigel-coated surfaces, suggesting that endothelial differentiation gene receptor signaling via the Rho pathway is critical for SPP-induced cell migration. Indeed, SPP induced Rho activation in an adherence-independent manner, whereas Rac activation was dispensible for cell attachment and focal contact formation. Interestingly, both EDG-1 and -3 receptors were required for Rho activation. Since integrins are critical for cell adhesion, migration, and angiogenesis, we examined the effects of blocking antibodies against alpha(v)beta(3), beta(1), or beta(3) integrins. SPP induced Rho-dependent integrin clustering into focal contact sites, which was essential for cell adhesion, spreading and migration. Blockage of alpha(v)beta(3)- or beta(1)-containing integrins inhibited SPP-induced HUVEC migration. Together our results suggest that endothelial differentiation gene receptor-mediated Rho signaling is required for the activation of integrin alpha(v)beta(3) as well as beta(1)-containing integrins, leading to the formation of initial focal contacts and endothelial cell migration." }, { "pmid": "16325811", "title": "Integrins and angiogenesis: a sticky business.", "abstract": "From an evolutionary point of view, the development of a cardiovascular system allowed vertebrates to nourish the several organs that compose their wider multicellular organism and to survive. Acquisition of new genes encoding for extracellular matrix (ECM) proteins and their cognate integrin receptors as well as secreted pro- and anti-angiogenic factors proved to be essential for the development of vascular networks in the vertebrate embryo. Postnatal tissue neo-vascularization plays a key role during wound healing and pathological angiogenesis as well. There is now clear evidence that building blood vessels in the embryo and in the adult organism relies upon different endothelial integrins and ECM ligands. A successful vascular development depends on fibronectin and its major receptor alpha5beta1 integrin, but not on alphavbeta3, alphavbeta5, and alpha6beta4 integrins that are instead central regulators of postnatal tumor angiogenesis. Here, endothelial alphavbeta3 elicits anti- or pro-angiogenic signals depending respectively on whether it is occupied by a soluble (e.g. type IV collagen derived tumstatin) or an insoluble (vitronectin) ECM ligand. 
The laminin-5 receptor alpha6beta4 integrin, expressed only by endothelial cells of mature blood vessels, controls the invasive phase of tumor angiogenesis in the adult organism. Finally, regulation of vascular morphogenesis relies upon the fine modulation of integrin activation by chemoattractant and chemorepulsive cues, such as angiogenic growth factors and semaphorins." }, { "pmid": "11937535", "title": "The potency of TCR signaling differentially regulates NFATc/p activity and early IL-4 transcription in naive CD4+ T cells.", "abstract": "The potency of TCR signaling can regulate the differentiation of naive CD4(+) T cells into Th1 and Th2 subsets. In this work we demonstrate that TCR signaling by low-affinity, but not high-affinity, peptide ligands selectively induces IL-4 transcription within 48 h of priming naive CD4(+) T cells. This early IL-4 transcription is STAT6 independent and occurs before an increase in GATA-3. Furthermore, the strength of the TCR signal differentially affects the balance of NFATp and NFATc DNA binding activity, thereby regulating IL-4 transcription. Low-potency TCR signals result in high levels of nuclear NFATc and low levels of NFATp, which are permissive for IL-4 transcription. These data provide a model for how the strength of TCR signaling can influence the generation of Th1 and Th2 cells." }, { "pmid": "8097338", "title": "Development of TH1 CD4+ T cells through IL-12 produced by Listeria-induced macrophages.", "abstract": "Development of the appropriate CD4+ T helper (TH) subset during an immune response is important for disease resolution. With the use of naïve, ovalbumin-specific alpha beta T cell receptor transgenic T cell, it was found that heat-killed Listeria monocytogenes induced TH1 development in vitro through macrophage production of interleukin-12 (IL-12). Moreover, inhibition of macrophage production of IL-12 may explain the ability of IL-10 to suppress TH1 development. Murine immune responses to L. monocytogenes in vivo are of the appropriate TH1 phenotype. Therefore, this regulatory pathway may have evolved to enable innate immune cells, through interactions with microbial pathogens, to direct development of specific immunity toward the appropriate TH phenotype." }, { "pmid": "27280403", "title": "Exposure of Human CD4 T Cells to IL-12 Results in Enhanced TCR-Induced Cytokine Production, Altered TCR Signaling, and Increased Oxidative Metabolism.", "abstract": "Human CD4 T cells are constantly exposed to IL-12 during infections and certain autoimmune disorders. The current paradigm is that IL-12 promotes the differentiation of naïve CD4 T cells into Th1 cells, but recent studies suggest IL-12 may play a more complex role in T cell biology. We examined if exposure to IL-12 alters human CD4 T cell responses to subsequent TCR stimulation. We found that IL-12 pretreatment increased TCR-induced IFN-γ, TNF-α, IL-13, IL-4 and IL-10 production. This suggests that prior exposure to IL-12 potentiates the TCR-induced release of a range of cytokines. We observed that IL-12 mediated its effects through both transcriptional and post-transcriptional mechanisms. IL-12 pretreatment increased the phosphorylation of AKT, p38 and LCK following TCR stimulation without altering other TCR signaling molecules, potentially mediating the increase in transcription of cytokines. In addition, the IL-12-mediated enhancement of cytokines that are not transcriptionally regulated was partially driven by increased oxidative metabolism. 
Our data uncover a novel function of IL-12 in human CD4 T cells; specifically, it enhances the release of a range of cytokines potentially by altering TCR signaling pathways and by enhancing oxidative metabolism." }, { "pmid": "22685333", "title": "ATF2 - at the crossroad of nuclear and cytosolic functions.", "abstract": "An increasing number of transcription factors have been shown to elicit oncogenic and tumor suppressor activities, depending on the tissue and cell context. Activating transcription factor 2 (ATF2; also known as cAMP-dependent transcription factor ATF-2) has oncogenic activities in melanoma and tumor suppressor activities in non-malignant skin tumors and breast cancer. Recent work has shown that the opposing functions of ATF2 are associated with its subcellular localization. In the nucleus, ATF2 contributes to global transcription and the DNA damage response, in addition to specific transcriptional activities that are related to cell development, proliferation and death. ATF2 can also translocate to the cytosol, primarily following exposure to severe genotoxic stress, where it impairs mitochondrial membrane potential and promotes mitochondrial-based cell death. Notably, phosphorylation of ATF2 by the epsilon isoform of protein kinase C (PKCε) is the master switch that controls its subcellular localization and function. Here, we summarize our current understanding of the regulation and function of ATF2 in both subcellular compartments. This mechanism of control of a non-genetically modified transcription factor represents a novel paradigm for 'oncogene addiction'." }, { "pmid": "20725108", "title": "NFAT, immunity and cancer: a transcription factor comes of age.", "abstract": "Nuclear factor of activated T cells (NFAT) was first identified more than two decades ago as a major stimulation-responsive DNA-binding factor and transcriptional regulator in T cells. It is now clear that NFAT proteins have important functions in other cells of the immune system and regulate numerous developmental programmes in vertebrates. Dysregulation of these programmes can lead to malignant growth and cancer. This Review focuses on recent advances in our understanding of the transcriptional functions of NFAT proteins in the immune system and provides new insights into their potential roles in cancer development." } ]
Royal Society Open Science
30110420
PMC6030263
10.1098/rsos.180410
Committing to quantum resistance: a slow defence for Bitcoin against a fast quantum computing attack
Quantum computers are expected to have a dramatic impact on numerous fields due to their anticipated ability to solve classes of mathematical problems much more efficiently than their classical counterparts. This particularly applies to domains involving integer factorization and discrete logarithms, such as public key cryptography. In this paper, we consider the threats a quantum-capable adversary could impose on Bitcoin, which currently uses the Elliptic Curve Digital Signature Algorithm (ECDSA) to sign transactions. We then propose a simple but slow commit–delay–reveal protocol, which allows users to securely move their funds from old (non-quantum-resistant) outputs to those adhering to a quantum-resistant digital signature scheme. The transition protocol functions even if ECDSA has already been compromised. While our scheme requires modifications to the Bitcoin protocol, these can be implemented as a soft fork.
6.Related workThe possibility of QCs emerging in the near future is increasingly appreciated by members of the Bitcoin community and hence a number of approaches to make Bitcoin resilient against QCAs have recently been discussed.As discussed briefly in §4, a first step towards maintaining Bitcoin’s security properties in a post-quantum world is seen in replacing ECDSA with a signature scheme believed to be quantum resistant, which can be implemented on classical computers [56–58]. Other proposals rely on quantum hardware to exploit quantum effects to guarantee the security of the cryptocurrency against QCAs [59–61]. An alternative research direction focuses on identifying alternatives to PoW, as a countermeasure to possible unfair advantages in mining through Grover’s algorithm [62,63].However, the aforementioned works do not consider the issue our paper is most interested in, i.e. transitioning to post-quantum Bitcoin in the presence of an already-fast QCA. While our methods were developed independently, we provide an overview of relevant discussions, papers and articles we have become aware of, which attempt to solve this problem.A first brief public mention of a scheme transitioning Bitcoin to quantum resistance is made by Back, referring to an informal proposal by Lau [47,64–66]. The discussed approach relies on a two-phase commit mechanism, similar to the one described in this paper, i.e. users commit H(pk,pkQR) in the OP_RETURN field of a transaction and, after waiting for confirmations, create a transaction which reveals (pk,pkQR). However, the scheme is not discussed in detail and the requirements for the security delay tsec between the commit and reveal phases, necessary to mitigate transaction reordering attacks by QCAs potentially benefiting from Grover’s algorithm, are left open.An alternative scheme described by Ruffing in the Bitcoin-dev mailing list [48–50] requires users to create the transaction spending the non-quantum-resistant UTXOs in advance, as it must be part of the commitment. The transaction is thereby encrypted with a symmetric key k derived from the challenge chal used to generate the address associated with the transitioned UTXOs. Once the commitment transaction is confirmed, the user finalizes the transition by publishing chal, which allows the network to derive k and decrypt the spending transaction. However, similar to Back’s proposal, the scheme does not discuss the duration of tsec. Specifically, while it appears feasible for a user to predict the target of the spending transaction in case tsec is equal to a few blocks, this does not necessarily hold if circumstances require longer delays.Fawkescoin [67] is a cryptocurrency which relies only on secure hash functions, avoiding the use of asymmetric cryptography. While not aiming at transitioning to a quantum-resistant signature scheme, it introduces a commit–reveal scheme to move funds from an owner (OX) of secret X to the owner (OY) of a secret Y. Thereby, OY sends a hash H(Y) of her secret to OX, who proceeds to include a hash commitment H(X,H(Y)) in the underlying blockchain, thus guaranteeing to send the funds linked to X to whoever can provide the secret Y. After a pre-defined confirmation period, OX publishes the input to the hash commitment (X,H(Y)), revealing X to the network. Consequently, (only) OY can now spend the funds linked to X, using her secret Y.
Note that in Fawkescoin users must know the destination of the transfer at the time of commitment, while our scheme is flexible and imposes no such requirement by construction.
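To make the commit–reveal pattern running through these proposals concrete, the following Python sketch shows the bare hash-commitment mechanics: a user binds an old (non-quantum-resistant) public key to a new quantum-resistant one by publishing only a hash, and later reveals the pair so the network can verify it against the earlier commitment. This is only an illustration of the idea rather than the Bitcoin-level protocol: OP_RETURN encoding, confirmation depth and the choice of the security delay tsec are out of scope, and the random salt used for hiding is an extra assumption, not part of the proposals cited above.

```python
import hashlib
import os

def commit(pk: bytes, pk_qr: bytes) -> tuple:
    """Commit phase: derive H(salt || pk || pk_qr); only this digest would be
    published on-chain (e.g. inside an OP_RETURN output). The random salt keeps
    the committed keys hidden until the reveal phase."""
    salt = os.urandom(32)
    digest = hashlib.sha256(salt + pk + pk_qr).digest()
    return digest, salt

def reveal_and_verify(commitment: bytes, salt: bytes, pk: bytes, pk_qr: bytes) -> bool:
    """Reveal phase: the network recomputes the hash from the revealed values and
    checks it against the commitment recorded earlier."""
    return hashlib.sha256(salt + pk + pk_qr).digest() == commitment

# Toy usage with placeholder byte strings standing in for real key material.
pk = b"old-ecdsa-public-key"
pk_qr = b"new-quantum-resistant-public-key"
digest, salt = commit(pk, pk_qr)
# ... the user would wait at least the security delay (t_sec blocks) here ...
assert reveal_and_verify(digest, salt, pk, pk_qr)
```

In the Back and Lau sketch the commitment is simply H(pk,pkQR); in Ruffing's variant, publishing the challenge chal additionally lets the network derive the key and decrypt the pre-committed spending transaction.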
[ "29443962", "27488798", "26436453" ]
[ { "pmid": "29443962", "title": "A programmable two-qubit quantum processor in silicon.", "abstract": "Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch-Josza algorithm and the Grover search algorithm-canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85-89 per cent and concurrences of 73-82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots." }, { "pmid": "27488798", "title": "Demonstration of a small programmable quantum computer with atomic qubits.", "abstract": "Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels." }, { "pmid": "26436453", "title": "A two-qubit logic gate in silicon.", "abstract": "Quantum computation requires qubits that can be coupled in a scalable manner, together with universal and high-fidelity one- and two-qubit logic gates. 
Many physical realizations of qubits exist, including single photons, trapped ions, superconducting circuits, single defects or atoms in diamond and silicon, and semiconductor quantum dots, with single-qubit fidelities that exceed the stringent thresholds required for fault-tolerant quantum computing. Despite this, high-fidelity two-qubit gates in the solid state that can be manufactured using standard lithographic techniques have so far been limited to superconducting qubits, owing to the difficulties of coupling qubits and dephasing in semiconductor systems. Here we present a two-qubit logic gate, which uses single spins in isotopically enriched silicon and is realized by performing single- and two-qubit operations in a quantum dot system using the exchange interaction, as envisaged in the Loss-DiVincenzo proposal. We realize CNOT gates via controlled-phase operations combined with single-qubit operations. Direct gate-voltage control provides single-qubit addressability, together with a switchable exchange interaction that is used in the two-qubit controlled-phase gate. By independently reading out both qubits, we measure clear anticorrelations in the two-spin probabilities of the CNOT gate." } ]
Royal Society Open Science
30110428
PMC6030337
10.1098/rsos.180329
Individual performance in team-based online games
Complex real-world challenges are often solved through teamwork. Of special interest are ad hoc teams assembled to complete some task. Many popular multiplayer online battle arena (MOBA) video-games adopt this team formation strategy and thus provide a natural environment to study ad hoc teams. Our work examines data from a popular MOBA game, League of Legends, to understand the evolution of individual performance within ad hoc teams. Our analysis of player performance in successive matches of a gaming session demonstrates that a player’s success deteriorates over the course of the session, but this effect is mitigated by the player’s experience. We also find no significant long-term improvement in the individual performance of most players. Modelling the short-term performance dynamics allows us to accurately predict when players choose to continue to play or end the session. Our findings suggest possible directions for individualized incentives aimed at steering the player’s behaviour and improving team performance.
4.Related work4.1.Individual and team performance in gamesVarious recent studies explored human performance and activity in online games. Several authors investigated aspects of team performance [2,4,5,16], as well as individual performance [17–21] in multiplayer team-based games. In Mathieu et al. [22], an extensive review about team effectiveness is provided. Here, the authors analyse different aspects of teamwork, such as team outcomes (team performance, members’ affect and viability), mediator–team outcome relationships and team composition.Other aspects of social and group phenomena in virtual environments were covered in the review by Sivunen & Hakonen [23]. In this work, the authors identified four major topics related to virtual environment studies: testing that laws of social behaviours in real-life also apply in virtual environments, finding social behaviour norms, focusing on micro-level social phenomena, and filling the gap in well-established theoretical discussions and paradigms within social science.The ‘optimal’ composition of temporary teams also attracted a lot of research: Kim et al. [4,5] studied LoL to determine how team composition affects team performance. Using mixed-methods approaches, the authors studied in-game role proficiency, generality and congruency to determine the influence of these constructs on team performance. Proficiency in tacit cooperation and verbal communication highly correlate with team victories, and learning ability and speed of skill acquisition differentiate novice from elite players. The importance of communication and its effects on team performance has been extensively studied by Leavitt and collaborators [2] once again in LoL: the authors studied both explicit and implicit (non-verbal, i.e. pings) communication, highlighting differences based on player styles, and different extents of effectiveness in individual performance increase.Finally, the topic of individual performance in online games has been studied in different platforms. Shen et al. [24] suggested in their paper that gender-based performance disparities do not exist in massive multiplayer online games (MMO). In their work, the authors operationalized game performance as a function of character advancement and voluntary play time, based on Steinkuehler & Duncan [25] and show how character levels correlate with other types of performance metrics.Other works looking at individual performance analyse first-person shooter games: Microsoft researchers studied the performance trajectories of Halo players, as well as the effect that taking prolonged breaks from playing has on their skills [17]. Analysing individual game performance allowed them to categorize players in groups exhibiting different trajectories, and then study how other variables (demographics, in-game activity, etc.) relate to game performance. This analysis reveals the most common performance patterns associated with first-person online games, and it allows to model skill progression and learning mechanisms. Finally, Vicencio-Moreira et al. [18] studied individual performance as a tool to balance game design and game-play: the authors defined several statistical models of player performance and associated them to multiple dimensions of game proficiency, demonstrating a concept of an algorithm aimed at balancing individual skills by providing different levels of assistance (e.g. aim assistance, character-level assistance, etc.) 
to make the game-play experience more balanced and satisfactory by matching players of different skill levels.To the best of our knowledge, ours is the first study to focus on individual performance within temporary teams, to analyse the effect of performance deterioration over the short term, and to determine its interplay with engagement.4.2.Team-based online games and engagementVideo-games represent a natural setting to study human behaviour. Prior to this study, several works have been devoted to analysing the behaviour and activity of players in multiplayer games. In particular, behavioural dynamics of team-based online games have been extensively studied in role-playing games like World of Warcraft [26,27], in battle arena games like League of Legends [1,19,28] and in other games [21,29,30].The earlier studies focused on massively multiplayer online games like World of Warcraft, which exhibit both a strong component of individual game-play (e.g. solo quests aimed at increasing one’s character level and skills) as well as collaborative instances (e.g. raid bosses). First Nardi & Harris [26], and Bardzell and collaborators shortly after [27], analysed the five-person raid-boss instance runs to determine the ingredients of successful cooperative game-play. By means of a mixture of survey-based and data-driven analysis, the authors illustrated how the social component (i.e. chatting with teammates, and guild-based activity) was the leading factor to satisfaction and engagement.Later studies focused on MOBAs: Kuo et al. [1,28] investigated engagement mechanisms on LoL by means of semi-structured interviews with players, aimed to unveil the elements behind successful team composition in temporary teams. Communication (written and oral) and effective collaboration strategies were linked to satisfactory game experience. Similar results hold for other MOBAs [29,30]. Concluding, a recent study investigated the relation between brain activity and game-play experience in multiplayer games: playing with human teammates yields higher levels of satisfaction but lower overall performance and coordination than playing with computer-controlled teammates [31].Despite the fact that our work does not focus on the analysis of engagement in team-based online games, the results we found could be leveraged to design incentives to increase players’ engagement over time and used to prevent players from quitting the game.4.3.Performance deteriorationPerformance deterioration following a period of sustained engagement has been demonstrated in a variety of contexts, such as student performance [32], driving [33], data entry [34], self-control [35] and, more recently, online activity [7,6]. In particular, in vigilance tasks—i.e. tasks which require monitoring visual displays or auditory systems for infrequent signals—performance was shown to decrease over time, with concomitant increases in perceived mental effort [36]. For example, after long periods in flight simulators, pilots are more easily distracted by non-critical signals and less able to detect critical signals [37].Factors leading to a deteriorating performance are still debated [38–40]. However, deterioration has been shown to be associated with physiological brain changes [41–43], suggesting a cognitive origin, whether due to mental fatigue, boredom or strategic choices to limit attention. 
In particular, mental fatigue refers to the effects that people experience following and during the course of prolonged periods of demanding cognitive activity, requiring sustained mental efficiency [41]. Persistent mental fatigue has been shown to lead to burnout at work, lower motivation, increased distractibility and poor information processing [41,44–50].Moreover, mental fatigue is detrimental to individuals’ judgements and decisions, including those of experts—e.g. judges are more likely to deny a prisoner’s request as they advance through the sequence of cases without breaks on a given day [51], and evidence for the same type of cognitive fatigue has been documented in consumers making choices among different alternatives [52] and physicians prescribing unnecessary antibiotics [53]. Recent studies indicate that cognitive fatigue destabilizes economic decision-making, resulting in inconsistent preferences and informational strategies that may significantly reduce decision quality [54].Short-term deterioration of individual performance was previously observed in other online platforms. It has been shown that the quality of comments posted by users on Reddit social platform [6], the answers provided on StackExchange question-answering forums [55], and the messages written on Twitter [7] decline over the course of an activity session. In all previously studied platforms, users worked individually to produce content or achieve some results, while in the present work, we considered both measures for individual performance (i.e. KDA) and the performance achieved by the team (i.e. win rate). We can interpret the KDA ratio of a player as the quality of his/her playing style during a match, and this can be compared to the results previously achieved in other types of platforms.
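To give a concrete, minimal sense of the within-session performance signal discussed above, the sketch below computes a KDA ratio per match and fits a least-squares slope of KDA against the match index within a single session; a negative slope is the pattern one would associate with short-term performance deterioration. The KDA formula (kills plus assists divided by deaths, with deaths floored at 1) is a common convention rather than necessarily the paper's exact definition, and the data are invented for illustration; the study itself relies on much richer modelling over real match histories.

```python
from statistics import mean

def kda(kills: int, deaths: int, assists: int) -> float:
    """Kills-deaths-assists ratio; deaths are floored at 1 to avoid division by zero."""
    return (kills + assists) / max(1, deaths)

def session_trend(matches):
    """Least-squares slope of KDA over the match index within one session.
    A negative slope indicates declining performance as the session progresses."""
    xs = list(range(1, len(matches) + 1))
    ys = [kda(*m) for m in matches]
    x_bar, y_bar = mean(xs), mean(ys)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

# One hypothetical session of five consecutive matches: (kills, deaths, assists).
session = [(7, 3, 10), (6, 4, 9), (5, 5, 8), (4, 6, 7), (3, 7, 6)]
print(round(session_trend(session), 3))  # negative value: KDA declines across the session
```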
[ "27560185", "26884183", "23116991", "15462620", "10748642", "18652844", "24304775", "19925871", "17999934", "11419809", "12679043", "16288951", "21482790", "25286067", "26230404" ]
[ { "pmid": "27560185", "title": "Evidence of Online Performance Deterioration in User Sessions on Reddit.", "abstract": "This article presents evidence of performance deterioration in online user sessions quantified by studying a massive dataset containing over 55 million comments posted on Reddit in April 2015. After segmenting the sessions (i.e., periods of activity without a prolonged break) depending on their intensity (i.e., how many posts users produced during sessions), we observe a general decrease in the quality of comments produced by users over the course of sessions. We propose mixed-effects models that capture the impact of session intensity on comments, including their length, quality, and the responses they generate from the community. Our findings suggest performance deterioration: Sessions of increasing intensity are associated with the production of shorter, progressively less complex comments, which receive declining quality scores (as rated by other users), and are less and less engaging (i.e., they attract fewer responses). Our contribution evokes a connection between cognitive and attention dynamics and the usage of online social peer production platforms, specifically the effects of deterioration of user performance." }, { "pmid": "26884183", "title": "Cognitive fatigue influences students' performance on standardized tests.", "abstract": "Using test data for all children attending Danish public schools between school years 2009/10 and 2012/13, we examine how the time of the test affects performance. Test time is determined by the weekly class schedule and computer availability at the school. We find that, for every hour later in the day, test performance decreases by 0.9% of an SD (95% CI, 0.7-1.0%). However, a 20- to 30-minute break improves average test performance by 1.7% of an SD (95% CI, 1.2-2.2%). These findings have two important policy implications: First, cognitive fatigue should be taken into consideration when deciding on the length of the school day and the frequency and duration of breaks throughout the day. Second, school accountability systems should control for the influence of external factors on test scores." }, { "pmid": "23116991", "title": "Measuring neurophysiological signals in aircraft pilots and car drivers for the assessment of mental workload, fatigue and drowsiness.", "abstract": "This paper reviews published papers related to neurophysiological measurements (electroencephalography: EEG, electrooculography EOG; heart rate: HR) in pilots/drivers during their driving tasks. The aim is to summarise the main neurophysiological findings related to the measurements of pilot/driver's brain activity during drive performance and how particular aspects of this brain activity could be connected with the important concepts of \"mental workload\", \"mental fatigue\" or \"situational awareness\". Review of the literature suggests that exists a coherent sequence of changes for EEG, EOG and HR variables during the transition from normal drive, high mental workload and eventually mental fatigue and drowsiness. In particular, increased EEG power in theta band and a decrease in alpha band occurred in high mental workload. Successively, increased EEG power in theta as well as delta and alpha bands characterise the transition between mental workload and mental fatigue. Drowsiness is also characterised by increased blink rate and decreased HR values. The detection of such mental states is actually performed \"offline\" with accuracy around 90% but not online. 
A discussion on the possible future applications of findings provided by these neurophysiological measurements in order to improve the safety of the vehicles will be also presented." }, { "pmid": "15462620", "title": "Effects of prolonged work on data entry speed and accuracy.", "abstract": "In 2 experiments, participants used a keyboard to enter 4-digit numbers presented on a computer monitor under conditions promoting fatigue. In Experiment 1, accuracy of data entry declined but response times improved over time, reflecting an increasing speed-accuracy trade-off. In Experiment 2, the (largely cognitive) time to enter the initial digit decreased in the 1st half but increased in the 2nd half of the session. Accuracy and time to enter the remaining digits decreased across though not within session halves. The (largely motoric) time to press a concluding keystroke decreased over the session. Thus, through a combination of facilitation and inhibition, prolonged work affects the component cognitive and motoric processes of data entry differentially and at different points in practice." }, { "pmid": "10748642", "title": "Self-regulation and depletion of limited resources: does self-control resemble a muscle?", "abstract": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource." }, { "pmid": "18652844", "title": "Mental fatigue: costs and benefits.", "abstract": "A framework for mental fatigue is proposed, that involves an integrated evaluation of both expected rewards and energetical costs associated with continued performance. Adequate evaluation of predicted rewards and potential risks of actions is essential for successful adaptive behaviour. However, while both rewards and punishments can motivate to engage in activities, both types of motivated behaviour are associated with energetical costs. We will review findings that suggest that the nucleus accumbens, orbitofrontal cortex, amygdala, insula and anterior cingulate cortex are involved evaluating both the potential rewards associated with performing a task, as well as assessing the energetical demands involved in task performance. Behaviour will only proceed if this evaluation turns out favourably towards spending (additional) energy. We propose that this evaluation of predicted rewards and energetical costs is central to the phenomenon of mental fatigue: people will no longer be motivated to engage in task performance when energetical costs are perceived to outweigh predicted rewards." 
}, { "pmid": "24304775", "title": "An opportunity cost model of subjective effort and task performance.", "abstract": "Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternative explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost--that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternative explanations for both the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across sub-disciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternative models might be empirically distinguished." }, { "pmid": "19925871", "title": "Imaging brain fatigue from sustained mental workload: an ASL perfusion study of the time-on-task effect.", "abstract": "During sustained periods of a taxing cognitive workload, humans typically display time-on-task (TOT) effects, in which performance gets steadily worse over the period of task engagement. Arterial spin labeling (ASL) perfusion functional magnetic resonance imaging (fMRI) was used in this study to investigate the neural correlates of TOT effects in a group of 15 subjects as they performed a 20-min continuous psychomotor vigilance test (PVT). Subjects displayed significant TOT effects, as seen in progressively slower reaction times and significantly increased mental fatigue ratings after the task. Perfusion data showed that the PVT activates a right lateralized fronto-parietal attentional network in addition to the basal ganglia and sensorimotor cortices. The fronto-parietal network was less active during post-task rest compared to pre-task rest, and regional CBF decrease in this network correlated with performance decline. These results demonstrate the persistent effects of cognitive fatigue in the fronto-parietal network after a period of heavy mental work and indicate the critical role of this attentional network in mediating TOT effects. Furthermore, resting regional CBF in the thalamus and right middle frontal gyrus prior to task onset was predictive of subjects' subsequent performance decline, suggesting that resting CBF quantified by ASL perfusion fMRI may be a useful indicator of performance potential and a marker of the level of fatigue in the neural attentional system." 
}, { "pmid": "17999934", "title": "Psychophysiological investigation of vigilance decrement: boredom or cognitive fatigue?", "abstract": "The vigilance decrement has been described as a slowing in reaction times or an increase in error rates as an effect of time-on-task during tedious monitoring tasks. This decrement has been alternatively ascribed to either withdrawal of the supervisory attentional system, due to underarousal caused by the insufficient workload, or to a decreased attentional capacity and thus the impossibility to sustain mental effort. Furthermore, it has previously been reported that controlled processing is the locus of the vigilance decrement. This study aimed at answering three questions, to better define sustained attention. First, is endogenous attention more vulnerable to time-on-task than exogenous attention? Second, do measures of autonomic arousal provide evidence to support the underload vs overload hypothesis? And third, do these measures show a different effect for endogenous and exogenous attention? We applied a cued (valid vs invalid) conjunction search task, and ECG and respiration recordings were used to compute sympathetic (normalized low frequency power) and parasympathetic tone (respiratory sinus arrhythmia, RSA). Behavioural results showed a dual effect of time-on-task: the usually described vigilance decrement, expressed as increased reaction times (RTs) after 30 min for both conditions; and a higher cost in RTs after invalid cues for the endogenous condition only, appearing after 60 min. Physiological results clearly support the underload hypothesis to subtend the vigilance decrement, since heart period and RSA increased over time-on-task. There was no physiological difference between the endogenous and exogenous conditions. Subjective experience of participants was more compatible with boredom than with high mental effort." }, { "pmid": "11419809", "title": "The job demands-resources model of burnout.", "abstract": "The job demands-resources (JD-R) model proposes that working conditions can be categorized into 2 broad categories, job demands and job resources. that are differentially related to specific outcomes. A series of LISREL analyses using self-reports as well as observer ratings of the working conditions provided strong evidence for the JD-R model: Job demands are primarily related to the exhaustion component of burnout, whereas (lack of) job resources are primarily related to disengagement. Highly similar patterns were observed in each of 3 occupational groups: human services, industry, and transport (total N = 374). In addition, results confirmed the 2-factor structure (exhaustion and disengagement) of a new burnout instrument--the Oldenburg Burnout Inventory--and suggested that this structure is essentially invariant across occupational groups." }, { "pmid": "12679043", "title": "Mental fatigue and the control of cognitive processes: effects on perseveration and planning.", "abstract": "We tested whether behavioural manifestations of mental fatigue may be linked to compromised executive control, which refers to the ability to regulate perceptual and motor processes for goal-directed behaviour. In complex tasks, compromised executive control may become manifest as decreased flexibility and sub-optimal planning. In the study we use the Wisconsin Card Sorting Test (WCST) and the Tower of London (TOL), which respectively measure flexibility (e.g., perseverative errors) and planning. A simple memory task was used as a control measure. 
Fatigue was induced through working for 2 h on cognitively demanding tasks. The results showed that compared to a non-fatigued group, fatigued participants displayed more perseveration on the WCST and showed prolonged planning time on the TOL. Fatigue did not affect performance on the simple memory task. These findings indicate compromised executive control under fatigue, which may explain the typical errors and sub-optimal performance that are often found in fatigued people." }, { "pmid": "16288951", "title": "Mental fatigue, motivation and action monitoring.", "abstract": "In this study we examined whether the effects of mental fatigue on behaviour are due to reduced action monitoring as indexed by the error related negativity (Ne/ERN), N2 and contingent negative variation (CNV) event-related potential (ERP) components. Therefore, we had subjects perform a task, which required a high degree of action monitoring, continuously for 2h. In addition we tried to relate the observed behavioural and electrophysiological changes to motivational processes and individual differences. Changes in task performance due to fatigue were accompanied by a decrease in Ne/ERN and N2 amplitude, reflecting impaired action monitoring, as well as a decrease in CNV amplitude which reflects reduced response preparation with increasing fatigue. Increasing the motivational level of our subjects resulted in changes in behaviour and brain activity that were different for individual subjects. Subjects that increased their performance accuracy displayed an increase in Ne/ERN amplitude, while subjects that increased their response speed displayed an increase in CNV amplitude. We will discuss the effects prolonged task performance on the behavioural and physiological indices of action monitoring, as well as the relationship between fatigue, motivation and individual differences." }, { "pmid": "21482790", "title": "Extraneous factors in judicial decisions.", "abstract": "Are judicial rulings based solely on laws and facts? Legal formalism holds that judges apply legal reasons to the facts of a case in a rational, mechanical, and deliberative manner. In contrast, legal realists argue that the rational application of legal reasons does not sufficiently explain the decisions of judges and that psychological, political, and social factors influence judicial rulings. We test the common caricature of realism that justice is \"what the judge ate for breakfast\" in sequential parole decisions made by experienced judges. We record the judges' two daily food breaks, which result in segmenting the deliberations of the day into three distinct \"decision sessions.\" We find that the percentage of favorable rulings drops gradually from ≈ 65% to nearly zero within each decision session and returns abruptly to ≈ 65% after a break. Our findings suggest that judicial rulings can be swayed by extraneous variables that should have no bearing on legal decisions." }, { "pmid": "26230404", "title": "Cognitive Fatigue Destabilizes Economic Decision Making Preferences and Strategies.", "abstract": "OBJECTIVE\nIt is common for individuals to engage in taxing cognitive activity for prolonged periods of time, resulting in cognitive fatigue that has the potential to produce significant effects in behaviour and decision making. 
We sought to examine whether cognitive fatigue modulates economic decision making.\n\n\nMETHODS\nWe employed a between-subject manipulation design, inducing fatigue through 60 to 90 minutes of taxing cognitive engagement against a control group that watched relaxing videos for a matched period of time. Both before and after the manipulation, participants engaged in two economic decision making tasks (one for gains and one for losses). The analyses focused on two areas of economic decision making--preferences and choice strategies. Uncertainty preferences (risk and ambiguity) were quantified as premium values, defined as the degree and direction in which participants alter the valuation of the gamble in comparison to the certain option. The strategies that each participant engaged in were quantified through a choice strategy metric, which contrasts the degree to which choice behaviour relies upon available satisficing or maximizing information. We separately examined these metrics for alterations within both the gains and losses domains, through the two choice tasks.\n\n\nRESULTS\nThe fatigue manipulation resulted in significantly greater levels of reported subjective fatigue, with correspondingly higher levels of reported effort during the cognitively taxing activity. Cognitive fatigue did not alter uncertainty preferences (risk or ambiguity) or informational strategies, in either the gains or losses domains. Rather, cognitive fatigue resulted in greater test-retest variability across most of our economic measures. These results indicate that cognitive fatigue destabilizes economic decision making, resulting in inconsistent preferences and informational strategies that may significantly reduce decision quality." } ]
Enterprise Information Systems
30034513
PMC6036375
10.1080/17517575.2017.1390166
A template-based approach for responsibility management in executable business processes
ABSTRACTProcess-oriented organisations need to manage the different types of responsibilities their employees may have w.r.t. the activities involved in their business processes. Despite several approaches provide support for responsibility modelling, in current Business Process Management Systems (BPMS) the only responsibility considered at runtime is the one related to performing the work required for activity completion. Others like accountability or consultation must be implemented by manually adding activities in the executable process model, which is time-consuming and error-prone. In this paper, we address this limitation by enabling current BPMS to execute processes in which people with different responsibilities interact to complete the activities. We introduce a metamodel based on Responsibility Assignment Matrices (RAM) to model the responsibility assignment for each activity, and a flexible template-based mechanism that automatically transforms such information into BPMN elements, which can be interpreted and executed by a BPMS. Thus, our approach does not enforce any specific behaviour for the different responsibilities but new templates can be modelled to specify the interaction that best suits the activity requirements. Furthermore, libraries of templates can be created and reused in different processes. We provide a reference implementation and build a library of templates for a well-known set of responsibilities.
3.Related workResponsibility management in business processes is a part of resource management in business processes, which involves the assignment of resources to process activities at design time as potential participants and the allocation of resources to activities at run time as actual participants.Resource assignment languages (van der Aalst and ter Hofstede 2005; Cabanillas et al. 2015b; Bertino, Ferrari, and Atluri 1999; Strembeck and Mendling 2011; Casati et al. 1996; Scheer 2000; Du et al. 1999; Tan, Crampton, and Gunter 2004; Cabanillas et al. 2015a; Wolter and Schaad 2007; Awad et al. 2009; Stroppi, Chiotti, and Villarreal 2011) serve the former purpose by enabling the definition of the conditions that the members of an organisation must meet in order to be allowed to participate in the activities of the processes executed in it, e.g., to belong to a specific department or to have certain skills. The outcome is a resource-aware process model. The set of conditions that can be defined depicts the expressiveness of the language and is usually evaluated with a subset of the well-known workflow resource patterns (Russell et al. 2005), namely, the creation patterns, which include, among others: Direct, Organisational, Role-Based, and Capability-Based Distribution, or the ability to specify the identity, position, role or capabilities of the resource that will take part in a task, respectively; Separation of Duties (SoD), or the ability to specify that two tasks must be allocated to different resources in a given process instance; and Retain Familiar (also known as Binding of Duties (BoD)), or the ability to allocate an activity instance within a given process instance to the same resource that performed a preceding activity instance. A comparison of resource assignment languages can be found in Cabanillas et al. (2015b).Resource allocation techniques aim at distributing actual work to appropriate resources so that process instances are completed properly, e.g., in terms of high quality and low time and cost (Havur et al. 2015). All process engines must be provided with some resource allocation mechanism(s) in order to automate process execution.Traditional resource management in business processes considers that a process activity requires the workforce of one single resource who is in charge of the activity from the beginning to the end of its execution. However, common scenarios like the one described in Section 2 show the importance of other types of responsibilities, which tend to be disregarded by existing resource management approaches. In the following, we review the current state of the art on responsibility management in business processes, which is the problem addressed in this paper, and then report on approaches for process modelling based on templates, which relates to our solution.3.1.Responsibility management in business processesIn this section, we first introduce a generic responsibility management mechanism that is independent of process modelling notations or BPMS.
Afterwards, we explore the related work for responsibility management in business processes in three groups: (i) the support provided by existing process modelling notations, (ii) the support provided by current modelling software tools and BPMS, and (iii) research proposals developed to bridge existing gaps.3.1.1.Responsibility assignment matrices (RAMs)A Responsibility Assignment Matrix (RAM) provides a way to plan, organise and coordinate work that consists of assigning different degrees of responsibility to the members of an organisation for each activity undertaken in it (Website 2016). RAMs were defined independently of Business Process Management (BPM) and thus, they are suitable for both process- and non process-oriented organisations. In the context of RAMs, the different responsibilities that may be assigned to an activity are usually called roles or task duties (Cabanillas et al. 2015b).RAMs are becoming a recommendation for the representation of the distribution of work in organisations. As a matter of fact, a specific type of RAMs called RACI (ARIS 2012) is a component of Six Sigma, 3 a methodology to improve the service or product that a company offers to its customers. There are also ongoing efforts to map RACI to the LEAN and CMMI for Services (CMMI-SVC) frameworks (Nuzen and Dayton 2011). The former defines a set of principles for continuous process improvement. The latter provides guidance for applying Capability Maturity Model Integration (CMMI) best practices in a service provider organisation. Similarly, the Information Technology Infrastructure Library (ITIL) framework defines the ITIL RACI matrices 4 as the way to illustrate the participation of the ITIL roles in the ITIL processes. ITIL is the worldwide de-facto standard for service management. Specifically, it uses a modality of RAMs called RASCI (Website 2014), which relies on the following five responsibilities: Responsible (R): person who must perform the work, responsible for the activity until the work is finished and approved by the person accountable for the activity. There is typically only one person responsible for an activity. Accountable – also Approver or Final Approving Authority – (A): person who must approve the work performed by the person responsible for an activity, and who becomes responsible for it after approval. There is generally one person accountable for each activity. Support (S): person who may assist in completing an activity by actively contributing in its execution, i.e., the person in charge can delegate work to her. In general, there may be several people assigned to this responsibility for an activity instance. Consulted – sometimes Counsel – (C): person whose opinion is sought while performing the work, and with whom there is two-way communication. She helps to complete the activity in a passive way. In general, there may be several people assigned to this responsibility for an activity instance. Informed (I): person who is kept up-to-date about the progress of an activity and/or the results of the work, and with whom there is just one-way communication. In general, there may be more than one person informed about an activity. Table 2 illustrates an example of a RAM for the scenario described in Section 2, specifically a RASCI matrix. 
The rows represent the process activities, the columns of the matrix are organisational roles, 5 and each cell contains zero or more RASCI initials indicating the responsibility of that role on that activity.
Table 2. RASCI matrix for the process at pool ISA Research Group.
Activity                | Project Coordinator | Account Admin. | WP Leader | Researcher | Clerk
Prepare Authorisation   | A                   | I              | I         | R          | C
Send Authorisation      |                     |                |           | R          |
Check Response          | I                   | I              |           | R          |
Register at Conference  | C/I                 | I              |           | R          |
Make Reservations       | C                   | C              |           | R          | S
Note that RAMs are intended to be a responsibility modelling mechanism and are not provided with support for automated analysis that could help to use them together with business processes during process execution. Their expressive power is high in terms of the number of responsibilities that can be assigned but low regarding the number of workflow resource patterns supported, as constraints like SoD and BoD cannot be defined.3.1.2.Process modelling notationsThe default support for responsibility management in current process modelling notations is limited. BPMN 2.0 (OMG 2011), the de-facto standard for process modelling, provides a mechanism to assign responsibilities to an activity. However, the only responsibility type that is defined by default is Responsible (so-called Potential Owner in BPMN). Other types of responsibilities can be added by extending the BPMN metamodel. In addition, nothing is said about the implications of adding new responsibilities during process execution.The EPC notation (Dumas, van der Aalst, and ter Hofstede 2005) is more expressive than BPMN for resource modelling in the sense that it provides a specific representation of organisational units and allows defining organisational relations. However, to the best of our knowledge there is no support for responsibilities other than the resource in charge of executing the activity.The so-called activity partitions of Unified Modeling Language (UML) Activity Diagrams (Russell et al. 2006) are classifiers similar to the BPMN swimlanes, although enriched with dimensions for hierarchical modelling. Therefore, they allow grouping process activities according to any criterion, which includes organisational information. Besides that, this modelling approach is not very expressive in terms of the support provided for the creation patterns (Russell et al. 2005). There is no notion of responsibility modelling either.Finally, BPEL4People (OASIS 2009) is an extension of the BPEL notation (OASIS 2007) based on the WS-HumanTask specification (OASIS 2010), which enables the integration of human beings in service-oriented applications. It provides support for the execution of business processes with three types of responsibilities, namely: Responsible, Accountable and Informed. However, although it provides a rather flexible mechanism for defining the notifications that the people with responsibility Informed receive, the participation of people with responsibility Accountable is limited to intervening when a deadline is missed. Other forms of interaction, such as checking that an activity was correctly performed, are not allowed.3.1.3.Modelling software tools and BPMSModelling software tools, such as Visual Paradigm, 6 facilitate the automatic generation of a RACI matrix from a resource-aware BPMN model. Specifically, the responsibility type Responsible can be automatically extracted and the RACI matrix can then be manually filled out to include information about the other types of responsibilities.
However, the output is just used for documentation purposes, since BPMN does not support the definition of responsibilities Accountable, Consulted and Informed.Signavio Process Editor 7 also allows for defining RACI responsibilities in process models by making use of BPMN elements. While those models can be used for generating reports subsequently, process engines will not take into account the responsibilities Accountable, Consulted and Informed for automatic process execution.The support for responsibility management is a novel functionality in BPMSs. Bizagi 8 and ARIS (Scheer 2000) allow for the definition of RASCI responsibilities in BPMN models by making use of extended attributes in process activities. Nevertheless, similar to the tools focused on modelling, only the responsibility Responsible is considered for execution and the rest are used for process documentation and reporting. RACI matrices can be defined in the Red Hat JBoss BPM Suite 9 alongside a process model for broader documentation of the responsibilities involved in the process (Cumberlidge 2007). To the best of our knowledge, only YAWL (Adams 2016) slightly supports responsibility-aware process execution by means of the concept of secondary resources (human and non-human), which may assist in the completion of the work (hence providing support). Any kind of support for responsibility modelling and execution other than Responsible is still missing, however, in other BPMSs, such as Camunda 10 and Bonita BPM. 11 3.1.4.Research proposalsDue to the limitations of the process modelling notations and systems for responsibility management, a few research proposals have been developed to support the assignment of different responsibilities to process activities and the automation of such responsibility-aware process models. In particular, Grosskopf (2007) extended BPMN 1.0 to support accountability.Resource Assignment Language (RAL) (Cabanillas et al. 2015b; Cabanillas, Resinas, and Ruiz-Cortés 2011a) is an expressive language for defining resource assignments that supports all the creation patterns (Russell et al. 2005). RAL is independent of the process modelling notation. Therefore, it can be decoupled from the process model or it can be integrated in it, as shown in Cabanillas, Resinas, and Ruiz-Cortés (2011) with BPMN. Furthermore, RAL is suited to be used for modelling any kind of responsibility as long as that is supported by the process modelling notation with which it is used.A graphical notation (RALph) with an expressive power similar to that of RAL was designed to allow for graphically defining resource assignments in process models (Cabanillas et al. 2015a). Similarly to the case of RAL, RALph is not actually equipped with support for modelling specific responsibilities. Therefore, that support depends on the process modelling notation with which RALph is used. Otherwise, the notation should be extended.To a greater or lesser extent, these proposals only address responsibility modelling and they do not provide details about the implications for the execution of the responsibility-aware process models generated.Since process execution is also a concern and the different responsibilities modelled with a process should also be considered at run time, the approach described in Cabanillas, Resinas, and Ruiz-Cortés (2011b) presented a pattern-based mechanism for including specific activities in a BPMN model that represent accountability, support, consultancy and notification functions.
The result is thus a responsibility-aware executable process model that can be automated by BPMN process engines. However, due to the extra elements added in order to include RASCI responsibilities, the model is likely to become unreadable and deviate from the original one, hence turning out to be less eligible for other purposes, such as documentation, due to the large amount of implementation details. As an illustrative example, applying this technique to the scenario described in Section 2, the number of process activities would increase from 5 to 15. Moreover, the RASCI patterns defined are fixed and hence, there is no flexibility for adapting the joint use of the responsibilities to the organisational needs. Our preliminary work in this area (Cabanillas, Resinas, and Ruiz-Cortés 2012) also generated executable process models provided with RASCI information avoiding the aforementioned readability problem. However, flexibility remained an issue, as the way of including responsibilities in the process model was fixed.3.2.Template-based process modellingProcess templates have been defined with different notations and used for different purposes and in different domains. For instance, BPMN templates were defined for generating so-called business process studios by means of model transformations (Mos and Cortés-Cornax 2016). Configurable processes rely on process fragments or templates for adapting an abstract process to a specific context. They have been used, e.g., for addressing the problem of variability in service implementation, a.k.a. service differentiation, with BPEL (Tao and Yang 2007) as well as the problem of reference model implementation with Configurable EPC (C-EPC) (Recker et al. 2005). In addition, configurable processes have been applied in industry to solve real problems, as described in Gottschalk, van der Aalst, and Jansen-Vullers (2007) for SAP processes.Most of these approaches, however, focus on control-flow aspects of business processes and disregard other perspectives. Nevertheless, notations like the one presented in La Rosa et al. (2011) allow for defining configurable process models considering control flow, data and resources. These three perspectives are also supported by a template-based approach for modelling process performance indicators (del-Río-Ortega et al. 2016).
Table 3. Support for responsibility management in business processes. ✓ indicates that the feature is supported, ≈ indicates that the feature is partly supported, – indicates that the feature is not supported, and n/a indicates that the evaluation criterion is not applicable.
Group                     | Approach                                                    | Responsibilities | Modelling | Execution | Flexibility
Generic                   | RAM (Website 2016)                                          | Any              | out       | –         | n/a
BP modelling notations    | BPMN 2.0 (OMG 2011)                                         | Any              | in        | –         | n/a
                          | BPEL4People (OASIS 2009)/WS-HumanTask (OASIS 2010)          | RAI              | in        | ✓         | ≈
Modelling tools and BPMSs | Visual Paradigm                                             | RACI             | out       | –         | n/a
                          | Signavio                                                    | RACI             | in        | –         | n/a
                          | Bizagi                                                      | Any              | in        | –         | n/a
                          | ARIS (Scheer 2000)                                          | RASCI            | in        | –         | n/a
                          | JBoss BPM Suite (Cumberlidge 2007)                          | RACI             | out       | –         | n/a
                          | YAWL (Adams 2016)                                           | RS               | in        | ✓         | –
Research proposals        | BPMN 1.0 Ext. by Grosskopf (2007)                           | RA               | in        | –         | n/a
                          | RAL (Cabanillas et al. 2015b)                               | Any              | in/out    | –         | n/a
                          | RALph (Cabanillas et al. 2015a)                             | Any              | in        | –         | n/a
                          | RASCI patterns (Cabanillas, Resinas, and Ruiz-Cortés 2011b) | RASCI            | in        | ✓         | –
                          | RACI2BPMN (Cabanillas, Resinas, and Ruiz-Cortés 2012)       | RASCI            | out       | ✓         | –
                          | Our proposal                                                | Any              | out       | ✓         | ✓
All the previous approaches have shown benefits for the purpose for which they were conceived.
However, none of them has taken into consideration activity responsibilities since they did not specifically focus on the organisational perspective of business processes.
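As a small illustration of how a RASCI matrix such as Table 2 can be handled programmatically (a generic sketch, not the template-based transformation proposed in the paper), the snippet below stores the assignments per activity and checks two of the informal rules from the RASCI definitions above: each activity should typically have exactly one Responsible role and at most one Accountable role. The role and activity names and the cell values follow Table 2.

```python
# RASCI matrix keyed by activity; each value maps a role to its responsibilities.
rasci = {
    "Prepare Authorisation":  {"Project Coordinator": "A", "Account Admin.": "I",
                               "WP Leader": "I", "Researcher": "R", "Clerk": "C"},
    "Send Authorisation":     {"Researcher": "R"},
    "Check Response":         {"Project Coordinator": "I", "Account Admin.": "I",
                               "Researcher": "R"},
    "Register at Conference": {"Project Coordinator": "C/I", "Account Admin.": "I",
                               "Researcher": "R"},
    "Make Reservations":      {"Project Coordinator": "C", "Account Admin.": "C",
                               "Researcher": "R", "Clerk": "S"},
}

def check_rasci(matrix):
    """Report activities violating the informal rules: exactly one Responsible (R)
    and at most one Accountable (A) per activity."""
    problems = []
    for activity, cells in matrix.items():
        # Cells like "C/I" carry several responsibilities at once.
        letters = [letter for cell in cells.values() for letter in cell.split("/")]
        if letters.count("R") != 1:
            problems.append(f"{activity}: expected exactly one R, found {letters.count('R')}")
        if letters.count("A") > 1:
            problems.append(f"{activity}: more than one A")
    return problems

print(check_rasci(rasci) or "matrix is consistent")
```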
[]
[]
BioData Mining
30026812
PMC6047369
10.1186/s13040-018-0172-x
Soft document clustering using a novel graph covering approach
Background: In text mining, document clustering describes the effort to assign unstructured documents to clusters, which in turn usually refer to topics. Clustering is widely used in science for data retrieval and organisation. Results: In this paper we present and discuss a novel graph-theoretical approach for document clustering and its application to a real-world data set. We show that the well-known graph partition into stable sets or cliques can be generalized to pseudostable sets or pseudocliques. This allows both a soft clustering and a hard clustering to be performed. The software is freely available on GitHub. Conclusions: Both the presented integer linear programming formulation and the greedy approach for this $\mathcal{NP}$-complete problem lead to valuable results on random instances and on some real-world data for different similarity measures. We could show that PS-Document Clustering is a remarkable approach to document clustering and opens the complete toolbox of graph theory to this field. Electronic supplementary material: The online version of this article (10.1186/s13040-018-0172-x) contains supplementary material, which is available to authorized users.
Related work and state of the art
Recent research has focused on methods and heuristics to solve document clustering. The authors of [6], for example, tried to cluster documents retrieved from the MEDLINE database using evolutionary algorithms, whereas [7] used machine learning approaches; see also the work of [8]. As mentioned previously, only a few authors, like [9], mentioned graphs. As [10] points out, unfortunately "no single definition of a cluster in graphs is universally accepted, and the variants used in the literature are numerous". There has also been a lot of research that is related but has a different scope. The authors of [11], for example, discussed document clustering in the context of search queries, whereas [12] discussed hierarchical clustering. In the field of bioinformatics and life science informatics, the automatic classification and recognition of texts according to their medical, chemical or biological entities is an active research topic (see [13], [14] or [15]). Document clustering has been a focus of research for decades, and interest is steadily growing. This is also evident from the increasing number of competitions in this field, for example TREC, the Text REtrieval Conference; see [16]. Using a graph partition for clustering has been widely discussed in the literature. Schaeffer points out that "the field of graph clustering has grown quite popular and the number of published proposals for clustering algorithms as well as reported applications is high" [10]. Usually, directed or weighted graphs are the subject of research. However, we would like to emphasize that, for reasons of problem complexity, it is reasonable to focus on simple graphs. The work reported in [17] explains that a graph partition into cliques or stable sets is most common. We can conclude that focusing on graph clustering alone is a novel approach, and the generalization of soft document clustering introduced in [1] allows us to use the graph-theoretical toolbox to gain new insights into document clustering – or clustering in general.
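To make the similarity-graph view of document clustering concrete, here is a small, self-contained Python sketch (using scikit-learn) that builds a TF-IDF cosine-similarity graph over a few toy documents and partitions it with a plain greedy clique heuristic. It is meant only as a baseline illustration of clustering on a simple graph; it is not the pseudostable/pseudoclique ILP or greedy algorithm presented in this paper, and the similarity threshold and example documents are arbitrary.

```python
# Toy sketch: document clustering as a greedy clique partition of a
# similarity graph. This is a plain baseline for illustration, not the
# PS-Document Clustering method of the paper; threshold and documents
# are arbitrary choices.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "graph clustering partitions a graph into dense groups",
    "cliques and stable sets are classic graph structures",
    "microarray experiments measure gene expression profiles",
    "normalization of microarray data removes intensity bias",
]

# Build a simple (unweighted, undirected) similarity graph:
# connect two documents if their cosine similarity reaches a threshold.
tfidf = TfidfVectorizer().fit_transform(docs)
sim = cosine_similarity(tfidf)
threshold = 0.05
n = len(docs)
adjacent = {i: {j for j in range(n) if j != i and sim[i, j] >= threshold}
            for i in range(n)}

# Greedy clique partition: put each document into the first cluster whose
# members are all adjacent to it; otherwise open a new cluster.
clusters = []
for i in range(n):
    for cluster in clusters:
        if all(j in adjacent[i] for j in cluster):
            cluster.add(i)
            break
    else:
        clusters.append({i})

print(clusters)  # two clusters expected for these toy documents: {0, 1} and {2, 3}
```

A soft variant could simply allow a document to join every cluster it is fully adjacent to instead of stopping at the first one.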
[ "18593717", "15663789" ]
[ { "pmid": "18593717", "title": "PuReD-MCL: a graph-based PubMed document clustering methodology.", "abstract": "MOTIVATION\nBiomedical literature is the principal repository of biomedical knowledge, with PubMed being the most complete database collecting, organizing and analyzing such textual knowledge. There are numerous efforts that attempt to exploit this information by using text mining and machine learning techniques. We developed a novel approach, called PuReD-MCL (Pubmed Related Documents-MCL), which is based on the graph clustering algorithm MCL and relevant resources from PubMed.\n\n\nMETHODS\nPuReD-MCL avoids using natural language processing (NLP) techniques directly; instead, it takes advantage of existing resources, available from PubMed. PuReD-MCL then clusters documents efficiently using the MCL graph clustering algorithm, which is based on graph flow simulation. This process allows users to analyse the results by highlighting important clues, and finally to visualize the clusters and all relevant information using an interactive graph layout algorithm, for instance BioLayout Express 3D.\n\n\nRESULTS\nThe methodology was applied to two different datasets, previously used for the validation of the document clustering tool TextQuest. The first dataset involves the organisms Escherichia coli and yeast, whereas the second is related to Drosophila development. PuReD-MCL successfully reproduces the annotated results obtained from TextQuest, while at the same time provides additional insights into the clusters and the corresponding documents.\n\n\nAVAILABILITY\nSource code in perl and R are available from http://tartara.csd.auth.gr/~theodos/" }, { "pmid": "15663789", "title": "A robust two-way semi-linear model for normalization of cDNA microarray data.", "abstract": "BACKGROUND\nNormalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values.\n\n\nMETHODS\nWe propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach.\n\n\nRESULTS\nThe simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method.\n\n\nCONCLUSIONS\nOur simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods." } ]
Scientific Reports
30022035
PMC6052167
10.1038/s41598-018-29174-3
Compressing Networks with Super Nodes
Community detection is a commonly used technique for identifying groups in a network based on similarities in connectivity patterns. To facilitate community detection in large networks, we recast the network as a smaller network of ‘super nodes’, where each super node comprises one or more nodes of the original network. We can then use this super node representation as the input into standard community detection algorithms. To define the seeds, or centers, of our super nodes, we apply the ‘CoreHD’ ranking, a technique applied in network dismantling and decycling problems. We test our approach through the analysis of two common methods for community detection: modularity maximization with the Louvain algorithm and maximum likelihood optimization for fitting a stochastic block model. Our results highlight that applying community detection to the compressed network of super nodes is significantly faster while successfully producing partitions that are more aligned with the local network connectivity and more stable across multiple (stochastic) runs within and between community detection algorithms, yet still overlap well with the results obtained using the full network.
Related Work
Our objective to define a smaller network of super nodes is a form of network compression. Several references have explored useful ways to compress networks19–24, with Yang et al.20 and Peng et al.22 using graph compression in the context of community detection. A review of network compression and summarization techniques is given in ref.25. These compression approaches can be classified as either network pre-processing or network size reduction. Under these definitions, pre-processing refers to a method that uses all of the nodes to pre-partition the network or to agglomerate nodes into a smaller network of pre-agglomerated nodes, or 'super nodes'. Creating a super node representation of the network can assist in visualization, gives control over how many nodes to split the network into, and allows a pre-processed network to be used as input to standard network analysis tools. Alternatively, in network size reduction approaches, nodes are systematically removed and further analysis is performed on a smaller subnetwork. Such an approach may be useful if one has prior knowledge of unimportant or redundant nodes. Two network pre-processing methods that define super nodes are explored by Yang et al.20 and Lisewski et al.19, but these approaches differ from our proposal in that they seek to define super nodes along with additional side information about relationships between node pairs. First, Lisewski et al.19 describe 'super genomic network compression' to reduce the number of edges in a large protein interaction network. To do this, the authors identify 'clusters of orthologous groups' of proteins, or proteins that give rise to similar functions in different species and originated from a common ancestor. Members of an orthologous group are connected as a star network, with the center node as one member of the orthologous group. Furthermore, edges between orthologous groups are replaced by a single weighted link reflecting the pairwise group evolutionary similarity. Next, Yang et al.20 define super nodes by specifying 'must link' and 'cannot link' constraints between pairs of nodes, agglomerating as many nodes as possible that share must-link constraints while being cautious about agglomerating nodes that cannot link. Finally, Slashburn, introduced by Lim et al.23, is another pre-processing approach for network compression that seeks to identify a permutation or ordering of the nodes such that the adjacency matrix is pre-processed to have sets of clustered edges. To accomplish this task, hubs are removed iteratively and nodes are re-ordered so that high-degree nodes appear first in the ultimate ordering.
Alternatively, approaches that perform network compression through network size reduction were presented in three works21,22,24. Gilbert et al. introduce the 'KeepAll' method21, which seeks to prioritize a set of nodes according to their importance in the network and retain only the smallest set of additional nodes required for the induced subgraph of prioritized nodes to be connected. Results in that paper highlight the method's ability to remove redundant and noisy nodes, allowing clearer analysis of the original set of prioritized nodes. Peng et al.22 extract a smaller network through a k-core decomposition and perform community detection on the subnetwork. While we also seek to perform community detection on a smaller version of the network, we do so in the network pre-processing manner, so that all nodes are effectively included in the input to the community detection algorithm, with the flexibility to choose the number of super nodes, i.e., the size at which the network is represented. Given that the number of nodes in the k-core of a network decreases dramatically with increasing k, there is not much flexibility in the scale or size of the network representation. Finally, Liu et al. also use a k-core based approach to decompose the network, in a different manner. The authors define CONDENSE24, an information-theoretic method to reduce a large network into a set of representative substructures. In particular, induced subgraphs resulting from the k-core based clustering technique are each treated as representative substructures.
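The pre-processing idea described above can be sketched in a few lines of Python with networkx (version 2.8 or later for the Louvain routine). The snippet below is a simplified illustration rather than the authors' implementation: it picks super node seeds with a CoreHD-style rule (repeatedly removing the highest-degree node of the 2-core), attaches every remaining node to its nearest seed by breadth-first search, builds the weighted super node graph, and runs Louvain on it. The seed count and the example graph are arbitrary choices.

```python
# Simplified sketch of community detection on a compressed "super node" graph.
# Illustration only, not the authors' code: seeds come from a CoreHD-style
# heuristic, the remaining nodes are attached to their nearest seed, and
# Louvain is run on the weighted super node graph (networkx >= 2.8).
import networkx as nx
from networkx.algorithms.community import louvain_communities

def corehd_seeds(G, n_seeds):
    """Pick seeds by repeatedly removing the highest-degree node of the 2-core."""
    H = G.copy()
    seeds = []
    while len(seeds) < n_seeds:
        core = nx.k_core(H, k=2)
        if core.number_of_nodes() == 0:
            break
        v = max(core.degree, key=lambda item: item[1])[0]
        seeds.append(v)
        H.remove_node(v)
    return seeds

def assign_to_seeds(G, seeds):
    """Assign every node to its nearest seed via multi-source BFS."""
    assignment = {s: s for s in seeds}
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in assignment:
                    assignment[v] = assignment[u]
                    nxt.append(v)
        frontier = nxt
    return assignment

def super_node_graph(G, assignment):
    """Collapse nodes onto their seeds; edge weights count original edges."""
    S = nx.Graph()
    S.add_nodes_from(set(assignment.values()))
    for u, v in G.edges():
        su, sv = assignment[u], assignment[v]
        if su != sv:
            w = S[su][sv]["weight"] + 1 if S.has_edge(su, sv) else 1
            S.add_edge(su, sv, weight=w)
    return S

if __name__ == "__main__":
    G = nx.karate_club_graph()
    seeds = corehd_seeds(G, n_seeds=6)
    assignment = assign_to_seeds(G, seeds)
    S = super_node_graph(G, assignment)
    super_comms = louvain_communities(S, weight="weight", seed=0)
    # Map communities of super nodes back to the original nodes.
    communities = [{u for u, s in assignment.items() if s in sc} for sc in super_comms]
    print(communities)
```

Because Louvain then operates on a graph with only as many nodes as there are seeds, the stochastic optimization has far fewer degrees of freedom, which is the intuition behind the speed and stability gains reported in the abstract.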
[ "16723398", "28508065", "25126794" ]
[ { "pmid": "16723398", "title": "Modularity and community structure in networks.", "abstract": "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as \"modularity\" over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets." }, { "pmid": "28508065", "title": "The ground truth about metadata and community detection in networks.", "abstract": "Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks' links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures." }, { "pmid": "25126794", "title": "Supergenomic network compression and the discovery of EXP1 as a glutathione transferase inhibited by artesunate.", "abstract": "A central problem in biology is to identify gene function. One approach is to infer function in large supergenomic networks of interactions and ancestral relationships among genes; however, their analysis can be computationally prohibitive. We show here that these biological networks are compressible. They can be shrunk dramatically by eliminating redundant evolutionary relationships, and this process is efficient because in these networks the number of compressible elements rises linearly rather than exponentially as in other complex networks. 
Compression enables global network analysis to computationally harness hundreds of interconnected genomes and to produce functional predictions. As a demonstration, we show that the essential, but functionally uncharacterized Plasmodium falciparum antigen EXP1 is a membrane glutathione S-transferase. EXP1 efficiently degrades cytotoxic hematin, is potently inhibited by artesunate, and is associated with artesunate metabolism and susceptibility in drug-pressured malaria parasites. These data implicate EXP1 in the mode of action of a frontline antimalarial drug." } ]
Scientific Reports
30030483
PMC6054631
10.1038/s41598-018-29282-0
Analysis of the influence of imaging-related uncertainties on cerebral aneurysm deformation quantification using a no-deformation physical flow phantom
Cardiac cycle-related pulsatile aneurysm motion and deformation are assumed to provide valuable information for assessing cerebral aneurysm rupture risk. Accordingly, numerous studies have addressed quantification of cerebral aneurysm wall motion and deformation. Most of them utilized in vivo imaging data, but image-based aneurysm deformation quantification is subject to pronounced uncertainties: unknown ground-truth deformation, image resolution in the order of the expected deformation, and a direct interplay between contrast agent inflow and image intensity. To analyze the impact of these uncertainties on deformation quantification, a multi-modality ground-truth phantom study was performed. A physical flow phantom was designed that allowed simulating pulsatile flow through a variety of modeled cerebral vascular structures. The phantom was imaged using different modalities (MRI, CT, 3D-RA) under physiologically realistic flow conditions. The resulting image data were analyzed with an established registration-based approach for automated wall motion quantification. The data reveal a severe dependency between contrast media inflow-related image intensity changes and the extent of estimated wall deformation. The study illustrates that imaging-related uncertainties affect the accuracy of cerebral aneurysm deformation quantification, suggesting that in vivo imaging studies have to be accompanied by ground-truth phantom experiments to foster data interpretation and to prove the plausibility of the applied image analysis algorithms.
Cerebral Aneurysm Wall Motion Quantification: Overview of Related Work
An association between wall motion and aneurysm rupture was already suggested in the initial work of Meyer et al.13. Using phase-contrast MR angiography (PC MRA), the pulsation-related change in ruptured aneurysm volume was reported to be 51% ± 10%, compared with 17.6% ± 8.9% for non-ruptured aneurysms. Aneurysm volume estimation relied on manual measurements of aneurysm diameters along the x, y and z image axes and an assumed spherical aneurysm geometry. Given the simplicity of the method, plus additional uncertainties due to, e.g., potential flow artifacts, caution is required with respect to the interpretation of the results1. Nevertheless, Hayakawa et al. as well as Ishida et al. also observed aneurysm wall motion and pulsating blebs in 4D-CTA data9,10,14,15. During surgery, wall motion positions could even be confirmed to be aneurysm rupture sites for two patients10, and pulsating blebs were confirmed as rupture points9. Moreover, aneurysm pulsation was observed more frequently for ruptured aneurysms14. These observations further substantiated the early results of Meyer et al. and the suggested link between wall motion and impending aneurysm rupture13, but they were based only on visual assessment of aneurysm wall motion. As a natural next step, numerous studies aimed at image-based quantification of pulsatile aneurysm wall motion. Following the review of Vanrossomme et al.1, an abridged overview is given in Table 1, enriched by information about image resolution and the wall motion quantification approaches exploited, which are the focus of the present work. Most of the studies worked directly on in vivo data, with the typical imaging modalities being the aforementioned PC MRI/MRA, 4D-CTA, and 3D-RA. From an image analysis perspective, two approaches dominate: threshold-based and registration-based quantification of aneurysm dynamics and wall motion. Thresholding mainly refers to separating vasculature and structures of interest from the image background; window/level settings are usually chosen operator-specifically. The resulting images and segmented structures are then used to calculate changes in volume over time or the like16–18. Such methods are, however, observer-dependent (in the case of manual selection of thresholds).
Furthermore, intensity fluctuations due to changes in blood velocity or inflow of contrast agent are usually not explicitly accounted for and introduce additional uncertainties during quantification of aneurysm deformation and cardiac cycle-related wall motion.

Table 1. Previous studies on aneurysm wall motion (WM) detection/quantification in patient image data.

Authors | Image modality | Image resolution | WMO | WMQ | WM(Q) assessment
Meyer et al.13 | PC-MRA | unclear | 15/16 | 1.0–1.5 mm^a | manual
Wardlaw et al.27 | PD-US | unclear | yes | 53%^b | manual
Kato et al.28 | 4D-CTA | unclear | 10/15 | no | unclear
Hayakawa et al.10 | 4D-CTA | unclear | 4/23 | no | visual inspection
Ishida et al.9 | 4D-CTA | unclear | 13/34 | no | visual inspection
Dempere-Marco et al.29 | 3D-RA | unclear | 2/3 | yes | registration
Oubel et al.19 | 3D-RA | unclear | 4/4 | 0.5 mm | registration
Oubel et al.20 | 3D-RA | 0.07–0.28 mm | 10/18 | 0.0–0.29 mm | registration
Karmonik et al.16 | 2D PC-MRI | 0.625 mm | 7/7 | 0.15 mm (range: 0.04–0.31 mm)^c | semi-automatic, threshold-based
Hayakawa et al.14 | 4D-CTA | unclear | 24/65 | no | visual inspection
Zhang et al.23 | 3D-RA | 0.154 mm | 1/2 | yes | registration
Kuroda et al.17 | 4D-CTA | 0.25–0.5 mm | yes | 5.40% ± 4.17%^d | threshold-based
Firouzian et al.22 | 4D-CTA | 0.23 mm | 19/19 | 0.17 ± 0.10 mm^e | registration
Hayakawa et al.15 | 4D-CTA | 0.5 mm | 20/56 | no | visual inspection
Illies et al.18 | 4D-CTA | 0.39 mm | yes | yes | semi-automatic, threshold-based

The studies are listed in chronological order. Image resolution refers to the in-plane spatial resolution of the reconstructed data. WMO: wall motion observed; if numbers are given, they refer to the frequency of wall motion observation. WMQ: wall motion quantification. PC-MRA: phase-contrast MR angiography; CTA: CT angiography; 3D-RA: 3D rotational angiography; PD-US: power Doppler ultrasonography.
^a Reported as typical change in size of ruptured aneurysms in at least one dimension. ^b Average increase of aneurysm cross-sectional area between diastole and systole. ^c Average wall displacement, evaluated in 2D slices. ^d Cardiac cycle-related aneurysm volume changes. ^e Aneurysm diameter change.

Registration-based cerebral aneurysm wall motion quantification has been initiated by Oubel et al.19. The idea was to apply non-linear registration between a pre-defined reference image (like the first acquired image frame) and the other images of the respective temporal image sequence. The resulting deformation fields are assumed to represent pulsatile deformation with respect to the reference time point. In particular, Oubel et al. applied deformation fields computed in high-frame-rate DSA (digital subtraction angiography) to automatically propagate landmarks that were manually located on the aneurysm wall as represented in the first DSA frame19,20. Aneurysm wall motion was then quantified by Euclidean distances between original and propagated landmark positions. A similar application of non-linear registration for quantification of cardiac cycle-related aneurysm dynamics has also been reported for 4D-CTA21. As the applied non-linear registration approaches are intensity-based, they are (depending on the imaging modality) sensitive to inhomogeneities of contrast distributions20, contrast inflow and/or changes of flow velocity, and image noise. Without appropriate quantification of such uncertainties, interpretation of computed deformation fields and derived quantities is, therefore, hardly feasible. Due to the absence of ground truth in vivo deformation data, quantification of related uncertainties during (semi-)automatic image analysis is usually based on phantom data. For instance, Firouzian et al. and Zhang et al.
simulated image sequences and thereby estimated the uncertainties of registration-based quantification of cardiac cycle-related aneurysm volume changes to be in the order of 4% and below 10%, respectively22,23. Such in silico phantoms, however, almost always simplify details of the imaging process and the resulting effects (system noise, occurrence of artifacts, etc.). In this regard, physical phantoms (also referred to as in vitro phantoms23) add reliability. For instance, Kuroda et al. imaged a syringe filled with normal saline and interpreted the measured volume change of 0.248% as an indicator of insignificant changes17. The influence of actual flow dynamics was, however, not considered. In turn, Yaghmai et al., Umeda et al. and Zhang et al. constructed physical (flow) phantoms that allowed illustrating the feasibility of aneurysm wall motion imaging by means of 4D-CTA and 3D-RA7,23,24. Similar to the aforementioned in vivo studies, exact aneurysm deformation data were again not known or reported for these phantoms; thus, feasibility was demonstrated qualitatively, but uncertainties regarding wall motion quantification remain. This shortcoming of previous studies was the motivation for the present study.
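As a minimal illustration of the landmark-propagation step used in the registration-based studies above, the Python sketch below samples a dense displacement field at landmark positions and reports the Euclidean displacement per landmark. The displacement field and the pixel spacing are synthetic, assumed values; in a real pipeline the field would come from non-rigid registration between the reference frame and a later frame of the sequence, which is outside the scope of this sketch.

```python
# Minimal sketch of landmark propagation from a dense 2D displacement field,
# in the spirit of registration-based wall motion quantification. The field
# below is a synthetic toy deformation and the 0.25 mm pixel spacing is an
# assumption; in practice the field is produced by non-rigid registration.
import numpy as np
from scipy.ndimage import map_coordinates

# Synthetic displacement field on a 128 x 128 grid (units: pixels).
ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx]
disp_y = 0.3 * np.sin(2 * np.pi * xx / nx)   # y-displacement at each pixel
disp_x = 0.2 * np.cos(2 * np.pi * yy / ny)   # x-displacement at each pixel

# Landmarks placed (e.g., manually) on the aneurysm wall in the reference
# frame, given as (y, x) coordinates in pixels.
landmarks = np.array([[40.0, 52.0], [45.5, 60.0], [50.0, 70.5]])

# Sample the displacement field at the (sub-pixel) landmark positions.
coords = landmarks.T                     # shape (2, n) as map_coordinates expects
dy = map_coordinates(disp_y, coords, order=1)
dx = map_coordinates(disp_x, coords, order=1)

propagated = landmarks + np.stack([dy, dx], axis=1)
pixel_spacing_mm = 0.25                  # assumed in-plane resolution
displacement_mm = np.hypot(dy, dx) * pixel_spacing_mm

for p, q, d in zip(landmarks, propagated, displacement_mm):
    print(f"landmark {p} -> {q.round(2)}: displacement {d:.3f} mm")
```

Repeating this for every frame of the sequence yields a displacement curve per landmark, which is the kind of quantity that the phantom experiments discussed above are meant to sanity-check against known ground-truth motion.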
[ "25929878", "9445359", "23449260", "12867109", "23406828", "21273572", "15956499", "23913018", "19859863", "21998064", "27880805", "20651422", "20346606", "22884403", "21520841", "17538143", "2961104", "8733959", "15343426", "17354802" ]
[ { "pmid": "25929878", "title": "Intracranial Aneurysms: Wall Motion Analysis for Prediction of Rupture.", "abstract": "Intracranial aneurysms are a common pathologic condition with a potential severe complication: rupture. Effective treatment options exist, neurosurgical clipping and endovascular techniques, but guidelines for treatment are unclear and focus mainly on patient age, aneurysm size, and localization. New criteria to define the risk of rupture are needed to refine these guidelines. One potential candidate is aneurysm wall motion, known to be associated with rupture but difficult to detect and quantify. We review what is known about the association between aneurysm wall motion and rupture, which structural changes may explain wall motion patterns, and available imaging techniques able to analyze wall motion." }, { "pmid": "9445359", "title": "Prevalence and risk of rupture of intracranial aneurysms: a systematic review.", "abstract": "BACKGROUND AND PURPOSE\nThe estimates on the prevalence and the risk of rupture of intracranial saccular aneurysms vary widely between studies. We conducted a systematic review on prevalence and risk of rupture of intracranial aneurysms and classified the data according to study design, study population, and aneurysm characteristics.\n\n\nMETHODS\nWe searched for studies published between 1955 and 1996 by means of a MEDLINE search and a cumulative review of the reference lists of all relevant publications. Two authors independently assessed eligibility of all studies and extracted data on study design and on numbers and characteristics of patients and aneurysms.\n\n\nRESULTS\nFor data on prevalence we found 23 studies, totalling 56,304 patients; 6685 (12%) of these patients were from 15 angiography studies. Prevalence was 0.4% (95% confidence interval, 0.4% to 0.5%) in retrospective autopsy studies, 3.6% (3.1 to 4.1) for prospective autopsy studies, 3.7% (3.0 to 4.4) in retrospective angiography studies, and 6.0% (5.3 to 6.8) in prospective angiography studies. For adults without specific risk factors, the prevalence was 2.3% (1.7 to 3.1); it tended to increase with age. The prevalence was higher in patients with autosomal dominant polycystic kidney disease (relative risk [RR], 4.4 [2.7 to 7.2]), a familial predisposition (RR, 4.0 [2.7 to 6.0]), or atherosclerosis (RR, 2.3 [1.7 to 3.1]). Only 8% (5 to 11) of the aneurysms were >10 mm. For the risk of rupture, we found nine studies, totalling 3907 patient-years. The overall risk per year was 1.9% (1.5 to 2.4); for aneurysms = 10 mm, the annual risk was 0.7% (0.5 to 1.0). The risk was higher in women (RR, 2.1[1.1 to 3.9]) and for aneurysms that were symptomatic (RR, 8.3 [4.0 to 17]), >10 mm (RR, 5.5 [3.3 to 9.4]), or in the posterior circulation (RR, 4.1 [1.5 to 11]).\n\n\nCONCLUSIONS\nData on prevalence and risk of rupture vary considerably according to study design, study population, and aneurysm characteristics. If all available evidence with inherent overestimation and underestimation is taken together, for adults without risk factors for subarachnoid hemorrhage, aneurysms are found in approximately 2%. The vast majority of these aneurysms are small (=10 mm) and have an annual risk of rupture of approximately 0.7%." 
}, { "pmid": "23449260", "title": "Comparative effectiveness of unruptured cerebral aneurysm therapies: propensity score analysis of clipping versus coiling.", "abstract": "BACKGROUND AND PURPOSE\nEndovascular therapy has increasingly become the most common treatment for unruptured cerebral aneurysms in the United States. We evaluated a national, multi-hospital database to examine recent utilization trends and compare periprocedural outcomes between clipping and coiling treatments of unruptured aneurysms.\n\n\nMETHODS\nThe Premier Perspective database was used to identify patients hospitalized between 2006 to 2011 for unruptured cerebral aneurysm who underwent clipping or coiling therapy. A logistic propensity score was generated for each patient using relevant patient, procedure, and hospital variables, representing the probability of receiving clipping. Covariate balance was assessed using conditional logistic regression. Following propensity score adjustment using 1:1 matching methods, the risk of in-hospital mortality and morbidity was compared between clipping and coiling cohorts.\n\n\nRESULTS\nA total of 4899 unruptured aneurysm patients (1388 clipping, 3551 coiling) treated at 120 hospitals were identified. Following propensity score adjustment, clipping patients had a similar likelihood of in-hospital mortality (odds ratio [OR], 1.43; 95% confidence interval [CI], 0.49-4.44; P=0.47) but a significantly higher likelihood of unfavorable outcomes, including discharge to long-term care (OR, 4.78; 95% CI, 3.51-6.58; P<0.0001), ischemic complications (OR, 3.42; 95% CI, 2.39-4.99; P<0.0001), hemorrhagic complications (OR, 2.16; 95% CI, 1.33-3.57; P<0.0001), postoperative neurological complications (OR, 3.39; 95% CI, 2.25-5.22; P<0.0001), and ventriculostomy (OR, 2.10; 95% CI, 1.01-4.61; P=0.0320) compared with coiling patients.\n\n\nCONCLUSIONS\nAmong patients treated for unruptured intracranial aneurysms in a large sample of hospitals in the United States, clipping was associated with similar mortality risk but significantly higher periprocedural morbidity risk compared with coiling." }, { "pmid": "12867109", "title": "Unruptured intracranial aneurysms: natural history, clinical outcome, and risks of surgical and endovascular treatment.", "abstract": "BACKGROUND\nThe management of unruptured intracranial aneurysms is controversial. Investigators from the International Study of Unruptured Intracranial Aneurysms aimed to assess the natural history of unruptured intracranial aneurysms and to measure the risk associated with their repair.\n\n\nMETHODS\nCentres in the USA, Canada, and Europe enrolled patients for prospective assessment of unruptured aneurysms. Investigators recorded the natural history in patients who did not have surgery, and assessed morbidity and mortality associated with repair of unruptured aneurysms by either open surgery or endovascular procedures.\n\n\nFINDINGS\n4060 patients were assessed-1692 did not have aneurysmal repair, 1917 had open surgery, and 451 had endovascular procedures. 5-year cumulative rupture rates for patients who did not have a history of subarachnoid haemorrhage with aneurysms located in internal carotid artery, anterior communicating or anterior cerebral artery, or middle cerebral artery were 0%, 2. 
6%, 14 5%, and 40% for aneurysms less than 7 mm, 7-12 mm, 13-24 mm, and 25 mm or greater, respectively, compared with rates of 2 5%, 14 5%, 18 4%, and 50%, respectively, for the same size categories involving posterior circulation and posterior communicating artery aneurysms. These rates were often equalled or exceeded by the risks associated with surgical or endovascular repair of comparable lesions. Patients' age was a strong predictor of surgical outcome, and the size and location of an aneurysm predict both surgical and endovascular outcomes.\n\n\nINTERPRETATION\nMany factors are involved in management of patients with unruptured intracranial aneurysms. Site, size, and group specific risks of the natural history should be compared with site, size, and age-specific risks of repair for each patient." }, { "pmid": "23406828", "title": "European Stroke Organization guidelines for the management of intracranial aneurysms and subarachnoid haemorrhage.", "abstract": "BACKGROUND\nIntracranial aneurysm with and without subarachnoid haemorrhage (SAH) is a relevant health problem: The overall incidence is about 9 per 100,000 with a wide range, in some countries up to 20 per 100,000. Mortality rate with conservative treatment within the first months is 50-60%. About one third of patients left with an untreated aneurysm will die from recurrent bleeding within 6 months after recovering from the first bleeding. The prognosis is further influenced by vasospasm, hydrocephalus, delayed ischaemic deficit and other complications. The aim of these guidelines is to provide comprehensive recommendations on the management of SAH with and without aneurysm as well as on unruptured intracranial aneurysm.\n\n\nMETHODS\nWe performed an extensive literature search from 1960 to 2011 using Medline and Embase. Members of the writing group met in person and by teleconferences to discuss recommendations. Search results were graded according to the criteria of the European Federation of Neurological Societies. Members of the Guidelines Committee of the European Stroke Organization reviewed the guidelines.\n\n\nRESULTS\nThese guidelines provide evidence-based information on epidemiology, risk factors and prognosis of SAH and recommendations on diagnostic and therapeutic methods of both ruptured and unruptured intracranial aneurysms. Several risk factors of aneurysm growth and rupture have been identified. We provide recommendations on diagnostic work up, monitoring and general management (blood pressure, blood glucose, temperature, thromboprophylaxis, antiepileptic treatment, use of steroids). Specific therapeutic interventions consider timing of procedures, clipping and coiling. Complications such as hydrocephalus, vasospasm and delayed ischaemic deficit were covered. We also thought to add recommendations on SAH without aneurysm and on unruptured aneurysms.\n\n\nCONCLUSION\nRuptured intracranial aneurysm with a high rate of subsequent complications is a serious disease needing prompt treatment in centres having high quality of experience of treatment for these patients. These guidelines provide practical, evidence-based advice for the management of patients with intracranial aneurysm with or without rupture. Applying these measures can improve the prognosis of SAH." 
}, { "pmid": "21273572", "title": "Novel dynamic four-dimensional CT angiography revealing 2-type motions of cerebral arteries.", "abstract": "BACKGROUND AND PURPOSE\nWe developed a novel dynamic 4-dimensional CT angiography to accurately evaluate dynamics in cerebral aneurysm.\n\n\nMETHODS\nDynamic 4-dimensional CT angiography achieved high-resolution 3-dimensional imaging with temporal resolution in a beating heart using dynamic scanning data sets reconstructed with a retrospective simulated R-R interval reconstruction algorithm.\n\n\nRESULTS\nMovie artifacts disappeared on dynamic 4-dimensional CT angiography movies of 2 kinds of stationary phantoms (titanium clips and dry bone). In the virtual pulsating aneurysm model, pulsation on the dynamic 4-dimensional CT angiography movie resembled actual movement in terms of pulsation size. In a clinical study, dynamic 4-dimensional CT angiography showed 2-type motions: pulsation and anatomic positional changes of the cerebral artery.\n\n\nCONCLUSIONS\nThis newly developed 4-dimensional visualizing technique may deliver some clues to clarify the pathophysiology of cerebral aneurysms." }, { "pmid": "15956499", "title": "CT angiography with electrocardiographically gated reconstruction for visualizing pulsation of intracranial aneurysms: identification of aneurysmal protuberance presumably associated with wall thinning.", "abstract": "Electrocardiographically (ECG) gated multisection helical CT images were obtained in 23 patients with ruptured intracranial aneurysms. 4D-CTA (3D CT angiography plus phase data) images were generated by ECG-gated reconstruction. Four patients showed pulsation of an aneurysmal bleb. Clipping was performed in two of these patients, and the rupture site matched the pulsatile bleb seen in 4D-CTA." }, { "pmid": "23913018", "title": "Detection of pulsation in unruptured cerebral aneurysms by ECG-gated 3D-CT angiography (4D-CTA) with 320-row area detector CT (ADCT) and follow-up evaluation results: assessment based on heart rate at the time of scanning.", "abstract": "PURPOSE\nMany epidemiological studies on unruptured cerebral aneurysms have reported that the larger the aneurysm, the higher the risk of rupture. However, many ruptured aneurysms are not large. Electrocardiography (ECG)-gated 3D-computed tomography angiography (4D-CTA) was used to detect pulsation in unruptured cerebral aneurysms. The differences in the clinical course of patients in whom pulsation was or was not detected were then evaluated.\n\n\nMETHODS\nForty-two patients with 62 unruptured cystiform cerebral aneurysms who underwent 4D-CTA and follow-up 3D-CTA more than 120 days later were studied. The tube voltage, tube current, and rotation speed were 120 kV, 270 mA, and 0.35 s/rot., respectively. ECG-gated reconstruction was performed, with the cardiac cycle divided into 20 phases. Patients with heart rates higher than 80 bpm were excluded, so 37 patients with 56 aneurysms were analyzed.\n\n\nRESULTS\nPulsation was detected in 20 of the 56 unruptured aneurysms. Of these 20 aneurysms, 6 showed a change in shape at the time of follow-up. Of the 36 aneurysms in which pulsation was not detected, 2 showed a change in shape at follow-up. There was no significant difference in the follow-up interval between the two groups. 
The aneurysms in which pulsation was detected were significantly more likely to show a change in shape (P = 0.04), with a higher odds ratio of 7.286.\n\n\nCONCLUSION\nUnruptured aneurysms in which pulsation was detected by 4D-CTA were more likely to show a change in shape at follow-up, suggesting that 4D-CTA may be useful for identifying aneurysms with a higher risk of rupture." }, { "pmid": "19859863", "title": "In-vivo quantification of wall motion in cerebral aneurysms from 2D cine phase contrast magnetic resonance images.", "abstract": "PURPOSE\nThe quantification of wall motion in cerebral aneurysms is of interest for the assessment of aneurysmal rupture risk, for providing boundary conditions for computational simulations and as a validation tool for theoretical models.\n\n\nMATERIALS AND METHODS\n2D cine phase contrast magnetic resonance imaging (2D pcMRI) in combination with quantitative magnetic resonance angiography (QMRA) was evaluated for measuring wall motion in 7 intracranial aneurysms. In each aneurysm, 2 (in one case 3) cross sections, oriented approximately perpendicular to each other, were measured.\n\n\nRESULTS\nThe maximum aneurysmal wall distention ranged from 0.16 mm to 1.6 mm (mean 0.67 mm), the maximum aneurysmal wall contraction was -1.91 mm to -0.34 mm (mean 0.94 mm), and the average wall displacement ranged from 0.04 mm to 0.31 mm (mean 0.15 mm). Statistically significant correlations between average wall displacement and the shape of inflow curves (p-value < 0.05) were found in 7 of 15 cross sections; statistically significant correlations between the displacement of the luminal boundary center point and the shape of inflow curves (p-value < 0.05) were found in 6 of 15 cross sections.\n\n\nCONCLUSION\n2D pcMRI in combination with QMRA is capable of visualizing and quantifying wall motion in cerebral aneurysms. However, application of this technique is currently restricted by its limited spatial resolution." }, { "pmid": "21998064", "title": "Cardiac cycle-related volume change in unruptured cerebral aneurysms: a detailed volume quantification study using 4-dimensional CT angiography.", "abstract": "BACKGROUND AND PURPOSE\nThe hemodynamic factors of aneurysms were recently evaluated using computational fluid dynamics in a static vessel model in an effort to understand the mechanisms of initiation and rupture of aneurysms. However, few reports have evaluated the dynamic wall motion of aneurysms due to the cardiac cycle. The objective of this study was to quantify cardiac cycle-related volume changes in aneurysms using 4-dimensional CT angiography.\n\n\nMETHODS\nFour-dimensional CT angiography was performed in 18 patients. Image data of 1 cardiac cycle were divided into 10 phases and the volume of the aneurysm was then quantified in each phase. These data were also compared with intracranial vessels of normal appearance.\n\n\nRESULTS\nThe observed cardiac cycle-related volume changes were in good agreement with the sizes of the aneurysms and normal vessels. The cardiac cycle-related volume changes of the intracranial aneurysms and intracranial normal arteries were 5.40%±4.17% and 4.20±2.04%, respectively, but these did not differ statistically (P=0.12).\n\n\nCONCLUSIONS\nWe successfully quantified the volume change in intracranial aneurysms and intracranial normal arteries in human subjects. 
The data may indicate that cardiac cycle-related volume changes do not differ between unruptured aneurysms and normal intracranial arteries, suggesting that the global integrity of an unruptured aneurysmal wall is not different from that of normal intracranial arteries." }, { "pmid": "27880805", "title": "Feasibility of Quantification of Intracranial Aneurysm Pulsation with 4D CTA with Manual and Computer-Aided Post-Processing.", "abstract": "BACKGROUND AND PURPOSE\nThe analysis of the pulsation of unruptured intracranial aneurysms might improve the assessment of their stability and risk of rupture. Pulsations can easily be concealed due to the small movements of the aneurysm wall, making post-processing highly demanding. We hypothesized that the quantification of aneurysm pulsation is technically feasible and can be improved by computer-aided post-processing.\n\n\nMATERIALS AND METHODS\nImages of 14 cerebral aneurysms were acquired with an ECG-triggered 4D CTA. Aneurysms were post-processed manually and computer-aided on a 3D model. Volume curves and random noise-curves were compared with the arterial pulse wave and volume curves were compared between both post-processing modalities.\n\n\nRESULTS\nThe aneurysm volume curves showed higher similarity with the pulse wave than the random curves (Hausdorff-distances 0.12 vs 0.25, p<0.01). Both post-processing methods did not differ in intra- (r = 0.45 vs r = 0.54, p>0.05) and inter-observer (r = 0.45 vs r = 0.54, p>0.05) reliability. Time needed for segmentation was significantly reduced in the computer-aided group (3.9 ± 1.8 min vs 20.8 ± 7.8 min, p<0.01).\n\n\nCONCLUSION\nOur results show pulsatile changes in a subset of the studied aneurysms with the final prove of underlying volume changes remaining unsettled. Semi-automatic post-processing significantly reduces post-processing time but cannot yet replace manual segmentation." }, { "pmid": "20651422", "title": "Wall motion estimation in intracranial aneurysms.", "abstract": "The quantification of wall motion in cerebral aneurysms is becoming important owing to its potential connection to rupture, and as a way to incorporate the effects of vascular compliance in computational fluid dynamics simulations. Most of papers report values obtained with experimental phantoms, simulated images or animal models, but the information for real patients is limited. In this paper, we have combined non-rigid registration with signal processing techniques to measure pulsation in real patients from high frame rate digital subtraction angiography. We have obtained physiological meaningful waveforms with amplitudes in the range 0 mm-0.3 mm for a population of 18 patients including ruptured and unruptured aneurysms. Statistically significant differences in pulsation were found according to the rupture status, in agreement with differences in biomechanical properties reported in the literature." }, { "pmid": "20346606", "title": "Intracranial aneurysm segmentation in 3D CT angiography: method and quantitative validation with and without prior noise filtering.", "abstract": "Intracranial aneurysm volume and shape are important factors for predicting rupture risk, for pre-surgical planning and for follow-up studies. To obtain these parameters, manual segmentation can be employed; however, this is a tedious procedure, which is prone to inter- and intra-observer variability. Therefore there is a need for an automated method, which is accurate, reproducible and reliable. 
This study aims to develop and validate an automated method for segmenting intracranial aneurysms in Computed Tomography Angiography (CTA) data. Also, it is investigated whether prior smoothing improves segmentation robustness and accuracy. The proposed segmentation method is implemented in the level set framework, more specifically Geodesic Active Surfaces, in which a surface is evolved to capture the aneurysmal wall via an energy minimization approach. The energy term is composed of three different image features, namely; intensity, gradient magnitude and intensity variance. The method requires minimal user interaction, i.e. a single seed point inside the aneurysm needs to be placed, based on which image intensity statistics of the aneurysm are derived and used in defining the energy term. The method has been evaluated on 15 aneurysms in 11 CTA data sets by comparing the results to manual segmentations performed by two expert radiologists. Evaluation measures were Similarity Index, Average Surface Distance and Volume Difference. The results show that the automated aneurysm segmentation method is reproducible, and performs in the range of inter-observer variability in terms of accuracy. Smoothing by nonlinear diffusion with appropriate parameter settings prior to segmentation, slightly improves segmentation accuracy." }, { "pmid": "22884403", "title": "Quantification of intracranial aneurysm morphodynamics from ECG-gated CT angiography.", "abstract": "RATIONALE AND OBJECTIVES\nAneurysm morphodynamics is potentially relevant for assessing aneurysm rupture risk. A method is proposed for automated quantification and visualization of intracranial aneurysm morphodynamics from electrocardiogram (ECG)-gated computed tomography angiography (CTA) data.\n\n\nMATERIALS AND METHODS\nA prospective study was performed in 19 aneurysms from 14 patients with diagnostic workup for recently discovered aneurysms (n = 15) or follow-up of untreated known aneurysms (n = 4). The study was approved by the Institutional Review Board of the hospital and written informed consent was obtained from each patient. An image postprocessing method was developed for quantifying aneurysm volume changes and visualizing local displacement of the aneurysmal wall over a heart cycle using multiphase ECG-gated (four-dimensional) CTA. Percentage volume changes over the heart cycle were determined for aneurysms, surrounding arteries, and the skull.\n\n\nRESULTS\nPulsation of the aneurysm and its surrounding vasculature during the heart cycle could be assessed from ECG-gated CTA data. The percentage aneurysmal volume change ranged from 3% to 18%.\n\n\nCONCLUSION\nECG-gated CTA can be used to study morphodynamics of intracranial aneurysms. The proposed image analysis method is capable of quantifying the volume changes and visualizing local displacement of the vascular structures over the cardiac cycle." 
}, { "pmid": "21520841", "title": "Dynamic estimation of three-dimensional cerebrovascular deformation from rotational angiography.", "abstract": "PURPOSE\nThe objective of this study is to investigate the feasibility of detecting and quantifying 3D cerebrovascular wall motion from a single 3D rotational x-ray angiography (3DRA) acquisition within a clinically acceptable time and computing from the estimated motion field for the further biomechanical modeling of the cerebrovascular wall.\n\n\nMETHODS\nThe whole motion cycle of the cerebral vasculature is modeled using a 4D B-spline transformation, which is estimated from a 4D to 2D + t image registration framework. The registration is performed by optimizing a single similarity metric between the entire 2D + t measured projection sequence and the corresponding forward projections of the deformed volume at their exact time instants. The joint use of two acceleration strategies, together with their implementation on graphics processing units, is also proposed so as to reach computation times close to clinical requirements. For further characterizing vessel wall properties, an approximation of the wall thickness changes is obtained through a strain calculation.\n\n\nRESULTS\nEvaluation on in silico and in vitro pulsating phantom aneurysms demonstrated an accurate estimation of wall motion curves. In general, the error was below 10% of the maximum pulsation, even in the situation when substantial inhomogeneous intensity pattern was present. Experiments on in vivo data provided realistic aneurysm and vessel wall motion estimates, whereas in regions where motion was neither visible nor anatomically possible, no motion was detected. The use of the acceleration strategies enabled completing the estimation process for one entire cycle in 5-10 min without degrading the overall performance. The strain map extracted from our motion estimation provided a realistic deformation measure of the vessel wall.\n\n\nCONCLUSIONS\nThe authors' technique has demonstrated that it can provide accurate and robust 4D estimates of cerebrovascular wall motion within a clinically acceptable time, although it has to be applied to a larger patient population prior to possible wide application to routine endovascular procedures. In particular, for the first time, this feasibility study has shown that in vivo cerebrovascular motion can be obtained intraprocedurally from a 3DRA acquisition. Results have also shown the potential of performing strain analysis using this imaging modality, thus making possible for the future modeling of biomechanical properties of the vascular wall." }, { "pmid": "17538143", "title": "Pulsatility imaging of saccular aneurysm model by 64-slice CT with dynamic multiscan technique.", "abstract": "The feasibility of imaging pulsatility in an aneurysm model with the high-resolution dynamic multiscan technique of 64-slice computed tomography (CT) was studied. A pulsatile aneurysm phantom was constructed and imaged with dynamic multiscan technique. The aneurysm model was filled with iodinated contrast material (250 Hounsfield Units) and was scanned with use of a gantry rotation time of 0.33 seconds, slice thickness of 1.2 mm, effective coverage of 24 mm, and total imaging time of 4 seconds. Images were reconstructed at 50-msec intervals. The visualization of wall motion was qualitatively evaluated by direct comparison of four-dimensional images versus phantom motion. 
Pulsatility imaging without perceptible artifact or need for cardiac gating was achieved with the use of this technique." }, { "pmid": "2961104", "title": "Variations in middle cerebral artery blood flow investigated with noninvasive transcranial blood velocity measurements.", "abstract": "Observations on blood velocity in the middle cerebral artery using transcranial Doppler ultrasound and on the ipsilateral internal carotid artery flow volume were obtained during periods of transient, rapid blood flow variations in 7 patients. Five patients were investigated after carotid endarterectomy. A further 2 patients having staged carotid endarterectomy and open heart surgery were investigated during nonpulsatile cardiopulmonary bypass. The patient selection permitted the assumption that middle cerebral artery flow remained proportional to internal carotid artery flow. The integrated time-mean values from consecutive 5-second periods were computed. The arithmetic mean internal carotid artery flow varied from 167 to 399 ml/min in individual patients, with individual ranges between +/- 15% and +/- 35% of the mean flow. The mean middle cerebral artery blood velocity varied from 32 to 78 cm/sec. The relation between flow volume and blood velocity was nearly linear under these conditions. Normalization of the data as percent of the individual arithmetic means permitted a composite analysis of data from all patients. Linear regression of normalized blood velocity (V') on normalized flow volume (Q') showed V' = 1.05 Q' - 5.08 (r2 = 0.898)." }, { "pmid": "8733959", "title": "Use of color power transcranial Doppler sonography to monitor aneurysmal coiling.", "abstract": "We describe the use of a recently developed technique in the field of color Doppler sonography, called power Doppler or color Doppler energy, that produces better images of the intracranial arteries than those obtained by conventional color Doppler techniques. Color Doppler energy makes it possible to identify aneurysms and their relationship to the parent artery, thus allowing one to observe how much of an aneurysm remains patent and the condition of adjacent arteries during endovascular treatment. We describe the use of this technique during the insertion of Guglielmi detachable coils into aneurysms and during subsequent follow-up examination." }, { "pmid": "15343426", "title": "Prediction of impending rupture in aneurysms using 4D-CTA: histopathological verification of a real-time minimally invasive tool in unruptured aneurysms.", "abstract": "The authors describe the use of a 4D-CT angiogram to predict impending rupture in intact aneurysms, as a real-time, less invasive imaging technique. Histopathological verification and immunostaining of the bleb site performed on the study population reveals the significant predictive value of this tool. The point of maximum amplitude of pulsation of the aneurysm wall in unison with the RR interval of the electrocardiogram determines the potential rupture point. This helps in prioritizing the intervention for unruptured aneurysm cases, provides an effective screening of the high-risk population, and aids preoperative planning of clip application." }, { "pmid": "17354802", "title": "CFD analysis incorporating the influence of wall motion: application to intracranial aneurysms.", "abstract": "Haemodynamics, and in particular wall shear stress, is thought to play a critical role in the progression and rupture of intracranial aneurysms. 
A novel method is presented that combines image-based wall motion estimation obtained through non-rigid registration with computational fluid dynamics (CFD) simulations in order to provide realistic intra-aneurysmal flow patterns and understand the effects of deforming walls on the haemodynamic patterns. In contrast to previous approaches, which assume rigid walls or ad hoc elastic parameters to perform the CFD simulations, wall compliance has been included in this study through the imposition of measured wall motions. This circumvents the difficulties in estimating personalized elasticity properties. Although variations in the aneurysmal haemodynamics were observed when incorporating the wall motion, the overall characteristics of the wall shear stress distribution do not seem to change considerably. Further experiments with more cases will be required to establish the clinical significance of the observed variations." } ]
Computational and Structural Biotechnology Journal
30069284
PMC6068317
10.1016/j.csbj.2018.06.003
Blockchain Technology for Healthcare: Facilitating the Transition to Patient-Driven Interoperability
Interoperability in healthcare has traditionally been focused around data exchange between business entities, for example, different hospital systems. However, there has been a recent push towards patient-driven interoperability, in which health data exchange is patient-mediated and patient-driven. Patient-centered interoperability, however, brings with it new challenges and requirements around security and privacy, technology, incentives, and governance that must be addressed for this type of data sharing to succeed at scale. In this paper, we look at how blockchain technology might facilitate this transition through five mechanisms: (1) digital access rules, (2) data aggregation, (3) data liquidity, (4) patient identity, and (5) data immutability. We then look at barriers to blockchain-enabled patient-driven interoperability, specifically clinical data transaction volume, privacy and security, patient engagement, and incentives. We conclude by noting that while patient-driven interoperability is an exciting trend in healthcare, given these challenges, it remains to be seen whether blockchain can facilitate the transition from institution-centric to patient-centric data sharing.
Related Work

Blockchain's potential to enable better health data sharing and ownership has been described previously by several authors. One approach is to use a public or private blockchain to store the clinical data itself. For example, Yue et al. described a "Healthcare Data Gateway" (HDG) that would enable patients to manage their own health data stored on a private blockchain [44]. Similarly, Ivan described a public blockchain implementation in which healthcare data is encrypted but stored publicly, creating a blockchain-based Personal Health Record [45]. MedChain is another example, in which a permissioned network of medication stakeholders (including the patient) could be used to facilitate medication-specific data sharing between patients, hospitals, and pharmacies [46]. While we expect that a model storing actual clinical data on a blockchain, whether permissioned or public, would raise substantial privacy and scalability concerns, it remains important to understand the privacy and security implications of on-chain data storage.

Another approach leverages blockchain not to store the actual clinical data but to facilitate its management and governance. Zyskind et al. described a general-purpose decentralized access and control manager for encrypted off-chain data; the blockchain layer enforces access control policies, but the data is stored off chain [47]. In the healthcare space, FHIRChain is a smart-contract-based system for exchanging health data based on the FHIR standard [48]; clinical data is stored off chain, and the blockchain itself stores encrypted metadata that serve as pointers to the primary data source (such as an EHR) [49]. Azaria et al. introduced MedRec, which uses a permissioned blockchain network to facilitate data sharing and authentication; MedRec includes a novel proof-of-work incentive method built around access to anonymized medical data (for research, for example) [50]. Finally, Dubovitskaya et al. also propose a permissioned blockchain (focused on oncologic care) that leverages off-chain cloud storage for clinical data and uses the blockchain to manage consent and authorization [51]. Both MedRec and Dubovitskaya's system have been prototyped but do not appear to be operational.

Additionally, in the drive towards patient-driven interoperability, blockchain may not be the only solution; private, vendor-based solutions may also take hold. For example, Apple recently announced a product that would allow patients to pull their clinical EHR data from participating institutions using APIs (based on FHIR and the Argonaut project specification) [52]. Similarly, Sync 4 Science is a pilot effort to allow patients to contribute their EHR data to research efforts, also through standard APIs, using an authorization workflow (i.e., the data never need to be stored or managed by the patient individually) [53]. Though the idea of a digital Personal Health Record has been described for decades, it has gained noticeable traction from a technology and regulatory perspective in recent years.
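To make the recurring architectural pattern in these systems concrete (clinical data kept off chain, while the ledger holds only a hashed pointer plus patient-granted access rules), the following is a minimal Python sketch. It is a toy simulation under stated assumptions, not the actual MedRec, FHIRChain, or Zyskind et al. implementation; all class names, identifiers, and the example FHIR-like payload are hypothetical.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class LedgerEntry:
    """A block-like record: who may read a document, and where/what it is."""
    patient_id: str
    doc_hash: str        # hash of the off-chain clinical document
    storage_uri: str     # pointer to the off-chain store (e.g., an EHR endpoint)
    authorized: set = field(default_factory=set)
    timestamp: float = field(default_factory=time.time)

class OffChainStore:
    """Stands in for an EHR or cloud store holding the actual clinical data."""
    def __init__(self):
        self._docs = {}
    def put(self, uri, document):
        self._docs[uri] = document
    def get(self, uri):
        return self._docs[uri]

class AccessLedger:
    """Append-only list of entries that enforces patient-granted access rules."""
    def __init__(self):
        self.entries = []
    def register(self, patient_id, document, storage_uri, store):
        doc_hash = hashlib.sha256(json.dumps(document, sort_keys=True).encode()).hexdigest()
        store.put(storage_uri, document)   # clinical data stays off chain
        entry = LedgerEntry(patient_id, doc_hash, storage_uri)
        self.entries.append(entry)         # only the pointer and hash go "on chain"
        return entry
    def grant(self, entry, requester_id):
        entry.authorized.add(requester_id)  # patient-mediated access rule
    def read(self, entry, requester_id, store):
        if requester_id != entry.patient_id and requester_id not in entry.authorized:
            raise PermissionError("access not granted by patient")
        document = store.get(entry.storage_uri)
        # Integrity check: the off-chain data must still match the on-chain hash.
        current = hashlib.sha256(json.dumps(document, sort_keys=True).encode()).hexdigest()
        if current != entry.doc_hash:
            raise ValueError("off-chain document no longer matches its ledger hash")
        return document

# Usage: the patient registers a record, then grants a second clinic read access.
store, ledger = OffChainStore(), AccessLedger()
note = {"resourceType": "Observation", "code": "blood-pressure", "value": "128/82"}
entry = ledger.register("patient-001", note, "ehr://hospital-a/obs/42", store)
ledger.grant(entry, "clinic-b")
print(ledger.read(entry, "clinic-b", store))
```

The hash check stands in for the immutability property discussed in the abstract, while the grant/read logic mirrors the access-control policies that systems such as Zyskind et al.'s manager and FHIRChain delegate to the ledger; the clinical payload itself never touches the chain.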
[ "26488690", "24629653", "20442154", "25954386", "25693009", "28720625", "27301748", "11687563", "27288854", "26911829", "29016974", "23304405", "26911821", "24551372" ]
[ { "pmid": "24629653", "title": "The impact of interoperability of electronic health records on ambulatory physician practices: a discrete-event simulation study.", "abstract": "BACKGROUND\nThe effect of health information technology (HIT) on efficiency and workload among clinical and nonclinical staff has been debated, with conflicting evidence about whether electronic health records (EHRs) increase or decrease effort. None of this paper to date, however, examines the effect of interoperability quantitatively using discrete event simulation techniques.\n\n\nOBJECTIVE\nTo estimate the impact of EHR systems with various levels of interoperability on day-to-day tasks and operations of ambulatory physician offices.\n\n\nMETHODS\nInterviews and observations were used to collect workflow data from 12 adult primary and specialty practices. A discrete event simulation model was constructed to represent patient flows and clinical and administrative tasks of physicians and staff members.\n\n\nRESULTS\nHigh levels of EHR interoperability were associated with reduced time spent by providers on four tasks: preparing lab reports, requesting lab orders, prescribing medications, and writing referrals. The implementation of an EHR was associated with less time spent by administrators but more time spent by physicians, compared with time spent at paper-based practices. In addition, the presence of EHRs and of interoperability did not significantly affect the time usage of registered nurses or the total visit time and waiting time of patients.\n\n\nCONCLUSION\nThis paper suggests that the impact of using HIT on clinical and nonclinical staff work efficiency varies, however, overall it appears to improve time efficiency more for administrators than for physicians and nurses." }, { "pmid": "20442154", "title": "A preliminary look at duplicate testing associated with lack of electronic health record interoperability for transferred patients.", "abstract": "Duplication of medical testing results in a financial burden to the healthcare system. Authors undertook a retrospective review of duplicate testing on patients receiving coordinated care across two institutions, each with its own electronic medical record system. In order to determine whether duplicate testing occurred and if such testing was clinically indicated, authors analyzed records of 85 patients transferred from one site to the other between January 1, 2006 and December 31, 2007. Duplication of testing (repeat within 12 hours) was found in 32% of the cases examined; 20% of cases had at least one duplicate test not clinically indicated. While previous studies document that inaccessibility of paper records leads to duplicate testing when patients are transferred between care facilities, the current study suggests that incomplete electronic record transfer among incompatible electronic medical record systems can also lead to potentially costly duplicate testing behaviors. The authors believe that interoperable systems with integrated decision support could assist in minimizing duplication of testing at time of patient transfers." }, { "pmid": "25954386", "title": "Applications of health information exchange information to public health practice.", "abstract": "Increased information availability, timeliness, and comprehensiveness through health information exchange (HIE) can support public health practice. 
The potential benefits to disease monitoring, disaster response, and other public health activities served as an important justification for the US' investments in HIE. After several years of HIE implementation and funding, we sought to determine if any of the anticipated benefits of exchange participation were accruing to state and local public health practitioners participating in five different exchanges. Using qualitative interviews and template analyses, we identified public health efforts and activities that were improved by participation in HIE. HIE supported public health activities consistent with expectations in the literature. However, no single department realized all the potential benefits of HIE identified. These findings suggest ways to improve HIE usage in public health." }, { "pmid": "28720625", "title": "Developing Electronic Health Record (EHR) Strategies Related to Health Center Patients' Social Determinants of Health.", "abstract": "BACKGROUND\n\"Social determinants of heath\" (SDHs) are nonclinical factors that profoundly affect health. Helping community health centers (CHCs) document patients' SDH data in electronic health records (EHRs) could yield substantial health benefits, but little has been reported about CHCs' development of EHR-based tools for SDH data collection and presentation.\n\n\nMETHODS\nWe worked with 27 diverse CHC stakeholders to develop strategies for optimizing SDH data collection and presentation in their EHR, and approaches for integrating SDH data collection and the use of those data (eg, through referrals to community resources) into CHC workflows.\n\n\nRESULTS\nWe iteratively developed a set of EHR-based SDH data collection, summary, and referral tools for CHCs. We describe considerations that arose while developing the tools and present some preliminary lessons learned.\n\n\nCONCLUSION\nStandardizing SDH data collection and presentation in EHRs could lead to improved patient and population health outcomes in CHCs and other care settings. We know of no previous reports of processes used to develop similar tools. This article provides an example of 1 such process. Lessons from our process may be useful to health care organizations interested in using EHRs to collect and act on SDH data. Research is needed to empirically test the generalizability of these lessons." }, { "pmid": "27301748", "title": "Health information exchange policies of 11 diverse health systems and the associated impact on volume of exchange.", "abstract": "BACKGROUND\nProvider organizations increasingly have the ability to exchange patient health information electronically. Organizational health information exchange (HIE) policy decisions can impact the extent to which external information is readily available to providers, but this relationship has not been well studied.\n\n\nOBJECTIVE\nOur objective was to examine the relationship between electronic exchange of patient health information across organizations and organizational HIE policy decisions. We focused on 2 key decisions: whether to automatically search for information from other organizations and whether to require HIE-specific patient consent.\n\n\nMETHODS\nWe conducted a retrospective time series analysis of the effect of automatic querying and the patient consent requirement on the monthly volume of clinical summaries exchanged. 
We could not assess degree of use or usefulness of summaries, organizational decision-making processes, or generalizability to other vendors.\n\n\nRESULTS\nBetween 2013 and 2015, clinical summary exchange volume increased by 1349% across 11 organizations. Nine of the 11 systems were set up to enable auto-querying, and auto-querying was associated with a significant increase in the monthly rate of exchange (P = .006 for change in trend). Seven of the 11 organizations did not require patient consent specifically for HIE, and these organizations experienced a greater increase in volume of exchange over time compared to organizations that required consent.\n\n\nCONCLUSIONS\nAutomatic querying and limited consent requirements are organizational HIE policy decisions that impact the volume of exchange, and ultimately the information available to providers to support optimal care. Future efforts to ensure effective HIE may need to explicitly address these factors." }, { "pmid": "11687563", "title": "The HL7 Clinical Document Architecture.", "abstract": "Many people know of Health Level 7 (HL7) as an organization that creates health care messaging standards. Health Level 7 is also developing standards for the representation of clinical documents (such as discharge summaries and progress notes). These document standards make up the HL7 Clinical Document Architecture (CDA). The HL7 CDA Framework, release 1.0, became an ANSI-approved HL7 standard in November 2000. This article presents the approach and objectives of the CDA, along with a technical overview of the standard. The CDA is a document markup standard that specifies the structure and semantics of clinical documents. A CDA document is a defined and complete information object that can include text, images, sounds, and other multimedia content. The document can be sent inside an HL7 message and can exist independently, outside a transferring message. The first release of the standard has attempted to fill an important gap by addressing common and largely narrative clinical notes. It deliberately leaves out certain advanced and complex semantics, both to foster broad implementation and to give time for these complex semantics to be fleshed out within HL7. Being a part of the emerging HL7 version 3 family of standards, the CDA derives its semantic content from the shared HL7 Reference Information Model and is implemented in Extensible Markup Language. The HL7 mission is to develop standards that enable semantic interoperability across all platforms. The HL7 version 3 family of standards, including the CDA, are moving us closer to the realization of this vision." }, { "pmid": "26911829", "title": "SMART on FHIR: a standards-based, interoperable apps platform for electronic health records.", "abstract": "OBJECTIVE\nIn early 2010, Harvard Medical School and Boston Children's Hospital began an interoperability project with the distinctive goal of developing a platform to enable medical applications to be written once and run unmodified across different healthcare IT systems. The project was called Substitutable Medical Applications and Reusable Technologies (SMART).\n\n\nMETHODS\nWe adopted contemporary web standards for application programming interface transport, authorization, and user interface, and standard medical terminologies for coded data. In our initial design, we created our own openly licensed clinical data models to enforce consistency and simplicity. 
During the second half of 2013, we updated SMART to take advantage of the clinical data models and the application-programming interface described in a new, openly licensed Health Level Seven draft standard called Fast Health Interoperability Resources (FHIR). Signaling our adoption of the emerging FHIR standard, we called the new platform SMART on FHIR.\n\n\nRESULTS\nWe introduced the SMART on FHIR platform with a demonstration that included several commercial healthcare IT vendors and app developers showcasing prototypes at the Health Information Management Systems Society conference in February 2014. This established the feasibility of SMART on FHIR, while highlighting the need for commonly accepted pragmatic constraints on the base FHIR specification.\n\n\nCONCLUSION\nIn this paper, we describe the creation of SMART on FHIR, relate the experience of the vendors and developers who built SMART on FHIR prototypes, and discuss some challenges in going from early industry prototyping to industry-wide production use." }, { "pmid": "29016974", "title": "Blockchain distributed ledger technologies for biomedical and health care applications.", "abstract": "OBJECTIVES\nTo introduce blockchain technologies, including their benefits, pitfalls, and the latest applications, to the biomedical and health care domains.\n\n\nTARGET AUDIENCE\nBiomedical and health care informatics researchers who would like to learn about blockchain technologies and their applications in the biomedical/health care domains.\n\n\nSCOPE\nThe covered topics include: (1) introduction to the famous Bitcoin crypto-currency and the underlying blockchain technology; (2) features of blockchain; (3) review of alternative blockchain technologies; (4) emerging nonfinancial distributed ledger technologies and applications; (5) benefits of blockchain for biomedical/health care applications when compared to traditional distributed databases; (6) overview of the latest biomedical/health care applications of blockchain technologies; and (7) discussion of the potential challenges and proposed solutions of adopting blockchain technologies in biomedical/health care domains." }, { "pmid": "23304405", "title": "Duplicate patient records--implication for missed laboratory results.", "abstract": "INTRODUCTION\nAlthough duplicate records are a potential patient safety hazard, the actual clinical harm associated with these records has never been studied. We hypothesized that duplicate records will be associated with missed abnormal laboratory results.\n\n\nMETHODS\nA retrospective, matched, cohort study of 904 events of abnormal laboratory result (HgbA1c, TSH, Vitamin B(12), LDL). We compared the rates of missed laboratory results between patients with duplicate and non-duplicate records from the ambulatory clinics. Cases were matched according to test and ordering physician.\n\n\nRESULTS\nDuplicate records were associated with a higher rate of missed laboratory results (OR=1.44, 95% CI 1.1-1.9). Other factors associated with missed lab results were tests performed as screening (OR=2.22, 95% CI 1.4-3.4), and older age (OR=1.15 for every decade, 95% CI 1.01-1.2). In most cases test results were reported into the main patient record.\n\n\nDISCUSSION\nDuplicate records were associated with a higher risk of missing important laboratory results." 
}, { "pmid": "26911821", "title": "Harnessing person-generated health data to accelerate patient-centered outcomes research: the Crohn's and Colitis Foundation of America PCORnet Patient Powered Research Network (CCFA Partners).", "abstract": "The Crohn's and Colitis Foundation of America Partners Patient-Powered Research Network (PPRN) seeks to advance and accelerate comparative effectiveness and translational research in inflammatory bowel diseases (IBDs). Our IBD-focused PCORnet PPRN has been designed to overcome the major obstacles that have limited patient-centered outcomes research in IBD by providing the technical infrastructure, patient governance, and patient-driven functionality needed to: 1) identify, prioritize, and undertake a patient-centered research agenda through sharing person-generated health data; 2) develop and test patient and provider-focused tools that utilize individual patient data to improve health behaviors and inform health care decisions and, ultimately, outcomes; and 3) rapidly disseminate new knowledge to patients, enabling them to improve their health. The Crohn's and Colitis Foundation of America Partners PPRN has fostered the development of a community of citizen scientists in IBD; created a portal that will recruit, retain, and engage members and encourage partnerships with external scientists; and produced an efficient infrastructure for identifying, screening, and contacting network members for participation in research." }, { "pmid": "24551372", "title": "Optimized dual threshold entity resolution for electronic health record databases--training set size and active learning.", "abstract": "Clinical databases may contain several records for a single patient. Multiple general entity-resolution algorithms have been developed to identify such duplicate records. To achieve optimal accuracy, algorithm parameters must be tuned to a particular dataset. The purpose of this study was to determine the required training set size for probabilistic, deterministic and Fuzzy Inference Engine (FIE) algorithms with parameters optimized using the particle swarm approach. Each algorithm classified potential duplicates into: definite match, non-match and indeterminate (i.e., requires manual review). Training sets size ranged from 2,000-10,000 randomly selected record-pairs. We also evaluated marginal uncertainty sampling for active learning. Optimization reduced manual review size (Deterministic 11.6% vs. 2.5%; FIE 49.6% vs. 1.9%; and Probabilistic 10.5% vs. 3.5%). FIE classified 98.1% of the records correctly (precision=1.0). Best performance required training on all 10,000 randomly-selected record-pairs. Active learning achieved comparable results with 3,000 records. Automated optimization is effective and targeted sampling can reduce the required training set size." } ]
BMC Medical Informatics and Decision Making
30066656
PMC6069291
10.1186/s12911-018-0633-7
Query-constraint-based mining of association rules for exploratory analysis of clinical datasets in the National Sleep Research Resource
Background
Association Rule Mining (ARM) has been widely used by biomedical researchers to perform exploratory data analysis and uncover potential relationships among variables in biomedical datasets. However, when biomedical datasets are high-dimensional, performing ARM on such datasets will yield a large number of rules, many of which may be uninteresting. Especially for imbalanced datasets, performing ARM directly would result in uninteresting rules that are dominated by certain variables that capture general characteristics.

Methods
We introduce a query-constraint-based ARM (QARM) approach for exploratory analysis of multiple, diverse clinical datasets in the National Sleep Research Resource (NSRR). QARM enables rule mining on a subset of data items satisfying a query constraint. We first perform a series of data-preprocessing steps including variable selection, merging semantically similar variables, combining multiple-visit data, and data transformation. We use the Top-k Non-Redundant (TNR) ARM algorithm to generate association rules. Then we remove general and subsumed rules so that unique, non-redundant rules result for a particular query constraint.

Results
Applying QARM to five datasets from NSRR yielded a total of 2517 association rules with a minimum confidence of 60% (using the top 100 rules for each query constraint). The results show that merging similar variables could avoid uninteresting rules. Also, removing general and subsumed rules resulted in a more concise and interesting set of rules.

Conclusions
QARM shows the potential to support exploratory analysis of large biomedical datasets. It is also shown to be a useful method for reducing the number of uninteresting association rules generated from imbalanced datasets. A preliminary literature-based analysis showed that some association rules have supporting evidence from the biomedical literature, while others without literature-based evidence may serve as candidates for new hypotheses to explore and investigate. Together with literature-based evidence, the association rules mined over the NSRR clinical datasets may be used to support clinical decisions for sleep-related problems.

Electronic supplementary material
The online version of this article (10.1186/s12911-018-0633-7) contains supplementary material, which is available to authorized users.
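For concreteness, the sketch below illustrates in Python two of the preprocessing steps named in the abstract: merging semantically similar variables and transforming continuous measurements into categorical items suitable for rule mining. It is an illustrative example rather than the authors' pipeline; the synonym map, variable names, and cut-offs are hypothetical.

```python
import pandas as pd

# Hypothetical synonym map: dataset-specific column names -> one harmonized variable.
SYNONYMS = {
    "bmi_visit1": "bmi",
    "body_mass_index": "bmi",
    "sbp": "systolic_bp",
    "systolic_blood_pressure": "systolic_bp",
}

# Hypothetical discretization rules turning continuous values into categorical items.
BINS = {
    "bmi": [(0, 25, "bmi=normal"), (25, 30, "bmi=overweight"), (30, 100, "bmi=obese")],
    "systolic_bp": [(0, 120, "sbp=normal"), (120, 140, "sbp=elevated"), (140, 300, "sbp=high")],
}

def harmonize(df: pd.DataFrame) -> pd.DataFrame:
    """Rename semantically similar columns; average any duplicates (e.g., multiple visits)."""
    renamed = df.rename(columns=SYNONYMS)
    return renamed.T.groupby(level=0).mean().T

def to_transactions(df: pd.DataFrame) -> list:
    """Turn each participant row into a set of categorical items for rule mining."""
    transactions = []
    for _, row in df.iterrows():
        items = set()
        for var, rules in BINS.items():
            value = row.get(var)
            if value is None or pd.isna(value):
                continue
            for low, high, label in rules:
                if low <= value < high:
                    items.add(label)
        transactions.append(items)
    return transactions

raw = pd.DataFrame({"bmi_visit1": [23.0, 31.5], "sbp": [118, 150]})
print(to_transactions(harmonize(raw)))
# e.g. [{'bmi=normal', 'sbp=normal'}, {'bmi=obese', 'sbp=high'}] (set ordering may vary)
```

Once every participant is represented as a set of such items, a standard ARM algorithm (the paper uses TNR) can be run over the resulting transactions.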
Distinction with related work

ARM has been widely applied to biomedical datasets for data-driven knowledge discovery. However, exploratory ARM based on a particular query constraint has rarely been investigated. QARM allows researchers to perform exploratory analysis on a subset of data of interest by composing specific query criteria to filter out irrelevant data.

The heuristic of our approach is to some extent similar to that of traditional constraint-based mining [29], which enables users to specify constraints that confine the search space. In another related work, Kubat et al. [30] presented an approach that converts a market-basket database into an itemset tree to answer targeted association queries quickly. Our approach differs from other constraint-based mining approaches [29] and targeted association querying [30] in that we apply the query constraint directly to the input data before the mining process starts, rather than applying it to the output rules or during mining. Another important distinction is that, unlike approaches that always include the constraint in the mined rules, the rules mined by our approach do not contain the query constraint itself. Although one motivation behind QARM is to reduce the number of uninteresting rules generated from an imbalanced dataset, it is not intended to address the dataset imbalance itself. To the best of our knowledge, constraint-based mining has not previously been employed for this rule-reduction purpose. Furthermore, in terms of the datasets used, this is the first rule-mining-based work analyzing the NSRR datasets.

We performed a preliminary study on query-constraint-based ARM in NSRR [31], which motivated this work. However, in [31] we did not perform any post-processing on the results, which therefore contained many general and subsumed rules. To address this issue, in this work we introduced two post-processing steps to remove such rules so that a concise, interesting rule set is provided as the output for a query. As Table 5 shows, a large portion of the rules were removed by these two steps. In addition, we performed a literature survey to validate a random sample of the rules obtained.
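To make the distinction above concrete, here is a minimal, self-contained Python sketch of the query-constraint idea: the constraint filters the input transactions before mining and is excluded from the mined rules, and general or subsumed rules are pruned afterwards. This is an illustration under simplifying assumptions, not the authors' implementation or the TNR algorithm; the item names, thresholds, and helper functions are hypothetical.

```python
from itertools import combinations

def filter_by_constraint(transactions, constraint):
    """Keep only transactions satisfying the constraint, then drop the constraint item itself."""
    return [t - {constraint} for t in transactions if constraint in t]

def mine_rules(transactions, min_support=0.3, min_confidence=0.6, max_antecedent=2):
    """Naive enumeration of simple rules (antecedent -> single consequent item)."""
    n = len(transactions)
    items = sorted(set().union(*transactions)) if transactions else []

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    rules = []
    for size in range(1, max_antecedent + 1):
        for antecedent in combinations(items, size):
            a = frozenset(antecedent)
            sup_a = support(a)
            if sup_a < min_support:
                continue
            for consequent in items:
                if consequent in a:
                    continue
                sup_rule = support(a | {consequent})
                confidence = sup_rule / sup_a
                if sup_rule >= min_support and confidence >= min_confidence:
                    rules.append((a, consequent, round(confidence, 2)))
    return rules

def remove_subsumed(rules):
    """Drop a rule if a strictly more general rule has the same consequent and >= confidence."""
    kept = []
    for a, c, conf in rules:
        subsumed = any(c == c2 and a2 < a and conf2 >= conf for a2, c2, conf2 in rules)
        if not subsumed:
            kept.append((a, c, conf))
    return kept

# Hypothetical NSRR-like transactions; the query constraint is "diagnosis=hypertension".
data = [
    {"diagnosis=hypertension", "bmi=obese", "snoring=yes", "sleep_apnea=yes"},
    {"diagnosis=hypertension", "bmi=obese", "snoring=yes"},
    {"diagnosis=hypertension", "bmi=normal", "snoring=no"},
    {"bmi=normal", "snoring=no"},
]
constrained = filter_by_constraint(data, "diagnosis=hypertension")
for antecedent, consequent, confidence in remove_subsumed(mine_rules(constrained)):
    print(set(antecedent), "=>", consequent, "confidence", confidence)
```

Excluding the constraint item before mining is what keeps it from dominating every rule, which is the behavior contrasted above with applying constraints to the output rules or during the mining process.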
[ "16702590", "22549152", "27070134", "16617622", "2474099", "23795809", "21516306", "26904689", "18207451", "25738481", "7845427", "25265976", "26417808", "18664607", "23904952", "28207279", "24290900", "24242190", "6338085", "29581804", "19846222", "16754527", "26062915", "14985157", "19440241", "17679026", "18191733", "11379468", "11822535", "2892881" ]
[ { "pmid": "16702590", "title": "Systematic review: impact of health information technology on quality, efficiency, and costs of medical care.", "abstract": "BACKGROUND\nExperts consider health information technology key to improving efficiency and quality of health care.\n\n\nPURPOSE\nTo systematically review evidence on the effect of health information technology on quality, efficiency, and costs of health care.\n\n\nDATA SOURCES\nThe authors systematically searched the English-language literature indexed in MEDLINE (1995 to January 2004), the Cochrane Central Register of Controlled Trials, the Cochrane Database of Abstracts of Reviews of Effects, and the Periodical Abstracts Database. We also added studies identified by experts up to April 2005.\n\n\nSTUDY SELECTION\nDescriptive and comparative studies and systematic reviews of health information technology.\n\n\nDATA EXTRACTION\nTwo reviewers independently extracted information on system capabilities, design, effects on quality, system acquisition, implementation context, and costs.\n\n\nDATA SYNTHESIS\n257 studies met the inclusion criteria. Most studies addressed decision support systems or electronic health records. Approximately 25% of the studies were from 4 academic institutions that implemented internally developed systems; only 9 studies evaluated multifunctional, commercially developed systems. Three major benefits on quality were demonstrated: increased adherence to guideline-based care, enhanced surveillance and monitoring, and decreased medication errors. The primary domain of improvement was preventive health. The major efficiency benefit shown was decreased utilization of care. Data on another efficiency measure, time utilization, were mixed. Empirical cost data were limited.\n\n\nLIMITATIONS\nAvailable quantitative research was limited and was done by a small number of institutions. Systems were heterogeneous and sometimes incompletely described. Available financial and contextual data were limited.\n\n\nCONCLUSIONS\nFour benchmark institutions have demonstrated the efficacy of health information technologies in improving quality and efficiency. Whether and how other institutions can achieve similar benefits, and at what costs, are unclear." }, { "pmid": "22549152", "title": "Mining electronic health records: towards better research applications and clinical care.", "abstract": "Clinical data describing the phenotypes and treatment of patients represents an underused data source that has much greater research potential than is currently realized. Mining of electronic health records (EHRs) has the potential for establishing new patient-stratification principles and for revealing unknown disease correlations. Integrating EHR data with genetic data will also give a finer understanding of genotype-phenotype relationships. However, a broad range of ethical, legal and technical reasons currently hinder the systematic deposition of these data in EHRs and their mining. Here, we consider the potential for furthering medical research and clinical care using EHR data and the challenges that must be overcome before this is a reality." }, { "pmid": "27070134", "title": "Scaling Up Scientific Discovery in Sleep Medicine: The National Sleep Research Resource.", "abstract": "Professional sleep societies have identified a need for strategic research in multiple areas that may benefit from access to and aggregation of large, multidimensional datasets. 
Technological advances provide opportunities to extract and analyze physiological signals and other biomedical information from datasets of unprecedented size, heterogeneity, and complexity. The National Institutes of Health has implemented a Big Data to Knowledge (BD2K) initiative that aims to develop and disseminate state of the art big data access tools and analytical methods. The National Sleep Research Resource (NSRR) is a new National Heart, Lung, and Blood Institute resource designed to provide big data resources to the sleep research community. The NSRR is a web-based data portal that aggregates, harmonizes, and organizes sleep and clinical data from thousands of individuals studied as part of cohort studies or clinical trials and provides the user a suite of tools to facilitate data exploration and data visualization. Each deidentified study record minimally includes the summary results of an overnight sleep study; annotation files with scored events; the raw physiological signals from the sleep record; and available clinical and physiological data. NSRR is designed to be interoperable with other public data resources such as the Biologic Specimen and Data Repository Information Coordinating Center Demographics (BioLINCC) data and analyzed with methods provided by the Research Resource for Complex Physiological Signals (PhysioNet). This article reviews the key objectives, challenges and operational solutions to addressing big data opportunities for sleep research in the context of the national sleep research agenda. It provides information to facilitate further interactions of the user community with NSRR, a community resource." }, { "pmid": "16617622", "title": "Association rule discovery with the train and test approach for heart disease prediction.", "abstract": "Association rules represent a promising technique to improve heart disease prediction. Unfortunately, when association rules are applied on a medical data set, they produce an extremely large number of rules. Most of such rules are medically irrelevant and the time required to find them can be impractical. A more important issue is that, in general, association rules are mined on the entire data set without validation on an independent sample. To solve these limitations, we introduce an algorithm that uses search constraints to reduce the number of rules, searches for association rules on a training set, and finally validates them on an independent test set. The medical significance of discovered rules is evaluated with support, confidence, and lift. Association rules are applied on a real data set containing medical records of patients with heart disease. In medical terms, association rules relate heart perfusion measurements and risk factors to the degree of disease in four specific arteries. Search constraints and test set validation significantly reduce the number of association rules and produce a set of rules with high predictive accuracy. We exhibit important rules with high confidence, high lift, or both, that remain valid on the test set on several runs. These rules represent valuable medical knowledge." }, { "pmid": "2474099", "title": "Loop diuretics combined with an ACE inhibitor for treatment of hypertension: a study with furosemide, piretanide, and ramipril in spontaneously hypertensive rats.", "abstract": "Both angiotensin converting enzyme (ACE) inhibition and sodium-diuresis lower blood pressure in spontaneously hypertensive rats (SHR). 
The purpose of the present study was to examine whether long-term therapy with ramipril (RA, and ACE inhibitor) would lower blood pressure more effectively and without adverse reactions in combination with the loop diuretics piretanide (PI) or furosemide (FU). Groups of 15 SHR each were treated once daily for 3 weeks by gavage with 1 and 10 mg/kg RA, 2 and 4 mg/kg PI, and 8 and 16 mg/kg FU alone and with 1 mg/kg RA in combination with each of these diuretics at both the high and low doses. Sustained and marked ACE inhibition with 10 mg/kg RA normalized BP, but this was accompanied with slightly impaired kidney function as assessed by increases in both urea and creatinine. Low-dose diuretic therapy, producing little diuresis, or treatment with 1 mg/kg RA, producing less sustained ACE inhibition were less effective on blood pressure and scarcely altered serum solute levels, except 4 mg/kg PI, which produced slight reductions in Na+, K+, Mg2+, and PO4(3-). Combined treatment with the 1 mg/kg RA with either diuretic given at low or high dose was well tolerated at much improved reduction in blood pressure compared to their effects individually and without changes in serum solute concentrations and without hemoconcentration. Thus, combined treatment with low doses of loop diuretics and ACE inhibitors that permit partial recovery of serum ACE activity during the 24 h after dosing synergistically lowers blood pressure without adverse reactions associated with larger doses of either therapy alone." }, { "pmid": "23795809", "title": "Loop diuretics and ultrafiltration in heart failure.", "abstract": "INTRODUCTION\nDespite widespread use of loop diuretics in congestive heart failure (HF) to achieve decongestion and relief of symptoms, as recommended by the current guidelines, there is uncertainty as to their long-term therapeutic efficacy and safety. Their efficacy and safety compared to venous ultrafiltration are currently under investigation in acute decompensated HF patients.\n\n\nAREAS COVERED\nIn this article, the authors review current available data related to efficacy and safety of loop diuretics and ultrafiltration in HF.\n\n\nEXPERT OPINION\nThe literature review highlights an unmet clinical need for evidence-based algorithms, potentially using not only the classical clinical signs and symptoms of congestion as well as the estimated glomerular filtration rate and serum electrolytes, but also biomarkers of congestion/decongestion, neurohumoural activation or urinary kidney injury molecules, in order to optimize both loop diuretics and renin-angiotensin-aldosterone system blocker use in HF patients." }, { "pmid": "21516306", "title": "Loop diuretics in heart failure.", "abstract": "Congestion is a major component of the clinical syndrome of heart failure, and diuretic therapy remains the cornerstone of congestion management. Despite being widely used, there is very limited evidence from prospective randomized studies to guide the prescription and titration of diuretics. A thorough understanding of the pharmacology of loop diuretics is crucial to the optimal use of these agents. Although multiple observational studies have suggested that high doses of diuretics may be harmful, all such analyses are confounded by the association of higher diuretic doses with greater severity of illness and comorbidity. Recent data from randomized trials suggest that higher doses of diuretics may be more effective at relieving congestion and that associated changes in renal function are typically transient. 
Data from other ongoing trials will continue to inform our understanding of the optimal role for loop diuretics in the treatment of heart failure." }, { "pmid": "26904689", "title": "Association between Self-Reported Habitual Snoring and Diabetes Mellitus: A Systemic Review and Meta-Analysis.", "abstract": "AIM\nSeveral studies have reported an association between self-reported habitual snoring and diabetes mellitus (DM); however, the results are inconsistent.\n\n\nMETHODS\nElectronic databases including PubMed and EMBASE were searched. Odds ratios (ORs) and 95% confidence intervals (CIs) were used to assess the strength of the association between snoring and DM using a random-effects model. Heterogeneity, subgroup, and sensitivity analyses were also evaluated. Begg's, Egger's tests and funnel plots were used to evaluate publication bias.\n\n\nRESULTS\nA total of eight studies (six cross sectional and two prospective cohort studies) pooling 101,246 participants were included. Of the six cross sectional studies, the summary OR and 95% CI of DM in individuals that snore compared with nonsnorers were 1.37 (95% CI: 1.20-1.57, p < 0.001). There was no heterogeneity across the included studies (I (2) = 2.9%, p = 0.408). When stratified by gender, the pooled OR (95% CI) was 1.59 (1.20-2.11) in females (n = 12298), and 0.89 (0.65-1.22) in males (n = 4276). Of the two prospective studies, the pooled RR was 1.65 (95% CI, 1.30-2.08).\n\n\nCONCLUSIONS\nSelf-reported habitual snoring is statistically associated with DM in females, but not in males. This meta-analysis indicates a need to paying attention to the effect of snoring on the occurrence of DM in females." }, { "pmid": "18207451", "title": "Snoring and witnessed sleep apnea is related to diabetes mellitus in women.", "abstract": "BACKGROUND\nGender differences in the relationship of snoring and diabetes mellitus are mainly unknown. We aimed to analyze the relationship between snoring, witnessed sleep apnea and diabetes mellitus and to analyze possible gender related differences in an unselected population.\n\n\nMETHODS\nQuestions on snoring and witnessed sleep apneas were included in the Northern Sweden component of the WHO, MONICA study. Invited were 10,756 men and women aged 25-79 years, randomly selected from the population register.\n\n\nRESULTS\nThere were 7905 (73%) subjects, 4047 women and 3858 men who responded to the questionnaire and attended a visit for a physical examination. Habitual snoring was related to diabetes mellitus in women, with an adjusted odds ratio (OR)=1.58 (95% confidence interval (CI) 1.02-2.44, p=0.041) independent of smoking, age, body mass index and waist circumference. Witnessed sleep apnea was also independently related to diabetes mellitus in women, with an adjusted OR=3.29 (95% CI 1.20-8.32, p=0.012). Neither snoring, nor witnessed sleep apneas were associated with diabetes mellitus among men, except for witnessed sleep apnea in men aged 25-54 years old. They had an adjusted OR=3.84 (95% CI 1.36-10.9, p=0.011) for diabetes mellitus.\n\n\nCONCLUSIONS\nSnoring and witnessed sleep apneas are related to diabetes mellitus in women. Witnessed sleep apnea is related to diabetes mellitus in men younger than 55 years old." 
}, { "pmid": "25738481", "title": "Cost-utility of angiotensin-converting enzyme inhibitor-based treatment compared with thiazide diuretic-based treatment for hypertension in elderly Australians considering diabetes as comorbidity.", "abstract": "The objective of this study was to examine the cost-effectiveness of angiotensin-converting enzyme inhibitor (ACEI)-based treatment compared with thiazide diuretic-based treatment for hypertension in elderly Australians considering diabetes as an outcome along with cardiovascular outcomes from the Australian government's perspective.We used a cost-utility analysis to estimate the incremental cost-effectiveness ratio (ICER) per quality-adjusted life-year (QALY) gained. Data on cardiovascular events and new onset of diabetes were used from the Second Australian National Blood Pressure Study, a randomized clinical trial comparing diuretic-based (hydrochlorothiazide) versus ACEI-based (enalapril) treatment in 6083 elderly (age ≥65 years) hypertensive patients over a median 4.1-year period. For this economic analysis, the total study population was stratified into 2 groups. Group A was restricted to participants diabetes free at baseline (n = 5642); group B was restricted to participants with preexisting diabetes mellitus (type 1 or type 2) at baseline (n = 441). Data on utility scores for different events were used from available published literatures; whereas, treatment and adverse event management costs were calculated from direct health care costs available from Australian government reimbursement data. Costs and QALYs were discounted at 5% per annum. One-way and probabilistic sensitivity analyses were performed to assess the uncertainty around utilities and cost data.After a treatment period of 5 years, for group A, the ICER was Australian dollars (AUD) 27,698 (&OV0556; 18,004; AUD 1-&OV0556; 0.65) per QALY gained comparing ACEI-based treatment with diuretic-based treatment (sensitive to the utility value for new-onset diabetes). In group B, ACEI-based treatment was a dominant strategy (both more effective and cost-saving). On probabilistic sensitivity analysis, the ICERs per QALY gained were always below AUD 50,000 for group B; whereas for group A, the probability of being below AUD 50,000 was 85%.Although the dispensed price of diuretic-based treatment of hypertension in the elderly is lower, upon considering the potential enhanced likelihood of the development of diabetes in addition to the costs of treating cardiovascular disease, ACEI-based treatment may be a more cost-effective strategy in this population." }, { "pmid": "7845427", "title": "Hemostatic factors and the risk of myocardial infarction or sudden death in patients with angina pectoris. European Concerted Action on Thrombosis and Disabilities Angina Pectoris Study Group.", "abstract": "BACKGROUND\nIncreased levels of certain hemostatic factors may play a part in the development of acute coronary syndromes and may be associated with an increased risk of coronary events in patients with angina pectoris.\n\n\nMETHODS\nWe conducted a prospective multicenter study of 3043 patients with angina pectoris who underwent coronary angiography and were followed for two years. Base-line measurements included the concentrations of selected hemostatic factors indicative of a thrombophilic state or endothelial injury. 
The results were analyzed in relation to the subsequent incidence of myocardial infarction or sudden coronary death.\n\n\nRESULTS\nAfter adjustment for the extent of coronary artery disease and other risk factors, an increased incidence of myocardial infarction or sudden death was associated with higher base-line concentrations of fibrinogen (mean +/- SD, 3.28 +/- 0.74 g per liter in patients who subsequently had coronary events, as compared with 3.00 +/- 0.71 g per liter in those who did not; P = 0.01), von Willebrand factor antigen (138 +/- 49 percent vs. 125 +/- 49 percent, P = 0.05), and tissue plasminogen activator (t-PA) antigen (11.9 +/- 4.7 ng per milliliter vs. 10.0 +/- 4.2 ng per milliliter, P = 0.02). The concentration of C-reactive protein was also directly correlated with the incidence of coronary events (P = 0.05), except when we adjusted for the fibrinogen concentration. In patients with high serum cholesterol levels, the risk of coronary events rose with increasing levels of fibrinogen and C-reactive protein, but the risk remained low even given high serum cholesterol levels in the presence of low fibrinogen concentrations.\n\n\nCONCLUSIONS\nIn patients with angina pectoris, the levels of fibrinogen, von Willebrand factor antigen, and t-PA antigen are independent predictors of subsequent acute coronary syndromes. In addition, low fibrinogen concentrations characterize patients at low risk for coronary events despite increased serum cholesterol levels. Our data are consistent with a pathogenetic role of impaired fibrinolysis, endothelial-cell injury, and inflammatory activity in the progression of coronary artery disease." }, { "pmid": "25265976", "title": "Relationship between angina pectoris and outcomes in patients with heart failure and reduced ejection fraction: an analysis of the Controlled Rosuvastatin Multinational Trial in Heart Failure (CORONA).", "abstract": "AIM\nAngina pectoris is common in patients with heart failure and reduced ejection fraction (HF-REF) but its relationship with outcomes has not been well defined. This relationship was investigated further in a retrospective analysis of the Controlled Rosuvastatin Multinational Trial in Heart Failure (CORONA).\n\n\nMETHODS AND RESULTS\nFour thousand, eight hundred and seventy-eight patients were divided into three categories: no history of angina and no chest pain at baseline (Group A; n = 1240), past history of angina but no chest pain at baseline (Group B; n = 1353) and both a history of angina and chest pain at baseline (Group C; n = 2285). Outcomes were examined using Kaplan-Meier and Cox regression survival analysis. Compared with Group A, Group C had a higher risk of non-fatal myocardial infarction or unstable angina (HR: 2.36, 1.54-3.61; P < 0.001), this composite plus coronary revascularization (HR: 2.54, 1.76-3.68; P < 0.001), as well as HF hospitalization (HR: 1.35, 1.13-1.63; P = 0.001), over a median follow-up period of 33 months. There was no difference in cardiovascular or all-cause mortality. Group B had a smaller increase in risk of coronary events but not of heart failure hospitalization.\n\n\nCONCLUSION\nPatients with HF-REF and ongoing angina are at an increased risk of acute coronary syndrome and HF hospitalization. Whether these patients would benefit from more aggressive medical therapy or percutaneous revascularization is not known and merits further investigation." 
}, { "pmid": "26417808", "title": "What evidence is there to show which antipsychotics are more diabetogenic than others?", "abstract": "BACKGROUND\nThe use of antipsychotic therapy has been proven to have an association with the incidence of diabetes mellitus. The use of atypical antipsychotics is shown to have a higher association, in contrast with typical antipsychotics. Olanzapine and Clozapine appear to have the highest rates of diabetes mellitus incidence, due to their tendency to affect glucose metabolism compared with other antipsychotic drugs. In this research the main goal is to understand which antipsychotic drugs are the most diabetogenic and to show the mechanisms involved in the glucose metabolism dysregulations with special focus on Olanzapine considering it is a very commonly prescribed and used drug especially among patients with schizophrenia.\n\n\nMETHODS\nOur study is a literature based research. For our research we reviewed 41 Pubmed published articles from 2005 to 2015.\n\n\nCONCLUSION\nAccording to most of the literature, from all the antipsychotics, Clozapine followed by Olanzapine appear to be the atypical neuroleptics that most relate to metabolic syndrome and Diabetes. The basis for this metabolic dysregulations appears to be multifactorial in origin and a result of the drugs, environment and genes interaction." }, { "pmid": "18664607", "title": "Antipsychotics and diabetes: an age-related association.", "abstract": "BACKGROUND\nPrevious studies have reported an association between anti-psychotic medications and diabetes.\n\n\nOBJECTIVE\nTo explore the association between antipsychotic medications and diabetes in patients of different ages.\n\n\nMETHODS\nA retrospective analysis of a large health maintenance organization's drug claim database (3.7 million members) was performed. All patients treated with antipsychotic drugs during 1998-2004 were identified. Patients with diabetes were defined by a record of antidiabetic drug use during 2004. The prevalence of diabetes in different age groups treated with antipsychotics was compared with the prevalence of diabetes among enrollees in the same age groups not treated with antipsychotics.\n\n\nRESULTS\nAmong 82,754 patients treated with antipsychotics, the association between diabetes and consumption of antipsychotics was strongest in the younger age groups and decreased with increasing age: for patients aged 0-24 years, OR 8.9 (95% CI 7.0 to 11.3); 25-44 years, OR 4.2 (95% CI 3.8 to 4.5); 45-54 years, OR 1.9 (95% CI 1.8 to 2.1); 55-64 years, OR 1.3 (95% CI 1.2 to 1.4); and 65 years or older, OR 0.93 (95% CI 0.9 to 1.0). However, the risk associated with atypical antipsychotics was lower than the risk associated with typical antipsychotics, with ORs ranging from 0.7 in patients 0-24 years old to 0.3 in those 65 years or older.\n\n\nCONCLUSIONS\nAntipsychotic drug use was associated with diabetes mellitus. This association was stronger in younger patients. In older adults, the difference was much smaller and, in some cases, there was no association. A lower risk was associated with atypical agents, as compared with typical antipsychotics. Clinicians should be aware that young adults treated with antipsychotics are at increased risk for diabetes." 
}, { "pmid": "23904952", "title": "Antidepressant use and diabetes mellitus risk: a meta-analysis.", "abstract": "BACKGROUND\nEpidemiologic studies have reported inconsistent findings regarding the association between the use of antidepressants and type 2 diabetes mellitus (DM) risk. We performed a meta-analysis to systematically assess the association between antidepressants and type 2 DM risk.\n\n\nMETHODS\nWe searched MEDLINE (PubMed), EMBASE, and the Cochrane Library (through Dec 31, 2011), including references of qualifying articles. Studies concerning the use of tricyclic antidepressants (TCAs), selective serotonin reuptake inhibitors (SSRIs), serotonin and norepinephrine reuptake inhibitors (SNRIs), or other antidepressants and the associated risk of diabetes mellitus were included.\n\n\nRESULTS\nOut of 2,934 screened articles, 3 case-control studies, 9 cohort studies, and no clinical trials were included in the final analyses. When all studies were pooled, use of antidepressants was significantly associated with an increased risk of DM in a random effect model (relative risk [RR], 1.49; 95% confidence interval [CI], 1.29 to 1.71). In subgroup analyses, the risk of DM increased among both SSRI users (RR, 1.35; 95% CI, 1.15 to 1.58) and TCA users (RR, 1.57; 95% CI, 1.26 to 1.96). The subgroup analyses were consistent with overall results regardless of study type, information source, country, duration of medication, or study quality. The subgroup results considering body weight, depression severity, and physical activity also showed a positive association (RR, 1.14; 95% CI, 1.01 to 1.28). A publication bias was observed in the selected studies (Egger's test, P for bias = 0.09).\n\n\nCONCLUSION\nOur results suggest that the use of antidepressants is associated with an increased risk of DM." }, { "pmid": "28207279", "title": "Angiotensin-Converting Inhibitors and Angiotensin II Receptor Blockers and Longitudinal Change in Percent Emphysema on Computed Tomography. The Multi-Ethnic Study of Atherosclerosis Lung Study.", "abstract": "RATIONALE\nAlthough emphysema on computed tomography (CT) is associated with increased morbidity and mortality in patients with and without spirometrically defined chronic obstructive pulmonary disease, no available medications target emphysema outside of alpha-1 antitrypsin deficiency. Transforming growth factor-β and endothelial dysfunction are implicated in emphysema pathogenesis, and angiotensin II receptor blockers (ARBs) inhibit transforming growth factor-β, improve endothelial function, and restore airspace architecture in murine models. Evidence in humans is, however, lacking.\n\n\nOBJECTIVES\nTo determine whether angiotensin-converting enzyme (ACE) inhibitor and ARB dose is associated with slowed progression of percent emphysema by CT.\n\n\nMETHODS\nThe Multi-Ethnic Study of Atherosclerosis researchers recruited participants ages 45-84 years from the general population from 2000 to 2002. Medication use was assessed by medication inventory. Percent emphysema was defined as the percentage of lung regions less than -950 Hounsfield units on CTs. Mixed-effects regression models were used to adjust for confounders.\n\n\nRESULTS\nAmong 4,472 participants, 12% used an ACE inhibitor and 6% used an ARB at baseline. The median percent emphysema was 3.0% at baseline, and the rate of progression was 0.64 percentage points over a median of 9.3 years. Higher doses of ACE or ARB were independently associated with a slower change in percent emphysema (P = 0.03). 
Over 10 years, in contrast to a predicted mean increase in percent emphysema of 0.66 percentage points in those who did not take ARBs or ACE inhibitors, the predicted mean increase in participants who used maximum doses of ARBs or ACE inhibitors was 0.06 percentage points (P = 0.01). The findings were of greatest magnitude among former smokers (P < 0.001). Indications for ACE inhibitor or ARB drugs (hypertension and diabetes) and other medications for hypertension and diabetes were not associated independently with change in percent emphysema. There was no evidence that ACE inhibitor or ARB dose was associated with decline in lung function.\n\n\nCONCLUSIONS\nIn a large population-based study, ACE inhibitors and ARBs were associated with slowed progression of percent emphysema by chest CT, particularly among former smokers. Randomized clinical trials of ACE and ARB agents are warranted for the prevention and treatment of emphysema." }, { "pmid": "24290900", "title": "Clinical characteristics and prediction of pulmonary hypertension in severe emphysema.", "abstract": "BACKGROUND\nWe explored the prevalence, clinical and physiologic correlates of pulmonary hypertension (PH), and screening strategies in patients with severe emphysema evaluated for the National Emphysema Treatment Trial (NETT).\n\n\nMETHODS\nPatients undergoing Doppler echocardiography (DE) and right heart catheterization were included. Patients with mean pulmonary arterial pressure ≥ 25 mmHg (PH Group) were compared to the remainder (non-PH Group).\n\n\nRESULTS\nOf 797 patients, 302 (38%) had PH and 18 (2.2%) had severe PH. Compared to the non-PH Group, patients with PH had lower % predicted FEV1 (p < 0.001), % predicted diffusion capacity for carbon monoxide (p = 0.006), and resting room air PaO2 (p < 0.001). By multivariate analysis, elevated right ventricular systolic pressure, reduced resting room air PaO2, reduced post-bronchodilator % predicted FEV1, and enlarged pulmonary arteries on computed tomographic scan were the best predictors of PH. A strategy using % predicted FEV1, % predicted DLCO, PaO2, and RVSP was predictive of the presence of pre-capillary PH and was highly predictive of its absence.\n\n\nCONCLUSIONS\nMildly elevated pulmonary artery pressures are found in a significant proportion of patients with severe emphysema. However, severe PH is uncommon in the absence of co-morbidities. Simple non-invasive tests may be helpful in screening patients for pre-capillary PH in severe emphysema but none is reliably predictive of its presence." }, { "pmid": "24242190", "title": "Non-steroidal anti-inflammatory drugs and hypertension.", "abstract": "Non-steroidal anti-inflammatory drugs (NSAIDs) are frequently used to alleviate pain of the patients who suffer from inflammatory conditions like rheumatoid arthritis, osteoarthritis, and other painful conditions like gout. This class of drugs works by blocking cyclooxgenases which in turn block the prostaglandin production in the body. Most often, NSAIDs and antihypertensive drugs are used at the same time, and their use increases with increasing age. Moreover, hypertension and arthritis are common in the elderly patients requiring pharmacological managements. An ample amount of studies put forth evidence that NSAIDs reduce the efficiency of antihypertensive drugs plus aggravate pre-existing hypertension or make the individuals prone to develop high blood pressure through renal dysfunction. 
This review will help doctors to consider the effects and risk factors of concomitant prescription of NSAIDs and hypertensive drugs." }, { "pmid": "6338085", "title": "Hypertension and myocardial infarction.", "abstract": "Because hypertension and myocardial infarction are closely linked in several ways, a better understanding of this relation leads to more effective prophylaxis and management. Management should be directed at three different areas: 1) the prevention of a first myocardial infarction, 2) the prevention of complications after an infarction, and 3) the management of hypertension during evolution of an acute infarction. There is good evidence that beta-receptor blocking agents are beneficial to long-term management. When therapy is required in the acute situation, arteriolar vasodilators are to be avoided and combined arteriolar/venular dilators are the drugs of choice." }, { "pmid": "29581804", "title": "Anxiety and Depression Among Adult Patients With Diabetic Foot: Prevalence and Associated Factors.", "abstract": "BACKGROUND\nDiabetic foot is a frequent complication of diabetes mellitus with subsequent disturbances in the daily life of the patients. The co-existence of depression and anxiety among diabetic foot patients is a common phenomenon and the role of each of them in perpetuating the other is highlighted in the literature. Our study aimed to determine the prevalence rates of anxiety and depression, and to examine the associated risk factors among diabetic foot patients.\n\n\nMETHODS\nThis is a cross-sectional study. A total of 260 diabetic foot patients in the Diabetic Foot Clinic at the National Center for Diabetes, Endocrinology and Genetics (NCDEG), Amman, Jordan, participated in the study. Sociodemographic and health data were gathered through review of medical charts and a structured questionnaire. Depression and anxiety status were also assessed. The Generalized Anxiety Disorder Scale (GAD-7) was used to screen for anxiety and the Patient Health Questionnaire (PHQ-9) was used to screen for depression. A cutoff of ≥ 10 was used for each scale to identify those who tested positive for anxiety and depression.\n\n\nRESULTS\nPrevalence rate of anxiety was 37.7% and that of depression was 39.6%. Multiple logistic regression analysis showed that anxiety is positively associated with duration of diabetes of < 10 years (P = 0.01), with ≥ three comorbid diseases (P = 0.00), and HbA1c level of > 7% (P = 0.03). Multiple logistic regression analysis also showed that depression is positively associated with patients of < 50 years of age (P = 0.03), females (P = 0.01), current smokers (P = 0.01), patients with foot ulcer duration ≥ 7 months (P = 0.00), with ≥ three comorbid diseases (P = 0.00) than their counterparts.\n\n\nCONCLUSIONS\nAnxiety and depression are widely prevalent among diabetic foot patients. Mental health status of those patients gets even worse among those suffering other comorbid diseases, which was a finding that requires special attention in the management of patients with diabetic foot." }, { "pmid": "19846222", "title": "Prevalence of habitual snoring and symptoms of sleep-disordered breathing in adolescents.", "abstract": "OBJECTIVE\nSleep-disordered breathing is an important public health problem in adolescents. 
The aim of this study was to investigate the prevalence and risk factors of habitual snoring and symptoms of sleep-disordered breathing in adolescents.\n\n\nMETHODS\nA cross-sectional study was conducted with children from primary schools and high schools that the ages ranged from 12 to 17 years. Data were collected by physical examination and questionnaires filled in by parents regarding sleep habits and possible risk factors of snoring. According to answers, children were classified into three groups: non-snorers, occasional snorers, and habitual snorers.\n\n\nRESULTS\nThe response rate was 79.2%; 1030 of 1300 questionnaires were fully completed and analyzed. The prevalence of habitual snoring was 4.0%. Habitual snorers had significantly more nighttime symptoms including observed apneas, difficulty breathing, restless sleep and mouth breathing during sleep compared to occasional and non-snorers. Prevalence of habitual snoring was increased in children who had had tonsillar hypertrophy, allergic rhinitis, and maternal smoking.\n\n\nCONCLUSION\nWe found the prevalence of habitual snoring to be 4.0% in adolescents from the province of Manisa, Turkey which is low compared to previous studies. Habitual snoring is an important problem in adolescents and habitual snorers had significantly more nighttime symptoms of sleep-disordered breathing compared to non-snorers." }, { "pmid": "16754527", "title": "Hypercholesterolemia is a potential risk factor for asthma.", "abstract": "INTRODUCTION\nThe effect of hyperlipidemia on asthma has never been addressed. Recent literature implicates a pro-inflammatory role for hypercholesterolemia. This study evaluates the effect of serum cholesterol level on asthma frequency.\n\n\nMETHODS\nFactors associated with asthma risk were examined in a retrospective study design. Study subjects were between the 4 and 20 years of age who presented to a rural pediatric clinic and whose total serum cholesterol level was obtained. Diagnosis of asthma was determined by the treating physician. Multivariable logistic regression was performed to identify variables that were related to the odds of having asthma.\n\n\nRESULTS\nA total of 188 patients were included. Asthma was present in 50 patients. Total serum cholesterol (mean +/- SD) for the asthma group was 176.7 +/- 39.8 compared to 162.9 +/- 12.8 in the non-asthma group (P = 0.028). A total of 21 of the 50 (42%) asthma patients were obese compared to 31 of the 138 (22%) non-asthma patients (p = 0.014). There was no difference between both groups regarding age and gender. Hypercholesterolemia and obesity were identified by logistic regression analysis to increase the probability of asthma independently.\n\n\nCONCLUSION\nHypercholesterolemia is a potential risk factor for asthma independent of obesity." }, { "pmid": "26062915", "title": "Hypercholesterolemia and Hypertension: Two Sides of the Same Coin.", "abstract": "The aim of this review article is to summarize the current knowledge about mechanisms that connect blood pressure regulation and hypercholesterolemia, the mutual interaction between hypertension and hypercholesterolemia, and their influence on atherosclerosis development. Our research shows that at least one-third of the population of Western Europe has hypertension and hypercholesterolemia. Several biohumoral mechanisms could explain the relationship between hypertension and hypercholesterolemia and the association between these risk factors and accelerated atherosclerosis. 
The most investigated mechanisms are the renin-angiotensin-aldosterone system, oxidative stress, endothelial dysfunction, and increased production of endothelin-1. Arterial hypertension is frequently observed in combination with hypercholesterolemia, and this is related to accelerated atherosclerosis. Understanding the mechanisms behind this relationship could help explain the benefits of therapy that simultaneously reduce blood pressure and cholesterol levels." }, { "pmid": "14985157", "title": "Anxiety and depression in patients with chronic obstructive pulmonary disease (COPD). A review.", "abstract": "A review of the literature revealed high comorbidity of chronic obstructive pulmonary disease (COPD) and states of anxiety and depression, indicative of excess, psychiatric morbidity in COPD. The existing studies point to a prevalence of clinical significant symptoms of depression and anxiety amounting to around 50%. The prevalence of panic disorder and major depression in COPD patients is correspondingly markedly increased compared to the general population. Pathogenetic mechanisms remain unclear but both psychological and organic factors seem to play a role. The clinical and social implications are severe and the concurrent psychiatric disorders may lead to increased morbidity and impaired quality of life. Furthermore, the risk of missing the proper diagnosis and treatment of a concurrent psychiatric complication is evident when COPD patients are treated in medical clinics. Until now only few intervention studies have been conducted, but results suggest that treatment of concurrent psychiatric disorder leads to improvement in the physical as well as the psychological state of the patient. Panic anxiety as well as generalized anxiety in COPD patients is most safely treated with newer antidepressants. Depression is treated with antidepressants according to usual clinical guidelines. There is a need for further intervention studies to determine the overall effect of antidepressants in the treatment of anxiety and depression in this group of patients." }, { "pmid": "19440241", "title": "The association between hypertension and depression and anxiety disorders: results from a nationally-representative sample of South African adults.", "abstract": "OBJECTIVE\nGrowing evidence suggests high levels of comorbidity between hypertension and mental illness but there are few data from low- and middle-income countries. We examined the association between hypertension and depression and anxiety in South Africa.\n\n\nMETHODS\nData come from a nationally-representative survey of adults (n = 4351). The Composite International Diagnostic Interview was used to measure DSM-IV mental disorders during the previous 12-months. The relationships between self-reported hypertension and anxiety disorders, depressive disorders and comorbid anxiety-depression were assessed after adjustment for participant characteristics including experience of trauma and other chronic physical conditions.\n\n\nRESULTS\nOverall 16.7% reported a previous medical diagnosis of hypertension, and 8.1% and 4.9% were found to have a 12-month anxiety or depressive disorder, respectively. In adjusted analyses, hypertension diagnosis was associated with 12-month anxiety disorders [Odds ratio (OR) = 1.55, 95% Confidence interval (CI) = 1.10-2.18] but not 12-month depressive disorders or 12-month comorbid anxiety-depression. 
Hypertension in the absence of other chronic physical conditions was not associated with any of the 12-month mental health outcomes (p-values all <0.05), while being diagnosed with both hypertension and another chronic physical condition were associated with 12-month anxiety disorders (OR = 2.25, 95% CI = 1.46-3.45), but not 12-month depressive disorders or comorbid anxiety-depression.\n\n\nCONCLUSIONS\nThese are the first population-based estimates to demonstrate an association between hypertension and mental disorders in sub-Saharan Africa. Further investigation is needed into role of traumatic life events in the aetiology of hypertension as well as the temporality of the association between hypertension and mental disorders." }, { "pmid": "17679026", "title": "Snoring as an independent risk factor for hypertension in the nonobese population: the Korean Health and Genome Study.", "abstract": "BACKGROUND\nAlthough the close relationship between sleep-disordered breathing and hypertension has been strengthened by the accumulated evidence, the issues of controlling for coexisting factors and the lack of definite evidence in presenting a cause-effect relationship still remain. This study aimed to evaluate the independent association between habitual snoring and the 2-year incidence of hypertension in a nonobese population in Korea.\n\n\nMETHODS\nSubjects were drawn from the Korean Health and Genome Study, which is an ongoing population-based prospective study of Korean adults aged 40 to 69 years. The final sample comprised 2730 men and 2723 women without obesity and hypertension at the time of their initial examinations. All participants were reevaluated after an interval of 2 years. Hypertension was defined on the basis of blood pressure>or=140/90 mm Hg or the use of antihypertensive medications. Habitual snorers were defined as those who snored>or=4 days per week.\n\n\nRESULTS\nHabitual snoring was significantly associated with increased odds ratios of the incidence rate of hypertension in every stratum of confounding factors, including age, sex, smoking, and level of blood pressure and body mass index at baseline, except for age>or=60 years. After adjustments of other covariates, habitual snoring was independently associated with a 1.49-fold and 1.56-fold excess for odds ratios of the 2-year incidence of hypertension in men and women, respectively.\n\n\nCONCLUSIONS\nAlthough further evidence is needed, our results support the contention that habitual snoring is an important predisposing factor in future hypertension, even for nonobese adults." }, { "pmid": "18191733", "title": "Anxiety characteristics independently and prospectively predict myocardial infarction in men the unique contribution of anxiety among psychologic factors.", "abstract": "OBJECTIVES\nThis study investigated whether anxiety characteristics independently predicted the onset of myocardial infarction (MI) over an average of 12.4 years and whether this relationship was independent of other psychologic variables and risk factors.\n\n\nBACKGROUND\nAlthough several psychosocial factors have been associated with risk for MI, anxiety has not been examined extensively. Earlier studies also rarely addressed whether the association between a psychologic variable and MI was specific and independent of other psychosocial correlates.\n\n\nMETHODS\nParticipants were 735 older men (mean age 60 years) without a history of coronary disease or diabetes at baseline from the Normative Aging Study. 
Anxiety characteristics were assessed with 4 scales (psychasthenia, social introversion, phobia, and manifest anxiety) and an overall anxiety factor derived from these scales.\n\n\nRESULTS\nAnxiety characteristics independently and prospectively predicted MI incidence after controlling for age, education, marital status, fasting glucose, body mass index, high-density lipoprotein cholesterol, and systolic blood pressure in proportional hazards models. The adjusted relative risk (95% confidence interval [CI]) of MI associated with each standard deviation increase in anxiety variable was 1.37 (95% CI 1.12 to 1.68) for psychasthenia, 1.31 (95% CI 1.05 to 1.63) for social introversion, 1.36 (95% CI 1.10 to 1.68) for phobia, 1.42 (95% CI 1.14 to 1.76) for manifest anxiety, and 1.43 (95% CI 1.17 to 1.75) for overall anxiety. These relationships remained significant after further adjusting for health behaviors (drinking, smoking, and caloric intake), medications for hypertension, high cholesterol, and diabetes during follow-up and additional psychologic variables (depression, type A behavior, hostility, anger, and negative emotion).\n\n\nCONCLUSIONS\nAnxiety-prone dispositions appear to be a robust and independent risk factor of MI among older men." }, { "pmid": "11379468", "title": "Bronchial asthma: a risk factor for hypertension?", "abstract": "Several attempts have been made to improve primary prevention of essential hypertension and many of these have been directed at avoiding the well known risk factors. Both asthma and hypertension are spastic disorders of smooth muscle, also asthmatics and hypertensives have been found to be salt sensitive. There is a suspicion that the similarities between these two diseases may predispose the individuals with one disease to the other, as pulmonary hypertension has been described during exercise-induced bronchoconstriction. We therefore, studied the blood pressure pattern during and after acute severe asthma (ASA) along with the frequency of hypertension in stable asthmatic patients. Two groups of patients were studied. Group 1 consisted of 12 patients with ASA (2 males, 10 females) with a mean age of 30 +/- 9.9 years. The mean blood pressure during attack of ASA (147 +/- 16.9/100 +/- 8.2 mmHg) was higher than the mean BP (132 +/- 8.3/82 +/- 7 mmHg) 2 weeks after discharge from hospital without treatment in all patients (P < 0.05). Group 2 included 134 asthmatic subjects in stable state (54 males, 80 females) with a mean age of 45 +/- 15 years and a range of 15-90 years. The overall frequency of hypertension was 37% with a proportion of 39% in males and 35% in females. Hypertension was defined as systolic blood pressure of > or = 140 mmHg and or diastolic blood pressure of > or = 90 mmHg. There was no difference between the frequency of attack of ASA in hypertensives (5.7 +/- 5.6 per year) and nonhypertensives (5.5 +/- 3.8 per year), P < 0.05. We concluded that transient elevation of blood pressure may occur during ASA. The frequency of hypertension among asthmatics is quite high and concurrent family history of hypertension and frequency of attack of ASA did not seem to determine the status of blood pressure. Patients with asthma should have regular blood pressure check during follow-up visits." }, { "pmid": "11822535", "title": "Diabetes and hypertension.", "abstract": "The incidence of hypertension is increased in individuals with diabetes mellitus. This is especially true in patients with type 2 diabetes. 
In these patients high blood pressure is common at the time of diagnosis of diabetes, but the development of diabetes is often preceded by a period during which hyperinsulinemia and insulin resistance is already present. Diabetes represents by itself a major risk of cardiovascular morbidity and mortality. This risk is considerably enhanced by the co-existence of hypertension. One of the main complications of type 2 diabetes is nephropathy, which manifests initially by microalbuminuria, then by clinical proteinuria, leading to a progressive chronic renal failure and end-stage renal disease. Microalbuminuria is considered today as an indicator of renal endothelial dysfunction as well as an independent predictor of the cardiovascular risk. During recent years a number of studies have shown that tight blood pressure control is essential in diabetic patients in order to provide maximal protection against cardiovascular events and the deterioration of renal function. Of note, there is recent evidence indicating that blockade of the renin-angiotensin system with angiotensin II antagonists has marked nephroprotective effects in patients with hypertension and type 2 diabetes, both at early and late stages of renal disease." }, { "pmid": "2892881", "title": "Beta-blockers versus diuretics in hypertensive men: main results from the HAPPHY trial.", "abstract": "Men aged 40-64 years with mild to moderate hypertension [diastolic blood pressure (DBP) 100-130 mmHg] were randomized to treatment with a diuretic (n = 3272) or a beta-blocker (n = 3297), with additional drugs if necessary, to determine whether a beta-blocker based treatment differs from thiazide diuretic based treatment with regard to the prevention of coronary heart disease (CHD) events and death. Patients with previous CHD, stroke or other serious diseases, or with contraindications to diuretics or beta-blockers were excluded. If normotension (DBP less than 95 mmHg) was not achieved by monotherapy, other antihypertensive drugs were added, but the two basic drugs were not crossed over. Patients were assessed at 6-monthly intervals. The mean follow-up for end-points was 45.1 months. Blood pressure (BP) side effects and end-points were recorded in a standardized manner. Entry characteristics and the BP reduction achieved were very similar in both treatment groups. All analyses were made on an intention-to-treat basis. The incidence of CHD did not differ between the two treatment groups. The incidence of fatal stroke tended to be lower in the beta-blocker treated group than in the diuretic treated group. Total mortality and the total number of end-points were similar in both groups. The percentage of patients withdrawn due to side effects was similar, whereas the number of reported symptoms, according to a questionnaire, was higher for patients on beta-blockers. The incidence of diabetes did not differ between the two groups. Subgroup analyses did not detect a difference in the effect of beta-blockers compared with diuretics in smokers as opposed to non-smokers, and beta-blockers also had the same effects as diuretics in the quartile with the highest predicted risk for CHD. Beta-blockers and thiazide diuretics were approximately equally well tolerated. The two drugs had a similar BP reducing effect although additional drugs had to be given more often in the diuretic group. 
Antihypertensive treatment based on a beta-blocker or on a thiazide diuretic could not be shown to affect the prevention of hypertensive complications, including CHD, to a different extent." } ]
Journal of Clinical Medicine
29997313
PMC6069472
10.3390/jcm7070173
Automatic Infants’ Pain Assessment by Dynamic Facial Representation: Effects of Profile View, Gestational Age, Gender, and Race
Infants’ early exposure to painful procedures can have negative short- and long-term effects on cognitive, neurological, and brain development. However, infants cannot express their subjective pain experience, as they do not yet communicate through language. Facial expression is the most specific pain indicator and has been effectively employed for automatic pain recognition. In this paper, a dynamic pain facial expression representation and fusion scheme for automatic pain assessment in infants is proposed that combines temporal appearance facial features and temporal geometric facial features. We investigate the effects of various factors that influence pain reactivity in infants, such as the individual variables of gestational age, gender, and race. Different automatic infant pain assessment models are constructed depending on these influencing factors as well as on the facial profile view, which affect the models’ ability to recognize pain. We conclude that profile-based infant pain assessment is feasible, as its performance is almost as good as that of the whole face. Moreover, gestational age is the most influential factor for pain assessment, and it is necessary to construct specific models depending on it, mainly because infants with low gestational age lack behavioral communication ability due to limited neurological development. To the best of our knowledge, this is the first study investigating infants’ pain recognition with attention to profile facial views and various individual variables.
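To make the feature-combination idea in the abstract concrete, the following is a minimal, illustrative Python sketch (not the authors' implementation) of how per-frame appearance descriptors and facial landmark trajectories might be fused into a single dynamic feature vector before classification; the function name, input shapes, and summary statistics are assumptions chosen for illustration only.

```python
import numpy as np

def fuse_dynamic_features(appearance_seq: np.ndarray, landmark_seq: np.ndarray) -> np.ndarray:
    """Fuse temporal appearance and temporal geometric facial features.

    appearance_seq: (T, D) per-frame appearance descriptors (e.g., LBP histograms).
    landmark_seq:   (T, L, 2) per-frame (x, y) facial landmark coordinates.
    Returns one fused feature vector for the whole sequence.
    """
    # Temporal appearance features: summarize each descriptor dimension over time.
    appearance_feat = np.concatenate([appearance_seq.mean(axis=0),
                                      appearance_seq.std(axis=0)])

    # Temporal geometric features: frame-to-frame landmark displacement magnitudes.
    displacement = np.linalg.norm(np.diff(landmark_seq, axis=0), axis=2)  # (T-1, L)
    geometric_feat = np.concatenate([displacement.mean(axis=0),
                                     displacement.max(axis=0)])

    # Feature-level fusion by simple concatenation.
    return np.concatenate([appearance_feat, geometric_feat])
```

The fused vector could then be passed to any standard classifier; the representation and fusion scheme actually used in the paper is described in its methods sections.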
2. Related Work and Contributions
There has been increasing interest in understanding individual behavioral responses to pain based on facial expressions [19,20,21,22,23], body or head movements [24,25], and sound signals (crying) [26,27,28]. Pain-related behavior analysis is non-invasive, and the data are easily acquired with video recording techniques. Evidence indicates that facial expression is the most specific indicator and is more salient and consistent than other behavioral indicators [29,30].
Facial expressions provide insight into an individual’s emotional state, and automatic facial expression analysis (AFEA) is a topic of broad research [31]. However, work on pain expression is far less extensive, especially for infant pain assessment. Several studies have addressed pain facial expression recognition in adults [19,20,23,32,33,34], but methods designed for adult pain assessment may not perform comparably, and may fail completely, in infants for three main reasons. First, facial morphology and dynamics differ between infants and adults, as reported in [35]; moreover, infant facial expressions include additional important action units that are not present in the Facial Action Coding System (FACS), so the Neonatal Facial Coding System (NFACS) was introduced as an extension of FACS [35,36]. Second, infants with different individual variables (such as gestational age) show different pain facial characteristics because of a less developed central nervous system [36]. Third, the preprocessing stage is more challenging for infants, since they are uncooperative subjects recorded in an unconstrained environment. In this paper, we focus on studies of infant pain assessment.
Brahnam et al. [37] used the holistic eigenfaces approach to recognize pain facial expressions in newborns and compared the performance of a distance-based classifier and a Support Vector Machine (SVM) for pain detection. This work was extended by employing Sequential Floating Forward Selection for feature selection and a Neural Network Simultaneous Optimization Algorithm (NNSOA) for classification; an average classification rate of 90.2% was obtained [38]. Gholami et al. [39] applied the Relevance Vector Machine (RVM) to assess infant pain and its intensity; the classification accuracy of the RVM (85%) was close to assessments by experts. Nanni et al. [40] used several histogram-based descriptors to detect infant pain facial expression, including the Local Binary Pattern (LBP), Local Ternary Pattern (LTP), Elongated Ternary Pattern (ELTP), and Elongated Binary Pattern (ELBP); the highest accuracy was achieved by ELTP, with an area under the receiver operating characteristic curve (AUC) of 0.93. The above studies were conducted on the COPE database, the only open-access infant pain database, which consists of 204 static 2D images of 26 infants photographed under pain stimuli. Static images capture facial expression only at isolated instants and ignore the temporal information of pain.
A few studies have recently focused on dynamic pain facial expression analysis. Fotiadou et al. [41] applied the Active Appearance Model (AAM) to extract facial features and global motion for each video frame; an SVM classifier was used for pain detection in 15 videos of eight infants, and the AUC reached 0.98. Zamzmi et al. [42,43] extracted pain-relevant facial features from video sequences by estimating the optical strain magnitudes corresponding to pain facial expressions; SVM and K-nearest neighbor (KNN) classifiers were employed for pain detection, and an overall accuracy of 96% was obtained.
Most AFEA systems focus on facial expression analysis in near-frontal-view recordings, and very few studies investigate the effect of the profile view of the face [44,45]. According to clinical observations, head movements occur commonly during pain experiences; head shaking produces multi-view faces and may cause face detection and pain recognition to fail. Facial expression recognition from a profile view is challenging because much of the facial representation information is lost, and no available research investigates pain facial expression recognition performance on profile views.
According to clinical research, infants with low gestational age have less developed central nervous systems and show limited ability to behaviorally communicate pain compared with full-term or post-term infants [46]. Derbyshire [47] reported higher pain sensitivity in adult females than in males across different situational cases, whereas pain responses in early-age infants were not found to be strongly affected by sex differences. Given the complexity of clinical contexts, infant pain facial expression analysis is more challenging, and contextual and individual factors (i.e., age, gender, and race) are worth considering.
There is growing evidence in psychological research that the temporal dynamics of facial behavior (e.g., the duration of facial activity) are a critical factor in the interpretation of observed behavior [45]. In this paper, we propose a dynamic pain facial expression representation and fusion scheme for automatic infant pain assessment that combines temporal appearance facial features and temporal geometric facial features. Different automatic pain assessment models are constructed to gain a better understanding of the various factors that influence pain reactivity in infants, including gestational age, gender, and race. Moreover, a pain assessment model based on the facial profile view is also investigated. The effectiveness of specific models constructed according to individual variables is analyzed and compared with a general model. To the best of our knowledge, this is the first study investigating infant pain recognition across multiple facial views and various individual variables.
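As a concrete illustration of the static, histogram-descriptor pipelines surveyed above (e.g., LBP features with an SVM classifier), the following is a minimal sketch, assuming aligned grayscale face crops and binary pain/no-pain labels are already available; it is not the implementation of any of the cited works, and the parameter values are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def lbp_histogram(face: np.ndarray, n_points: int = 8, radius: int = 1) -> np.ndarray:
    """Uniform LBP histogram of an aligned grayscale face crop."""
    lbp = local_binary_pattern(face, n_points, radius, method="uniform")
    n_bins = n_points + 2  # uniform patterns plus one bin for all non-uniform patterns
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# faces: list of aligned grayscale face crops; labels: 0 = no pain, 1 = pain (assumed given)
# X = np.vstack([lbp_histogram(face) for face in faces])
# clf = SVC(kernel="rbf", C=1.0, gamma="scale")
# print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

A dynamic variant would compute such descriptors per frame and summarize them over time, which is the direction taken by the approach proposed in this paper.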
[ "15273943", "25433779", "12824463", "16997468", "15231909", "17236892", "20438855", "9220806", "9718246", "18165830", "11420320", "7816493", "19262913", "19225606", "24656830", "24982986", "22837587", "15336280", "12512639", "26985326", "3960577", "26357337", "14600535", "17431293", "15979291", "12240510" ]
[ { "pmid": "15273943", "title": "A systematic integrative review of infant pain assessment tools.", "abstract": "PURPOSE\nTo examine the issue of pain assessment in infants by acquiring all available published pain assessment tools and evaluating their reported reliability, validity, clinical utility, and feasibility.\n\n\nDESIGN AND METHODS\nA systematic integrative review of the literature was conducted using the following databases: MEDLINE and CINAHL (through February 2004), and Health and Psychosocial Instruments, and Cochrane Systematic Reviews (through 2003). MeSH headings searched included \"pain measurement,\" with limit of \"newborn infant\"; \"infant newborn\"; and \"pain perception.\"\n\n\nSUBJECTS\nThirty-five neonatal pain assessment tools were found and evaluated using predetermined criteria. The critique consisted of a structured comparison of the classification and dimensions measured. Further, the population tested and reports of reliability, validity, clinical utility, and feasibility were reviewed.\n\n\nRESULTS\nOf the 35 measures reviewed, 18 were unidimensional and 17 were multidimensional. Six of the multidimensional measures were published as abstracts only, were not published at all, or the original work could not be obtained. None of the existing instruments fulfilled all criteria for an ideal measure; many require further psychometric testing.\n\n\nCONCLUSIONS\nWhen choosing a pain assessment tool, one must also consider the infant population and setting, and the type of pain experienced. The decision should be made after carefully considering the existing published options. Confidence that the instrument will assess pain in a reproducible way is essential, and must be demonstrated with validity and reliability testing. Using an untested instrument is not recommended, and should only occur within a research protocol, with appropriate ethics and parental approval. Because pain is a multidimensional phenomenon, well-tested multidimensional instruments may be preferable." }, { "pmid": "25433779", "title": "[Pain assessment using the Facial Action Coding System. A systematic review].", "abstract": "Self-reporting is the most widely used pain measurement tool, although it may not be useful in patients with loss or deficit in communication skills. The aim of this paper was to undertake a systematic review of the literature of pain assessment through the Facial Action Coding System (FACS). The initial search found 4,335 references and, within the restriction «FACS», these were reduced to 40 (after exclusion of duplicates). Finally, only 26 articles meeting the inclusion criteria were included. Methodological quality was assessed using the GRADE system. Most patients were adults and elderly health conditions, or cognitive deficits and/or chronic pain. Our conclusion is that FACS is a reliable and objective tool in the detection and quantification of pain in all patients." }, { "pmid": "12824463", "title": "Neural correlates of interindividual differences in the subjective experience of pain.", "abstract": "Some individuals claim that they are very sensitive to pain, whereas others say that they tolerate pain well. Yet, it is difficult to determine whether such subjective reports reflect true interindividual experiential differences. 
Using psychophysical ratings to define pain sensitivity and functional magnetic resonance imaging to assess brain activity, we found that highly sensitive individuals exhibited more frequent and more robust pain-induced activation of the primary somatosensory cortex, anterior cingulate cortex, and prefrontal cortex than did insensitive individuals. By identifying objective neural correlates of subjective differences, these findings validate the utility of introspection and subjective reporting as a means of communicating a first-person experience." }, { "pmid": "16997468", "title": "Determining behavioural and physiological responses to pain in infants at risk for neurological impairment.", "abstract": "Multiple researchers have validated indicators and measures of infant pain. However, infants at risk for neurologic impairment (NI) have been under studied. Therefore, whether their pain responses are similar to those of other infants is unknown. Pain responses to heel lance from 149 neonates (GA>25-40 weeks) from 3 Canadian Neonatal Intensive Care units at high (Cohort A, n=54), moderate (Cohort B, n=45) and low (Cohort C, n=50) risk for NI were compared in a prospective observational cohort study. A significant Cohort by Phase interaction for total facial action (F(6,409)=3.50, p=0.0022) and 4 individual facial actions existed; with Cohort C demonstrating the most facial action. A significant Phase effect existed for increased maximum Heart Rate (F(3,431)=58.1, p=0.001), minimum Heart Rate (F(3,431)=78.7, p=0.001), maximum Oxygen saturation (F(3,425)=47.6, p=0.001), and minimum oxygen saturation (F(3,425)=12.2, p=0.001) with no Cohort differences. Cohort B had significantly higher minimum (F(2,79)=3.71, p=0.029), and mean (F(2,79)=4.04, p=0.021) fundamental cry frequencies. A significant Phase effect for low/high frequency Heart Rate Variability (HRV) ratio (F(2,216)=4.97, p=0.008) was found with the greatest decrease in Cohort A. Significant Cohort by Phase interactions existed for low and high frequency HRV. All infants responded to the most painful phase of the heel lance; however, infants at moderate and highest risk for NI exhibited decreased responses in some indicators." }, { "pmid": "15231909", "title": "Specific Newborn Individualized Developmental Care and Assessment Program movements are associated with acute pain in preterm infants in the neonatal intensive care unit.", "abstract": "OBJECTIVE\nThe Newborn Individualized Developmental Care and Assessment Program (NIDCAP) is widely used in neonatal intensive care units and comprises 85 discrete infant behaviors, some of which may communicate infant distress. The objective of this study was to identify developmentally relevant movements indicative of pain in preterm infants.\n\n\nMETHODS\nForty-four preterm infants were assessed at 32 weeks' gestational age (GA) during 3 phases (baseline, lance/squeeze, and recovery) of routine blood collection in the neonatal intensive care unit. The NIDCAP and Neonatal Facial Coding System (NFCS) were coded from separate continuous bedside video recordings; mean heart rate (mHR) was derived from digitally sampled continuous electrographic recordings. Analysis of variance (phase x gender) with Bonferroni corrections was used to compare differences in NIDCAP, NFCS, and mHR. Pearson correlations were used to examine relationships between the NIDCAP and infant background characteristics.\n\n\nRESULTS\nNFCS and mHR increased significantly to lance/squeeze. 
Eight NIDCAP behaviors also increased significantly to lance/squeeze. Another 5 NIDCAP behaviors decreased significantly to lance/squeeze. Infants who had lower GA at birth, had been sicker, had experienced more painful procedures, or had greater morphine exposure showed increased hand movements indicative of increased distress.\n\n\nCONCLUSIONS\nOf the 85 NIDCAP behaviors, a subset of 8 NIDCAP movements were associated with pain. Particularly for infants who are born at early GAs, addition of these movements to commonly used measures may improve the accuracy of pain assessment." }, { "pmid": "17236892", "title": "Altered basal cortisol levels at 3, 6, 8 and 18 months in infants born at extremely low gestational age.", "abstract": "OBJECTIVE\nLittle is known about the developmental trajectory of cortisol levels in preterm infants after hospital discharge.\n\n\nSTUDY DESIGN\nIn a cohort of 225 infants (gestational age at birth <33 weeks) basal salivary cortisol levels were compared in infants born at extremely low gestational age (ELGA, 23-28 weeks), very low gestational age (29-32 weeks), and term (37-42 weeks) at 3, 6, 8, and 18 months corrected age (CA). Infants with major neurosensory or motor impairment were excluded.\n\n\nRESULTS\nAt 3 months CA, salivary cortisol levels were lower in both preterm groups compared with the term infants (P = .003). Conversely, at 8 and 18 months CA, the ELGA infants had significantly higher basal cortisol levels than the very low gestational age and term infants (P = .016 and P = .006, respectively).\n\n\nCONCLUSIONS\nIn ELGA infants, the shift from low basal cortisol levels at 3 months to significantly high levels at 8 and 18 months CA suggests long-term \"resetting\" of endocrine stress systems. Multiple factors may contribute to these higher cortisol levels in the ELGA infants, including physiological immaturity at birth, cumulative stress related to multiple procedures, and mechanical ventilation during lengthy hospitalization. Prolonged elevation of the cortisol \"set-point\" may have negative implications for neurodevelopment and later health." }, { "pmid": "20438855", "title": "Premature infants display increased noxious-evoked neuronal activity in the brain compared to healthy age-matched term-born infants.", "abstract": "This study demonstrates that infants who are born prematurely and who have experienced at least 40days of intensive or special care have increased brain neuronal responses to noxious stimuli compared to healthy newborns at the same postmenstrual age. We have measured evoked potentials generated by noxious clinically-essential heel lances in infants born at term (8 infants; born 37-40weeks) and in infants born prematurely (7 infants; born 24-32weeks) who had reached the same postmenstrual age (mean age at time of heel lance 39.2+/-1.2weeks). These noxious-evoked potentials are clearly distinguishable from shorter latency potentials evoked by non-noxious tactile sensory stimulation. While the shorter latency touch potentials are not dependent on the age of the infant at birth, the noxious-evoked potentials are significantly larger in prematurely-born infants. This enhancement is not associated with specific brain lesions but reflects a functional change in pain processing in the brain that is likely to underlie previously reported changes in pain sensitivity in older ex-preterm children. 
Our ability to quantify and measure experience-dependent changes in infant cortical pain processing will allow us to develop a more rational approach to pain management in neonatal intensive care." }, { "pmid": "9220806", "title": "The FLACC: a behavioral scale for scoring postoperative pain in young children.", "abstract": "PURPOSE\nTo evaluate the reliability and validity of the FLACC Pain Assessment Tool which incorporates five categories of pain behaviors: facial expression; leg movement; activity; cry; and consolability.\n\n\nMETHOD\nEighty-nine children aged 2 months to 7 years, (3.0 +/- 2.0 yrs.) who had undergone a variety of surgical procedures, were observed in the Post Anesthesia Care Unit (PACU). The study consisted of: 1) measuring interrater reliability; 2) testing validity by measuring changes in FLACC scores in response to administration of analgesics; and 3) comparing FLACC scores to other pain ratings.\n\n\nFINDINGS\nThe FLACC tool was found to have high interrater reliability. Preliminary evidence of validity was provided by the significant decrease in FLACC scores related to administration of analgesics. Validity was also supported by the correlation with scores assigned by the Objective Pain Scale (OPS) and nurses' global ratings of pain.\n\n\nCONCLUSIONS\nThe FLACC provides a simple framework for quantifying pain behaviors in children who may not be able to verbalize the presence or severity of pain. Our preliminary data indicates the FLACC pain assessment tool is valid and reliable." }, { "pmid": "9718246", "title": "Bedside application of the Neonatal Facial Coding System in pain assessment of premature neonates.", "abstract": "Assessment of infant pain is a pressing concern, especially within the context of neonatal intensive care where infants may be exposed to prolonged and repeated pain during lengthy hospitalization. In the present study the feasibility of carrying out the complete Neonatal Facial Coding System (NFCS) in real time at bedside, specifically reliability, construct and concurrent validity, was evaluated in a tertiary level Neonatal Intensive Care Unit (NICU). Heel lance was used as a model of procedural pain, and observed with n = 40 infants at 32 weeks gestational age. Infant sleep/wake state, NFCS facial activity and specific hand movements were coded during baseline, unwrap, swab, heel lance, squeezing and recovery events. Heart rate was recorded continuously and digitally sampled using a custom designed computer system. Repeated measures analysis of variance (ANOVA) showed statistically significant differences across events for facial activity (P < 0.0001) and heart rate (P < 0.0001). Planned comparisons showed facial activity unchanged during baseline, swab and unwrap, then increased significantly during heel lance (P < 0.0001), increased further during squeezing (P < 0.003), then decreased during recovery (P < 0.0001). Systematic shifts in sleep/wake state were apparent. Rise in facial activity was consistent with increased heart rate, except that facial activity more closely paralleled initiation of the invasive event. Thus facial display was more specific to tissue damage compared with heart rate. Inter-observer reliability was high. Construct validity of the NFCS at bedside was demonstrated as invasive procedures were distinguished from tactile. 
While bedside coding of behavior does not permit raters to be blind to events, mechanical recording of heart rate allowed for an independent source of concurrent validation for bedside application of the NFCS scale." }, { "pmid": "18165830", "title": "Clinical reliability and validity of the N-PASS: neonatal pain, agitation and sedation scale with prolonged pain.", "abstract": "OBJECTIVE\nTo establish beginning evidence of clinical validity and reliability of the Neonatal Pain, Agitation and Sedation Scale (N-PASS) in neonates with prolonged pain postoperatively and during mechanical ventilation.\n\n\nSTUDY DESIGN\nProspective psychometric evaluation. Two nurses administered the N-PASS simultaneously and independently before and after pharmacologic interventions for pain or sedation. One nurse also administered the premature infant pain profile (PIPP) concurrently with the N-PASS. The setting consisted of 50-bed level III neonatal intensive care unit. Convenience sample of 72 observations of 46 ventilated and/or postoperative infants, 0 to 100 days of age, gestational age 23 to 40 weeks was used. Outcome measures comprised convergent and construct validity, interrater reliability and internal consistency.\n\n\nRESULT\nInterrater reliability measured by intraclass coefficients of 0.85 to 0.95 was high (P<0.001 to 0.0001). Convergent validity was demonstrated by correlation with the PIPP scores (Spearman's rank correlation coefficient of 0.83 at high pain scores, 0.61 at low pain scores). Internal consistency, measured by Cronbach's alpha, was evident with pain scores (0.82), and with sedation scores (0.87). Construct validity was established via the Wilcoxon signed-rank test, comparing the distribution of N-PASS scores before and after pharmacologic intervention showing pain scores of 4.86 (3.38) and 1.81 (1.53) (mean (s.d.), P<0.0001) and sedation scores of 0.85 (1.66) and -2.78 (2.81) (P<0.0001) for pre- and postintervention assessments, respectively.\n\n\nCONCLUSIONS\nThis research provides beginning evidence that the N-PASS is a valid and reliable tool for assessing pain/agitation and sedation in ventilated and/or postoperative infants 0 to 100 days of age, and 23 weeks gestation and above." }, { "pmid": "11420320", "title": "Development and initial validation of the EDIN scale, a new tool for assessing prolonged pain in preterm infants.", "abstract": "OBJECTIVE\nTo develop and validate a scale suitable for use in clinical practice as a tool for assessing prolonged pain in premature infants.\n\n\nMETHODS\nPain indicators identified by observation of preterm infants and selected by a panel of experts were used to develop the EDIN scale (Echelle Douleur Inconfort Nouveau-Né, neonatal pain and discomfort scale). A cohort of preterm infants was studied prospectively to determine construct validity, inter-rater reliability, and internal consistency of the scale.\n\n\nRESULTS\nThe EDIN scale uses five behavioural indicators of prolonged pain: facial activity, body movements, quality of sleep, quality of contact with nurses, and consolability. The validation study included 76 preterm infants with a mean gestational age of 31.5 weeks. Inter-rater reliability was acceptable, with a kappa coefficient range of 0.59-0.74. Internal consistency was high: Cronbach's alpha coefficients calculated after deleting each item ranged from 0.86 to 0.94. 
To establish construct validity, EDIN scores in two extreme situations (pain and no pain) were compared, and a significant difference was observed.\n\n\nCONCLUSIONS\nThe validation data suggest that the EDIN is appropriate for assessing prolonged pain in preterm infants. Further studies are warranted to obtain further evidence of construct validity by comparing scores in less extreme situations." }, { "pmid": "7816493", "title": "Encoding and decoding of pain expressions: a judgement study.", "abstract": "The communication of pain requires a sufferer to encode and transmit the experience and an observer to decode and interpret it. Rosenthal's (1982) model of communication was applied to an analysis of the role of facial expression in the transmission of pain information. Videotapes of patients with shoulder pain undergoing a series of movements of the shoulder were shown to a group of 5 judges. Observers and patients provided ratings of the patients' pain on the same verbal descriptor scales. Analyses addressed relationships among patients' pain reports, observers' judgements of patients' pain and measures of patients' facial expressions based on the Facial Action Coding System. The results indicated that although observers can make coarse distinctions among patients' pain states, they (1) are not especially sensitive, and (2) are likely to systematically downgrade the intensity of patients' suffering. Moreover, observers appear to make insufficient use of information that is available in patients' facial expression. Implications of the findings for pain patients and for training of health-care workers are discussed as are directions for future research." }, { "pmid": "19262913", "title": "Assessing pain in infancy: the caregiver context.", "abstract": "BACKGROUND\nPain is largely accepted as being influenced by social context. Unlike most other developmental stages throughout the lifespan, infancy is marked by complete dependence on the caregiver. The present paper discusses the primary importance of understanding the caregiver context when assessing infant pain expression.\n\n\nOBJECTIVES\nBased on a review of research from both the infant pain and infant mental health fields, three lines of evidence are presented. First, pain assessment is as subjective as the pain experience itself. Second, assessors must be cognizant of the relationship between infant pain expression, and caregiver sensitivity and emotional displays. Finally, larger systemic factors of the infant (such as caregiver relationship styles, caregiver psychological distress or caregiver acculturative stress) directly impact on infant expression.\n\n\nCONCLUSIONS\nAs a result of infants' inability to give a self-report of their pain experience, caregivers play a crucial role in assessing the pain and taking appropriate action to manage it. Caregiver behaviours and predispositions have been shown to have a significant impact on infant pain reactivity and, accordingly, should not be ignored when assessing the infant in pain." 
}, { "pmid": "19225606", "title": "Understanding caregiver judgments of infant pain: contrasts of parents, nurses and pediatricians.", "abstract": "BACKGROUND\nResearch suggests that caregivers' beliefs pertaining to infant pain and which infant pain cues are perceived to be important play an integral role in pediatric pain assessment and management.\n\n\nOBJECTIVES\nFollowing a recent quasi-experimental study reporting on caregiver background and age differences in actual infant pain judgments, the present study clarified these findings by analyzing caregivers' pain beliefs and the cues they use to make pain assessments, and by examining how the wording of belief questions influenced caregivers' responses.\n\n\nMETHODS\nAfter making pain judgments based on video footage of infants between two and 18 months of age receiving immunizations, parents, nurses and pediatricians were required to respond to questionnaires regarding pain beliefs and importance of cues.\n\n\nRESULTS\nParents generally differed from pediatricians. Parents tended to have less optimal beliefs regarding medicating the youngest infants, were more influenced by question wording, and reported using many more cues when judging older infants than other caregiver groups. In terms of beliefs, influence of question wording and cue use, nurses tended to fall in between both groups; they displayed similarities to both parents and pediatricians.\n\n\nCONCLUSIONS\nParalleling the original findings on pain judgments, these findings suggest that parents differ from pediatricians in their pain beliefs and the cues they use to make pain judgments. Moreover, some similarities were found between parents and nurses, and between nurses and pediatricians. Finally, caution must be taken when interpreting research pertaining to beliefs about infant pain because question wording appears to influence interpretation." }, { "pmid": "24656830", "title": "Automatic decoding of facial movements reveals deceptive pain expressions.", "abstract": "In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8-11]. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling." 
}, { "pmid": "24982986", "title": "Pain expression recognition based on pLSA model.", "abstract": "We present a new approach to automatically recognize the pain expression from video sequences, which categorize pain as 4 levels: \"no pain,\" \"slight pain,\" \"moderate pain,\" and \" severe pain.\" First of all, facial velocity information, which is used to characterize pain, is determined using optical flow technique. Then visual words based on facial velocity are used to represent pain expression using bag of words. Final pLSA model is used for pain expression recognition, in order to improve the recognition accuracy, the class label information was used for the learning of the pLSA model. Experiments were performed on a pain expression dataset built by ourselves to test and evaluate the proposed method, the experiment results show that the average recognition accuracy is over 92%, which validates its effectiveness." }, { "pmid": "22837587", "title": "The Painful Face - Pain Expression Recognition Using Active Appearance Models.", "abstract": "Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or in some circumstances (i.e., young children and the severely ill) not even possible. To circumvent these problems behavioral scientists have identified reliable and valid facial indicators of pain. Hitherto, these methods have required manual measurement by highly skilled human observers. In this paper we explore an approach for automatically recognizing acute pain without the need for human observers. Specifically, our study was restricted to automatically detecting pain in adult patients with rotator cuff injuries. The system employed video input of the patients as they moved their affected and unaffected shoulder. Two types of ground truth were considered. Sequence-level ground truth consisted of Likert-type ratings by skilled observers. Frame-level ground truth was calculated from presence/absence and intensity of facial actions previously associated with pain. Active appearance models (AAM) were used to decouple shape and appearance in the digitized face images. Support vector machines (SVM) were compared for several representations from the AAM and of ground truth of varying granularity. We explored two questions pertinent to the construction, design and development of automatic pain detection systems. First, at what level (i.e., sequence- or frame-level) should datasets be labeled in order to obtain satisfactory automatic pain detection performance? Second, how important is it, at both levels of labeling, that we non-rigidly register the face?" }, { "pmid": "15336280", "title": "Ambulatory system for the quantitative and qualitative analysis of gait and posture in chronic pain patients treated with spinal cord stimulation.", "abstract": "The physical activity in normal daily life is determined to a large extent by the functional ability of a subject. As a result, the measurement of the physical activity that a subject performs spontaneously could be a useful and objective measurement of disability, particularly in patients with disease-related functional impairment. The aim of this study is to provide an accurate method for the measurement and analysis of the physical activity under normal life conditions. Using three kinematical sensors strapped to the body, both the posture and the gait parameters can be assessed qualitatively and quantitatively. 
BMC Medical Informatics and Decision Making
30066643
PMC6069692
10.1186/s12911-018-0627-5
Identifying direct temporal relations between time and events from clinical notes
BackgroundMost of the current work on clinical temporal relation identification follows the convention developed in the general domain, aiming to identify a comprehensive set of temporal relations from a document, including both explicit and implicit relations. While such a comprehensive set can represent the temporal information in a document completely, some of the relations in the set may not be essential, depending on the clinical application of interest. Moreover, because the types of evidence needed to identify explicit and implicit relations differ, current clinical temporal relation identification systems that target both kinds of relations still show performance too low for practical use.MethodsIn this paper, we propose to focus on a sub-task of the conventional temporal relation identification task in order to provide insight into building practical temporal relation identification modules for clinical text. We focus on the identification of direct temporal relations, a subset of temporal relations chosen to minimize the amount of inference required to identify them. A corpus of direct temporal relations between time expressions and event mentions is constructed, and an automatic system tailored to direct temporal relations is developed.ResultsIt is shown that direct temporal relations constitute a major category of temporal relations and contain important information needed for clinical applications. The system optimized for direct temporal relations achieves better performance than a state-of-the-art system developed with the comprehensive set of both explicit and implicit relations in mind.ConclusionsWe expect direct temporal relations to facilitate the development of practical temporal information extraction tools in the clinical domain.
Related work
The task of temporal relation identification from clinical narratives has been tackled with a variety of approaches, including machine-learning frameworks such as SVM [8–10], Markov Logic Network (MLN) [16], and structured learning [17]. In many systems, the entire set of temporal relations is decomposed into several groups based on their characteristics. For instance, the Vanderbilt system [10] divides the temporal relations into six groups (i.e., event-admission time relations, event-discharge time relations, intra-sentential event-event relations, intra-sentential event-time relations, inter-sentential relations across consecutive sentences, and inter-sentential relations with co-references), and trains a separate SVM classifier for each group. A similar approach is adopted by other systems that use SVM [8, 9]. These systems differentiate intra-sentential relations from inter-sentential relations, but do not differentiate implicit relations within a sentence from explicitly stated relations.
Other work focuses on identifying implicit relations. Xu et al. [16] train 10 separate SVM classifiers to identify both explicit and implicit relations, and then apply MLN to further infer implicit relations based on the results produced by the SVM classifiers. Leeuwenberg and Moens [17] use a structured perceptron model that jointly learns the relations between events and the document-creation time and the relations between events and time expressions in the text. Model training and prediction are performed at the document level using global features that can exploit local evidence. While these systems report improved performance through enhanced identification of implicit relations, they do not include any specialized method for explicit relations.
The rest of this paper is organized as follows: in the METHODS Section, we first introduce direct temporal relations and describe the procedure used to construct a corpus of direct temporal relations (Section Direct Temporal Relations). We then introduce an automatic relation identification system tailored to the direct relations (Section Automatic Identification System). After that, we detail the experiments conducted in this paper (Section Experimental Setup). The results of the experiments are reported in Section RESULTS, followed by Section DISCUSSION and Section CONCLUSION.
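To make the per-group strategy concrete, here is a minimal sketch (not the cited systems' actual code) of training one SVM classifier per relation group with scikit-learn. The group names, the simple word n-gram features, and the label set are illustrative assumptions rather than details taken from the papers above.

```python
# Minimal sketch of "one SVM per temporal-relation group": each candidate pair
# is routed to a classifier trained only on its group. Features are simplified
# to word n-grams of the context text; real systems add lexical, syntactic and
# positional features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

GROUPS = (
    "event_admission", "event_discharge",
    "intra_event_event", "intra_event_time",
    "inter_consecutive", "inter_coreference",
)

def train_group_classifiers(candidates):
    """candidates: list of (group, context_text, label) triples,
    where label is e.g. BEFORE / AFTER / OVERLAP / NONE."""
    classifiers = {}
    for group in GROUPS:
        texts = [t for g, t, _ in candidates if g == group]
        labels = [y for g, _, y in candidates if g == group]
        if len(set(labels)) > 1:          # need at least two classes to train
            clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
            classifiers[group] = clf.fit(texts, labels)
    return classifiers

def classify(classifiers, group, context_text):
    clf = classifiers.get(group)
    return clf.predict([context_text])[0] if clf is not None else "NONE"

# Toy usage with invented examples; a real system would generate candidate
# pairs from annotated clinical notes.
toy = [
    ("intra_event_time", "started on antibiotics on 06/12", "OVERLAP"),
    ("intra_event_time", "chest pain resolved before admission", "BEFORE"),
    ("intra_event_time", "fever noted after surgery", "AFTER"),
]
clfs = train_group_classifiers(toy)
print(classify(clfs, "intra_event_time", "rash resolved before discharge"))
```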
[ "23872518", "23571849", "17947618", "23564629", "23467472", "27585838", "19025688", "18433469" ]
[ { "pmid": "23872518", "title": "Annotating temporal information in clinical narratives.", "abstract": "Temporal information in clinical narratives plays an important role in patients' diagnosis, treatment and prognosis. In order to represent narrative information accurately, medical natural language processing (MLP) systems need to correctly identify and interpret temporal information. To promote research in this area, the Informatics for Integrating Biology and the Bedside (i2b2) project developed a temporally annotated corpus of clinical narratives. This corpus contains 310 de-identified discharge summaries, with annotations of clinical events, temporal expressions and temporal relations. This paper describes the process followed for the development of this corpus and discusses annotation guideline development, annotation methodology, and corpus quality." }, { "pmid": "23571849", "title": "A hybrid system for temporal information extraction from clinical text.", "abstract": "OBJECTIVE\nTo develop a comprehensive temporal information extraction system that can identify events, temporal expressions, and their temporal relations in clinical text. This project was part of the 2012 i2b2 clinical natural language processing (NLP) challenge on temporal information extraction.\n\n\nMATERIALS AND METHODS\nThe 2012 i2b2 NLP challenge organizers manually annotated 310 clinic notes according to a defined annotation guideline: a training set of 190 notes and a test set of 120 notes. All participating systems were developed on the training set and evaluated on the test set. Our system consists of three modules: event extraction, temporal expression extraction, and temporal relation (also called Temporal Link, or 'TLink') extraction. The TLink extraction module contains three individual classifiers for TLinks: (1) between events and section times, (2) within a sentence, and (3) across different sentences. The performance of our system was evaluated using scripts provided by the i2b2 organizers. Primary measures were micro-averaged Precision, Recall, and F-measure.\n\n\nRESULTS\nOur system was among the top ranked. It achieved F-measures of 0.8659 for temporal expression extraction (ranked fourth), 0.6278 for end-to-end TLink track (ranked first), and 0.6932 for TLink-only track (ranked first) in the challenge. We subsequently investigated different strategies for TLink extraction, and were able to marginally improve performance with an F-measure of 0.6943 for TLink-only track." }, { "pmid": "17947618", "title": "The evaluation of a temporal reasoning system in processing clinical discharge summaries.", "abstract": "CONTEXT\nTimeText is a temporal reasoning system designed to represent, extract, and reason about temporal information in clinical text.\n\n\nOBJECTIVE\nTo measure the accuracy of the TimeText for processing clinical discharge summaries.\n\n\nDESIGN\nSix physicians with biomedical informatics training served as domain experts. Twenty discharge summaries were randomly selected for the evaluation. For each of the first 14 reports, 5 to 8 clinically important medical events were chosen. The temporal reasoning system generated temporal relations about the endpoints (start or finish) of pairs of medical events. Two experts (subjects) manually generated temporal relations for these medical events. The system and expert-generated results were assessed by four other experts (raters). 
All of the twenty discharge summaries were used to assess the system's accuracy in answering time-oriented clinical questions. For each report, five to ten clinically plausible temporal questions about events were generated. Two experts generated answers to the questions to serve as the gold standard. We wrote queries to retrieve answers from system's output.\n\n\nMEASUREMENTS\nCorrectness of generated temporal relations, recall of clinically important relations, and accuracy in answering temporal questions.\n\n\nRESULTS\nThe raters determined that 97% of subjects' 295 generated temporal relations were correct and that 96.5% of the system's 995 generated temporal relations were correct. The system captured 79% of 307 temporal relations determined to be clinically important by the subjects and raters. The system answered 84% of the temporal questions correctly.\n\n\nCONCLUSION\nThe system encoded the majority of information identified by experts, and was able to answer simple temporal questions." }, { "pmid": "23564629", "title": "Evaluating temporal relations in clinical text: 2012 i2b2 Challenge.", "abstract": "BACKGROUND\nThe Sixth Informatics for Integrating Biology and the Bedside (i2b2) Natural Language Processing Challenge for Clinical Records focused on the temporal relations in clinical narratives. The organizers provided the research community with a corpus of discharge summaries annotated with temporal information, to be used for the development and evaluation of temporal reasoning systems. 18 teams from around the world participated in the challenge. During the workshop, participating teams presented comprehensive reviews and analysis of their systems, and outlined future research directions suggested by the challenge contributions.\n\n\nMETHODS\nThe challenge evaluated systems on the information extraction tasks that targeted: (1) clinically significant events, including both clinical concepts such as problems, tests, treatments, and clinical departments, and events relevant to the patient's clinical timeline, such as admissions, transfers between departments, etc; (2) temporal expressions, referring to the dates, times, durations, or frequencies phrases in the clinical text. The values of the extracted temporal expressions had to be normalized to an ISO specification standard; and (3) temporal relations, between the clinical events and temporal expressions. Participants determined pairs of events and temporal expressions that exhibited a temporal relation, and identified the temporal relation between them.\n\n\nRESULTS\nFor event detection, statistical machine learning (ML) methods consistently showed superior performance. While ML and rule based methods seemed to detect temporal expressions equally well, the best systems overwhelmingly adopted a rule based approach for value normalization. For temporal relation classification, the systems using hybrid approaches that combined ML and heuristics based methods produced the best results." }, { "pmid": "23467472", "title": "An end-to-end system to identify temporal relation in discharge summaries: 2012 i2b2 challenge.", "abstract": "OBJECTIVE\nTo create an end-to-end system to identify temporal relation in discharge summaries for the 2012 i2b2 challenge. The challenge includes event extraction, timex extraction, and temporal relation identification.\n\n\nDESIGN\nAn end-to-end temporal relation system was developed. 
It includes three subsystems: an event extraction system (conditional random fields (CRF) name entity extraction and their corresponding attribute classifiers), a temporal extraction system (CRF name entity extraction, their corresponding attribute classifiers, and context-free grammar based normalization system), and a temporal relation system (10 multi-support vector machine (SVM) classifiers and a Markov logic networks inference system) using labeled sequential pattern mining, syntactic structures based on parse trees, and results from a coordination classifier. Micro-averaged precision (P), recall (R), averaged P&R (P&R), and F measure (F) were used to evaluate results.\n\n\nRESULTS\nFor event extraction, the system achieved 0.9415 (P), 0.8930 (R), 0.9166 (P&R), and 0.9166 (F). The accuracies of their type, polarity, and modality were 0.8574, 0.8585, and 0.8560, respectively. For timex extraction, the system achieved 0.8818, 0.9489, 0.9141, and 0.9141, respectively. The accuracies of their type, value, and modifier were 0.8929, 0.7170, and 0.8907, respectively. For temporal relation, the system achieved 0.6589, 0.7129, 0.6767, and 0.6849, respectively. For end-to-end temporal relation, it achieved 0.5904, 0.5944, 0.5921, and 0.5924, respectively. With the F measure used for evaluation, we were ranked first out of 14 competing teams (event extraction), first out of 14 teams (timex extraction), third out of 12 teams (temporal relation), and second out of seven teams (end-to-end temporal relation).\n\n\nCONCLUSIONS\nThe system achieved encouraging results, demonstrating the feasibility of the tasks defined by the i2b2 organizers. The experiment result demonstrates that both global and local information is useful in the 2012 challenge." }, { "pmid": "27585838", "title": "Leveraging syntactic and semantic graph kernels to extract pharmacokinetic drug drug interactions from biomedical literature.", "abstract": "BACKGROUND\nInformation about drug-drug interactions (DDIs) supported by scientific evidence is crucial for establishing computational knowledge bases for applications like pharmacovigilance. Since new reports of DDIs are rapidly accumulating in the scientific literature, text-mining techniques for automatic DDI extraction are critical. We propose a novel approach for automated pharmacokinetic (PK) DDI detection that incorporates syntactic and semantic information into graph kernels, to address the problem of sparseness associated with syntactic-structural approaches. First, we used a novel all-path graph kernel using shallow semantic representation of sentences. Next, we statistically integrated fine-granular semantic classes into the dependency and shallow semantic graphs.\n\n\nRESULTS\nWhen evaluated on the PK DDI corpus, our approach significantly outperformed the original all-path graph kernel that is based on dependency structure. Our system that combined dependency graph kernel with semantic classes achieved the best F-scores of 81.94 % for in vivo PK DDIs and 69.34 % for in vitro PK DDIs, respectively. Further, combining shallow semantic graph kernel with semantic classes achieved the highest precisions of 84.88 % for in vivo PK DDIs and 74.83 % for in vitro PK DDIs, respectively.\n\n\nCONCLUSIONS\nWe presented a graph kernel based approach to combine syntactic and semantic information for extracting pharmacokinetic DDIs from Biomedical Literature. 
Experimental results showed that our proposed approach could extract PK DDIs from literature effectively, which significantly enhanced the performance of the original all-path graph kernel based on dependency structure." }, { "pmid": "19025688", "title": "All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning.", "abstract": "BACKGROUND\nAutomated extraction of protein-protein interactions (PPI) is an important and widely studied task in biomedical text mining. We propose a graph kernel based approach for this task. In contrast to earlier approaches to PPI extraction, the introduced all-paths graph kernel has the capability to make use of full, general dependency graphs representing the sentence structure.\n\n\nRESULTS\nWe evaluate the proposed method on five publicly available PPI corpora, providing the most comprehensive evaluation done for a machine learning based PPI-extraction system. We additionally perform a detailed evaluation of the effects of training and testing on different resources, providing insight into the challenges involved in applying a system beyond the data it was trained on. Our method is shown to achieve state-of-the-art performance with respect to comparable evaluations, with 56.4 F-score and 84.8 AUC on the AImed corpus.\n\n\nCONCLUSION\nWe show that the graph kernel approach performs on state-of-the-art level in PPI extraction, and note the possible extension to the task of extracting complex interactions. Cross-corpus results provide further insight into how the learning generalizes beyond individual corpora. Further, we identify several pitfalls that can make evaluations of PPI-extraction systems incomparable, or even invalid. These include incorrect cross-validation strategies and problems related to comparing F-score results achieved on different evaluation resources. Recommendations for avoiding these pitfalls are provided." }, { "pmid": "18433469", "title": "Extraction of semantic biomedical relations from text using conditional random fields.", "abstract": "BACKGROUND\nThe increasing amount of published literature in biomedicine represents an immense source of knowledge, which can only efficiently be accessed by a new generation of automated information extraction tools. Named entity recognition of well-defined objects, such as genes or proteins, has achieved a sufficient level of maturity such that it can form the basis for the next step: the extraction of relations that exist between the recognized entities. Whereas most early work focused on the mere detection of relations, the classification of the type of relation is also of great importance and this is the focus of this work. In this paper we describe an approach that extracts both the existence of a relation and its type. Our work is based on Conditional Random Fields, which have been applied with much success to the task of named entity recognition.\n\n\nRESULTS\nWe benchmark our approach on two different tasks. The first task is the identification of semantic relations between diseases and treatments. The available data set consists of manually annotated PubMed abstracts. The second task is the identification of relations between genes and diseases from a set of concise phrases, so-called GeneRIF (Gene Reference Into Function) phrases. In our experimental setting, we do not assume that the entities are given, as is often the case in previous relation extraction work. Rather the extraction of the entities is solved as a subproblem. 
Compared with other state-of-the-art approaches, we achieve very competitive results on both data sets. To demonstrate the scalability of our solution, we apply our approach to the complete human GeneRIF database. The resulting gene-disease network contains 34758 semantic associations between 4939 genes and 1745 diseases. The gene-disease network is publicly available as a machine-readable RDF graph.\n\n\nCONCLUSION\nWe extend the framework of Conditional Random Fields towards the annotation of semantic relations from text and apply it to the biomedical domain. Our approach is based on a rich set of textual features and achieves a performance that is competitive to leading approaches. The model is quite general and can be extended to handle arbitrary biological entities and relation types. The resulting gene-disease network shows that the GeneRIF database provides a rich knowledge source for text mining. Current work is focused on improving the accuracy of detection of entities as well as entity boundaries, which will also greatly improve the relation extraction performance." } ]
BMC Medical Informatics and Decision Making
30066651
PMC6069806
10.1186/s12911-018-0630-x
Evaluating semantic relations in neural word embeddings with biomedical and general domain knowledge bases
BackgroundIn the past few years, neural word embeddings have been widely used in text mining. However, the vector representations of word embeddings mostly act as a black box in downstream applications using them, thereby limiting their interpretability. Even though word embeddings are able to capture semantic regularities in free text documents, it is not clear how different kinds of semantic relations are represented by word embeddings and how semantically related terms can be retrieved from them.MethodsTo improve the transparency of word embeddings and the interpretability of the applications using them, in this study, we propose a novel approach for evaluating the semantic relations in word embeddings using external knowledge bases: Wikipedia, WordNet and Unified Medical Language System (UMLS). We trained multiple word embeddings using health-related articles in Wikipedia and then evaluated their performance in the analogy and semantic relation term retrieval tasks. We also assessed whether the evaluation results depend on the domain of the textual corpora by comparing the embeddings of health-related Wikipedia articles with those of general Wikipedia articles.ResultsRegarding the retrieval of semantic relations, we were able to retrieve semantically related terms from the trained word embeddings. Meanwhile, the two popular word embedding approaches, Word2vec and GloVe, obtained comparable results on both the analogy retrieval task and the semantic relation retrieval task, while dependency-based word embeddings had much worse performance in both tasks. We also found that the word embeddings trained with health-related Wikipedia articles obtained better performance in the health-related relation retrieval tasks than those trained with general Wikipedia articles.ConclusionIt is evident from this study that word embeddings can group terms with diverse semantic relations together. The domain of the training corpus does have an impact on the semantic relations represented by word embeddings. We thus recommend using a domain-specific corpus to train word embeddings for domain-specific text mining tasks.
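As an illustration of the two evaluation tasks named above (analogy retrieval and semantic relation term retrieval), the sketch below uses gensim, assuming gensim 4 or later for the vector_size/epochs keyword names. The three-sentence corpus and the query terms are toy placeholders, not the study's actual Wikipedia training data or knowledge-base relation pairs, so the neighbours it prints are not meaningful; the point is the shape of the two tasks.

```python
# Toy sketch of analogy retrieval and related-term retrieval on a Word2vec
# (skip-gram with negative sampling) model trained with gensim.
from gensim.models import Word2Vec

sentences = [["insulin", "treats", "diabetes"],
             ["metformin", "treats", "diabetes"],
             ["aspirin", "treats", "headache"]]          # stand-in corpus
model = Word2Vec(sentences, vector_size=50, window=5, sg=1,
                 negative=5, min_count=1, epochs=50)

# Analogy retrieval: a is to b as c is to ?  (vector arithmetic b - a + c).
print(model.wv.most_similar(positive=["diabetes", "aspirin"],
                            negative=["insulin"], topn=3))

# Semantic relation term retrieval: nearest neighbours of a query term, to be
# checked against knowledge-base relation pairs (e.g., UMLS "may_treat").
print(model.wv.most_similar("diabetes", topn=3))
```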
Related work
Neural word embeddings
In an early work, Lund et al. [16] introduced HAL (Hyperspace Analogue to Language), which uses a sliding window to capture co-occurrence information. By moving the ramped window through the corpus, a co-occurrence matrix is formed. The value of each cell of the matrix is the number of co-occurrences of the corresponding word pair in the text corpus. HAL is a robust unsupervised word embedding method that can represent certain kinds of semantic relations. However, it suffers from the sparseness of the matrix.
In 2003, Bengio et al. [17] proposed a neural probabilistic language model to learn distributed representations of words. In their model, a word is represented as a distributed feature vector. The joint probability function of a word sequence is a smooth function, so a small change in the sequence of word vectors induces only a small change in probability. This implies that similar words have similar feature vectors. For example, the sentence "A dog is walking in the bedroom" can be changed to "A cat is walking in the bedroom" by replacing dog with cat. This model outperforms the N-gram model in many text mining tasks by a large margin but suffers from high computational complexity.
Later, Mikolov et al. [18] proposed the Recurrent Neural Network Language Model (RNNLM). RNNLM is a powerful embedding model because the recurrent network incorporates the entire input history through its recursively updated short-term memory. It outperforms the traditional N-gram model. Nevertheless, one of the shortcomings of RNNLM is the computational complexity of its hidden layer. In 2013, Mikolov et al. [1, 2] proposed a simplified RNNLM, using multiple simple networks, in Word2vec. It assumes that training a simple network with much more data can achieve performance similar to that of more complex networks such as RNNs. Word2vec can efficiently cluster similar words together and predict linguistic regularities, such as "man is to woman as king is to queen". The Word2vec method based on skip-gram with negative sampling [1, 2] is widely used mainly because of its accompanying software package, which enabled efficient training of dense word representations and straightforward integration into downstream models. Word2vec uses techniques such as negative sampling and sub-sampling to reduce the computational complexity. To a certain extent, Word2vec successfully promoted word embeddings to become the de facto input in many recent text mining and NLP projects.
Pennington et al. [3] argued that Word2vec does not sufficiently utilize the global statistics of word co-occurrences. They proposed a new embedding model, GloVe, which incorporates the statistics of the entire corpus explicitly by using all the word-word co-occurrence counts. The computational complexity is reduced by including only non-zero count entries. In [3], GloVe significantly outperforms Word2vec on semantic analogy tasks.
Most embedding models use the words surrounding the target word as the context, based on the assumption that words with similar contexts have similar meanings [19]. Levy and Goldberg [10] proposed dependency-based word embeddings, arguing that syntactic dependencies are more inclusive and focused. The idea is that syntactic dependency information can be used to skip words that are close to the target word but unrelated to it, while capturing distantly related words that fall outside the context window.
Their results showed that dependency-based word embeddings captured less topical similarity but more functional similarity.
To improve the interpretability of Word2vec, Levy and Goldberg [20] illustrated that Word2vec implicitly factorizes a word-context matrix whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. Arora et al. [21] proposed a generative random walk model to provide theoretical justifications for nonlinear models like PMI, Word2vec, and GloVe, as well as for some hyper-parameter choices.
Evaluation of word embeddings
Lund and Burgess's experiments based on HAL [16] demonstrated that the nearest neighbors of a word have certain relations to the word. However, they did not investigate the specific types of relations that these nearest neighbors have with the word. Mikolov et al. [1] demonstrated that neural word embeddings can effectively capture analogy relations. They also released a widely used analogy and syntactic evaluation dataset. Finkelstein et al. [22] released another widely used dataset for word relation evaluation, WordSim-353, which provides graded relatedness judgments between words rather than specific relation types.
Ono et al. [23] leveraged supervised synonym and antonym information from thesauri, as well as the objectives of the Skip-Gram Negative Sampling (SGNS) model, to detect antonyms from unlabeled text. They reached state-of-the-art accuracy on the GRE antonym questions task.
Schnabel et al. [24] presented a comprehensive evaluation method for word embedding models, which used both the widely used evaluation datasets from Baroni et al. [2, 25] and a dataset manually labeled by themselves. They categorized the evaluation tasks into three classes: absolute intrinsic, coherence, and extrinsic. Their method involves extensive manual labeling of the correlation of words, for which they leveraged crowdsourcing on Amazon Mechanical Turk (MTurk). In our study, we investigated the relations among terms in an automated fashion.
Levy and Goldberg [20] showed that skip-gram with negative sampling implicitly factorizes a word-context matrix whose cells are the Pointwise Mutual Information (PMI) of the corresponding word and its context, shifted by a constant, i.e., M = PMI - log(k), where k is the number of negative samples. Later, they [26] systematically evaluated and compared four word embedding methods: the PPMI (Positive Pointwise Mutual Information) matrix, SVD (Singular Value Decomposition) factorization of the PPMI matrix, Skip-Gram Negative Sampling (SGNS), and GloVe, with nine hyperparameters. The results showed that none of these methods consistently outperforms the others with the same hyperparameters. They also found that tuning the hyperparameters had a greater impact on performance than the choice of algorithm.
Zhu et al. [27] recently examined Word2vec's ability to derive semantic relatedness and similarity between biomedical terms from large publication data. They preprocessed and grouped over 18 million PubMed abstracts and over 750k full-text articles from PubMed Central into subsets by recency, size, and section, and trained Word2vec models on these subsets. Cosine similarities between biomedical terms obtained from the Word2vec models were compared against reference standards. They found that increasing the size of the dataset does not always enhance performance.
A larger dataset can result in the identification of more relations between biomedical terms, but it does not guarantee better precision.
Visualization of word embeddings
Recently, Liu et al. [28] presented an embedding technique for visualizing semantic and syntactic analogies, and performed tests to determine whether the resulting visualizations capture the salient structure of the word embeddings generated with Word2vec and GloVe. Principal Component Analysis (PCA) projections, cosine distance histograms, and semantic axes were used as the visualization techniques. In our work, we also explored other types of relations that are relevant to medicine, e.g., morphology and finding site. Google released the Embedding Projector [29], an embedding visualization tool in the TensorFlow framework [32] that includes PCA [30] and t-SNE [31].
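The shifted-PMI observation of Levy and Goldberg cited above lends itself to a short worked example. The sketch below computes the matrix cell value PMI(w, c) - log(k) from a toy co-occurrence table; the counts are invented for illustration, and a real implementation would build the table from a corpus and, for the PPMI variant, clip negative cells to zero.

```python
# Minimal sketch of the word-context matrix that SGNS implicitly factorizes:
# cell(w, c) = PMI(w, c) - log(k), with k the number of negative samples.
import math
from collections import Counter

cooc = Counter({("heart", "attack"): 20, ("heart", "rate"): 10,
                ("panic", "attack"): 8, ("panic", "rate"): 2})   # toy counts
total = sum(cooc.values())
w_tot, c_tot = Counter(), Counter()
for (w, c), n in cooc.items():
    w_tot[w] += n
    c_tot[c] += n

def shifted_pmi(w, c, k=5):
    p_wc = cooc[(w, c)] / total
    p_w, p_c = w_tot[w] / total, c_tot[c] / total
    return math.log(p_wc / (p_w * p_c)) - math.log(k)

for pair in cooc:
    print(pair, round(shifted_pmi(*pair), 3))
```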
[ "28866574", "28359728" ]
[ { "pmid": "28866574", "title": "Visual Exploration of Semantic Relationships in Neural Word Embeddings.", "abstract": "Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). However, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. In particular, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. Here, we introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools." }, { "pmid": "28359728", "title": "Enriching consumer health vocabulary through mining a social Q&A site: A similarity-based approach.", "abstract": "The widely known vocabulary gap between health consumers and healthcare professionals hinders information seeking and health dialogue of consumers on end-user health applications. The Open Access and Collaborative Consumer Health Vocabulary (OAC CHV), which contains health-related terms used by lay consumers, has been created to bridge such a gap. Specifically, the OAC CHV facilitates consumers' health information retrieval by enabling consumer-facing health applications to translate between professional language and consumer friendly language. To keep up with the constantly evolving medical knowledge and language use, new terms need to be identified and added to the OAC CHV. User-generated content on social media, including social question and answer (social Q&A) sites, afford us an enormous opportunity in mining consumer health terms. Existing methods of identifying new consumer terms from text typically use ad-hoc lexical syntactic patterns and human review. Our study extends an existing method by extracting n-grams from a social Q&A textual corpus and representing them with a rich set of contextual and syntactic features. Using K-means clustering, our method, simiTerm, was able to identify terms that are both contextually and syntactically similar to the existing OAC CHV terms. We tested our method on social Q&A corpora on two disease domains: diabetes and cancer. Our method outperformed three baseline ranking methods. A post-hoc qualitative evaluation by human experts further validated that our method can effectively identify meaningful new consumer terms on social Q&A." } ]
Scientific Reports
30076341
PMC6076239
10.1038/s41598-018-30044-1
Reciprocal Perspective for Improved Protein-Protein Interaction Prediction
All protein-protein interaction (PPI) predictors require the determination of an operational decision threshold when differentiating positive PPIs from negatives. Historically, a single global threshold, typically optimized via cross-validation testing, is applied to all protein pairs. However, we here use data visualization techniques to show that no single decision threshold is suitable for all protein pairs, given the inherent diversity of protein interaction profiles. The recent development of high-throughput PPI predictors has enabled the comprehensive scoring of all possible protein-protein pairs. This, in turn, provides context, enabling us to evaluate a putative PPI against all possible predictions. Leveraging this context, we introduce a novel modeling framework called Reciprocal Perspective (RP), which estimates a localized threshold on a per-protein basis using several rank order metrics. By considering a putative PPI from the perspective of each of the proteins within the pair, RP rescores the predicted PPI and applies a cascaded Random Forest classifier, leading to improvements in recall and precision. We here validate RP using two state-of-the-art PPI predictors, the Protein-protein Interaction Prediction Engine and the Scoring PRotein INTeractions methods, over five organisms: Homo sapiens, Saccharomyces cerevisiae, Arabidopsis thaliana, Caenorhabditis elegans, and Mus musculus. Results demonstrate that the application of a post hoc RP rescoring layer significantly improves classification (p < 0.001) in all cases over all organisms, and that this new rescoring approach can be applied to any PPI prediction method.
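The idea of rescoring a pair from the perspective of both of its proteins can be sketched loosely as follows; the rank-based features and the single (non-cascaded) random forest below are illustrative simplifications under assumed data structures, not the RP framework's actual feature definitions or cascade.

```python
# Loose sketch: build per-protein rank features for a candidate pair (a, b)
# from BOTH partners' score profiles, then rescore with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def reciprocal_features(scores, a, b):
    """scores: {protein: {partner: predicted_score}} over the whole interactome."""
    def one_sided(x, y):
        profile = sorted(scores[x].values(), reverse=True)
        s = scores[x][y]
        return [s,                                       # raw predictor score
                (profile.index(s) + 1) / len(profile),   # normalized local rank
                s - float(np.median(profile))]           # margin over local median
    return one_sided(a, b) + one_sided(b, a)

# Hypothetical predictor output and labels for a toy interactome.
scores = {
    "A": {"B": 0.9, "C": 0.2, "D": 0.1},
    "B": {"A": 0.8, "C": 0.3, "D": 0.2},
    "C": {"A": 0.2, "B": 0.3, "D": 0.4},
    "D": {"A": 0.1, "B": 0.2, "C": 0.4},
}
pairs, y = [("A", "B"), ("C", "D"), ("A", "C"), ("B", "D")], [1, 1, 0, 0]
X = [reciprocal_features(scores, *p) for p in pairs]
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(rf.predict_proba([reciprocal_features(scores, "A", "D")]))
```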
Related Work and Previous Approaches to Local Threshold Determination
In PPI prediction tasks, a quantitative score can be assigned to a given protein pair of interest, say proteins xi and yj in pair Pij. Researchers studying protein xi would typically consider the set of sorted scores for all pairs Pin (for all n) and investigate the top-ranking PPIs through experimental validation. However, the arbitrary selection of the top-k ranking interactors for a given protein fails to impart any confidence in the resulting PPIs. The choice of the value of k is arbitrary, since no single value of k can be optimal for all proteins. Furthermore, when considering the interaction Pij, this top-k ranking approach only considers the scores of pairs involving xi, and not those of all pairs involving yj; that is, it leverages only half of the available context.
A widely used algorithm that does examine both partners within a putative relationship is the Reciprocal Best Hit method for identifying putative orthologs. Here, two genes in different genomes are considered to be orthologs if and only if they find each other to be the top-scoring BLAST hit in the other genome [22]. Reciprocal Best Hit is an example of the most conservative application of a local threshold, where k = 1. While useful for determining an orthologous relationship between two genes that are typically expected to have a single ortholog in other species, our situation differs in that proteins are expected to participate in several PPIs (i.e. k ≥ 1); therefore, a more suitable approach is required.
A similar challenge arises in the control of false discovery rates (FDRs) in applications such as genomics, where high-dimensional genotyping arrays are used to evaluate millions of variants (e.g. single nucleotide polymorphisms) for correlation with phenotype or experimental condition. The established method for controlling against multiple testing has been to adjust family-wise error rates (FWERs), for example using the Holm-Bonferroni method [23]. The control of FWERs has been considered too conservative and severely compromises statistical power, resulting in many true loci of small effect being missed [24]. Recently, local FDRs (LFDRs) have been proposed; they are defined as the probability of a test result being false given the exact value of the test statistic [25]. The LFDR correction, through re-ranking by test statistic value, has been demonstrated to eliminate biases of the former non-local FDR (NFDR) estimators [24,26]. Our application is similarly motivated, though it differs in that we consider the paired relationships between two elements and leverage the context of a given protein relative to all others.
The network-based analysis of PPI networks contextualizes proteins using undirected connected graphs, often with a scale-free topology and hierarchical modularity [27]. The importance of a PPI is determined by considering topological features, path distances, and centrality measures of a given protein relative to the neighbours in its vicinity (e.g. k-hop distance).
While useful for post hoc evaluation of cluster density, cliques, and protein complex prediction [28], these approaches are notoriously plagued with false positives [29]. Network-based PPI prediction methods attribute scores based on these quantitative metrics and often incorporate external information, such as protein localization, co-expression, and literature-curated data, to re-weight their scores.
In essence, various methods have taken into account localized decision thresholds, paired comparisons, PPI context, and rank-order metrics; however, no one modality has leveraged these in combination and proposed a unified method for determining localized decision thresholds for predicted PPIs based on their interactome context. Again, such analysis has only recently become possible with the development of efficient high-throughput methods capable of assessing all possible protein pairs.
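The Reciprocal Best Hit criterion discussed above is simple enough to show directly. The sketch below applies it to a generic score table, a hypothetical stand-in for predictor or BLAST output rather than real sequence data; relaxing the implicit k = 1 restriction, as RP does, roughly amounts to replacing the single best-partner test with per-protein rank thresholds.

```python
# Minimal sketch of Reciprocal Best Hit over a symmetric score table:
# a pair (a, b) is kept only if b is a's top-scoring partner AND vice versa.

def best_partner(scores, query):
    """Top-scoring partner of `query` in a {(x, y): score} table."""
    partners = {y: s for (x, y), s in scores.items() if x == query}
    partners.update({x: s for (x, y), s in scores.items() if y == query})
    return max(partners, key=partners.get) if partners else None

def reciprocal_best_hits(scores):
    proteins = {p for pair in scores for p in pair}
    hits = set()
    for a in proteins:
        b = best_partner(scores, a)
        if b is not None and best_partner(scores, b) == a:
            hits.add(frozenset((a, b)))
    return hits

toy_scores = {("P1", "P2"): 0.91, ("P1", "P3"): 0.40, ("P2", "P3"): 0.85}
print(reciprocal_best_hits(toy_scores))   # {frozenset({'P1', 'P2'})}
```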
[ "22711592", "29141584", "16872538", "23193263", "19079255", "18390576", "25657331", "28545462", "21635751", "28073761", "28029645", "29220074", "25218442", "21071205", "25738806", "25599932", "18042555", "28931044", "23600810", "22453911", "18651760", "25531225", "23233352", "18156468", "19185053", "20480050", "18586826", "29112936", "20003442", "22355752", "27332507" ]
[ { "pmid": "22711592", "title": "History of protein-protein interactions: from egg-white to complex networks.", "abstract": "Today, it is widely appreciated that protein-protein interactions play a fundamental role in biological processes. This was not always the case. The study of protein interactions started slowly and evolved considerably, together with conceptual and technological progress in different areas of research through the late 19th and the 20th centuries. In this review, we present some of the key experiments that have introduced major conceptual advances in biochemistry and molecular biology, and review technological breakthroughs that have paved the way for today's systems-wide approaches to protein-protein interaction analysis." }, { "pmid": "29141584", "title": "SPRINT: ultrafast protein-protein interaction prediction of the entire human interactome.", "abstract": "BACKGROUND\nProteins perform their functions usually by interacting with other proteins. Predicting which proteins interact is a fundamental problem. Experimental methods are slow, expensive, and have a high rate of error. Many computational methods have been proposed among which sequence-based ones are very promising. However, so far no such method is able to predict effectively the entire human interactome: they require too much time or memory.\n\n\nRESULTS\nWe present SPRINT (Scoring PRotein INTeractions), a new sequence-based algorithm and tool for predicting protein-protein interactions. We comprehensively compare SPRINT with state-of-the-art programs on seven most reliable human PPI datasets and show that it is more accurate while running orders of magnitude faster and using very little memory.\n\n\nCONCLUSION\nSPRINT is the only sequence-based program that can effectively predict the entire human interactome: it requires between 15 and 100 min, depending on the dataset. Our goal is to transform the very challenging problem of predicting the entire human interactome into a routine task.\n\n\nAVAILABILITY\nThe source code of SPRINT is freely available from https://github.com/lucian-ilie/SPRINT/ and the datasets and predicted PPIs from www.csd.uwo.ca/faculty/ilie/SPRINT/ ." }, { "pmid": "16872538", "title": "PIPE: a protein-protein interaction prediction engine based on the re-occurring short polypeptide sequences between known interacting protein pairs.", "abstract": "BACKGROUND\nIdentification of protein interaction networks has received considerable attention in the post-genomic era. The currently available biochemical approaches used to detect protein-protein interactions are all time and labour intensive. Consequently there is a growing need for the development of computational tools that are capable of effectively identifying such interactions.\n\n\nRESULTS\nHere we explain the development and implementation of a novel Protein-Protein Interaction Prediction Engine termed PIPE. This tool is capable of predicting protein-protein interactions for any target pair of the yeast Saccharomyces cerevisiae proteins from their primary structure and without the need for any additional information or predictions about the proteins. PIPE showed a sensitivity of 61% for detecting any yeast protein interaction with 89% specificity and an overall accuracy of 75%. This rate of success is comparable to those associated with the most commonly used biochemical techniques. Using PIPE, we identified a novel interaction between YGL227W (vid30) and YMR135C (gid8) yeast proteins. 
This lead us to the identification of a novel yeast complex that here we term vid30 complex (vid30c). The observed interaction was confirmed by tandem affinity purification (TAP tag), verifying the ability of PIPE to predict novel protein-protein interactions. We then used PIPE analysis to investigate the internal architecture of vid30c. It appeared from PIPE analysis that vid30c may consist of a core and a secondary component. Generation of yeast gene deletion strains combined with TAP tagging analysis indicated that the deletion of a member of the core component interfered with the formation of vid30c, however, deletion of a member of the secondary component had little effect (if any) on the formation of vid30c. Also, PIPE can be used to analyse yeast proteins for which TAP tagging fails, thereby allowing us to predict protein interactions that are not included in genome-wide yeast TAP tagging projects.\n\n\nCONCLUSION\nPIPE analysis can predict yeast protein-protein interactions. Also, PIPE analysis can be used to study the internal architecture of yeast protein complexes. The data also suggests that a finite set of short polypeptide signals seem to be responsible for the majority of the yeast protein-protein interactions." }, { "pmid": "23193263", "title": "PrePPI: a structure-informed database of protein-protein interactions.", "abstract": "PrePPI (http://bhapp.c2b2.columbia.edu/PrePPI) is a database that combines predicted and experimentally determined protein-protein interactions (PPIs) using a Bayesian framework. Predicted interactions are assigned probabilities of being correct, which are derived from calculated likelihood ratios (LRs) by combining structural, functional, evolutionary and expression information, with the most important contribution coming from structure. Experimentally determined interactions are compiled from a set of public databases that manually collect PPIs from the literature and are also assigned LRs. A final probability is then assigned to every interaction by combining the LRs for both predicted and experimentally determined interactions. The current version of PrePPI contains ∼2 million PPIs that have a probability more than ∼0.1 of which ∼60 000 PPIs for yeast and ∼370 000 PPIs for human are considered high confidence (probability > 0.5). The PrePPI database constitutes an integrated resource that enables users to examine aggregate information on PPIs, including both known and potentially novel interactions, and that provides structural models for many of the PPIs." }, { "pmid": "19079255", "title": "Integrated network analysis platform for protein-protein interactions.", "abstract": "There is an increasing demand for network analysis of protein-protein interactions (PPIs). We introduce a web-based protein interaction network analysis platform (PINA), which integrates PPI data from six databases and provides network construction, filtering, analysis and visualization tools. We demonstrated the advantages of PINA by analyzing two human PPI networks; our results suggested a link between LKB1 and TGFbeta signaling, and revealed possible competitive interactors of p53 and c-Jun." }, { "pmid": "18390576", "title": "Using support vector machine combined with auto covariance to predict protein-protein interactions from protein sequences.", "abstract": "Compared to the available protein sequences of different organisms, the number of revealed protein-protein interactions (PPIs) is still very limited. 
So many computational methods have been developed to facilitate the identification of novel PPIs. However, the methods only using the information of protein sequences are more universal than those that depend on some additional information or predictions about the proteins. In this article, a sequence-based method is proposed by combining a new feature representation using auto covariance (AC) and support vector machine (SVM). AC accounts for the interactions between residues a certain distance apart in the sequence, so this method adequately takes the neighbouring effect into account. When performed on the PPI data of yeast Saccharomyces cerevisiae, the method achieved a very promising prediction result. An independent data set of 11,474 yeast PPIs was used to evaluate this prediction model and the prediction accuracy is 88.09%. The performance of this method is superior to those of the existing sequence-based methods, so it can be a useful supplementary tool for future proteomics studies. The prediction software and all data sets used in this article are freely available at http://www.scucic.cn/Predict_PPI/index.htm." }, { "pmid": "25657331", "title": "Evolutionary profiles improve protein-protein interaction prediction from sequence.", "abstract": "MOTIVATION\nMany methods predict the physical interaction between two proteins (protein-protein interactions; PPIs) from sequence alone. Their performance drops substantially for proteins not used for training.\n\n\nRESULTS\nHere, we introduce a new approach to predict PPIs from sequence alone which is based on evolutionary profiles and profile-kernel support vector machines. It improved over the state-of-the-art, in particular for proteins that are sequence-dissimilar to proteins with known interaction partners. Filtering by gene expression data increased accuracy further for the few, most reliably predicted interactions (low recall). The overall improvement was so substantial that we compiled a list of the most reliably predicted PPIs in human. Our method makes a significant difference for biology because it improves most for the majority of proteins without experimental annotations.\n\n\nAVAILABILITY AND IMPLEMENTATION\nImplementation and most reliably predicted human PPIs available at https://rostlab.org/owiki/index.php/Profppikernel." }, { "pmid": "28545462", "title": "Sequence-based prediction of protein protein interaction using a deep-learning algorithm.", "abstract": "BACKGROUND\nProtein-protein interactions (PPIs) are critical for many biological processes. It is therefore important to develop accurate high-throughput methods for identifying PPI to better understand protein function, disease occurrence, and therapy design. Though various computational methods for predicting PPI have been developed, their robustness for prediction with external datasets is unknown. Deep-learning algorithms have achieved successful results in diverse areas, but their effectiveness for PPI prediction has not been tested.\n\n\nRESULTS\nWe used a stacked autoencoder, a type of deep-learning algorithm, to study the sequence-based PPI prediction. The best model achieved an average accuracy of 97.19% with 10-fold cross-validation. 
The prediction accuracies for various external datasets ranged from 87.99% to 99.21%, which are superior to those achieved with previous methods.\n\n\nCONCLUSIONS\nTo our knowledge, this research is the first to apply a deep-learning algorithm to sequence-based PPI prediction, and the results demonstrate its potential in this field." }, { "pmid": "21635751", "title": "Binding site prediction for protein-protein interactions and novel motif discovery using re-occurring polypeptide sequences.", "abstract": "BACKGROUND\nWhile there are many methods for predicting protein-protein interaction, very few can determine the specific site of interaction on each protein. Characterization of the specific sequence regions mediating interaction (binding sites) is crucial for an understanding of cellular pathways. Experimental methods often report false binding sites due to experimental limitations, while computational methods tend to require data which is not available at the proteome-scale. Here we present PIPE-Sites, a novel method of protein specific binding site prediction based on pairs of re-occurring polypeptide sequences, which have been previously shown to accurately predict protein-protein interactions. PIPE-Sites operates at high specificity and requires only the sequences of query proteins and a database of known binary interactions with no binding site data, making it applicable to binding site prediction at the proteome-scale.\n\n\nRESULTS\nPIPE-Sites was evaluated using a dataset of 265 yeast and 423 human interacting proteins pairs with experimentally-determined binding sites. We found that PIPE-Sites predictions were closer to the confirmed binding site than those of two existing binding site prediction methods based on domain-domain interactions, when applied to the same dataset. Finally, we applied PIPE-Sites to two datasets of 2347 yeast and 14,438 human novel interacting protein pairs predicted to interact with high confidence. An analysis of the predicted interaction sites revealed a number of protein subsequences which are highly re-occurring in binding sites and which may represent novel binding motifs.\n\n\nCONCLUSIONS\nPIPE-Sites is an accurate method for predicting protein binding sites and is applicable to the proteome-scale. Thus, PIPE-Sites could be useful for exhaustive analysis of protein binding patterns in whole proteomes as well as discovery of novel binding motifs. PIPE-Sites is available online at http://pipe-sites.cgmlab.org/." }, { "pmid": "28073761", "title": "Seeing the trees through the forest: sequence-based homo- and heteromeric protein-protein interaction sites prediction using random forest.", "abstract": "MOTIVATION\nGenome sequencing is producing an ever-increasing amount of associated protein sequences. Few of these sequences have experimentally validated annotations, however, and computational predictions are becoming increasingly successful in producing such annotations. One key challenge remains the prediction of the amino acids in a given protein sequence that are involved in protein-protein interactions. Such predictions are typically based on machine learning methods that take advantage of the properties and sequence positions of amino acids that are known to be involved in interaction. 
In this paper, we evaluate the importance of various features using Random Forest (RF), and include as a novel feature backbone flexibility predicted from sequences to further optimise protein interface prediction.\n\n\nRESULTS\nWe observe that there is no single sequence feature that enables pinpointing interacting sites in our Random Forest models. However, combining different properties does increase the performance of interface prediction. Our homomeric-trained RF interface predictor is able to distinguish interface from non-interface residues with an area under the ROC curve of 0.72 in a homomeric test-set. The heteromeric-trained RF interface predictor performs better than existing predictors on a independent heteromeric test-set. We trained a more general predictor on the combined homomeric and heteromeric dataset, and show that in addition to predicting homomeric interfaces, it is also able to pinpoint interface residues in heterodimers. This suggests that our random forest model and the features included capture common properties of both homodimer and heterodimer interfaces.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe predictors and test datasets used in our analyses are freely available ( http://www.ibi.vu.nl/downloads/RF_PPI/ ).\n\n\nCONTACT\[email protected].\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "28029645", "title": "An ensemble approach for large-scale identification of protein- protein interactions using the alignments of multiple sequences.", "abstract": "Protein-Protein Interactions (PPI) is not only the critical component of various biological processes in cells, but also the key to understand the mechanisms leading to healthy and diseased states in organisms. However, it is time-consuming and cost-intensive to identify the interactions among proteins using biological experiments. Hence, how to develop a more efficient computational method rapidly became an attractive topic in the post-genomic era. In this paper, we propose a novel method for inference of protein-protein interactions from protein amino acids sequences only. Specifically, protein amino acids sequence is firstly transformed into Position-Specific Scoring Matrix (PSSM) generated by multiple sequences alignments; then the Pseudo PSSM is used to extract feature descriptors. Finally, ensemble Rotation Forest (RF) learning system is trained to predict and recognize PPIs based solely on protein sequence feature. When performed the proposed method on the three benchmark data sets (Yeast, H. pylori, and independent dataset) for predicting PPIs, our method can achieve good average accuracies of 98.38%, 89.75%, and 96.25%, respectively. In order to further evaluate the prediction performance, we also compare the proposed method with other methods using same benchmark data sets. The experiment results demonstrate that the proposed method consistently outperforms other state-of-the-art method. Therefore, our method is effective and robust and can be taken as a useful tool in exploring and discovering new relationships between proteins. A web server is made publicly available at the URL http://202.119.201.126:8888/PsePSSM/ for academic use." 
}, { "pmid": "29220074", "title": "Prediction of Protein-Protein Interactions.", "abstract": "The authors provide an overview of physical protein-protein interaction prediction, covering the main strategies for predicting interactions, approaches for assessing predictions, and online resources for accessing predictions. This unit focuses on the main advancements in each of these areas over the last decade. The methods and resources that are presented here are not an exhaustive set, but characterize the current state of the field-highlighting key challenges and achievements. © 2017 by John Wiley & Sons, Inc." }, { "pmid": "25218442", "title": "Biological messiness vs. biological genius: Mechanistic aspects and roles of protein promiscuity.", "abstract": "In contrast to the traditional biological paradigms focused on 'specificity', recent research and theoretical efforts have focused on functional 'promiscuity' exhibited by proteins and enzymes in many biological settings, including enzymatic detoxication, steroid biochemistry, signal transduction and immune responses. In addition, divergent evolutionary processes are apparently facilitated by random mutations that yield promiscuous enzyme intermediates. The intermediates, in turn, provide opportunities for further evolution to optimize new functions from existing protein scaffolds. In some cases, promiscuity may simply represent the inherent plasticity of proteins resulting from their polymeric nature with distributed conformational ensembles. Enzymes or proteins that bind or metabolize noncognate substrates create 'messiness' or noise in the systems they contribute to. With our increasing awareness of the frequency of these promiscuous behaviors it becomes interesting and important to understand the molecular bases for promiscuous behavior and to distinguish between evolutionarily selected promiscuity and evolutionarily tolerated messiness. This review provides an overview of current understanding of these aspects of protein biochemistry and enzymology." }, { "pmid": "21071205", "title": "Protein binding specificity versus promiscuity.", "abstract": "Interactions between macromolecules in general, and between proteins in particular, are essential for any life process. Examples include transfer of information, inhibition or activation of function, molecular recognition as in the immune system, assembly of macromolecular structures and molecular machines, and more. Proteins interact with affinities ranging from millimolar to femtomolar and, because affinity determines the concentration required to obtain 50% binding, the amount of different complexes formed is very much related to local concentrations. Although the concentration of a specific binding partner is usually quite low in the cell (nanomolar to micromolar), the total concentration of other macromolecules is very high, allowing weak and non-specific interactions to play important roles. In this review we address the question of binding specificity, that is, how do some proteins maintain monogamous relations while others are clearly polygamous. We examine recent work that addresses the molecular and structural basis for specificity versus promiscuity. We show through examples how multiple solutions exist to achieve binding via similar interfaces and how protein specificity can be tuned using both positive and negative selection (specificity by demand). 
Binding of a protein to numerous partners can be promoted through variation in which residues are used for binding, conformational plasticity and/or post-translational modification. Natively unstructured regions represent the extreme case in which structure is obtained only upon binding. Many natively unstructured proteins serve as hubs in protein-protein interaction networks and such promiscuity can be of functional importance in biology." }, { "pmid": "25738806", "title": "The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets.", "abstract": "Binary classifiers are routinely evaluated with performance measures such as sensitivity and specificity, and performance is frequently illustrated with Receiver Operating Characteristics (ROC) plots. Alternative measures such as positive predictive value (PPV) and the associated Precision/Recall (PRC) plots are used less frequently. Many bioinformatics studies develop and evaluate classifiers that are to be applied to strongly imbalanced datasets in which the number of negatives outweighs the number of positives significantly. While ROC plots are visually appealing and provide an overview of a classifier's performance across a wide range of specificities, one can ask whether ROC plots could be misleading when applied in imbalanced classification scenarios. We show here that the visual interpretability of ROC plots in the context of imbalanced datasets can be deceptive with respect to conclusions about the reliability of classification performance, owing to an intuitive but wrong interpretation of specificity. PRC plots, on the other hand, can provide the viewer with an accurate prediction of future classification performance due to the fact that they evaluate the fraction of true positives among positive predictions. Our findings have potential implications for the interpretation of a large number of studies that use ROC plots on imbalanced datasets." }, { "pmid": "25599932", "title": "Disadvantages of using the area under the receiver operating characteristic curve to assess imaging tests: a discussion and proposal for an alternative approach.", "abstract": "OBJECTIVES\nThe objectives are to describe the disadvantages of the area under the receiver operating characteristic curve (ROC AUC) to measure diagnostic test performance and to propose an alternative based on net benefit.\n\n\nMETHODS\nWe use a narrative review supplemented by data from a study of computer-assisted detection for CT colonography.\n\n\nRESULTS\nWe identified problems with ROC AUC. Confidence scoring by readers was highly non-normal, and score distribution was bimodal. Consequently, ROC curves were highly extrapolated with AUC mostly dependent on areas without patient data. AUC depended on the method used for curve fitting. ROC AUC does not account for prevalence or different misclassification costs arising from false-negative and false-positive diagnoses. Change in ROC AUC has little direct clinical meaning for clinicians. An alternative analysis based on net benefit is proposed, based on the change in sensitivity and specificity at clinically relevant thresholds. 
Net benefit incorporates estimates of prevalence and misclassification costs, and it is clinically interpretable since it reflects changes in correct and incorrect diagnoses when a new diagnostic test is introduced.\n\n\nCONCLUSIONS\nROC AUC is most useful in the early stages of test assessment whereas methods based on net benefit are more useful to assess radiological tests where the clinical context is known. Net benefit is more useful for assessing clinical impact.\n\n\nKEY POINTS\n• The area under the receiver operating characteristic curve (ROC AUC) measures diagnostic accuracy. • Confidence scores used to build ROC curves may be difficult to assign. • False-positive and false-negative diagnoses have different misclassification costs. • Excessive ROC curve extrapolation is undesirable. • Net benefit methods may provide more meaningful and clinically interpretable results than ROC AUC." }, { "pmid": "18042555", "title": "Choosing BLAST options for better detection of orthologs as reciprocal best hits.", "abstract": "MOTIVATION\nThe analyses of the increasing number of genome sequences requires shortcuts for the detection of orthologs, such as Reciprocal Best Hits (RBH), where orthologs are assumed if two genes each in a different genome find each other as the best hit in the other genome. Two BLAST options seem to affect alignment scores the most, and thus the choice of a best hit: the filtering of low information sequence segments and the algorithm used to produce the final alignment. Thus, we decided to test whether such options would help better detect orthologs.\n\n\nRESULTS\nUsing Escherichia coli K12 as an example, we compared the number and quality of orthologs detected as RBH. We tested four different conditions derived from two options: filtering of low-information segments, hard (default) versus soft; and alignment algorithm, default (based on matching words) versus Smith-Waterman. All options resulted in significant differences in the number of orthologs detected, with the highest numbers obtained with the combination of soft filtering with Smith-Waterman alignments. We compared these results with those of Reciprocal Shortest Distances (RSD), supposed to be superior to RBH because it uses an evolutionary measure of distance, rather than BLAST statistics, to rank homologs and thus detect orthologs. RSD barely increased the number of orthologs detected over those found with RBH. Error estimates, based on analyses of conservation of gene order, found small differences in the quality of orthologs detected using RBH. However, RSD showed the highest error rates. Thus, RSD have no advantages over RBH.\n\n\nAVAILABILITY\nOrthologs detected as Reciprocal Best Hits using soft masking and Smith-Waterman alignments can be downloaded from http://popolvuh.wlu.ca/Orthologs." }, { "pmid": "28931044", "title": "The performance of a new local false discovery rate method on tests of association between coronary artery disease (CAD) and genome-wide genetic variants.", "abstract": "The maximum entropy (ME) method is a recently-developed approach for estimating local false discovery rates (LFDR) that incorporates external information allowing assignment of a subset of tests to a category with a different prior probability of following the null hypothesis. Using this ME method, we have reanalyzed the findings from a recent large genome-wide association study of coronary artery disease (CAD), incorporating biologic annotations. 
Our revised LFDR estimates show many large reductions in LFDR, particularly among the genetic variants belonging to annotation categories that were known to be of particular interest for CAD. However, among SNPs with rare minor allele frequencies, the reductions in LFDR were modest in size." }, { "pmid": "23600810", "title": "A survey of computational methods for protein complex prediction from protein interaction networks.", "abstract": "Complexes of physically interacting proteins are one of the fundamental functional units responsible for driving key biological mechanisms within the cell. Their identification is therefore necessary to understand not only complex formation but also the higher level organization of the cell. With the advent of \"high-throughput\" techniques in molecular biology, significant amount of physical interaction data has been cataloged from organisms such as yeast, which has in turn fueled computational approaches to systematically mine complexes from the network of physical interactions among proteins (PPI network). In this survey, we review, classify and evaluate some of the key computational methods developed till date for the identification of protein complexes from PPI networks. We present two insightful taxonomies that reflect how these methods have evolved over the years toward improving automated complex prediction. We also discuss some open challenges facing accurate reconstruction of complexes, the crucial ones being the presence of high proportion of errors and noise in current high-throughput datasets and some key aspects overlooked by current complex detection methods. We hope this review will not only help to condense the history of computational complex detection for easy reference but also provide valuable insights to drive further research in this area." }, { "pmid": "22453911", "title": "Protein interaction data curation: the International Molecular Exchange (IMEx) consortium.", "abstract": "The International Molecular Exchange (IMEx) consortium is an international collaboration between major public interaction data providers to share literature-curation efforts and make a nonredundant set of protein interactions available in a single search interface on a common website (http://www.imexconsortium.org/). Common curation rules have been developed, and a central registry is used to manage the selection of articles to enter into the dataset. We discuss the advantages of such a service to the user, our quality-control measures and our data-distribution practices." }, { "pmid": "18651760", "title": "Intrinsic disorder in nuclear hormone receptors.", "abstract": "Many proteins possess intrinsic disorder (ID) and lack a rigid three-dimensional structure in at least part of their sequence. ID has been hypothesized to influence protein-protein and protein-ligand interactions. We calculated ID for nearly 400 vertebrate and invertebrate members of the biomedically important nuclear hormone receptor (NHR) superfamily, including all 48 known human NHRs. The predictions correctly identified regions in 20 of the 23 NHRs suggested as disordered based on published X-ray and NMR structures. Of the four major NHR domains (N-terminal domain, DNA-binding domain, D-domain, and ligand-binding domain), we found ID to be highest in the D-domain, a region of NHRs critical in DNA recognition and heterodimerization, coactivator/corepressor interactions and protein-protein interactions. 
ID in the D-domain and LBD was significantly higher in \"hub\" human NHRs that have 10 or more downstream proteins in their interaction networks compared to \"non-hub\" NHRs that interact with fewer than 10 downstream proteins. ID in the D-domain and LBD was also higher in classic, ligand-activated NHRs than in orphan, ligand-independent NHRs in human. The correlation between ID in human and mouse NHRs was high. Less correlation was found for ID between mammalian and non-mammalian vertebrate NHRs. For some invertebrate species, particularly sea squirts ( Ciona), marked differences were observed in ID between invertebrate NHRs and their vertebrate orthologs. Our results indicate that variability of ID within NHRs, particularly in the D-domain and LBD, is likely an important evolutionary force in shaping protein-protein interactions and NHR function. This information enables further understanding of these therapeutic targets." }, { "pmid": "25531225", "title": "Intrinsically disordered proteins in cellular signalling and regulation.", "abstract": "Intrinsically disordered proteins (IDPs) are important components of the cellular signalling machinery, allowing the same polypeptide to undertake different interactions with different consequences. IDPs are subject to combinatorial post-translational modifications and alternative splicing, adding complexity to regulatory networks and providing a mechanism for tissue-specific signalling. These proteins participate in the assembly of signalling complexes and in the dynamic self-assembly of membrane-less nuclear and cytoplasmic organelles. Experimental, computational and bioinformatic analyses combine to identify and characterize disordered regions of proteins, leading to a greater appreciation of their widespread roles in biological processes." }, { "pmid": "23233352", "title": "Exploring the binding diversity of intrinsically disordered proteins involved in one-to-many binding.", "abstract": "Molecular recognition features (MoRFs) are intrinsically disordered protein regions that bind to partners via disorder-to-order transitions. In one-to-many binding, a single MoRF binds to two or more different partners individually. MoRF-based one-to-many protein-protein interaction (PPI) examples were collected from the Protein Data Bank, yielding 23 MoRFs bound to 2-9 partners, with all pairs of same-MoRF partners having less than 25% sequence identity. Of these, 8 MoRFs were bound to 2-9 partners having completely different folds, whereas 15 MoRFs were bound to 2-5 partners having the same folds but with low sequence identities. For both types of partner variation, backbone and side chain torsion angle rotations were used to bring about the conformational changes needed to enable close fits between a single MoRF and distinct partners. Alternative splicing events (ASEs) and posttranslational modifications (PTMs) were also found to contribute to distinct partner binding. Because ASEs and PTMs both commonly occur in disordered regions, and because both ASEs and PTMs are often tissue-specific, these data suggest that MoRFs, ASEs, and PTMs may collaborate to alter PPI networks in different cell types. These data enlarge the set of carefully studied MoRFs that use inherent flexibility and that also use ASE-based and/or PTM-based surface modifications to enable the same disordered segment to selectively associate with two or more partners. 
The small number of residues involved in MoRFs and in their modifications by ASEs or PTMs may simplify the evolvability of signaling network diversity." }, { "pmid": "18156468", "title": "Identification of transient hub proteins and the possible structural basis for their multiple interactions.", "abstract": "Proteins that can interact with multiple partners play central roles in the network of protein-protein interactions. They are called hub proteins, and recently it was suggested that an abundance of intrinsically disordered regions on their surfaces facilitates their binding to multiple partners. However, in those studies, the hub proteins were identified as proteins with multiple partners, regardless of whether the interactions were transient or permanent. As a result, a certain number of hub proteins are subunits of stable multi-subunit proteins, such as supramolecules. It is well known that stable complexes and transient complexes have different structural features, and thus the statistics based on the current definition of hub proteins will hide the true nature of hub proteins. Therefore, in this paper, we first describe a new approach to identify proteins with multiple partners dynamically, using the Protein Data Bank, and then we performed statistical analyses of the structural features of these proteins. We refer to the proteins as transient hub proteins or sociable proteins, to clarify the difference with hub proteins. As a result, we found that the main difference between sociable and nonsociable proteins is not the abundance of disordered regions, in contrast to the previous studies, but rather the structural flexibility of the entire protein. We also found greater predominance of charged and polar residues in sociable proteins than previously reported." }, { "pmid": "19185053", "title": "Evolutionary constraints on hub and non-hub proteins in human protein interaction network: insight from protein connectivity and intrinsic disorder.", "abstract": "It has been claimed that proteins with more interacting partners (hubs) are structurally more disordered and have a slow evolutionary rate. Here, in this paper we analyzed the evolutionary rate and structural disorderness of human hub and non-hub proteins present/absent in protein complexes. We observed that both non-hub and hub proteins present in protein complexes, are characterized by high structural disorderness. There exists no significant difference in average evolutionary rate of complex-forming hub and non-hub proteins while we have found a significant difference in the average evolutionary rate between hub and non-hub proteins which are not present in protein complexes. We concluded that higher disorderness in complex forming non-hub proteins facilitates higher number of interactions with a large number of protein subunits. High interaction among protein subunits of complex forming non-hub proteins imposes a selective constraint on their evolutionary rate." }, { "pmid": "20480050", "title": "Hub promiscuity in protein-protein interaction networks.", "abstract": "Hubs are proteins with a large number of interactions in a protein-protein interaction network. They are the principal agents in the interaction network and affect its function and stability. Their specific recognition of many different protein partners is of great interest from the structural viewpoint. Over the last few years, the structural properties of hubs have been extensively studied. 
We review the currently known features that are particular to hubs, possibly affecting their binding ability. Specifically, we look at the levels of intrinsic disorder, surface charge and domain distribution in hubs, as compared to non-hubs, along with differences in their functional domains." }, { "pmid": "18586826", "title": "Global investigation of protein-protein interactions in yeast Saccharomyces cerevisiae using re-occurring short polypeptide sequences.", "abstract": "Protein-protein interaction (PPI) maps provide insight into cellular biology and have received considerable attention in the post-genomic era. While large-scale experimental approaches have generated large collections of experimentally determined PPIs, technical limitations preclude certain PPIs from detection. Recently, we demonstrated that yeast PPIs can be computationally predicted using re-occurring short polypeptide sequences between known interacting protein pairs. However, the computational requirements and low specificity made this method unsuitable for large-scale investigations. Here, we report an improved approach, which exhibits a specificity of approximately 99.95% and executes 16,000 times faster. Importantly, we report the first all-to-all sequence-based computational screen of PPIs in yeast, Saccharomyces cerevisiae in which we identify 29,589 high confidence interactions of approximately 2 x 10(7) possible pairs. Of these, 14,438 PPIs have not been previously reported and may represent novel interactions. In particular, these results reveal a richer set of membrane protein interactions, not readily amenable to experimental investigations. From the novel PPIs, a novel putative protein complex comprised largely of membrane proteins was revealed. In addition, two novel gene functions were predicted and experimentally confirmed to affect the efficiency of non-homologous end-joining, providing further support for the usefulness of the identified PPIs in biological investigations." }, { "pmid": "29112936", "title": "Designing anti-Zika virus peptides derived from predicted human-Zika virus protein-protein interactions.", "abstract": "The production of anti-Zika virus (ZIKV) therapeutics has become increasingly important as the propagation of the devastating virus continues largely unchecked. Notably, a causal relationship between ZIKV infection and neurodevelopmental abnormalities has been widely reported, yet a specific mechanism underlying impaired neurological development has not been identified. Here, we report on the design of several synthetic competitive inhibitory peptides against key pathogenic ZIKV proteins through the prediction of protein-protein interactions (PPIs). Often, PPIs between host and viral proteins are crucial for infection and pathogenesis, making them attractive targets for therapeutics. Using two complementary sequence-based PPI prediction tools, we first produced a comprehensive map of predicted human-ZIKV PPIs (involving 209 human protein candidates). We then designed several peptides intended to disrupt the corresponding host-pathogen interactions thereby acting as anti-ZIKV therapeutics. The data generated in this study constitute a foundational resource to aid in the multi-disciplinary effort to combat ZIKV infection, including the design of additional synthetic proteins." 
}, { "pmid": "20003442", "title": "Critical assessment of sequence-based protein-protein interaction prediction methods that do not require homologous protein sequences.", "abstract": "BACKGROUND\nProtein-protein interactions underlie many important biological processes. Computational prediction methods can nicely complement experimental approaches for identifying protein-protein interactions. Recently, a unique category of sequence-based prediction methods has been put forward--unique in the sense that it does not require homologous protein sequences. This enables it to be universally applicable to all protein sequences unlike many of previous sequence-based prediction methods. If effective as claimed, these new sequence-based, universally applicable prediction methods would have far-reaching utilities in many areas of biology research.\n\n\nRESULTS\nUpon close survey, I realized that many of these new methods were ill-tested. In addition, newer methods were often published without performance comparison with previous ones. Thus, it is not clear how good they are and whether there are significant performance differences among them. In this study, I have implemented and thoroughly tested 4 different methods on large-scale, non-redundant data sets. It reveals several important points. First, significant performance differences are noted among different methods. Second, data sets typically used for training prediction methods appear significantly biased, limiting the general applicability of prediction methods trained with them. Third, there is still ample room for further developments. In addition, my analysis illustrates the importance of complementary performance measures coupled with right-sized data sets for meaningful benchmark tests.\n\n\nCONCLUSIONS\nThe current study reveals the potentials and limits of the new category of sequence-based protein-protein interaction prediction methods, which in turn provides a firm ground for future endeavours in this important area of contemporary bioinformatics." }, { "pmid": "22355752", "title": "Short Co-occurring Polypeptide Regions Can Predict Global Protein Interaction Maps.", "abstract": "A goal of the post-genomics era has been to elucidate a detailed global map of protein-protein interactions (PPIs) within a cell. Here, we show that the presence of co-occurring short polypeptide sequences between interacting protein partners appears to be conserved across different organisms. We present an algorithm to automatically generate PPI prediction method parameters for various organisms and illustrate that global PPIs can be predicted from previously reported PPIs within the same or a different organism using protein primary sequences. The PPI prediction code is further accelerated through the use of parallel multi-core programming, which improves its usability for large scale or proteome-wide PPI prediction. We predict and analyze hundreds of novel human PPIs, experimentally confirm protein functions and importantly predict the first genome-wide PPI maps for S. pombe (∼9,000 PPIs) and C. elegans (∼37,500 PPIs)." }, { "pmid": "27332507", "title": "From Static to Interactive: Transforming Data Visualization to Improve Transparency.", "abstract": "Data presentation for scientific publications in small sample size studies has not changed substantially in decades. It relies on static figures and tables that may not provide sufficient information for critical evaluation, particularly of the results from small sample size studies. 
Interactive graphics have the potential to transform scientific publications from static reports of experiments into interactive datasets. We designed an interactive line graph that demonstrates how dynamic alternatives to static graphics for small sample size studies allow for additional exploration of empirical datasets. This simple, free, web-based tool (http://statistika.mfub.bg.ac.rs/interactive-graph/) demonstrates the overall concept and may promote widespread use of interactive graphics." } ]
Frontiers in Genetics
30108606
PMC6079268
10.3389/fgene.2018.00248
Prediction of Drug–Gene Interaction by Using Metapath2vec
Heterogeneous information networks (HINs) currently play an important role in daily life. HINs are applied in many fields, such as science research, e-commerce, recommendation systems, and bioinformatics. Particularly, HINs have been used in biomedical research. Algorithms have been proposed to calculate the correlations between drugs and targets and between diseases and genes. Recently, the interaction between drugs and human genes has become an important subject in the research on drug efficacy and human genomics. In previous studies, numerous prediction methods using machine learning and statistical prediction models were proposed to explore this interaction on the biological network. In the current work, we introduce a representation learning method into the biological heterogeneous network and use the representation learning models metapath2vec and metapath2vec++ on our dataset. We combine the adverse drug reaction (ADR) data in the drug–gene network with causal relationship between drugs and ADRs. This article first presents an analysis of the importance of predicting drug–gene relationships and discusses the existing prediction methods. Second, the skip-gram model commonly used in representation learning for natural language processing tasks is explained. Third, the metapath2vec and metapath2vec++ models for the example of drug–gene-ADR network are described. Next, the kernelized Bayesian matrix factorization algorithm is used to complete the prediction. Finally, the experimental results of both models are compared with Katz, CATAPULT, and matrix factorization, the prediction visualized using the receiver operating characteristic curves are presented, and the area under the receiver operating characteristic values for three varying algorithm parameters are calculated.
Related work

Skip-gram

Skip-gram is a language model widely used for training word representation vectors that capture the relationships between words in a network. Given a target word in a sentence or an entire document, a skip-gram model learns representations that help predict its context words (Cai et al., 2014). Put simply, a skip-gram model provides information about the words surrounding a given word. A skip-gram model generally has three or more layers; a center word is fed into the input layer, and a certain number of words related to the input word are generated with high probability at the output. Taking a drug set as an example, if a series of drugs (i.e., d1, d2, ..., dN) constitutes the training set, some of these drugs are related, while the relationships between others are unknown. The average log probability that the skip-gram model should maximize is defined as

(1)  $\frac{1}{N}\sum_{n=1}^{N}\ \sum_{-c \le j \le c,\ j \ne 0} \log p(d_{n+j} \mid d_n)$,

where N is the number of drugs, $d_n$ and $d_{n+j}$ denote two related drug nodes in the training set, and c is the size of the training context (the number of neighboring drugs considered on either side of $d_n$). A higher prediction accuracy can be achieved with more training samples. In the original skip-gram model, $v_d$ is the input representation vector and $v'_d$ is the output representation vector of drug d, and D is the total number of drugs. Accordingly, the probability that $d_{n+j}$ is related to $d_n$ is computed by the following softmax function:

(2)  $p(d_{n+j} \mid d_n) = \dfrac{\exp\left( {v'_{d_{n+j}}}^{\top} v_{d_n} \right)}{\sum_{d=1}^{D} \exp\left( {v'_{d}}^{\top} v_{d_n} \right)}$.

Hierarchical softmax

In a typical skip-gram model, the output layer commonly uses a softmax function to yield the probability distribution. In general, the softmax function squashes a vector of real values into another vector whose values lie in the range (0, 1). To reduce the computational cost and time, a replacement called hierarchical softmax was proposed (Morin and Bengio, 2005). The hierarchical softmax function requires less computational space and time because it works with a code vector whose length is no more than $\log_2 |D|$, whereas the standard softmax must compute a D-dimensional vector (Mikolov et al., 2013). Hierarchical softmax constructs a binary tree with all the nodes (here, the drugs) as leaves (Figure 2) to achieve an exponential speed-up of the computation. The output of learning a drug relationship dataset is formalized as a Huffman tree traversed through a sequence of binary decisions; the more strongly a node is related to the root, the closer it lies to the root. The algorithm assigns 1 to the left branch and 0 to the right branch of each node of the tree, so that each node is encoded by the vector of decisions along the path from the root to that node. In Figure 2, the red line indicates the metapath between drug “D013999,” the root, and drug “C014374” and corresponds to the information learned from the input dataset.

Figure 2. Diagram of hierarchical softmax.
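To make the formulation above concrete, here is a minimal NumPy sketch (not taken from the paper; the embedding sizes, random vectors, and the toy sequence of drug indices are illustrative assumptions) of the full-softmax probability in Equation (2) and the averaged log-probability objective in Equation (1). The O(D) sum in the denominator of Equation (2) is exactly the cost that hierarchical softmax above, and the sampling-based estimators below, are designed to avoid.

```python
import numpy as np

# Toy setup: D drugs embedded in m dimensions.
# V holds the input ("center") vectors v_d; Vp holds the output vectors v'_d.
rng = np.random.default_rng(0)
D, m = 1000, 64
V = rng.normal(scale=0.1, size=(D, m))
Vp = rng.normal(scale=0.1, size=(D, m))

def softmax_prob(center: int, context: int) -> float:
    """Equation (2): p(d_context | d_center) using a full softmax over all D drugs."""
    scores = Vp @ V[center]          # v'_d . v_center for every drug d (length-D vector)
    scores -= scores.max()           # shift for numerical stability
    exp_scores = np.exp(scores)
    return float(exp_scores[context] / exp_scores.sum())

def average_log_prob(walk: list[int], c: int) -> float:
    """Equation (1): (1/N) * sum_n sum_{-c<=j<=c, j!=0} log p(d_{n+j} | d_n)."""
    N = len(walk)
    total = 0.0
    for n, center in enumerate(walk):
        for j in range(-c, c + 1):
            if j != 0 and 0 <= n + j < N:
                total += np.log(softmax_prob(center, walk[n + j]))
    return total / N

# A short, made-up training sequence of related drug indices and a window of c = 2.
print(average_log_prob([3, 17, 256, 17, 998], c=2))
```

In a full implementation, this objective would be maximized with stochastic gradient updates over many sampled walks, which is what word2vec-style toolchains do internally.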
Noise-contrastive estimation

To provide an alternative to hierarchical softmax and further improve computational performance, Gutmann and Hyvarinen (2012) proposed noise-contrastive estimation (NCE), a sampling-based method. The core idea of NCE is that, for each instance, n noise labels are sampled from the entire dataset in addition to the instance’s own label, and only the probability of the instance belonging to these n + 1 labels needs to be computed, instead of calculating the probabilities of the instance being related to every label. In Figure 3, the genes are temporarily regarded as labels of drugs. When the NCE strategy is used to identify labels for drug “D020849,” noise labels such as gene “3108” and gene “9143” can be randomly sampled. Furthermore, gene “148022” can be sampled on the basis of the similarity between drug “D020849” and drug “D013999” (an interaction occurs between “D013999” and gene “9053”), or gene “1027” can be sampled because of the similarity between gene “1027” and gene “3108,” which is related to drug “D020849.” The NCE method divides the labels of the central node into two categories: true labels and noise labels. The multilabel classification problem is thereby translated into a binary classification task, which significantly reduces the time cost.

Figure 3. Diagram of drug–gene network. The blue nodes indicate drugs, and the red nodes indicate genes. The solid lines represent drug–gene interactions, with the blue lines indicating a similarity between two drugs, and the red lines indicating a similarity between two genes. The black line means that the nodes are selected as candidate labels of drug “D020849,” and the dotted line indicates that the node on the end side of the line is a sampled label.

The probability of a true label is defined as

(3)  $p(g_i = 1 \mid G, d) = \dfrac{p_{\theta}(d \mid G)}{p_{\theta}(d \mid G) + k \cdot q(d)}$,

where $g_i$ is a gene label of the central drug d and G is the label set of d. Meanwhile, k noise labels are selected from a noise distribution q(d). In (3), θ is a parameter used to maximize the conditional likelihood of the label set (Gutmann and Hyvarinen, 2012).

Next, the noise-sample probability is computed as

(4)  $p(g_i = 0 \mid G, d) = \dfrac{k \cdot q(d)}{p_{\theta}(d \mid G) + k \cdot q(d)}$.

Accordingly, the cost function over the N central drugs is

(5)  $\frac{1}{N}\sum_{i}^{N}\left\{ \log p(g_i = 1 \mid G, d) + \sum \log p(g_i = 0 \mid G, d) \right\}$,

where the inner sum runs over the k sampled noise labels.

Negative sampling

Mikolov et al. (2013) proposed negative sampling as a replacement for hierarchical softmax. Negative sampling simplifies NCE while maintaining the quality of the representation vectors. It is similar to NCE in that it also uses a noise label set to turn the task into a binary classification problem. Negative sampling can thus be regarded as a special case of NCE with a constant noise distribution q and k = |V|, so that k·q(d) = 1. Accordingly, the probability computation in (3) becomes

(6)  $p(g_i = 1 \mid G, d) = \dfrac{p_{\theta}(d \mid G)}{p_{\theta}(d \mid G) + 1}$,

and Equation (4) simplifies to

(7)  $p(g_i = 0 \mid G, d) = \dfrac{1}{p_{\theta}(d \mid G) + 1}$.
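As a rough sketch of how Equations (3)–(7) drive training, the toy code below implements the NCE true-/noise-label probabilities and one per-drug loss term, plus the negative-sampling special case in which k·q(·) collapses to 1. Everything here is an assumption for illustration only: the unnormalised score p_θ is modelled as exp(v'_g · v_d), the noise distribution q is taken to be uniform over genes, and the sizes and indices are arbitrary; the paper itself relies on the metapath2vec/metapath2vec++ models rather than this code.

```python
import numpy as np

rng = np.random.default_rng(1)
D, G, m = 1000, 500, 64                    # numbers of drugs, genes, embedding size
Vd = rng.normal(scale=0.1, size=(D, m))    # drug vectors v_d
Vg = rng.normal(scale=0.1, size=(G, m))    # gene ("label") vectors v'_g
q = np.full(G, 1.0 / G)                    # noise distribution q(.), assumed uniform here

def p_theta(drug: int, gene: int) -> float:
    """Unnormalised model score for a (drug, gene) pair -- an assumed form, exp(v'_g . v_d)."""
    return float(np.exp(Vg[gene] @ Vd[drug]))

def nce_true_prob(drug: int, gene: int, k: int) -> float:
    """Equation (3): probability that the sampled label is a true (non-noise) one."""
    s = p_theta(drug, gene)
    return s / (s + k * q[gene])

def nce_loss_term(drug: int, true_gene: int, k: int) -> float:
    """Negative of one summand of Equation (5): true-label term plus k noise-label terms."""
    noise_genes = rng.choice(G, size=k, p=q)
    loss = -np.log(nce_true_prob(drug, true_gene, k))
    for g in noise_genes:
        # Equation (4) is just 1 - Equation (3) for the same pair.
        loss -= np.log(1.0 - nce_true_prob(drug, int(g), k))
    return loss

def neg_sampling_true_prob(drug: int, gene: int) -> float:
    """Equation (6): negative sampling, i.e., NCE with k * q(.) = 1 in the denominator."""
    s = p_theta(drug, gene)
    return s / (s + 1.0)

print(nce_loss_term(drug=42, true_gene=7, k=5))   # one training term to be minimised
print(neg_sampling_true_prob(drug=42, gene=7))
```

In practice such terms are minimised by stochastic gradient descent over many sampled (drug, gene) pairs.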
[ "25361966", "17211405", "7063747", "26357232", "10688178", "28982286", "15301563", "23650495", "17696640", "11289852", "18587383", "16162296", "22923290", "16204113" ]
[ { "pmid": "25361966", "title": "ADReCS: an ontology database for aiding standardization and hierarchical classification of adverse drug reaction terms.", "abstract": "Adverse drug reactions (ADRs) are noxious and unexpected effects during normal drug therapy. They have caused significant clinical burden and been responsible for a large portion of new drug development failure. Molecular understanding and in silico evaluation of drug (or candidate) safety in laboratory is thus so desired, and unfortunately has been largely hindered by misuse of ADR terms. The growing impact of bioinformatics and systems biology in toxicological research also requires a specialized ADR term system that works beyond a simple glossary. Adverse Drug Reaction Classification System (ADReCS; http://bioinf.xmu.edu.cn/ADReCS) is a comprehensive ADR ontology database that provides not only ADR standardization but also hierarchical classification of ADR terms. The ADR terms were pre-assigned with unique digital IDs and at the same time were well organized into a four-level ADR hierarchy tree for building an ADR-ADR relation. Currently, the database covers 6544 standard ADR terms and 34,796 synonyms. It also incorporates information of 1355 single active ingredient drugs and 134,022 drug-ADR pairs. In summary, ADReCS offers an opportunity for direct computation on ADR terms and also provides clues to mining common features underlying ADRs." }, { "pmid": "17211405", "title": "Structure-based maximal affinity model predicts small-molecule druggability.", "abstract": "Lead generation is a major hurdle in small-molecule drug discovery, with an estimated 60% of projects failing from lack of lead matter or difficulty in optimizing leads for drug-like properties. It would be valuable to identify these less-druggable targets before incurring substantial expenditure and effort. Here we show that a model-based approach using basic biophysical principles yields good prediction of druggability based solely on the crystal structure of the target binding site. We quantitatively estimate the maximal affinity achievable by a drug-like molecule, and we show that these calculated values correlate with drug discovery outcomes. We experimentally test two predictions using high-throughput screening of a diverse compound collection. The collective results highlight the utility of our approach as well as strategies for tackling difficult targets." }, { "pmid": "7063747", "title": "The meaning and use of the area under a receiver operating characteristic (ROC) curve.", "abstract": "A representation and interpretation of the area under a receiver operating characteristic (ROC) curve obtained by the \"rating\" method, or by mathematical predictions based on patient characteristics, is presented. It is shown that in such a setting the area represents the probability that a randomly chosen diseased subject is (correctly) rated or ranked with greater suspicion than a randomly chosen non-diseased subject. Moreover, this probability of a correct ranking is the same quantity that is estimated by the already well-studied nonparametric Wilcoxon statistic. 
These two relationships are exploited to (a) provide rapid closed-form expressions for the approximate magnitude of the sampling variability, i.e., standard error that one uses to accompany the area under a smoothed ROC curve, (b) guide in determining the size of the sample required to provide a sufficiently reliable estimate of this area, and (c) determine how large sample sizes should be to ensure that one can statistically detect differences in the accuracy of diagnostic techniques." }, { "pmid": "26357232", "title": "Identifying Driver Nodes in the Human Signaling Network Using Structural Controllability Analysis.", "abstract": "Cell signaling governs the basic cellular activities and coordinates the actions in cell. Abnormal regulations in cell signaling processing are responsible for many human diseases, such as diabetes and cancers. With the accumulation of massive data related to human cell signaling, it is feasible to obtain a human signaling network. Some studies have shown that interesting biological phenomenon and drug-targets could be discovered by applying structural controllability analysis to biological networks. In this work, we apply structural controllability to a human signaling network and detect driver nodes, providing a systematic analysis of the role of different proteins in controlling the human signaling network. We find that the proteins in the upstream of the signaling information flow and the low in-degree proteins play a crucial role in controlling the human signaling network. Interestingly, inputting different control signals on the regulators of the cancer-associated genes could cost less than controlling the cancer-associated genes directly in order to control the whole human signaling network in the sense that less drive nodes are needed. This research provides a fresh perspective for controlling the human cell signaling system." }, { "pmid": "28982286", "title": "Spiking Neural P Systems with Communication on Request.", "abstract": "Spiking Neural [Formula: see text] Systems are Neural System models characterized by the fact that each neuron mimics a biological cell and the communication between neurons is based on spikes. In the Spiking Neural [Formula: see text] systems investigated so far, the application of evolution rules depends on the contents of a neuron (checked by means of a regular expression). In these [Formula: see text] systems, a specified number of spikes are consumed and a specified number of spikes are produced, and then sent to each of the neurons linked by a synapse to the evolving neuron. [Formula: see text]In the present work, a novel communication strategy among neurons of Spiking Neural [Formula: see text] Systems is proposed. In the resulting models, called Spiking Neural [Formula: see text] Systems with Communication on Request, the spikes are requested from neighboring neurons, depending on the contents of the neuron (still checked by means of a regular expression). Unlike the traditional Spiking Neural [Formula: see text] systems, no spikes are consumed or created: the spikes are only moved along synapses and replicated (when two or more neurons request the contents of the same neuron). [Formula: see text]The Spiking Neural [Formula: see text] Systems with Communication on Request are proved to be computationally universal, that is, equivalent with Turing machines as long as two types of spikes are used. Following this work, further research questions are listed to be open problems." 
}, { "pmid": "15301563", "title": "Drug-gene interactions between genetic polymorphisms and antihypertensive therapy.", "abstract": "Genetic factors may influence the response to antihypertensive medication. A number of studies have investigated genetic polymorphisms as determinants of cardiovascular response to antihypertensive drug therapy. In most candidate gene studies, no such drug-gene interactions were found. However, there is observational evidence that hypertensive patients with the 460 W allele of the alpha-adducin gene have a lower risk of myocardial infarction and stroke when treated with diuretics compared with other antihypertensive therapies. With regard to blood pressure response, interactions were found between genetic polymorphisms for endothelial nitric oxide synthase and diuretics, the alpha-adducin gene and diuretics, the alpha-subunit of G protein and beta-adrenoceptor antagonists, and the ACE gene and angiotensin II type 1 (AT(1)) receptor antagonists. Other studies found an interaction between ACE inhibitors and the ACE insertion/deletion (I/D) polymorphism, which resulted in differences in AT(1) receptor mRNA expression, left ventricular hypertrophy and arterial stiffness between different genetic variants. Also, drug-gene interactions between calcium channel antagonists and ACE I/D polymorphism regarding arterial stiffness have been reported. Unfortunately, the quality of these studies is quite variable. Given the methodological problems, the results from the candidate gene studies are still inconclusive and further research is necessary." }, { "pmid": "23650495", "title": "Prediction and validation of gene-disease associations using methods inspired by social network analyses.", "abstract": "Correctly identifying associations of genes with diseases has long been a goal in biology. With the emergence of large-scale gene-phenotype association datasets in biology, we can leverage statistical and machine learning methods to help us achieve this goal. In this paper, we present two methods for predicting gene-disease associations based on functional gene associations and gene-phenotype associations in model organisms. The first method, the Katz measure, is motivated from its success in social network link prediction, and is very closely related to some of the recent methods proposed for gene-disease association inference. The second method, called Catapult (Combining dATa Across species using Positive-Unlabeled Learning Techniques), is a supervised machine learning method that uses a biased support vector machine where the features are derived from walks in a heterogeneous gene-trait network. We study the performance of the proposed methods and related state-of-the-art methods using two different evaluation strategies, on two distinct data sets, namely OMIM phenotypes and drug-target interactions. Finally, by measuring the performance of the methods using two different evaluation strategies, we show that even though both methods perform very well, the Katz measure is better at identifying associations between traits and poorly studied genes, whereas Catapult is better suited to correctly identifying gene-trait associations overall [corrected]." }, { "pmid": "17696640", "title": "Translating pharmacogenomics: challenges on the road to the clinic.", "abstract": "Pharmacogenomics is one of the first clinical applications of the postgenomic era. It promises personalized medicine rather than the established \"one size fits all\" approach to drugs and dosages. 
The expected reduction in trial and error should ultimately lead to more efficient and safer drug therapy. In recent years, commercially available pharmacogenomic tests have been approved by the Food and Drug Administration (FDA), but their application in patient care remains very limited. More generally, the implementation of pharmacogenomics in routine clinical practice presents significant challenges. This article presents specific clinical examples of such challenges and discusses how obstacles to implementation of pharmacogenomic testing can be addressed." }, { "pmid": "11289852", "title": "Efficient, multiple-range random walk algorithm to calculate the density of states.", "abstract": "We present a new Monte Carlo algorithm that produces results of high accuracy with reduced simulational effort. Independent random walks are performed (concurrently or serially) in different, restricted ranges of energy, and the resultant density of states is modified continuously to produce locally flat histograms. This method permits us to directly access the free energy and entropy, is independent of temperature, and is efficient for the study of both 1st order and 2nd order phase transitions. It should also be useful for the study of complex systems with a rough energy landscape." }, { "pmid": "18587383", "title": "Creating and evaluating genetic tests predictive of drug response.", "abstract": "A key goal of pharmacogenetics--the use of genetic variation to elucidate inter-individual variation in drug treatment response--is to aid the development of predictive genetic tests that could maximize drug efficacy and minimize drug toxicity. The completion of the Human Genome Project and the associated HapMap Project, together with advances in technologies for investigating genetic variation, have greatly advanced the potential to develop such tests; however, many challenges remain. With the aim of helping to address some of these challenges, this article discusses the steps that are involved in the development of predictive tests for drug treatment response based on genetic variation, and factors that influence the development and performance of these tests." }, { "pmid": "16162296", "title": "Systematic survey reveals general applicability of \"guilt-by-association\" within gene coexpression networks.", "abstract": "BACKGROUND\nBiological processes are carried out by coordinated modules of interacting molecules. As clustering methods demonstrate that genes with similar expression display increased likelihood of being associated with a common functional module, networks of coexpressed genes provide one framework for assigning gene function. This has informed the guilt-by-association (GBA) heuristic, widely invoked in functional genomics. Yet although the idea of GBA is accepted, the breadth of GBA applicability is uncertain.\n\n\nRESULTS\nWe developed methods to systematically explore the breadth of GBA across a large and varied corpus of expression data to answer the following question: To what extent is the GBA heuristic broadly applicable to the transcriptome and conversely how broadly is GBA captured by a priori knowledge represented in the Gene Ontology (GO)? Our study provides an investigation of the functional organization of five coexpression networks using data from three mammalian organisms. Our method calculates a probabilistic score between each gene and each Gene Ontology category that reflects coexpression enrichment of a GO module. 
For each GO category we use Receiver Operating Curves to assess whether these probabilistic scores reflect GBA. This methodology applied to five different coexpression networks demonstrates that the signature of guilt-by-association is ubiquitous and reproducible and that the GBA heuristic is broadly applicable across the population of nine hundred Gene Ontology categories. We also demonstrate the existence of highly reproducible patterns of coexpression between some pairs of GO categories.\n\n\nCONCLUSION\nWe conclude that GBA has universal value and that transcriptional control may be more modular than previously realized. Our analyses also suggest that methodologies combining coexpression measurements across multiple genes in a biologically-defined module can aid in characterizing gene function or in characterizing whether pairs of functions operate together." }, { "pmid": "22923290", "title": "Positive-unlabeled learning for disease gene identification.", "abstract": "BACKGROUND\nIdentifying disease genes from human genome is an important but challenging task in biomedical research. Machine learning methods can be applied to discover new disease genes based on the known ones. Existing machine learning methods typically use the known disease genes as the positive training set P and the unknown genes as the negative training set N (non-disease gene set does not exist) to build classifiers to identify new disease genes from the unknown genes. However, such kind of classifiers is actually built from a noisy negative set N as there can be unknown disease genes in N itself. As a result, the classifiers do not perform as well as they could be.\n\n\nRESULT\nInstead of treating the unknown genes as negative examples in N, we treat them as an unlabeled set U. We design a novel positive-unlabeled (PU) learning algorithm PUDI (PU learning for disease gene identification) to build a classifier using P and U. We first partition U into four sets, namely, reliable negative set RN, likely positive set LP, likely negative set LN and weak negative set WN. The weighted support vector machines are then used to build a multi-level classifier based on the four training sets and positive training set P to identify disease genes. Our experimental results demonstrate that our proposed PUDI algorithm outperformed the existing methods significantly.\n\n\nCONCLUSION\nThe proposed PUDI algorithm is able to identify disease genes more accurately by treating the unknown data more appropriately as unlabeled set U instead of negative set N. Given that many machine learning problems in biomedical research do involve positive and unlabeled data instead of negative data, it is possible that the machine learning methods for these problems can be further improved by adopting PU learning methods, as we have done here for disease gene identification.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe executable program and data are available at http://www1.i2r.a-star.edu.sg/~xlli/PUDI/PUDI.html." }, { "pmid": "16204113", "title": "A probabilistic model for mining implicit 'chemical compound-gene' relations from literature.", "abstract": "MOTIVATION\nThe importance of chemical compounds has been emphasized more in molecular biology, and 'chemical genomics' has attracted a great deal of attention in recent years. Thus an important issue in current molecular biology is to identify biological-related chemical compounds (more specifically, drugs) and genes. 
Co-occurrence of biological entities in the literature is a simple, comprehensive and popular technique to find the association of these entities. Our focus is to mine implicit 'chemical compound and gene' relations from the co-occurrence in the literature.\n\n\nRESULTS\nWe propose a probabilistic model, called the mixture aspect model (MAM), and an algorithm for estimating its parameters to efficiently handle different types of co-occurrence datasets at once. We examined the performance of our approach not only by a cross-validation using the data generated from the MEDLINE records but also by a test using an independent human-curated dataset of the relationships between chemical compounds and genes in the ChEBI database. We performed experimentation on three different types of co-occurrence datasets (i.e. compound-gene, gene-gene and compound-compound co-occurrences) in both cases. Experimental results have shown that MAM trained by all datasets outperformed any simple model trained by other combinations of datasets with the difference being statistically significant in all cases. In particular, we found that incorporating compound-compound co-occurrences is the most effective in improving the predictive performance. We finally computed the likelihoods of all unknown compound-gene (more specifically, drug-gene) pairs using our approach and selected the top 20 pairs according to the likelihoods. We validated them from biological, medical and pharmaceutical viewpoints." } ]
Micromachines
30424258
PMC6082280
10.3390/mi9070325
Development of a Sensor Network System with High Sampling Rate Based on Highly Accurate Simultaneous Synchronization of Clock and Data Acquisition and Experimental Verification †
In this paper, we develop a new sensor network system with a high sampling rate (over 500 Hz) based on the simultaneous synchronization of clock and data acquisition, enabling the integration of data obtained from various sensors. To this end, we propose a method for synchronizing clock and data acquisition in the sensor network system. In the proposed scheme, multiple sensor nodes, each including a PC, are connected via Ethernet for data communication and clock synchronization. The timing of data acquisition at each sensor is controlled locally by the PC's clock in that node, and the clocks are globally synchronized over the network. We construct three types of high-speed sensor network systems using the proposed method: the first is composed of a high-speed tactile sensor node and a high-speed vision node; the second of a high-speed tactile sensor node and three acceleration sensor nodes; and the third of a high-speed tactile sensor node, two acceleration sensor nodes, and a gyro sensor node. Through experiments, we verify that the data-acquisition timing error between sensor nodes is less than 15 μs, which is significantly smaller than the sampling interval of 2 ms or less. We also confirm the effectiveness of the proposed method and expect that the system can be applied to a wide range of applications.
1.2. Related Works
Approaches for synchronizing the shutter timings of multiple cameras in a vision network system have been discussed previously. For a typical 30 frames per second (fps) camera network system, Litos et al. suggested a clock-synchronization-based method using the Network Time Protocol (NTP) [5]. However, the accuracy of NTP synchronization is on the order of milliseconds, which is not acceptable for high sampling rates with millisecond or submillisecond intervals. In particular, this method cannot be applied to feedback control systems that require real-time response. Hou et al. proposed an illumination-based method for the real-time contactless synchronization of high-speed vision systems [6]; however, the approach is not sufficiently flexible for general sensor network systems and is considered applicable only to vision networks, i.e., it lacks general-purpose applicability. Other methods for data synchronization in multisensor network systems were proposed in [7,8,9]. To summarize, conventional sensor network systems do not simultaneously offer a high sampling rate, highly accurate synchronization of data acquisition, and general-purpose applicability. Our objective is to solve these problems and develop a high-speed sensor network system with all of these properties using the proposed method.
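To make the clock-synchronization idea in the record above concrete, here is a minimal, illustrative Python sketch (not code from the cited papers) of the two-way timestamp exchange used by NTP/PTP-style protocols to estimate the offset between a local (slave) clock and a master clock, and of how a node could then place its sampling instants on a globally agreed 2 ms grid. All names, timestamp values and parameters are hypothetical.

```python
import time

def estimate_offset_and_delay(t1, t2, t3, t4):
    """Classical two-way time-transfer estimate.
    t1: request sent (slave clock), t2: request received (master clock),
    t3: reply sent (master clock),  t4: reply received (slave clock).
    Assumes a symmetric network path."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # master_time - slave_time
    delay = ((t4 - t1) - (t3 - t2)) / 2.0    # estimated one-way path delay
    return offset, delay

def next_sample_instant(now_local, offset, period_s=0.002):
    """Return the local-clock time of the next sampling instant lying on the
    global grid k * period_s (e.g. 2 ms) defined by the master clock."""
    now_global = now_local + offset
    k = int(now_global / period_s) + 1
    return k * period_s - offset  # convert the grid point back to local time

if __name__ == "__main__":
    # Hypothetical timestamps (seconds); the slave clock runs 150 us behind
    # the master and the one-way network delay is 500 us.
    t1, t2, t3, t4 = 10.000000, 10.000650, 10.000700, 10.001050
    offset, delay = estimate_offset_and_delay(t1, t2, t3, t4)
    print(f"offset = {offset * 1e6:.1f} us, delay = {delay * 1e6:.1f} us")
    print("next sample at local t =", next_sample_instant(time.monotonic(), offset))
```

The sub-15 μs timing error reported in the abstract depends on path symmetry, hardware timestamping and other implementation details; this sketch only shows the arithmetic behind the offset estimate and the scheduling of synchronized sampling instants.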
[ "23201987", "29690512" ]
[ { "pmid": "23201987", "title": "On increasing network lifetime in body area networks using global routing with energy consumption balancing.", "abstract": "Global routing protocols in wireless body area networks are considered. Global routing is augmented with a novel link cost function designed to balance energy consumption across the network. The result is a substantial increase in network lifetime at the expense of a marginal increase in energy per bit. Network maintenance requirements are reduced as well, since balancing energy consumption means all batteries need to be serviced at the same time and less frequently. The proposed routing protocol is evaluated using a hardware experimental setup comprising multiple nodes and an access point. The setup is used to assess network architectures, including an on-body access point and an off-body access point with varying number of antennas. Real-time experiments are conducted in indoor environments to assess performance gains. In addition, the setup is used to record channel attenuation data which are then processed in extensive computer simulations providing insight on the effect of protocol parameters on performance. Results demonstrate efficient balancing of energy consumption across all nodes, an average increase of up to 40% in network lifetime corresponding to a modest average increase of 0.4 dB in energy per bit, and a cutoff effect on required transmission power to achieve reliable connectivity." }, { "pmid": "29690512", "title": "Synchronized High-Speed Vision Sensor Network for Expansion of Field of View.", "abstract": "We propose a 500-frames-per-second high-speed vision (HSV) sensor network that acquires frames at a timing that is precisely synchronized across the network. Multiple vision sensor nodes, individually comprising a camera and a PC, are connected via Ethernet for data transmission and for clock synchronization. A network of synchronized HSV sensors provides a significantly expanded field-of-view compared with that of each individual HSV sensor. In the proposed system, the shutter of each camera is controlled based on the clock of the PC locally provided inside the node, and the shutters are globally synchronized using the Precision Time Protocol (PTP) over the network. A theoretical analysis and experiment results indicate that the shutter trigger skew among the nodes is a few tens of microseconds at most, which is significantly smaller than the frame interval of 1000-fps-class high-speed cameras. Experimental results obtained with the proposed system comprising four nodes demonstrated the ability to capture the propagation of a small displacement along a large-scale structure." } ]
Computational and Structural Biotechnology Journal
30108685
PMC6082774
10.1016/j.csbj.2018.07.004
FHIRChain: Applying Blockchain to Securely and Scalably Share Clinical Data
Secure and scalable data sharing is essential for collaborative clinical decision making. Conventional clinical data efforts are often siloed, however, which creates barriers to efficient information exchange and impedes effective treatment decision making for patients. This paper provides four contributions to the study of applying blockchain technology to clinical data sharing in the context of technical requirements defined in the “Shared Nationwide Interoperability Roadmap” from the Office of the National Coordinator for Health Information Technology (ONC). First, we analyze the ONC requirements and their implications for blockchain-based systems. Second, we present FHIRChain, which is a blockchain-based architecture designed to meet ONC requirements by encapsulating the HL7 Fast Healthcare Interoperability Resources (FHIR) standard for shared clinical data. Third, we demonstrate a FHIRChain-based decentralized app using digital health identities to authenticate participants in a case study of collaborative decision making for remote cancer care. Fourth, we highlight key lessons learned from our case study.
3.3 Differentiating our Research Focus of FHIRChain from Related Work
This paper presents our blockchain-based framework, called FHIRChain, whose architectural choices were explicitly designed to meet key technical requirements defined by the ONC interoperability roadmap. Our design differs from related work on blockchain infrastructures and associated consensus mechanisms in that it is decoupled from any particular blockchain framework and instead focuses on the design decisions of smart contracts and other blockchain-interfacing components. FHIRChain is thus compatible with any existing blockchain that supports the execution of smart contracts. In the remainder of this paper we describe how our FHIRChain-based DApp demonstrates the use of digital health identities that do not directly encode private information and can thus be replaced when identities are lost or stolen, even in a blockchain system. While our approach is similar to the use of digital IDs in the HIE of One system (Gropper, 2016), FHIRChain provides a more streamlined solution. In addition, we incorporate a token-based access exchange mechanism in FHIRChain that conforms to the FHIR clinical data standard. Finally, we leverage public key cryptography to simplify secure authentication and permission authorization, while preventing attackers from obtaining unauthorized data access.
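The token-based access exchange and public-key authentication described above can be illustrated with a small, self-contained sketch. This is not the FHIRChain implementation (which encodes this logic in smart contracts on a blockchain); it only shows, under assumed names and data shapes, how a patient's key pair could sign an access token that references a FHIR resource, and how a verifier could check the signature, intended grantee and expiry before releasing the data pointer. It uses the pyca/cryptography package for ECDSA over P-256.

```python
import json
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

def issue_access_token(patient_key, grantee_id, fhir_reference, ttl_s=3600):
    """Patient signs a token granting `grantee_id` access to one FHIR resource.
    Only a *reference* to the data is shared; the record itself stays off-chain."""
    payload = {
        "resource": fhir_reference,   # e.g. "DiagnosticReport/7" (hypothetical)
        "grantee": grantee_id,        # hypothetical digital health identity
        "expires": time.time() + ttl_s,
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    signature = patient_key.sign(blob, ec.ECDSA(hashes.SHA256()))
    return payload, signature

def verify_access_token(patient_public_key, payload, signature, grantee_id):
    """Check the signature, the intended grantee and the expiry time."""
    blob = json.dumps(payload, sort_keys=True).encode()
    try:
        patient_public_key.verify(signature, blob, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    return payload["grantee"] == grantee_id and payload["expires"] > time.time()

if __name__ == "__main__":
    patient_key = ec.generate_private_key(ec.SECP256R1())
    token, sig = issue_access_token(patient_key, "oncologist-42", "DiagnosticReport/7")
    ok = verify_access_token(patient_key.public_key(), token, sig, "oncologist-42")
    print("access granted" if ok else "access denied")
```

In the actual FHIRChain design the token exchange and its validation are performed by blockchain smart contracts; the sketch only captures the signature-and-expiry check in ordinary application code.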
[ "15497196", "25834725", "23440149", "12824090", "19901140", "26792258", "27518656", "26560699", "3102006", "10785586", "24845366", "20539724", "19199839", "24169275", "17911683", "17712081", "29016974", "12549898", "27565509", "22865671", "16221939" ]
[ { "pmid": "15497196", "title": "Technology and managed care: patient benefits of telemedicine in a rural health care network.", "abstract": "Rural health providers have looked to telemedicine as a technology to reduce costs. However, virtual access to physicians and specialists may alter patients' demand for face-to-face physician access. We develop a model of service demand under managed care, and apply the model to a telemedicine application in rural Alaska. Provider-imposed delays and patient costs were highly significant predictors of patient contingent choices in a survey of ENT clinic patients. The results suggest that telemedicine increased estimated patient benefits by about $40 per visit, and reduced patients' loss from rationing of access to physicians by about 20%." }, { "pmid": "25834725", "title": "Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.", "abstract": "As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. Roadblocks to this vision of integration and interoperability include ethical, legal, and logistical concerns. Ensuring data security and protection of patient rights while simultaneously facilitating standardization is paramount to maintaining public support. The capabilities of supercomputing need to be applied strategically. A standardized, methodological implementation must be applied to developed artificial intelligence systems with the ability to integrate data and information into clinically relevant knowledge. 
Ultimately, the integration of bioinformatics and clinical data in a clinical decision support system promises precision medicine and cost effective and personalized patient care." }, { "pmid": "23440149", "title": "Types and origins of diagnostic errors in primary care settings.", "abstract": "IMPORTANCE\nDiagnostic errors are an understudied aspect of ambulatory patient safety.\n\n\nOBJECTIVES\nTo determine the types of diseases missed and the diagnostic processes involved in cases of confirmed diagnostic errors in primary care settings and to determine whether record reviews could shed light on potential contributory factors to inform future interventions.\n\n\nDESIGN\nWe reviewed medical records of diagnostic errors detected at 2 sites through electronic health record-based triggers. Triggers were based on patterns of patients' unexpected return visits after an initial primary care index visit.\n\n\nSETTING\nA large urban Veterans Affairs facility and a large integrated private health care system.\n\n\nPARTICIPANTS\nOur study focused on 190 unique instances of diagnostic errors detected in primary care visits between October 1, 2006, and September 30, 2007.\n\n\nMAIN OUTCOME MEASURES\nThrough medical record reviews, we collected data on presenting symptoms at the index visit, types of diagnoses missed, process breakdowns, potential contributory factors, and potential for harm from errors.\n\n\nRESULTS\nIn 190 cases, a total of 68 unique diagnoses were missed. Most missed diagnoses were common conditions in primary care, with pneumonia (6.7%), decompensated congestive heart failure (5.7%), acute renal failure (5.3%), cancer (primary) (5.3%), and urinary tract infection or pyelonephritis (4.8%) being most common. Process breakdowns most frequently involved the patient-practitioner clinical encounter (78.9%) but were also related to referrals (19.5%), patient-related factors (16.3%), follow-up and tracking of diagnostic information (14.7%), and performance and interpretation of diagnostic tests (13.7%). A total of 43.7% of cases involved more than one of these processes. Patient-practitioner encounter breakdowns were primarily related to problems with history-taking (56.3%), examination (47.4%), and/or ordering diagnostic tests for further workup (57.4%). Most errors were associated with potential for moderate to severe harm.\n\n\nCONCLUSIONS AND RELEVANCE\nDiagnostic errors identified in our study involved a large variety of common diseases and had significant potential for harm. Most errors were related to process breakdowns in the patient-practitioner clinical encounter. Preventive interventions should target common contributory factors across diagnoses, especially those that involve data gathering and synthesis in the patient-practitioner encounter." }, { "pmid": "12824090", "title": "Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review.", "abstract": "BACKGROUND\nIatrogenic injuries related to medications are common, costly, and clinically significant. Computerized physician order entry (CPOE) and clinical decision support systems (CDSSs) may reduce medication error rates.\n\n\nMETHODS\nWe identified trials that evaluated the effects of CPOE and CDSSs on medication safety by electronically searching MEDLINE and the Cochrane Library and by manually searching the bibliographies of retrieved articles. 
Studies were included for systematic review if the design was a randomized controlled trial, a nonrandomized controlled trial, or an observational study with controls and if the measured outcomes were clinical (eg, adverse drug events) or surrogate (eg, medication errors) markers. Two reviewers extracted all the data. Discussion resolved any disagreements.\n\n\nRESULTS\nFive trials assessing CPOE and 7 assessing isolated CDSSs met the criteria. Of the CPOE studies, 2 demonstrated a marked decrease in the serious medication error rate, 1 an improvement in corollary orders, 1 an improvement in 5 prescribing behaviors, and 1 an improvement in nephrotoxic drug dose and frequency. Of the 7 studies evaluating isolated CDSSs, 3 demonstrated statistically significant improvements in antibiotic-associated medication errors or adverse drug events and 1 an improvement in theophylline-associated medication errors. The remaining 3 studies had nonsignificant results.\n\n\nCONCLUSIONS\nUse of CPOE and isolated CDSSs can substantially reduce medication error rates, but most studies have not been powered to detect differences in adverse drug events and have evaluated a small number of \"homegrown\" systems. Research is needed to evaluate commercial systems, to compare the various applications, to identify key components of applications, and to identify factors related to successful implementation of these systems." }, { "pmid": "19901140", "title": "Diagnostic error in medicine: analysis of 583 physician-reported errors.", "abstract": "BACKGROUND\nMissed or delayed diagnoses are a common but understudied area in patient safety research. To better understand the types, causes, and prevention of such errors, we surveyed clinicians to solicit perceived cases of missed and delayed diagnoses.\n\n\nMETHODS\nA 6-item written survey was administered at 20 grand rounds presentations across the United States and by mail at 2 collaborating institutions. Respondents were asked to report 3 cases of diagnostic errors and to describe their perceived causes, seriousness, and frequency.\n\n\nRESULTS\nA total of 669 cases were reported by 310 clinicians from 22 institutions. After cases without diagnostic errors or lacking sufficient details were excluded, 583 remained. Of these, 162 errors (28%) were rated as major, 241 (41%) as moderate, and 180 (31%) as minor or insignificant. The most common missed or delayed diagnoses were pulmonary embolism (26 cases [4.5% of total]), drug reactions or overdose (26 cases [4.5%]), lung cancer (23 cases [3.9%]), colorectal cancer (19 cases [3.3%]), acute coronary syndrome (18 cases [3.1%]), breast cancer (18 cases [3.1%]), and stroke (15 cases [2.6%]). Errors occurred most frequently in the testing phase (failure to order, report, and follow-up laboratory results) (44%), followed by clinician assessment errors (failure to consider and overweighing competing diagnosis) (32%), history taking (10%), physical examination (10%), and referral or consultation errors and delays (3%).\n\n\nCONCLUSIONS\nPhysicians readily recalled multiple cases of diagnostic errors and were willing to share their experiences. Using a new taxonomy tool and aggregating cases by diagnosis and error type revealed patterns of diagnostic failures that suggested areas for improvement. Systematic solicitation and analysis of such errors can identify potential preventive strategies." 
}, { "pmid": "26560699", "title": "Opening clinical trial data: are the voluntary data-sharing portals enough?", "abstract": "Data generated by the numerous clinical trials conducted annually worldwide have the potential to be extremely beneficial to the scientific and patient communities. This potential is well recognized and efforts are being made to encourage the release of raw patient-level data from these trials to the public. The issue of sharing clinical trial data has recently gained attention, with many agreeing that this type of data should be made available for research in a timely manner. The availability of clinical trial data is most important for study reproducibility, meta-analyses, and improvement of study design. There is much discussion in the community over key data sharing issues, including the risks this practice holds. However, one aspect that remains to be adequately addressed is that of the accessibility, quality, and usability of the data being shared. Herein, experiences with the two current major platforms used to store and disseminate clinical trial data are described, discussing the issues encountered and suggesting possible solutions." }, { "pmid": "3102006", "title": "The role of the tumor board in a community hospital.", "abstract": "A hospital tumor board is a multidisciplinary group of physicians that meets on a regular basis to review cancer cases. Through regular meetings, the tumor board will improve the quality of cancer care, provide educational opportunities for participants, and become an asset to the hospital and to the community. The use of multidisciplinary tumor-board consultations can ensure that the cancer patient has access to the best current thinking about cancer management. This structure provides the individual practitioner and his hospital with the educational, quality assurance, and legal mechanisms to deliver state-of-the-art care." }, { "pmid": "10785586", "title": "Telemedicine and its impact on cancer management.", "abstract": "The latest dramatic progress in the technologies of the computer industry is likely to increasingly influence the oncologist's daily routine. Besides well known and established telemedical services such as videoconferencing, the most influential trends are the spread of digital hospital infrastructures with unlimited, secured access to all relevant patient information. This article seeks to summarise the most imminent influences of telemedical developments on the future of the oncologist: the effects of telemedical services and electronic infrastructures on clinical workflow and on medical quality management. In addition, the history of telemedicine, recent technologies and the performance of electronic patient records are described." }, { "pmid": "24845366", "title": "Implementation of a regional virtual tumor board: a prospective study evaluating feasibility and provider acceptance.", "abstract": "BACKGROUND\nTumor board (TB) conferences facilitate multidisciplinary cancer care and are associated with overall improved outcomes. Because of shortages of the oncology workforce and limited access to TB conferences, multidisciplinary care is not available at every institution. 
This pilot study assessed the feasibility and acceptance of using telemedicine to implement a virtual TB (VTB) program within a regional healthcare network.\n\n\nMATERIALS AND METHODS\nThe VTB program was implemented through videoconference technology and electronic medical records between the Houston (TX) Veterans Affairs Medical Center (VAMC) (referral center) and the New Orleans (LA) VAMC (referring center). Feasibility was assessed as the proportion of completed VTB encounters, rate of technological failures/mishaps, and presentation duration. Validated surveys for confidence and satisfaction were administered to 36 TB participants to assess acceptance (1-5 point Likert scale). Secondary outcomes included preliminary data on VTB utilization and its effectiveness in providing access to quality cancer care within the region.\n\n\nRESULTS\nNinety TB case presentations occurred during the study period, of which 14 (15%) were VTB cases. Although one VTB encounter had a technical mishap during presentation, all scheduled encounters were completed (100% completion rate). Case presentations took longer for VTB than for regular TB cases (p=0.0004). However, VTB was highly accepted with mean scores for satisfaction and confidence of 4.6. Utilization rate of VTB was 75%, and its effectiveness was equivalent to that observed for non-VTB cases.\n\n\nCONCLUSIONS\nImplementation of VTB is feasible and highly accepted by its participants. Future studies should focus on widespread implementation and validating the effectiveness of this model." }, { "pmid": "20539724", "title": "Ensuring quality cancer care through the oncology workforce.", "abstract": "A summary of the discussion at the Institute of Medicine National Cancer Policy Forum Workshop to examine oncology workforce shortages and describe current and potential solutions." }, { "pmid": "24169275", "title": "Health data use, stewardship, and governance: ongoing gaps and challenges: a report from AMIA's 2012 Health Policy Meeting.", "abstract": "Large amounts of personal health data are being collected and made available through existing and emerging technological media and tools. While use of these data has significant potential to facilitate research, improve quality of care for individuals and populations, and reduce healthcare costs, many policy-related issues must be addressed before their full value can be realized. These include the need for widely agreed-on data stewardship principles and effective approaches to reduce or eliminate data silos and protect patient privacy. AMIA's 2012 Health Policy Meeting brought together healthcare academics, policy makers, and system stakeholders (including representatives of patient groups) to consider these topics and formulate recommendations. A review of a set of Proposed Principles of Health Data Use led to a set of findings and recommendations, including the assertions that the use of health data should be viewed as a public good and that achieving the broad benefits of this use will require understanding and support from patients." }, { "pmid": "17911683", "title": "Challenges in telemedicine and eHealth: lessons learned from 20 years with telemedicine in Tromsø.", "abstract": "The Norwegian Centre for Telemedicine (NST) has, over the past two decades, contributed to the development and implementation of telemedicine and ehealth services in Norway. From 2002, NST has been a WHO Collaboration Center for telemedicine. 
In August 1996, Norway became the first country to implement an official telemedicine fee schedule making telemedicine services reimbursable by the national health insurer. Telemedicine is widely used in Northern Norway. Since the late 1980's, the University Hospital of North-Norway has experience in the following areas: teleradiology, telepathology, teledermatology, teleotorhinolaryngology (remote endoscopy), remote gastroscopy, tele-echocardiography, remote transmission of ECGs, telepsychiatry, teleophthalmology, teledialysis, teleemergency medicine, teleoncology, telecare, telegeriatric, teledentistry, maritime telemedicine, referrals and discharge letters, electronic delivery of laboratory results and distant teaching for healthcare personnel and patients. Based on the result achieved, the health authority in North-Norway plans to implement several large-scale telemedicine services: Teleradiology (incl. solutions for neurosurgery, orthopedic, different kinds of surgery, nuclear medicine, acute traumatic and oncology), digital communication and integration of patient data, and distant education. In addition, the following services will also be considered for large-scale implementation: teledialysis, prehospital thrombolysis, telepsychiatry, teledermatology. Last in line for implementation are: pediatric, district medical center (DMS), teleophthalmology and ear-nose-throat (ENT)." }, { "pmid": "17712081", "title": "Data standards in clinical research: gaps, overlaps, challenges and future directions.", "abstract": "Current efforts to define and implement health data standards are driven by issues related to the quality, cost and continuity of care, patient safety concerns, and desires to speed clinical research findings to the bedside. The President's goal for national adoption of electronic medical records in the next decade, coupled with the current emphasis on translational research, underscore the urgent need for data standards in clinical research. This paper reviews the motivations and requirements for standardized clinical research data, and the current state of standards development and adoption--including gaps and overlaps--in relevant areas. Unresolved issues and informatics challenges related to the adoption of clinical research data and terminology standards are mentioned, as are the collaborations and activities the authors perceive as most likely to address them." }, { "pmid": "29016974", "title": "Blockchain distributed ledger technologies for biomedical and health care applications.", "abstract": "OBJECTIVES\nTo introduce blockchain technologies, including their benefits, pitfalls, and the latest applications, to the biomedical and health care domains.\n\n\nTARGET AUDIENCE\nBiomedical and health care informatics researchers who would like to learn about blockchain technologies and their applications in the biomedical/health care domains.\n\n\nSCOPE\nThe covered topics include: (1) introduction to the famous Bitcoin crypto-currency and the underlying blockchain technology; (2) features of blockchain; (3) review of alternative blockchain technologies; (4) emerging nonfinancial distributed ledger technologies and applications; (5) benefits of blockchain for biomedical/health care applications when compared to traditional distributed databases; (6) overview of the latest biomedical/health care applications of blockchain technologies; and (7) discussion of the potential challenges and proposed solutions of adopting blockchain technologies in biomedical/health care domains." 
}, { "pmid": "12549898", "title": "Pertussis outbreak among adults at an oil refinery--Illinois, August-October 2002.", "abstract": "On September 16, 2002, the Crawford County Health Department (CCHD) reported to the Illinois Department of Public Health (IDPH) four cases of cough illness among workers at an oil refinery (total worker population: 750) in Crawford County, Illinois. On August 14, a worker aged 39 years reported to the plant's health unit with a cough lasting 14 days. On the same day, the worker's supervisor aged 50 years visited the health unit for a paroxysmal cough of 3 days' duration and an incident of cough syncope. Both patients were referred to private health-care providers; blood samples from both patients had serologic test results suggestive of recent Bondetella pertusis infection, and CCHD was contacted. On September 18, IDPH and CCHD initiated active surveillance and case investigations. This report summarizes the results of that investigation, which found that during August 1-October 9, pertussis was diagnosed in 15 (10%) of 150 oil refinery workers from two separate operations (n=95) and maintenance (n=55) complexes, who were linked by contact with the ill supervisor. Through enhanced case finding, 24 cases of pertussis, 21 (88%) of which occurred in adults aged > or = 20 years, were identified in this outbreak, underscoring the need to recognize this highly infectious disease in adults and to improve national diagnostic and preventive strategies." }, { "pmid": "27565509", "title": "Healthcare Data Gateways: Found Healthcare Intelligence on Blockchain with Novel Privacy Risk Control.", "abstract": "Healthcare data are a valuable source of healthcare intelligence. Sharing of healthcare data is one essential step to make healthcare system smarter and improve the quality of healthcare service. Healthcare data, one personal asset of patient, should be owned and controlled by patient, instead of being scattered in different healthcare systems, which prevents data sharing and puts patient privacy at risks. Blockchain is demonstrated in the financial field that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we proposed an App (called Healthcare Data Gateway (HGD)) architecture based on blockchain to enable patient to own, control and share their own data easily and securely without violating privacy, which provides a new potential way to improve the intelligence of healthcare systems while keeping patient data private. Our proposed purpose-centric access model ensures patient own and control their healthcare data; simple unified Indicator-Centric Schema (ICS) makes it possible to organize all kinds of personal healthcare data practically and easily. We also point out that MPC (Secure Multi-Party Computing) is one promising solution to enable untrusted third-party to conduct computation over patient data without violating privacy." }, { "pmid": "22865671", "title": "Key principles for a national clinical decision support knowledge sharing framework: synthesis of insights from leading subject matter experts.", "abstract": "OBJECTIVE\nTo identify key principles for establishing a national clinical decision support (CDS) knowledge sharing framework.\n\n\nMATERIALS AND METHODS\nAs part of an initiative by the US Office of the National Coordinator for Health IT (ONC) to establish a framework for national CDS knowledge sharing, key stakeholders were identified. 
Stakeholders' viewpoints were obtained through surveys and in-depth interviews, and findings and relevant insights were summarized. Based on these insights, key principles were formulated for establishing a national CDS knowledge sharing framework.\n\n\nRESULTS\nNineteen key stakeholders were recruited, including six executives from electronic health record system vendors, seven executives from knowledge content producers, three executives from healthcare provider organizations, and three additional experts in clinical informatics. Based on these stakeholders' insights, five key principles were identified for effectively sharing CDS knowledge nationally. These principles are (1) prioritize and support the creation and maintenance of a national CDS knowledge sharing framework; (2) facilitate the development of high-value content and tooling, preferably in an open-source manner; (3) accelerate the development or licensing of required, pragmatic standards; (4) acknowledge and address medicolegal liability concerns; and (5) establish a self-sustaining business model.\n\n\nDISCUSSION\nBased on the principles identified, a roadmap for national CDS knowledge sharing was developed through the ONC's Advancing CDS initiative.\n\n\nCONCLUSION\nThe study findings may serve as a useful guide for ongoing activities by the ONC and others to establish a national framework for sharing CDS knowledge and improving clinical care." }, { "pmid": "16221939", "title": "HL7 Clinical Document Architecture, Release 2.", "abstract": "Clinical Document Architecture, Release One (CDA R1), became an American National Standards Institute (ANSI)-approved HL7 Standard in November 2000, representing the first specification derived from the Health Level 7 (HL7) Reference Information Model (RIM). CDA, Release Two (CDA R2), became an ANSI-approved HL7 Standard in May 2005 and is the subject of this article, where the focus is primarily on how the standard has evolved since CDA R1, particularly in the area of semantic representation of clinical events. CDA is a document markup standard that specifies the structure and semantics of a clinical document (such as a discharge summary or progress note) for the purpose of exchange. A CDA document is a defined and complete information object that can include text, images, sounds, and other multimedia content. It can be transferred within a message and can exist independently, outside the transferring message. CDA documents are encoded in Extensible Markup Language (XML), and they derive their machine processable meaning from the RIM, coupled with terminology. The CDA R2 model is richly expressive, enabling the formal representation of clinical statements (such as observations, medication administrations, and adverse events) such that they can be interpreted and acted upon by a computer. On the other hand, CDA R2 offers a low bar for adoption, providing a mechanism for simply wrapping a non-XML document with the CDA header or for creating a document with a structured header and sections containing only narrative content. The intent is to facilitate widespread adoption, while providing a mechanism for incremental semantic interoperability." } ]
Royal Society Open Science
30109034
PMC6083703
10.1098/rsos.171176
Simultaneous inpainting and denoising by directional global three-part decomposition: connecting variational and Fourier domain-based image processing
We consider the very challenging task of restoring images (i) that have a large number of missing pixels, (ii) whose existing pixels are corrupted by noise, and (iii) that ideally contain both cartoon and texture elements. The combination of these three properties makes this inverse problem a very difficult one. The solution proposed in this manuscript is based on directional global three-part decomposition (DG3PD) (Thai & Gottschlich 2016, EURASIP J. Image Video Process. 2016, 1–20; doi:10.1186/s13640-015-0097-y) with a directional total variation norm, directional G-norm and ℓ∞-norm in the curvelet domain as key ingredients of the model. Image decomposition by DG3PD enables a decoupled inpainting and denoising of the cartoon and texture components. A comparison with existing approaches for inpainting and denoising shows the advantages of the proposed method. Moreover, we regard the image restoration problem from the viewpoint of a Bayesian framework and we discuss the connections between the proposed solution by function space and related image representation by harmonic analysis and pyramid decomposition.
1. Introduction and related work
Image enhancement and image restoration are two superordinate concepts in image processing which encompass a plethora of methods to solve a multitude of important real-world problems [1,2]. Image enhancement has the goal of improving an input image for a specific application, e.g. in areas such as medical image processing, biometric recognition, computer vision, optical character recognition, texture recognition or machine inspection of surfaces [3–5]. Methods for image enhancement can be grouped by the domain in which they perform their operations: images are processed in the spatial domain or Fourier domain, or modified, e.g. in the wavelet or curvelet domain [6]. The types of enhancement methods include contextual filtering, e.g. for fingerprint image enhancement [7–9], contrast enhancement, e.g. by histogram equalization [10], and image super-resolution [11]. Image restoration is connected to the notion that a given input image suffers from degradation and the goal is to restore an ideal version of it. Degradations are caused by various types of noise, missing pixels or blurring and their countermeasures are denoising, inpainting and deblurring. In general, one has to solve a linear or nonlinear inverse problem to reconstruct the ideal image from its given degraded version. Denoising aims to remove noise from an image and denoising methods include total variation (TV) minimization-based approaches [12], the application of non-local means (NL means) [13] or other dictionaries of image patches for smoothing, and adaptive thresholding in the wavelet domain [14]. Inpainting [15] is the filling-in of missing pixels from the available information in the image, and it is applied for scratch removal from scanned photographs, for occlusion filling, for removing objects or persons from images (in image forgery [16] or for special effects), and for filling-in of pixels which were lost during the transmission of an image or left out on purpose for image compression [17]. Deblurring [18] addresses the removal of blurring artefacts and is not the focus of this paper.
Rudin et al. [19] pioneered two-part image decomposition by TV regularization for denoising. Shen & Chan [20] applied TV regularization to image inpainting, called the TV inpainting model, and they also suggested image inpainting by curvature-driven diffusions [21]. Starck et al. [22] defined a model for two-part decomposition based on the dictionary approach. Then, Elad et al. [23] applied this decomposition idea for image inpainting by introducing the indicator function in the ℓ2 norm of the residual; see eqn (6) in [23]. Esedoglu & Shen [24] introduced two inpainting models based on the Mumford–Shah model [25] and its higher order correction—the Mumford–Shah–Euler image model. They also presented numerical computation based on the Γ-convergence approximations [26,27]. Shen et al. [28] proposed image inpainting based on bounded variation and elastica models for non-textured images.
Image inpainting can be an easy or difficult problem depending on the number of missing pixels [21], the complexity of the image content and whether prior knowledge about the image content is available. Methods have been proposed which perform only cartoon inpainting (also referred to as structure inpainting) [20,28,29] or only texture inpainting [30]. Images which consist of both cartoon (structure) and texture components are more challenging to inpaint. Bertalmio et al. [31], Elad et al. [23] and Cai et al.
[32] have proposed methods for inpainting which can handle images with both cartoon (structure) and texture components.
In this paper, we tackle an even more challenging problem. Consider an input image f which has the following three properties:
(i) a large percentage of pixels in f are missing and shall be inpainted,
(ii) the known pixels in f are corrupted by noise,
(iii) f contains both cartoon and texture elements.
The co-occurrence of noise and missing pixels in an image with cartoon and texture components increases the difficulty of both the inpainting problem and the denoising problem. A multitude of methods have been proposed for inpainting and denoising. Existing inpainting methods in the literature typically assume that the non-missing pixels in a given image contain only a small amount of noise or are noise free, and existing methods for denoising typically assume that all pixels of the noisy image are known. The proposed method for solving this challenging problem is inspired by the works of Efros & Leung [30], Bertalmio et al. [31], Vese & Osher [33], Aujol & Chambolle [34], Buades et al. [13] and Elad et al. [23], and is based on the directional global three-part decomposition (DG3PD) [35]. The DG3PD method decomposes an image into three parts: a cartoon image, a texture image and a residual image. The advantages of the DG3PD model lie in the properties which are enforced on the cartoon and texture images. The geometric objects in the cartoon image have a very smooth surface and sharp edges. The texture image contains oscillating patterns on a defined scale which are both smooth and sparse. Recently, the texture image has been applied as a very useful feature for fingerprint segmentation [35–37].
We address the challenging task of simultaneous inpainting and denoising in the following way. The advanced DG3PD model introduced in the next section decomposes a noisy input image f (with missing regions D) into cartoon u, texture v and residual ϵ components. At the same time, the missing regions D of the cartoon component u are interpolated and the available regions of u are denoised, thanks to the multi-directional bounded variation. This effect benefits from the indicator function in the measurement of the residual, i.e. ∥C{χ_Dᶜ · ϵ}∥_ℓ∞ in (2.1). However, the texture v is not interpolated, owing to the ‘cancelling’ effect of this supremum norm on the residual in the unknown regions. Therefore, the texture component v is inpainted and denoised by a dictionary-based approach instead. The DG3PD decomposition drives noise into the residual component ϵ, which is discarded. The reconstruction of the ideal version of f is obtained by summing the inpainted and denoised cartoon and texture components (see figure 1 for a visual overview).
Figure 1. Overview of the DG3PD image inpainting and denoising process.
Moreover, we uncover the link between the calculus of variations [38–40] and filtering in the Fourier domain [41] by analysing the solution of the convex minimization in equation (2.1). Roughly speaking, the solution of the DG3PD inpainting model can be understood as the response of a lowpass filter LP^(ω), a highpass filter HP^(ω) and a bandpass filter BP^(ω) that satisfy the unity condition LP^(ω) + BP^(ω) + HP^(ω) = 1, where ω ∈ [−π, π]² is a coordinate in the Fourier domain. We observe that this decomposition is similar to the wavelet or pyramidal decomposition scheme [42–44]. However, the basis elements obtaining the decomposition, i.e.
the scaling function and frame (or wavelet-like) function, are constructed from discrete differential operators (owing to the discrete setting in minimizing (2.1)), which are referred to as wavelet-like operators in [45]. In particular,
— the scaling function and wavelet-like function for the cartoon u result from the effect of the multi-directional TV norm,
— the scaling function and wavelet-like function that extract the texture v are reconstructed from the effect of the multi-directional G-norm,
— the effect of the ℓ∞ norm ∥C{χ_Dᶜ · ϵ}∥_ℓ∞ is to remove the remaining signal in the known regions of the residual ϵ (due to the duality property of ℓ∞).
We also describe flowcharts to show that the variational approach (i.e. DG3PD inpainting) is a closed-loop pyramidal decomposition, in contrast to open-loop ones such as the wavelet [46] and curvelet [47] transforms; see §7. Through numerical experiments, we observe that the closed-loop filter design obtained by the calculus of variations results in lowpass, highpass and bandpass filters that are ‘unique’ to each image (figure 17). We also analyse the DG3PD inpainting model from the perspective of a Bayesian framework and then define a discrete innovation model for this inverse problem.
This paper is organized as follows. In §2, we describe the DG3PD model for image inpainting and denoising. In §3, we show how to compute the solution of the convex minimization in the DG3PD inpainting problem by the augmented Lagrangian method. In §4, we describe the proposed method for texture inpainting and denoising. In §5, we compare the proposed method with existing approaches (TVL2 inpainting [48,49]); moreover, to strengthen the evaluation of the simultaneous inpainting and denoising effects, we also compare with a non-local, patch-based denoising filter, namely block-matching and three-dimensional filtering (BM3D) [50]. In §6, we consider our inverse problem from a statistical point of view, i.e. within a Bayesian framework, to describe how to select priors for the cartoon u and the texture v. We analyse the relation between the calculus of variations and the traditional pyramid decomposition scheme, e.g. the Gaussian pyramid, in §7. We conclude the study in §8. For more detailed notation and mathematical preliminaries, we refer the reader to [36,51].
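As a concrete reference point for the comparison methods mentioned above, the following is a minimal Python sketch of the classical TV-L2 inpainting/denoising baseline (in the spirit of the TV models cited as [19,48,49]), not of the DG3PD model itself. It fills the missing region by explicit gradient descent on a smoothed total-variation energy with an L2 data-fidelity term restricted to the known pixels; all parameter values are illustrative.

```python
import numpy as np

def tv_inpaint(f, mask, lam=5.0, eps=0.1, tau=0.02, n_iter=2000):
    """Smoothed TV-L2 inpainting baseline (not DG3PD).

    Minimizes  sum sqrt(|grad u|^2 + eps^2) + (lam/2) * sum_{mask==1} (u - f)^2
    by explicit gradient descent. `mask` is 1 on known pixels, 0 on missing ones.
    The step size tau must be small enough for stability (roughly tau < 2 / (8/eps + lam)).
    """
    f = f.astype(float)
    mask = mask.astype(float)
    # initialize the unknown pixels with the mean of the known ones
    u = np.where(mask > 0, f, f[mask > 0].mean())
    for _ in range(n_iter):
        # forward differences with replicated boundary
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        # discrete divergence of the normalized gradient field (backward differences)
        div = (np.diff(px, axis=1, prepend=px[:, :1]) +
               np.diff(py, axis=0, prepend=py[:, :1]))
        # energy gradient: -div(grad u / |grad u|_eps) + lam * mask * (u - f)
        u -= tau * (-div + lam * mask * (u - f))
    return u

if __name__ == "__main__":
    # toy example: a piecewise-constant image with noise and about 60% missing pixels
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64))
    clean[:, 32:] = 1.0
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    mask = (rng.random(clean.shape) > 0.6).astype(float)
    restored = tv_inpaint(noisy * mask, mask)
    print("RMSE:", np.sqrt(np.mean((restored - clean) ** 2)))
```

The DG3PD model discussed above goes beyond this baseline by using directional TV and directional G-norm terms, a curvelet-domain ℓ∞ residual constraint, and a separate dictionary-based treatment of the texture component, which is what the comparison in §5 evaluates.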
[ "21984503", "23014753", "20483687", "18262991", "16238062", "18255537", "18237962", "21478076", "17688213", "15449582", "16370462", "20031498" ]
[ { "pmid": "21984503", "title": "Curved-region-based ridge frequency estimation and curved Gabor filters for fingerprint image enhancement.", "abstract": "Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods." }, { "pmid": "23014753", "title": "Adaptive fingerprint image enhancement with emphasis on preprocessing of data.", "abstract": "This article proposes several improvements to an adaptive fingerprint enhancement method that is based on contextual filtering. The term adaptive implies that parameters of the method are automatically adjusted based on the input fingerprint image. Five processing blocks comprise the adaptive fingerprint enhancement method, where four of these blocks are updated in our proposed system. Hence, the proposed overall system is novel. The four updated processing blocks are: 1) preprocessing; 2) global analysis; 3) local analysis; and 4) matched filtering. In the preprocessing and local analysis blocks, a nonlinear dynamic range adjustment method is used. In the global analysis and matched filtering blocks, different forms of order statistical filters are applied. These processing blocks yield an improved and new adaptive fingerprint image processing method. The performance of the updated processing blocks is presented in the evaluation part of this paper. The algorithm is evaluated toward the NIST developed NBIS software for fingerprint recognition on FVC databases." }, { "pmid": "20483687", "title": "Image super-resolution via sparse representation.", "abstract": "This paper presents a new approach to single-image super-resolution, based on sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low resolution and high resolution image patch pair with respect to their own dictionaries. 
Therefore, the sparse representation of a low resolution image patch can be applied with the high resolution image patch dictionary to generate a high resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle super-resolution with noisy inputs in a more unified framework." }, { "pmid": "18262991", "title": "Adaptive wavelet thresholding for image denoising and compression.", "abstract": "The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms SureShrink (Donoho and Johnstone 1994, 1995; Donoho 1995) most of the time. The second part of the paper attempts to further validate claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with the intention of denoising, and thus achieving simultaneous denoising and compression. Specifically, the zero-zone in the quantization step of compression is analogous to the threshold value in the thresholding function. The remaining coder design parameters are chosen based on a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compression method does indeed remove noise significantly, especially for large noise power. However, it introduces quantization noise and should be used only if bitrate were an additional concern to denoising." }, { "pmid": "16238062", "title": "Image decomposition via the combination of sparse representations and a variational approach.", "abstract": "The separation of image content into semantic parts plays a vital role in applications such as compression, enhancement, restoration, and more. In recent years, several pioneering works suggested such a separation be based on variational formulation and others using independent component analysis and sparsity. This paper presents a novel method for separating images into texture and piecewise smooth (cartoon) parts, exploiting both the variational and the sparsity mechanisms. The method combines the basis pursuit denoising (BPDN) algorithm and the total-variation (TV) regularization scheme. The basic idea presented in this paper is the use of two appropriate dictionaries, one for the representation of textures and the other for the natural scene parts assumed to be piecewise smooth. 
Both dictionaries are chosen such that they lead to sparse representations over one type of image-content (either texture or piecewise smooth). The use of the BPDN with the two amalgamed dictionaries leads to the desired separation, along with noise removal as a by-product. As the need to choose proper dictionaries is generally hard, a TV regularization is employed to better direct the separation process and reduce ringing artifacts. We present a highly efficient numerical scheme to solve the combined optimization problem posed by our model and to show several experimental results that validate the algorithm's performance." }, { "pmid": "18255537", "title": "Filling-in by joint interpolation of vector fields and gray levels.", "abstract": "A variational approach for filling-in regions of missing data in digital images is introduced. The approach is based on joint interpolation of the image gray levels and gradient/isophotes directions, smoothly extending in an automatic fashion the isophote lines into the holes of missing data. This interpolation is computed by solving the variational problem via its gradient descent flow, which leads to a set of coupled second order partial differential equations, one for the gray-levels and one for the gradient orientations. The process underlying this approach can be considered as an interpretation of the Gestaltist's principle of good continuation. No limitations are imposed on the topology of the holes, and all regions of missing data can be simultaneously processed, even if they are surrounded by completely different structures. Applications of this technique include the restoration of old photographs and removal of superimposed text like dates, subtitles, or publicity. Examples of these applications are given. We conclude the paper with a number of theoretical results on the proposed variational approach and its corresponding gradient descent flow." }, { "pmid": "18237962", "title": "Simultaneous structure and texture image inpainting.", "abstract": "An algorithm for the simultaneous filling-in of texture and structure in regions of missing image information is presented in this paper. The basic idea is to first decompose the image into the sum of two functions with different basic characteristics, and then reconstruct each one of these functions separately with structure and texture filling-in algorithms. The first function used in the decomposition is of bounded variation, representing the underlying image structure, while the second function captures the texture and possible noise. The region of missing information in the bounded variation image is reconstructed using image inpainting algorithms, while the same region in the texture image is filled-in with texture synthesis techniques. The original image is then reconstructed adding back these two sub-images. The novel contribution of this paper is then in the combination of these three previously developed components, image decomposition with inpainting and texture synthesis, which permits the simultaneous use of filling-in algorithms that are suited for different image characteristics. Examples on real images show the advantages of this proposed approach." }, { "pmid": "21478076", "title": "Steerable pyramids and tight wavelet frames in L2(R(d)).", "abstract": "We present a functional framework for the design of tight steerable wavelet frames in any number of dimensions. 
The 2-D version of the method can be viewed as a generalization of Simoncelli's steerable pyramid that gives access to a larger palette of steerable wavelets via a suitable parametrization. The backbone of our construction is a primal isotropic wavelet frame that provides the multiresolution decomposition of the signal. The steerable wavelets are obtained by applying a one-to-many mapping (Nth-order generalized Riesz transform) to the primal ones. The shaping of the steerable wavelets is controlled by an M×M unitary matrix (where M is the number of wavelet channels) that can be selected arbitrarily; this allows for a much wider range of solutions than the traditional equiangular configuration (steerable pyramid). We give a complete functional description of these generalized wavelet transforms and derive their steering equations. We describe some concrete examples of transforms, including some built around a Mallat-type multiresolution analysis of L(2)(R(d)), and provide a fast Fourier transform-based decomposition algorithm. We also propose a principal-component-based method for signal-adapted wavelet design. Finally, we present some illustrative examples together with a comparison of the denoising performance of various brands of steerable transforms. The results are in favor of an optimized wavelet design (equalized principal component analysis), which consistently performs best." }, { "pmid": "17688213", "title": "Image denoising by sparse 3-D transform-domain collaborative filtering.", "abstract": "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2-D image fragments (e.g., blocks) into 3-D data arrays which we call \"groups.\" Collaborative filtering is a special procedure developed to deal with these 3-D groups. We realize it using the three successive steps: 3-D transformation of a group, shrinkage of the transform spectrum, and inverse 3-D transformation. The result is a 3-D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality." }, { "pmid": "15449582", "title": "Region filling and object removal by exemplar-based image inpainting.", "abstract": "A new algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way. 
In the past, this problem has been addressed by two classes of algorithms: 1) \"texture synthesis\" algorithms for generating large image regions from sample textures and 2) \"inpainting\" techniques for filling in small image gaps. The former has been demonstrated for \"textures\"--repeating two-dimensional patterns with some stochasticity; the latter focus on linear \"structures\" which can be thought of as one-dimensional patterns, such as lines and object contours. This paper presents a novel and efficient algorithm that combines the advantages of these two approaches. We first note that exemplar-based texture synthesis contains the essential process required to replicate both texture and structure; the success of structure propagation, however, is highly dependent on the order in which the filling proceeds. We propose a best-first algorithm in which the confidence in the synthesized pixel values is propagated in a manner similar to the propagation of information in inpainting. The actual color values are computed using exemplar-based synthesis. In this paper, the simultaneous propagation of texture and structure information is achieved by a single, efficient algorithm. Computational efficiency is achieved by a block-based sampling process. A number of examples on real and synthetic images demonstrate the effectiveness of our algorithm in removing large occluding objects, as well as thin scratches. Robustness with respect to the shape of the manually selected target region is also demonstrated. Our results compare favorably to those obtained by existing techniques." }, { "pmid": "16370462", "title": "The contourlet transform: an efficient directional multiresolution image representation.", "abstract": "The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a \"true\" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information. The main challenge in exploring geometry in images comes from the discrete nature of the data. Thus, unlike other approaches, such as curvelets, that first develop a transform in the continuous domain and then discretize for sampled data, our approach starts with a discrete-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using nonseparable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and, thus, it is named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires an order N operations for N-pixel images. Furthermore, we establish a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multiresolution analysis framework. We show that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications. 
Index Terms-Contourlets, contours, filter banks, geometric image processing, multidirection, multiresolution, sparse representation, wavelets." }, { "pmid": "20031498", "title": "Wavelet steerability and the higher-order Riesz transform.", "abstract": "Our main goal in this paper is to set the foundations of a general continuous-domain framework for designing steerable, reversible signal transformations (a.k.a. frames) in multiple dimensions ( d >or= 2). To that end, we introduce a self-reversible, Nth-order extension of the Riesz transform. We prove that this generalized transform has the following remarkable properties: shift-invariance, scale-invariance, inner-product preservation, and steerability. The pleasing consequence is that the transform maps any primary wavelet frame (or basis) of [Formula: see text] into another \"steerable\" wavelet frame, while preserving the frame bounds. The concept provides a functional counterpart to Simoncelli's steerable pyramid whose construction was primarily based on filterbank design. The proposed mechanism allows for the specification of wavelets with any order of steerability in any number of dimensions; it also yields a perfect reconstruction filterbank algorithm. We illustrate the method with the design of a novel family of multidimensional Riesz-Laplace wavelets that essentially behave like the N th-order partial derivatives of an isotropic Gaussian kernel." } ]
Frontiers in Psychology
30135668
PMC6092602
10.3389/fpsyg.2018.01392
Confronting a Paradox: A New Perspective of the Impact of Uncertainty in Suspense
Suspense is a key narrative issue in terms of emotional gratifications. Reactions to this type of entertainment are positively related to enjoyment, having a significant impact on the audience's immersion and suspension of disbelief. Regarding computational modeling of this feature, some automatic storytelling systems include limited implementations of a suspense management system in their core. Interest in this subject within the area of creativity has drawn on different definitions from fields such as narratology and the film industry, as well as on several proposals of its constituent features. Among these features, uncertainty is one of the most discussed in terms of impact and necessity: while many authors affirm that uncertainty is essential to evoke suspense, others limit or reject its influence. Furthermore, the paradox of suspense reflects the problem of including uncertainty as a required component in suspense creation systems. Given the need to contrast the effects of uncertainty in order to compute a general model for automatic storytelling systems, we conducted an experiment measuring the suspense experienced by a group of subjects who read a story. One group of subjects was told the ending of the story in advance, while the members of the other group experienced the same story in chronological order. Both the subjects' reported suspense and their physiological responses were gathered and analyzed. The results provide evidence that uncertainty affects the emotional response of readers, but independently and in a different form than suspense does. This will help to propose a model in which uncertainty is processed separately, as management of the amount of knowledge about the outcome available to the spectator, acting as a control signal that modulates the input features rather than entering directly into the computation of suspense.
2. Related workIn this section we review different approaches to uncertainty in suspense and several automatic storytelling systems focused on suspenseful story generation.2.1. Discussions about uncertainty as a factor of suspenseUncertainty refers to the state of an organism that lacks information about whether, where, when, how, or why an event has occurred or will occur (Knight, 1921). It has been defined as a lack of information about an event and has been characterized as an aversive state that people are motivated to reduce (Bar-Anan et al., 2009, p. 123) or –less often– prolong when it follows a positive event (Wilson et al., 2005, p. 5). It is related to a state of curiosity in which people desire more information about something, a desire that produces pleasure only when it is satisfied (Lowenstein, 2011, p. 75).For its part, suspense is broadly described as an effect of anticipation (Nomikos et al., 1968; de Beaugrande, 1982; Carroll, 1990; de Wied, 1995; Mikos, 1996; Wulff, 1996; Yanal, 1996; Prieto-Pablos, 1998; Vorderer and Knobloch, 2000; Caplin and Leahy, 2001; Allen, 2007; O'Neill, 2013). In a suspenseful passage, the reader expects or anticipates the outcome for the protagonist (Iwata, 2009, p. 30), and this state remains until the presentation of the outcome event (de Wied, 1995, p. 111). Beyond this, the specific definitions and components of suspense vary according to each author's point of view.The existing academic literature provides several definitions that discuss the influence of uncertainty on suspense. Ortony et al. (1990, p. 131) affirm that, along with fear and hope, a "cognitive state of uncertainty" is one of the three components of suspense. For Zillmann (1991, p. 283), suspense is conceptualized as the "experience of uncertainty regarding the outcome of a potentially hostile confrontation." Perron (2004, p. 134) argues that the notion of uncertainty is, "without a doubt," at the core of suspense: when a danger or threat is revealed and you are sure of the situation's outcome, there is no suspense. Iwata (2009, p. 36) points to a relation between increasing the reader's uncertainty and inducing suspense. Madrigal et al. (2011, p. 261) also note that uncertainty over how an episode will end is a core ingredient of suspense. Likewise, Knight and McKnight (1999, p. 108) claim that "suspense relies upon the audience's strong sense of uncertainty about how events will play out." For Khrypko and Andreae (2011, p. 5:2), the key element in suspense is uncertainty about which of the possible outcomes is going to occur when there is a balance between desired and non-desired outcomes. Along with this, O'Neill (2013, p. 9) affirms that the degree of suspense is correlated with the reader's uncertainty over the means of escape for a hero. Abbott (2008, p. 242) defines suspense as "uncertainty (together with the desire to diminish it) about how the story will develop," linking its resolution with some degree of surprise. A similar definition is proposed by Carroll (1996b, p. 101), who argues that classical suspense implies asking "what happens next?" Analogously, Frome and Smuts (2004, p. 16) assert that, while suspense depends on something being at stake, if there is no uncertainty, then there can be no suspense. Lauteren (2002, p. 219) affirms that "the element of suspense is created through the uncertainty of its outcome." For Wulff (1996, p.
7), suspense comes from uncertainty –about characters' roles, intentions, etc., which he includes in the concept of anticipation–, as the degree of probability with which the story can develop in one direction or another can be calculated. Prieto-Pablos (1998, p. 100) describes a type of suspense which is the consequence of our cognitive response to conditions of uncertainty.All these definitions claim that suspense requires uncertainty about a particular outcome –on the part of the audience–, where the outcome is significantly desirable or undesirable (O'Neill and Riedl, 2014, p. 944).Conversely, some authors question uncertainty as a factor implied in suspense. Hitchcock himself implicitly questioned this feature, observing that the key feature of suspense is that the audience be aware of the anticipated outcome: "in the usual form of suspense it is indispensable that the public be made perfectly aware of all of the facts involved" (Truffaut, 1985, p. 72). Other observations support this view. Burget (2014, p. 45) affirms that suspense is a fear emotion about an outcome, and that spectators can fear the outcome of a situation even when they already know it. Smuts (2008, p. 284) links uncertainty with surprise, and claims that "surprise is clearly not involved in all or even most cases of suspense." Moreover, based on an example of Walton (1978) in which a child felt suspense repeatedly even after having memorized the whole story, Gerrig (1997, p. 168) also questions the role of uncertainty. Like him, other authors who consider uncertainty as part of suspense also cast doubt on its level of influence. For instance, Hoeken and van Vliet (2000) do not appear to take uncertainty about the story's outcome to be so vital to suspense creation, considering other narrative techniques more significant (Iwata, 2009, p. 29). Zillmann (1996, p. 102) points out that uncertainty is not introduced by explicit description, but rather introduced implicitly by the suggestion of a number of possible negative outcomes. That could mean that, taken together with the fact that suspense is not maximized when the uncertainty is maximized2 (van Vught and Schott, 2012, p. 95), uncertainty could be overvalued as a factor in the production of suspense (O'Neill, 2013, p. 10). In the same vein, Frome and Smuts (2004, p. 17) add that "higher uncertainty might make the scene more suspenseful, but if enough is at stake, you can have suspense even with a likely desirable outcome." Ryan (2001, p. 180) argues that suspense requires, in addition to empathy with the character, that the audience perceive different potential outcomes of the situation, even though there will be uncertainty about which of the outcomes will occur; however, the more potential outcomes the situation presents, the weaker the suspense is. Oliver and Sanders (2004, p. 251) argue that suspense is related to the impression that the protagonist's suffering is very likely –uncertainty is involved but at a low degree– and the film that they are considering ultimately shows the protagonist escaping. Additionally, de Wied (1995, p. 113) proposes that suspense may be more intense the higher the viewer's subjective certainty about when in time the outcome event will occur –keeping viewers for one or two seconds in a heightened state of uncertainty may add to suspense–, except in instances of total subjective certainty.
He defined the experience of suspense as an anticipatory stress reaction, prompted by an initiating event in the discourse structure and terminated by the actual presentation of the harmful outcome event, highlighting the duration of this anticipation as an essential factor (p. 111).None of the experimental approaches to the matter seems to resolve the controversy entirely. For instance, in the experiment of Comisky and Bryant (1982) a "balanced collision" between uncertainty and suspense could be expected; instead, low levels of perceived outcome-uncertainty produced high levels of suspense up to a certain point, at which suspense seems to decrease to its lowest value (Comisky and Bryant, 1982, p. 57). Comparable results were obtained by Epstein and Roupenian (1970), who found that a probability of 5% or less evokes the highest psychological responses. Along with these authors, Zillmann (1996, p. 208) argues that suspense increases as uncertainty decreases, down to a minimum of uncertainty just before total certainty. Likewise, Iwata's (2009) experiments on creating suspense and surprise in short literary fiction conclude that, for a narrative episode to be suspenseful, a state of uncertainty must be sustained for a certain period —or space— in the story, whereas the duration of this sustainment is not easily definable in a measurable way (Iwata, 2009, p. 136, 139). However, the author comes to define "uncertainty" as "delay in showing the resolution" (p. 174), which can differ from the notion of unawareness used by other studies. Nevertheless, Niemelä (1969) and Breznitz (2013) detected an increase in heart rate as the probability of success increased. A third group of studies —the experiments of Cantor et al. (1984) and Monat et al. (1972)— shows that knowledge about an upcoming frightening event does not affect the "emotional defenses" of the audience. Conversely, subjects who were warned reported higher fright than those who were not warned, although anxiety seemed unaffected by this forewarning (Cantor et al., 1984, p. 23, 30). Likewise, in a study entitled "Suspense is the Absence of Uncertainty" based on non-fictional texts, Gerrig (1989, p. 645, 646) argues that suspense does not need uncertainty, but arises because an audience repeatedly immersed in an episode —even a repeated one— would fail to seek out appropriate information in long-term memory. Based on a similar experiment, Hoeken and van Vliet (2000, p. 284, 286) conclude that, apparently, uncertainty about a story's outcome is not a prerequisite for the story to be suspenseful and, consequently, suspense is not simply the result of uncertainty about the outcome.Summing up, the relationship between the degree of perceived uncertainty about the outcome and the amount of experienced suspense is not clear (Comisky and Bryant, 1982, p. 51). Current theories of suspense do not include a robust account of varying probability, and there is no consensus on the relation between probability and uncertainty (Guidry, 2004, p. 131).2.2. The paradox of suspenseIn addition to this debate, the inclusion of uncertainty as a factor of suspense leads to an apparent inconsistency. Yanal (1996, p. 148) coined the concept of the paradox of suspense to describe this fact. The inconsistency becomes apparent when observing the reactions of spectators exposed to a narrative more than once, referred to as repeaters (p. 147; Gerrig, 1997, p.
168).In brief, the paradox of suspense can be explained like this: (i) repeaters experience suspense regarding a certain narrative's outcome; (ii) repeaters are certain of what that outcome is; (iii) suspense requires uncertainty. Further, these points were re-written and classified as follows (Uidhir, 2011a, p. 122): (i) suspense requires uncertainty –Uncertainty Premise–; (ii) knowledge of a story's outcome precludes uncertainty –Knowledge Preclusion Premise–; (iii) we feel suspense in response to some narratives even when we have knowledge of the outcome –Repeater Suspense Premise–. In the words of Smuts (2008, p. 282): "If uncertainty is integral to the creation of suspense, then how is it that some films can still be suspenseful on repeated viewings?"Directly or not, there have so far been many attempts to solve this paradox, either from a —mainly— theoretical (Brewer, 1996; Carroll, 1996a; Prieto-Pablos, 1998; Smuts, 2009; Uidhir, 2011b; Manresa, 2016) or an experimental perspective (Comisky and Bryant, 1982; Iwata, 2009; Klimmt et al., 2009; Ian, 2012), analyzing the real impact of uncertainty on suspense. In all cases, two workable options are proposed to resolve the paradox of suspense (Uidhir, 2011b, p. 163): deny the necessity of uncertainty or deny repeater suspense. Although some of these points have been presented above, the main theories are summarized next.Firstly, Yanal (1996) denies the existence of the paradox by rejecting repeater suspense. In this view, seeing a potentially suspenseful scene again does not evoke "the same as feeling suspense," but "a certain quality perhaps easily misidentified as suspense, namely anticipation" (Yanal, 1996, p. 157). On the basis that uncertainty is required for suspense, Yanal argues that, if true repeaters experience any kind of emotional response to a suspenseful situation, their emotions must be of another kind (Prieto-Pablos, 1998, p. 109). Thus, he classifies re-readers who seem to be experiencing suspense into one of two categories: either they have forgotten some aspects of the story —in which case they are not really repeaters, so he also rejects the paradox explanation of Gerrig (1989)—, or, as said, what they are experiencing is some combination of other emotions –such as the aforementioned anticipation– which do not require uncertainty (Ian, 2012, p. 14). Yanal does not deny that repeaters experience emotions with respect to narratives, only that they do not experience suspense (Gerrig, 1997, p. 170), or at least not any emotion "grounded in uncertainty" (Yanal, 1996, p. 157).On the other hand, it seems clear that recipients can reuse media not only to re-experience the same emotions, but also for other motivations identified by gratification research (Hoffmann, 2006, p. 393). However, there is not enough evidence that this prevents audiences from having the same or nearly the same experience (Burget, 2014, p. 46), which would contradict Yanal's account.For his part, Gerrig differs from Yanal in that he considers that the audience revives some kind of internal representation and reaction, which would argue strongly that Yanal is wrong. Gerrig reuses Yanal's own example of Marion Crane in the shower, in the film Psycho (Stefano, 1959). Gerrig describes how some subset of repeaters would hear their mental voices call out "Get out of the shower!" or "Look out!," which would reflect momentary uncertainty that what the repeaters know to happen does, in fact, happen.
Willing to take these mental voices as evidence, Gerrig sees that knowledge of the outcome produces more rather than less moment-by-moment uncertainty (Gerrig, 1997, p. 171). Accordingly, Gerrig rejects the paradox, affirming that it is the result of what he calls "anomalous suspense," which makes repeaters genuinely re-experience suspense (p. 168). For him, it is an emergent property of ordinary memory processes, suggesting that memory processes actuate an expectation of uniqueness (p. 172), reflecting "a systematic failure of memory processes to produce relevant knowledge as a narrative unfolds" (p. 172). That is, repeaters are not really repeaters in the strict, operative sense, but instead more loosely akin to narratively functional amnesiacs —or operatively offline repeaters— (Uidhir, 2011b, p. 162). Consequently, the audience expects a unique outcome regardless of the circumstances in which it finds itself with respect to the narrative events (Prieto-Pablos, 1998, p. 110).However, Uidhir (2011b, p. 162) points out that Gerrig's account contains some imprecisions. On the one hand, he claims that "in some (but not all cases) and given certain conditions and dispositions" —which is clearly inaccurate—, repeaters when narratively engaged can be sufficiently immersed in or transported by the narrative so as to render their experiences saliently approximate to non-repeater experiences. On the other hand, it may be argued that Gerrig employs far too broad a notion of repeater and so merely substitutes one imprecision for another. Likewise, Carroll (1996a, p. 90) argues that, if re-reading implied uniqueness, it would not be possible to get bored even after a number of repeated similar experiences.Carroll proposes an extended theory of suspense, in which suspense is an emotional response to narrative fiction that requires not just uncertainty but also moral concern for the outcome, an emotion which he suggests readers continue to feel even in the absence of uncertainty (Ian, 2012, p. 14). To solve the paradox, Carroll distinguishes real beliefs from fictional beliefs, where thoughts about the latter can give rise to emotions such as suspense. Thus, when effectively asked by the author of the fiction to imagine —that is, to entertain the thought— that the main character is at risk, the audience appropriately and intelligibly feels concern and suspense (Carroll, 1996a, p. 90). He argues that even if we know that a film will end in a certain way, we can still imagine, while watching it, that it might not end that way. Merely imagining that an event's outcome is uncertain is enough to create suspense (Frome and Smuts, 2004, p. 19).From this point of view, Carroll rejects the Knowledge Preclusion Premise –that is, that knowledge of a story's outcome precludes uncertainty–, because the audience may be "entertaining the mind" with fictional alternatives. Nevertheless, he does not explain how the psychological mechanism that produces this new mental state of uncertainty works, leaving his contribution incomplete and merely theoretical (Manresa, 2016, p. 58). Moreover, Ohler and Nieding (1996, p. 139) question morality as the basis of suspense, considering that a moral concern evoked by a scene is not necessarily a prerequisite for experiencing suspense.Unlike the previous authors, in his desire-frustration theory Smuts (2008, p. 284) rejects the most widely accepted premise of the paradox –the Uncertainty Premise, or the assumption that suspense requires uncertainty–.
Instead, he affirms that suspense arises from the frustrated desire to "jump" into the scene and "help" the characters: "Our desire to make use of the information is frustrated –that is, we want to help, but there is nothing we can do–" (p. 285). Thus, suspense rests on manipulating the narrative information to create emotional situations in which the audience is "forced to entertain the prospect of a narrative outcome which is contrary to the one that is desired" (Allen, 2007, p. 38), and the frustration comes from the inability to influence the narrative (Burget, 2014, p. 49). As uncertainty is not necessary, this would indeed solve the paradox.However, Smuts himself objects to his own theory when applied to real cases –such as a lottery–, for which he affirms that uncertainty is essential for suspense. Accordingly, he notes that "uncertainty is not necessary for all cases of suspense, that one can feel suspense on some occasions without uncertainty" (Smuts, 2008, p. 287).On the basis of the above, other proposals have been made. For example, Manresa (2016, p. 63) explains repeated suspense as the result of a process of re-sympathizing with the characters, and Prieto-Pablos (1998, p. 111) affirms that the paradox can be explained by taking into consideration the potential variability of the different emotions that are involved in a narrative experience.In the words of Beecher (2007, p. 258), all proposals to explain the paradox "are ingenious but not entirely convincing." Indeed, none of these proposals is entirely free of possible inconsistencies. Thus, the paradox of suspense remains an unresolved matter.2.3. Uncertainty in automatic storytellingThis section summarizes how automatic story generation systems address uncertainty.MEXICA (Pérez y Pérez, 2007) is a program that generates short stories about the old inhabitants of what today is Mexico City (p. 2). These stories are represented as clusters of emotional links and tensions between characters, progressing during development. MEXICA assumes that a story is interesting when it includes degradation-improvement processes –i.e., conflict and resolution– (p. 4). Throughout the story, emotional links among the characters vary as a result of their interactions. However, uncertainty is not explicitly treated in the system.MINSTREL (Turner, 2014) is a complex program that writes short stories about Arthurian legends, implemented on a case-based problem-solver where past cases are stored in an episodic memory (Pérez y Pérez and Sharples, 2004, p. 4). MINSTREL recognizes narrative tension plots and tries to increase the suspense by adding more emotionally charged scenes, storing a simple ranking which tells when such inclusion is reasonable (Turner, 2014, p. 123). Just as in MEXICA, there is no specific implementation of uncertainty.IDtension (Szilas, 2003) is a drama project that aims to demonstrate the possibility of combining narrative and interactivity. Unlike approaches based on characters' chances or the course of the actions, it conceives stories in terms of narrative properties –conflict or suspense–. It does not include uncertainty as an explicit part of the computational model either, but rather as part of a general management of "known" information (p. 768).Another initiative is Suspenser (Cheong and Young, 2006), which creates stories with the objective of increasing the reader's suspense.
It provides an intermediate layer between the fabula generation and the discourse generation, which selects the steps of the plot according to their importance for the final goal. To do so, and based on the assumption of Gerrig and Bernardo (1994)3, Suspenser uses a set of heuristics grounded in the number of paths available for the character to reach its goal, considering a probability of the protagonist's success of 1/100 to be optimal (Cheong, 2007, p. 59). To meet the uncertainty condition of suspense, the suspense measurer first checks whether the reader model would be uncertain about the goal state using the planning space4. The model therefore returns certainty when the planning space contains either only complete plans –absolute success– or only failed plans –absolute failure– (Cheong and Young, 2015, p. 44).Also based on Gerrig & Bernardo's work, Dramatis (O'Neill, 2013, p. 5) proposes an implementation of a system to evaluate suspense in stories that utilizes a memory model and a goal selection process, assuming that the reader, when faced with a narrative, evaluates the set of possible future states in order to find the best option for the protagonist, which seems to assume a treatment of uncertainty. Indeed, the requisite state of uncertainty is implicitly represented as the reduction in possible escapes for the protagonist from the negative outcome (p. 31).In summary, only one of the analyzed systems offers an explicit computational treatment of uncertainty. A possible explanation is that executing the plan in the absence of uncertainty guarantees that the goal state will become true, which allows for better control of the results. The only reason why the planner would have any uncertainty as to the true state of the story world would be if the human author specified that there was uncertainty (Riedl, 2004, p. 48, 120). In any case, we cannot rule out another reason: that there is no general agreement on the impact of uncertainty on suspense, as we have recounted.
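To make the planning-space check described above more concrete, the following is a minimal, purely illustrative sketch (not the actual Suspenser or Dramatis code): it treats the reader model's planning space as a list of candidate plans, reports uncertainty only when successful and failed plans coexist, and scores suspense by how close the estimated success probability is to a small target value such as the 1/100 figure mentioned above. All names and the scoring formula are assumptions made for illustration.

```python
# Illustrative sketch only; not the implementation of Suspenser or Dramatis.
from dataclasses import dataclass
from typing import List


@dataclass
class CandidatePlan:
    """One plan in the reader model's planning space for the protagonist's goal."""
    achieves_goal: bool  # True = complete plan (success), False = failed plan


def reader_is_uncertain(planning_space: List[CandidatePlan]) -> bool:
    """Certainty holds when the space contains only successes or only failures;
    any mix of the two counts as uncertainty about the goal state."""
    outcomes = {plan.achieves_goal for plan in planning_space}
    return len(outcomes) > 1


def suspense_score(planning_space: List[CandidatePlan],
                   target_success_prob: float = 0.01) -> float:
    """Crude heuristic: zero suspense under certainty; otherwise the score rises
    as the estimated success probability approaches the small target value."""
    if not planning_space or not reader_is_uncertain(planning_space):
        return 0.0
    p_success = sum(p.achieves_goal for p in planning_space) / len(planning_space)
    return max(0.0, 1.0 - abs(p_success - target_success_prob) / (1.0 - target_success_prob))


if __name__ == "__main__":
    # 5 successful plans out of 100: uncertain, and close to the 1/100 target.
    space = [CandidatePlan(False)] * 95 + [CandidatePlan(True)] * 5
    print(reader_is_uncertain(space), round(suspense_score(space), 3))
```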
[ "19186925", "7962581", "15050778", "5485942", "25304498", "19025464", "24416639", "21752003", "5081195", "5399421", "5644483", "15631571" ]
[ { "pmid": "19186925", "title": "The feeling of uncertainty intensifies affective reactions.", "abstract": "Uncertainty has been defined as a lack of information about an event and has been characterized as an aversive state that people are motivated to reduce. The authors propose an uncertainty intensification hypothesis, whereby uncertainty during an emotional event makes unpleasant events more unpleasant and pleasant events more pleasant. The authors hypothesized that this would happen even when uncertainty is limited to the feeling of \"not knowing,\" separable from a lack of information. In 4 studies, the authors held information about positive and negative film clips constant while varying the feeling of not knowing by having people repeat phrases connoting certainty or uncertainty while watching the films. As predicted, the subjective feeling of uncertainty intensified people's affective reactions to the film clips." }, { "pmid": "7962581", "title": "Measuring emotion: the Self-Assessment Manikin and the Semantic Differential.", "abstract": "The Self-Assessment Manikin (SAM) is a non-verbal pictorial assessment technique that directly measures the pleasure, arousal, and dominance associated with a person's affective reaction to a wide variety of stimuli. In this experiment, we compare reports of affective experience obtained using SAM, which requires only three simple judgments, to the Semantic Differential scale devised by Mehrabian and Russell (An approach to environmental psychology, 1974) which requires 18 different ratings. Subjective reports were measured to a series of pictures that varied in both affective valence and intensity. Correlations across the two rating methods were high both for reports of experienced pleasure and felt arousal. Differences obtained in the dominance dimension of the two instruments suggest that SAM may better track the personal response to an affective stimulus. SAM is an inexpensive, easy method for quickly assessing reports of affective response in many contexts." }, { "pmid": "15050778", "title": "Gender differences in implicit and explicit memory for affective passages.", "abstract": "Thirty-two participants were administered 4 verbal tasks, an Implicit Affective Task, an Implicit Neutral Task, an Explicit Affective Task, and an Explicit Neutral Task. For the Implicit Tasks, participants were timed while reading passages aloud as quickly as possible, but not so quickly that they did not understand. A target verbal passage was repeated three times, and alternated with other previously unread passages. The Implicit Affective and Neutral passages had strong affective or neutral content, respectively. The Explicit Tasks were administered at the end of testing, and consisted of multiple choice questions regarding the passages. Priming effects in terms of more rapid reading speed for the target compared to non-target passages were seen for both the Implicit Affective Task and the Implicit Neutral Task. Overall reading speed was faster for the passages with neutral compared to affective content, consistent with studies of the emotional Stroop effect. For the Explicit memory tasks, overall performance was better on the items from the repeated passage, and on the Affective compared to Neutral Task. The male subjects showed greater priming for affective material than female subjects, and a greater gain than female subjects in explicit memory for affective compared to neutral material." 
}, { "pmid": "25304498", "title": "Fiction feelings in Harry Potter: haemodynamic response in the mid-cingulate cortex correlates with immersive reading experience.", "abstract": "Immersion in reading, described as a feeling of 'getting lost in a book', is a ubiquitous phenomenon widely appreciated by readers. However, it has been largely ignored in cognitive neuroscience. According to the fiction feeling hypothesis, narratives with emotional contents invite readers more to be empathic with the protagonists and thus engage the affective empathy network of the brain, the anterior insula and mid-cingulate cortex, than do stories with neutral contents. To test the hypothesis, we presented participants with text passages from the Harry Potter series in a functional MRI experiment and collected post-hoc immersion ratings, comparing the neural correlates of passage mean immersion ratings when reading fear-inducing versus neutral contents. Results for the conjunction contrast of baseline brain activity of reading irrespective of emotional content against baseline were in line with previous studies on text comprehension. In line with the fiction feeling hypothesis, immersion ratings were significantly higher for fear-inducing than for neutral passages, and activity in the mid-cingulate cortex correlated more strongly with immersion ratings of fear-inducing than of neutral passages. Descriptions of protagonists' pain or personal distress featured in the fear-inducing passages apparently caused increasing involvement of the core structure of pain and affective empathy the more readers immersed in the text. The predominant locus of effects in the mid-cingulate cortex seems to reflect that the immersive experience was particularly facilitated by the motor component of affective empathy for our stimuli from the Harry Potter series featuring particularly vivid descriptions of the behavioural aspects of emotion." }, { "pmid": "19025464", "title": "Experimental evidence for suspense as determinant of video game enjoyment.", "abstract": "Based on theoretical assumptions from film psychology and their application to video games, the hypothesis is tested that suspense is a major factor in video game enjoyment. A first-person shooter game was experimentally manipulated to create either a low level or a high level of suspense. Sixty-three participants were randomly assigned to experimental conditions; enjoyment was assessed after playing by a 10-item rating scale. Results support the assumption that suspense is a driver of video game enjoyment." }, { "pmid": "24416639", "title": "Story Immersion of Videogames for Youth Health Promotion: A Review of Literature.", "abstract": "This article reviews research in the fields of psychology, literature, communication, human-computer interaction, public health, and consumer behavior on narrative and its potential relationships with videogames and story immersion. It also reviews a narrative's role in complementing behavioral change theories and the potential of story immersion for health promotion through videogames. Videogames have potential for health promotion and may be especially promising when attempting to reach youth. An understudied characteristic of videogames is that many contain a narrative, or story. Story immersion (transportation) is a mechanism through which a narrative influences players' cognition, affect, and, potentially, health behavior. 
Immersion promotes the suspension of disbelief and the reduction of counterarguments, enables the story experience as a personal experience, and creates the player's deep affection for narrative protagonists. Story immersion complements behavioral change theories, including the Theory of Planned Behavior, Social Cognitive Theory, and Self-Determination Theory. Systematic investigations are needed to realize the powerful potential of interactive narratives within theory-driven research." }, { "pmid": "21752003", "title": "The effect of manipulating context-specific information on perceptual-cognitive processes during a simulated anticipation task.", "abstract": "We manipulated contextual information in order to examine the perceptual-cognitive processes that support anticipation using a simulated cricket-batting task. Skilled (N= 10) and less skilled (N= 10) cricket batters responded to video simulations of opponents bowling a cricket ball under high and low contextual information conditions. Skilled batters were more accurate, demonstrated more effective search behaviours, and provided more detailed verbal reports of thinking. Moreover, when they viewed their opponent multiple times (high context), they reduced their mean fixation time. All batters improved performance and altered thought processes when in the high context, compared to when they responded to their opponent without previously seeing them bowl (low context). Findings illustrate how context influences performance and the search for relevant information when engaging in a dynamic, time-constrained task." }, { "pmid": "15631571", "title": "The pleasures of uncertainty: prolonging positive moods in ways people do not anticipate.", "abstract": "The authors hypothesized that uncertainty following a positive event prolongs the pleasure it causes and that people are generally unaware of this effect of uncertainty. In 3 experimental settings, people experienced a positive event (e.g., received an unexpected gift of a dollar coin attached to an index card) under conditions of certainty or uncertainty (e.g., it was easy or difficult to make sense of the text on the card). As predicted, people's positive moods lasted longer in the uncertain conditions. The results were consistent with a pleasure paradox, whereby the cognitive processes used to make sense of positive events reduce the pleasure people obtain from them. Forecasters seemed unaware of this paradox; they overwhelmingly preferred to be in the certain conditions and tended to predict that they would be in better moods in these conditions." } ]
Scientific Reports
30127447
PMC6102227
10.1038/s41598-018-30471-0
Feedback Between Behavioral Adaptations and Disease Dynamics
We study the feedback processes between individual behavior, disease prevalence, interventions, and social networks during an influenza pandemic when a limited stockpile of antivirals is shared between the private and the public sectors. An economic model that uses prevalence-elastic demand for interventions is combined with a detailed social network and a disease propagation model to understand the feedback mechanism between epidemic dynamics, market behavior, individual perceptions, and the social network. An urban and a rural region are simulated to assess the robustness of results. Results show that an optimal split between the private and public sectors can be reached to contain the disease, but the accessibility of antivirals from the private sector is skewed towards the richest income quartile. Also, larger allocations to the private sector result in wastage, where individuals who do not need the drug are able to purchase it while those who need it cannot afford it. Disease prevalence increases with household size and total contact time but not with degree in the social network, whereas wastage of antivirals decreases with degree and contact time. The best utilization of drugs is achieved when they are used by individuals with high contact time, who tend to be the school-aged children of large families.
Related WorkThe US Homeland Security Council on National Strategy for Pandemic Influenza4 states that "Private stockpiles, in coordination with public health stockpiles, would extend protection more broadly than could be achieved through the public sector alone and improve the ability to achieve the national pandemic response goals of mitigating disease, suffering, and death, and minimizing impacts on the economy and functioning of society." As the Centers for Disease Control and Prevention (CDC) considers alternative distribution methods for antivirals through private systems during a pandemic, an Association of State and Territorial Health Officials (ASTHO) report3 highlights the need to answer the following question: how should the CDC decide to break down the stockpile among private and public distributors? This study takes a step toward answering this question.Previous researchers12–17 have considered many related aspects of this problem. For example, Althouse et al.15 consider market-based distribution of antivirals during an influenza pandemic but only for treatment purposes. With their parameter settings, a market-based distribution results in over- or under-use of antivirals relative to the efficient level. Too few people buy them if required to purchase in advance of the pandemic, and too many people buy them if allowed to purchase at the time of infection. Goldstein et al.8 examine the benefits of pre-dispensing antivirals under a variety of scenarios, including the case when demand exceeds supply.Our research focuses on building a detailed individual-based causal model and a microeconomic framework to study emergent macro behaviors and disease dynamics. It considers both market-based and public-sector-based distribution of antivirals, and both treatment and preventive use of antivirals, along with the behavioral adaptations of individuals during the course of the pandemic.A report by the US Department of Health and Human Services discusses conditions under which interested employers can stockpile antivirals2. The Institute of Medicine report recommends coordination and communication with the private sector on dispensing and distributing antivirals1. Wu et al.18 study the possible benefits of multidrug strategies over monotherapy for reducing the impact of antiviral resistance. Acemoglu et al.19 provide a general discussion of the importance of using both markets and governments in resource allocation.Other researchers have studied similar questions in other contexts. For example, previous research20–23 studies the coevolution of friendship and smoking behavior under a variety of scenarios; Adams et al.24 show how edges made through sex-ties and drug-ties differentially contribute to observed network racial segregation; Mitleton-Kelly25 studies the coevolution of intelligent social systems; Hammond et al.26 discuss feedback loops between agri-food, health, disease, and environmental systems; and Epstein et al.27 use an agent-based model of interacting contagion processes, i.e., disease and fear, to study the interaction between behavior and social networks in the event of epidemics.However, none of these studies considers the detailed interactions between market factors, preventive behavior, changes in the social contact network, and epidemic outcomes, and their cause-and-effect relationships with one another. This is the first study to explain the feedback loops between all these components.
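As a rough illustration of the prevalence-elastic demand idea that drives the feedback loop described above, the sketch below models an individual's probability of seeking antivirals on the private market as rising with perceived prevalence and falling with price relative to income. The logistic form, parameter names, and numbers are assumptions for illustration only, not the calibration used in this study.

```python
# Hypothetical sketch of prevalence-elastic demand; not the study's actual model.
import math


def purchase_probability(perceived_prevalence: float,
                         price: float,
                         income: float,
                         prevalence_elasticity: float = 20.0,
                         price_sensitivity: float = 50.0) -> float:
    """Higher perceived prevalence raises demand; a higher price-to-income
    ratio lowers it. Returns a probability in (0, 1)."""
    utility = (prevalence_elasticity * perceived_prevalence
               - price_sensitivity * price / max(income, 1e-9))
    return 1.0 / (1.0 + math.exp(-utility))


# Example: for the same prevalence signal, uptake rises with income,
# one mechanism behind the skewed private-sector access noted in the abstract.
for quarterly_income in (5_000, 15_000, 30_000, 60_000):
    p = purchase_probability(perceived_prevalence=0.05, price=100.0,
                             income=quarterly_income)
    print(quarterly_income, round(p, 3))
```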
[ "21283514", "12447733", "20018681", "17253899", "23237162", "22826247", "19079607", "18332436", "21829625", "21966439", "21339828", "28570660", "27687898", "2061695", "20377412", "21149721", "18230677", "23613721", "21342886" ]
[ { "pmid": "21283514", "title": "Optimizing tactics for use of the U.S. antiviral strategic national stockpile for pandemic influenza.", "abstract": "In 2009, public health agencies across the globe worked to mitigate the impact of the swine-origin influenza A (pH1N1) virus. These efforts included intensified surveillance, social distancing, hygiene measures, and the targeted use of antiviral medications to prevent infection (prophylaxis). In addition, aggressive antiviral treatment was recommended for certain patient subgroups to reduce the severity and duration of symptoms. To assist States and other localities meet these needs, the U.S. Government distributed a quarter of the antiviral medications in the Strategic National Stockpile within weeks of the pandemic's start. However, there are no quantitative models guiding the geo-temporal distribution of the remainder of the Stockpile in relation to pandemic spread or severity. We present a tactical optimization model for distributing this stockpile for treatment of infected cases during the early stages of a pandemic like 2009 pH1N1, prior to the wide availability of a strain-specific vaccine. Our optimization method efficiently searches large sets of intervention strategies applied to a stochastic network model of pandemic influenza transmission within and among U.S. cities. The resulting optimized strategies depend on the transmissability of the virus and postulated rates of antiviral uptake and wastage (through misallocation or loss). Our results suggest that an aggressive community-based antiviral treatment strategy involving early, widespread, pro-rata distribution of antivirals to States can contribute to slowing the transmission of mildly transmissible strains, like pH1N1. For more highly transmissible strains, outcomes of antiviral use are more heavily impacted by choice of distribution intervals, quantities per shipment, and timing of shipments in relation to pandemic spread. This study supports previous modeling results suggesting that appropriate antiviral treatment may be an effective mitigation strategy during the early stages of future influenza pandemics, increasing the need for systematic efforts to optimize distribution strategies and provide tactical guidance for public health policy-makers." }, { "pmid": "12447733", "title": "Zanamivir prophylaxis: an effective strategy for the prevention of influenza types A and B within households.", "abstract": "A double-blind, randomized study of inhaled zanamivir for the prevention of influenza in families was conducted. Once a person with a suspected case of influenza was identified (index patient), treatment of all other household members (contacts) > or =5 years old was initiated. Contacts received either 10 mg zanamivir or placebo inhaled once daily for 10 days. Index patients received relief medication only. In total, 487 households (242 placebo and 245 zanamivir) were enrolled, with 1291 contacts randomly assigned to receive prophylaxis. Four percent of zanamivir versus 19% of placebo households (P<.001) had at least 1 contact who developed symptomatic, laboratory-confirmed influenza, representing 81% protective efficacy (95% confidence interval, 64%-90%). Protective efficacy was similarly high for individuals (82%) and against both influenza types A and B (78% and 85%, respectively, for households). Zanamivir was well tolerated and was effective in preventing influenza types A and B within households where the index patient was not treated." 
}, { "pmid": "20018681", "title": "Evolution in health and medicine Sackler colloquium: a public choice framework for controlling transmissible and evolving diseases.", "abstract": "Control measures used to limit the spread of infectious disease often generate externalities. Vaccination for transmissible diseases can reduce the incidence of disease even among the unvaccinated, whereas antimicrobial chemotherapy can lead to the evolution of antimicrobial resistance and thereby limit its own effectiveness over time. We integrate the economic theory of public choice with mathematical models of infectious disease to provide a quantitative framework for making allocation decisions in the presence of these externalities. To illustrate, we present a series of examples: vaccination for tetanus, vaccination for measles, antibiotic treatment of otitis media, and antiviral treatment of pandemic influenza." }, { "pmid": "17253899", "title": "Modeling the worldwide spread of pandemic influenza: baseline case and containment interventions.", "abstract": "BACKGROUND\nThe highly pathogenic H5N1 avian influenza virus, which is now widespread in Southeast Asia and which diffused recently in some areas of the Balkans region and Western Europe, has raised a public alert toward the potential occurrence of a new severe influenza pandemic. Here we study the worldwide spread of a pandemic and its possible containment at a global level taking into account all available information on air travel.\n\n\nMETHODS AND FINDINGS\nWe studied a metapopulation stochastic epidemic model on a global scale that considers airline travel flow data among urban areas. We provided a temporal and spatial evolution of the pandemic with a sensitivity analysis of different levels of infectiousness of the virus and initial outbreak conditions (both geographical and seasonal). For each spreading scenario we provided the timeline and the geographical impact of the pandemic in 3,100 urban areas, located in 220 different countries. We compared the baseline cases with different containment strategies, including travel restrictions and the therapeutic use of antiviral (AV) drugs. We investigated the effect of the use of AV drugs in the event that therapeutic protocols can be carried out with maximal coverage for the populations in all countries. In view of the wide diversity of AV stockpiles in different regions of the world, we also studied scenarios in which only a limited number of countries are prepared (i.e., have considerable AV supplies). In particular, we compared different plans in which, on the one hand, only prepared and wealthy countries benefit from large AV resources, with, on the other hand, cooperative containment scenarios in which countries with large AV stockpiles make a small portion of their supplies available worldwide.\n\n\nCONCLUSIONS\nWe show that the inclusion of air transportation is crucial in the assessment of the occurrence probability of global outbreaks. The large-scale therapeutic usage of AV drugs in all hit countries would be able to mitigate a pandemic effect with a reproductive rate as high as 1.9 during the first year; with AV supply use sufficient to treat approximately 2% to 6% of the population, in conjunction with efficient case detection and timely drug distribution. For highly contagious viruses (i.e., a reproductive rate as high as 2.3), even the unrealistic use of supplies corresponding to the treatment of approximately 20% of the population leaves 30%-50% of the population infected. 
In the case of limited AV supplies and pandemics with a reproductive rate as high as 1.9, we demonstrate that the more cooperative the strategy, the more effective are the containment results in all regions of the world, including those countries that made part of their resources available for global use." }, { "pmid": "23237162", "title": "Sex, drugs, and race: how behaviors differentially contribute to the sexually transmitted infection risk network structure.", "abstract": "OBJECTIVES\nWe examined how risk behaviors differentially connect a population at high risk for sexually transmitted infections.\n\n\nMETHODS\nStarting from observed networks representing the full risk network and the risk network among respondents only, we constructed a series of edge-deleted counterfactual networks that selectively remove sex ties, drug ties, and ties involving both sex and drugs and a comparison random set. With these edge-deleted networks, we have demonstrated how each tie type differentially contributes to the connectivity of the observed networks on a series of standard network connectivity measures (component and bicomponent size, distance, and transitivity ratio) and the observed network racial segregation.\n\n\nRESULTS\nSex ties are unique from the other tie types in the network, providing wider reach in the network in relatively nonredundant ways. In this population, sex ties are more likely to bridge races than are other tie types.\n\n\nCONCLUSIONS\nInterventions derived from only 1 mode of transmission at a time (e.g., condom promotion or needle exchange) would have different potential for curtailing sexually transmitted infection spread through the population than would attempts that simultaneously address all risk-relevant behaviors." }, { "pmid": "22826247", "title": "A systems science perspective and transdisciplinary models for food and nutrition security.", "abstract": "We argue that food and nutrition security is driven by complex underlying systems and that both research and policy in this area would benefit from a systems approach. We present a framework for such an approach, examine key underlying systems, and identify transdisciplinary modeling tools that may prove especially useful." }, { "pmid": "19079607", "title": "Coupled contagion dynamics of fear and disease: mathematical and computational explorations.", "abstract": "BACKGROUND\nIn classical mathematical epidemiology, individuals do not adapt their contact behavior during epidemics. They do not endogenously engage, for example, in social distancing based on fear. Yet, adaptive behavior is well-documented in true epidemics. We explore the effect of including such behavior in models of epidemic dynamics.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nUsing both nonlinear dynamical systems and agent-based computation, we model two interacting contagion processes: one of disease and one of fear of the disease. Individuals can \"contract\" fear through contact with individuals who are infected with the disease (the sick), infected with fear only (the scared), and infected with both fear and disease (the sick and scared). Scared individuals--whether sick or not--may remove themselves from circulation with some probability, which affects the contact dynamic, and thus the disease epidemic proper. If we allow individuals to recover from fear and return to circulation, the coupled dynamics become quite rich, and can include multiple waves of infection. 
We also study flight as a behavioral response.\n\n\nCONCLUSIONS/SIGNIFICANCE\nIn a spatially extended setting, even relatively small levels of fear-inspired flight can have a dramatic impact on spatio-temporal epidemic dynamics. Self-isolation and spatial flight are only two of many possible actions that fear-infected individuals may take. Our main point is that behavioral adaptation of some sort must be considered." }, { "pmid": "18332436", "title": "Modeling targeted layered containment of an influenza pandemic in the United States.", "abstract": "Planning a response to an outbreak of a pandemic strain of influenza is a high public health priority. Three research groups using different individual-based, stochastic simulation models have examined the consequences of intervention strategies chosen in consultation with U.S. public health workers. The first goal is to simulate the effectiveness of a set of potentially feasible intervention strategies. Combinations called targeted layered containment (TLC) of influenza antiviral treatment and prophylaxis and nonpharmaceutical interventions of quarantine, isolation, school closure, community social distancing, and workplace social distancing are considered. The second goal is to examine the robustness of the results to model assumptions. The comparisons focus on a pandemic outbreak in a population similar to that of Chicago, with approximately 8.6 million people. The simulations suggest that at the expected transmissibility of a pandemic strain, timely implementation of a combination of targeted household antiviral prophylaxis, and social distancing measures could substantially lower the illness attack rate before a highly efficacious vaccine could become available. Timely initiation of measures and school closure play important roles. Because of the current lack of data on which to base such models, further field research is recommended to learn more about the sources of transmission and the effectiveness of social distancing measures in reducing influenza transmission." }, { "pmid": "21829625", "title": "Sensitivity of household transmission to household contact structure and size.", "abstract": "OBJECTIVE\nStudy the influence of household contact structure on the spread of an influenza-like illness. Examine whether changes to in-home care giving arrangements can significantly affect the household transmission counts.\n\n\nMETHOD\nWe simulate two different behaviors for the symptomatic person; either s/he remains at home in contact with everyone else in the household or s/he remains at home in contact with only the primary caregiver in the household. The two different cases are referred to as full mixing and single caregiver, respectively.\n\n\nRESULTS\nThe results show that the household's cumulative transmission count is lower in case of a single caregiver configuration than in the full mixing case. The household transmissions vary almost linearly with the household size in both single caregiver and full mixing cases. However the difference in household transmissions due to the difference in household structure grows with the household size especially in case of moderate flu.\n\n\nCONCLUSIONS\nThese results suggest that details about human behavior and household structure do matter in epidemiological models. The policy of home isolation of the sick has significant effect on the household transmission count depending upon the household size." 
}, { "pmid": "21966439", "title": "Comparing effectiveness of top-down and bottom-up strategies in containing influenza.", "abstract": "This research compares the performance of bottom-up, self-motivated behavioral interventions with top-down interventions targeted at controlling an \"Influenza-like-illness\". Both types of interventions use a variant of the ring strategy. In the first case, when the fraction of a person's direct contacts who are diagnosed exceeds a threshold, that person decides to seek prophylaxis, e.g. vaccine or antivirals; in the second case, we consider two intervention protocols, denoted Block and School: when a fraction of people who are diagnosed in a Census Block (resp., School) exceeds the threshold, prophylax the entire Block (resp., School). Results show that the bottom-up strategy outperforms the top-down strategies under our parameter settings. Even in situations where the Block strategy reduces the overall attack rate well, it incurs a much higher cost. These findings lend credence to the notion that if people used antivirals effectively, making them available quickly on demand to private citizens could be a very effective way to control an outbreak." }, { "pmid": "21339828", "title": "Economic and social impact of influenza mitigation strategies by demographic class.", "abstract": "BACKGROUND\nWe aim to determine the economic and social impact of typical interventions proposed by the public health officials and preventive behavioral changes adopted by the private citizens in the event of a \"flu-like\" epidemic.\n\n\nMETHOD\nWe apply an individual-based simulation model to the New River Valley area of Virginia for addressing this critical problem. The economic costs include not only the loss in productivity due to sickness but also the indirect cost incurred through disease avoidance and caring for dependents.\n\n\nRESULTS\nThe results show that the most important factor responsible for preventing income loss is the modification of individual behavior; it drops the total income loss by 62% compared to the base case. The next most important factor is the closure of schools which reduces the total income loss by another 40%.\n\n\nCONCLUSIONS\nThe preventive behavior of the private citizens is the most important factor in controlling the epidemic." }, { "pmid": "28570660", "title": "Epidemiological and economic impact of pandemic influenza in Chicago: Priorities for vaccine interventions.", "abstract": "The study objective is to estimate the epidemiological and economic impact of vaccine interventions during influenza pandemics in Chicago, and assist in vaccine intervention priorities. Scenarios of delay in vaccine introduction with limited vaccine efficacy and limited supplies are not unlikely in future influenza pandemics, as in the 2009 H1N1 influenza pandemic. We simulated influenza pandemics in Chicago using agent-based transmission dynamic modeling. Population was distributed among high-risk and non-high risk among 0-19, 20-64 and 65+ years subpopulations. Different attack rate scenarios for catastrophic (30.15%), strong (21.96%), and moderate (11.73%) influenza pandemics were compared against vaccine intervention scenarios, at 40% coverage, 40% efficacy, and unit cost of $28.62. Sensitivity analysis for vaccine compliance, vaccine efficacy and vaccine start date was also conducted. Vaccine prioritization criteria include risk of death, total deaths, net benefits, and return on investment. 
The risk of death is the highest among the high-risk 65+ years subpopulation in the catastrophic influenza pandemic, and highest among the high-risk 0-19 years subpopulation in the strong and moderate influenza pandemics. The proportion of total deaths and net benefits are the highest among the high-risk 20-64 years subpopulation in the catastrophic, strong and moderate influenza pandemics. The return on investment is the highest in the high-risk 0-19 years subpopulation in the catastrophic, strong and moderate influenza pandemics. Based on risk of death and return on investment, high-risk groups of the three age group subpopulations can be prioritized for vaccination, and the vaccine interventions are cost saving for all age and risk groups. The attack rates among the children are higher than among the adults and seniors in the catastrophic, strong, and moderate influenza pandemic scenarios, due to their larger social contact network and homophilous interactions in school. Based on return on investment and higher attack rates among children, we recommend prioritizing children (0-19 years) and seniors (65+ years) after high-risk groups for influenza vaccination during times of limited vaccine supplies. Based on risk of death, we recommend prioritizing seniors (65+ years) after high-risk groups for influenza vaccination during times of limited vaccine supplies." }, { "pmid": "27687898", "title": "Effect of modelling slum populations on influenza spread in Delhi.", "abstract": "OBJECTIVES\nThis research studies the impact of influenza epidemic in the slum and non-slum areas of Delhi, the National Capital Territory of India, by taking proper account of slum demographics and residents' activities, using a highly resolved social contact network of the 13.8 million residents of Delhi.\n\n\nMETHODS\nAn SEIR model is used to simulate the spread of influenza on two different synthetic social contact networks of Delhi, one where slums and non-slums are treated the same in terms of their demographics and daily sets of activities and the other, where slum and non-slum regions have different attributes.\n\n\nRESULTS\nDifferences between the epidemic outcomes on the two networks are large. Time-to-peak infection is overestimated by several weeks, and the cumulative infection rate and peak infection rate are underestimated by 10-50%, when slum attributes are ignored.\n\n\nCONCLUSIONS\nSlum populations have a significant effect on influenza transmission in urban areas. Improper specification of slums in large urban regions results in underestimation of infections in the entire population and hence will lead to misguided interventions by policy planners." }, { "pmid": "2061695", "title": "Some epidemiological models with nonlinear incidence.", "abstract": "Epidemiological models with nonlinear incidence rates can have very different dynamic behaviors than those with the usual bilinear incidence rate. The first model considered here includes vital dynamics and a disease process where susceptibles become exposed, then infectious, then removed with temporary immunity and then susceptible again. When the equilibria and stability are investigated, it is found that multiple equilibria exist for some parameter values and periodic solutions can arise by Hopf bifurcation from the larger endemic equilibrium. Many results analogous to those in the first model are obtained for the second model which has a delay in the removed class but no exposed class." 
}, { "pmid": "20377412", "title": "Viral shedding and clinical illness in naturally acquired influenza virus infections.", "abstract": "BACKGROUND\nVolunteer challenge studies have provided detailed data on viral shedding from the respiratory tract before and through the course of experimental influenza virus infection. There are no comparable quantitative data to our knowledge on naturally acquired infections.\n\n\nMETHODS\nIn a community-based study in Hong Kong in 2008, we followed up initially healthy individuals to quantify trends in viral shedding on the basis of cultures and reverse-transcription polymerase chain reaction (RT-PCR) through the course of illness associated with seasonal influenza A and B virus infection.\n\n\nRESULTS\nTrends in symptom scores more closely matched changes in molecular viral loads measured with RT-PCR for influenza A than for influenza B. For influenza A virus infections, the replicating viral loads determined with cultures decreased to undetectable levels earlier after illness onset than did molecular viral loads. Most viral shedding occurred during the first 2-3 days after illness onset, and we estimated that 1%-8% of infectiousness occurs prior to illness onset. Only 14% of infections with detectable shedding at RT-PCR were asymptomatic, and viral shedding was low in these cases.\n\n\nCONCLUSIONS\nOur results suggest that \"silent spreaders\" (ie, individuals who are infectious while asymptomatic or presymptomatic) may be less important in the spread of influenza epidemics than previously thought." }, { "pmid": "21149721", "title": "A high-resolution human contact network for infectious disease transmission.", "abstract": "The most frequent infectious diseases in humans--and those with the highest potential for rapid pandemic spread--are usually transmitted via droplets during close proximity interactions (CPIs). Despite the importance of this transmission route, very little is known about the dynamic patterns of CPIs. Using wireless sensor network technology, we obtained high-resolution data of CPIs during a typical day at an American high school, permitting the reconstruction of the social network relevant for infectious disease transmission. At 94% coverage, we collected 762,868 CPIs at a maximal distance of 3 m among 788 individuals. The data revealed a high-density network with typical small-world properties and a relatively homogeneous distribution of both interaction time and interaction partners among subjects. Computer simulations of the spread of an influenza-like disease on the weighted contact graph are in good agreement with absentee data during the most recent influenza season. Analysis of targeted immunization strategies suggested that contact network data are required to design strategies that are significantly more effective than random immunization. Immunization strategies based on contact network data were most effective at high vaccination coverage." }, { "pmid": "18230677", "title": "Time lines of infection and disease in human influenza: a review of volunteer challenge studies.", "abstract": "The dynamics of viral shedding and symptoms following influenza virus infection are key factors when considering epidemic control measures. The authors reviewed published studies describing the course of influenza virus infection in placebo-treated and untreated volunteers challenged with wild-type influenza virus. A total of 56 different studies with 1,280 healthy participants were considered. 
Viral shedding increased sharply between 0.5 and 1 day after challenge and consistently peaked on day 2. The duration of viral shedding averaged over 375 participants was 4.80 days (95% confidence interval: 4.31, 5.29). The frequency of symptomatic infection was 66.9% (95% confidence interval: 58.3, 74.5). Fever was observed in 37.0% of A/H1N1, 40.6% of A/H3N2 (p = 0.86), and 7.5% of B infections (p = 0.001). The total symptoms scores increased on day 1 and peaked on day 3. Systemic symptoms peaked on day 2. No such data exist for children or elderly subjects, but epidemiologic studies suggest that the natural history might differ. The present analysis confirms prior expert opinion on the duration of viral shedding or the frequency of asymptomatic influenza infection, extends prior knowledge on the dynamics of viral shedding and symptoms, and provides original results on the frequency of respiratory symptoms or fever." }, { "pmid": "23613721", "title": "Compliance to oseltamivir among two populations in Oxfordshire, United Kingdom affected by influenza A(H1N1)pdm09, November 2009--a waste water epidemiology study.", "abstract": "Antiviral provision remains the focus of many pandemic preparedness plans, however, there is considerable uncertainty regarding antiviral compliance rates. Here we employ a waste water epidemiology approach to estimate oseltamivir (Tamiflu®) compliance. Oseltamivir carboxylate (oseltamivir's active metabolite) was recovered from two waste water treatment plant (WWTP) catchments within the United Kingdom at the peak of the autumnal wave of the 2009 Influenza A (H1N1)pdm09 pandemic. Predictions of oseltamivir consumption from detected levels were compared with two sources of national government statistics to derive compliance rates. Scenario and sensitivity analysis indicated between 3-4 and 120-154 people were using oseltamivir during the study period in the two WWTP catchments and a compliance rate between 45-60%. With approximately half the collected antivirals going unused, there is a clear need to alter public health messages to improve compliance. We argue that a near real-time understanding of drug compliance at the scale of the waste water treatment plant (hundreds to millions of people) can potentially help public health messages become more timely, targeted, and demographically sensitive, while potentially leading to less mis- and un-used antiviral, less wastage and ultimately a more robust and efficacious pandemic preparedness plan." }, { "pmid": "21342886", "title": "Non-pharmaceutical interventions during an outbreak of 2009 pandemic influenza A (H1N1) virus infection at a large public university, April-May 2009.", "abstract": "Nonpharmaceutical interventions (NPIs), such as home isolation, social distancing, and infection control measures, are recommended by public health agencies as strategies to mitigate transmission during influenza pandemics. However, NPI implementation has rarely been studied in large populations. During an outbreak of 2009 Pandemic Influenza A (H1N1) virus infection at a large public university in April 2009, an online survey was conducted among students, faculty, and staff to assess knowledge of and adherence to university-recommended NPI. Although 3924 (65%) of 6049 student respondents and 1057 (74%) of 1401 faculty respondents reported increased use of self-protective NPI, such as hand washing, only 27 (6.4%) of 423 students and 5 (8.6%) of 58 faculty with acute respiratory infection (ARI) reported staying home while ill. 
Nearly one-half (46%) of student respondents, including 44.7% of those with ARI, attended social events. Results indicate a need for efforts to increase compliance with home isolation and social distancing measures." } ]
Frontiers in Psychology
30186196
PMC6113567
10.3389/fpsyg.2018.01486
Insights Into the Factors Influencing Student Motivation in Augmented Reality Learning Experiences in Vocational Education and Training
Research on Augmented Reality (AR) in education has demonstrated that AR applications designed with diverse components boost student motivation in educational settings. However, most of the research conducted to date does not define exactly what those components are and how they positively affect student motivation. This study, therefore, attempts to identify some of the components that positively affect student motivation in mobile AR learning experiences, to contribute to the design and development of motivational AR learning experiences for the Vocational Education and Training (VET) level of education. To identify these components, a research model constructed from the literature was empirically validated with data obtained from two sources: 35 students from four VET institutes interacting with an AR application for learning over a period of 20 days, and a self-report measure obtained from the Instructional Materials Motivation Survey (IMMS). We found that use of scaffolding, real-time feedback, degree of success, time on-task, and learning outcomes are positively correlated with the four dimensions of the ARCS model of motivation: Attention, Relevance, Confidence, and Satisfaction. Implications of these results are also described.
Related Work
AR and Student Motivation
Research on AR in education has shown that, among many other advantages, AR experiences are useful for increasing student motivation when compared to non-AR experiences (Radu, 2014; Akçayır and Akçayır, 2017). Some studies have analyzed the impact of AR on student motivation using the ARCS model of motivation, as summarized in Table 1. For each study, a check mark (✓) marks the dimensions in which AR had a remarkable effect, and a plus sign (+) marks a positive but not remarkable effect.
Table 1. Studies that used the ARCS model to analyze the impact of AR on student motivation (columns: Study | Attention | Relevance | Confidence | Satisfaction | Learning domain/topic).
Chen et al., 2016 | + | + | ✓ | ✓ | Food chain (science)
Chiang et al., 2014 | ✓ | ✓ | ✓ | + | Aquatic animals and plants (science)
Ibanez et al., 2015 | ✓ | + | + | ✓ | Principles of electricity
Chen, 2013 | ✓ | + | + | ✓ | Math
Di Serio et al., 2013 | ✓ | + | + | ✓ | Italian renaissance art
Chin et al., 2015 | + | + | + | ✓ | Liberal arts
Wei et al., 2015 | + | + | + | + | Creative design teaching
Together, these studies have used the ARCS model of motivation to represent students' levels of motivation. However, they do not clearly report which components of each AR application positively affect the dimensions of the ARCS model. Consequently, it is still unclear how an AR application might affect student motivation. Apart from the ARCS model and the IMMS instrument, some researchers have used other questionnaires (and models) of motivation and have also found a positive impact of AR on student motivation. For instance, Nachairit and Srisawasdi (2015) used the Scientific Motivation Questionnaire (SMQ), while Martin-Gutierrez and Meneses (2014) used the R-SPQ-2F instrument. Other researchers have developed their own questionnaires to collect data about student motivation: Yin et al. (2013); Fonseca et al. (2014); Restivo et al. (2014); Laine et al. (2016). However, all of these studies also fall short in identifying the components of AR applications that might help to increase student motivation. According to Cheng and Tsai (2013), more research needs to be conducted on other dimensions of the learning experience, such as motivation.
Predictors of Student Motivation
Some studies report features, aspects, or traits that might have an impact on student motivation in ARLEs. Table 2 shows these studies and the variables reported in each study.
Table 2. Studies that report variables that might impact student motivation (columns: Author(s) | Variable/predictor (feature, aspect, trait, etc.) | Impact on student motivation).
Ferrer et al., 2013 | Usability | Despite the usability issues in mobile AR, student motivation can be improved.
Huang and Liaw, 2014 | Immersion and interaction | Immersion and interactivity features are predictors of student motivation, but immersion is a stronger predictor.
Chen and Liao, 2015 | Type of AR content (static or dynamic); type of guiding strategy (procedure-guided or question-guided) | Learners in the static-AR and procedure-guided conditions outperformed learners in the dynamic-AR and question-guided conditions in the dimension of intrinsic goal orientation.
Chen and Wang, 2015 | Learning styles | Learning styles do not affect learning motivation in mobile AR instruction.
Gopalan et al., 2016 | Engagement, enjoyment, fun, ease of use | Engagement, enjoyment, and fun were significant predictors of student motivation; ease of use was not.
Overall, these studies provide insights into the variables that influence student motivation in ARLEs.
However, these studies do not clearly report how these variables are connected with the components of AR applications, and therefore it is not possible to determine which components might produce a positive impact on student motivation. Thus, our study aims to contribute to the identification of the components of AR applications that positively affect student motivation (modeled by the ARCS model of motivation) in ARLEs. We hypothesize that identifying these components can help inform the design and development of AR applications that effectively increase student motivation.
The Mobile AR Application: Paint-cAR
Paint-cAR is a marker-based mobile AR application that supports the teaching and learning of car paint repair in the context of the VET program on Car Maintenance. Repairing paint on a car is a complex process comprising 30 steps divided into 6 phases (Cleaning, Sand down, Applying putties, Applying sealers, Painting, and Applying Clear Coats). Each phase has an average of five steps, and each step represents a task that must be carried out using specific chemical products and/or tools. The steps must be performed in a fixed order; only when all the steps in a phase are completed can the next phase start. Students therefore need to learn how to perform each task and which chemical products and tools to use for each step. Learning this requires a considerable amount of time and combines theoretical and hands-on activities with chemical products and tools.
The Paint-cAR application was developed by the authors as the result of a co-creation process, described by Bacca et al. (2015), in which VET teachers, software developers, and educational technology experts participated. Using the application, students learn which chemical products and tools to use for each step of the paint repairing process. The application comprises the following modules: a Scaffolding Module, a Real-time feedback Module, an Assessment Module, the AR Module, and a Monitoring Module. In addition, students were given a booklet containing the AR markers that the application recognizes, so that they could also use the application at home.
Using the application, students are guided step-by-step through the process of repairing paint on a car. For each of the 30 steps, students must complete three activities designed by the VET teachers: (1) watch a video that explains how experts perform that step; (2) answer five multiple-choice questions about that step; and (3) identify the chemical products and/or tools needed for that step. This last activity is a mobile AR experience in which students move around the classroom (usually a workshop) and scan AR markers stuck to the tools and chemical products required for that step.
The application recognizes whether a product or tool is appropriate for a particular step by identifying the ID associated with each marker. In the AR experience, students can use the Scaffolding Module to ask the application for help at any time and obtain hints and information that help them find the appropriate tools and chemical products in the workshop. The Real-time feedback Module gives students feedback when they scan the markers stuck to the chemical products and tools, so that they can reflect on their choices, successes, and mistakes. The augmented information shown for each product and tool includes its characteristics, the safety measures required when using it, and its technical datasheet. Finally, the Monitoring Module captures students' interactions with all the other modules. Figure 1 shows a screenshot of the Paint-cAR application in AR mode.
Figure 1. A screenshot of the Paint-cAR application in AR mode.
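To make the marker-based checking described above concrete, the following minimal Python sketch models how a step could store the marker IDs of its valid products and tools and return real-time feedback on a scan. This is not the authors' implementation: the class, function, and marker IDs are hypothetical and only illustrate the described behaviour.

```python
# Minimal sketch of the step/marker-check logic described above. This is
# NOT the authors' implementation: the class, function, and marker IDs are
# hypothetical and only illustrate how a step can store the marker IDs of
# its valid products/tools and return real-time feedback on a scan.
from dataclasses import dataclass, field

@dataclass
class Step:
    phase: str                          # e.g., "Applying putties"
    index: int                          # position within the 30-step process
    valid_marker_ids: set = field(default_factory=set)

def check_scan(step: Step, marker_id: str) -> str:
    """Return feedback for a scanned AR marker in the context of a step."""
    if marker_id in step.valid_marker_ids:
        return f"Correct: this product/tool is used in step {step.index} ({step.phase})."
    return f"Not needed here: step {step.index} ({step.phase}) requires a different product/tool."

# Hypothetical usage
step_7 = Step(phase="Applying putties", index=7,
              valid_marker_ids={"M-PUTTY-01", "M-SPATULA-02"})
print(check_scan(step_7, "M-PUTTY-01"))   # correct choice
print(check_scan(step_7, "M-PRIMER-05"))  # wrong choice for this step
```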
[]
[]
Frontiers in Neuroscience
30190670
PMC6116788
10.3389/fnins.2018.00524
SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition
In this work, we propose a Spiking Neural Network (SNN) consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked liquid, referred to as Liquid-SNN, for unsupervised speech and image recognition. We adapt the strength of the synapses interconnecting the input and liquid using Spike Timing Dependent Plasticity (STDP), which enables the neurons to self-learn a general representation of unique classes of input patterns. The presented unsupervised learning methodology makes it possible to infer the class of a test input directly using the liquid neuronal spiking activity. This is in contrast to standard Liquid State Machines (LSMs) that have fixed synaptic connections between the input and liquid followed by a readout layer (trained in a supervised manner) to extract the liquid states and infer the class of the input patterns. Moreover, the utility of LSMs has primarily been demonstrated for speech recognition. We find that training such LSMs is challenging for complex pattern recognition tasks because of the information loss incurred by using fixed input to liquid synaptic connections. We show that our Liquid-SNN is capable of efficiently recognizing both speech and image patterns by learning the rich temporal information contained in the respective input patterns. However, the need to enlarge the liquid for improving the accuracy introduces scalability challenges and training inefficiencies. We propose SpiLinC that is composed of an ensemble of multiple liquids operating in parallel. We use a “divide and learn” strategy for SpiLinC, where each liquid is trained on a unique segment of the input patterns that causes the neurons to self-learn distinctive input features. SpiLinC effectively recognizes a test pattern by combining the spiking activity of the constituent liquids, each of which identifies characteristic input features. As a result, SpiLinC offers competitive classification accuracy compared to the Liquid-SNN with added sparsity in synaptic connectivity and faster training convergence, both of which lead to improved energy efficiency in neuromorphic hardware implementations. We validate the efficacy of the proposed Liquid-SNN and SpiLinC on the entire digit subset of the TI46 speech corpus and handwritten digits from the MNIST dataset.
4.4. Comparison with related works
We compare the proposed Liquid-SNN and SpiLinC models with the LSM presented in Verstraeten et al. (2005a), which uses a similar pre-processing front-end for the TI46 speech recognition task. We use a smaller subset containing a total of 500 speech samples (10 utterances each of digits 0-9 spoken by 5 different female speakers), since it is the de facto subset used in existing works to evaluate models for speech recognition. We trained our models on 300 randomly selected speech samples and report the classification accuracy on the remaining 200 samples. Table 6 shows that both the Liquid-SNN and the two-liquid SpiLinC yield comparable, albeit slightly lower, classification accuracy than the LSM, which requires a readout layer trained in a supervised manner to infer the class of an input pattern. Next, we evaluate the proposed models against a two-layered fully-connected SNN (Diehl and Cook, 2015) that is commonly used for unsupervised image recognition. Table 7 shows that the classification accuracy of both models on the MNIST testing dataset is lower than the 95% accuracy achieved by the two-layered SNN. However, the Liquid-SNN (with 12,800 neurons) and SpiLinC (4 × 3,200 neurons) respectively offer 3.6× and 9.4× sparsity in synaptic connectivity compared to the two-layered SNN containing 6,400 excitatory and 6,400 inhibitory neurons. Note that the number of synapses of the baseline two-layered SNN (shown in Table 7) is computed from Equation (8) using the following parameters: n_inp = 784, n_e = n_i = 6,400, p_inp-e = 1, p_ee = p_ii = 0, p_ei = 1/n_e, and p_ie = 1 - p_ei. Further, the four-liquid SpiLinC with 3,200 neurons per liquid (of which 2,560 are excitatory) would converge faster than the two-layered SNN with 6,400 excitatory neurons if both networks were trained with the same learning rate. We find that the classification accuracy of the proposed models is lower than that achieved by the two-layered SNN because of the sparse recurrent inhibitory connections inside the liquid, as explained below. When a test pattern is presented to the liquid, the neurons that learnt the corresponding pattern during training fire and only sparsely inhibit the remaining liquid neurons. This could potentially cause neurons that learnt different input classes but share common features with the presented test pattern to also fire, degrading the classification accuracy. To precisely recognize a test pattern, it is important to give higher weight to the spike counts of correctly firing neurons and lower weight to those of incorrectly firing neurons. This can be accomplished by adding a readout layer and suitably adjusting the liquid-to-readout synaptic weights. We refer the reader to the Supplementary Material for a performance characterization of SpiLinC with a readout layer.
Our results show that SpiLinC augmented with a readout layer provides a classification accuracy of 97.49% on the MNIST dataset and 97.29% on the TI46 digit subset.
Table 6. Classification accuracy of different SNN models (with similar audio pre-processing front-end) on 200 test samples from the TI46 speech corpus (columns: SNN model | Network size | Training methodology | Accuracy (%)).
LSM (Verstraeten et al., 2005a) | 1,200 liquid neurons | Supervised linear classifier | 94.0
Liquid-SNN (our work) | 1,600 liquid neurons | Unsupervised STDP | 91.6
Two-liquid SpiLinC (our work) | 2 × 800 liquid neurons | Unsupervised STDP | 91.4
Table 7. Classification accuracy of different SNN models trained using unsupervised STDP on the MNIST dataset (columns: SNN model | Network size | Number of synapses | Accuracy (%)).
Two-layered SNN (Diehl and Cook, 2015) | 12,800 neurons | 45,977,600 | 95
Liquid-SNN (our work) | 12,800 liquid neurons | 12,697,600 | 89.65
Four-liquid SpiLinC (our work) | 4 × 3,200 liquid neurons | 4,866,048 | 90.90
Finally, we note that deep learning networks (Wan et al., 2013) have been shown to achieve 99.79% classification accuracy on the MNIST dataset, while Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) (Graves et al., 2004) provide 98% classification accuracy on the TI46 digit subset. Although the proposed liquid models yield lower classification accuracy than the deep learning networks and the LSTM-RNNs, they offer the following benefits with respect to computational efficiency and training complexity. First, the event-driven spike-based computing capability of the Liquid-SNN and SpiLinC naturally leads to better computational efficiency than the deep learning networks, including the binary networks (Hubara et al., 2016), which are data-driven and operate on continuous real-valued and discrete neuronal activations, respectively. Second, the deep learning networks and the LSTM-RNNs are trained using, respectively, error backpropagation (Rumelhart et al., 1986) and backpropagation-through-time (Werbos, 1990), which are computationally expensive compared to the STDP-based localized training rule used in this work for the input-to-liquid synaptic weights. Last, the deep learning networks are iteratively trained on multiple presentations of the training dataset to minimize the training loss and achieve convergence. In contrast, the proposed models are capable of achieving convergence with fewer training examples, as evidenced by the four-liquid SpiLinC for MNIST digit recognition, which needed 64,000 training examples for convergence, roughly a single presentation of the training dataset. We note that meta-learning strategies (Hochreiter et al., 2001) have been proposed for LSTM-RNNs to learn quickly from fewer data samples by exploiting the internal memory of LSTM-RNNs. Recently, a new class of networks known as memory-augmented networks (Santoro et al., 2016), in which the network is augmented with an external memory module, has been demonstrated for one-shot learning, that is, learning new information after a single presentation. Similar learning strategies, which either exploit the internal memory of a recurrently connected liquid or incorporate an external memory module, could be used to improve the training efficacy of the proposed models.
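As a numerical check on Table 7, the synapse count of the baseline two-layered SNN can be recovered from the stated connection probabilities. The exact form of Equation (8) is not shown in this excerpt, so the Python sketch below assumes the usual sum over connection groups; under that assumption it reproduces the 45,977,600 figure quoted above.

```python
# Sketch reproducing the synapse count of the baseline two-layered SNN in
# Table 7. Equation (8) is not reproduced in this excerpt, so the
# decomposition into connection groups below is an assumption; with the
# stated parameters it recovers the 45,977,600 synapses quoted above.
n_inp, n_e, n_i = 784, 6400, 6400
p_inp_e = 1.0          # full input -> excitatory connectivity
p_ee = p_ii = 0.0      # no recurrent E->E or I->I connections
p_ei = 1.0 / n_e       # each excitatory neuron drives one inhibitory neuron
p_ie = 1.0 - p_ei      # each inhibitory neuron inhibits all but its partner

num_synapses = (n_inp * n_e * p_inp_e   # input -> excitatory
                + n_e * n_e * p_ee      # excitatory -> excitatory
                + n_e * n_i * p_ei      # excitatory -> inhibitory
                + n_i * n_e * p_ie      # inhibitory -> excitatory
                + n_i * n_i * p_ii)     # inhibitory -> inhibitory

print(round(num_synapses))  # 45977600, matching Table 7
```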
[ "26300769", "26941637", "29674961", "19115011", "20192230", "29328958", "30123103", "27877107", "12433288", "17305422", "25104385", "29551962", "29311774", "27760125", "29962943", "29875621" ]
[ { "pmid": "26300769", "title": "Learning structure of sensory inputs with synaptic plasticity leads to interference.", "abstract": "Synaptic plasticity is often explored as a form of unsupervised adaptation in cortical microcircuits to learn the structure of complex sensory inputs and thereby improve performance of classification and prediction. The question of whether the specific structure of the input patterns is encoded in the structure of neural networks has been largely neglected. Existing studies that have analyzed input-specific structural adaptation have used simplified, synthetic inputs in contrast to complex and noisy patterns found in real-world sensory data. In this work, input-specific structural changes are analyzed for three empirically derived models of plasticity applied to three temporal sensory classification tasks that include complex, real-world visual and auditory data. Two forms of spike-timing dependent plasticity (STDP) and the Bienenstock-Cooper-Munro (BCM) plasticity rule are used to adapt the recurrent network structure during the training process before performance is tested on the pattern recognition tasks. It is shown that synaptic adaptation is highly sensitive to specific classes of input pattern. However, plasticity does not improve the performance on sensory pattern recognition tasks, partly due to synaptic interference between consecutively presented input samples. The changes in synaptic strength produced by one stimulus are reversed by the presentation of another, thus largely preventing input-specific synaptic changes from being retained in the structure of the network. To solve the problem of interference, we suggest that models of plasticity be extended to restrict neural activity and synaptic modification to a subset of the neural circuit, which is increasingly found to be the case in experimental neuroscience." }, { "pmid": "26941637", "title": "Unsupervised learning of digit recognition using spike-timing-dependent plasticity.", "abstract": "In order to understand how the mammalian neocortex is performing computations, two things are necessary; we need to have a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there is an increasing interest in how spiking neural networks (SNN) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to a SNN. We present a SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. 
Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks." }, { "pmid": "29674961", "title": "Unsupervised Feature Learning With Winner-Takes-All Based STDP.", "abstract": "We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent-Plasticity (STDP) biological learning rule. We show equivalence between rank order coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware. Next we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented : a Winner-Takes-All (WTA) framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state of the art unsupervised learning methods." }, { "pmid": "19115011", "title": "Brian: a simulator for spiking neural networks in python.", "abstract": "\"Brian\" is a new simulator for spiking neural networks, written in Python (http://brian. di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience." }, { "pmid": "20192230", "title": "Nanoscale memristor device as synapse in neuromorphic systems.", "abstract": "A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. Here we experimentally demonstrate a nanoscale silicon-based memristor device and show that a hybrid system composed of complementary metal-oxide semiconductor neurons and memristor synapses can support important synaptic functions such as spike timing dependent plasticity. Using memristors as synapses in neuromorphic circuits can potentially offer both high connectivity and high density required for efficient computing." }, { "pmid": "29328958", "title": "STDP-based spiking deep convolutional neural networks for object recognition.", "abstract": "Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. 
These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousands spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundreds of such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions." }, { "pmid": "30123103", "title": "Training Deep Spiking Convolutional Neural Networks With STDP-Based Unsupervised Pre-training Followed by Supervised Fine-Tuning.", "abstract": "Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. The recent efforts in SNNs have been focused on implementing deeper networks with multiple hidden layers to incorporate exponentially more difficult functional representations. In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent-Plasticity (STDP), in order to better initialize the parameters in multi-layer systems prior to supervised optimization. The multi-layer SNN is comprised of alternating convolutional and pooling layers followed by fully-connected layers, which are populated with leaky integrate-and-fire spiking neurons. We train the deep SNNs in two phases wherein, first, convolutional kernels are pre-trained in a layer-wise manner with unsupervised learning followed by fine-tuning the synaptic weights with spike-based supervised gradient descent backpropagation. Our experiments on digit recognition demonstrate that the STDP-based pre-training with gradient-based optimization provides improved robustness, faster (~2.5 ×) training time and better generalization compared with purely gradient-based training without pre-training." 
}, { "pmid": "27877107", "title": "Training Deep Spiking Neural Networks Using Backpropagation.", "abstract": "Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations." }, { "pmid": "12433288", "title": "Real-time computing without stable states: a new framework for neural computation based on perturbations.", "abstract": "A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. 
It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology." }, { "pmid": "17305422", "title": "Unsupervised learning of visual features through spike timing dependent plasticity.", "abstract": "Spike timing dependent plasticity (STDP) is a learning rule that modifies synaptic strength as a function of the relative timing of pre- and postsynaptic spikes. When a neuron is repeatedly presented with similar inputs, STDP is known to have the effect of concentrating high synaptic weights on afferents that systematically fire early, while postsynaptic spike latencies decrease. Here we use this learning rule in an asynchronous feedforward spiking neural network that mimics the ventral visual pathway and shows that when the network is presented with natural images, selectivity to intermediate-complexity visual features emerges. Those features, which correspond to prototypical patterns that are both salient and consistently present in the images, are highly informative and enable robust object recognition, as demonstrated on various classification tasks. Taken together, these results show that temporal codes may be a key to understanding the phenomenal processing speed achieved by the visual system and that STDP can lead to fast and selective responses." }, { "pmid": "25104385", "title": "Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.", "abstract": "Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts." }, { "pmid": "29551962", "title": "Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model.", "abstract": "A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature action/movements enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. 
Our experiments demonstrate that our proposed reservoir achieves 81.3/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models." }, { "pmid": "29311774", "title": "Learning to Generate Sequences with Combination of Hebbian and Non-hebbian Plasticity in Recurrent Spiking Neural Networks.", "abstract": "Synaptic Plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine the standard spike timing correlation based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that inclusion of the adaptive decay of synaptic weights with standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme suppresses the chaotic activity in the recurrent model substantially, thereby enhancing its' ability to generate sequences consistently even in the presence of perturbations." }, { "pmid": "27760125", "title": "Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.", "abstract": "We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture." }, { "pmid": "29962943", "title": "Event-Based, Timescale Invariant Unsupervised Online Deep Learning With STDP.", "abstract": "Learning of hierarchical features with spiking neurons has mostly been investigated in the database framework of standard deep learning systems. 
However, the properties of neuromorphic systems could be particularly interesting for learning from continuous sensor data in real-world settings. In this work, we introduce a deep spiking convolutional neural network of integrate-and-fire (IF) neurons which performs unsupervised online deep learning with spike-timing dependent plasticity (STDP) from a stream of asynchronous and continuous event-based data. In contrast to previous approaches to unsupervised deep learning with spikes, where layers were trained successively, we introduce a mechanism to train all layers of the network simultaneously. This allows approximate online inference already during the learning process and makes our architecture suitable for online learning and inference. We show that it is possible to train the network without providing implicit information about the database, such as the number of classes and the duration of stimuli presentation. By designing an STDP learning rule which depends only on relative spike timings, we make our network fully event-driven and able to operate without defining an absolute timescale of its dynamics. Our architecture requires only a small number of generic mechanisms and therefore enforces few constraints on a possible neuromorphic hardware implementation. These characteristics make our network one of the few neuromorphic architecture which could directly learn features and perform inference from an event-based vision sensor." }, { "pmid": "29875621", "title": "Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks.", "abstract": "Spiking neural networks (SNNs) are promising in ascertaining brain-like behaviors since spikes are capable of encoding spatio-temporal information. Recent schemes, e.g., pre-training from artificial neural networks (ANNs) or direct training based on backpropagation (BP), make the high-performance supervised training of SNNs possible. However, these methods primarily fasten more attention on its spatial domain information, and the dynamics in temporal domain are attached less significance. Consequently, this might lead to the performance bottleneck, and scores of training techniques shall be additionally required. Another underlying problem is that the spike activity is naturally non-differentiable, raising more difficulties in supervised training of SNNs. In this paper, we propose a spatio-temporal backpropagation (STBP) algorithm for training high-performance SNNs. In order to solve the non-differentiable problem of SNNs, an approximated derivative for spike activity is proposed, being appropriate for gradient descent training. The STBP algorithm combines the layer-by-layer spatial domain (SD) and the timing-dependent temporal domain (TD), and does not require any additional complicated skill. We evaluate this method through adopting both the fully connected and convolutional architecture on the static MNIST dataset, a custom object detection dataset, and the dynamic N-MNIST dataset. Results bespeak that our approach achieves the best accuracy compared with existing state-of-the-art algorithms on spiking networks. This work provides a new perspective to investigate the high-performance SNNs for future brain-like computing paradigm with rich spatio-temporal dynamics." } ]
Scientific Reports
30166560
PMC6117260
10.1038/s41598-018-31263-2
Detection of antibiotics synthetized in microfluidic picolitre-droplets by various actinobacteria
The natural bacterial diversity is regarded as a treasure trove for natural products. However, accessing complex cell mixtures derived from environmental samples in standardized high-throughput screenings is challenging. Here, we present a droplet-based microfluidic platform for ultrahigh-throughput screenings able to directly harness the diversity of entire microbial communities. This platform combines extensive cultivation protocols in aqueous droplets starting from single cells or spores with modular detection methods for produced antimicrobial compounds. After long-term incubation for bacterial cell propagation and metabolite production, we implemented a setup for mass spectrometric analysis relying on direct electrospray ionization and injection of single droplets. Even in the presence of dense biomass we show robust detection of streptomycin on the single droplet level. Furthermore, we developed an ultrahigh-throughput screening based on a functional whole-cell assay by picoinjecting reporter cells into droplets. Depending on the survival of reporter cells, droplets were selected for the isolation of producing bacteria, which we demonstrated for a microbial soil community. The established ultrahigh-throughput screening for producers of antibiotics in miniaturized bioreactors in which diverse cell mixtures can be screened on the single cell level is a promising approach to find novel antimicrobial scaffolds.
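The droplet workflow summarized in this abstract starts from single cells or spores, and in passive droplet generators such loading is typically governed by Poisson statistics (see the reference on single-cell encapsulation in the list below). The following Python snippet is a purely illustrative sketch, not taken from the paper: the abstract does not state the actual loading density, so the mean occupancy values (lambda) below are assumptions chosen only to show how a target occupancy translates into empty, single-cell, and multi-cell droplet fractions.

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """Probability that a droplet contains exactly k cells for mean occupancy lam."""
    return (lam ** k) * exp(-lam) / factorial(k)

# Illustrative mean occupancies (cells per droplet); not values from the study.
for lam in (0.1, 0.3, 1.0):
    p0 = poisson_pmf(0, lam)                 # empty droplets
    p1 = poisson_pmf(1, lam)                 # exactly one cell
    p_multi = 1.0 - p0 - p1                  # two or more cells
    single_among_occupied = p1 / (1.0 - p0)  # purity of the occupied droplets
    print(f"lam={lam:.1f}: empty={p0:.2f}, single={p1:.2f}, "
          f"multi={p_multi:.2f}, single|occupied={single_among_occupied:.2f}")
```

At low mean occupancy most droplets are empty, but nearly all occupied droplets hold a single cell or spore; this trade-off is the usual rationale for dilute loading in passive single-cell encapsulation.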
Nucleic acid related work and phylogenetic analysis
DNA was extracted from axenic cultures using the QIAamp DNA Mini Kit (Qiagen, Germany). The 16S rRNA gene was amplified in 50 μl reactions using the primers 27F (AGA GTT TGA TCM TGG CTC AG) and 1492R (CGG TTA CCT TGT TAC GAC TT) and the PrimeSTAR GXL polymerase (Takara Bio, USA). The PCR was carried out as follows: pre-denaturation at 98 °C for 30 s; 35 cycles of denaturation at 98 °C for 20 s, annealing at 55 °C for 40 s, and elongation at 68 °C for 30 s; and a final elongation at 68 °C for 3 min. Sanger sequencing of forward and reverse strands with the same primers used for amplification was done by Macrogen (Netherlands). Consensus sequences of forward and reverse reads were computed with SeqTrace [51] (v0.9.0), applying a Needleman-Wunsch alignment algorithm and a quality cutoff for base calls of 30. After automatic trimming until 20 out of 20 bases were correctly called, consensus sequences were examined and curated manually. The consensus sequences of the nearly full-length 16S rRNA gene were aligned in SINA [52] (v1.2.11). Phylogenetic placement of the isolates was investigated by reconstructing phylogenetic trees with ARB [53] (v6.0.6) using the 'All-Species Living Tree' Project database [54] (release LTPs128, February 2017). Sequences were added to the LTP type strain reference tree using ARB parsimony (Quick add marked), and the alignment was corrected manually. Phylogenetic tree calculation with all family members was based on the maximum-likelihood algorithm using RAxML [55] (v7.04) with the GTR-GAMMA model and rapid bootstrap analysis.
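The consensus-calling step above (global Needleman-Wunsch alignment of the forward and reverse Sanger reads with a Phred cutoff of 30) can be illustrated with a short script. The sketch below is not the authors' code and is not SeqTrace itself: it assumes Biopython, assumes the chromatogram base calls and quality scores have been exported to hypothetical FASTQ files (isolate_27F.fastq, isolate_1492R.fastq), and uses arbitrary alignment scoring parameters. It also omits the automatic end-trimming ("20 out of 20 correct bases") and the manual curation described above.

```python
from Bio import SeqIO, pairwise2  # pairwise2 is used here purely for illustration

QUALITY_CUTOFF = 30  # Phred cutoff used above; lower-quality base calls are masked


def masked_sequence(record, cutoff=QUALITY_CUTOFF):
    """Return the read as a string with base calls below the cutoff masked as 'N'."""
    quals = record.letter_annotations["phred_quality"]
    return "".join(base if q >= cutoff else "N"
                   for base, q in zip(str(record.seq), quals))


# Hypothetical FASTQ exports of the 27F (forward) and 1492R (reverse) traces.
fwd = SeqIO.read("isolate_27F.fastq", "fastq")
# reverse_complement() also reverses the per-base quality scores.
rev = SeqIO.read("isolate_1492R.fastq", "fastq").reverse_complement()

# Global (Needleman-Wunsch) alignment of the two quality-masked reads.
# The scoring values (match, mismatch, gap open, gap extend) are illustrative,
# not SeqTrace defaults.
aln = pairwise2.align.globalms(masked_sequence(fwd), masked_sequence(rev),
                               2, -1, -5, -0.5, one_alignment_only=True)[0]

# Naive consensus: keep agreeing bases, fall back to the single confident call,
# and record 'N' where the reads disagree or both calls are unreliable.
consensus = []
for a, b in zip(aln.seqA, aln.seqB):
    if a == b and a not in "-N":
        consensus.append(a)
    elif a in "-N" and b not in "-N":
        consensus.append(b)
    elif b in "-N" and a not in "-N":
        consensus.append(a)
    else:
        consensus.append("N")

print("".join(consensus))
```

The downstream steps, multiple alignment with SINA [52] and maximum-likelihood tree reconstruction with RAxML [55] under GTR-GAMMA with rapid bootstrapping, are run with the cited external tools and are not part of this sketch.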
[ "27212581", "27025767", "26558892", "18694386", "20445884", "25496166", "26815064", "28374582", "28199790", "22753519", "28846506", "28892076", "24733162", "27382153", "27995916", "23955804", "24366236", "28390246", "15031493", "26852623", "21682647", "26739136", "12004133", "12438682", "25561178", "28820533", "29062959", "29552652", "26087412", "28513728", "20962271", "26797564", "19532959", "16544359", "21175680", "28202731", "26226550", "23881253", "23558786", "25448819", "22743772", "22942788", "22556368", "14985472", "18692976", "16928733" ]
[ { "pmid": "27212581", "title": "Droplet microfluidics for microbiology: techniques, applications and challenges.", "abstract": "Droplet microfluidics has rapidly emerged as one of the key technologies opening up new experimental possibilities in microbiology. The ability to generate, manipulate and monitor droplets carrying single cells or small populations of bacteria in a highly parallel and high throughput manner creates new approaches for solving problems in diagnostics and for research on bacterial evolution. This review presents applications of droplet microfluidics in various fields of microbiology: i) detection and identification of pathogens, ii) antibiotic susceptibility testing, iii) studies of microbial physiology and iv) biotechnological selection and improvement of strains. We also list the challenges in the dynamically developing field and new potential uses of droplets in microbiology." }, { "pmid": "27025767", "title": "Droplet-based microfluidics in drug discovery, transcriptomics and high-throughput molecular genetics.", "abstract": "Droplet-based microfluidics enables assays to be carried out at very high throughput (up to thousands of samples per second) and enables researchers to work with very limited material, such as primary cells, patient's biopsies or expensive reagents. An additional strength of the technology is the possibility to perform large-scale genotypic or phenotypic screens at the single-cell level. Here we critically review the latest developments in antibody screening, drug discovery and highly multiplexed genomic applications such as targeted genetic workflows, single-cell RNAseq and single-cell ChIPseq. Starting with a comprehensive introduction for non-experts, we pinpoint current limitations, analyze how they might be overcome and give an outlook on exciting future applications." }, { "pmid": "18694386", "title": "Functional cell-based assays in microliter volumes for ultra-high throughput screening.", "abstract": "Functional cell-based assays have gained increasing importance for microplate-based high throughput screening (HTS). The use of high-density microplates, most prominently 1536-well plates, and miniaturized assay formats allow screening of comprehensive compound collections with more than 1 million compounds at ultra-high throughput, i.e. in excess of 100,000 samples per day. uHTS operations with numerous campaigns per year should generally support this throughput at all different steps of the process, including the underlying compound logistics, the (automated) testing of the corporate compound collection in the bioassay, and the subsequent follow-up studies for hit confirmation and characterization. A growing number of reports document the general feasibility of cell-based uHTS in microliter volumes. In addition, full automation with integrated robotic systems allows the realization of also complex assay protocols with multiple liquid handling and signal detection steps. For this review, cell-based assays are categorized based on the kinetics of the cellular response to be quantified in the test and the readout method employed. Thus, assays measuring fast cellular responses with high temporal resolution, e.g., receptor mediated calcium signals or changes in membrane potential, are at one end of this spectrum, while tests quantifying cellular transcriptional responses mark the opposite end. 
Trends for cell-based uHTS assays developed at Bayer-Schering Pharma are, first, to incorporate assay integral reference signals allowing the experimental differentiation of target hits from non-specifically acting compounds, and second, to make use of kinetic, real-time readouts providing additional information on the mode-of-action of test compounds." }, { "pmid": "20445884", "title": "An automated two-phase microfluidic system for kinetic analyses and the screening of compound libraries.", "abstract": "Droplet-based microfluidic systems allow biological and chemical reactions to be performed on a drastically decreased scale. However, interfacing the outside world with such systems and generating high numbers of microdroplets of distinct chemical composition remain challenging. We describe here an automated system in which arrays of chemically distinct plugs are generated from microtiter plates. Each array can be split into multiple small-volume copies, thus allowing several screens of the same library. The system is fully compatible with further on-chip manipulation(s) and allows monitoring of individual plugs over time (e.g. for recording reaction kinetics). Hence the technology eliminates several bottlenecks of current droplet-based microfluidic systems and should open the way for (bio-)chemical and cell-based screens." }, { "pmid": "25496166", "title": "Interfacing microwells with nanoliter compartments: a sampler generating high-resolution concentration gradients for quantitative biochemical analyses in droplets.", "abstract": "Analysis of concentration dependencies is key to the quantitative understanding of biological and chemical systems. In experimental tests involving concentration gradients such as inhibitor library screening, the number of data points and the ratio between the stock volume and the volume required in each test determine the quality and efficiency of the information gained. Titerplate assays are currently the most widely used format, even though they require microlitre volumes. Compartmentalization of reactions in pico- to nanoliter water-in-oil droplets in microfluidic devices provides a solution for massive volume reduction. This work addresses the challenge of producing microfluidic-based concentration gradients in a way that every droplet represents one unique reagent combination. We present a simple microcapillary technique able to generate such series of monodisperse water-in-oil droplets (with a frequency of up to 10 Hz) from a sample presented in an open well (e.g., a titerplate). Time-dependent variation of the well content results in microdroplets that represent time capsules of the composition of the source well. By preserving the spatial encoding of the droplets in tubing, each reactor is assigned an accurate concentration value. We used this approach to record kinetic time courses of the haloalkane dehalogenase DbjA and analyzed 150 combinations of enzyme/substrate/inhibitor in less than 5 min, resulting in conclusive Michaelis-Menten and inhibition curves. Avoiding chips and merely requiring two pumps, a magnetic plate with a stirrer, tubing, and a pipet tip, this easy-to-use device rivals the output of much more expensive liquid handling systems using a fraction (∼100-fold less) of the reagents consumed in microwell format." 
}, { "pmid": "26815064", "title": "hνSABR: Photochemical Dose-Response Bead Screening in Droplets.", "abstract": "With the potential for each droplet to act as a unique reaction vessel, droplet microfluidics is a powerful tool for high-throughput discovery. Any attempt at compound screening miniaturization must address the significant scaling inefficiencies associated with library handling and distribution. Eschewing microplate-based compound collections for one-bead-one-compound (OBOC) combinatorial libraries, we have developed hνSABR (Light-Induced and -Graduated High-Throughput Screening After Bead Release), a microfluidic architecture that integrates a suspension hopper for compound library bead introduction, droplet generation, microfabricated waveguides to deliver UV light to the droplet flow for photochemical compound dosing, incubation, and laser-induced fluorescence for assay readout. Avobenzone-doped PDMS (0.6% w/w) patterning confines UV exposure to the desired illumination region, generating intradroplet compound concentrations (>10 μM) that are reproducible between devices. Beads displaying photochemically cleavable pepstatin A were distributed into droplets and exposed with five different UV intensities to demonstrate dose-response screening in an HIV-1 protease activity assay. This microfluidic architecture introduces a new analytical approach for OBOC library screening, and represents a key component of a next-generation distributed small molecule discovery platform." }, { "pmid": "28374582", "title": "Detection of Enzyme Inhibitors in Crude Natural Extracts Using Droplet-Based Microfluidics Coupled to HPLC.", "abstract": "Natural product screening for new bioactive compounds can greatly benefit from low reagents consumption and high throughput capacity of droplet-based microfluidic systems. However, the creation of large droplet libraries in which each droplet carries a different compound is a challenging task. A possible solution is to use an HPLC coupled to a droplet generating microfluidic device to sequentially encapsulate the eluting compounds. In this work we demonstrate the feasibility of carrying out enzyme inhibiting assays inside nanoliter droplets with the different components of a natural crude extract after being separated by a coupled HPLC column. In the droplet formation zone, the eluted components are mixed with an enzyme and a fluorogenic substrate that permits to follow the enzymatic reaction in the presence of each chromatographic peak and identify those inhibiting the enzyme activity. Using a fractal shape channel design and automated image analysis, we were able to identify inhibitors of Clostridium perfringens neuraminidase present in a root extract of the Pelargonium sidoides plant. This work demonstrates the feasibility of bioprofiling a natural crude extract after being separated in HPLC using microfluidic droplets online and represents an advance in the miniaturization of natural products screening." }, { "pmid": "28199790", "title": "An Integrated Microfluidic Processor for DNA-Encoded Combinatorial Library Functional Screening.", "abstract": "DNA-encoded synthesis is rekindling interest in combinatorial compound libraries for drug discovery and in technology for automated and quantitative library screening. Here, we disclose a microfluidic circuit that enables functional screens of DNA-encoded compound beads. 
The device carries out library bead distribution into picoliter-scale assay reagent droplets, photochemical cleavage of compound from the bead, assay incubation, laser-induced fluorescence-based assay detection, and fluorescence-activated droplet sorting to isolate hits. DNA-encoded compound beads (10-μm diameter) displaying a photocleavable positive control inhibitor pepstatin A were mixed (1920 beads, 729 encoding sequences) with negative control beads (58 000 beads, 1728 encoding sequences) and screened for cathepsin D inhibition using a biochemical enzyme activity assay. The circuit sorted 1518 hit droplets for collection following 18 min incubation over a 240 min analysis. Visual inspection of a subset of droplets (1188 droplets) yielded a 24% false discovery rate (1166 pepstatin A beads; 366 negative control beads). Using template barcoding strategies, it was possible to count hit collection beads (1863) using next-generation sequencing data. Bead-specific barcodes enabled replicate counting, and the false discovery rate was reduced to 2.6% by only considering hit-encoding sequences that were observed on >2 beads. This work represents a complete distributable small molecule discovery platform, from microfluidic miniaturized automation to ultrahigh-throughput hit deconvolution by sequencing." }, { "pmid": "22753519", "title": "Functional single-cell hybridoma screening using droplet-based microfluidics.", "abstract": "Monoclonal antibodies can specifically bind or even inhibit drug targets and have hence become the fastest growing class of human therapeutics. Although they can be screened for binding affinities at very high throughput using systems such as phage display, screening for functional properties (e.g., the inhibition of a drug target) is much more challenging. Typically these screens require the generation of immortalized hybridoma cells, as well as clonal expansion in microtiter plates over several weeks, and the number of clones that can be assayed is typically no more than a few thousand. We present here a microfluidic platform allowing the functional screening of up to 300,000 individual hybridoma cell clones within less than a day. This approach should also be applicable to nonimmortalized primary B-cells, as no cell proliferation is required: Individual cells are encapsulated into aqueous microdroplets and assayed directly for the release of antibodies inhibiting a drug target based on fluorescence. We used this system to perform a model screen for antibodies that inhibit angiotensin converting enzyme 1, a target for hypertension and congestive heart failure drugs. When cells expressing these antibodies were spiked into an unrelated hybridoma cell population in a ratio of 1:10,000 we observed a 9,400-fold enrichment after fluorescence activated droplet sorting. A wide variance in antibody expression levels at the single-cell level within a single hybridoma line was observed and high expressors could be successfully sorted and recultivated." }, { "pmid": "28846506", "title": "Rare, high-affinity mouse anti-PD-1 antibodies that function in checkpoint blockade, discovered using microfluidics and molecular genomics.", "abstract": "Conventionally, mouse hybridomas or well-plate screening are used to identify therapeutic monoclonal antibody candidates. 
In this study, we present an alternative to hybridoma-based discovery that combines microfluidics, yeast single-chain variable fragment (scFv) display, and deep sequencing to rapidly interrogate and screen mouse antibody repertoires. We used our approach on six wild-type mice to identify 269 molecules that bind to programmed cell death protein 1 (PD-1), which were present at an average of 1 in 2,000 in the pre-sort scFv libraries. Two rounds of fluorescence-activated cell sorting (FACS) produced populations of PD-1-binding scFv with a mean enrichment of 800-fold, whereas most scFv present in the pre-sort mouse repertoires were de-enriched. Therefore, our work suggests that most of the antibodies present in the repertoires of immunized mice are not strong binders to PD-1. We observed clusters of related antibody sequences in each mouse following FACS, suggesting evolution of clonal lineages. In the pre-sort repertoires, these putative clonal lineages varied in both the complementary-determining region (CDR)3K and CDR3H, while the FACS-selected PD-1-binding subsets varied primarily in the CDR3H. PD-1 binders were generally not highly diverged from germline, showing 98% identity on average with germline V-genes. Some CDR3 sequences were discovered in more than one animal, even across different mouse strains, suggesting convergent evolution. We synthesized 17 of the anti-PD-1 binders as full-length monoclonal antibodies. All 17 full-length antibodies bound recombinant PD-1 with KD < 500 nM (average = 62 nM). Fifteen of the 17 full-length antibodies specifically bound surface-expressed PD-1 in a FACS assay, and nine of the antibodies functioned as checkpoint inhibitors in a cellular assay. We conclude that our method is a viable alternative to hybridomas, with key advantages in comprehensiveness and turnaround time." }, { "pmid": "28892076", "title": "Single-cell deep phenotyping of IgG-secreting cells for high-resolution immune monitoring.", "abstract": "Studies of the dynamics of the antibody-mediated immune response have been hampered by the absence of quantitative, high-throughput systems to analyze individual antibody-secreting cells. Here we describe a simple microfluidic system, DropMap, in which single cells are compartmentalized in tens of thousands of 40-pL droplets and analyzed in two-dimensional droplet arrays using a fluorescence relocation-based immunoassay. Using DropMap, we characterized antibody-secreting cells in mice immunized with tetanus toxoid (TT) over a 7-week protocol, simultaneously analyzing the secretion rate and affinity of IgG from over 0.5 million individual cells enriched from spleen and bone marrow. Immunization resulted in dramatic increases in the range of both single-cell secretion rates and affinities, which spanned at maximum 3 and 4 logs, respectively. We observed differences over time in dynamics of secretion rate and affinity within and between anatomical compartments. This system will not only enable immune monitoring and optimization of immunization and vaccination protocols but also potentiate antibody screening." }, { "pmid": "24733162", "title": "CotA laccase: high-throughput manipulation and analysis of recombinant enzyme libraries expressed in E. 
coli using droplet-based microfluidics.", "abstract": "We present a high-throughput droplet-based microfluidic analysis/screening platform for directed evolution of CotA laccase: droplet-based microfluidic modules were combined to develop an efficient system that allows cell detection and sorting based on the enzymatic activity. This platform was run on two different operating modes: the \"analysis\" mode allowing the analysis of the enzymatic activity in droplets at very high rates (>1000 Hz) and the \"screening\" mode allowing sorting of active droplets at 400 Hz. The screening mode was validated for the directed evolution of the cytoplasmic CotA laccase from B. subtilis, a potential interesting thermophilic cathodic catalyst for biofuel cells. Single E. coli cells expressing either the active CotA laccase (E. coli CotA) or an inactive frameshifted variant (E. coli ΔCotA) were compartmentalized in aqueous droplets containing expression medium. After cell growth and protein expression within the droplets, a fluorogenic substrate was \"picoinjected\" in each droplet. Fluorescence-activated droplet sorting was then used to sort the droplets containing the desired activity and the corresponding cells were then recultivated and identified using colorimetric assays. We demonstrated that E. coli CotA cells were enriched 191-fold from a 1 : 9 initial ratio of E. coli CotA to E. coli ΔCotA cells (or 437-fold from a 1 : 99 initial ratio) using a sorting rate of 400 droplets per s. This system allows screening of 10(6) cells in only 4 h, compared to 11 days for screening using microtitre plate-based systems. Besides this low error rate sorting mode, the system can also be used at higher throughputs in \"enrichment\" screening mode to make an initial purification of a library before further steps of selection. Analysis mode, without sorting, was used to rapidly quantify the activity of a CotA library constructed using error-prone PCR. This mode allows analysis of 10(6) cells in only 1.5 h." }, { "pmid": "27382153", "title": "Lasso adjustments of treatment effect estimates in randomized experiments.", "abstract": "We provide a principled way for investigators to analyze randomized experiments when the number of covariates is large. Investigators often use linear multivariate regression to analyze randomized experiments instead of simply reporting the difference of means between treatment and control groups. Their aim is to reduce the variance of the estimated treatment effect by adjusting for covariates. If there are a large number of covariates relative to the number of observations, regression may perform poorly because of overfitting. In such cases, the least absolute shrinkage and selection operator (Lasso) may be helpful. We study the resulting Lasso-based treatment effect estimator under the Neyman-Rubin model of randomized experiments. We present theoretical conditions that guarantee that the estimator is more efficient than the simple difference-of-means estimator, and we provide a conservative estimator of the asymptotic variance, which can yield tighter confidence intervals than the difference-of-means estimator. Simulation and data examples show that Lasso-based adjustment can be advantageous even when the number of covariates is less than the number of observations. Specifically, a variant using Lasso for selection and ordinary least squares (OLS) for estimation performs particularly well, and it chooses a smoothing parameter based on combined performance of Lasso and OLS." 
}, { "pmid": "27995916", "title": "Emergence of a catalytic tetrad during evolution of a highly active artificial aldolase.", "abstract": "Designing catalysts that achieve the rates and selectivities of natural enzymes is a long-standing goal in protein chemistry. Here, we show that an ultrahigh-throughput droplet-based microfluidic screening platform can be used to improve a previously optimized artificial aldolase by an additional factor of 30 to give a >109 rate enhancement that rivals the efficiency of class I aldolases. The resulting enzyme catalyses a reversible aldol reaction with high stereoselectivity and tolerates a broad range of substrates. Biochemical and structural studies show that catalysis depends on a Lys-Tyr-Asn-Tyr tetrad that emerged adjacent to a computationally designed hydrophobic pocket during directed evolution. This constellation of residues is poised to activate the substrate by Schiff base formation, promote mechanistically important proton transfers and stabilize multiple transition states along a complex reaction coordinate. The emergence of such a sophisticated catalytic centre shows that there is nothing magical about the catalytic activities or mechanisms of naturally occurring enzymes, or the evolutionary process that gave rise to them." }, { "pmid": "23955804", "title": "A high-throughput screen for antibiotic drug discovery.", "abstract": "We describe an ultra-high-throughput screening platform enabling discovery and/or engineering of natural product antibiotics. The methodology involves creation of hydrogel-in-oil emulsions in which recombinant microorganisms are co-emulsified with bacterial pathogens; antibiotic activity is assayed by use of a fluorescent viability dye. We have successfully utilized both bulk emulsification and microfluidic technology for the generation of hydrogel microdroplets that are size-compatible with conventional flow cytometry. Hydrogel droplets are ∼25 pL in volume, and can be synthesized and sorted at rates exceeding 3,000 drops/s. Using this technique, we have achieved screening throughputs exceeding 5 million clones/day. Proof-of-concept experiments demonstrate efficient selection of antibiotic-secreting yeast from a vast excess of negative controls. In addition, we have successfully used this technique to screen a metagenomic library for secreted antibiotics that kill the human pathogen Staphylococcus aureus. Our results establish the practical utility of the screening platform, and we anticipate that the accessible nature of our methods will enable others seeking to identify and engineer the next generation of antibacterial biomolecules." }, { "pmid": "24366236", "title": "High-throughput screening for industrial enzyme production hosts by droplet microfluidics.", "abstract": "A high-throughput method for single cell screening by microfluidic droplet sorting is applied to a whole-genome mutated yeast cell library yielding improved production hosts of secreted industrial enzymes. The sorting method is validated by enriching a yeast strain 14 times based on its α-amylase production, close to the theoretical maximum enrichment. Furthermore, a 10(5) member yeast cell library is screened yielding a clone with a more than 2-fold increase in α-amylase production. The increase in enzyme production results from an improvement of the cellular functions of the production host in contrast to previous droplet-based directed evolution that has focused on improving enzyme protein structure. 
In the workflow presented, enzyme producing single cells are encapsulated in 20 pL droplets with a fluorogenic reporter substrate. The coupling of a desired phenotype (secreted enzyme concentration) with the genotype (contained in the cell) inside a droplet enables selection of single cells with improved enzyme production capacity by droplet sorting. The platform has a throughput over 300 times higher than that of the current industry standard, an automated microtiter plate screening system. At the same time, reagent consumption for a screening experiment is decreased a million fold, greatly reducing the costs of evolutionary engineering of production strains." }, { "pmid": "28390246", "title": "Exploring sequence space in search of functional enzymes using microfluidic droplets.", "abstract": "Screening of enzyme mutants in monodisperse picoliter compartments, generated at kilohertz speed in microfluidic devices, is coming of age. After a decade of proof-of-principle experiments, workflows have emerged that combine existing microfluidic modules to assay reaction progress quantitatively and yield improved enzymes. Recent examples of the screening of libraries of randomised proteins and from metagenomic sources suggest that this approach is not only faster and cheaper, but solves problems beyond the feasibility scope of current methodologies. The establishment of new assays in this format - so far covering hydrolases, aldolases, polymerases and dehydrogenases - will enable the exploration of sequence space for new catalysts of natural and non-natural chemical transformations." }, { "pmid": "15031493", "title": "Polyketide and nonribosomal peptide antibiotics: modularity and versatility.", "abstract": "Polyketide (PK) and nonribosomal peptides (NRP), constructed on multimodular enzymatic assembly lines, often attain the conformations that establish biological activity by cyclization constraints introduced by tailoring enzymes. The dedicated tailoring enzymes are encoded by genes clustered with the assembly line genes for coordinated regulation. NRP heterocyclizations to thiazoles and oxazoles can occur on the elongating framework of acyl-S enzyme intermediates, whereas tandem cyclic PK polyether formation of furans and pyrans can be initiated by post-assembly line epoxidases. Macrocyclizations of NRP, PK, and hybrid NRP-PK scaffolds occur in assembly line chain termination steps. Post-assembly line cascades of enzymatic oxidations also create cross-linked and cyclized architectures that generate the mature scaffolds of natural product antibiotics. The modularity of the natural product assembly lines and permissivity of tailoring enzymes offer prospects for reprogramming to create novel antibiotics with optimized properties." }, { "pmid": "26852623", "title": "Natural Products as Sources of New Drugs from 1981 to 2014.", "abstract": "This contribution is a completely updated and expanded version of the four prior analogous reviews that were published in this journal in 1997, 2003, 2007, and 2012. In the case of all approved therapeutic agents, the time frame has been extended to cover the 34 years from January 1, 1981, to December 31, 2014, for all diseases worldwide, and from 1950 (earliest so far identified) to December 2014 for all approved antitumor drugs worldwide. 
As mentioned in the 2012 review, we have continued to utilize our secondary subdivision of a \"natural product mimic\", or \"NM\", to join the original primary divisions and the designation \"natural product botanical\", or \"NB\", to cover those botanical \"defined mixtures\" now recognized as drug entities by the U.S. FDA (and similar organizations). From the data presented in this review, the utilization of natural products and/or their novel structures, in order to discover and develop the final drug entity, is still alive and well. For example, in the area of cancer, over the time frame from around the 1940s to the end of 2014, of the 175 small molecules approved, 131, or 75%, are other than \"S\" (synthetic), with 85, or 49%, actually being either natural products or directly derived therefrom. In other areas, the influence of natural product structures is quite marked, with, as expected from prior information, the anti-infective area being dependent on natural products and their structures. We wish to draw the attention of readers to the rapidly evolving recognition that a significant number of natural product drugs/leads are actually produced by microbes and/or microbial interactions with the \"host from whence it was isolated\", and therefore it is considered that this area of natural product research should be expanded significantly." }, { "pmid": "21682647", "title": "Approaches to capturing and designing biologically active small molecules produced by uncultured microbes.", "abstract": "Bacteria are one of the most important sources of bioactive natural products for drug discovery. Yet, in most habitats only a small percentage of all existing prokaryotes is amenable to cultivation and chemical study. There is strong evidence that the uncultivated diversity represents an enormous resource of novel biosynthetic enzymes and secondary metabolites. In addition, many animal-derived drug candidates that are structurally characterized but difficult to access seem to be produced by uncultivated, symbiotic bacteria. This review provides an overview about established and emerging techniques for the investigation and exploitation of the environmental metabolome. These include metagenomic library construction and screening, heterologous expression, community sequencing, and single-cell methods. Such tools, the advantages and shortcomings of which are discussed, have just begun to reveal the full metabolic potential of free-living and symbiotic bacteria, providing exciting new avenues for natural product research and environmental microbiology." }, { "pmid": "26739136", "title": "Natural product discovery: past, present, and future.", "abstract": "Microorganisms have provided abundant sources of natural products which have been developed as commercial products for human medicine, animal health, and plant crop protection. In the early years of natural product discovery from microorganisms (The Golden Age), new antibiotics were found with relative ease from low-throughput fermentation and whole cell screening methods. Later, molecular genetic and medicinal chemistry approaches were applied to modify and improve the activities of important chemical scaffolds, and more sophisticated screening methods were directed at target disease states. 
In the 1990s, the pharmaceutical industry moved to high-throughput screening of synthetic chemical libraries against many potential therapeutic targets, including new targets identified from the human genome sequencing project, largely to the exclusion of natural products, and discovery rates dropped dramatically. Nonetheless, natural products continued to provide key scaffolds for drug development. In the current millennium, it was discovered from genome sequencing that microbes with large genomes have the capacity to produce about ten times as many secondary metabolites as was previously recognized. Indeed, the most gifted actinomycetes have the capacity to produce around 30-50 secondary metabolites. With the precipitous drop in cost for genome sequencing, it is now feasible to sequence thousands of actinomycete genomes to identify the \"biosynthetic dark matter\" as sources for the discovery of new and novel secondary metabolites. Advances in bioinformatics, mass spectrometry, proteomics, transcriptomics, metabolomics and gene expression are driving the new field of microbial genome mining for applications in natural product discovery and development." }, { "pmid": "12004133", "title": "Isolating \"uncultivable\" microorganisms in pure culture in a simulated natural environment.", "abstract": "The majority (>99%) of microorganisms from the environment resist cultivation in the laboratory. Ribosomal RNA analysis suggests that uncultivated organisms are found in nearly every prokaryotic group, and several divisions have no known cultivable representatives. We designed a diffusion chamber that allowed the growth of previously uncultivated microorganisms in a simulated natural environment. Colonies of representative marine organisms were isolated in pure culture. These isolates did not grow on artificial media alone but formed colonies in the presence of other microorganisms. This observation may help explain the nature of microbial uncultivability." }, { "pmid": "12438682", "title": "Cultivating the uncultured.", "abstract": "The recent application of molecular phylogeny to environmental samples has resulted in the discovery of an abundance of unique and previously unrecognized microorganisms. The vast majority of this microbial diversity has proved refractory to cultivation. Here, we describe a universal method that provides access to this immense reservoir of untapped microbial diversity. This technique combines encapsulation of cells in gel microdroplets for massively parallel microbial cultivation under low nutrient flux conditions, followed by flow cytometry to detect microdroplets containing microcolonies. The ability to grow and study previously uncultured organisms in pure culture will enhance our understanding of microbial physiology and metabolic adaptation and will provide new sources of microbial metabolites. We show that this technology can be applied to samples from several different environments, including seawater and soil." }, { "pmid": "25561178", "title": "A new antibiotic kills pathogens without detectable resistance.", "abstract": "Antibiotic resistance is spreading faster than the introduction of new compounds into clinical practice, causing a public health crisis. Most antibiotics were produced by screening soil microorganisms, but this limited resource of cultivable bacteria was overmined by the 1960s. Synthetic approaches to produce antibiotics have been unable to replace this platform. 
Uncultured bacteria make up approximately 99% of all species in external environments, and are an untapped source of new antibiotics. We developed several methods to grow uncultured organisms by cultivation in situ or by using specific growth factors. Here we report a new antibiotic that we term teixobactin, discovered in a screen of uncultured bacteria. Teixobactin inhibits cell wall synthesis by binding to a highly conserved motif of lipid II (precursor of peptidoglycan) and lipid III (precursor of cell wall teichoic acid). We did not obtain any mutants of Staphylococcus aureus or Mycobacterium tuberculosis resistant to teixobactin. The properties of this compound suggest a path towards developing antibiotics that are likely to avoid development of resistance." }, { "pmid": "28820533", "title": "Actinomycetes: still a source of novel antibiotics.", "abstract": "Covering: 2006 to 2017Actinomycetes have been, for decades, one of the most important sources for the discovery of new antibiotics with an important number of drugs and analogs successfully introduced in the market and still used today in clinical practice. The intensive antibacterial discovery effort that generated the large number of highly potent broad-spectrum antibiotics, has seen a dramatic decline in the large pharma industry in the last two decades resulting in a lack of new classes of antibiotics with novel mechanisms of action reaching the clinic. Whereas the decline in the number of new chemical scaffolds and the rediscovery problem of old known molecules has become a hurdle for industrial natural products discovery programs, new actinomycetes compounds and leads have continued to be discovered and developed to the preclinical stages. Actinomycetes are still one of the most important sources of chemical diversity and a reservoir to mine for novel structures that is requiring the integration of diverse disciplines. These can range from novel strategies to isolate species previously not cultivated, innovative whole cell screening approaches and on-site analytical detection and dereplication tools for novel compounds, to in silico biosynthetic predictions from whole gene sequences and novel engineered heterologous expression, that have inspired the isolation of new NPs and shown their potential application in the discovery of novel antibiotics. This review will address the discovery of antibiotics from actinomycetes from two different perspectives including: (1) an update of the most important antibiotics that have only reached the clinical development in the recent years despite their early discovery, and (2) an overview of the most recent classes of antibiotics described from 2006 to 2017 in the framework of the different strategies employed to untap novel compounds previously overlooked with traditional approaches." }, { "pmid": "29062959", "title": "Evaluation of fermentation conditions triggering increased antibacterial activity from a near-shore marine intertidal environment-associated Streptomyces species.", "abstract": "A near-shore marine intertidal environment-associated Streptomyces isolate (USC-633) from the Sunshine Coast Region of Queensland, Australia, cultivated under a range of chemically defined and complex media to determine optimal parameters resulting in the secretion of diverse array of secondary metabolites with antimicrobial properties against various antibiotic resistant bacteria. 
Following extraction, fractioning and re-testing of active metabolites resulted in persistent antibacterial activity against Escherichia coli (Migula) (ATCC 13706) and subsequent Nuclear Magnetic Resonance (NMR) analysis of the active fractions confirmed the induction of metabolites different than the ones in fractions which did not display activity against the same bacterial species. Overall findings again confirmed the value of One Strain-Many Compounds (OSMAC) approach that tests a wide range of growth parameters to trigger bioactive compound secretion increasing the likelihood of finding novel therapeutic agents. The isolate was found to be adaptable to both marine and terrestrial conditions corresponding to its original near-shore marine intertidal environment. Wide variations in its morphology, sporulation and diffusible pigment production were observed when different growth media were used." }, { "pmid": "29552652", "title": "A systems approach using OSMAC, Log P and NMR fingerprinting: An approach to novelty.", "abstract": "The growing number of sequenced microbial genomes has revealed a remarkably large number of secondary metabolite biosynthetic clusters for which the compounds are still unknown. The aim of the present work was to apply a strategy to detect newly induced natural products by cultivating microorganisms in different fermentation conditions. The metabolomic analysis of 4160 fractions generated from 13 actinomycetes under 32 different culture conditions was carried out by 1H NMR spectroscopy and multivariate analysis. The principal component analysis (PCA) of the 1H NMR spectra showed a clear discrimination between those samples within PC1 and PC2. The fractions with induced metabolites that are only produced under specific growth conditions was identified by PCA analysis. This method allows an efficient differentiation within a large dataset with only one fractionation step. This work demonstrates the potential of NMR spectroscopy in combination with metabolomic data analysis for the screening of large sets of fractions." }, { "pmid": "26087412", "title": "Elicitation of secondary metabolism in actinomycetes.", "abstract": "Genomic sequence data have revealed the presence of a large fraction of putatively silent biosynthetic gene clusters in the genomes of actinomycetes that encode for secondary metabolites, which are not detected under standard fermentation conditions. This review focuses on the effects of biological (co-cultivation), chemical, as well as molecular elicitation on secondary metabolism in actinomycetes. Our review covers the literature until June 2014 and exemplifies the diversity of natural products that have been recovered by such approaches from the phylum Actinobacteria." }, { "pmid": "28513728", "title": "A droplet-chip/mass spectrometry approach to study organic synthesis at nanoliter scale.", "abstract": "A droplet-based microfluidic device with seamless hyphenation to electrospray mass spectrometry was developed to rapidly investigate organic reactions in segmented flow providing a versatile tool for drug development. A chip-MS interface with an integrated counterelectrode allowed for a flexible positioning of the chip-emitter in front of the MS orifice as well as an independent adjustment of the electrospray potentials. This was necessary to avoid contamination of the mass spectrometer as well as sample overloading due to the high analyte concentrations. 
The device was exemplarily applied to study the scope of an amino-catalyzed domino reaction with low picomole amount of catalyst in individual nanoliter sized droplets." }, { "pmid": "20962271", "title": "High-throughput injection with microfluidics using picoinjectors.", "abstract": "Adding reagents to drops is one of the most important functions in droplet-based microfluidic systems; however, a robust technique to accomplish this does not exist. Here, we introduce the picoinjector, a robust device to add controlled volumes of reagent using electro-microfluidics at kilohertz rates. It can also perform multiple injections for serial and combinatorial additions." }, { "pmid": "26797564", "title": "Controlling molecular transport in minimal emulsions.", "abstract": "Emulsions are metastable dispersions in which molecular transport is a major mechanism driving the system towards its state of minimal energy. Determining the underlying mechanisms of molecular transport between droplets is challenging due to the complexity of a typical emulsion system. Here we introduce the concept of 'minimal emulsions', which are controlled emulsions produced using microfluidic tools, simplifying an emulsion down to its minimal set of relevant parameters. We use these minimal emulsions to unravel the fundamentals of transport of small organic molecules in water-in-fluorinated-oil emulsions, a system of great interest for biotechnological applications. Our results are of practical relevance to guarantee a sustainable compartmentalization of compounds in droplet microreactors and to design new strategies for the dynamic control of droplet compositions." }, { "pmid": "19532959", "title": "Fluorescence-activated droplet sorting (FADS): efficient microfluidic cell sorting based on enzymatic activity.", "abstract": "We describe a highly efficient microfluidic fluorescence-activated droplet sorter (FADS) combining many of the advantages of microtitre-plate screening and traditional fluorescence-activated cell sorting (FACS). Single cells are compartmentalized in emulsion droplets, which can be sorted using dielectrophoresis in a fluorescence-activated manner (as in FACS) at rates up to 2000 droplets s(-1). To validate the system, mixtures of E. coli cells, expressing either the reporter enzyme beta-galactosidase or an inactive variant, were compartmentalized with a fluorogenic substrate and sorted at rates of approximately 300 droplets s(-1). The false positive error rate of the sorter at this throughput was <1 in 10(4) droplets. Analysis of the sorted cells revealed that the primary limit to enrichment was the co-encapsulation of E. coli cells, not sorting errors: a theoretical model based on the Poisson distribution accurately predicted the observed enrichment values using the starting cell density (cells per droplet) and the ratio of active to inactive cells. When the cells were encapsulated at low density ( approximately 1 cell for every 50 droplets), sorting was very efficient and all of the recovered cells were the active strain. In addition, single active droplets were sorted and cells were successfully recovered." }, { "pmid": "28202731", "title": "Microfluidic droplet platform for ultrahigh-throughput single-cell screening of biodiversity.", "abstract": "Ultrahigh-throughput screening (uHTS) techniques can identify unique functionality from millions of variants. 
To mimic the natural selection mechanisms that occur by compartmentalization in vivo, we developed a technique based on single-cell encapsulation in droplets of a monodisperse microfluidic double water-in-oil-in-water emulsion (MDE). Biocompatible MDE enables in-droplet cultivation of different living species. The combination of droplet-generating machinery with FACS followed by next-generation sequencing and liquid chromatography-mass spectrometry analysis of the secretomes of encapsulated organisms yielded detailed genotype/phenotype descriptions. This platform was probed with uHTS for biocatalysts anchored to yeast with enrichment close to the theoretically calculated limit and cell-to-cell interactions. MDE-FACS allowed the identification of human butyrylcholinesterase mutants that undergo self-reactivation after inhibition by the organophosphorus agent paraoxon. The versatility of the platform allowed the identification of bacteria, including slow-growing oral microbiota species that suppress the growth of a common pathogen, Staphylococcus aureus, and predicted which genera were associated with inhibitory activity." }, { "pmid": "26226550", "title": "The Poisson distribution and beyond: methods for microfluidic droplet production and single cell encapsulation.", "abstract": "There is a recognized and growing need for rapid and efficient cell assays, where the size of microfluidic devices lend themselves to the manipulation of cellular populations down to the single cell level. An exceptional way to analyze cells independently is to encapsulate them within aqueous droplets surrounded by an immiscible fluid, so that reagents and reaction products are contained within a controlled microenvironment. Most cell encapsulation work has focused on the development and use of passive methods, where droplets are produced continuously at high rates by pumping fluids from external pressure-driven reservoirs through defined microfluidic geometries. With limited exceptions, the number of cells encapsulated per droplet in these systems is dictated by Poisson statistics, reducing the proportion of droplets that contain the desired number of cells and thus the effective rate at which single cells can be encapsulated. Nevertheless, a number of recently developed actively-controlled droplet production methods present an alternative route to the production of droplets at similar rates and with the potential to improve the efficiency of single-cell encapsulation. In this critical review, we examine both passive and active methods for droplet production and explore how these can be used to deterministically and non-deterministically encapsulate cells." }, { "pmid": "23881253", "title": "Real-time image processing for label-free enrichment of Actinobacteria cultivated in picolitre droplets.", "abstract": "The majority of today's antimicrobial therapeutics is derived from secondary metabolites produced by Actinobacteria. While it is generally assumed that less than 1% of Actinobacteria species from soil habitats have been cultivated so far, classic screening approaches fail to supply new substances, often due to limited throughput and frequent rediscovery of already known strains. To overcome these restrictions, we implement high-throughput cultivation of soil-derived Actinobacteria in microfluidic pL-droplets by generating more than 600,000 pure cultures per hour from a spore suspension that can subsequently be incubated for days to weeks. 
Moreover, we introduce triggered imaging with real-time image-based droplet classification as a novel universal method for pL-droplet sorting. Growth-dependent droplet sorting at frequencies above 100 Hz is performed for label-free enrichment and extraction of microcultures. The combination of both cultivation of Actinobacteria in pL-droplets and real-time detection of growing Actinobacteria has great potential in screening for yet unknown species as well as their undiscovered natural products." }, { "pmid": "23558786", "title": "Single-cell analysis and sorting using droplet-based microfluidics.", "abstract": "We present a droplet-based microfluidics protocol for high-throughput analysis and sorting of single cells. Compartmentalization of single cells in droplets enables the analysis of proteins released from or secreted by cells, thereby overcoming one of the major limitations of traditional flow cytometry and fluorescence-activated cell sorting. As an example of this approach, we detail a binding assay for detecting antibodies secreted from single mouse hybridoma cells. Secreted antibodies are detected after only 15 min by co-compartmentalizing single mouse hybridoma cells, a fluorescent probe and single beads coated with anti-mouse IgG antibodies in 50-pl droplets. The beads capture the secreted antibodies and, when the captured antibodies bind to the probe, the fluorescence becomes localized on the beads, generating a clearly distinguishable fluorescence signal that enables droplet sorting at ∼200 Hz as well as cell enrichment. The microfluidic system described is easily adapted for screening other intracellular, cell-surface or secreted proteins and for quantifying catalytic or regulatory activities. In order to screen ∼1 million cells, the microfluidic operations require 2-6 h; the entire process, including preparation of microfluidic devices and mammalian cells, requires 5-7 d." }, { "pmid": "25448819", "title": "New tools for comparing microscopy images: quantitative analysis of cell types in Bacillus subtilis.", "abstract": "Fluorescence microscopy is a method commonly used to examine individual differences between bacterial cells, yet many studies still lack a quantitative analysis of fluorescence microscopy data. Here we introduce some simple tools that microbiologists can use to analyze and compare their microscopy images. We show how image data can be converted to distribution data. These data can be subjected to a cluster analysis that makes it possible to objectively compare microscopy images. The distribution data can further be analyzed using distribution fitting. We illustrate our methods by scrutinizing two independently acquired data sets, each containing microscopy images of a doubly labeled Bacillus subtilis strain. For the first data set, we examined the expression of srfA and tapA, two genes which are expressed in surfactin-producing and matrix-producing cells, respectively. For the second data set, we examined the expression of eps and tapA; these genes are expressed in matrix-producing cells. We show that srfA is expressed by all cells in the population, a finding which contrasts with a previously reported bimodal distribution of srfA expression. In addition, we show that eps and tapA do not always have the same expression profiles, despite being expressed in the same cell type: both operons are expressed in cell chains, while single cells mainly express eps. 
These findings exemplify that the quantification and comparison of microscopy data can yield insights that otherwise would go unnoticed." }, { "pmid": "22743772", "title": "Fiji: an open-source platform for biological-image analysis.", "abstract": "Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities." }, { "pmid": "22942788", "title": "SeqTrace: a graphical tool for rapidly processing DNA sequencing chromatograms.", "abstract": "Modern applications of Sanger DNA sequencing often require converting a large number of chromatogram trace files into high-quality DNA sequences for downstream analyses. Relatively few nonproprietary software tools are available to assist with this process. SeqTrace is a new, free, and open-source software application that is designed to automate the entire workflow by facilitating easy batch processing of large numbers of trace files. SeqTrace can identify, align, and compute consensus sequences from matching forward and reverse traces, filter low-quality base calls, and end-trim finished sequences. The software features a graphical interface that includes a full-featured chromatogram viewer and sequence editor. SeqTrace runs on most popular operating systems and is freely available, along with supporting documentation, at http://seqtrace.googlecode.com/." }, { "pmid": "22556368", "title": "SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.", "abstract": "MOTIVATION\nIn the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA) where already millions of sequences are publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements.\n\n\nRESULTS\nIn this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high throughput performance demands. SINA was evaluated in comparison with the commonly used high throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3, 97.6 and 96.1 accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9 and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks.\n\n\nAVAILABILITY\nAlignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, user manual and tutorial. SINA is made available under a personal use license." 
}, { "pmid": "14985472", "title": "ARB: a software environment for sequence data.", "abstract": "The ARB (from Latin arbor, tree) project was initiated almost 10 years ago. The ARB program package comprises a variety of directly interacting software tools for sequence database maintenance and analysis which are controlled by a common graphical user interface. Although it was initially designed for ribosomal RNA data, it can be used for any nucleic and amino acid sequence data as well. A central database contains processed (aligned) primary structure data. Any additional descriptive data can be stored in database fields assigned to the individual sequences or linked via local or worldwide networks. A phylogenetic tree visualized in the main window can be used for data access and visualization. The package comprises additional tools for data import and export, sequence alignment, primary and secondary structure editing, profile and filter calculation, phylogenetic analyses, specific hybridization probe design and evaluation and other components for data analysis. Currently, the package is used by numerous working groups worldwide." }, { "pmid": "18692976", "title": "The All-Species Living Tree project: a 16S rRNA-based phylogenetic tree of all sequenced type strains.", "abstract": "The signing authors together with the journal Systematic and Applied Microbiology (SAM) have started an ambitious project that has been conceived to provide a useful tool especially for the scientific microbial taxonomist community. The aim of what we have called \"The All-Species Living Tree\" is to reconstruct a single 16S rRNA tree harboring all sequenced type strains of the hitherto classified species of Archaea and Bacteria. This tree is to be regularly updated by adding the species with validly published names that appear monthly in the Validation and Notification lists of the International Journal of Systematic and Evolutionary Microbiology. For this purpose, the SAM executive editors, together with the responsible teams of the ARB, SILVA, and LPSN projects (www.arb-home.de, www.arb-silva.de, and www.bacterio.cict.fr, respectively), have prepared a 16S rRNA database containing over 6700 sequences, each of which represents a single type strain of a classified species up to 31 December 2007. The selection of sequences had to be undertaken manually due to a high error rate in the names and information fields provided for the publicly deposited entries. In addition, from among the often occurring multiple entries for a single type strain, the best-quality sequence was selected for the project. The living tree database that SAM now provides contains corrected entries and the best-quality sequences with a manually checked alignment. The tree reconstruction has been performed by using the maximum likelihood algorithm RAxML. The tree provided in the first release is a result of the calculation of a single dataset containing 9975 single entries, 6728 corresponding to type strain gene sequences, as well as 3247 additional high-fquality sequences to give robustness to the reconstruction. Trees are dynamic structures that change on the basis of the quality and availability of the data used for their calculation. Therefore, the addition of new type strain sequences in further subsequent releases may help to resolve certain branching orders that appear ambiguous in this first release. 
On the web sites: www.elsevier.de/syapm and www.arb-silva.de/living-tree, the All-Species Living Tree team will release a regularly updated database compatible with the ARB software environment containing the whole 16S rRNA dataset used to reconstruct \"The All-Species Living Tree\". As a result, the latest reconstructed phylogeny will be provided. In addition to the ARB file, a readable multi-FASTA universal sequence editor file with the complete alignment will be provided for those not using ARB. There is also a complete set of supplementary tables and figures illustrating the selection procedure and its outcome. It is expected that the All-Species Living Tree will help to improve future classification efforts by simplifying the selection of the correct type strain sequences. For queries, information updates, remarks on the dataset or tree reconstructions shown, a contact email address has been created ([email protected]). This provides an entry point for anyone from the scientific community to provide additional input for the construction and improvement of the first tree compiling all sequenced type strains of all prokaryotic species for which names had been validly published." }, { "pmid": "16928733", "title": "RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models.", "abstract": "UNLABELLED\nRAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 up to 6722 taxa shows that RAxML requires at least 5.6 times less main memory and yields better trees in similar times than the best competing program (GARLI) on datasets up to 2500 taxa. On datasets > or =4000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date containing 25,057 (1463 bp) and 2182 (51,089 bp) taxa, respectively.\n\n\nAVAILABILITY\nicwww.epfl.ch/~stamatak" } ]
Materials
30126192
PMC6119856
10.3390/ma11081469
Product Lifecycle Management as Data Repository for Manufacturing Problem Solving
Fault diagnosis presents a considerable difficulty to human operators in supervisory control of manufacturing systems. Implementing Internet of Things (IoT) technologies in existing manufacturing facilities implies an investment, since it requires upgrading them with sensors, connectivity capabilities, and IoT software platforms. Aligned with the technological vision of Industry 4.0 and based on currently existing information databases in the industry, this work proposes a lower-investment alternative solution for fault diagnosis and problem solving. This paper presents the details of the information and communication models of an application prototype oriented to production. It aims at assisting shop-floor actors during a Manufacturing Problem Solving (MPS) process. It captures and shares knowledge, taking existing Process Failure Mode and Effect Analysis (PFMEA) documents as an initial source of information related to potential manufacturing problems. It uses a Product Lifecycle Management (PLM) system as source of manufacturing context information related to the problems under investigation and integrates Case-Based Reasoning (CBR) technology to provide information about similar manufacturing problems.
2. Related Works

The developed approach is grounded on three basic models: an MPS process model, an MPS knowledge representation model, and an MPS system architecture model [5].

The MPS process model defines the steps to be taken by the user to solve a problem with the support of the proposed system. This process model basically follows the steps defined in the 8D method [13] and specifies the kind of interaction expected at each step between the user and the system. The left side of Figure 1 shows the developed Graphical User Interface (GUI), the center part of Figure 1 shows the main steps of the MPS process, and the right side of Figure 1 shows the main systems of the developed prototype. The main steps of the MPS process are explained next, and their links with the developed GUI are shown in Figure 1.

1. The user inputs a basic description of a manufacturing problem into the system (S1).
2. Based on the user input, the system searches and collects context information related to the problem from the PLM system repository. The result is shown to the user (S1).
3. Combining the input from the user and the data collected from the PLM repository, the system creates a global query to search for possible solutions (S2).
4. The system distributes the global query among its agents [16] that look for the most similar proposals out of their own case bases by applying CBR. The 10 most similar proposals coming from the agents are presented to the user. Initially, only the proposed containment actions and problem causes are displayed (S2).
5. The user must check the proposed failure modes or causes at the manufacturing location where the problem was identified and give feedback to the system. At this stage, the user may decide to refine his problem formulation and then go back to Step 1 (S2).
6. Once the possible root causes are identified, the system provides the related proposals for corrective and preventive actions (S3).
7. As part of the lessons learned step, the user gives feedback to be analyzed by a Knowledge Engineer. When appropriate, the Knowledge Engineer will update the CBR subsystems to extend the case bases (S4, S5, S6).

The MPS knowledge representation model is based on an ontology that allows for the representation of any knowledge related to the MPS process [5]. It comprises the following main concepts: Problem, Component, Function, Failure, Context, and Solution (Figure 2). The relations among these six concepts, their associated taxonomies, and their parameters have been designed to fulfil several constraints: support a generic definition of a manufacturing process and its location, be compatible with the information structure of the PFMEA method, comprise concepts to describe different aspects of a manufacturing problem, and allow case similarity determination.

The proposed ontology defines the concept “Problem” similarly as in FMEA (Failure Mode and Effect Analysis) [14], where a component performs a function, and the latter fails in a defined mode. Component, Function, and Failure form a unique trio. The concept “Component” is subdivided into six subtypes: Process, Man, Machine, Material, Method, and Environment. The concept “Context” allows for representing the setting of a problem, is subdivided into seven different types of contexts: Material, Process, Machine, Event, Method, Man, and Environment, and has an associated taxonomy represented through the relationship type “is part of” pointing to itself.
Each subtype of Context has different types of attributes to specify each type of technical information in the context (e.g., pressures, temperatures, and dimensions). These attributes are used in the configuration of the PLM system to store PPR information explicitly associated with the problem. More detailed information about this ontology can be found in Camarillo et al. [5].

The proposed MPS system architecture model is based on SEASALT (Shared Experience using an Agent-based System Architecture LayouT) [16]. The developed architecture supports the deployment of the different agents across different manufacturing plants of a company. Within each plant, agents can be deployed across the areas with different manufacturing processes. In this way, each topic agent, hosted in a specific manufacturing process of a specific manufacturing plant, will be able to collect and to store knowledge related to its own area, becoming an expert of its process and plant. By means of a coordinator agent, a topic agent can communicate and interchange information with all the other topic agents hosted in different processes and/or plants through the company’s intranet. Each topic agent has its own case base and uses CBR technology to find the most similar cases related to a user query. This information exchange supports the MPS process by providing the user with solutions for the most similar failures stored in any topic agent of the architecture [5].

In the literature review, several relevant works developed by other researchers were identified. Firstly, in relation to the modeling of PFMEA concepts, Dittmann et al. [19] presented an ontology to support FMEA concepts. The information model proposed in this work enhanced that ontology mainly by adding the concepts of Problem and Context [5]. In relation to the use of a PLM repository, Bertin et al. [6] propose, as part of a Lessons Learned System (LLS), the use of a PLM system as the central repository of data, but they put the focus on the Engineering Change Request (ECR) process of the company, whereas this work focuses on problem solving at production lines.

The work of Yang et al. [1] presents a fault diagnosis system for software-intensive manufacturing systems and processes. They also profit from the stored information in the FMEA documents of the company and use CBR as an Artificial Intelligence (AI) tool. Nevertheless, they propose a second AI technology, a deep-level Bayesian diagnosis network, to be used in cases of dynamic multi-fault diagnosis with uncertainty. The approach presented in this paper shares with them the use of FMEA and CBR but remains at a simpler AI level. However, the application scope of this work considers the sharing of knowledge among different manufacturing processes and plants (represented by topic agents). Also, contrary to the single-diagnosis suggestion proposed by Yang et al. [1], this proposed system uses an MPS method to guide the user step-by-step through the resolution of problems, which allows multiple cycles of problem redefinition, and that is fundamental when addressing very complex problems.

Finally, two relevant research works were identified in the field of fault diagnosis in aircraft maintenance: Chiu et al. [20] and Reus et al. [21]. Chiu et al. [20] propose the use of CBR together with genetic algorithms to enhance dynamic weighting and the design of non-similarity functions. With this approach, the proposed CBR system is able to achieve superior learning performance.
As in the previous case, the approach presented in this paper remains at a simpler AI level, but proposes knowledge sharing among different MPS units. Reus et al. [21], as in this work, propose the use of SEASALT as a multi-agent architecture to share knowledge among multiple units. Nevertheless, the use of extended context-related information to enrich the similarity calculation is not addressed. Therefore, the link to a PLM system to enrich the similarity calculation and the search for solutions is outside its scope.

The next section introduces the developed information models and their link to the data sources, with special focus on the data related to PFMEA and the PPR concepts to be supported by the PLM system repository.
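To make the retrieval step described above more concrete, the following is a minimal, hypothetical Python sketch of how a coordinator agent could distribute a query among topic agents that each rank their own case base with a simple similarity score. All class, attribute, and function names here are illustrative assumptions and are not taken from the actual prototype or from a real CBR engine.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """A stored manufacturing problem case (illustrative fields only)."""
    component: str
    function: str
    failure: str
    context: dict            # e.g. {"process": "welding", "material": "steel"}
    containment_action: str
    root_cause: str

@dataclass
class TopicAgent:
    """An agent that is expert for one process/plant and owns its case base."""
    name: str
    case_base: list = field(default_factory=list)

    def similarity(self, query: Case, case: Case) -> float:
        # Toy similarity: equally weighted attribute matches (a real CBR engine
        # would use taxonomies and per-attribute local similarity measures).
        score = sum([
            query.component == case.component,
            query.function == case.function,
            query.failure == case.failure,
        ])
        score += len(set(query.context.items()) & set(case.context.items()))
        return float(score)

    def retrieve(self, query: Case, k: int = 10) -> list:
        ranked = sorted(self.case_base,
                        key=lambda c: self.similarity(query, c),
                        reverse=True)
        return ranked[:k]

def coordinator_query(agents: list, query: Case, k: int = 10) -> list:
    """Distribute the global query and merge the agents' best proposals."""
    proposals = []
    for agent in agents:
        for case in agent.retrieve(query, k):
            proposals.append((agent.similarity(query, case), agent.name, case))
    proposals.sort(key=lambda t: t[0], reverse=True)
    return proposals[:k]   # e.g. the 10 most similar proposals shown to the user
```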
[]
[]
Computational and Structural Biotechnology Journal
30181840
PMC6120721
10.1016/j.csbj.2018.08.002
A Blockchain-Based Notarization Service for Biomedical Knowledge Retrieval
Biomedical research and clinical decisions depend increasingly on scientific evidence realized by a number of authoritative databases, mostly public and continually enriched via peer scientific contributions. Given the dynamic nature of biomedical evidence data and their usage in the sensitive domain of biomedical science, it is important to ensure retrieved data integrity and non-repudiation. In this work, we present a blockchain-based notarization service that uses smart digital contracts to seal a biomedical database query and the respective results. The goal is to ensure that retrieved data cannot be modified after retrieval and that the database cannot validly deny that the particular data has been provided as a result of a specific query. Biomedical evidence data versioning is also supported. The feasibility of the proposed notarization approach is demonstrated using a real blockchain infrastructure and is tested on two different biomedical evidence databases: a publicly available medical risk factor reference repository and the PubMed database of biomedical literature references and abstracts.
2 Background & Related Work

2.1 Integrity and Non-repudiation

Data integrity and non-repudiation are well studied topics. A recent survey paper [10] lists and compares different existing methods to achieve integrity, authenticity, non-repudiation and proof of existence. Furthermore, the authors of the systematic review [11] provide a comprehensive and structured overview of security requirements and solutions in the area of cloud computing. Accordingly, some other interesting survey papers, in the fields of distributed large-scale data processing in MapReduce [12] and vehicular ad hoc networks (VANETs) [13], review the current security and privacy aspects of these technologies.

Commonly used methods to ensure data integrity are to back up the data, to employ checksum techniques or to use cryptographic hash functions [14]. The most common method uses cryptographic hash functions, which take arbitrary-length data as input and produce a fixed-size sequence of bits as output. These are one-way functions, i.e., it is computationally infeasible to compute the input from the output, and they are deterministic, i.e., a specific input always provides the same output, and a slight change of the input results in a completely different output. Thus, to ensure the integrity of a message, a cryptographic hash function is used to compute the hash value of the message. At a later time, the integrity of the message can be checked by comparing the initial, stored hash value with the hash value that is provided by the same cryptographic hash function on the alleged message.

One of the most common techniques to deal with non-repudiation is digital signatures [14], the analogue of a handwritten or manual signature. The sender signs the message or the hash value of the message that is produced by a cryptographically secure hash function. Digital signatures are implemented using asymmetric cryptography, which uses a public-private pair of keys. To ensure non-repudiation, the sender signs the message with the private key and the receiver uses the sender's public key to validate this signature. Assuming that the private key is kept secret, it is computationally infeasible for any third party to alter the signed message without invalidating the signature. A problem that occurs is that when someone uses the public key to validate the signature of a message, there is no way to ensure that the public key belongs to a specific identity. For this purpose, a trusted third party (usually a certification authority) is required to certify that a specific public key belongs to a specific person. Consequently, digital signatures can be used to ensure non-repudiation.

Although there is a large amount of work on data integrity and non-repudiation, the advent of blockchain infrastructures and especially the recent emergence of smart contract technology opens new perspectives. The existing methods for data integrity and non-repudiation can be combined with the features of blockchains, such as robustness, traceability and cost-effectiveness, as well as their decentralized applications (Dapps).

2.2 Blockchain Technology

Blockchain is a distributed, incorruptible transaction management technology without one single trusted party. The first blockchain was proposed for and implemented in Bitcoin [15], a distributed infrastructure where users can make financial transactions without the need of a regulator (e.g. a bank).
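Before continuing with blockchain specifics, the two primitives of Section 2.1 can be illustrated with a short Python sketch: a hash for integrity checking and a digital signature for non-repudiation. This is only a minimal example, assuming the third-party pyca/cryptography package is available; it is not part of the proposed notarization service, and the query string is hypothetical.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

message = b"SELECT * FROM risk_factors WHERE factor = 'smoking'"  # hypothetical query

# Integrity: a deterministic fingerprint of the message.
digest = hashlib.sha256(message).hexdigest()
assert digest == hashlib.sha256(message).hexdigest()  # same input, same hash

# Non-repudiation: the data provider signs the hash of the message with its private key.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(hashlib.sha256(message).digest())

# Anyone holding the corresponding public key can verify who produced the signature.
public_key = private_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(message).digest())
    print("signature valid, message digest:", digest)
except InvalidSignature:
    print("signature invalid or message altered")
```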
Nowadays, other blockchain infrastructures are emerging, for example Ethereum [7], where everyone can participate in the blockchain generation, and Hyperledger Fabric [16], where only approved parties can post to the blockchain.

In a blockchain, each new transaction is broadcast to a distributed network of nodes; once all nodes agree that the transaction is valid, the transaction is added to a block. Every block contains a timestamp, the hash of the previous block and the transaction data, thus creating an immutable, append-only chain. Copies of the entire blockchain are maintained by each participating node.

Some blockchain infrastructures, like Ethereum, support smart contracts, which are immutable computer code running on top of a blockchain. The functions within a contract can be invoked in the context of blockchain transactions.

An abstract overview of the implementation of a blockchain and its blocks is shown in Fig. 1. Within each block, transaction data are coded into hash trees (Merkle Patricia trees [17]) that have a ‘root hash’ referring to the entire tree; leaf nodes (shown with a square symbol in Fig. 1) correspond to data blocks, while non-leaf nodes (shown with a circle symbol in Fig. 1) correspond to cryptographic hashes of the child nodes. Data on the contract is held within each leaf node; this includes another hash tree that stores contract data (‘Storage Root’), the hash of the contract code (‘Code Hash’), the number of transactions sent from the contract (‘Nonce’), and the financial balance (‘Balance’). When there is a change in a contract, the hash tree only stores this change and simply points back to the previous tree for all other contract data.

Fig. 1. An abstract overview of the implementation of a blockchain and its smart contracts inside the blocks.

Blockchain infrastructures charge for each transaction a fee proportional to the computational burden that the execution will impose on the blockchain. This fuel is known as ‘gas’.

A recent systematic review on the current state, limitations and open research on blockchain technology [18] discusses a number of blockchain applications that extend from cryptocurrency to the Internet of Things, smart contracts, smart property, digital content distribution, botnets, and P2P broadcast protocols.

2.3 Blockchain Applications in the Biomedical Domain

Currently, there is considerable optimism that blockchain technology will revolutionize the healthcare industry [19], and there are review articles that thoroughly describe the advantages and challenges of using blockchain technologies in the biomedical domain ([20,21]). Major advantages include [20] the ability to support (a) decentralized data management (e.g. when different healthcare stakeholders need to access patient data); (b) immutable audit trails, where medical data can only be read and appended, preventing tampering; (c) data provenance, where the origins of the data are traceable, e.g. in the case of patient consent; (d) robustness and availability, highly important for life-critical medical data; and (e) security and privacy.

A major application of blockchain technology in the biomedical domain is the field of electronic health records (EHR), which consist of fragments of clinical data related to the patient as generated and maintained by healthcare providers. Such applications include the use of blockchain technologies for EHR integration [22], sharing and access control [[23], [24], [25]], preservation [26] and overall management [27,28].
Other application areas on patient data address personal data and services; in particular, personal health records generated and maintained by the patient [29], and mobile or other personal eHealth applications [30]. Another interesting field is healthcare services logistics, including medical insurance transactions [31] and drug supply [32]. Furthermore, blockchain technology can also be applied in clinical trial management, with emphasis on participant consent management [33] and privacy preservation [34]. To the best of our knowledge, there is no other work exploiting blockchain technology for managing biomedical evidence data integrity and non-repudiation, other than the preliminary presentation of the proof of concept [6] of the solution described in this paper.
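To illustrate, at a very high level, how a query and its results could be sealed into an append-only chain of records, here is a toy Python sketch. It is a simplified stand-in for the smart-contract-based service described in the paper: there is no real blockchain, consensus, or gas accounting here, and all names and example data are illustrative assumptions.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ToyNotaryChain:
    """A minimal hash-chained log of (query, results) seals."""

    def __init__(self):
        self.blocks = []

    def notarize(self, query: str, results: list) -> dict:
        payload_hash = sha256_hex(json.dumps({"query": query, "results": results},
                                             sort_keys=True).encode())
        prev_hash = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        header = {"timestamp": time.time(),
                  "prev_hash": prev_hash,
                  "payload_hash": payload_hash}
        block = dict(header,
                     block_hash=sha256_hex(json.dumps(header, sort_keys=True).encode()))
        self.blocks.append(block)
        return block

    def verify(self, query: str, results: list, block: dict) -> bool:
        """Check that the retrieved data still matches what was sealed earlier."""
        payload_hash = sha256_hex(json.dumps({"query": query, "results": results},
                                             sort_keys=True).encode())
        return payload_hash == block["payload_hash"]

chain = ToyNotaryChain()
seal = chain.notarize("risk factors for stroke", ["hypertension", "smoking"])
print(chain.verify("risk factors for stroke", ["hypertension", "smoking"], seal))  # True
print(chain.verify("risk factors for stroke", ["hypertension"], seal))             # False
```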
[ "20307864", "19418229", "17884971", "29016974", "30108685", "28545835" ]
[ { "pmid": "20307864", "title": "Uses and limitations of registry and academic databases.", "abstract": "A database is simply a structured collection of information. A clinical database may be a Registry (a limited amount of data for every patient undergoing heart surgery) or Academic (an organized and extensive dataset of an inception cohort of carefully selected subset of patients). A registry and an academic database have different purposes and cost. The data to be collected for a database is defined by its purpose and the output reports required for achieving that purpose. A Registry's purpose is to ensure quality care, an Academic Database, to discover new knowledge through research. A database is only as good as the data it contains. Database personnel must be exceptionally committed and supported by clinical faculty. A system to routinely validate and verify data integrity is essential to ensure database utility. Frequent use of the database improves its accuracy. For congenital heart surgeons, routine use of a Registry Database is an essential component of clinical practice." }, { "pmid": "19418229", "title": "Database resources in metabolomics: an overview.", "abstract": "Metabolomics is the characterization, identification, and quantitation of metabolites resulting from a wide range of biochemical processes in living systems. Its rapid development over the past few years has increased the demands for bioinformatics and cheminformatics resources that span from data processing tools, comprehensive databases, statistical tools, and computational tools for modeling metabolic networks. With the wealth of information that is being amassed, new types of metabolomic databases are emerging that are not only designed to store, manage, and analyze metabolomic data but are also designed to serve as gateways to the vast information space of metabolism in living systems. At present, metabolomics is underpinned by a number of freely and commercially available databases that provide information on the chemical structures, physicochemical and pharmacological properties, spectral profiles, experimental workflows, and biological functions of metabolites. This review provides an overview of the recent progress in databases employed in metabolomics." }, { "pmid": "17884971", "title": "Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses.", "abstract": "The evolution of the electronic age has led to the development of numerous medical databases on the World Wide Web, offering search facilities on a particular subject and the ability to perform citation analysis. We compared the content coverage and practical utility of PubMed, Scopus, Web of Science, and Google Scholar. The official Web pages of the databases were used to extract information on the range of journals covered, search facilities and restrictions, and update frequency. We used the example of a keyword search to evaluate the usefulness of these databases in biomedical information retrieval and a specific published article to evaluate their utility in performing citation analysis. All databases were practical in use and offered numerous search facilities. PubMed and Google Scholar are accessed for free. The keyword search with PubMed offers optimal update frequency and includes online early articles; other databases can rate articles by number of citations, as an index of importance. 
For citation analysis, Scopus offers about 20% more coverage than Web of Science, whereas Google Scholar offers results of inconsistent accuracy. PubMed remains an optimal tool in biomedical electronic research. Scopus covers a wider journal range, of help both in keyword searching and citation analysis, but it is currently limited to recent articles (published after 1995) compared with Web of Science. Google Scholar, as for the Web in general, can help in the retrieval of even the most obscure information but its use is marred by inadequate, less often updated, citation information." }, { "pmid": "29016974", "title": "Blockchain distributed ledger technologies for biomedical and health care applications.", "abstract": "OBJECTIVES\nTo introduce blockchain technologies, including their benefits, pitfalls, and the latest applications, to the biomedical and health care domains.\n\n\nTARGET AUDIENCE\nBiomedical and health care informatics researchers who would like to learn about blockchain technologies and their applications in the biomedical/health care domains.\n\n\nSCOPE\nThe covered topics include: (1) introduction to the famous Bitcoin crypto-currency and the underlying blockchain technology; (2) features of blockchain; (3) review of alternative blockchain technologies; (4) emerging nonfinancial distributed ledger technologies and applications; (5) benefits of blockchain for biomedical/health care applications when compared to traditional distributed databases; (6) overview of the latest biomedical/health care applications of blockchain technologies; and (7) discussion of the potential challenges and proposed solutions of adopting blockchain technologies in biomedical/health care domains." }, { "pmid": "30108685", "title": "FHIRChain: Applying Blockchain to Securely and Scalably Share Clinical Data.", "abstract": "Secure and scalable data sharing is essential for collaborative clinical decision making. Conventional clinical data efforts are often siloed, however, which creates barriers to efficient information exchange and impedes effective treatment decision made for patients. This paper provides four contributions to the study of applying blockchain technology to clinical data sharing in the context of technical requirements defined in the \"Shared Nationwide Interoperability Roadmap\" from the Office of the National Coordinator for Health Information Technology (ONC). First, we analyze the ONC requirements and their implications for blockchain-based systems. Second, we present FHIRChain, which is a blockchain-based architecture designed to meet ONC requirements by encapsulating the HL7 Fast Healthcare Interoperability Resources (FHIR) standard for shared clinical data. Third, we demonstrate a FHIRChain-based decentralized app using digital health identities to authenticate participants in a case study of collaborative decision making for remote cancer care. Fourth, we highlight key lessons learned from our case study." }, { "pmid": "28545835", "title": "OmniPHR: A distributed architecture model to integrate personal health records.", "abstract": "The advances in the Information and Communications Technology (ICT) brought many benefits to the healthcare area, specially to digital storage of patients' health records. However, it is still a challenge to have a unified viewpoint of patients' health history, because typically health data is scattered among different health organizations. Furthermore, there are several standards for these records, some of them open and others proprietary. 
Usually health records are stored in databases within health organizations and rarely have external access. This situation applies mainly to cases where patients' data are maintained by healthcare providers, known as EHRs (Electronic Health Records). In case of PHRs (Personal Health Records), in which patients by definition can manage their health records, they usually have no control over their data stored in healthcare providers' databases. Thereby, we envision two main challenges regarding PHR context: first, how patients could have a unified view of their scattered health records, and second, how healthcare providers can access up-to-date data regarding their patients, even though changes occurred elsewhere. For addressing these issues, this work proposes a model named OmniPHR, a distributed model to integrate PHRs, for patients and healthcare providers use. The scientific contribution is to propose an architecture model to support a distributed PHR, where patients can maintain their health history in an unified viewpoint, from any device anywhere. Likewise, for healthcare providers, the possibility of having their patients data interconnected among health organizations. The evaluation demonstrates the feasibility of the model in maintaining health records distributed in an architecture model that promotes a unified view of PHR with elasticity and scalability of the solution." } ]
BMC Medical Informatics and Decision Making
30180839
PMC6124014
10.1186/s12911-018-0658-y
Data to diagnosis in global health: a 3P approach
BackgroundWith connected medical devices fast becoming ubiquitous in healthcare monitoring there is a deluge of data coming from multiple body-attached sensors. Transforming this flood of data into effective and efficient diagnosis is a major challenge.MethodsTo address this challenge, we present a 3P approach: personalized patient monitoring, precision diagnostics, and preventive criticality alerts. In a collaborative work with doctors, we present the design, development, and testing of a healthcare data analytics and communication framework that we call RASPRO (Rapid Active Summarization for effective PROgnosis). The heart of RASPRO is Physician Assist Filters (PAF) that transform unwieldy multi-sensor time series data into summarized patient/disease specific trends in steps of progressive precision as demanded by the doctor for patient’s personalized condition at hand and help in identifying and subsequently predictively alerting the onset of critical conditions. The output of PAFs is a clinically useful, yet extremely succinct summary of a patient’s medical condition, represented as a motif, which could be sent to remote doctors even over SMS, reducing the need for data bandwidths. We evaluate the clinical validity of these techniques using SVM machine learning models measuring both the predictive power and its ability to classify disease condition. We used more than 16,000 min of patient data (N=70) from the openly available MIMIC II database for conducting these experiments. Furthermore, we also report the clinical utility of the system through doctor feedback from a large super-speciality hospital in India.ResultsThe results show that the RASPRO motifs perform as well as (and in many cases better than) raw time series data. In addition, we also see improvement in diagnostic performance using optimized sensor severity threshold ranges set using the personalization PAF severity quantizer.ConclusionThe RASPRO-PAF system and the associated techniques are found to be useful in many healthcare applications, especially in remote patient monitoring. The personalization, precision, and prevention PAFs presented in the paper successfully shows remarkable performance in satisfying the goals of 3Ps, thereby providing the advantages of three A’s: availability, affordability, and accessibility in the global health scenario.Electronic supplementary materialThe online version of this article (10.1186/s12911-018-0658-y) contains supplementary material, which is available to authorized users.
Related work

We begin by analyzing the existing systems that simply generate alerts every time one or more sensors cross the abnormality thresholds. Due to the humongous volume of such alerts, they are difficult to manage even in the case of hospital in-patient settings, let alone for a much larger number of remotely monitored patients. From some of the initial attempts reported in [2] to more recent works such as [3–5] and [6], severity detection and alert generation are typically based either on predefined thresholds, or on training of thresholds using machine learning followed by online classification of multi-sensor data. Very similar machine learning techniques have also been used in fall detection [7, 8]. Hristoskova et al. [9] propose another system wherein patient conditions are mapped to medical conditions using ontology-driven methods, and alerts are generated based on the corresponding risk stratification. Even though there has been noticeable success in the detection and diagnosis of specific disease conditions, most of these works have not explored the opportunity for personalized and precision diagnosis. In an extensive review of Big Data for Health, Andreu-Perez et al. [10] specifically emphasize the opportunity for stratified patient management and personalized health diagnostics, citing examples of customized blood pressure management [11]. More specifically, Bates et al. [12] discuss the utility of using analytics to predict adverse events, which could reduce the associated morbidity and mortality rates. Furthermore, Bates et al. [12] argue that patient data analytics based on early information supplied to the hospital prior to admission can result in better management of staffing and other hospital resources. One of the recent works in personalized criticality detection is reported in [13], which proposes an analytical unit in which the Improved Particle Swarm Optimization (IPSO) algorithm is used to arrive at patient-specific threat ranges. To improve precision in diagnosis, we also need to arrive at a balance between completely automated systems on one hand and physician-assist systems on the other. Celler et al. [14] propose a balanced approach wherein sophisticated analytics are presented to physicians, who in turn identify the changes and decide on the diagnosis. This is also supported by many results, including that reported in [6], wherein a domain-knowledge-based method performed as well as other trained machine learning models. These arguments and results provide further impetus for personalized, precision, and preventive diagnostic techniques that are amenable to physician interventions.
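As a rough illustration of the threshold-based severity detection discussed above, the following Python sketch quantizes a vital-sign reading into severity levels using personalized threshold ranges and flags an alert when a severity level is exceeded. The parameter names and cutoff values are hypothetical and are not taken from RASPRO or from any of the cited systems.

```python
from bisect import bisect_right

def severity_level(value: float, cutoffs: list) -> int:
    """Map a reading to a severity level 0..len(cutoffs) using sorted cutoffs."""
    return bisect_right(cutoffs, value)

# Hypothetical, patient-specific cutoffs for heart rate (beats per minute):
# <=100 -> level 0 (normal), <=120 -> level 1, <=140 -> level 2, >140 -> level 3.
hr_cutoffs = [100, 120, 140]

def alert_stream(readings, cutoffs, alert_level=2):
    """Yield (reading, severity level, alert flag) for a stream of sensor readings."""
    for value in readings:
        level = severity_level(value, cutoffs)
        yield value, level, level >= alert_level

for value, level, alert in alert_stream([88, 112, 133, 151], hr_cutoffs):
    print(f"HR={value:5.1f}  severity={level}  alert={alert}")
```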
[ "15615032", "24445411", "26173222", "24158470", "25163076" ]
[ { "pmid": "15615032", "title": "AMON: a wearable multiparameter medical monitoring and alert system.", "abstract": "This paper describes an advanced care and alert portable telemedical monitor (AMON), a wearable medical monitoring and alert system targeting high-risk cardiac/respiratory patients. The system includes continuous collection and evaluation of multiple vital signs, intelligent multiparameter medical emergency detection, and a cellular connection to a medical center. By integrating the whole system in an unobtrusive, wrist-worn enclosure and applying aggressive low-power design techniques, continuous long-term monitoring can be performed without interfering with the patients' everyday activities and without restricting their mobility. In the first two and a half years of this EU IST sponsored project, the AMON consortium has designed, implemented, and tested the described wrist-worn device, a communication link, and a comprehensive medical center software package. The performance of the system has been validated by a medical study with a set of 33 subjects. The paper describes the main concepts behind the AMON system and presents details of the individual subsystems and solutions as well as the results of the medical validation." }, { "pmid": "24445411", "title": "Ontology-driven monitoring of patient's vital signs enabling personalized medical detection and alert.", "abstract": "A major challenge related to caring for patients with chronic conditions is the early detection of exacerbations of the disease. Medical personnel should be contacted immediately in order to intervene in time before an acute state is reached, ensuring patient safety. This paper proposes an approach to an ambient intelligence (AmI) framework supporting real-time remote monitoring of patients diagnosed with congestive heart failure (CHF). Its novelty is the integration of: (i) personalized monitoring of the patients health status and risk stage; (ii) intelligent alerting of the dedicated physician through the construction of medical workflows on-the-fly; and (iii) dynamic adaptation of the vital signs' monitoring environment on any available device or smart phone located in close proximity to the physician depending on new medical measurements, additional disease specifications or the failure of the infrastructure. The intelligence lies in the adoption of semantics providing for a personalized and automated emergency alerting that smoothly interacts with the physician, regardless of his location, ensuring timely intervention during an emergency. It is evaluated on a medical emergency scenario, where in the case of exceeded patient thresholds, medical personnel are localized and contacted, presenting ad hoc information on the patient's condition on the most suited device within the physician's reach." }, { "pmid": "26173222", "title": "Big data for health.", "abstract": "This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. 
It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled." }, { "pmid": "24158470", "title": "Attenuation of systolic blood pressure and pulse transit time hysteresis during exercise and recovery in cardiovascular patients.", "abstract": "Pulse transit time (PTT) is a cardiovascular parameter of emerging interest due to its potential to estimate blood pressure (BP) continuously and without a cuff. Both linear and nonlinear equations have been used in the estimation of BP based on PTT. This study, however, demonstrates that there is a hysteresis phenomenon between BP and PTT during and after dynamic exercise. A total of 46 subjects including 16 healthy subjects, 13 subjects with one or more cardiovascular risk factors, and 17 patients with cardiovascular disease underwent graded exercise stress test. PTT was measured from electrocardiogram and photoplethysmogram of the left index finger of the subject, i.e., a pathway that includes predominately aorta, brachial, and radial arteries. The results of this study showed that, for the same systolic BP (SBP), PTT measured during exercise was significantly larger than PTT measured during recovery for all subject groups. This hysteresis was further quantified as both normalized area bounded by the SBP-PTT relationship (AreaN) and SBP difference at PTT during peak exercise plus 20 ms (ΔSBP20). Significant attenuation of both AreaN (p <; 0.05) and ΔSBP20 (p <; 0.01) is observed in cardiovascular patients compared with healthy subjects, independent of resting BP. Since the SBP-PTT relationship are determined by the mechanical properties of arterial wall, which is predominately mediated by the sympathetic nervous system through altered vascular smooth muscle (VSM) tone during exercise, results of this study are consistent with the previous findings of autonomic nervous dysfunction in cardiovascular patients. We further conclude that VSM tone has a nonnegligible influence on the BP-PTT relationship and thus should be considered in the PTT-based BP estimation." }, { "pmid": "25163076", "title": "Home telemonitoring of vital signs--technical challenges and future directions.", "abstract": "The telemonitoring of vital signs from the home is an essential element of telehealth services for the management of patients with chronic conditions, such as congestive heart failure (CHF), chronic obstructive pulmonary disease (COPD), diabetes, or poorly controlled hypertension. Telehealth is now being deployed widely in both rural and urban settings, and in this paper, we discuss the contribution made by biomedical instrumentation, user interfaces, and automated risk stratification algorithms in developing a clinical diagnostic quality longitudinal health record at home. We identify technical challenges in the acquisition of high-quality biometric signals from unsupervised patients at home, identify new technical solutions and user interfaces, and propose new measurement modalities and signal processing techniques for increasing the quality and value of vital signs monitoring at home. 
We also discuss use of vital signs data for the automated risk stratification of patients, so that clinical resources can be targeted to those most at risk of unscheduled admission to hospital. New research is also proposed to integrate primary care, hospital, personal genomic, and telehealth electronic health records, and apply predictive analytics and data mining for enhancing clinical decision support." } ]
Royal Society Open Science
30225004
PMC6124062
10.1098/rsos.180089
Scalable funding of Bitcoin micropayment channel networks
The Bitcoin network has scalability problems. To increase its transaction rate and speed, micropayment channel networks have been proposed; however, these require to lock funds into specific channels. Moreover, the available space in the blockchain does not allow scaling to a worldwide payment system. We propose a new layer that sits in between the blockchain and the payment channels. The new layer addresses the scalability problem by enabling trustless off-blockchain channel funding. It consists of shared accounts of groups of nodes that flexibly create one-to-one channels for the payment network. The new system allows rapid changes of the allocation of funds to channels and reduces the cost of opening new channels. Instead of one blockchain transaction per channel, each user only needs one transaction to enter a group of nodes—within the group the user can create arbitrarily many channels. For a group of 20 users with 100 intra-group channels, the cost of the blockchain transactions is reduced by 90% compared to 100 regular micropayment channels opened on the blockchain. This can be increased further to 96% if Bitcoin introduces Schnorr signatures with signature aggregation.
5. Related work

The need for scalability is well understood. Apart from simply changing the parameters [13,14], the efficiency of the original Bitcoin protocol still offers space for improvement [15–19].

Increasing the transaction speed without payment networks has been investigated. It was shown that double spending is easily achievable without doing any mining if the receiver does not wait for any confirmation blocks after a transaction [20,21].

Some work has been done to introduce sharding for cryptocurrencies [22–24]. If the validation of transactions could be securely distributed and every node only had to process a part of all transactions, the transaction rate could scale linearly with the number of nodes. One especially interesting approach, called Plasma [25], has been published after our submission of the conference version of this work. Plasma has the property that the members of a shard are the same people that care about its contents, similar to payment channels. Indeed, one could also interpret payment channels as interest-based shards of a blockchain. Plasma also introduces trees of blockchains, splitting interest groups into smaller subgroups. The same hierarchical structure has been introduced to payment channels with this work!

5.1. Payment networks

Solutions to find routes through a payment network in a scalable and decentralized way have been proposed, based on central hubs [26], rotating global beacons [27], personal beacons, where overlaps between sender and receiver provide paths [28], or combinations of multiple schemes [29].

A known way to rebalance channels in a payment network is cyclic transactions, shown in figure 14. The idea originated in private communication between the developers of the Lightning Network.

Figure 14. Rebalancing a cycle of channels that have become one-sided. The channels between A, B and C have been heavily used in one direction, e.g. external transactions being routed counterclockwise. As a result, one direction of each channel cannot be used any more due to insufficient funds. An atomic cyclic transfer, shown by the red arrows, can turn the three channels usable again. The transaction does not change the total stake of any involved party.

While cyclic rebalancing allows us to reset channels which have run out of funds, it has limitations. If the amount of funds running through a specific edge has been estimated wrongly at funding time, or changes over time, rebalancing might become necessary frequently. This slows down transactions, which have to wait for the rebalancing to finish. Our solution with channel factories allows moving the locked-in funds to a different channel to solve the problem for a longer time.
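The cyclic rebalancing idea of figure 14 can be made concrete with a small sketch: each directional balance along the cycle A→B→C→A is shifted by the same amount, so every channel regains capacity in its depleted direction while each party's total stake stays unchanged. The following Python example is illustrative only, with hypothetical balance values; it is not related to any actual Lightning or channel-factory implementation.

```python
# Directional balances of three payment channels (hypothetical amounts).
# channels[(u, v)] is the amount u can still send to v on the channel between u and v.
channels = {
    ("A", "B"): 95, ("B", "A"): 5,
    ("B", "C"): 95, ("C", "B"): 5,
    ("C", "A"): 95, ("A", "C"): 5,
}

def cyclic_rebalance(channels, cycle, amount):
    """Atomically shift `amount` along the cycle, e.g. ("A", "B", "C")."""
    hops = list(zip(cycle, cycle[1:] + cycle[:1]))
    # Feasibility check: every hop must have enough outgoing capacity.
    if any(channels[(u, v)] < amount for u, v in hops):
        raise ValueError("insufficient capacity for this cyclic transfer")
    for u, v in hops:
        channels[(u, v)] -= amount   # sender side of this channel shrinks
        channels[(v, u)] += amount   # receiver side grows by the same amount
    return channels

def stake(channels, party):
    """Total funds a party holds across all its channels."""
    return sum(bal for (u, _), bal in channels.items() if u == party)

before = {p: stake(channels, p) for p in "ABC"}
cyclic_rebalance(channels, ("A", "B", "C"), 45)
after = {p: stake(channels, p) for p in "ABC"}
assert before == after        # no party's total stake changed
print(channels)               # every direction is usable again
```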
[]
[]
Frontiers in Plant Science
30210509
PMC6124392
10.3389/fpls.2018.01162
High-Performance Deep Neural Network-Based Tomato Plant Diseases and Pests Diagnosis System With Refinement Filter Bank
A fundamental problem that confronts deep neural networks is the requirement of a large amount of data for a system to be efficient in complex applications. Promising results of this problem are made possible through the use of techniques such as data augmentation or transfer learning of pre-trained models in large datasets. But the problem still persists when the application provides limited or unbalanced data. In addition, the number of false positives resulting from training a deep model significantly cause a negative impact on the performance of the system. This study aims to address the problem of false positives and class unbalance by implementing a Refinement Filter Bank framework for Tomato Plant Diseases and Pests Recognition. The system consists of three main units: First, a Primary Diagnosis Unit (Bounding Box Generator) generates the bounding boxes that contain the location of the infected area and class. The promising boxes belonging to each class are then used as input to a Secondary Diagnosis Unit (CNN Filter Bank) for verification. In this second unit, misclassified samples are filtered through the training of independent CNN classifiers for each class. The result of the CNN Filter Bank is a decision of whether a target belongs to the category as it was detected (True) or not (False) otherwise. Finally, an integration unit combines the information from the primary and secondary units while keeping the True Positive samples and eliminating the False Positives that were misclassified in the first unit. By this implementation, the proposed approach is able to obtain a recognition rate of approximately 96%, which represents an improvement of 13% compared to our previous work in the complex task of tomato diseases and pest recognition. Furthermore, our system is able to deal with the false positives generated by the bounding box generator, and class unbalances that appear especially on datasets with limited data.
Related works

In this section, we first introduce methods based on neural networks for object detection and recognition. Then, we review some techniques used for detecting anomalies in plants and, finally, investigate advances in false positive reduction.

Image-based object detection and feature extractors

Recent years have seen an explosion of visual media available through the internet. This large volume of data has brought new opportunities and challenges for neural network applications. In the first application of Convolutional Neural Networks (CNNs) to the image classification task of the ImageNet Large Scale Visual Recognition Competition 2012 (ILSVRC-2012) (Russakovsky et al., 2015), AlexNet (Krizhevsky et al., 2012), a CNN composed of 8 layers, demonstrated outstanding performance compared to traditional handcrafted computer vision algorithms (Russakovsky et al., 2015). Consequently, in the last few years, several deep neural network architectures have been proposed with the goal of improving the accuracy on the same task.

Object detection and recognition have become important topics in recent years. In the case of detecting particular categories, earlier applications focused on classification from object-centric images (Russakovsky et al., 2012), where the goal is to classify an image that likely contains an object. However, the new dominant paradigm is not only to classify but also to precisely localize objects in the image (Szegedy et al., 2013). Consequently, current state-of-the-art methods for object detection are mainly based on deep CNNs (Russakovsky et al., 2015). They have been categorized into two types: two-stage and one-stage methods. Two-stage methods are commonly related to Region-based Convolutional Neural Networks, such as Faster R-CNN (Ren et al., 2016) and the Region-based Fully Convolutional Network (R-FCN) (Dai et al., 2016). In these frameworks, a Region Proposal Network (RPN) generates a set of candidate object locations in the first stage, and the second stage classifies each candidate location as one of the classes or background using a CNN. A deep network generates the features that are subsequently used by the RPN to extract the proposals. In addition to systems based on region proposals, one-stage frameworks have also been proposed for object detection. Most recently, SSD (Liu et al., 2016), YOLO (Redmon et al., 2015) and YOLO V2 (Redmon and Farhadi, 2017) have demonstrated promising results, yielding real-time detectors with accuracy similar to two-stage detectors.

Over the last few years, it has also been demonstrated that deeper neural networks achieve higher performance compared to simpler models in the task of image classification (Russakovsky et al., 2015). However, along with the significant performance improvement, the complexity of deep architectures has also increased, with networks such as VGG (Simonyan and Zisserman, 2014), ResNet (He et al., 2016), GoogLeNet (Szegedy et al., 2015), ResNeXt (Xie et al., 2017), DenseNet (Huang et al., 2017), Dual Path Net (Chen et al., 2017) and SENet (Hu et al., 2017), among others. As a result, deep artificial neural networks often have far more trainable model parameters than the number of samples they are trained on (Zhang et al., 2017). Despite using large datasets, neural networks are prone to overfitting (Pereyra et al., 2017). On the other hand, several strategies have been applied to improve performance in deep neural networks.
Examples include data augmentation to increase the number of samples (Bloice et al., 2017), weight regularization to reduce model overfitting (Van-Laarhoven, 2017), randomly dropping activations with Dropout (Srivastava et al., 2014), and batch normalization (Ioffe and Szegedy, 2015). Although these strategies have proven to be effective in large networks, the lack of data and class unbalance problems of several applications are still a challenge to deal with. There is not yet a certain way of understanding the complexity of artificial neural networks for their application to any problem. Hence the importance of developing strategies designed specifically for applications with limited data and class unbalance issues. In addition, depending on the complexity of the application, the challenge nowadays is to design deep learning methods that can perform a complex task while maintaining a lower computational cost.

Anomaly detection in plants

The problem of plant diseases is an important issue that is directly related to food safety and people's well-being. Diseases and pests affect food crops, which in turn causes significant losses in the farmers' economy. The effects of diseases on plants have become a challenging problem in terms of crop protection and production of healthy food. Traditional methods for the identification and diagnosis of plant diseases depend mainly on the visual analysis of an expert in the area, or on a study in the laboratory. These studies generally require high professional knowledge of the field, besides carrying the risk of failing to diagnose specific diseases successfully, which consequently leads to erroneous conclusions and treatments (Ferentinos, 2018). Under those circumstances, to obtain a fast and accurate decision, an automatic system would offer highly efficient support to identify diseases and pests of infected plants (Mohanty et al., 2016; Fuentes et al., 2017b). Recent advances in computational technology, in particular Graphics Processing Units (GPUs), have led to the development of new image-based technology, such as highly efficient deep neural networks. The application of deep learning has also been extended to the area of precision agriculture, where it has shown satisfactory performance when dealing with complex problems in real time. Some applications include the study of disease identification in several crops, such as tomato (Fuentes et al., 2017b), apple (Liu et al., 2018), banana (Amara et al., 2017), wheat (Sankaran et al., 2010), and cucumber (Kawasaki et al., 2015).

CNN-based methods constitute a powerful tool that has been used as a feature extractor in several works. Mohanty et al. (2016) compare two CNN architectures, AlexNet and GoogLeNet, to identify 14 crop species and 26 diseases using a large database of diseased and healthy plants. Their results show a system that is able to efficiently classify images that contain a particular disease in a crop using transfer learning. However, the drawback of this work is that its analysis is only based on images collected in the laboratory, not in the real field scenario. Therefore, it does not cover all the variations included there. Similarly, Sladojevic et al. (2016) identify 13 types of plant diseases out of healthy leaves with an AlexNet CNN architecture.
Sladojevic et al. used several strategies to avoid overfitting and improve classification accuracy, such as data augmentation to increase the dataset size and fine-tuning to increase efficiency while training the CNN. The system achieved an average accuracy of 96.3%. Recently, Liu et al. (2018) proposed an approach for apple leaf disease identification based on a combination of the AlexNet and GoogLeNet architectures. Using a dataset of images collected in the laboratory, the system is trained to identify four types of apple leaf diseases with an overall accuracy of 97.62%. Ferentinos (2018) evaluates various CNN models to detect and diagnose plant diseases using leaf images of healthy and infected plants. The system is able to classify 58 distinct plant/disease combinations from 25 different plants. In addition, the experimental results offer an interesting comparison between images collected in the laboratory and images collected in the field. Promising results are presented for both types of images, with the best accuracy of 99.53% given by a VGG network. However, the success rate is significantly lower when images collected in the field, rather than laboratory images, are used for testing. According to the author, this demonstrates that image classification under real field conditions is much more difficult and complex than classification of images collected in the laboratory.Although the works mentioned above show promising results in the task of plant disease identification, challenges such as complex field conditions, variation of the infection, multiple pathologies in the same image, and surrounding objects are not investigated. They mainly use images collected in the laboratory and therefore do not deal with all the conditions present in a real scenario. Furthermore, they are disease classification methods rather than detectors.In contrast, Fuentes et al. (2017b) presented a system that successfully detects and localizes 9 types of diseases and pests of the tomato plant using images collected in the field under real cultivation conditions. That approach differs from the others in that it generates a set of bounding boxes containing the location, size, and class of the diseases and/or pests in the image. The work investigates different meta-architectures and CNN feature extractors to recognize and localize the suspicious areas in the image, reporting a satisfactory performance of 83%. However, the system presents some difficulties that prevent it from reaching a higher performance: due to the lack of samples, some classes with high variability tend to be confused with others, resulting in false positives and lower precision.Following the idea in Fuentes et al. (2017b), our current work aims to address the problems mentioned above and improve their results by focusing on false positives and class unbalance issues. In addition, our approach studies several techniques to make the system more robust against the inter- and intra-class variations of tomato diseases and pests.The problem of false positivesAlthough the efficiency of object detectors has improved as deeper neural networks have been adopted as feature extractors, these detectors do not generalize to all applications. In addition to the complexity of collecting a dataset for a specific purpose, class unbalance has been shown to be a problem when training deep networks for object detection.
Consequently, the number of false positives generated by the network is high, which in turn results in a lower precision rate.In classification problems, the error can arise from many factors. It can be measured in terms of true positives (correct classifications) and true negatives versus false positives (false alarms) and false negatives (misses). In object detection, false positives deserve special attention because they enter the computation of precision: a higher number of false positives yields a lower precision value. Several techniques have therefore been proposed to overcome this issue. For instance, in Sun et al. (2016), the problem of object classification and localization is addressed by Cascade Neural Networks that use a multi-stream, multi-scale architecture without object-level annotations. In this work, a multi-scale network is trained to propose boxes that likely contain objects, and a cascade architecture is then constructed by zooming onto promising boxes and training new classifiers to verify them. Another approach (Yang et al., 2016) proposes a technique based on the concept of divide and conquer. Each task is divided via a cascade structure into proposal generation and object classification. For proposal generation, they add another CNN classifier to distinguish objects from the background given the output of a preceding Region Proposal Network. For the classification task, a binary classifier for each category focuses on false positives caused mainly by inter- and intra-category variance.Hard examples miningIn conventional methods, an important assumption for trading off the error generated by the high number of false positives is mentioned in Viola and Jones (2001): raising the detection threshold yields classifiers with fewer false positives but a lower detection rate, whereas lower thresholds yield classifiers with more false positives and a higher detection rate. However, it is not yet clear whether adjusting a threshold in this way preserves training behavior and helps generalization in deep learning.Recently, the concept of hard example mining has been applied to make the training of neural networks easier and more efficient. In Shrivastava et al. (2016), a technique called "Online Hard Example Mining" (OHEM) aims to improve the training of two-stage CNN detectors by constructing mini-batches from high-loss examples. This technique removes the need for several heuristics and hyperparameters used in Region-based Convolutional Networks by focusing on the hard negative examples. In contrast, the scope of this work is to understand whether a refinement strategy can deal with the false positives generated by an object detection network.The design of our multi-level approach comprises two steps for object detection, with a specific application to tomato disease and pest recognition: Region-based Neural Networks for bounding box generation (Fuentes et al., 2017b), and a CNN filter bank for false positive reduction. We emphasize that although our previous approach (Fuentes et al., 2017b) shows satisfactory performance, the results can be further improved with the techniques proposed in our current approach, which aims to make the system more robust to inter- and intra-class variations.
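The two-step design just outlined can be pictured with the following PyTorch sketch. It is only an illustration under stated assumptions, not the implementation evaluated in this work: a pretrained torchvision Faster R-CNN stands in for the Region-based bounding-box generator, a ResNet-18 with a 10-way head (9 hypothetical classes plus background) stands in for the CNN filter bank, and the input file name is hypothetical.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Step 1: a generic two-stage detector proposes candidate boxes
# (a stand-in for the Region-based bounding-box generator).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
detector.eval()

# Step 2: a placeholder secondary CNN ("filter bank") that re-classifies each
# cropped box as one of 9 hypothetical disease/pest classes or as background (index 9).
filter_cnn = torchvision.models.resnet18(pretrained=True)
filter_cnn.fc = torch.nn.Linear(filter_cnn.fc.in_features, 10)
filter_cnn.eval()

image = Image.open("tomato_leaf.jpg").convert("RGB")   # hypothetical field image
x = to_tensor(image)

kept = []
with torch.no_grad():
    detections = detector([x])[0]                      # dict with boxes, labels, scores
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < 0.5:
            continue
        x0, y0, x1, y1 = [int(v) for v in box.tolist()]
        if x1 <= x0 or y1 <= y0:
            continue
        crop = x[:, y0:y1, x0:x1].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(224, 224))
        cls = filter_cnn(crop).argmax(dim=1).item()
        if cls != 9:                                    # drop boxes the filter calls background
            kept.append((box, cls))
```

On an annotated test set, precision is then TP / (TP + FP), so every false positive that the secondary verification step removes raises precision directly.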
[ "28869539", "27713752", "27295650", "27418923" ]
[ { "pmid": "28869539", "title": "A Robust Deep-Learning-Based Detector for Real-Time Tomato Plant Diseases and Pests Recognition.", "abstract": "Plant Diseases and Pests are a major challenge in the agriculture sector. An accurate and a faster detection of diseases and pests in plants could help to develop an early treatment technique while substantially reducing economic losses. Recent developments in Deep Neural Networks have allowed researchers to drastically improve the accuracy of object detection and recognition systems. In this paper, we present a deep-learning-based approach to detect diseases and pests in tomato plants using images captured in-place by camera devices with various resolutions. Our goal is to find the more suitable deep-learning architecture for our task. Therefore, we consider three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which for the purpose of this work are called \"deep learning meta-architectures\". We combine each of these meta-architectures with \"deep feature extractors\" such as VGG net and Residual Network (ResNet). We demonstrate the performance of deep meta-architectures and feature extractors, and additionally propose a method for local and global class annotation and data augmentation to increase the accuracy and reduce the number of false positives during training. We train and test our systems end-to-end on our large Tomato Diseases and Pests Dataset, which contains challenging images with diseases and pests, including several inter- and extra-class variations, such as infection status and location in the plant. Experimental results show that our proposed system can effectively recognize nine different types of diseases and pests, with the ability to deal with complex scenarios from a plant's surrounding area." }, { "pmid": "27713752", "title": "Using Deep Learning for Image-Based Plant Disease Detection.", "abstract": "Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease diagnosis. Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach. Overall, the approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale." }, { "pmid": "27295650", "title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.", "abstract": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. 
An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available." }, { "pmid": "27418923", "title": "Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification.", "abstract": "The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of plant disease recognition model, based on leaf image classification, by the use of deep convolutional networks. Novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant diseases out of healthy leaves, with the ability to distinguish plant leaves from their surroundings. According to our knowledge, this method for plant disease recognition has been proposed for the first time. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering images in order to create a database, assessed by agricultural experts. Caffe, a deep learning framework developed by Berkley Vision and Learning Centre, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98%, for separate class tests, on average 96.3%." } ]
Frontiers in Neurorobotics
30214404
PMC6125413
10.3389/fnbot.2018.00045
Expanding the Active Inference Landscape: More Intrinsic Motivations in the Perception-Action Loop
Active inference is an ambitious theory that treats perception, inference, and action selection of autonomous agents under the heading of a single principle. It suggests biologically plausible explanations for many cognitive phenomena, including consciousness. In active inference, action selection is driven by an objective function that evaluates possible future actions with respect to current, inferred beliefs about the world. Active inference at its core is independent from extrinsic rewards, resulting in a high level of robustness across e.g., different environments or agent morphologies. In the literature, paradigms that share this independence have been summarized under the notion of intrinsic motivations. In general and in contrast to active inference, these models of motivation come without a commitment to particular inference and action selection mechanisms. In this article, we study if the inference and action selection machinery of active inference can also be used by alternatives to the originally included intrinsic motivation. The perception-action loop explicitly relates inference and action selection to the environment and agent memory, and is consequently used as foundation for our analysis. We reconstruct the active inference approach, locate the original formulation within, and show how alternative intrinsic motivations can be used while keeping many of the original features intact. Furthermore, we illustrate the connection to universal reinforcement learning by means of our formalism. Active inference research may profit from comparisons of the dynamics induced by alternative intrinsic motivations. Research on intrinsic motivations may profit from an additional way to implement intrinsically motivated agents that also share the biological plausibility of active inference.
2. Related workOur work is largely based on Friston et al. (2015), and we adopt the setup and models from it. This means that many of our assumptions are due to the original paper. Recently, Buckley et al. (2017) have provided an overview of continuous-variable active inference with a focus on the mathematical aspects rather than the relationship to thermodynamic free energy, biological interpretations, or neural correlates. Our work here is in a similar spirit but focuses on the discrete formulation of active inference and how it can be decomposed. As we point out in the text, the case of direct Bayesian inference with separate action selection is strongly related to general reinforcement learning (Hutter, 2005; Leike, 2016; Aslanides et al., 2017). This approach also tackles unknown environments in a Bayesian way, with, and in later versions also without, externally specified rewards. Other work focusing on unknown environments with rewards includes, e.g., Ross and Pineau (2008) and Doshi-Velez et al. (2015). We would like to stress that we do not propose agents using Bayesian or variational inference as competitors to any of the existing methods. Instead, our goal is to provide an unbiased investigation of active inference with a particular focus on extending the inference methods, objective functions, and action-selection mechanisms. Furthermore, these agents follow almost completely, in a straightforward (if quite involved) way, from the model in Friston et al. (2015). A small difference is the extension to parameterizations of environment and sensor dynamics. These parameterizations can be found in Friston et al. (2016b).We note that work on planning as inference (Attias, 2003; Toussaint, 2009; Botvinick and Toussaint, 2012) is generally related to active inference. In this line of work, the probability distribution over actions or action sequences that lead to a given goal, specified as a sensor value, is inferred. Since active inference also tries to obtain a probability distribution over actions, the approaches are related. The formalization of the goal, however, differs, at least at first sight. How exactly the two approaches relate is beyond the scope of this publication.
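As a toy illustration of this inference step (our own example, not taken from the cited works), the following numpy sketch infers a distribution over single actions conditioned on a desired sensor value in a small, fully known discrete world. All transition and observation probabilities are made up.

```python
import numpy as np

# Toy, fully known world: 3 states, 2 actions, 2 observations.
# T[a, s, s'] = P(s' | s, a); O[s, o] = P(o | s). All numbers are illustrative only.
T = np.array([[[0.7, 0.2, 0.1],
               [0.0, 0.9, 0.1],
               [0.0, 0.0, 1.0]],
              [[0.2, 0.3, 0.5],
               [0.1, 0.4, 0.5],
               [0.0, 0.5, 0.5]]])
O = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])   # observation 1 occurs only in state 2 (the "goal" sensor value)

s0, goal_obs = 0, 1
prior_a = np.ones(T.shape[0]) / T.shape[0]          # uniform prior over actions

# Likelihood of seeing the goal observation after one step under each action:
# P(o_goal | a, s0) = sum_s' T[a, s0, s'] * O[s', o_goal]
lik = np.array([T[a, s0] @ O[:, goal_obs] for a in range(T.shape[0])])

# Planning as inference: condition the action on the desired observation.
posterior_a = prior_a * lik
posterior_a /= posterior_a.sum()
print(posterior_a)   # distribution over actions that "lead to" the goal
```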
[ "29887647", "22125233", "26650201", "26038544", "22940577", "26353250", "20068583", "23825119", "27375276", "27870614", "25689102", "22864468", "28777724", "29417960", "23663756", "23723979", "18958277", "15811222", "10620381", "24273511" ]
[ { "pmid": "29887647", "title": "From cognitivism to autopoiesis: towards a computational framework for the embodied mind.", "abstract": "Predictive processing (PP) approaches to the mind are increasingly popular in the cognitive sciences. This surge of interest is accompanied by a proliferation of philosophical arguments, which seek to either extend or oppose various aspects of the emerging framework. In particular, the question of how to position predictive processing with respect to enactive and embodied cognition has become a topic of intense debate. While these arguments are certainly of valuable scientific and philosophical merit, they risk underestimating the variety of approaches gathered under the predictive label. Here, we first present a basic review of neuroscientific, cognitive, and philosophical approaches to PP, to illustrate how these range from solidly cognitivist applications-with a firm commitment to modular, internalistic mental representation-to more moderate views emphasizing the importance of 'body-representations', and finally to those which fit comfortably with radically enactive, embodied, and dynamic theories of mind. Any nascent predictive processing theory (e.g., of attention or consciousness) must take into account this continuum of views, and associated theoretical commitments. As a final point, we illustrate how the Free Energy Principle (FEP) attempts to dissolve tension between internalist and externalist accounts of cognition, by providing a formal synthetic account of how internal 'representations' arise from autopoietic self-organization. The FEP thus furnishes empirically productive process theories (e.g., predictive processing) by which to guide discovery through the formal modelling of the embodied mind." }, { "pmid": "22125233", "title": "Information-driven self-organization: the dynamical system approach to autonomous robot behavior.", "abstract": "In recent years, information theory has come into the focus of researchers interested in the sensorimotor dynamics of both robots and living beings. One root for these approaches is the idea that living beings are information processing systems and that the optimization of these processes should be an evolutionary advantage. Apart from these more fundamental questions, there is much interest recently in the question how a robot can be equipped with an internal drive for innovation or curiosity that may serve as a drive for an open-ended, self-determined development of the robot. The success of these approaches depends essentially on the choice of a convenient measure for the information. This article studies in some detail the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process. The PI of a process quantifies the total information of past experience that can be used for predicting future events. However, the application of information theoretic measures in robotics mostly is restricted to the case of a finite, discrete state-action space. This article aims at applying the PI in the dynamical systems approach to robot control. We study linear systems as a first step and derive exact results for the PI together with explicit learning rules for the parameters of the controller. Interestingly, these learning rules are of Hebbian nature and local in the sense that the synaptic update is given by the product of activities available directly at the pertinent synaptic ports. 
The general findings are exemplified by a number of case studies. In particular, in a two-dimensional system, designed at mimicking embodied systems with latent oscillatory locomotion patterns, it is shown that maximizing the PI means to recognize and amplify the latent modes of the robotic system. This and many other examples show that the learning rules derived from the maximum PI principle are a versatile tool for the self-organization of behavior in complex robotic systems." }, { "pmid": "26650201", "title": "The Umwelt of an embodied agent--a measure-theoretic definition.", "abstract": "We consider a general model of the sensorimotor loop of an agent interacting with the world. This formalises Uexküll's notion of a function-circle. Here, we assume a particular causal structure, mechanistically described in terms of Markov kernels. In this generality, we define two σ-algebras of events in the world that describe two respective perspectives: (1) the perspective of an external observer, (2) the intrinsic perspective of the agent. Not all aspects of the world, seen from the external perspective, are accessible to the agent. This is expressed by the fact that the second σ-algebra is a subalgebra of the first one. We propose the smaller one as formalisation of Uexküll's Umwelt concept. We show that, under continuity and compactness assumptions, the global dynamics of the world can be simplified without changing the internal process. This simplification can serve as a minimal world model that the system must have in order to be consistent with the internal process." }, { "pmid": "26038544", "title": "Predictive information in a sensory population.", "abstract": "Guiding behavior requires the brain to make predictions about the future values of sensory inputs. Here, we show that efficient predictive computation starts at the earliest stages of the visual system. We compute how much information groups of retinal ganglion cells carry about the future state of their visual inputs and show that nearly every cell in the retina participates in a group of cells for which this predictive information is close to the physical limit set by the statistical structure of the inputs themselves. Groups of cells in the retina carry information about the future state of their own activity, and we show that this information can be compressed further and encoded by downstream predictor neurons that exhibit feature selectivity that would support predictive computations. Efficient representation of predictive information is a candidate principle that can be applied at each stage of neural computation." }, { "pmid": "22940577", "title": "Planning as inference.", "abstract": "Recent developments in decision-making research are bringing the topic of planning back to center stage in cognitive science. This renewed interest reopens an old, but still unanswered question: how exactly does planning happen? What are the underlying information processing operations and how are they implemented in the brain? Although a range of interesting possibilities exists, recent work has introduced a potentially transformative new idea, according to which planning is accomplished through probabilistic inference." 
}, { "pmid": "26353250", "title": "Bayesian Nonparametric Methods for Partially-Observable Reinforcement Learning.", "abstract": "Making intelligent decisions from incomplete information is critical in many applications: for example, robots must choose actions based on imperfect sensors, and speech-based interfaces must infer a user's needs from noisy microphone inputs. What makes these tasks hard is that often we do not have a natural representation with which to model the domain and use for choosing actions; we must learn about the domain's properties while simultaneously performing the task. Learning a representation also involves trade-offs between modeling the data that we have seen previously and being able to make predictions about new data. This article explores learning representations of stochastic systems using Bayesian nonparametric statistics. Bayesian nonparametric methods allow the sophistication of a representation to scale gracefully with the complexity in the data. Our main contribution is a careful empirical evaluation of how representations learned using Bayesian nonparametric methods compare to other standard learning approaches, especially in support of planning and control. We show that the Bayesian aspects of the methods result in achieving state-of-the-art performance in decision making with relatively few samples, while the nonparametric aspects often result in fewer computations. These results hold across a variety of different techniques for choosing actions given a representation." }, { "pmid": "20068583", "title": "The free-energy principle: a unified brain theory?", "abstract": "A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories - optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework." }, { "pmid": "23825119", "title": "Life as we know it.", "abstract": "This paper presents a heuristic proof (and simulations of a primordial soup) suggesting that life-or biological self-organization-is an inevitable and emergent property of any (ergodic) random dynamical system that possesses a Markov blanket. This conclusion is based on the following arguments: if the coupling among an ensemble of dynamical systems is mediated by short-range forces, then the states of remote systems must be conditionally independent. These independencies induce a Markov blanket that separates internal and external states in a statistical sense. The existence of a Markov blanket means that internal states will appear to minimize a free energy functional of the states of their Markov blanket. Crucially, this is the same quantity that is optimized in Bayesian inference. Therefore, the internal states (and their blanket) will appear to engage in active Bayesian inference. In other words, they will appear to model-and act on-their world to preserve their functional and structural integrity, leading to homoeostasis and a simple form of autopoiesis." 
}, { "pmid": "27375276", "title": "Active inference and learning.", "abstract": "This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme, in the absence of ambiguity." }, { "pmid": "27870614", "title": "Active Inference: A Process Theory.", "abstract": "This article describes a process theory based on active inference and belief propagation. Starting from the premise that all neuronal processing (and action selection) can be explained by maximizing Bayesian model evidence-or minimizing variational free energy-we ask whether neuronal responses can be described as a gradient descent on variational free energy. Using a standard (Markov decision process) generative model, we derive the neuronal dynamics implicit in this description and reproduce a remarkable range of well-characterized neuronal phenomena. These include repetition suppression, mismatch negativity, violation responses, place-cell activity, phase precession, theta sequences, theta-gamma coupling, evidence accumulation, race-to-bound dynamics, and transfer of dopamine responses. Furthermore, the (approximately Bayes' optimal) behavior prescribed by these dynamics has a degree of face validity, providing a formal explanation for reward seeking, context learning, and epistemic foraging. Technically, the fact that a gradient descent appears to be a valid description of neuronal activity means that variational free energy is a Lyapunov function for neuronal dynamics, which therefore conform to Hamilton's principle of least action." }, { "pmid": "25689102", "title": "Active inference and epistemic value.", "abstract": "We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. 
This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms." }, { "pmid": "22864468", "title": "Active inference and agency: optimal control without cost functions.", "abstract": "This paper describes a variational free-energy formulation of (partially observable) Markov decision problems in decision making under uncertainty. We show that optimal control can be cast as active inference. In active inference, both action and posterior beliefs about hidden states minimise a free energy bound on the negative log-likelihood of observed states, under a generative model. In this setting, reward or cost functions are absorbed into prior beliefs about state transitions and terminal states. Effectively, this converts optimal control into a pure inference problem, enabling the application of standard Bayesian filtering techniques. We then consider optimal trajectories that rest on posterior beliefs about hidden states in the future. Crucially, this entails modelling control as a hidden state that endows the generative model with a representation of agency. This leads to a distinction between models with and without inference on hidden control states; namely, agency-free and agency-based models, respectively." }, { "pmid": "28777724", "title": "Active Inference, Curiosity and Insight.", "abstract": "This article offers a formal account of curiosity and insight in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how people attain insight and understanding using just a handful of observations, which are solicited through curious behavior. We use simulations of abstract rule learning and approximate Bayesian inference to show that minimizing (expected) variational free energy leads to active sampling of novel contingencies. This epistemic behavior closes explanatory gaps in generative models of the world, thereby reducing uncertainty and satisfying curiosity. We then move from epistemic learning to model selection or structure learning to show how abductive processes emerge when agents test plausible hypotheses about symmetries (i.e., invariances or rules) in their generative models. The ensuing Bayesian model reduction evinces mechanisms associated with sleep and has all the hallmarks of \"aha\" moments. This formulation moves toward a computational account of consciousness in the pre-Cartesian sense of sharable knowledge (i.e., con: \"together\"; scire: \"to know\")." }, { "pmid": "29417960", "title": "The graphical brain: Belief propagation and active inference.", "abstract": "This paper considers functional integration in the brain from a computational perspective. 
We ask what sort of neuronal message passing is mandated by active inference-and what implications this has for context-sensitive connectivity at microscopic and macroscopic levels. In particular, we formulate neuronal processing as belief propagation under deep generative models. Crucially, these models can entertain both discrete and continuous states, leading to distinct schemes for belief updating that play out on the same (neuronal) architecture. Technically, we use Forney (normal) factor graphs to elucidate the requisite message passing in terms of its form and scheduling. To accommodate mixed generative models (of discrete and continuous states), one also has to consider link nodes or factors that enable discrete and continuous representations to talk to each other. When mapping the implicit computational architecture onto neuronal connectivity, several interesting features emerge. For example, Bayesian model averaging and comparison, which link discrete and continuous states, may be implemented in thalamocortical loops. These and other considerations speak to a computational connectome that is inherently state dependent and self-organizing in ways that yield to a principled (variational) account. We conclude with simulations of reading that illustrate the implicit neuronal message passing, with a special focus on how discrete (semantic) representations inform, and are informed by, continuous (visual) sampling of the sensorium.\n\n\nAUTHOR SUMMARY\nThis paper considers functional integration in the brain from a computational perspective. We ask what sort of neuronal message passing is mandated by active inference-and what implications this has for context-sensitive connectivity at microscopic and macroscopic levels. In particular, we formulate neuronal processing as belief propagation under deep generative models that can entertain both discrete and continuous states. This leads to distinct schemes for belief updating that play out on the same (neuronal) architecture. Technically, we use Forney (normal) factor graphs to characterize the requisite message passing, and link this formal characterization to canonical microcircuits and extrinsic connectivity in the brain." }, { "pmid": "23663756", "title": "Maximal mutual information, not minimal entropy, for escaping the \"Dark Room\".", "abstract": "A behavioral drive directed solely at minimizing prediction error would cause an agent to seek out states of unchanging, and thus easily predictable, sensory inputs (such as a dark room). The default to an evolutionarily encoded prior to avoid such untenable behaviors is unsatisfying. We suggest an alternate information theoretic interpretation to address this dilemma." }, { "pmid": "23723979", "title": "Information driven self-organization of complex robotic behaviors.", "abstract": "Information theory is a powerful tool to express principles to drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predicting information (TiPI) which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. 
In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show the spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality which hinders learning systems to scale well." }, { "pmid": "18958277", "title": "What is Intrinsic Motivation? A Typology of Computational Approaches.", "abstract": "Intrinsic motivation, centrally involved in spontaneous exploration and curiosity, is a crucial concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics." }, { "pmid": "15811222", "title": "New robotics: design principles for intelligent systems.", "abstract": "New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e. g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. As we argue, artificial evolution together with morphogenesis is not only \"nice to have\" but is in fact a necessary tool for designing embodied agents." }, { "pmid": "10620381", "title": "Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions.", "abstract": "Intrinsic and extrinsic types of motivation have been widely studied, and the distinction between them has shed important light on both developmental and educational practices. In this review we revisit the classic definitions of intrinsic and extrinsic motivation in light of contemporary research and theory. 
Intrinsic motivation remains an important construct, reflecting the natural human propensity to learn and assimilate. However, extrinsic motivation is argued to vary considerably in its relative autonomy and thus can either reflect external control or true self-regulation. The relations of both classes of motives to basic human needs for autonomy, competence and relatedness are discussed. Copyright 2000 Academic Press." }, { "pmid": "24273511", "title": "Which is the best intrinsic motivation signal for learning multiple skills?", "abstract": "Humans and other biological agents are able to autonomously learn and cache different skills in the absence of any biological pressure or any assigned task. In this respect, Intrinsic Motivations (i.e., motivations not connected to reward-related stimuli) play a cardinal role in animal learning, and can be considered as a fundamental tool for developing more autonomous and more adaptive artificial agents. In this work, we provide an exhaustive analysis of a scarcely investigated problem: which kind of IM reinforcement signal is the most suitable for driving the acquisition of multiple skills in the shortest time? To this purpose we implemented an artificial agent with a hierarchical architecture that allows to learn and cache different skills. We tested the system in a setup with continuous states and actions, in particular, with a kinematic robotic arm that has to learn different reaching tasks. We compare the results of different versions of the system driven by several different intrinsic motivation signals. The results show (a) that intrinsic reinforcements purely based on the knowledge of the system are not appropriate to guide the acquisition of multiple skills, and (b) that the stronger the link between the IM signal and the competence of the system, the better the performance." } ]
Frontiers in Neuroscience
30233295
PMC6127296
10.3389/fnins.2018.00608
Deep Supervised Learning Using Local Errors
Error backpropagation is a highly effective mechanism for learning high-quality hierarchical features in deep networks. Updating the features or weights in one layer, however, requires waiting for the propagation of error signals from higher layers. Learning using delayed and non-local errors makes it hard to reconcile backpropagation with the learning mechanisms observed in biological neural networks as it requires the neurons to maintain a memory of the input long enough until the higher-layer errors arrive. In this paper, we propose an alternative learning mechanism where errors are generated locally in each layer using fixed, random auxiliary classifiers. Lower layers could thus be trained independently of higher layers and training could either proceed layer by layer, or simultaneously in all layers using local error information. We address biological plausibility concerns such as weight symmetry requirements and show that the proposed learning mechanism based on fixed, broad, and random tuning of each neuron to the classification categories outperforms the biologically-motivated feedback alignment learning technique on the CIFAR10 dataset, approaching the performance of standard backpropagation. Our approach highlights a potential biological mechanism for the supervised, or task-dependent, learning of feature hierarchies. In addition, we show that it is well suited for learning deep networks in custom hardware where it can drastically reduce memory traffic and data communication overheads. Code used to run all learning experiments is available under https://gitlab.com/hesham-mostafa/learning-using-local-erros.git.
2. Related workTraining of deep convolutional networks is currently dominated by approaches where all weights are simultaneously trained to minimize a global objective. This is typically done in a purely supervised setting where the training objective is the classification loss at the top layer. To ameliorate the problem of exploding/vanishing errors in deep layers (Hochreiter et al., 2001), auxiliary classifiers are sometimes added to provide additional error information to deep layers (Szegedy et al., 2014; Lee et al., 2015). Unlike in our training approach, however, training still involves backpropagating errors across the entire network and simultaneously adjusting all weights.Several learning mechanisms have traditionally been used to pre-train a deep network layer by layer using local error signals, either to learn the probability distribution of the input layer activations or to minimize local reconstruction errors (Hinton et al., 2006; Hinton and Salakhutdinov, 2006; Bengio et al., 2007; Vincent et al., 2008; Erhan et al., 2010). These mechanisms, however, are unsupervised, and the networks need to be augmented by a classifier layer, typically added on top of the deepest layer. The network weights are then fine-tuned using standard backpropagation to minimize the error at the classifier layer. Supervised layer-wise training has been pursued in Bengio et al. (2007), with auxiliary classifiers that are co-trained, unlike the random fixed auxiliary classifiers proposed here. The supervised layer-wise training is used only as a pre-training step, and results are reported after full network fine-tuning using backpropagation from the top classifier layer. Some approaches forgo the fine-tuning step and keep the network fixed after the unsupervised layer-wise training phase, training only the top classifier layer or an SVM on the learned features (Ranzato et al., 2007; Lee et al., 2009; Kavukcuoglu et al., 2010). Local learning in Ranzato et al. (2007) and Kavukcuoglu et al. (2010) involves an iterative procedure for learning sparse codes, which is computationally demanding. The network architectures in Ranzato et al. (2007), Lee et al. (2009), and Kavukcuoglu et al. (2010) do not yield classification results from the intermediate layers. Moreover, their applicability to datasets that are more complex than MNIST is unclear, since labels are not used to guide the learning of features. In more complex learning scenarios with an abundance of possible features, these networks could very well learn few label-relevant features, thereby compromising the performance of the top classifier.Instead of layer-wise pre-training, several recent approaches train the whole network using a hybrid objective that contains supervised and unsupervised error terms (Zhao et al., 2015). In some of these network configurations, the unsupervised error terms are local to each layer (Zhang et al., 2016). The supervised error term, however, requires backpropagating errors through the whole network. This requirement is avoided in the training approach of Ranzato and Szummer (2008), used to learn compact feature vectors from documents: training proceeds layer by layer, where the error in each layer is a combination of a reconstruction error and a supervised error coming from a local classifier. The local auxiliary decoder and classifier pathways still require training, however.
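The local-error scheme proposed in this article, summarized in the abstract above, can be contrasted with these co-trained auxiliary pathways. The following PyTorch fragment is only a minimal sketch of the idea (the released code lives at the repository linked in the abstract): a single layer is updated from a local cross-entropy loss computed through a fixed, random, non-learned auxiliary classifier, with dummy data standing in for real inputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# One hidden layer of a deeper network, trained purely from a LOCAL error:
# a fixed, random auxiliary classifier maps the layer's activity to class
# scores, and only the layer's own weights are updated.
layer = nn.Linear(784, 256)                       # trainable feature layer
aux = nn.Linear(256, 10, bias=False)              # auxiliary classifier
for p in aux.parameters():
    p.requires_grad = False                       # kept fixed and random

opt = torch.optim.SGD(layer.parameters(), lr=0.1)

# Dummy batch standing in for MNIST-like data (illustration only).
x = torch.randn(64, 784)
y = torch.randint(0, 10, (64,))

h = F.relu(layer(x))                              # layer activity
local_logits = aux(h)                             # fixed random readout
loss = F.cross_entropy(local_logits, y)           # local error for this layer

opt.zero_grad()
loss.backward()                                   # updates only this layer's weights
opt.step()

# A deeper layer would consume h.detach() as its input, so no error
# information ever propagates downward across layers.
```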
Other approaches also make use of a combination of supervised (label-dependent) and unsupervised error signals to train Boltzmann machines as discriminative models (Larochelle and Bengio, 2008; Goodfellow et al., 2013). Learning in Goodfellow et al. (2013), however, is more computationally demanding than our approach, as it involves several iterations to approach the mean-field equilibrium point of the network, and errors are still backpropagated through multiple layers. In Larochelle and Bengio (2008), multi-layer networks are not considered and only a single-layer RBM is used.Several approaches use clustering techniques to learn convolutional layer features in an unsupervised manner (Coates and Ng, 2012; Dundar et al., 2015). A biologically motivated technique that yields clustering-like behavior is that used in self-organizing maps (Kohonen, 1988), where competition between different feature neurons, coupled with Hebbian plasticity, fosters the formation of dissimilar and informative features. These methods share the limitation that features are not learned in a label-guided manner. Auto-encoding-based methods learn features locally by minimizing the error in reconstructing one layer using the activity of the layer above (Bengio, 2014). Predictive coding methods attempt to minimize a similar reconstruction loss (Rao and Ballard, 1999). The unsupervised auto-encoding loss can be augmented by a supervised, label-dependent loss to learn features that are label-guided and can thus be used to discriminate between different classes (Rasmus et al., 2015; Valpola, 2015). The supervised label-dependent error, however, is non-local.In Baldi et al. (2016), Lillicrap et al. (2016), Nøkland (2016), and Neftci et al. (2017), the backpropagation scheme is modified to use random fixed weights in the backward path. This relaxes one of the biologically unrealistic requirements of backpropagation, namely weight symmetry between the forward and backward pathways. Errors are still non-local, however, as they are generated by the top layer. A learning mechanism that is able to generate error signals locally is the synthetic gradients mechanism (Jaderberg et al., 2016; Czarnecki et al., 2017), in which errors are generated by dedicated error modules in each layer based only on the layer's activity and the label. The parameters of these dedicated error modules are themselves updated based on errors arriving from higher layers, in order to make the error modules better predictors of the true, globally derived error signal. Our approach generates errors in a different manner through the use of a local classifier, and each layer receives no error information from the layer above.
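To make the weight-symmetry point concrete, the following numpy toy (our illustration, not code from any of the cited papers) trains a one-hidden-layer network with feedback alignment: the backward pass replaces the transpose of the forward weights with a fixed random matrix. The sizes, data, and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feedback alignment on a tiny one-hidden-layer network: the backward pass
# uses a fixed random matrix B in place of W2.T, removing the
# weight-symmetry requirement of standard backpropagation.
n_in, n_hid, n_out = 20, 30, 5
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))      # fixed random feedback weights

x = rng.normal(size=(n_in, 1))              # toy input
t = np.zeros((n_out, 1)); t[2] = 1.0        # toy one-hot target

lr = 0.1
for _ in range(100):
    h = np.tanh(W1 @ x)                     # forward pass
    y = W2 @ h
    e = y - t                               # output error (squared-error loss)

    dW2 = e @ h.T                           # identical to backprop for the top layer
    delta_h = (B @ e) * (1 - h ** 2)        # random feedback instead of W2.T @ e
    dW1 = delta_h @ x.T

    W2 -= lr * dW2
    W1 -= lr * dW1

print(float(0.5 * np.sum((W2 @ np.tanh(W1 @ x) - t) ** 2)))  # loss after training
```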
[ "30047912", "29731511", "18255589", "17526352", "16873662", "16764513", "9377276", "29997087", "17428910", "26017442", "27824044", "28932180", "28783639", "28680387", "10195184", "28095195", "28522969", "11797008", "12590814" ]
[ { "pmid": "30047912", "title": "NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps.", "abstract": "Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphical processing units are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from 1×1 to 7×7 . NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq field-programmable gate array (FPGA) platform and presented the results showing how our implementation reduces external memory transfers and compute time in five different CNNs ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. Postsynthesis simulations using Mentor Modelsim in a 28-nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the multiply-accumulate units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm2. As further proof of NullHop's usability, we interfaced its FPGA implementation with a neuromorphic event camera for real-time interactive demonstrations." }, { "pmid": "29731511", "title": "Learning in the Machine: Random Backpropagation and the Deep Learning Channel.", "abstract": "Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural networks, where the transpose of the forward matrices are replaced by fixed random matrices in the calculation of the weight updates. It is remarkable both because of its effectiveness, in spite of using random matrices to communicate error information, and because it completely removes the taxing requirement of maintaining symmetric weights in a physical neural system. To better understand random backpropagation, we first connect it to the notions of local learning and learning channels. Through this connection, we derive several alternatives to RBP, including skipped RBP (SRPB), adaptive RBP (ARBP), sparse RBP, and their combinations (e.g. ASRBP) and analyze their computational complexity. We then study their behavior through simulations using the MNIST and CIFAR-10 bechnmark datasets. These simulations show that most of these variants work robustly, almost as well as backpropagation, and that multiplication by the derivatives of the activation functions is important. As a follow-up, we study also the low-end of the number of bits required to communicate error information over the learning channel. We then provide partial intuitive explanations for some of the remarkable properties of RBP and its variations. 
Finally, we prove several mathematical results, including the convergence to fixed points of linear chains of arbitrary length, the convergence to fixed points of linear autoencoders with decorrelated data, the long-term existence of solutions for linear systems with a single hidden layer and convergence in special cases, and the convergence to fixed points of non-linear chains, when the derivative of the activation functions is included." }, { "pmid": "18255589", "title": "An analog VLSI recurrent neural network learning a continuous-time trajectory.", "abstract": "Real-time algorithms for gradient descent supervised learning in recurrent dynamical neural networks fail to support scalable VLSI implementation, due to their complexity which grows sharply with the network dimension. We present an alternative implementation in analog VLSI, which employs a stochastic perturbation algorithm to observe the gradient of the error index directly on the network in random directions of the parameter space, thereby avoiding the tedious task of deriving the gradient from an explicit model of the network dynamics. The network contains six fully recurrent neurons with continuous-time dynamics, providing 42 free parameters which comprise connection strengths and thresholds. The chip implementing the network includes local provisions supporting both the learning and storage of the parameters, integrated in a scalable architecture which can be readily expanded for applications of learning recurrent dynamical networks requiring larger dimensionality. We describe and characterize the functional elements comprising the implemented recurrent network and integrated learning system, and include experimental results obtained from training the network to represent a quadrature-phase oscillator." }, { "pmid": "17526352", "title": "Feedforward neural network implementation in FPGA using layer multiplexing for effective resource utilization.", "abstract": "This paper presents a hardware implementation of multilayer feedforward neural networks (NN) using reconfigurable field-programmable gate arrays (FPGAs). Despite improvements in FPGA densities, the numerous multipliers in an NN limit the size of the network that can be implemented using a single FPGA, thus making NN applications not viable commercially. The proposed implementation is aimed at reducing resource requirement, without much compromise on the speed, so that a larger NN can be realized on a single chip at a lower cost. The sequential processing of the layers in an NN has been exploited in this paper to implement large NNs using a method of layer multiplexing. Instead of realizing a complete network, only the single largest layer is implemented. The same layer behaves as different layers with the help of a control block. The control block ensures proper functioning by assigning the appropriate inputs, weights, biases, and excitation function of the layer that is currently being computed. Multilayer networks have been implemented using Xilinx FPGA \"XCV400hq240\". The concept used is shown to be very effective in reducing resource requirements at the cost of a moderate overhead on speed. This implementation is proposed to make NN applications viable in terms of cost and speed for online applications. An NN-based flux estimator is implemented in FPGA and the results obtained are presented." 
}, { "pmid": "16873662", "title": "Reducing the dimensionality of data with neural networks.", "abstract": "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such \"autoencoder\" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data." }, { "pmid": "16764513", "title": "A fast learning algorithm for deep belief nets.", "abstract": "We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." }, { "pmid": "29997087", "title": "[Segmentation of brain tumor on magnetic resonance images using 3D full-convolutional densely connected convolutional networks].", "abstract": "Accurate segmentation of multiple gliomas from multimodal MRI is a prerequisite for many precision medical procedures. 
To effectively use the characteristics of glioma MRI and im-prove the segmentation accuracy, we proposes a multi-Dice loss function structure and used pre-experiments to select the good hyperparameters (i.e. data dimension, image fusion step, and the implementation of loss function) to construct a 3D full convolution DenseNet-based image feature learning network. This study included 274 segmented training sets of glioma MRI and 110 test sets without segmentation. After grayscale normalization of the image, the 3D image block was extracted as a network input, and the network output used the image block fusion method to obtain the final segmentation result. The proposed structure improved the accuracy of glioma segmentation compared to a general structure. In the on-line assessment of the open BraTS2015 data set, the Dice values for the entire tumor area, tumor core area, and enhanced tumor area were 0.85, 0.71, and 0.63, respectively." }, { "pmid": "17428910", "title": "Object category structure in response patterns of neuronal population in monkey inferior temporal cortex.", "abstract": "Our mental representation of object categories is hierarchically organized, and our rapid and seemingly effortless categorization ability is crucial for our daily behavior. Here, we examine responses of a large number (>600) of neurons in monkey inferior temporal (IT) cortex with a large number (>1,000) of natural and artificial object images. During the recordings, the monkeys performed a passive fixation task. We found that the categorical structure of objects is represented by the pattern of activity distributed over the cell population. Animate and inanimate objects created distinguishable clusters in the population code. The global category of animate objects was divided into bodies, hands, and faces. Faces were divided into primate and nonprimate faces, and the primate-face group was divided into human and monkey faces. Bodies of human, birds, and four-limb animals clustered together, whereas lower animals such as fish, reptile, and insects made another cluster. Thus the cluster analysis showed that IT population responses reconstruct a large part of our intuitive category structure, including the global division into animate and inanimate objects, and further hierarchical subdivisions of animate objects. The representation of categories was distributed in several respects, e.g., the similarity of response patterns to stimuli within a category was maintained by both the cells that maximally responded to the category and the cells that responded weakly to the category. These results advance our understanding of the nature of the IT neural code, suggesting an inherently categorical representation that comprises a range of categories including the amply investigated face category." }, { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. 
Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "27824044", "title": "Random synaptic feedback weights support error backpropagation for deep learning.", "abstract": "The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning." }, { "pmid": "28932180", "title": "Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks.", "abstract": "Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1Gb DDR2 DRAM, that shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard back-propagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks." }, { "pmid": "28783639", "title": "Supervised Learning Based on Temporal Coding in Spiking Neural Networks.", "abstract": "Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). 
Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information." }, { "pmid": "28680387", "title": "Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.", "abstract": "An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Gradient Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning." }, { "pmid": "10195184", "title": "Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects.", "abstract": "We describe a model of visual processing in which feedback connections from a higher- to a lower-order visual cortical area carry predictions of lower-level neural activities, whereas the feedforward connections carry the residual errors between the predictions and the actual lower-level activities. When exposed to natural images, a hierarchical network of model neurons implementing such a model developed simple-cell-like receptive fields. A subset of neurons responsible for carrying the residual errors showed endstopping and other extra-classical receptive-field effects. 
These results suggest that rather than being exclusively feedforward phenomena, nonclassical surround effects in the visual cortex may also result from cortico-cortical feedback as a consequence of the visual system using an efficient hierarchical strategy for encoding natural images." }, { "pmid": "28095195", "title": "Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights.", "abstract": "Recent work in computer science has shown the power of deep learning driven by the backpropagation algorithm in networks of artificial neurons. But real neurons in the brain are different from most of these artificial ones in at least three crucial ways: they emit spikes rather than graded outputs, their inputs and outputs are related dynamically rather than by piecewise-smooth functions, and they have no known way to coordinate arrays of synapses in separate forward and feedback pathways so that they change simultaneously and identically, as they do in backpropagation. Given these differences, it is unlikely that current deep learning algorithms can operate in the brain, but we that show these problems can be solved by two simple devices: learning rules can approximate dynamic input-output relations with piecewise-smooth functions, and a variation on the feedback alignment algorithm can train deep networks without having to coordinate forward and feedback synapses. Our results also show that deep spiking networks learn much better if each neuron computes an intracellular teaching signal that reflects that cell's nonlinearity. With this mechanism, networks of spiking neurons show useful learning in synapses at least nine layers upstream from the output cells and perform well compared to other spiking networks in the literature on the MNIST digit recognition task." }, { "pmid": "28522969", "title": "Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation.", "abstract": "We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal \"back-propagated\" during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. 
This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task." }, { "pmid": "11797008", "title": "Visual categorization shapes feature selectivity in the primate temporal cortex.", "abstract": "The way that we perceive and interact with objects depends on our previous experience with them. For example, a bird expert is more likely to recognize a bird as a sparrow, a sandpiper or a cockatiel than a non-expert. Neurons in the inferior temporal cortex have been shown to be important in the representation of visual objects; however, it is unknown which object features are represented and how these representations are affected by categorization training. Here we show that feature selectivity in the macaque inferior temporal cortex is shaped by categorization of objects on the basis of their visual features. Specifically, we recorded from single neurons while monkeys performed a categorization task with two sets of parametric stimuli. Each stimulus set consisted of four varying features, but only two of the four were important for the categorization task (diagnostic features). We found enhanced neuronal representation of the diagnostic features relative to the non-diagnostic ones. These findings demonstrate that stimulus features important for categorization are instantiated in the activity of single units (neurons) in the primate inferior temporal cortex." }, { "pmid": "12590814", "title": "Equivalence of backpropagation and contrastive Hebbian learning in a layered network.", "abstract": "Backpropagation and contrastive Hebbian learning are two methods of training networks with hidden neurons. Backpropagation computes an error signal for the output neurons and spreads it over the hidden neurons. Contrastive Hebbian learning involves clamping the output neurons at desired values and letting the effect spread through feedback connections over the entire network. To investigate the relationship between these two forms of learning, we consider a special case in which they are identical: a multilayer perceptron with linear output units, to which weak feedback connections have been added. In this case, the change in network state caused by clamping the output neurons turns out to be the same as the error signal spread by backpropagation, except for a scalar prefactor. This suggests that the functionality of backpropagation can be realized alternatively by a Hebbian-type learning algorithm, which is suitable for implementation in biological networks." } ]
JMIR Serious Games
30143476
PMC6128959
10.2196/11631
3MD for Chronic Conditions, a Model for Motivational mHealth Design: Embedded Case Study
BackgroundChronic conditions are the leading cause of death in the world. Major improvements in acute care and diagnostics have created a tendency toward the chronification of formerly terminal conditions, requiring people with these conditions to learn how to self-manage. Mobile technologies hold promise as self-management tools due to their ubiquity and cost-effectiveness. The delivery of health-related services through mobile technologies (mobile health, mHealth) has grown exponentially in recent years. However, only a fraction of these solutions take into consideration the views of relevant stakeholders such as health care professionals or even patients. The use of behavioral change models (BCMs) has proven important in developing successful health solutions, yet engaging patients remains a challenge. There is a trend in mHealth solutions called gamification that attempts to use game elements to drive user behavior and increase engagement. As it stands, designers of mHealth solutions for behavioral change in chronic conditions have no clear way of deciding what factors are relevant to consider.ObjectiveThe goal of this work is to discover factors for the design of mHealth solutions for chronic patients using negotiations between medical knowledge, BCMs, and gamification.MethodsThis study uses an embedded case study research methodology consisting of 4 embedded units: 1) cross-sectional studies of mHealth applications; 2) statistical analysis of gamification presence; 3) focus groups and interviews to relevant stakeholders; and 4) research through design of an mHealth solution. The data obtained was thematically analyzed to create a conceptual model for the design of mHealth solutions.ResultsThe Model for Motivational Mobile-health Design (3MD) for chronic conditions guides the design of condition-oriented gamified behavioral change mHealth solutions. The main components are (1) condition specific, which describe factors that need to be adjusted and adapted for each particular chronic condition; (2) motivation related, which are factors that address how to influence behaviors in an engaging manner; and (3) technology based, which are factors that are directly connected to the technical capabilities of mobile technologies. The 3MD also provides a series of high-level illustrative design questions for designers to use and consider during the design process.ConclusionsThis work addresses a recognized gap in research and practice, and proposes a unique model that could be of use in the generation of new solutions to help chronic patients.
Related WorksThis section presents the theoretical background and scientific works related to this paper. Relevant medical concepts, behavioral change theories, and gamification considerations are described. Chronic ConditionsChronic conditions have a course that varies over time, is specific to the particular illness, and can be very intrusive to everyday life. However, some common challenges exist across the management of chronic conditions, such as recognizing symptoms and taking appropriate actions, handling complex treatment regimens, developing coping strategies, and dealing with frequent interactions with the health care system over time [34]. The context of this study (see Setting) provided the opportunity to work on two very different conditions: breast cancer and multiple sclerosis (MS). Breast cancer is the most common cancer in women in both the developed and the less developed world [1]. Thanks to advancements in treatments, breast cancer survivorship is on a steady rise, and this cancer is no longer thought of as an acute illness but rather as a chronic condition [3,4]. mHealth solutions for breast cancer are common in the scientific literature, including tracking of sleep patterns [35], management of symptoms and treatment side effects [35-37], breast health and well-being assessments [38,39], and even comprehensive lifestyle programs with nutrition and physical activity elements [40]. MS is one of the world’s most common neurologic disorders [41]. The most common symptoms are overwhelming fatigue, visual disturbances, altered sensation, cognitive problems, and difficulties with mobility [42]. Recommendations suggest incorporating standard MS management tools into mHealth solutions [43], and the scientific literature shows that some health apps exist for fatigue assessment and management [44], emotional support [45], or self-management [46]. Behavioral ChangeSeveral theories and behavioral change models (BCMs) are used in health behavior science, with the main goal of making the healthy choice the easy choice. The use of computerized health behavior interventions has expanded rapidly in the last decade, and existing BCMs have been used to guide mHealth interventions: there is a growing body of evidence suggesting that mHealth can support health behavioral change in areas such as smoking cessation, physical activity, and other health care problems [47-51]. Instant feedback and positive reinforcement from learning theories are in common use in mHealth apps [29,47]. The Health Belief Model has been used in mHealth interventions for self-management and health promotion [52-54], the Transtheoretical Model has been used in mobile solutions for smoking cessation and other addictive behaviors [55-58], and physical activity and fitness interventions use the theory of planned behavior [29,50,59] as well as self-regulation theories [29,60-63]. Social cognitive theories form the basis of many interventions using health apps for disease management [64-66], and goal setting is very often used in mHealth apps [60,67]. It has been noted that each BCM carries its own limitations and problems [68-70]. A multitheory approach is usually recommended in behavioral change intervention design [71], and this should be considered when designing mHealth solutions. Mobile devices have the capacity to interact with the individual with much greater frequency and in the context of the behavior [72].
mHealth interventions allow for tailoring not only at the beginning of an intervention process but also during its course [73]. As such, these mobile technologies are “always on” and are carried on the person throughout the day, offering more chances for interaction and intervention [17]. Therefore, mHealth interventions for behavioral change would benefit from contemplating the dynamic nature that mobile capabilities have to offer: rapid intervention adaptation based on the individual’s current and past behavior and situational context [17]. A behavior change support system (BCSS) is a sociotechnical information system with psychological and behavioral outcomes designed to form, alter, or reinforce attitudes, behaviors, or an act of complying without using coercion or deception [48]. The creation of a BCSS involves a variety of disciplines, from human sciences to information systems. There are BCSS design models such as the Persuasive Systems Design (PSD) model [74], which concerns the design of persuasive technologies in general; in this model, recognizing the intent of persuasion, understanding the persuasion event, and defining and/or recognizing the strategies in use are key. Another BCSS design model is the IDEAS (Integrate, Design, Assess, and Share) framework [75], in which behavioral change theory and design thinking are integrated to guide the development of digital health interventions. The Chronic Disease mHealth App Intervention Design Framework [76] is specific to mHealth and focuses on chronic conditions, addressing issues present in the other frameworks. The issue of enjoyment of the behavior itself, however, is not addressed in these models. GamificationIt is not surprising that efforts have been made to translate the feeling of engagement and enjoyment that games provide to other areas of our lives. Gamification is generally understood as the use of game elements in nongame contexts [30], and its use can be seen as one form of persuasive or motivational design [77]. In this work, the terms gamification design and gameful design are used interchangeably, since they frame the same extension of phenomena through different intentional properties [78]. Gamification ElementsGame elements are varied, but the literature on game design usually considers the following to be the basic set [78-80]: points and leveling systems, which provide feedback and inform users of their level of familiarity with the system; leaderboards, which dynamically rank individual user progress and achievements compared with those of their peers; badges, achievements, and trophies, which act as rewards for the accomplishment of specific tasks; challenges and quests, which constitute objectives and create a narrative within the system; and social features, which support and reinforce interaction between users. None of these elements is seen as “gameful” by itself [78], but combined and arranged in certain ways, they can tap into something greater and unlock a unique experience. In the context of mobile apps, these elements are integrated as specific features to bolster usability and compel continued use [31,32]. Users and Player TypesAs with BCMs, the literature suggests that different user or player types have different needs, and it could be useful to keep them in mind during the design process.
Asking gamers why they play videogames shows that there is no single, unified answer [81]. There have been many attempts to create “player types” for design and analysis purposes. Game designer Richard Bartle observed the way users of an online game behaved and wrote down his observations, creating what is now known as Bartle’s taxonomy [82]. However, Bartle’s taxonomy was never intended to be a general typology, only a description of his observations in one particular context [83]. Others have tried to address this problem, such as Yee [84] with his empirical model of player motivations, or Marczewski [85], who developed the Gamification User Types Hexad framework using self-determination theory as the theoretical background together with research on human motivation, player types, and practical design experience. According to Marczewski, user types are segmented and supported in the following ways: Philanthropists are individuals motivated by altruistic purposes, willing to give without expecting a reward within the system. Socializers want to interact with others and create social connections; the system is important to them, but as a means to connect. Free Spirits desire the freedom to express themselves and act without external control; they like to create and explore within a system. Achievers seek to progress their status by completing tasks or to prove themselves by tackling difficult challenges; the system is a challenge to be overcome. Players are motivated by extrinsic rewards; the specific type of reward is not important, only that the system provides it. Disruptors enjoy testing the limits of the system, looking to push past them; sometimes they act as negative agents, and sometimes their work improves the system. Gameful Design ModelsGameful design is about intentionally designing for gamefulness in the development of nongame environments using game design thinking [78]. Simply inserting the different game elements into any nongame context is not sufficient; the tasks themselves have to be designed in a manner similar to game design [86]. In this sense, game design should be approached as a lens to improve the overall experience of the task. There are models for game design, such as the Mechanics, Dynamics and Aesthetics framework [87], that aim to help game designers. Designers have used this kind of game design model before [33] to gamify activities, but it is clear that the process of gameful design is somewhat different from game design. Games are mostly directed toward pure entertainment, whereas gamification attempts to enhance engagement and user experience in different contexts [88]. The design approach of a gameful system is different from that of a conventional game. The Werbach and Hunter [89] gamification framework, commonly known as 6D, is one of the most popular and most referenced gamification design frameworks; it was created with the purpose of designing a service or product with business goals. Another commonly used framework is Octalysis [90], in which the design process is viewed through a “human-focused” lens as opposed to a “function-focused” point of view; its authors propose that design processes normally concentrate on optimizing efficiency and getting the job done rather than on human motivation. Although these gamification models exist, it is important to keep in mind that one cannot expect them to translate perfectly to health scenarios.
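As an illustration only, the following Python sketch shows one way a designer might encode the basic game elements and Marczewski's Hexad user types described above and map each type to elements that could appeal to it. The mapping and all names are hypothetical assumptions made for this sketch; they are not part of the original article or of the Hexad framework itself.

```python
from enum import Enum, auto

class GameElement(Enum):
    # Basic set of game elements discussed above
    POINTS_AND_LEVELS = auto()
    LEADERBOARDS = auto()
    BADGES_AND_TROPHIES = auto()
    CHALLENGES_AND_QUESTS = auto()
    SOCIAL_FEATURES = auto()

class HexadType(Enum):
    # Marczewski's Gamification User Types Hexad
    PHILANTHROPIST = auto()
    SOCIALIZER = auto()
    FREE_SPIRIT = auto()
    ACHIEVER = auto()
    PLAYER = auto()
    DISRUPTOR = auto()

# Hypothetical mapping of user types to game elements that might appeal to them;
# a real design process would derive this from user research, not hard-code it.
PREFERRED_ELEMENTS = {
    HexadType.PHILANTHROPIST: [GameElement.SOCIAL_FEATURES],
    HexadType.SOCIALIZER: [GameElement.SOCIAL_FEATURES, GameElement.LEADERBOARDS],
    HexadType.FREE_SPIRIT: [GameElement.CHALLENGES_AND_QUESTS],
    HexadType.ACHIEVER: [GameElement.BADGES_AND_TROPHIES,
                         GameElement.CHALLENGES_AND_QUESTS,
                         GameElement.POINTS_AND_LEVELS],
    HexadType.PLAYER: [GameElement.POINTS_AND_LEVELS, GameElement.BADGES_AND_TROPHIES],
    HexadType.DISRUPTOR: [GameElement.SOCIAL_FEATURES],  # e.g., feedback and testing channels
}

def suggest_elements(user_type: HexadType) -> list:
    """Return the game elements a designer might emphasize for a given user type."""
    return PREFERRED_ELEMENTS.get(user_type, [])

if __name__ == "__main__":
    for t in HexadType:
        print(t.name, "->", [e.name for e in suggest_elements(t)])
```

In an actual mHealth design process, such a mapping would be informed by research with the target patient group rather than fixed in code; the sketch only makes the taxonomy concrete.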
In generic gamification models, the goal is usually to increase the efficiency of a certain task or to improve user retention [33]. Although these goals may look appropriate on a surface level, there are hidden dangers inherent to health care, and generic gamification models often do not contemplate potential negative consequences. Ethics should guide the design of health technologies, and recognized principles of bioethics play an important role in this process [91]. Because of these issues, specific conceptual frameworks for gamification in health are being developed. The Wheel of Sukr is a health-specific gamification framework for assisting diabetic patients to self-manage and reinforce positive behaviors [92]; it uses reward systems to motivate users toward healthy behaviors, and its theoretical basis lies in reaching the state of flow and in motivation as understood by self-determination theory. Another health-oriented gamification framework is PACT (People, Aesthetics, Context, and Technology) [93], a participatory design framework for the gamification of rehabilitation systems that seeks to involve all the relevant stakeholders from the beginning of a rehabilitation design process; this framework, however, does not use any behavioral change theory as a foundation. Despite the existence of some health gamification frameworks, a systematic review [33] found that, as far as gamification design frameworks are concerned, the health sector is the least developed.
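To make concrete how the behavioral change constructs and game elements discussed in this section could be brought together when auditing or designing an mHealth app, the following sketch tags hypothetical app features with both a BCM construct and a game element and then summarizes gamification presence. All feature names, constructs, and taggings are illustrative assumptions, not data or methods from the study.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class AppFeature:
    name: str                      # feature of a hypothetical mHealth app
    bcm_construct: Optional[str]   # behavioral change construct it operationalizes
    game_element: Optional[str]    # game element used, if any

# Illustrative feature list for a hypothetical chronic-condition self-management app
features = [
    AppFeature("symptom diary", "self-monitoring", None),
    AppFeature("weekly activity goal", "goal setting", "points and levels"),
    AppFeature("streak rewards", "positive reinforcement", "badges"),
    AppFeature("peer support forum", "social support", "social features"),
    AppFeature("medication reminder", "cues to action", None),
]

def gamification_presence(items):
    """Share of features using at least one game element, plus counts per element."""
    gamified = [f for f in items if f.game_element]
    share = len(gamified) / len(items) if items else 0.0
    counts = Counter(f.game_element for f in gamified)
    return share, counts

if __name__ == "__main__":
    share, counts = gamification_presence(features)
    print(f"Gamified features: {share:.0%}")
    for element, n in counts.items():
        print(f"  {element}: {n}")
```

An audit of this kind mirrors, in a very simplified form, the analysis of gamification presence mentioned in the abstract and could help a designer spot features that are gamified without being tied to any behavioral change construct.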
[ "24711547", "16690691", "21884371", "17881753", "17934984", "10864552", "24904713", "23697600", "24050427", "23032424", "21796270", "19411947", "15694887", "20164043", "25658754", "25654660", "24986104", "11816692", "24860070", "18953579", "24771299", "25245774", "25681782", "25200713", "22605909", "29157401", "29649789", "28287673", "29196279", "24842742", "27742604", "26076688", "22564332", "23616865", "28119278", "15495883", "19033148", "28550004", "27154792", "17478409", "26168926", "25654304", "27806926", "20144402", "18471588", "18201644", "28739558", "25053006", "18346282", "19102817", "20186636", "19745466", "18550322", "27986647", "26883135", "17201605", "25480724", "23316537", "16907794", "29331247", "29426814", "16367493", "17213046", "29500159", "843571", "26163456", "27558951", "24139771", "24139770", "25639757", "24294329", "2945228", "8457793", "14728379", "18952344", "26944611", "28423815", "22646729", "23801277", "22283748", "26464800", "24883008", "11552552", "12366654", "27462182" ]
[ { "pmid": "16690691", "title": "Costs and quality of life of patients with multiple sclerosis in Europe.", "abstract": "OBJECTIVE\nTo assess overall resource consumption, work capacity and quality of life of patients with multiple sclerosis in nine European countries.\n\n\nMETHODS\nInformation on resource consumption related to multiple sclerosis, informal care by relatives, productivity losses and overall quality of life (utility) was collected with a standardised pre-tested questionnaire from 13,186 patients enrolled in national multiple sclerosis societies or followed up in neurology clinics. Information on disease included disease duration, self-assessed disease severity and relapses. Mean annual costs per patient (Euro, 2005) were estimated from the societal perspective.\n\n\nRESULTS\nThe mean age ranged from 45.1 to 53.4 years, and all levels of disease severity were represented. Between 16% and 29% of patients reported experiencing a relapse in the 3 months preceding data collection. The proportion of patients in early retirement because of multiple sclerosis ranged from 33% to 45%. The use of direct medical resources (eg, hospitalisation, consultations and drugs) varied considerably across countries, whereas the use of non-medical resources (eg, walking sticks, wheel chairs, modifications to house and car) and services (eg, home care and transportation) was comparable. Informal care use was highly correlated with disease severity, but was further influenced by healthcare systems and family structure. All types of costs increased with worsening disease. The total mean annual costs per patient (adjusted for gross domestic product purchasing power) were estimated at Euro 18,000 for mild disease (Expanded Disability Status Scale (EDSS) <4.0), Euro 36,500 for moderate disease (EDSS 4.0-6.5) and Euro 62,000 for severe disease (EDSS >7.0). Utility was similar across countries at around 0.70 for a patient with an EDSS of 2.0 and around 0.45 for a patient with an EDSS of 6.5. Intangible costs were estimated at around Euro 13,000 per patient." }, { "pmid": "21884371", "title": "Delineation of self-care and associated concepts.", "abstract": "PURPOSE\nThe purpose of this paper is to delineate five concepts that are often used synonymously in the nursing and related literature: self-care, self-management, self-monitoring, symptom management, and self-efficacy for self-care.\n\n\nMETHOD\nConcepts were delineated based on a review of literature, identification of relationships, and examination of commonalities and differences.\n\n\nFINDINGS\nMore commonalities than differences exist among self-care, self-management, and self-monitoring. Symptom management extends beyond the self-care concepts to include healthcare provider activities. Self-efficacy can mediate or moderate the four other concepts. Relationships among the concepts are depicted in a model.\n\n\nCONCLUSIONS\nA clearer understanding of the overlap, differences, and relationships among the five concepts can provide clarity, direction and specificity to nurse researchers, policy makers, and clinicians in addressing their goals for health delivery.\n\n\nCLINICAL RELEVANCE\nConcept clarity enables nurses to use evidence that targets specific interventions to individualize care toward achieving the most relevant goals." 
}, { "pmid": "17934984", "title": "The dilemma of patient responsibility for lifestyle change: perceptions among primary care physicians and nurses.", "abstract": "OBJECTIVE\nTo explore physicians' and nurses' views on patient and professional roles in the management of lifestyle-related diseases and their risk factors.\n\n\nDESIGN\nA questionnaire study with a focus on adult obesity, dyslipidemia, high blood pressure, type 2 diabetes, and smoking.\n\n\nSETTING\nHealthcare centres in Päijät-Häme hospital district, Finland.\n\n\nSUBJECTS\nPhysicians and nurses working in primary healthcare (n =220).\n\n\nMAIN OUTCOME MEASURES\nPerceptions of barriers to treatment of lifestyle-related conditions, perceptions of patients' responsibilities in self-care, experiences of awkwardness in intervening in obesity and smoking, perceptions of rushed schedules, and perceptions of health professionals' roles and own competence in lifestyle counselling.\n\n\nRESULTS\nA majority agreed that a major barrier to the treatment of lifestyle-related conditions is patients' unwillingness to change their habits. Patients' insufficient knowledge was considered as such a barrier less often. Self-care was actively encouraged. Although a majority of both physicians and nurses agreed that providing information, and motivating and supporting patients in lifestyle change are part of their tasks, only slightly more than one half estimated that they have sufficient skills in lifestyle counselling. Among nurses, those with less professional experience more often reported having sufficient skills than those with more experience. Two-thirds of the respondents reported that they had been able to help many patients to change their lifestyles into healthier ones.\n\n\nCONCLUSIONS\nThe primary care professionals experienced a dilemma in patients' role in the treatment of lifestyle-related diseases: the patient was recognized as central in disease management but also, if reluctant to change, a major potential barrier to treatment." }, { "pmid": "24904713", "title": "Uncovering patterns of technology use in consumer health informatics.", "abstract": "Internet usage and accessibility has grown at a staggering rate, influencing technology use for healthcare purposes. The amount of health information technology (Health IT) available through the Internet is immeasurable and growing daily. Health IT is now seen as a fundamental aspect of patient care as it stimulates patient engagement and encourages personal health management. It is increasingly important to understand consumer health IT patterns including who is using specific technologies, how technologies are accessed, factors associated with use, and perceived benefits. To fully uncover consumer patterns it is imperative to recognize common barriers and which groups they disproportionately affect. Finally, exploring future demand and predictions will expose significant opportunities for health IT. The most frequently used health information technologies by consumers are gathering information online, mobile health (mHealth) technologies, and personal health records (PHRs). Gathering health information online is the favored pathway for healthcare consumers as it is used by more consumers and more frequently than any other technology. In regard to mHealth technologies, minority Americans, compared with White Americans utilize social media, mobile Internet, and mobile applications more frequently. Consumers believe PHRs are the most beneficial health IT. 
PHR usage is increasing rapidly due to PHR integration with provider health systems and health insurance plans. Key issues that have to be explicitly addressed in health IT are privacy and security concerns, health literacy, unawareness, and usability. Privacy and security concerns are rated the number one reason for the slow rate of health IT adoption." }, { "pmid": "23697600", "title": "Mapping mHealth research: a decade of evolution.", "abstract": "BACKGROUND\nFor the last decade, mHealth has constantly expanded as a part of eHealth. Mobile applications for health have the potential to target heterogeneous audiences and address specific needs in different situations, with diverse outcomes, and to complement highly developed health care technologies. The market is rapidly evolving, making countless new mobile technologies potentially available to the health care system; however, systematic research on the impact of these technologies on health outcomes remains scarce.\n\n\nOBJECTIVE\nTo provide a comprehensive view of the field of mHealth research to date and to understand whether and how the new generation of smartphones has triggered research, since their introduction 5 years ago. Specifically, we focused on studies aiming to evaluate the impact of mobile phones on health, and we sought to identify the main areas of health care delivery where mobile technologies can have an impact.\n\n\nMETHODS\nA systematic literature review was conducted on the impact of mobile phones and smartphones in health care. Abstracts and articles were categorized using typologies that were partly adapted from existing literature and partly created inductively from publications included in the review.\n\n\nRESULTS\nThe final sample consisted of 117 articles published between 2002 and 2012. The majority of them were published in the second half of our observation period, with a clear upsurge between 2007 and 2008, when the number of articles almost doubled. The articles were published in 77 different journals, mostly from the field of medicine or technology and medicine. Although the range of health conditions addressed was very wide, a clear focus on chronic conditions was noted. The research methodology of these studies was mostly clinical trials and pilot studies, but new designs were introduced in the second half of our observation period. The size of the samples drawn to test mobile health applications also increased over time. The majority of the studies tested basic mobile phone features (eg, text messaging), while only a few assessed the impact of smartphone apps. Regarding the investigated outcomes, we observed a shift from assessment of the technology itself to assessment of its impact. The outcome measures used in the studies were mostly clinical, including both self-reported and objective measures.\n\n\nCONCLUSIONS\nResearch interest in mHealth is growing, together with an increasing complexity in research designs and aim specifications, as well as a diversification of the impact areas. However, new opportunities offered by new mobile technologies do not seem to have been explored thus far. Mapping the evolution of the field allows a better understanding of its strengths and weaknesses and can inform future developments." }, { "pmid": "24050427", "title": "Current mHealth technologies for physical activity assessment and promotion.", "abstract": "CONTEXT\nNovel mobile assessment and intervention capabilities are changing the face of physical activity (PA) research. 
A comprehensive systematic review of how mobile technology has been used for measuring PA and promoting PA behavior change is needed.\n\n\nEVIDENCE ACQUISITION\nArticle collection was conducted using six databases from February to June 2012 with search terms related to mobile technology and PA. Articles that described the use of mobile technologies for PA assessment, sedentary behavior assessment, and/or interventions for PA behavior change were included. Articles were screened for inclusion and study information was extracted.\n\n\nEVIDENCE SYNTHESIS\nAnalyses were conducted from June to September 2012. Mobile phone-based journals and questionnaires, short message service (SMS) prompts, and on-body PA sensing systems were the mobile technologies most utilized. Results indicate that mobile journals and questionnaires are effective PA self-report measurement tools. Intervention studies that reported successful promotion of PA behavior change employed SMS communication, mobile journaling, or both SMS and mobile journaling.\n\n\nCONCLUSIONS\nmHealth technologies are increasingly being employed to assess and intervene on PA in clinical, epidemiologic, and intervention research. The wide variations in technologies used and outcomes measured limit comparability across studies, and hamper identification of the most promising technologies. Further, the pace of technologic advancement currently outstrips that of scientific inquiry. New adaptive, sequential research designs that take advantage of ongoing technology development are needed. At the same time, scientific norms must shift to accept \"smart,\" adaptive, iterative, evidence-based assessment and intervention technologies that will, by nature, improve during implementation." }, { "pmid": "23032424", "title": "Issues in mHealth: findings from key informant interviews.", "abstract": "BACKGROUND\nmHealth is enjoying considerable interest and private investment in the United States. A small but growing body of evidence indicates some promise in supporting healthy behavior change and self-management of long-term conditions. The unique benefits mobile phones bring to health initiatives, such as direct access to health information regardless of time or location, may create specific issues for the implementation of such initiatives. Other issues may be shared with general health information technology developments.\n\n\nOBJECTIVE\nTo determine the important issues facing the implementation of mHealth from the perspective of those within the US health system and those working in mHealth in the United States.\n\n\nMETHODS\nSemistructured interviews were conducted with 27 key informants from across the health and mHealth sectors in the United States. Interviewees were approached directly following an environmental scan of mHealth in the United States or recommendation by those working in mHealth.\n\n\nRESULTS\nThe most common issues were privacy and data security, funding, a lack of good examples of the efficacy and cost effectiveness of mHealth in practice, and the need for more high-quality research. The issues are outlined and categorized according to the environment within which they predominantly occur: policy and regulatory environments; the wireless industry; the health system; existing mHealth practice; and research.\n\n\nCONCLUSIONS\nMany of these issues could be addressed by making the most of the current US health reform environment, developing a strategic and coordinated approach, and seeking to improve mHealth practice." 
}, { "pmid": "21796270", "title": "Health behavior models in the age of mobile interventions: are our theories up to the task?", "abstract": "Mobile technologies are being used to deliver health behavior interventions. The study aims to determine how health behavior theories are applied to mobile interventions. This is a review of the theoretical basis and interactivity of mobile health behavior interventions. Many of the mobile health behavior interventions reviewed were predominately one way (i.e., mostly data input or informational output), but some have leveraged mobile technologies to provide just-in-time, interactive, and adaptive interventions. Most smoking and weight loss studies reported a theoretical basis for the mobile intervention, but most of the adherence and disease management studies did not. Mobile health behavior intervention development could benefit from greater application of health behavior theories. Current theories, however, appear inadequate to inform mobile intervention development as these interventions become more interactive and adaptive. Dynamic feedback system theories of health behavior can be developed utilizing longitudinal data from mobile devices and control systems engineering models." }, { "pmid": "19411947", "title": "User-centered design and interactive health technologies for patients.", "abstract": "Despite recommendations that patients be involved in the design and testing of health technologies, few reports describe how to involve patients in systematic and meaningful ways to ensure that applications are customized to meet their needs. User-centered design is an approach that involves end users throughout the development process so that technologies support tasks, are easy to operate, and are of value to users. In this article, we provide an overview of user-centered design and use the development of Pocket Personal Assistant for Tracking Health (Pocket PATH) to illustrate how these principles and techniques were applied to involve patients in the development of this interactive health technology. Involving patient-users in the design and testing ensured functionality and usability, therefore increasing the likelihood of promoting the intended health outcomes." }, { "pmid": "15694887", "title": "A user-centered framework for redesigning health care interfaces.", "abstract": "Numerous health care systems are designed without consideration of user-centered design guidelines. Consequently, systems are created ad hoc, users are dissatisfied and often systems are abandoned. This is not only a waste of human resources, but economic resources as well. In order to salvage such systems, we have combined different methods from the area of computer science, cognitive science, psychology, and human-computer interaction to formulate a framework for guiding the redesign process. The paper provides a review of the different methods involved in this process and presents a life cycle of our redesign approach. Following the description of the methods, we present a case study, which shows a successfully applied example of the use of this framework. A comparison between the original and redesigned interfaces showed improvements in system usefulness, information quality, and interface quality." 
Based on a discussion of these four issues, recommendations for theory development are presented, including: (1) the theoretical domain for theories such as RAA should be clarified; (2) when there is tension between generalisability and utility, utility should be given preference given the applied nature of the health behaviour field; (3) variables should be formally removed/amended/added to a theory based on their performance across multiple studies and (4) organisations and researchers with a stake in particular health areas may be best suited for tracking the literature on behaviour-specific theories and making refinements to theory, based on a consensus approach. Overall, enhancing research in this area can provide important insights for more accurately understanding health behaviours and thus producing work that leads to more effective health behaviour change interventions." }, { "pmid": "18346282", "title": "Does parenting affect children's eating and weight status?", "abstract": "BACKGROUND\nWorldwide, the prevalence of obesity among children has increased dramatically. Although the etiology of childhood obesity is multifactorial, to date, most preventive interventions have focused on school-aged children in school settings and have met with limited success. In this review, we focus on another set of influences that impact the development of children's eating and weight status: parenting and feeding styles and practices. Our review has two aims: (1) to assess the extent to which current evidence supports the hypothesis that parenting, via its effects on children's eating, is causally implicated in childhood obesity; and (2) to identify a set of promising strategies that target aspects of parenting, which can be further evaluated as possible components in childhood obesity prevention.\n\n\nMETHODS\nA literature review was conducted between October 2006 and January 2007. Studies published before January 2007 that assessed the association between some combination of parenting, child eating and child weight variables were included.\n\n\nRESULTS\nA total of 66 articles met the inclusion criteria. The preponderance of these studies focused on the association between parenting and child eating. Although there was substantial experimental evidence for the influence of parenting practices, such as pressure, restriction, modeling and availability, on child eating, the majority of the evidence for the association between parenting and child weight, or the mediation of this association by child eating, was cross-sectional.\n\n\nCONCLUSION\nTo date, there is substantial causal evidence that parenting affects child eating and there is much correlational evidence that child eating and weight influence parenting. There are few studies, however, that have used appropriate meditational designs to provide causal evidence for the indirect effect of parenting on weight status via effects on child eating. A new approach is suggested for evaluating the effectiveness of intervention components and creating optimized intervention programs using a multiphase research design. Adoption of approaches such as the Multiphase Optimization Strategy (MOST) is necessary to provide the mechanistic evidence-base needed for the design and implementation of effective childhood obesity prevention programs." 
}, { "pmid": "19102817", "title": "Towards a theory of intentional behaviour change: plans, planning, and self-regulation.", "abstract": "PURPOSE\nBriefly review the current state of theorizing about volitional behaviour change and identification of challenges and possible solutions for future theory development.\n\n\nMETHOD\nReview of the literature and theoretical analysis.\n\n\nRESULTS\nReasoned action theories have made limited contributions to the science of behaviour change as they do not propose means of changing cognitions or account for existing effective behaviour change techniques. Changing beliefs does not guarantee behaviour change. The implementation intentions (IMPs) approach to planning has advanced theorizing but the applications to health behaviours often divert substantially from the IMPs paradigm with regard to interventions, effects, mediators and moderators. Better construct definitions and differentiations are needed to make further progress in integrating theory and understanding behaviour change.\n\n\nCONCLUSIONS\nFurther progress in theorizing can be achieved by (a) disentangling planning constructs to study their independent and joint effects on behaviour, (b) progressing research on moderators and mediators of planning effects outside the laboratory and (c) integrating planning processes within learning theory and self-regulation theory." }, { "pmid": "20186636", "title": "Decoding health education interventions: the times are a-changin'.", "abstract": "The development of theory- and evidence-based health education interventions is a complex process in which interventionists in collaboration with priority groups and stakeholders make many decisions about objectives, change techniques, intervention materials and activities, delivery modes and implementation issues. In this development process, interventionists have to find a balance between employing change techniques that should be effective in an ideal world, and intervention activities and materials that match the reality of priority populations and intervention contexts. Intervention descriptions providing information about what behaviour change techniques have been employed, do not reflect the complexity of this decision-making process. They do not reveal why interventionists have decided to include or exclude particular behaviour change techniques. They do not reveal that interventions are based not only upon considerations of health psychologists and other scientists, but also on practical and political boundaries and opportunities that set the scene for the effectiveness of change techniques. Intervention descriptions should therefore reveal not only what is included in the interventions, but also why the intervention is as it is. Intervention Mapping provides the tools that enable the production of such descriptions." }, { "pmid": "19745466", "title": "Portable devices, sensors and networks: wireless personalized eHealth services.", "abstract": "The 21st century healthcare systems aim at involving citizen and health professionals alike entitling especially the citizens to take over a higher level of responsibility for their own health status. Applied technologies like, e.g., Internet, notebooks, and mobile phones enable patients to actively participate in treatment and rehabilitation. 
It's not any longer just health cards; it's an ongoing standardized personalization of health services including application of portable devices, sensors and actuators stipulating the personalized health approach while offering chances for practicing high quality wireless personalized shared care. The path from cards to personalized and portable devices tackles aspects like health advisors, RFID technology, the EHR, chips, and smart objects. It is important to identify criteria and factors determining the application of such personalized devices in a wirelessly operated healthcare and welfare, the paradigm change from cards to secure wireless devices to mobile sensors, and the citizen's acceptance of underlying technologies. The presentations of the workshop jointly organized by EFMI WG \"Personal Portable Devices (PPD)\" and ISO/IEC JTC 1 \"Study Group on Sensor Networks (SGSN)\" therefore aim at introducing technical approaches and standardization activities as well as emerging implementations in the addressed domain." }, { "pmid": "27986647", "title": "IDEAS (Integrate, Design, Assess, and Share): A Framework and Toolkit of Strategies for the Development of More Effective Digital Interventions to Change Health Behavior.", "abstract": "Developing effective digital interventions to change health behavior has been a challenging goal for academics and industry players alike. Guiding intervention design using the best combination of approaches available is necessary if effective technologies are to be developed. Behavioral theory, design thinking, user-centered design, rigorous evaluation, and dissemination each have widely acknowledged merits in their application to digital health interventions. This paper introduces IDEAS, a step-by-step process for integrating these approaches to guide the development and evaluation of more effective digital interventions. IDEAS is comprised of 10 phases (empathize, specify, ground, ideate, prototype, gather, build, pilot, evaluate, and share), grouped into 4 overarching stages: Integrate, Design, Assess, and Share (IDEAS). Each of these phases is described and a summary of theory-based behavioral strategies that may inform intervention design is provided. The IDEAS framework strives to provide sufficient detail without being overly prescriptive so that it may be useful and readily applied by both investigators and industry partners in the development of their own mHealth, eHealth, and other digital health behavior change interventions." }, { "pmid": "26883135", "title": "Evidence-Based mHealth Chronic Disease Mobile App Intervention Design: Development of a Framework.", "abstract": "BACKGROUND\nMobile technology offers new capabilities that can help to drive important aspects of chronic disease management at both an individual and population level, including the ability to deliver real-time interventions that can be connected to a health care team. A framework that supports both development and evaluation is needed to understand the aspects of mHealth that work for specific diseases, populations, and in the achievement of specific outcomes in real-world settings.
This framework should incorporate design structure and process, which are important to translate clinical and behavioral evidence, user interface, experience design and technical capabilities into scalable, replicable, and evidence-based mobile health (mHealth) solutions to drive outcomes.\n\n\nOBJECTIVE\nThe purpose of this paper is to discuss the identification and development of an app intervention design framework, and its subsequent refinement through development of various types of mHealth apps for chronic disease.\n\n\nMETHODS\nThe process of developing the framework was conducted between June 2012 and June 2014. Informed by clinical guidelines, standards of care, clinical practice recommendations, evidence-based research, best practices, and translated by subject matter experts, a framework for mobile app design was developed and the refinement of the framework across seven chronic disease states and three different product types is described.\n\n\nRESULTS\nThe result was the development of the Chronic Disease mHealth App Intervention Design Framework. This framework allowed for the integration of clinical and behavioral evidence for intervention and feature design. The application to different diseases and implementation models guided the design of mHealth solutions for varying levels of chronic disease management.\n\n\nCONCLUSIONS\nThe framework and its design elements enable replicable product development for mHealth apps and may provide a foundation for the digital health industry to systematically expand mobile health interventions and validate their effectiveness across multiple implementation settings and chronic diseases." }, { "pmid": "17201605", "title": "Motivations for play in online games.", "abstract": "An empirical model of player motivations in online games provides the foundation to understand and assess how players differ from one another and how motivations of play relate to age, gender, usage patterns, and in-game behaviors. In the current study, a factor analytic approach was used to create an empirical model of player motivations. The analysis revealed 10 motivation subcomponents that grouped into three overarching components (achievement, social, and immersion). Relationships between motivations and demographic variables (age, gender, and usage patterns) are also presented." }, { "pmid": "25480724", "title": "How bioethics principles can aid design of electronic health records to accommodate patient granular control.", "abstract": "Ethics should guide the design of electronic health records (EHR), and recognized principles of bioethics can play an important role. This approach was recently adopted by a team of informaticists who are designing and testing a system where patients exert granular control over who views their personal health information. While this method of building ethics in from the start of the design process has significant benefits, questions remain about how useful the application of bioethics principles can be in this process, especially when principles conflict. For instance, while the ethical principle of respect for autonomy supports a robust system of granular control, the principles of beneficence and nonmaleficence counsel restraint due to the danger of patients being harmed by restrictions on provider access to data. Conflict between principles has long been recognized by ethicists and has even motivated attacks on approaches that state and apply principles. 
In this paper, we show how using ethical principles can help in the design of EHRs by first explaining how ethical principles can and should be used generally, and then by discussing how attention to details in specific cases can show that the tension between principles is not as bad as it initially appeared. We conclude by suggesting ways in which the application of these (and other) principles can add value to the ongoing discussion of patient involvement in their health care. This is a new approach to linking principles to informatics design that we expect will stimulate further interest." }, { "pmid": "23316537", "title": "Methodological triangulation: an approach to understanding data.", "abstract": "AIM\nTo describe the use of methodological triangulation in a study of how people who had moved to retirement communities were adjusting.\n\n\nBACKGROUND\nMethodological triangulation involves using more than one kind of method to study a phenomenon. It has been found to be beneficial in providing confirmation of findings, more comprehensive data, increased validity and enhanced understanding of studied phenomena. While many researchers have used this well-established technique, there are few published examples of its use.\n\n\nDATA SOURCES\nThe authors used methodological triangulation in their study of people who had moved to retirement communities in Ohio, US.\n\n\nREVIEW METHODS\nA blended qualitative and quantitative approach was used.\n\n\nDISCUSSION\nThe collected qualitative data complemented and clarified the quantitative findings by helping to identify common themes. Qualitative data also helped in understanding interventions for promoting 'pulling' factors and for overcoming 'pushing' factors of participants. The authors used focused research questions to reflect the research's purpose and four evaluative criteria--'truth value', 'applicability', 'consistency' and 'neutrality'--to ensure rigour.\n\n\nCONCLUSION\nThis paper provides an example of how methodological triangulation can be used in nursing research. It identifies challenges associated with methodological triangulation, recommends strategies for overcoming them, provides a rationale for using triangulation and explains how to maintain rigour.\n\n\nIMPLICATIONS FOR RESEARCH/PRACTICE\nMethodological triangulation can be used to enhance the analysis and the interpretation of findings. As data are drawn from multiple sources, it broadens the researcher's insight into the different issues underlying the phenomena being studied." }, { "pmid": "16907794", "title": "Improving understanding and rigour through triangulation: an exemplar based on patient participation in interaction.", "abstract": "AIM\nIn this paper, we aim to explore the benefits of triangulation and to expose the positive contribution of using 'triangulation for completeness' within a study of a complex concept, namely patient participation during healthcare interaction.\n\n\nBACKGROUND\nComplex concepts, such as patient participation, are often the focus of nursing research. 
Triangulation has been proposed as a technique for studying complexity but, although debates about triangulation are becoming more prevalent in the literature, there is little deliberation about the process through which triangulation for completeness, with its claims of forming more comprehensive and rigorous descriptions of concepts through use of multiple data sources, yields it purported benefits.\n\n\nMETHODS\nA seminar series, held between 2001 and 2003, brought together researchers actively involved in the study of patient participation in healthcare consultations. The group came from diverse methodological traditions and had undertaken research with a range of informants and a range of methods.\n\n\nDISCUSSION\nThe various studies used triangulation at different levels: within studies, across studies and across disciplines. Our examples support theoretical arguments that triangulation for completeness can lead to a more holistic understanding of a concept and can improve scientific rigour. Furthermore, we suggest that triangulation can improve research skills for individuals. Our examples suggest that the process through which understanding is enhanced is discursive and centres on discussions of convergent and unique findings; rigour is improved is through challenging findings, being encouraged to explain aspects of your research that may be taken for granted and improving transparency; and individual researcher's skills and abilities are improved is through a process of discussion and reflexivity.\n\n\nCONCLUSIONS\nTriangulation for completeness, on various levels, can improve the quality and utility of research about complex concepts through a range of discursive processes. Developing greater opportunity to collaborate at various levels of analysis could be an important development in nursing research." }, { "pmid": "29331247", "title": "A biopsy of Breast Cancer mobile applications: state of the practice review.", "abstract": "BACKGROUND\nBreast cancer is the most common cancer in women. The use of mobile software applications for health and wellbeing promotion has grown exponentially in recent years. We systematically reviewed the breast cancer apps available in today's leading smartphone application stores and characterized them based on their features, evidence base and target audiences.\n\n\nMETHODS\nA cross-sectional study was performed to characterize breast cancer apps from the two major smartphone app stores (iOS and Android). Apps that matched the keywords \"breast cancer\" were identified and data was extracted using a structured form. Reviewers independently evaluated the eligibility and independently classified the apps.\n\n\nRESULTS\nA total of 1473 apps were a match. After removing duplicates and applying the selection criteria only 599 apps remained. Inter-rater reliability was determined using Fleiss-Cohen's Kappa. The majority of apps were free 471 (78.63%). The most common type of application was Disease and Treatment information apps (29.22%), Disease Management (19.03%) and Awareness Raising apps (15.03%). Close to 1 out of 10 apps dealt with alternative or homeopathic medicine. The majority of the apps were intended for patients (75.79%). Only one quarter of all apps (24.54%) had a disclaimer about usage and less than one fifth (19.70%) mentioned references or source material. 
Gamification specialists determined that 19.36% contained gamification elements.\n\n\nCONCLUSIONS\nThis study analyzed a large number of breast cancer-focused apps available to consumers. There has been a steady increase of breast cancer apps over the years. The breast cancer app ecosystem largely consists of start-ups and entrepreneurs. Evidence base seems to be lacking in these apps and it would seem essential that expert medical personnel be involved in the creation of medical apps." }, { "pmid": "29426814", "title": "Exploring the Specific Needs of Persons with Multiple Sclerosis for mHealth Solutions for Physical Activity: Mixed-Methods Study.", "abstract": "BACKGROUND\nMultiple sclerosis (MS) is one of the world's most common neurologic disorders, with symptoms such as fatigue, cognitive problems, and issues with mobility. Evidence suggests that physical activity (PA) helps people with MS reduce fatigue and improve quality of life. The use of mobile technologies for health has grown in recent years with little involvement from relevant stakeholders. User-centered design (UCD) is a design philosophy with the goal of creating solutions specific to the needs and tasks of the intended users. UCD involves stakeholders early and often in the design process. In a preliminary study, we assessed the landscape of commercially available MS mobile health (mHealth) apps; to our knowledge, no study has explored what persons with MS and their formal care providers think of mHealth solutions for PA.\n\n\nOBJECTIVE\nThe aim of this study was to (1) explore MS-specific needs for MS mHealth solutions for PA, (2) detect perceived obstacles and facilitators for mHealth solutions from persons with MS and health care professionals, and (3) understand the motivational aspects behind adoption of mHealth solutions for MS.\n\n\nMETHODS\nA mixed-methods design study was conducted in Kliniken Valens, Switzerland, a clinic specializing in neurological rehabilitation. We explored persons with MS and health care professionals who work with them separately. The study had a qualitative part comprising focus groups and interviews, and a quantitative part with standardized tools such as satisfaction with life scale and electronic health (eHealth) literacy.\n\n\nRESULTS\nA total of 12 persons with relapsing-remitting MS and 12 health care professionals from different backgrounds participated in the study. Participants were well-educated with an even distribution between genders. Themes identified during analysis were MS-related barriers and facilitators, mHealth design considerations, and general motivational aspects. The insights generated were used to create MS personas for design purposes. Desired mHealth features were as follows: (1) activity tracking, (2) incentives for completing tasks and objectives, (3) customizable goal setting, (4) optional sociability, and (5) game-like attitude among others. Potential barriers to mHealth apps adoption were as follows: (1) rough on-boarding experiences, (2) lack of clear use benefits, and (3) disruption of the health care provider-patient relationship. Potential facilitators were identified: (1) endorsements from experts, (2) playfulness, and (3) tailored to specific persons with MS needs. A total of 4 MS personas were developed to provide designers and computer scientists means to help in the creation of future mHealth solutions for MS.\n\n\nCONCLUSIONS\nmHealth solutions for increasing PA in persons with MS hold promise. 
Allowing for realistic goal setting and positive feedback, while minimizing usability burdens, seems to be critical for the adoption of such apps. Fatigue management is especially important in this population; more attention should be brought to this area." }, { "pmid": "16367493", "title": "The Satisfaction With Life Scale.", "abstract": "This article reports the development and validation of a scale to measure global life satisfaction, the Satisfaction With Life Scale (SWLS). Among the various components of subjective well-being, the SWLS is narrowly focused to assess global life satisfaction and does not tap related constructs such as positive affect or loneliness. The SWLS is shown to have favorable psychometric properties, including high internal consistency and high temporal reliability. Scores on the SWLS correlate moderately to highly with other measures of subjective well-being, and correlate predictably with specific personality characteristics. It is noted that the SWLS is suited for use with different age groups, and other potential uses of the scale are discussed." }, { "pmid": "17213046", "title": "eHEALS: The eHealth Literacy Scale.", "abstract": "BACKGROUND\nElectronic health resources are helpful only when people are able to use them, yet there remain few tools available to assess consumers' capacity for engaging in eHealth. Over 40% of US and Canadian adults have low basic literacy levels, suggesting that eHealth resources are likely to be inaccessible to large segments of the population. Using information technology for health requires eHealth literacy-the ability to read, use computers, search for information, understand health information, and put it into context. The eHealth Literacy Scale (eHEALS) was designed (1) to assess consumers' perceived skills at using information technology for health and (2) to aid in determining the fit between eHealth programs and consumers.\n\n\nOBJECTIVES\nThe eHEALS is an 8-item measure of eHealth literacy developed to measure consumers' combined knowledge, comfort, and perceived skills at finding, evaluating, and applying electronic health information to health problems. The objective of the study was to psychometrically evaluate the properties of the eHEALS within a population context. A youth population was chosen as the focus for the initial development primarily because they have high levels of eHealth use and familiarity with information technology tools.\n\n\nMETHODS\nData were collected at baseline, post-intervention, and 3- and 6-month follow-up using control group data as part of a single session, randomized intervention trial evaluating Web-based eHealth programs. Scale reliability was tested using item analysis for internal consistency (coefficient alpha) and test-retest reliability estimates. Principal components factor analysis was used to determine the theoretical fit of the measures with the data.\n\n\nRESULTS\nA total of 664 participants (370 boys; 294 girls) aged 13 to 21 (mean = 14.95; SD = 1.24) completed the eHEALS at four time points over 6 months. Item analysis was performed on the 8-item scale at baseline, producing a tight fitting scale with alpha = .88. Item-scale correlations ranged from r = .51 to .76. Test-retest reliability showed modest stability over time from baseline to 6-month follow-up (r = .68 to .40). Principal components analysis produced a single factor solution (56% of variance).
Factor loadings ranged from .60 to .84 among the 8 items.\n\n\nCONCLUSIONS\nThe eHEALS reliably and consistently captures the eHealth literacy concept in repeated administrations, showing promise as tool for assessing consumer comfort and skill in using information technology for health. Within a clinical environment, the eHEALS has the potential to serve as a means of identifying those who may or may not benefit from referrals to an eHealth intervention or resource. Further research needs to examine the applicability of the eHEALS to other populations and settings while exploring the relationship between eHealth literacy and health care outcomes." }, { "pmid": "29500159", "title": "More Stamina, a Gamified mHealth Solution for Persons with Multiple Sclerosis: Research Through Design.", "abstract": "BACKGROUND\nMultiple sclerosis (MS) is one of the world's most common neurologic disorders. Fatigue is one of most common symptoms that persons with MS experience, having significant impact on their quality of life and limiting their activity levels. Self-management strategies are used to support them in the care of their health. Mobile health (mHealth) solutions are a way to offer persons with chronic conditions tools to successfully manage their symptoms and problems. Gamification is a current trend among mHealth apps used to create engaging user experiences and is suggested to be effective for behavioral change. To be effective, mHealth solutions need to be designed to specifically meet the intended audience needs. User-centered design (UCD) is a design philosophy that proposes placing end users' needs and characteristics in the center of design and development, involving users early in the different phases of the software life cycle. There is a current gap in mHealth apps for persons with MS, which presents an interesting area to explore.\n\n\nOBJECTIVE\nThe purpose of this study was to describe the design and evaluation process of a gamified mHealth solution for behavioral change in persons with MS using UCD.\n\n\nMETHODS\nBuilding on previous work of our team where we identified needs, barriers, and facilitators for mHealth apps for persons with MS, we followed UCD to design and evaluate a mobile app prototype aimed to help persons with MS self-manage their fatigue. Design decisions were evidence-driven and guided by behavioral change models (BCM). Usability was assessed through inspection methods using Nielsen's heuristic evaluation.\n\n\nRESULTS\nThe mHealth solution More Stamina was designed. It is a task organization tool designed to help persons with MS manage their energy to minimize the impact of fatigue in their day-to-day life. The tool acts as a to-do list where users can input tasks in a simple manner and assign Stamina Credits, a representation of perceived effort, to the task to help energy management and energy profiling. The app also features personalization and positive feedback. The design process gave way to relevant lessons to the design of a gamified behavioral change mHealth app such as the importance of metaphors in concept design, negotiate requirements with the BCM constructs, and tailoring of gamified experiences among others. Several usability problems were discovered during heuristic evaluation and guided the iterative design of our solution.\n\n\nCONCLUSIONS\nIn this paper, we designed an app targeted for helping persons with MS in their fatigue management needs. 
We illustrate how UCD can help in designing mHealth apps and the benefits and challenges that designers might face when using this approach. This paper provides insight into the design process of gamified behavioral change mHealth apps and the negotiation process implied in it." }, { "pmid": "843571", "title": "The measurement of observer agreement for categorical data.", "abstract": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature." }, { "pmid": "26163456", "title": "How to Increase Reach and Adherence of Web-Based Interventions: A Design Research Viewpoint.", "abstract": "Nowadays, technology is increasingly used to increase people's well-being. For example, many mobile and Web-based apps have been developed that can support people to become mentally fit or to manage their daily diet. However, analyses of current Web-based interventions show that many systems are only used by a specific group of users (eg, women, highly educated), and that even they often do not persist and drop out as the intervention unfolds. In this paper, we assess the impact of design features of Web-based interventions on reach and adherence and conclude that the power that design can have has not been used to its full potential. We propose looking at design research as a source of inspiration for new (to the field) design approaches. The paper goes on to specify and discuss three of these approaches: personalization, ambient information, and use of metaphors. Central to our viewpoint is the role of positive affect triggered by well-designed persuasive features to boost adherence and well-being. Finally, we discuss the future of persuasive eHealth interventions and suggest avenues for follow-up research." }, { "pmid": "27558951", "title": "Patient Insights Into the Design of Technology to Support a Strengths-Based Approach to Health Care.", "abstract": "BACKGROUND\nAn increasing number of research studies in the psychological and biobehavioral sciences support incorporating patients' personal strengths into illness management as a way to empower and activate the patients, thus improving their health and well-being. However, lack of attention to patients' personal strengths is still reported in patient-provider communication. 
Information technology (IT) has great potential to support strengths-based patient-provider communication and collaboration, but knowledge about the users' requirements and preferences is inadequate.\n\n\nOBJECTIVE\nThis study explored the aspirations and requirements of patients with chronic conditions concerning IT tools that could help increase their awareness of their own personal strengths and resources, and support discussion of these assets in consultations with health care providers.\n\n\nMETHODS\nWe included patients with different chronic conditions (chronic pain, morbid obesity, and chronic obstructive pulmonary disease) and used various participatory research methods to gain insight into the participants' needs, values, and opinions, and the contexts in which they felt strengths-based IT tools could be used.\n\n\nRESULTS\nParticipants were positive toward using technology to support them in identifying and discussing their personal strengths in clinical consultation, but also underlined the importance of fitting it to their specific requirements and the right contexts of use. Participants recommended that technology be designed for use in preconsultation settings (eg, at home) and felt that it should support them in both identifying strengths and in finding out new ways how strengths can be used to attain personal health-related goals. Participants advocated use of technology to support advance preparation for consultations and empower them to take a more active role. IT tools were suggested to be potentially useful in specific contexts, including individual or group consultations with health care providers (physician, nurse, specialist, care team) in clinical consultations but also outside health care settings (eg, as a part of a self-management program). Participants' requirements for functionality and design include, among others: providing examples of strengths reported by other patients with chronic conditions, along with an option to extend the list with personal examples; giving an option to briefly summarize health-related history; using intuitive, easy-to-use but also engaging user interface design. Additionally, the findings are exemplified with a description of a low-fidelity paper prototype of a strengths-based tool, developed with participants in this study.\n\n\nCONCLUSIONS\nUsers' requirements for IT support of a strengths-based approach to health care appear feasible. The presented findings reflect patients' values and list potential contexts where they feel that technology could facilitate meaningful patient-provider communication that focuses not just on symptoms and problems, but also takes into account patients' strengths and resources. The findings can be used to inform further development of IT tools for use in clinical consultations." }, { "pmid": "24139771", "title": "Mobile applications for weight management: theory-based content analysis.", "abstract": "BACKGROUND\nThe use of smartphone applications (apps) to assist with weight management is increasingly prevalent, but the quality of these apps is not well characterized.\n\n\nPURPOSE\nThe goal of the study was to evaluate diet/nutrition and anthropometric tracking apps based on incorporation of features consistent with theories of behavior change.\n\n\nMETHODS\nA comparative, descriptive assessment was conducted of the top-rated free apps in the Health and Fitness category available in the iTunes App Store.
Health and Fitness apps (N=200) were evaluated using predetermined inclusion/exclusion criteria and categorized based on commonality in functionality, features, and developer description. Four researchers then evaluated the two most popular apps in each category using two instruments: one based on traditional behavioral theory (score range: 0-100) and the other on the Fogg Behavioral Model (score range: 0-6). Data collection and analysis occurred in November 2012.\n\n\nRESULTS\nEligible apps (n=23) were divided into five categories: (1) diet tracking; (2) healthy cooking; (3) weight/anthropometric tracking; (4) grocery decision making; and (5) restaurant decision making. The mean behavioral theory score was 8.1 (SD=4.2); the mean persuasive technology score was 1.9 (SD=1.7). The top-rated app on both scales was Lose It! by Fitnow Inc.\n\n\nCONCLUSIONS\nAll apps received low overall scores for inclusion of behavioral theory-based strategies." }, { "pmid": "24139770", "title": "Evidence-based strategies in weight-loss mobile apps.", "abstract": "BACKGROUND\nPhysicians have limited time for weight-loss counseling, and there is a lack of resources to which they can refer patients for assistance with weight loss. Weight-loss mobile applications (apps) have the potential to be a helpful tool, but the extent to which they include the behavioral strategies included in evidence-based interventions is unknown.\n\n\nPURPOSE\nThe primary aims of the study were to determine the degree to which commercial weight-loss mobile apps include the behavioral strategies included in evidence-based weight-loss interventions, and to identify features that enhance behavioral strategies via technology.\n\n\nMETHODS\nThirty weight-loss mobile apps, available on iPhone and/or Android platforms, were coded for whether they included any of 20 behavioral strategies derived from an evidence-based weight-loss program (i.e., Diabetes Prevention Program). Data on available apps were collected in January 2012; data were analyzed in June 2012.\n\n\nRESULTS\nThe apps included on average 18.83% (SD=13.24; range=0%-65%) of the 20 strategies. Seven of the strategies were not found in any app. The most common technology-enhanced features were barcode scanners (56.7%) and a social network (46.7%).\n\n\nCONCLUSIONS\nWeight-loss mobile apps typically included only a minority of the behavioral strategies found in evidence-based weight-loss interventions. Behavioral strategies that help improve motivation, reduce stress, and assist with problem solving were missing across apps. Inclusion of additional strategies could make apps more helpful to users who have motivational challenges." }, { "pmid": "25639757", "title": "The person-based approach to intervention development: application to digital health-related behavior change interventions.", "abstract": "This paper describes an approach that we have evolved for developing successful digital interventions to help people manage their health or illness. We refer to this as the \"person-based\" approach to highlight the focus on understanding and accommodating the perspectives of the people who will use the intervention. While all intervention designers seek to elicit and incorporate the views of target users in a variety of ways, the person-based approach offers a distinctive and systematic means of addressing the user experience of intended behavior change techniques in particular and can enhance the use of theory-based and evidence-based approaches to intervention development. 
There are two key elements to the person-based approach. The first is a developmental process involving qualitative research with a wide range of people from the target user populations, carried out at every stage of intervention development, from planning to feasibility testing and implementation. This process goes beyond assessing acceptability, usability, and satisfaction, allowing the intervention designers to build a deep understanding of the psychosocial context of users and their views of the behavioral elements of the intervention. Insights from this process can be used to anticipate and interpret intervention usage and outcomes, and most importantly to modify the intervention to make it more persuasive, feasible, and relevant to users. The second element of the person-based approach is to identify \"guiding principles\" that can inspire and inform the intervention development by highlighting the distinctive ways that the intervention will address key context-specific behavioral issues. This paper describes how to implement the person-based approach, illustrating the process with examples of the insights gained from our experience of carrying out over a thousand interviews with users, while developing public health and illness management interventions that have proven effective in trials involving tens of thousands of users." }, { "pmid": "24294329", "title": "mHealth approaches to child obesity prevention: successes, unique challenges, and next directions.", "abstract": "Childhood obesity continues to be a significant public health issue. mHealth systems offer state-of-the-art approaches to intervention design, delivery, and diffusion of treatment and prevention efforts. Benefits include cost effectiveness, potential for real-time data collection, feedback capability, minimized participant burden, relevance to multiple types of populations, and increased dissemination capability. However, these advantages are coupled with unique challenges. This commentary discusses challenges with using mHealth strategies for child obesity prevention, such as lack of scientific evidence base describing effectiveness of commercially available applications; relatively slower speed of technology development in academic research settings as compared with industry; data security, and patient privacy; potentially adverse consequences of increased sedentary screen time, and decreased focused attention due to technology use. Implications for researchers include development of more nuanced measures of screen time and other technology-related activities, and partnering with industry for developing healthier technologies. Implications for health practitioners include monitoring, assessing, and providing feedback to child obesity program designers about users' data transfer issues, perceived security and privacy, sedentary behavior, focused attention, and maintenance of behavior change. Implications for policy makers include regulation of claims and quality of apps (especially those aimed at children), supporting standardized data encryption and secure open architecture, and resources for research-industry partnerships that improve the look and feel of technology. Partnerships between academia and industry may promote solutions, as discussed in this commentary." 
}, { "pmid": "2945228", "title": "How families manage chronic conditions: an analysis of the concept of normalization.", "abstract": "Concept analysis entails the systematic examination of the attributes or characteristics of a given concept for the purpose of clarifying the meaning of that concept. The term \"normalization\" appears frequently in reports of how families respond to the illness or disability of a member. While most authors agree that families who have an ill or disabled member attempt to normalize their family life, definitions and applications of the concept vary considerably across investigators. Following guidelines for concept analysis proposed by Chinn and Jacobs (1983), the origin and development of the concept normalization are traced and criteria are presented for distinguishing between normalization and other responses families make to a member's illness or disability. Three case examples are presented: a model (normalization) case, a contrary (disassociation) case, and a related (denial) case." }, { "pmid": "8457793", "title": "Managing life with a chronic condition: the story of normalization.", "abstract": "One way that families and individuals manage living with a chronic condition is to construct and live a story of \"life as normal.\" The conceptualization of this process is based on constant comparative analysis of accounts of individuals and family members who are managing chronic conditions. The process begins with construction of the story of life as normal and continues as the story is lived over time. As the story is enacted, persons reauthor their lives. Thus the reciprocal nature of the process becomes evident. Specifically, how individuals and families construct and enact the story is discussed along with the role of health care professionals in the process and associated costs and benefits." }, { "pmid": "14728379", "title": "Just-in-time technology to encourage incremental, dietary behavior change.", "abstract": "Our multi-disciplinary team is developing mobile computing software that uses \"just-in-time\" presentation of information to motivate behavior change. Using a participatory design process, preliminary interviews have helped us to establish 10 design goals. We have employed some to create a prototype of a tool that encourages better dietary decision making through incremental, just-in-time motivation at the point of purchase." }, { "pmid": "18952344", "title": "The evolving concept of health literacy.", "abstract": "The relationship between poor literacy skills and health status is now well recognized and better understood. Interest in this relationship has led to the emergence of the concept of health literacy. The concept has emerged from two different roots - in clinical care and in public health. This paper describes the two distinctive concepts that reflect health literacy, respectively, as a clinical \"risk\", or a personal \"asset\". In the former case a strong science is developing to support screening for poor literacy skills in clinical care and this is leading to a range of changes to clinical practice and organization. The conceptualization of health literacy as an asset has its roots in educational research into literacy, concepts of adult learning, and health promotion. The science to support this conceptualization is less well developed and is focused on the development of skills and capacities intended to enable people to exert greater control over their health and the factors that shape health. 
The paper concludes that both conceptualizations are important and are helping to stimulate a more sophisticated understanding of the process of health communication in both clinical and community settings, as well as highlighting factors impacting on its effectiveness. These include more personal forms of communication and community based educational outreach. It recommends improved interaction between researchers working within the two health literacy perspectives, and further research on the measurement of health literacy. The paper also emphasizes the importance of more general strategies to promote literacy, numeracy and language skills in populations." }, { "pmid": "26944611", "title": "Understanding persuasion contexts in health gamification: A systematic analysis of gamified health behavior change support systems literature.", "abstract": "BACKGROUND\nGamification is increasingly used as a design strategy when developing behavior change support systems in the healthcare domain. It is commonly agreed that understanding the contextual factors is critical for successful gamification, but systematic analyses of the persuasive contexts have been lacking so far within gamified health intervention studies.\n\n\nOBJECTIVES AND METHODS\nThrough a persuasion context analysis of the gamified health behavior change support systems (hBCSSs) literature, we inspect how the contextual factors have been addressed in the prior gamified health BCSS studies. The implications of this study are to provide the practitioners and researchers examples of how to conduct a systematic analysis to help guide the design and research on gamified health BCSSs. The ideas derived from the analysis of the included studies will help identify potential pitfalls and shortcomings in both the research and implementations of gamified health behavior change support systems.\n\n\nRESULTS\nWe systematically analyzed the persuasion contexts of 15 gamified health intervention studies. According to our results, gamified hBCSSs are implemented under different facets of lifestyle change and treatments compliance, and use a multitude of technologies and methods. We present a set of ideas and concepts to help improve endeavors in studying gamified health intervention through comprehensive understanding of the persuasive contextual factors.\n\n\nCONCLUSIONS\nFuture research on gamified hBCSSs should systematically compare the different combinations of contextual factors, related theories, chosen gamification strategies, and the study of outcomes to help understand how to achieve the most efficient use of gamification on the different aspects of healthcare. Analyzing the persuasion context is essential to achieve this. With the attained knowledge, those planning health interventions can choose the 'tried-and-tested' approaches for each particular situation, rather than develop solutions in an ad-hoc manner." }, { "pmid": "28423815", "title": "Mobile Medical Apps and mHealth Devices: A Framework to Build Medical Apps and mHealth Devices in an Ethical Manner to Promote Safer Use - A Literature Review.", "abstract": "This paper presents a preliminary literature review in the area of ethics in the development of Mobile Medical Apps and mHealth. The review included both direct health apps and also apps marketed under the area of well-being in addition to mHealth devices. The following words and combinations of them were used to carry out the search for publications, mHealth, Apps, Ethics. 
The search engines used were Google Scholar, and PubMed. The paper is restricted to publications since 2012. The total number of papers found was 1,920 of which 84 were reviewed. The reason for so few being reviewed was that the majority only considered security. The search revealed many papers dealing with security for all types of apps and mHealth devices but there are very few papers dealing with the ethical issues related to Apps or mHealth devices in the area. It is noted however that the number of apps is increasing in number exponentially and therefore it is argued that it is necessary to pay attention to the ethical aspects. There are now estimated to be 165,000 apps available in this area. How ethics are addressed in health and well-being apps is important as they can have an effect on the health of the individual using them. In a similar way, the need for addressing ethical issues for development of well-being apps is evident. In a study [1] it was noted that even though Electronic Health Record (EHR) was the highest ranked tablet-related task only one third of clinicians said that EHR was optimized for smartphones. When apps are integrated with the EHR they fully optimize productivity. In the same study the significant challenges identified included the method of evaluation and selection of mobile health solutions in order to ensure that clinical outcomes, care and efficiency are included. Security is mentioned but again wider ethical issues were not a consideration. From the literature review it is clear that there is a need for guidelines for how developers of medical ad well-being apps and mHealth devices should address ethical issues during development, and the generation of these guidelines is the subject of ongoing research by the authors." }, { "pmid": "22646729", "title": "Colorectal smartphone apps: opportunities and risks.", "abstract": "AIM\nThe increased utilization of smartphones within the clinical environment together with connected applications (apps) provides opportunity for doctors, including coloproctologists, to integrate such technology into clinical practice. However, the reliability of unregulated medical apps has recently been called into question. Here, we review contemporary medical apps specifically themed towards colorectal diseases and assess levels of medical professional involvement in their design and content.\n\n\nMETHOD\nThe most popular smartphone app stores (iPhone, Android, Blackberry, Nokia, Windows and Samsung) were searched for colorectal disease themed apps, using the disease terms colorectal cancer, Crohn's disease, ulcerative colitis, diverticulitis, haemorrhoids, anal fissure, bowel incontinence and irritable bowel syndrome.\n\n\nRESULTS\nA total of 68 individual colorectal themed apps were identified, amongst which there were five duplicates. Only 29% of colorectal apps had had customer satisfaction ratings and 32% had named medical professional involvement in their development or content.\n\n\nCONCLUSION\nThe benefits of apps are offset by lack of colorectal specification. There is little medical professional involvement in their design. Increased regulation is required to improve accountability of app content." }, { "pmid": "23801277", "title": "Contemporary hernia smartphone applications (apps).", "abstract": "AIMS\nSmartphone technology and downloadable applications (apps) have created an unprecedented opportunity for access to medical information and healthcare-related tools by clinicians and their patients. 
Here, we review the current smartphone apps in relation to hernias, one of the most common operations worldwide. This article presents an overview of apps relating to hernias and discusses content, the presence of medical professional involvement and commercial interests.\n\n\nMETHODS\nThe most widely used smartphone app online stores (Google Play, Apple, Nokia, Blackberry, Samsung and Windows) were searched for the following hernia-related terms: hernia, inguinal, femoral, umbilical, incisional and totally extraperitoneal. Those with no reference to hernia or hernia surgery were excluded.\n\n\nRESULTS\n26 smartphone apps were identified. Only 9 (35 %) had named medical professional involvement in their design/content and only 10 (38 %) were reviewed by consumers. Commercial interests/links were evident in 96 % of the apps. One app used a validated mathematical algorithm to help counsel patients about post-operative pain.\n\n\nCONCLUSIONS AND OPPORTUNITIES\nThere were a relatively small number of apps related to hernias in view of the worldwide frequency of hernia repair. This search identified many opportunities for the development of informative and validated evidence-based patient apps which can be recommended to patients by physicians. Greater regulation, transparency of commercial interests and involvement of medical professionals in the content and peer-review of healthcare-related apps is required." }, { "pmid": "26464800", "title": "Smartphone apps for orthopaedic sports medicine - a smart move?", "abstract": "BACKGROUND\nWith the advent of smartphones together with their downloadable applications (apps), there is increasing opportunities for doctors, including orthopaedic sports surgeons, to integrate such technology into clinical practice. However, the clinical reliability of these medical apps remains questionable. We reviewed available apps themed specifically towards Orthopaedic Sports Medicine and related conditions and assessed the level of medical professional involvement in their design and content, along with a review of these apps.\n\n\nMETHOD\nThe most popular smartphone app stores (Android, Apple, Blackberry, Windows, Samsung, Nokia) were searched for Orthopaedic Sports medicine themed apps, using the search terms; Orthopaedic Sports Medicine, Orthopaedics, Sports medicine, Knee Injury, Shoulder Injury, Anterior Cruciate Ligament Tear, Medial Collateral Ligament Tear, Rotator Cuff Tear, Meniscal Tear, Tennis Elbow. All English language apps related to orthopaedic sports medicine were included.\n\n\nRESULTS\nA total of 76 individual Orthopaedic Sports Medicine themed apps were identified. According to app store classifications, there were 45 (59 %) medical themed apps, 28 (37 %) health and fitness themed apps, 1 (1 %) business app, 1 (1 %) reference app and 1 (1 %) sports app. Forty-nine (64 %) apps were available for download free of charge. For those that charged access, the prices ranged from £0.69 to £69.99. Only 51 % of sports medicine apps had customer satisfaction ratings and 39 % had named medical professional involvement in their development or content.\n\n\nCONCLUSIONS\nWe found the majority of Orthopaedic Sports Medicine apps had no named medical professional involvement, raising concerns over their content and evidence-base. We recommend increased regulation of such apps to improve the accountability of app content." 
}, { "pmid": "24883008", "title": "Mobile devices and apps for health care professionals: uses and benefits.", "abstract": "Health care professionals' use of mobile devices is transforming clinical practice. Numerous medical software applications can now help with tasks ranging from information and time management to clinical decision-making at the point of care." }, { "pmid": "11552552", "title": "The point of triangulation.", "abstract": "PURPOSE\nTo explore various types of triangulation strategies and to indicate when different types of triangulation should be used in research.\n\n\nMETHODS\nReviews included literature on triangulation and multimethod strategies published since 1960 and research books specifically focusing on triangulation.\n\n\nFINDINGS\nTriangulation is the combination of at least two or more theoretical perspectives, methodological approaches, data sources, investigators, or data analysis methods. The intent of using triangulation is to decrease, negate, or counterbalance the deficiency of a single strategy, thereby increasing the ability to interpret the findings.\n\n\nCONCLUSIONS\nThe use of triangulation strategies does not strengthen a flawed study. Researchers should use triangulation if it can contribute to understanding the phenomenon; however, they must be able to articulate why the strategy is being used and how it might enhance the study." }, { "pmid": "12366654", "title": "The value of combining qualitative and quantitative approaches in nursing research by means of method triangulation.", "abstract": "AIM\nThe article contributes to the theoretical discussion of the epistemological grounds of triangulation in nursing research.\n\n\nBACKGROUND\nIn nursing research, the combination of qualitative and quantitative methods is being used increasingly. The attempt to relate different kinds of data through triangulation of different methods is a challenging task as data derived through different methodologies are viewed as incommensurable.\n\n\nCONTENT\nEpistemological questions become a vital issue in triangulation of different methods, as qualitative and quantitative methods are built on philosophical differences in the structure and confirmation of knowledge. The epistemology of nursing is manifold, complex and multifarious in character. Contemporary nursing research should be developed on the bases of an epistemology that reflects this multiplicity. The benefits and problems of triangulation are discussed on basis of an epistemological position that acknowledges the need for various types of knowledge and that does not attempt to rank them in a hierarchical order or place different values on them.\n\n\nCONCLUSION\nWe conclude that the complexity and diversity of reality provides the ontological basis for an alternative epistemological position. The various methods used should be recognized as springing from different epistemological traditions which, when combined, add new perspectives to the phenomenon under investigation. The different types of knowledge should not be seen as ranked, but as equally valid and necessary to obtain a richer and more comprehensive picture of the issue under investigation." }, { "pmid": "27462182", "title": "Multimorbidity in chronic disease: impact on health care resources and costs.", "abstract": "Effective and resource-efficient long-term management of multimorbidity is one of the greatest health-related challenges facing patients, health professionals, and society more broadly. 
The purpose of this review was to provide a synthesis of literature examining multimorbidity and resource utilization, including implications for cost-effectiveness estimates and resource allocation decision making. In summary, previous literature has reported substantially greater, near exponential, increases in health care costs and resource utilization when additional chronic comorbid conditions are present. Increased health care costs have been linked to elevated rates of primary care and specialist physician occasions of service, medication use, emergency department presentations, and hospital admissions (both frequency of admissions and bed days occupied). There is currently a paucity of cost-effectiveness information for chronic disease interventions originating from patient samples with multimorbidity. The scarcity of robust economic evaluations in the field represents a considerable challenge for resource allocation decision making intended to reduce the burden of multimorbidity in resource-constrained health care systems. Nonetheless, the few cost-effectiveness studies that are available provide valuable insight into the potential positive and cost-effective impact that interventions may have among patients with multiple comorbidities. These studies also highlight some of the pragmatic and methodological challenges underlying the conduct of economic evaluations among people who may have advanced age, frailty, and disadvantageous socioeconomic circumstances, and where long-term follow-up may be required to directly observe sustained and measurable health and quality of life benefits. Research in the field has indicated that the impact of multimorbidity on health care costs and resources will likely differ across health systems, regions, disease combinations, and person-specific factors (including social disadvantage and age), which represent important considerations for health service planning. Important priorities for research include economic evaluations of interventions, services, or health system approaches that can remediate the burden of multimorbidity in safe and cost-effective ways." } ]
Frontiers in Neurorobotics
30233350
PMC6129609
10.3389/fnbot.2018.00054
Shaping of Shared Autonomous Solutions With Minimal Interaction
A fundamental problem in creating successful shared autonomy systems is enabling efficient specification of the problem for which an autonomous system can generate a solution. We present a general paradigm, Interactive Shared Solution Shaping (IS3), that can be broadly applied to shared autonomous systems in which a human can use their domain knowledge to interactively provide feedback during the autonomous planning process. We hypothesize that this interaction process can be optimized so that near-optimal solutions can be achieved with minimal interaction. We examine this hypothesis in the space of resource-constrained mobile search and surveillance and show that, without directly instructing the robot or fully communicating a believed target distribution, the human teammate is able to successfully shape the generation of an autonomous search route. This ability is demonstrated in three experiments, which show that (1) the IS3 approach can improve performance, in that routes generated from interactions generally reduce the variance of target detection performance and increase overall target detection; (2) the performance of the entire IS3 route generation system, once the cost of interaction is accounted for alongside movement cost, exhibits a tradeoff between performance and the number of interactions, a tradeoff that can be optimized; and (3) the IS3 route generation system is able to perform within constraints by generating tours that stay under budget when executed by a real robot in a realistic field environment.
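One way to write down the performance-versus-interaction tradeoff alluded to in point (2), purely as an illustration (the symbols and the additive form below are assumptions made for this example, not notation taken from the paper):

J(\tau, k) \;=\; \mathbb{E}\big[\text{targets detected along } \tau\big] \;-\; \lambda_{\mathrm{move}}\, c(\tau) \;-\; \lambda_{\mathrm{int}}\, k, \qquad \text{s.t. } c(\tau) \le B,

where \tau is the generated tour, c(\tau) its movement cost, k the number of human interactions used to shape it, B the travel budget, and \lambda_{\mathrm{move}}, \lambda_{\mathrm{int}} are nonnegative weights. Optimizing over k then captures the hypothesis that a small number of well-placed interactions recovers most of the detection gain.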
2. Related work

At its core, this work is closely related to the general problem of planning informative paths for mobile robots. Most commonly, the application of interest is map exploration, i.e., autonomous uncovering of environment structure by planning trajectories that maximize information gain on the underlying probabilistic map representation. Recent methods formulate this as optimization-based solutions to problems of active control and planning, as presented by Kollar and Roy (2008), Julian et al. (2013), and Charrow et al. (2015a). More recently, similar information-theoretic techniques have been applied to the target-detection and tracking problems highlighted by Dames et al. (2015) and Charrow et al. (2015b).

The above techniques are all considered in the paradigm of receding horizon control. That is, they operate in the context of a feedback controller, reacting to the most recent model of the environment or problem at hand. When the goal is to autonomously plan for the best sequence of actions over a longer, possibly infinite, time horizon, it is common to turn to techniques from the Operations Research (OR) community. This is especially true when one seeks to incorporate budget- or topologically-based constraints. Further, in these settings, it is typical to have a discrete rather than continuous representation of locations of interest in the environment. For example, the traveling salesperson problem described by Laporte and Martello (1990) looks to find the shortest path that visits all sites, forming a tour that returns to the starting location.

When a budget is introduced to this problem, it is referred to as the selective traveling salesperson or Orienteering Problem (OP). It is well known that this is an NP-hard problem, and most algorithms addressing the OP rely on approximations. Indeed, the development of practical solution algorithms continues to be an active area of research (e.g., Blum et al., 2007; Vansteenwegen et al., 2011). While solutions from the OR community typically focus on problems with fairly coarse discretizations of the environment, recent work by Tokekar et al. (2016) has demonstrated how these techniques can be applied in the field-robotics domain for hybrid aerial-ground systems.

In addition to the assumption of pre-computed discrete sites, traditional solutions to the OP typically also assume independent reward at each site. However, in a real-world information-collecting application, it is clear that rewards for visiting sites, especially nearby ones, are highly correlated. Indeed, this observation was noted by Yu et al. (2014), where the correlated orienteering problem is introduced as an extension in which the reward for visiting each location is correlated with the set of other locations visited, making the problem more amenable to planning informative tours in the context of persistent monitoring. More recently, Arora and Scherer (2016) demonstrate efficient approximate algorithms that solve this problem at speeds that make it reasonable to use in an online robotic setting. We adopt the structure of this algorithm in our work here.

One of the key observations of this work is that in all of the above planning and control scenarios, the robot or autonomous planning system has a precise definition of the objective function. Considerably less attention has been paid to how a human operator or teammate can efficiently communicate this objective function to the autonomous system.

There is, however, some work by Crossman et al. (2012), Alonso-Mora et al. (2015), and Dawson et al. (2015) that does look toward human interaction with autonomous planning systems. We see two fundamental and contrasting approaches. First, work such as that of Yi et al. (2014) models human input as a sequence of constraints within which the system plans a maximally informative path. Second, in the work by Lin and Goodrich (2010), a strategy is adopted in which the human shapes the objective function that is used to make autonomous decisions. Since we are interested in a domain of problems that are already heavily constrained, e.g., with a limited budget and requirements on cyclical paths, we adopt the second strategy and focus on how the human teammate can provide iterative updates to the objective function, demonstrated as a proof of concept in our earlier work, Reardon and Fink (2017).

Finally, we do note that there is a potential connection between our work and the work concentrating on the idea of reward shaping in the reinforcement learning community. Clearly, there is a fundamental difference in the objective when interacting with a system during the training of a policy rather than during the execution of an autonomous planning algorithm. However, we do draw inspiration from the interactive approaches described by Judah et al. (2014) and Raza et al. (2015).
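To make the budget-constrained, reward-collecting tour problem discussed above concrete, the following Python fragment sketches a greedy cheapest-insertion heuristic for an orienteering-style tour in which a human teammate's feedback enters as multiplicative weights on site rewards. It is only an illustrative sketch of the problem structure: the site coordinates, rewards, shaping weights, and function names are assumptions made for this example, and this is neither the IS3 system nor the approximate solver of Arora and Scherer (2016).

import math


def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def tour_length(tour, pos):
    return sum(dist(pos[u], pos[v]) for u, v in zip(tour, tour[1:]))


def greedy_orienteering_tour(pos, reward, budget, start=0, shaping=None):
    """Greedy cheapest-insertion heuristic for a budget-constrained tour.

    pos     -- {site: (x, y)} coordinates of discrete sites of interest
    reward  -- {site: base reward} for visiting each site
    budget  -- maximum allowed tour length; the tour starts and ends at `start`
    shaping -- {site: weight}; multiplicative weights standing in for human feedback
    """
    shaping = shaping or {}
    tour = [start, start]               # closed tour: leave `start` and return to it
    remaining = set(pos) - {start}
    while remaining:
        best = None                     # (reward-per-cost, site, insertion index)
        for s in remaining:
            shaped_reward = reward[s] * shaping.get(s, 1.0)
            # cheapest place to splice site s into the current tour
            cost, idx = min(
                (dist(pos[tour[i]], pos[s]) + dist(pos[s], pos[tour[i + 1]])
                 - dist(pos[tour[i]], pos[tour[i + 1]]), i + 1)
                for i in range(len(tour) - 1)
            )
            if tour_length(tour, pos) + cost <= budget:
                ratio = shaped_reward / (cost + 1e-9)
                if best is None or ratio > best[0]:
                    best = (ratio, s, idx)
        if best is None:                # no remaining site fits within the budget
            break
        _, s, idx = best
        tour.insert(idx, s)
        remaining.discard(s)
    return tour, tour_length(tour, pos)


if __name__ == "__main__":
    pos = {0: (0, 0), 1: (2, 1), 2: (4, 0), 3: (1, 3), 4: (3, 3)}
    reward = {0: 0.0, 1: 5.0, 2: 3.0, 3: 4.0, 4: 6.0}
    # A teammate's hint such as "the target is probably to the north",
    # expressed here simply as up-weighting the northern sites 3 and 4.
    shaping = {3: 2.0, 4: 2.0}
    print(greedy_orienteering_tour(pos, reward, budget=12.0, shaping=shaping))

Re-running the planner after each new piece of feedback (updating `shaping` and calling the function again) mirrors the spirit of the iterative objective updates described above, although the actual system operates on continuous target distributions and a far more capable solver.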
[]
[]
Network Neuroscience
30294703
PMC6145855
10.1162/netn_a_00044
NeuroCave: A web-based immersive visualization platform for exploring connectome datasets
We introduce NeuroCave, a novel immersive visualization system that facilitates the visual inspection of structural and functional connectome datasets. The representation of the human connectome as a graph enables neuroscientists to apply network-theoretic approaches in order to explore its complex characteristics. With NeuroCave, brain researchers can interact with the connectome—either in a standard desktop environment or while wearing portable virtual reality headsets (such as Oculus Rift, Samsung Gear, or Google Daydream VR platforms)—in any coordinate system or topological space, as well as cluster brain regions into different modules on-demand. Furthermore, a default side-by-side layout enables simultaneous, synchronized manipulation in 3D, utilizing modern GPU hardware architecture, and facilitates comparison tasks across different subjects or diagnostic groups or longitudinally within the same subject. Visual clutter is mitigated using a state-of-the-art edge bundling technique and through an interactive layout strategy, while modular structure is optimally positioned in 3D exploiting mathematical properties of platonic solids. NeuroCave provides new functionality to support a range of analysis tasks not available in other visualization software platforms.
Related Work

Many tools exist to generate and visualize the connectome in 2D and 3D (Margulies, Böttger, Watanabe, & Gorgolewski, 2013). Three-dimensional visualization tools most often represent the connectome as node-link diagrams, in which nodes are positioned relative to their corresponding anatomical locations and links represent the connectivity between nodes. Examples of such tools include the Connectome Visualization Utility (LaPlante, Douw, Tang, & Stufflebeam, 2014), BrainNet Viewer (Xia, Wang, & He, 2013), and the Connectome Viewer Toolkit (Gerhard et al., 2011). In general, node-link diagrams provide an effective overview of the entire graph, which makes it easy to observe relationships between both directly and indirectly connected nodes. However, excessive visual clutter is introduced as the number of edge crossings increases, affecting the readability of the graph.

Representations of the connectome in 2D are also common. In certain cases, adjacency matrices can better manage large connectome datasets than node-link diagrams (Alper, Bach, Henry Riche, Isenberg, & Fekete, 2013; Ma et al., 2015). However, some visual analysis tasks are difficult to perform using matrix representations (Ghoniem, Fekete, & Castagliola, 2005; Keller, Eckert, & Clarkson, 2006), such as detecting graph alterations in group studies. A popular 2D technique to highlight relevant brain connectivity patterns is the connectogram (Irimia et al., 2012). In a connectogram, the names of the brain regions are presented along the perimeter of a circle, and the regions are positioned in two halves according to the hemisphere they belong to. Furthermore, each hemisphere is broken down into different lobes, subcortical structures, and the cerebellum. The inner space of the circle is divided into multiple colored nested rings, where each ring shows a heat map representing a specific metric. Interconnections between the regions are illustrated inside the circle by means of curved lines. As with NeuroCave, a goal of the connectogram is to more effectively represent densely connected networks, as is the case for the human connectome. Cacciola et al. (2017) demonstrate that the intrinsic geometry of a structural brain connectome relates to its brain anatomy, noting that the hyperbolic disk seems a congruous space of representation for structural connectomes, one in which it is possible to design brain latent-geometry-based markers for differential connectomic analysis of the healthy and the diseased. Although connectograms help to prevent some of the clutter that occurs when visualizing networks containing a large number of edges, it can be challenging to correlate anatomical structures with connectivity, and users may find it difficult to make sense of connectograms with many layers of inner and outer circles (Burch & Weiskopf, 2014). Moreover, it can be time-consuming to produce a connectogram using the popular Circos software (Krzywinski et al., 2009), which requires the preparation of nine distinct configuration files. Finally, lacking a graphical user interface, Circos is generally used as a presentation tool rather than as a means to interact with connectome data. Although NeuroCave focuses on supporting the analysis tasks defined above, it can be used to represent data in an analogous way. Figure 2 shows an example of similarly dense datasets represented in 2D using a connectogram and in 3D using NeuroCave.

Figure 2. An example of a 2D connectogram (left), taken from the Circos tutorial website (http://circos.ca/tutorials/), versus a 3D platonic solid representation of a connectome and its modularity using NeuroCave (right). With NeuroCave, users can interactively select particular nodes or groups of nodes to explore connectivity on demand, and alternative layouts based on clustering parameters can be generated as required for a particular analysis task.

Although most commonly used visualization tools are dedicated desktop applications, web-based implementations, such as Slice:Drop (Haehn, 2013) or BrainBrowser (Sherif, Kassis, Rousseau, Adalat, & Evans, 2015), free the user from being attached to a specific operating system (Pieloth, Pizarro, Knosche, Maess, & Fuchs, 2013). To this end, NeuroCave is a web-based application and runs in any modern browser, on both desktop and mobile computers. Rojas et al. (2014) find that the use of stereoscopic techniques can provide a more immersive way to explore brain imaging data, and Hänel, Pieperhoff, Hentschel, Amunts, & Kuhlen (2014) show that healthcare professionals perceive the increased dimensionality provided by stereoscopy as beneficial for understanding depth in the displayed scenery. Moreover, Ware & Mitchell (2008) find that the use of stereographic visualizations reduces the error rate in graph perception for large graphs with more than 500 nodes. Alper, Hollerer, Kuchera-Morin, & Forbes (2011) observe that, when coupled with a highlighting technique, stereoscopic representations of 3D graphs outperform their nonimmersive counterpart. NeuroCave harnesses the visualization capabilities of virtual reality (VR) environments, which can facilitate spatial manipulation, identification, and classification of objects and imagery, and aid users in understanding complex scenes (Bohil, Alicea, & Biocca, 2011; Forbes, Villegas, Almryde, & Plante, 2014; Marai, Forbes, & Johnson, 2016). Other tools that make use of VR for visualizing connectomes include AlloBrain (Thompson et al., 2009), BrainX3 (Arsiwalla et al., 2015; Betella et al., 2014), and BRAINtrinsic (Conte et al., 2016; Conte, Ye, Forbes, Ajilore, & Leow, 2015). Similar to BRAINtrinsic, NeuroCave emphasizes the ability to switch between anatomical representations and low-dimensional embeddings of connectome datasets. Although NeuroCave includes some of the virtual reality functionality available in these previous connectome visualization tools, it also enables users to move seamlessly between desktop and VR environments for interactively exploring 3D connectomes in a range of topological spaces, supports larger connectome datasets, includes novel layout strategies for presenting clusters of data in 3D space, and introduces a hardware-accelerated edge bundling technique for reducing link clutter.

Table 1 provides an overview of popular tools used for visualizing connectome datasets. Although each of the visualization software tools listed in Table 1 may partially address the visualization tasks delineated in the introduction, none provides a visualization that can directly facilitate tasks involving various types of comparison between datasets, since they all lack the ability to simultaneously load and synchronize a comparative visualization of multiple connectomes. Instead, the user needs to open multiple instances of the application, which usually requires the use of multiple monitors, in order to visually compare the structural or functional connectomes of the same subject, or of two subjects belonging to different groups. Clearly, with two instances of the software running, user actions will not be synchronized, making it more difficult to assess visual differences. Some of the applications implemented in scripting languages, such as R and MATLAB, do provide the user with the flexibility to customize views (e.g., to present multiple connectomes simultaneously); however, this requires additional effort as well as programming expertise. By introducing a side-by-side layout, NeuroCave enables neuroscientists and researchers to efficiently execute tasks that involve comparative analyses, and to simultaneously spot changes occurring within and across subjects. NeuroCave does not target tractography-related uses, which, although an important area of connectomics visualization, are not usually a requirement for clinical neuroscientists (who are the intended audience for our visualization software).

Table 1. A survey of neuroimaging connectomic software. The table categorizes each software package in terms of whether it supports structural connectomes, functional connectomes, or both, and whether it is accessed online via a browser. Additionally, we indicate whether the software visualizes connectomes as a volume, a surface, or a graph.
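As a rough illustration of the module-anchored 3D layout idea mentioned above (clusters positioned using the geometry of a platonic solid), the short Python sketch below detects modules in a toy connectivity matrix with networkx and scatters each module around one vertex of an icosahedron. This is only a minimal sketch under assumed inputs: the function names, jitter parameter, and toy matrix are illustrative, and it does not reproduce NeuroCave's actual clustering, layout, or GPU-accelerated edge bundling.

import itertools

import networkx as nx
import numpy as np
from networkx.algorithms import community


def icosahedron_vertices():
    """The 12 vertices of an icosahedron (cyclic permutations of (0, +/-1, +/-phi)),
    projected onto the unit sphere."""
    phi = (1 + 5 ** 0.5) / 2
    verts = []
    for a, b in itertools.product((-1.0, 1.0), (-phi, phi)):
        verts += [(0.0, a, b), (a, b, 0.0), (b, 0.0, a)]
    v = np.array(verts)
    return v / np.linalg.norm(v, axis=1, keepdims=True)


def modular_layout(adjacency, jitter=0.15, seed=0):
    """Detect modules in a weighted connectivity matrix and place each module's
    regions in a small cloud around one icosahedron vertex, so that modules
    end up well separated in 3D."""
    rng = np.random.default_rng(seed)
    graph = nx.from_numpy_array(adjacency)
    modules = community.greedy_modularity_communities(graph, weight="weight")
    anchors = icosahedron_vertices()
    positions = {}
    for m, nodes in enumerate(modules):
        anchor = anchors[m % len(anchors)]
        for node in nodes:
            positions[node] = anchor + jitter * rng.normal(size=3)
    return modules, positions


if __name__ == "__main__":
    # A toy symmetric "connectivity matrix" standing in for a parcellated connectome.
    rng = np.random.default_rng(1)
    A = rng.random((20, 20))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0.0)
    modules, positions = modular_layout(A)
    print(len(modules), "modules; region 0 placed at", np.round(positions[0], 2))

The sketch only captures the geometric intuition of anchoring clusters at platonic-solid vertices; in a browser-based viewer the resulting positions would then be handed to a 3D renderer, with node and edge drawing handled on the GPU.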
[ "16399673", "22034353", "25759649", "17488217", "22048061", "19190637", "17603406", "28007987", "26037235", "22248573", "28866584", "21713110", "24847243", "16310346", "22363313", "19541911", "25437873", "4075870", "10867223", "9918726", "23660027", "21151783", "18977091", "25414626", "15635061", "25628562", "16201007", "11125149", "22049421", "16934233", "23861951", "27747562", "26096223", "28675490" ]
[ { "pmid": "16399673", "title": "A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs.", "abstract": "Small-world properties have been demonstrated for many complex networks. Here, we applied the discrete wavelet transform to functional magnetic resonance imaging (fMRI) time series, acquired from healthy volunteers in the resting state, to estimate frequency-dependent correlation matrices characterizing functional connectivity between 90 cortical and subcortical regions. After thresholding the wavelet correlation matrices to create undirected graphs of brain functional networks, we found a small-world topology of sparse connections most salient in the low-frequency interval 0.03-0.06 Hz. Global mean path length (2.49) was approximately equivalent to a comparable random network, whereas clustering (0.53) was two times greater; similar parameters have been reported for the network of anatomical connections in the macaque cortex. The human functional network was dominated by a neocortical core of highly connected hubs and had an exponentially truncated power law degree distribution. Hubs included recently evolved regions of the heteromodal association cortex, with long-distance connections to other regions, and more cliquishly connected regions of the unimodal association and primary cortices; paralimbic and limbic regions were topologically more peripheral. The network was more resilient to targeted attack on its hubs than a comparable scale-free network, but about equally resilient to random error. We conclude that correlated, low-frequency oscillations in human fMRI data have a small-world architecture that probably reflects underlying anatomical connectivity of the cortex. Because the major hubs of this network are critical for cognition, its slow dynamics could provide a physiological substrate for segregated and distributed information processing." }, { "pmid": "22034353", "title": "Stereoscopic highlighting: 2D graph visualization on stereo displays.", "abstract": "In this paper we present a new technique and prototype graph visualization system, stereoscopic highlighting, to help answer accessibility and adjacency queries when interacting with a node-link diagram. Our technique utilizes stereoscopic depth to highlight regions of interest in a 2D graph by projecting these parts onto a plane closer to the viewpoint of the user. This technique aims to isolate and magnify specific portions of the graph that need to be explored in detail without resorting to other highlighting techniques like color or motion, which can then be reserved to encode other data attributes. This mechanism of stereoscopic highlighting also enables focus+context views by juxtaposing a detailed image of a region of interest with the overall graph, which is visualized at a further depth with correspondingly less detail. In order to validate our technique, we ran a controlled experiment with 16 subjects comparing static visual highlighting to stereoscopic highlighting on 2D and 3D graph layouts for a range of tasks. Our results show that while for most tasks the difference in performance between stereoscopic highlighting alone and static visual highlighting is not statistically significant, users performed better when both highlighting methods were used concurrently. In more complicated tasks, 3D layout with static visual highlighting outperformed 2D layouts with a single highlighting method. 
However, it did not outperform the 2D layout utilizing both highlighting techniques simultaneously. Based on these results, we conclude that stereoscopic highlighting is a promising technique that can significantly enhance graph visualizations for certain use cases." }, { "pmid": "25759649", "title": "Network dynamics with BrainX(3): a large-scale simulation of the human brain network with real-time interaction.", "abstract": "BrainX(3) is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX(3) in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics or implement network analysis functions from a library of graph theoretic measures. BrainX(3) can thus be used as a novel immersive platform for exploration and analysis of dynamical activity patterns in brain networks, both at rest or in a task-related state, for discovery of signaling pathways associated to brain function and/or dysfunction and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and shed insight on the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low firing attractor state. We also found that the dynamics of a noisy network is less resilient to lesions. Our simulations on TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity compared to healthy resting-state, over specific brain areas." }, { "pmid": "17488217", "title": "Superior temporal gyrus, language function, and autism.", "abstract": "Deficits in language are a core feature of autism. The superior temporal gyrus (STG) is involved in auditory processing, including language, but also has been implicated as a critical structure in social cognition. It was hypothesized that subjects with autism would display different size-function relationships between the STG and intellectual-language-based abilities when compared to controls. Intellectual ability was assessed by either the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) or Wechsler Adult Intelligence Scale-Third Edition (WAIS-III), where three intellectual quotients (IQ) were computed: verbal (VIQ), performance (PIQ), and full-scale (FSIQ). Language ability was assessed by the Clinical Evaluation of Language Fundamentals-Third Edition (CELF-3), also divided into three index scores: receptive, expressive, and total. Seven to 19-year-old rigorously diagnosed subjects with autism (n = 30) were compared to controls (n = 39; 13 of whom had a deficit in reading) of similar age who were matched on education, PIQ, and head circumference. STG volumes were computed based on 1.5 Tesla magnetic resonance imaging (MRI). IQ and CELF-3 performance were highly interrelated regardless of whether subjects had autism or were controls. Both IQ and CELF-3 ability were positively correlated with STG in controls, but a different pattern was observed in subjects with autism. 
In controls, left STG gray matter was significantly (r = .42, p < or = .05) related to receptive language on the CELF-3; in contrast, a zero order correlation was found with autism. When plotted by age, potential differences in growth trajectories related to language development associated with STG were observed between controls and those subjects with autism. Taken together, these findings suggest a possible failure in left hemisphere lateralization of language function involving the STG in autism." }, { "pmid": "22048061", "title": "Virtual reality in neuroscience research and therapy.", "abstract": "Virtual reality (VR) environments are increasingly being used by neuroscientists to simulate natural events and social interactions. VR creates interactive, multimodal sensory stimuli that offer unique advantages over other approaches to neuroscientific research and applications. VR's compatibility with imaging technologies such as functional MRI allows researchers to present multimodal stimuli with a high degree of ecological validity and control while recording changes in brain activity. Therapists, too, stand to gain from progress in VR technology, which provides a high degree of control over the therapeutic experience. Here we review the latest advances in VR technology and its applications in neuroscience research." }, { "pmid": "19190637", "title": "Complex brain networks: graph theoretical analysis of structural and functional systems.", "abstract": "Recent developments in the quantitative analysis of complex networks, based largely on graph theory, have been rapidly translated to studies of brain network organization. The brain's structural and functional systems have features of complex networks--such as small-world topology, highly connected hubs and modularity--both at the whole-brain scale of human neuroimaging and at a cellular scale in non-human animals. In this article, we review studies investigating complex brain networks in diverse experimental modalities (including structural and functional MRI, diffusion tensor imaging, magnetoencephalography and electroencephalography in humans) and provide an accessible introduction to the basic principles of graph theory. We also highlight some of the technical challenges and key questions to be addressed by future developments in this rapidly moving field." }, { "pmid": "17603406", "title": "The precuneus and consciousness.", "abstract": "This article reviews the rapidly growing literature on the functional anatomy and behavioral correlates of the precuneus, with special reference to imaging neuroscience studies using hamodynamic techniques. The precuneus, along with adjacent areas within the posteromedial parietal cortex, is among the most active cortical regions according to the \"default mode\" of brain function during the conscious resting state, whereas it selectively deactivates in a number of pathophysiological conditions (ie, sleep, vegetative state, drug-induced anesthesia), and neuropsychiatric disorders (ie, epilepsy, Alzheimer's disease, and schizophrenia) characterized by impaired consciousness. These findings, along with the widespread connectivity pattern, suggest that the precuneus may play a central role in the neural network correlates of consciousness. Specifically, its activity seems to correlate with self-reflection processes, possibly involving mental imagery and episodic/autobiographical memory retrieval." 
}, { "pmid": "28007987", "title": "Connectomic correlates of response to treatment in first-episode psychosis.", "abstract": "Connectomic approaches using diffusion tensor imaging have contributed to our understanding of brain changes in psychosis, and could provide further insights into the neural mechanisms underlying response to antipsychotic treatment. We here studied the brain network organization in patients at their first episode of psychosis, evaluating whether connectome-based descriptions of brain networks predict response to treatment, and whether they change after treatment. Seventy-six patients with a first episode of psychosis and 74 healthy controls were included. Thirty-three patients were classified as responders after 12 weeks of antipsychotic treatment. Baseline brain structural networks were built using whole-brain diffusion tensor imaging tractography, and analysed using graph analysis and network-based statistics to explore baseline characteristics of patients who subsequently responded to treatment. A subgroup of 43 patients was rescanned at the 12-week follow-up, to study connectomic changes over time in relation to treatment response. At baseline, those subjects who subsequently responded to treatment, compared to those that did not, showed higher global efficiency in their structural connectomes, a network configuration that theoretically facilitates the flow of information. We did not find specific connectomic changes related to treatment response after 12 weeks of treatment. Our data suggest that patients who have an efficiently-wired connectome at first onset of psychosis show a better subsequent response to antipsychotics. However, response is not accompanied by specific structural changes over time detectable with this method." }, { "pmid": "26037235", "title": "A novel brain partition highlights the modular skeleton shared by structure and function.", "abstract": "Elucidating the intricate relationship between brain structure and function, both in healthy and pathological conditions, is a key challenge for modern neuroscience. Recent progress in neuroimaging has helped advance our understanding of this important issue, with diffusion images providing information about structural connectivity (SC) and functional magnetic resonance imaging shedding light on resting state functional connectivity (rsFC). Here, we adopt a systems approach, relying on modular hierarchical clustering, to study together SC and rsFC datasets gathered independently from healthy human subjects. Our novel approach allows us to find a common skeleton shared by structure and function from which a new, optimal, brain partition can be extracted. We describe the emerging common structure-function modules (SFMs) in detail and compare them with commonly employed anatomical or functional parcellations. Our results underline the strong correspondence between brain structure and resting-state dynamics as well as the emerging coherent organization of the human brain." }, { "pmid": "22248573", "title": "FreeSurfer.", "abstract": "FreeSurfer is a suite of tools for the analysis of neuroimaging data that provides an array of algorithms to quantify the functional, connectional and structural properties of the human brain. It has evolved from a package primarily aimed at generating surface representations of the cerebral cortex into one that automatically creates models of most macroscopically visible structures in the human brain given any reasonable T1-weighted input image. 
It is freely available, runs on a wide variety of hardware and software platforms, and is open source." }, { "pmid": "28866584", "title": "Dynamic Influence Networks for Rule-Based Models.", "abstract": "We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle." }, { "pmid": "21713110", "title": "The connectome viewer toolkit: an open source framework to manage, analyze, and visualize connectomes.", "abstract": "Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/" }, { "pmid": "24847243", "title": "Interactive 3D visualization of structural changes in the brain of a person with corticobasal syndrome.", "abstract": "The visualization of the progression of brain tissue loss in neurodegenerative diseases like corticobasal syndrome (CBS) can provide not only information about the localization and distribution of the volume loss, but also helps to understand the course and the causes of this neurodegenerative disorder. 
The visualization of such medical imaging data is often based on 2D sections, because they show both internal and external structures in one image. Spatial information, however, is lost. 3D visualization of imaging data is capable to solve this problem, but it faces the difficulty that more internally located structures may be occluded by structures near the surface. Here, we present an application with two designs for the 3D visualization of the human brain to address these challenges. In the first design, brain anatomy is displayed semi-transparently; it is supplemented by an anatomical section and cortical areas for spatial orientation, and the volumetric data of volume loss. The second design is guided by the principle of importance-driven volume rendering: A direct line-of-sight to the relevant structures in the deeper parts of the brain is provided by cutting out a frustum-like piece of brain tissue. The application was developed to run in both, standard desktop environments and in immersive virtual reality environments with stereoscopic viewing for improving the depth perception. We conclude, that the presented application facilitates the perception of the extent of brain degeneration with respect to its localization and affected regions." }, { "pmid": "16310346", "title": "The role of the left Brodmann's areas 44 and 45 in reading words and pseudowords.", "abstract": "In this functional magnetic resonance imaging (fMRI) study, we investigated the influence of two task (lexical decision, LDT; phonological decision, PDT) on activation in Broca's region (left Brodmann's areas [BA] 44 and 45) during the processing of visually presented words and pseudowords. Reaction times were longer for pseudowords than words in LDT but did not differ in PDT. By combining the fMRI data with cytoarchitectonic anatomical probability maps, we demonstrated that the left BA 44 and BA 45 were stronger activated for pseudowords than for words. Separate analyses for LDT and PDT revealed that the left BA 44 was activated in both tasks, whereas left BA 45 was only involved in LDT. The results are interpreted within a dual-route model of reading with the left BA 44 supporting grapheme-to-phoneme conversion and the left BA 45 being related to explicit lexical search." }, { "pmid": "22363313", "title": "Patient-tailored connectomics visualization for the assessment of white matter atrophy in traumatic brain injury.", "abstract": "Available approaches to the investigation of traumatic brain injury (TBI) are frequently hampered, to some extent, by the unsatisfactory abilities of existing methodologies to efficiently define and represent affected structural connectivity and functional mechanisms underlying TBI-related pathology. In this paper, we describe a patient-tailored framework which allows mapping and characterization of TBI-related structural damage to the brain via multimodal neuroimaging and personalized connectomics. Specifically, we introduce a graphically driven approach for the assessment of trauma-related atrophy of white matter connections between cortical structures, with relevance to the quantification of TBI chronic case evolution. This approach allows one to inform the formulation of graphical neurophysiological and neuropsychological TBI profiles based on the particular structural deficits of the affected patient. 
In addition, it allows one to relate the findings supplied by our workflow to the existing body of research that focuses on the functional roles of the cortical structures being targeted. A graphical means for representing patient TBI status is relevant to the emerging field of personalized medicine and to the investigation of neural atrophy." }, { "pmid": "19541911", "title": "Circos: an information aesthetic for comparative genomics.", "abstract": "We created a visualization tool called Circos to facilitate the identification and analysis of similarities and differences arising from comparisons of genomes. Our tool is effective in displaying variation in genome structure and, generally, any other kind of positional relationships between genomic intervals. Such data are routinely produced by sequence alignments, hybridization arrays, genome mapping, and genotyping studies. Circos uses a circular ideogram layout to facilitate the display of relationships between pairs of positions by the use of ribbons, which encode the position, size, and orientation of related genomic elements. Circos is capable of displaying data as scatter, line, and histogram plots, heat maps, tiles, connectors, and text. Bitmap or vector images can be created from GFF-style data inputs and hierarchical configuration files, which can be easily generated by automated tools, making Circos suitable for rapid deployment in data analysis and reporting pipelines." }, { "pmid": "25437873", "title": "The Connectome Visualization Utility: software for visualization of human brain networks.", "abstract": "In analysis of the human connectome, the connectivity of the human brain is collected from multiple imaging modalities and analyzed using graph theoretical techniques. The dimensionality of human connectivity data is high, and making sense of the complex networks in connectomics requires sophisticated visualization and analysis software. The current availability of software packages to analyze the human connectome is limited. The Connectome Visualization Utility (CVU) is a new software package designed for the visualization and network analysis of human brain networks. CVU complements existing software packages by offering expanded interactive analysis and advanced visualization features, including the automated visualization of networks in three different complementary styles and features the special visualization of scalar graph theoretical properties and modular structure. By decoupling the process of network creation from network visualization and analysis, we ensure that CVU can visualize networks from any imaging modality. CVU offers a graphical user interface, interactive scripting, and represents data uses transparent neuroimaging and matrix-based file types rather than opaque application-specific file formats." }, { "pmid": "4075870", "title": "Emergence and characterization of sex differences in spatial ability: a meta-analysis.", "abstract": "Sex differences in spatial ability are widely acknowledged, yet considerable dispute surrounds the magnitude, nature, and age of first occurrence of these differences. This article focuses on 3 questions about sex differences in spatial ability: What is the magnitude of sex differences in spatial ability? On which aspects of spatial ability are sex differences found? and When, in the life span, are sex differences in spatial ability first detected? 
Implications for clarifying the linkage between sex differences in spatial ability and other differences between males and females are discussed. We use meta-analysis, a method for synthesizing empirical studies, to investigate these questions. Results of the meta-analysis suggest that sex differences arise on some types of spatial ability but not others, that large sex differences are found only on measures of mental rotation, that smaller sex differences are found on measures of spatial perception, and that, when sex differences are found, they can be detected across the life span." }, { "pmid": "10867223", "title": "Longitudinal effects of estrogen replacement therapy on PET cerebral blood flow and cognition.", "abstract": "Observational studies suggest that estrogen replacement therapy (ERT) may protect against age-related memory decline and lower the risk of Alzheimer's disease (AD). This study aimed to characterize the neural substrates of those effects by comparing 2-year longitudinal changes in regional cerebral blood flow (rCBF) in 12 ERT users and 16 nonusers. Positron emission tomography (PET) measurements of rCBF were obtained under three conditions: rest, and verbal and figural recognition memory tasks. Groups showed different patterns of change in rCBF over time in a number of brain areas. These group differences, for the most part, reflected regions of increased rCBF over time in users compared to nonusers. The greatest differences between ERT users and nonusers were in the hippocampus, parahippocampal gyrus, and temporal lobe, regions that form a memory circuit and that are sensitive to preclinical AD. Across a battery of standardized neuropsychological tests of memory, users obtained higher scores than did nonusers of comparable intellect. Group differences in longitudinal change in rCBF patterns may reflect one way through which hormones modulate brain activity and contribute to enhanced memory performance among ERT users." }, { "pmid": "9918726", "title": "MRI-Based topographic parcellation of human cerebral white matter and nuclei II. Rationale and applications with systematics of cerebral connectivity.", "abstract": "We describe a system for parcellation of the human cerebral white matter and nuclei, based upon magnetic resonance images. An algorithm for subdivision of the cerebral central white matter according to topographic criteria is developed in the companion manuscript. In the present paper we provide a rationale for this system of parcellation of the central white matter and we extend the system of cerebral parcellation to include principal subcortical gray structures such as the thalamus and the basal ganglia. The volumetric measures of the subcortical gray and white matter parcellation units in 20 young adult brains are computed and reported here as well. In addition, with the comprehensive system for cerebral gray and white matter structure parcellation as reference, we formulate a systematics of forebrain connectivity. The degree to which functionally specific brain areas correspond to topographically specific areas is an open empirical issue. The resolution of this issue requires the development of topographically specific anatomic analyses, such as presented in the current system, and the application of such systems to a comprehensive set of functional-anatomic correlation studies in order to establish the degree of structural-functional correspondence. 
This system is expected to be applied in both cognitive and clinical neuroscience as an MRI-based topographic systematics of human forebrain anatomy with normative volumetric reference and also as a system of reference for the anatomic organization of specific neural systems as disrupted by focal lesions in lesion-deficit correlations." }, { "pmid": "23660027", "title": "Visualizing the human connectome.", "abstract": "Innovations in data visualization punctuate the landmark advances in human connectome research since its beginnings. From tensor glyphs for diffusion-weighted imaging, to advanced rendering of anatomical tracts, to more recent graph-based representations of functional connectivity data, many of the ways we have come to understand the human connectome are through the intuitive insight these visualizations enable. Nonetheless, several unresolved problems persist. For example, probabilistic tractography lacks the visual appeal of its deterministic equivalent, multimodal representations require extreme levels of data reduction, and rendering the full connectome within an anatomical space makes the contents cluttered and unreadable. In part, these challenges require compromises between several tensions that determine connectome visualization practice, such as prioritizing anatomic or connectomic information, aesthetic appeal or information content, and thoroughness or readability. To illustrate the ongoing negotiation between these priorities, we provide an overview of various visualization methods that have evolved for anatomical and functional connectivity data. We then describe interactive visualization tools currently available for use in research, and we conclude with concerns and developments in the presentation of connectivity results." }, { "pmid": "21151783", "title": "Modular and hierarchically modular organization of brain networks.", "abstract": "Brain networks are increasingly understood as one of a large class of information processing systems that share important organizational principles in common, including the property of a modular community structure. A module is topologically defined as a subset of highly inter-connected nodes which are relatively sparsely connected to nodes in other modules. In brain networks, topological modules are often made up of anatomically neighboring and/or functionally related cortical regions, and inter-modular connections tend to be relatively long distance. Moreover, brain networks and many other complex systems demonstrate the property of hierarchical modularity, or modularity on several topological scales: within each module there will be a set of sub-modules, and within each sub-module a set of sub-sub-modules, etc. There are several general advantages to modular and hierarchically modular network organization, including greater robustness, adaptivity, and evolvability of network function. In this context, we review some of the mathematical concepts available for quantitative analysis of (hierarchical) modularity in brain networks and we summarize some of the recent work investigating modularity of structural and functional brain networks derived from analysis of human neuroimaging data." 
}, { "pmid": "18977091", "title": "Evaluation of prefrontal-hippocampal effective connectivity following 24 hours of estrogen infusion: an FDG-PET study.", "abstract": "Although several functional neuroimaging studies have addressed the relevance of hormones to cerebral function, none have evaluated the effects of hormones on network effective connectivity. Since estrogen enhances synaptic connectivity and has been shown to drive activity across neural systems, and because the hippocampus and prefrontal cortex (PFC) are putative targets for the effects of estrogen, we hypothesized that effective connectivity between these regions would be enhanced by an estrogen challenge. In order to test this hypothesis, FDG-PET scans were collected in eleven postmenopausal women at baseline and 24h after a graded estrogen infusion. Subtraction analysis (SA) was conducted to identify sites of increased cerebral glucose uptake (CMRglc) during estrogen infusion. The lateral PFC and hippocampus were a priori sites for activation; SA identified the right superior frontal gyrus (RSFG; MNI coordinates 18, 60, 28) (SPM2, Wellcome Dept. of Cognitive Neurology, London, UK) as a site of increased CMRglc during estrogen infusion relative to baseline. Omnibus covariate analysis conducted relative to the RSFG identified the right hippocampus (MNI coordinates: 32, -32, -6) and right middle frontal gyrus (RMFG; MNI coordinates: 40, 22, 52) as sites of covariance. Path analysis (Amos 5.0 software) revealed that the path coefficient for the RSFG to RHIP path differed from zero only during E2 infusion (p<0.05); moreover, the magnitude of the path coefficient for the RHIP to RMFG path showed a significant further increase during the estrogen infusion condition relative to baseline [Deltachi(2)=4.05, Deltad.f.=1, p=0.044]. These findings are consistent with E2 imparting a stimulatory effect on effective connectivity within prefrontal-hippocampal circuitry. This holds mechanistic significance for resting state network interactions and may hold implications for mood and cognition." }, { "pmid": "25414626", "title": "Stereoscopic three-dimensional visualization applied to multimodal brain images: clinical applications and a functional connectivity atlas.", "abstract": "Effective visualization is central to the exploration and comprehension of brain imaging data. While MRI data are acquired in three-dimensional space, the methods for visualizing such data have rarely taken advantage of three-dimensional stereoscopic technologies. We present here results of stereoscopic visualization of clinical data, as well as an atlas of whole-brain functional connectivity. In comparison with traditional 3D rendering techniques, we demonstrate the utility of stereoscopic visualizations to provide an intuitive description of the exact location and the relative sizes of various brain landmarks, structures and lesions. In the case of resting state fMRI, stereoscopic 3D visualization facilitated comprehension of the anatomical position of complex large-scale functional connectivity patterns. Overall, stereoscopic visualization improves the intuitive visual comprehension of image contents, and brings increased dimensionality to visualization of traditional MRI data, as well as patterns of functional connectivity." 
}, { "pmid": "15635061", "title": "Neurophysiological architecture of functional magnetic resonance images of human brain.", "abstract": "We investigated large-scale systems organization of the whole human brain using functional magnetic resonance imaging (fMRI) data acquired from healthy volunteers in a no-task or 'resting' state. Images were parcellated using a prior anatomical template, yielding regional mean time series for each of 90 regions (major cortical gyri and subcortical nuclei) in each subject. Significant pairwise functional connections, defined by the group mean inter-regional partial correlation matrix, were mostly either local and intrahemispheric or symmetrically interhemispheric. Low-frequency components in the time series subtended stronger inter-regional correlations than high-frequency components. Intrahemispheric connectivity was generally related to anatomical distance by an inverse square law; many symmetrical interhemispheric connections were stronger than predicted by the anatomical distance between bilaterally homologous regions. Strong interhemispheric connectivity was notably absent in data acquired from a single patient, minimally conscious following a brainstem lesion. Multivariate analysis by hierarchical clustering and multidimensional scaling consistently defined six major systems in healthy volunteers-- corresponding approximately to four neocortical lobes, medial temporal lobe and subcortical nuclei- - that could be further decomposed into anatomically and functionally plausible subsystems, e.g. dorsal and ventral divisions of occipital cortex. An undirected graph derived by thresholding the healthy group mean partial correlation matrix demonstrated local clustering or cliquishness of connectivity and short mean path length compatible with prior data on small world characteristics of non-human cortical anatomy. Functional MRI demonstrates a neurophysiological architecture of the normal human brain that is anatomically sensible, strongly symmetrical, disrupted by acute brain injury, subtended predominantly by low frequencies and consistent with a small world network topology." }, { "pmid": "25628562", "title": "BrainBrowser: distributed, web-based neurological data visualization.", "abstract": "Recent years have seen massive, distributed datasets become the norm in neuroimaging research, and the methodologies used to analyze them have, in response, become more collaborative and exploratory. Tools and infrastructure are continuously being developed and deployed to facilitate research in this context: grid computation platforms to process the data, distributed data stores to house and share them, high-speed networks to move them around and collaborative, often web-based, platforms to provide access to and sometimes manage the entire system. BrainBrowser is a lightweight, high-performance JavaScript visualization library built to provide easy-to-use, powerful, on-demand visualization of remote datasets in this new research environment. BrainBrowser leverages modern web technologies, such as WebGL, HTML5 and Web Workers, to visualize 3D surface and volumetric neuroimaging data in any modern web browser without requiring any browser plugins. It is thus trivial to integrate BrainBrowser into any web-based platform. BrainBrowser is simple enough to produce a basic web-based visualization in a few lines of code, while at the same time being robust enough to create full-featured visualization applications. 
BrainBrowser can dynamically load the data required for a given visualization, so no network bandwidth needs to be waisted on data that will not be used. BrainBrowser's integration into the standardized web platform also allows users to consider using 3D data visualization in novel ways, such as for data distribution, data sharing and dynamic online publications. BrainBrowser is already being used in two major online platforms, CBRAIN and LORIS, and has been used to make the 1TB MACACC dataset openly accessible." }, { "pmid": "16201007", "title": "The human connectome: A structural description of the human brain.", "abstract": "The connection matrix of the human brain (the human \"connectome\") represents an indispensable foundation for basic and applied neurobiological research. However, the network of anatomical connections linking the neuronal elements of the human brain is still largely unknown. While some databases or collations of large-scale anatomical connection patterns exist for other mammalian species, there is currently no connection matrix of the human brain, nor is there a coordinated research effort to collect, archive, and disseminate this important information. We propose a research strategy to achieve this goal, and discuss its potential impact." }, { "pmid": "11125149", "title": "A global geometric framework for nonlinear dimensionality reduction.", "abstract": "Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs-30,000 auditory nerve fibers or 10(6) optic nerve fibers-a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure." }, { "pmid": "22049421", "title": "Rich-club organization of the human connectome.", "abstract": "The human brain is a complex network of interlinked regions. Recent studies have demonstrated the existence of a number of highly connected and highly central neocortical hub regions, regions that play a key role in global information integration between different parts of the network. The potential functional importance of these \"brain hubs\" is underscored by recent studies showing that disturbances of their structural and functional connectivity profile are linked to neuropathology. This study aims to map out both the subcortical and neocortical hubs of the brain and examine their mutual relationship, particularly their structural linkages. 
Here, we demonstrate that brain hubs form a so-called \"rich club,\" characterized by a tendency for high-degree nodes to be more densely connected among themselves than nodes of a lower degree, providing important information on the higher-level topology of the brain network. Whole-brain structural networks of 21 subjects were reconstructed using diffusion tensor imaging data. Examining the connectivity profile of these networks revealed a group of 12 strongly interconnected bihemispheric hub regions, comprising the precuneus, superior frontal and superior parietal cortex, as well as the subcortical hippocampus, putamen, and thalamus. Importantly, these hub regions were found to be more densely interconnected than would be expected based solely on their degree, together forming a rich club. We discuss the potential functional implications of the rich-club organization of the human connectome, particularly in light of its role in information integration and in conferring robustness to its structural core." }, { "pmid": "16934233", "title": "Ovariectomized rats show decreased recognition memory and spine density in the hippocampus and prefrontal cortex.", "abstract": "Effects of ovariectomy (OVX) on performance of the memory tasks, Object Recognition (OR) and Object Placement (OP), and on dendritic spine density in pyramidal neurons in layer II/III of the prefrontal cortex and the CA1 and CA3 regions of the hippocampus were determined. OVX was associated with a significant decline in performance of the memory tasks as compared to intact rats beginning at 1 week post OVX for OR and 4 weeks post OVX for OP. Golgi impregnation at 7 weeks post OVX showed significantly lower spine densities (17-53%) in the pyramidal neurons of the medial prefrontal cortex and the CA1, but not the CA3, region of the hippocampus in OVX compared to intact rats. These results suggest that cognitive impairments observed in OVX rats may be associated with morphological changes in brain areas mediating memory." }, { "pmid": "23861951", "title": "BrainNet Viewer: a network visualization tool for human brain connectomics.", "abstract": "The human brain is a complex system whose topological organization can be represented using connectomics. Recent studies have shown that human connectomes can be constructed using various neuroimaging technologies and further characterized using sophisticated analytic strategies, such as graph theory. These methods reveal the intriguing topological architectures of human brain networks in healthy populations and explore the changes throughout normal development and aging and under various pathological conditions. However, given the huge complexity of this methodology, toolboxes for graph-based network visualization are still lacking. Here, using MATLAB with a graphical user interface (GUI), we developed a graph-theoretical network visualization toolbox, called BrainNet Viewer, to illustrate human connectomes as ball-and-stick models. Within this toolbox, several combinations of defined files with connectome information can be loaded to display different combinations of brain surface, nodes and edges. In addition, display properties, such as the color and size of network elements or the layout of the figure, can be adjusted within a comprehensive but easy-to-use settings panel. Moreover, BrainNet Viewer draws the brain surface, nodes and edges in sequence and displays brain networks in multiple views, as required by the user. 
The figure can be manipulated with certain interaction functions to display more detailed information. Furthermore, the figures can be exported as commonly used image file formats or demonstration video for further use. BrainNet Viewer helps researchers to visualize brain networks in an easy, flexible and quick manner, and this software is freely available on the NITRC website (www.nitrc.org/projects/bnv/)." }, { "pmid": "27747562", "title": "The intrinsic geometry of the human brain connectome.", "abstract": "This paper describes novel methods for constructing the intrinsic geometry of the human brain connectome using dimensionality-reduction techniques. We posit that the high-dimensional, complex geometry that represents this intrinsic topology can be mathematically embedded into lower dimensions using coupling patterns encoded in the corresponding brain connectivity graphs. We tested both linear and nonlinear dimensionality-reduction techniques using the diffusion-weighted structural connectome data acquired from a sample of healthy subjects. Results supported the nonlinearity of brain connectivity data, as linear reduction techniques such as the multidimensional scaling yielded inferior lower-dimensional embeddings. To further validate our results, we demonstrated that for tractography-derived structural connectome more influential regions such as rich-club members of the brain are more centrally mapped or embedded. Further, abnormal brain connectivity can be visually understood by inspecting the altered geometry of these three-dimensional (3D) embeddings that represent the topology of the human brain, as illustrated using simulated lesion studies of both targeted and random removal. Last, in order to visualize brain's intrinsic topology we have developed software that is compatible with virtual reality technologies, thus allowing researchers to collaboratively and interactively explore and manipulate brain connectome data." }, { "pmid": "26096223", "title": "Measuring embeddedness: Hierarchical scale-dependent information exchange efficiency of the human brain connectome.", "abstract": "This article presents a novel approach for understanding information exchange efficiency and its decay across hierarchies of modularity, from local to global, of the structural human brain connectome. Magnetic resonance imaging techniques have allowed us to study the human brain connectivity as a graph, which can then be analyzed using a graph-theoretical approach. Collectively termed brain connectomics, these sophisticated mathematical techniques have revealed that the brain connectome, like many networks, is highly modular and brain regions can thus be organized into communities or modules. Here, using tractography-informed structural connectomes from 46 normal healthy human subjects, we constructed the hierarchical modularity of the structural connectome using bifurcating dendrograms. Moving from fine to coarse (i.e., local to global) up the connectome's hierarchy, we computed the rate of decay of a new metric that hierarchically preferentially weighs the information exchange between two nodes in the same module. By computing \"embeddedness\"-the ratio between nodal efficiency and this decay rate, one could thus probe the relative scale-invariant information exchange efficiency of the human brain. Results suggest that regions that exhibit high embeddedness are those that comprise the limbic system, the default mode network, and the subcortical nuclei. 
This supports the presence of near-decomposability overall yet relative embeddedness in select areas of the brain. The areas we identified as highly embedded are varied in function but are arguably linked in the evolutionary role they play in memory, emotion and behavior." }, { "pmid": "28675490", "title": "The significance of negative correlations in brain connectivity.", "abstract": "Understanding the modularity of functional magnetic resonance imaging (fMRI)-derived brain networks or \"connectomes\" can inform the study of brain function organization. However, fMRI connectomes additionally involve negative edges, which may not be optimally accounted for by existing approaches to modularity that variably threshold, binarize, or arbitrarily weight these connections. Consequently, many existing Q maximization-based modularity algorithms yield variable modular structures. Here, we present an alternative complementary approach that exploits how frequent the blood-oxygen-level-dependent (BOLD) signal correlation between two nodes is negative. We validated this novel probability-based modularity approach on two independent publicly-available resting-state connectome data sets (the Human Connectome Project [HCP] and the 1,000 functional connectomes) and demonstrated that negative correlations alone are sufficient in understanding resting-state modularity. In fact, this approach (a) permits a dual formulation, leading to equivalent solutions regardless of whether one considers positive or negative edges; (b) is theoretically linked to the Ising model defined on the connectome, thus yielding modularity result that maximizes data likelihood. Additionally, we were able to detect novel and consistent sex differences in modularity in both data sets. As data sets like HCP become widely available for analysis by the neuroscience community at large, alternative and perhaps more advantageous computational tools to understand the neurobiological information of negative edges in fMRI connectomes are increasingly important." } ]
Scientific Reports
30254231
PMC6156331
10.1038/s41598-018-32172-0
Cluster-based network proximities for arbitrary nodal subsets
The concept of a cluster or community in a network context has been of considerable interest in a variety of settings in recent years. In this paper, employing random walks and geodesic distance, we introduce a unified measure of cluster-based proximity between nodes, relative to a given subset of interest. The inherent simplicity and informativeness of the approach could make it of value to researchers in a variety of scientific fields. Applicability is demonstrated through clustering of a number of existing data sets (including multipartite networks). We view community detection (i.e., when the full set of network nodes is considered) as simply the limiting instance of clustering (for arbitrary subsets). This perspective should add to the dialogue on what constitutes a cluster or community within a network. With regard to health-relevant attributes in social networks, identification of clusters of individuals with similar attributes can support the targeting of collective interventions. The method performs well in comparisons with other approaches, based on comparative measures such as NMI and ARI.
Related Work
Closest to the work presented here, specifically in the limiting case of community detection, is the popular Walktrap method of Pons and Latapy [24]. Therein, random walks are also employed to obtain distances which can then be used in agglomerative hierarchical procedures. In particular, therein, the distance $r_{i,j}$ between nodes $i$ and $j$ is defined, for fixed $t \in \{1, 2, \ldots\}$, via

$$r_{i,j}(t) = \left\lVert \mathbf{\Delta}^{-1/2}\mathbf{P}^{t}_{i,\cdot} - \mathbf{\Delta}^{-1/2}\mathbf{P}^{t}_{j,\cdot} \right\rVert, \qquad (2)$$

where $\mathbf{\Delta}$ is a diagonal matrix with diagonal entries $\Delta_{i,i} = d(i)$, $d(i)$ is the degree of $v_i$, $\mathbf{P}^{t}_{l,\cdot}$ is the column probability vector $(P^{t}_{l,k})_{1 \le k \le n}$, $\mathbf{P} = [P_{i,j}]$ is the transition matrix for a random walk on the graph $G$, and $\lVert \cdot \rVert$ indicates the Euclidean norm on $\mathbb{R}^{n}$. A plot of these distances against community-relative distances for a cat cortical network (see [36] and Applications and Discussion, below) is given in Fig. 4. Note that the ordering of distances is quite different in the two cases. In terms of community detection, community-relative distance does have advantages: (i) there is no need to choose an appropriate parameter $t$; the Walktrap method can be sensitive to values of $t$, as well as to the choice of agglomerative method (compare Figs S2 and S3); (ii) community-relative distances are particularly simple and parsimonious (see Eq. (1)), while computational times are similar for the two methods; (iii) units of the resulting distances are easily interpretable in terms of shortest-path distance; and (iv) most importantly, there is no immediate counterpart to clustering restricted to subsets in the case of the Walktrap algorithm.

Figure 4: A plot of Walktrap (t = 4) distances against community-relative distances for the cat cortical network.

Figure 5a contains adjusted Rand index (ARI; see [37]) and normalized mutual information (NMI; see [38]) values for agglomerative clustering (employing average linkage and a VR stopping condition) for some common networks possessing reasonable ground truths, via a range of common distance measures; for discussion of Jaccard and cosine similarity measures, see for instance [39] and the references therein. Note that community-relative distance performs comparably or considerably better for the six common networks considered. For similar results employing ASW, see Fig. S4.
The networks are discussed further in Applications and Discussion, below.

Figure 5: (a) ARI and NMI values for agglomerative clustering (employing average linkage and a VR stopping condition) for some common networks possessing reasonable ground truths, via a range of common distance measures. The networks are discussed further in Applications and Discussion, below. For discussion of Jaccard and cosine similarity measures, see for instance [39] and the references therein. (b) ARI and NMI values for the six network data sets, employing nine methods built into the igraph package in R (see [40]), alongside those for community-relative distance using both ASW and VR stopping conditions.

Figure 5b contains ARI and NMI values for the six network data sets, employing nine methods built into the igraph package in R (see [40]), alongside those for community-relative distance using both ASW and VR stopping conditions. Again, community-relative distance performs comparably or considerably better for the networks considered. Plots and dendrograms under community-relative distance are provided in Applications and Discussion, below; for plots of associated ASW and VR values, see Fig. S5. For general discussion regarding comparing clusterings, see for instance [41]. For other work related to community detection and random walks, see [25] and [26] and the references therein.

In terms of restriction to nodal subsets, there has been considerable work recently in the special case of types within bipartite networks (see [42–51]). For discussion of community-relative distance in this context, see Applications and Discussion, below. It is important to note that, contrasted with methods specific to bipartite networks, the perspective proposed here imposes no assumptions on the edge structure of the network considered, nor on the sets under consideration for clustering.

For some recent work on attributes in the context of clustering, see [52]. Although different in scope, it is worth noting connected work on clustering in spatial networks (see for instance [53]). Community-relative distance is applicable for arbitrary (potentially non-spatial) networks, and may be of some potential future use in existing algorithms for spatial networks, in place of the often-considered geodesic distance. In addition, there has been important recent work employing stochastic complementation [54] in the context of restriction to subsets of network nodes (see [55] and [28, Section 10.4.5]).
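To make the Walktrap distance of Eq. (2) and the ARI/NMI comparisons concrete, here is a minimal, hedged sketch (not code from [24] or from the present paper): it computes r_{i,j}(t) for a small graph directly from the adjacency matrix, runs average-linkage agglomerative clustering on the resulting distances, and scores the partition against a ground truth with ARI and NMI. The choice of Zachary's karate-club graph, t = 4, a fixed two-cluster cut, and the networkx/scipy/scikit-learn helpers are illustrative assumptions.

```python
# Minimal sketch: Walktrap-style distances of Eq. (2) plus an ARI/NMI evaluation.
# Assumes a connected graph with no isolated nodes.
import numpy as np
import networkx as nx
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def walktrap_distances(A, t=4):
    """Pairwise r_{i,j}(t) = || Delta^{-1/2} P^t_{i,.} - Delta^{-1/2} P^t_{j,.} ||."""
    deg = A.sum(axis=1)
    P = A / deg[:, None]                 # row-stochastic transition matrix of the walk
    Pt = np.linalg.matrix_power(P, t)    # t-step transition probabilities
    X = Pt / np.sqrt(deg)[None, :]       # scale column k by 1/sqrt(d(k)), i.e. Delta^{-1/2}
    diff = X[:, None, :] - X[None, :, :]
    return np.linalg.norm(diff, axis=2)  # Euclidean norm of row differences

# Toy example: Zachary's karate club with its usual two-faction ground truth.
G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
truth = [0 if G.nodes[v]["club"] == "Mr. Hi" else 1 for v in G.nodes]

D = walktrap_distances(A, t=4)
Z = linkage(squareform(D, checks=False), method="average")  # average-linkage agglomeration
labels = fcluster(Z, t=2, criterion="maxclust")             # cut the dendrogram into 2 clusters

print("ARI:", adjusted_rand_score(truth, labels))
print("NMI:", normalized_mutual_info_score(truth, labels))
```

Swapping a different distance matrix (for example, one built from a community-relative distance) into the place of walktrap_distances would reproduce the style of comparison summarized in Fig. 5a.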
[ "18499567", "20368648", "19056788", "20610424", "28055070", "17652652", "21802807", "24726688", "19450038", "21555656", "22544996", "18024473", "10355908", "28235785", "16623848", "15801609" ]
[ { "pmid": "18499567", "title": "The collective dynamics of smoking in a large social network.", "abstract": "BACKGROUND\nThe prevalence of smoking has decreased substantially in the United States over the past 30 years. We examined the extent of the person-to-person spread of smoking behavior and the extent to which groups of widely connected people quit together.\n\n\nMETHODS\nWe studied a densely interconnected social network of 12,067 people assessed repeatedly from 1971 to 2003 as part of the Framingham Heart Study. We used network analytic methods and longitudinal statistical models.\n\n\nRESULTS\nDiscernible clusters of smokers and nonsmokers were present in the network, and the clusters extended to three degrees of separation. Despite the decrease in smoking in the overall population, the size of the clusters of smokers remained the same across time, suggesting that whole groups of people were quitting in concert. Smokers were also progressively found in the periphery of the social network. Smoking cessation by a spouse decreased a person's chances of smoking by 67% (95% confidence interval [CI], 59 to 73). Smoking cessation by a sibling decreased the chances by 25% (95% CI, 14 to 35). Smoking cessation by a friend decreased the chances by 36% (95% CI, 12 to 55 ). Among persons working in small firms, smoking cessation by a coworker decreased the chances by 34% (95% CI, 5 to 56). Friends with more education influenced one another more than those with less education. These effects were not seen among neighbors in the immediate geographic area.\n\n\nCONCLUSIONS\nNetwork phenomena appear to be relevant to smoking cessation. Smoking behavior spreads through close and distant social ties, groups of interconnected people stop smoking in concert, and smokers are increasingly marginalized socially. These findings have implications for clinical and public health interventions to reduce and prevent smoking." }, { "pmid": "20368648", "title": "The spread of alcohol consumption behavior in a large social network.", "abstract": "BACKGROUND\nAlcohol consumption has important health-related consequences and numerous biological and social determinants.\n\n\nOBJECTIVE\nTo explore quantitatively whether alcohol consumption behavior spreads from person to person in a large social network of friends, coworkers, siblings, spouses, and neighbors, followed for 32 years.\n\n\nDESIGN\nLongitudinal network cohort study.\n\n\nSETTING\nThe Framingham Heart Study.\n\n\nPARTICIPANTS\n12 067 persons assessed at several time points between 1971 and 2003.\n\n\nMEASUREMENTS\nSelf-reported alcohol consumption (number of drinks per week on average over the past year and number of days drinking within the past week) and social network ties, measured at each time point.\n\n\nRESULTS\nClusters of drinkers and abstainers were present in the network at all time points, and the clusters extended to 3 degrees of separation. These clusters were not only due to selective formation of social ties among drinkers but also seem to reflect interpersonal influence. Changes in the alcohol consumption behavior of a person's social network had a statistically significant effect on that person's subsequent alcohol consumption behavior. The behaviors of immediate neighbors and coworkers were not significantly associated with a person's drinking behavior, but the behavior of relatives and friends was.\n\n\nLIMITATIONS\nA nonclinical measure of alcohol consumption was used. 
Also, it is unclear whether the effects on long-term health are positive or negative, because alcohol has been shown to be both harmful and protective. Finally, not all network ties were observed.\n\n\nCONCLUSION\nNetwork phenomena seem to influence alcohol consumption behavior. This has implications for clinical and public health interventions and further supports group-level interventions to reduce problematic drinking." }, { "pmid": "19056788", "title": "Dynamic spread of happiness in a large social network: longitudinal analysis over 20 years in the Framingham Heart Study.", "abstract": "OBJECTIVES\nTo evaluate whether happiness can spread from person to person and whether niches of happiness form within social networks.\n\n\nDESIGN\nLongitudinal social network analysis.\n\n\nSETTING\nFramingham Heart Study social network.\n\n\nPARTICIPANTS\n4739 individuals followed from 1983 to 2003.\n\n\nMAIN OUTCOME MEASURES\nHappiness measured with validated four item scale; broad array of attributes of social networks and diverse social ties.\n\n\nRESULTS\nClusters of happy and unhappy people are visible in the network, and the relationship between people's happiness extends up to three degrees of separation (for example, to the friends of one's friends' friends). People who are surrounded by many happy people and those who are central in the network are more likely to become happy in the future. Longitudinal statistical models suggest that clusters of happiness result from the spread of happiness and not just a tendency for people to associate with similar individuals. A friend who lives within a mile (about 1.6 km) and who becomes happy increases the probability that a person is happy by 25% (95% confidence interval 1% to 57%). Similar effects are seen in coresident spouses (8%, 0.2% to 16%), siblings who live within a mile (14%, 1% to 28%), and next door neighbours (34%, 7% to 70%). Effects are not seen between coworkers. The effect decays with time and with geographical separation.\n\n\nCONCLUSIONS\nPeople's happiness depends on the happiness of others with whom they are connected. This provides further justification for seeing happiness, like health, as a collective phenomenon." }, { "pmid": "20610424", "title": "Emotions as infectious diseases in a large social network: the SISa model.", "abstract": "Human populations are arranged in social networks that determine interactions and influence the spread of diseases, behaviours and ideas. We evaluate the spread of long-term emotional states across a social network. We introduce a novel form of the classical susceptible-infected-susceptible disease model which includes the possibility for 'spontaneous' (or 'automatic') infection, in addition to disease transmission (the SISa model). Using this framework and data from the Framingham Heart Study, we provide formal evidence that positive and negative emotional states behave like infectious diseases spreading across social networks over long periods of time. The probability of becoming content is increased by 0.02 per year for each content contact, and the probability of becoming discontent is increased by 0.04 per year per discontent contact. Our mathematical formalism allows us to derive various quantities from the data, such as the average lifetime of a contentment 'infection' (10 years) or discontentment 'infection' (5 years). Our results give insight into the transmissive nature of positive and negative emotional states. 
Determining to what extent particular emotions or behaviours are infectious is a promising direction for further research with important implications for social science, epidemiology and health policy. Our model provides a theoretical framework for studying the interpersonal spread of any state that may also arise spontaneously, such as emotions, behaviours, health states, ideas or diseases with reservoirs." }, { "pmid": "28055070", "title": "Modeling Contagion Through Social Networks to Explain and Predict Gunshot Violence in Chicago, 2006 to 2014.", "abstract": "Importance\nEvery day in the United States, more than 200 people are murdered or assaulted with a firearm. Little research has considered the role of interpersonal ties in the pathways through which gun violence spreads.\n\n\nObjective\nTo evaluate the extent to which the people who will become subjects of gun violence can be predicted by modeling gun violence as an epidemic that is transmitted between individuals through social interactions.\n\n\nDesign, Setting, and Participants\nThis study was an epidemiological analysis of a social network of individuals who were arrested during an 8-year period in Chicago, Illinois, with connections between people who were arrested together for the same offense. Modeling of the spread of gunshot violence over the network was assessed using a probabilistic contagion model that assumed individuals were subject to risks associated with being arrested together, in addition to demographic factors, such as age, sex, and neighborhood residence. Participants represented a network of 138 163 individuals who were arrested between January 1, 2006, and March 31, 2014 (29.9% of all individuals arrested in Chicago during this period), 9773 of whom were subjects of gun violence. Individuals were on average 27 years old at the midpoint of the study, predominantly male (82.0%) and black (75.6%), and often members of a gang (26.2%).\n\n\nMain Outcomes and Measures\nExplanation and prediction of becoming a subject of gun violence (fatal or nonfatal) using epidemic models based on person-to-person transmission through a social network.\n\n\nResults\nSocial contagion accounted for 63.1% of the 11 123 gunshot violence episodes; subjects of gun violence were shot on average 125 days after their infector (the person most responsible for exposing the subject to gunshot violence). Some subjects of gun violence were shot more than once. Models based on both social contagion and demographics performed best; when determining the 1.0% of people (n = 1382) considered at highest risk to be shot each day, the combined model identified 728 subjects of gun violence (6.5%) compared with 475 subjects of gun violence (4.3%) for the demographics model (53.3% increase) and 589 subjects of gun violence (5.3%) for the social contagion model (23.6% increase).\n\n\nConclusions and Relevance\nGunshot violence follows an epidemic-like process of social contagion that is transmitted through networks of people by social interactions. Violence prevention efforts that account for social contagion, in addition to demographics, have the potential to prevent more shootings than efforts that focus on only demographics." }, { "pmid": "17652652", "title": "The spread of obesity in a large social network over 32 years.", "abstract": "BACKGROUND\nThe prevalence of obesity has increased substantially over the past 30 years. 
We performed a quantitative analysis of the nature and extent of the person-to-person spread of obesity as a possible factor contributing to the obesity epidemic.\n\n\nMETHODS\nWe evaluated a densely interconnected social network of 12,067 people assessed repeatedly from 1971 to 2003 as part of the Framingham Heart Study. The body-mass index was available for all subjects. We used longitudinal statistical models to examine whether weight gain in one person was associated with weight gain in his or her friends, siblings, spouse, and neighbors.\n\n\nRESULTS\nDiscernible clusters of obese persons (body-mass index [the weight in kilograms divided by the square of the height in meters], > or =30) were present in the network at all time points, and the clusters extended to three degrees of separation. These clusters did not appear to be solely attributable to the selective formation of social ties among obese persons. A person's chances of becoming obese increased by 57% (95% confidence interval [CI], 6 to 123) if he or she had a friend who became obese in a given interval. Among pairs of adult siblings, if one sibling became obese, the chance that the other would become obese increased by 40% (95% CI, 21 to 60). If one spouse became obese, the likelihood that the other spouse would become obese increased by 37% (95% CI, 7 to 73). These effects were not seen among neighbors in the immediate geographic location. Persons of the same sex had relatively greater influence on each other than those of the opposite sex. The spread of smoking cessation did not account for the spread of obesity in the network.\n\n\nCONCLUSIONS\nNetwork phenomena appear to be relevant to the biologic and behavioral trait of obesity, and obesity appears to spread through social ties. These findings have implications for clinical and public health interventions." }, { "pmid": "21802807", "title": "How physical activity shapes, and is shaped by, adolescent friendships.", "abstract": "The current study explored the role of school-based friendship networks in adolescents' engagement in physical activity (PA). It was hypothesized that similar participation in PA would be a basis for friendship formation, and that friends would also influence behavior. Whether these processes were mediated through cognitive mechanisms was also explored. Self-reported participation in PA, cognitions about PA, and friendship ties to grade-mates were measured in two cohorts of Australian grade eight students (N = 378; M age = 13.7) three times over the 2008 school year. Interdependence between the friendship networks and PA was tested using stochastic actor-based models for social networks and behavior. The results showed that participants tended to befriend peers who did similar amounts of PA, and subsequently emulated their friends' behaviors. Friends' influence on PA was not found to be mediated through adolescents' cognitions about PA. These findings show that there is a mutually dependent relationship between adolescent friendship networks and PA; they highlight how novel network-based strategies may be effective in supporting young people to be physically active." }, { "pmid": "24726688", "title": "Social network predictors of latrine ownership.", "abstract": "Poor sanitation, including the lack of clean functioning toilets, is a major factor contributing to morbidity and mortality from infectious diseases in the developing world. We examine correlates of latrine ownership in rural India with a focus on social network predictors. 
Participants from 75 villages provided the names of their social contacts as well as their own relevant demographic and household characteristics. Using these measures, we test whether the latrine ownership of an individual's social contacts is a significant predictor of individual latrine ownership. We also investigate whether network centrality significantly predicts latrine ownership, and if so, whether it moderates the relationship between the latrine ownership of the individual and that of her social contacts. Our results show that, controlling for the standard predictors of latrine ownership such as caste, education, and income, individuals are more likely to own latrines if their social contacts own latrines. Interaction models suggest that this relationship is stronger among those of the same caste, the same education, and those with stronger social ties. We also find that more central individuals are more likely to own latrines, but the correlation in latrine ownership between social contacts is strongest among individuals on the periphery of the network. Although more data is needed to determine how much the clustering of latrine ownership may be caused by social influence, the results here suggest that interventions designed to promote latrine ownership should consider focusing on those at the periphery of the network. The reason is that they are 1) less likely to own latrines and 2) more likely to exhibit the same behavior as their social contacts, possibly as a result of the spread of latrine adoption from one person to another." }, { "pmid": "19450038", "title": "Relationships between social norms, social network characteristics, and HIV risk behaviors in Thailand and the United States.", "abstract": "OBJECTIVE\nSocial norms have been associated with a wide range of health behaviors. In this study, the authors examined whether the social norms of HIV risk behaviors are clustered within social networks and whether the norms of network members are linked to the risk behaviors of their social network members.\n\n\nDESIGN\nData were collected from the baseline assessment of 354 networks with 933 participants in a network-oriented HIV prevention intervention targeting injection drug users in Philadelphia, United States, and Chiang Mai, Thailand.\n\n\nMAIN OUTCOME MEASURES\nFour descriptive HIV risk norms of sharing needles, cookers, and cotton and front- or back-loading among friends who inject were assessed.\n\n\nRESULTS\nThree of 4 injection risk norms (sharing needle, cookers, and cotton) were found to be significantly clustered. In Philadelphia, 1 network member's (the index participant) norms of sharing needles and front- or back-loading were found to be significantly associated with the network members' risk behaviors, and the norm of sharing cotton was marginally associated.\n\n\nCONCLUSION\nThe results of this study suggest that among injection drug users, social norms are clustered within networks; social networks are a meaningful level of analyses for understanding how social norms lead to risk behaviors, providing important data for intervening to reduce injection-related HIV risks." 
}, { "pmid": "21555656", "title": "Shared norms and their explanation for the social clustering of obesity.", "abstract": "OBJECTIVES\nWe aimed to test the hypothesized role of shared body size norms in the social contagion of body size and obesity.\n\n\nMETHODS\nUsing data collected in 2009 from 101 women and 812 of their social ties in Phoenix, Arizona, we assessed the indirect effect of social norms on shared body mass index (BMI) measured in 3 different ways.\n\n\nRESULTS\nWe confirmed Christakis and Fowler's basic finding that BMI and obesity do indeed cluster socially, but we found that body size norms accounted for only a small portion of this effect (at most 20%) and only via 1 of the 3 pathways.\n\n\nCONCLUSIONS\nIf shared social norms play only a minor role in the social contagion of obesity, interventions targeted at changing ideas about appropriate BMIs or body sizes may be less useful than those working more directly with behaviors, for example, by changing eating habits or transforming opportunities for and constraints on dietary intake." }, { "pmid": "22544996", "title": "Social Network Visualization in Epidemiology.", "abstract": "Epidemiological investigations and interventions are increasingly focusing on social networks. Two aspects of social networks are relevant in this regard: the structure of networks and the function of networks. A better understanding of the processes that determine how networks form and how they operate with respect to the spread of behavior holds promise for improving public health. Visualizing social networks is a key to both research and interventions. Network images supplement statistical analyses and allow the identification of groups of people for targeting, the identification of central and peripheral individuals, and the clarification of the macro-structure of the network in a way that should affect public health interventions. People are inter-connected and so their health is inter-connected. Inter-personal health effects in social networks provide a new foundation for public health." }, { "pmid": "18024473", "title": "Defining clusters from a hierarchical cluster tree: the Dynamic Tree Cut package for R.", "abstract": "SUMMARY\nHierarchical clustering is a widely used method for detecting clusters in genomic data. Clusters are defined by cutting branches off the dendrogram. A common but inflexible method uses a constant height cutoff value; this method exhibits suboptimal performance on complicated dendrograms. We present the Dynamic Tree Cut R package that implements novel dynamic branch cutting methods for detecting clusters in a dendrogram depending on their shape. Compared to the constant height cutoff method, our techniques offer the following advantages: (1) they are capable of identifying nested clusters; (2) they are flexible-cluster shape parameters can be tuned to suit the application at hand; (3) they are suitable for automation; and (4) they can optionally combine the advantages of hierarchical clustering and partitioning around medoids, giving better detection of outliers. We illustrate the use of these methods by applying them to protein-protein interaction network data and to a simulated gene expression data set.\n\n\nAVAILABILITY\nThe Dynamic Tree Cut method is implemented in an R package available at http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/BranchCutting." 
}, { "pmid": "10355908", "title": "The connectional organization of the cortico-thalamic system of the cat.", "abstract": "Data on connections between the areas of the cerebral cortex and nuclei of the thalamus are too complicated to analyse with naked intuition. Indeed, the complexity of connection data is one of the major challenges facing neuroanatomy. Recently, systematic methods have been developed and applied to the analysis of the connectivity in the cerebral cortex. These approaches have shed light on the gross organization of the cortical network, have made it possible to test systematically theories of cortical organization, and have guided new electrophysiological studies. This paper extends the approach to investigate the organization of the entire cortico-thalamic network. An extensive collation of connection tracing studies revealed approximately 1500 extrinsic connections between the cortical areas and thalamic nuclei of the cat cerebral hemisphere. Around 850 connections linked 53 cortical areas with each other, and around 650 connections linked the cortical areas with 42 thalamic nuclei. Non-metric multidimensional scaling, optimal set analysis and non-parametric cluster analysis were used to study global connectivity and the 'place' of individual structures within the overall scheme. Thalamic nuclei and cortical areas were in intimate connectional association. Connectivity defined four major thalamo-cortical systems. These included three broadly hierarchical sensory or sensory/motor systems (visual and auditory systems and a single system containing both somatosensory and motor structures). The highest stations of these sensory/motor systems were associated with a fourth processing system composed of prefrontal, cingulate, insular and parahippocampal cortex and associated thalamic nuclei (the 'fronto-limbic system'). The association between fronto-limbic and somato-motor systems was particularly close." }, { "pmid": "28235785", "title": "Emergence of communities and diversity in social networks.", "abstract": "Communities are common in complex networks and play a significant role in the functioning of social, biological, economic, and technological systems. Despite widespread interest in detecting community structures in complex networks and exploring the effect of communities on collective dynamics, a deep understanding of the emergence and prevalence of communities in social networks is still lacking. Addressing this fundamental problem is of paramount importance in understanding, predicting, and controlling a variety of collective behaviors in society. An elusive question is how communities with common internal properties arise in social networks with great individual diversity. Here, we answer this question using the ultimatum game, which has been a paradigm for characterizing altruism and fairness. We experimentally show that stable local communities with different internal agreements emerge spontaneously and induce social diversity into networks, which is in sharp contrast to populations with random interactions. Diverse communities and social norms come from the interaction between responders with inherent heterogeneous demands and rational proposers via local connections, where the former eventually become the community leaders. This result indicates that networks are significant in the emergence and stabilization of communities and social diversity. 
Our experimental results also provide valuable information about strategies for developing network models and theories of evolutionary games and social dynamics." }, { "pmid": "16623848", "title": "Prediction of the main cortical areas and connections involved in the tactile function of the visual cortex by network analysis.", "abstract": "We explored the cortical pathways from the primary somatosensory cortex to the primary visual cortex (V1) by analysing connectional data in the macaque monkey using graph-theoretical tools. Cluster analysis revealed the close relationship of the dorsal visual stream and the sensorimotor cortex. It was shown that prefrontal area 46 and parietal areas VIP and 7a occupy a central position between the different clusters in the visuo-tactile network. Among these structures all the shortest paths from primary somatosensory cortex (3a, 1 and 2) to V1 pass through VIP and then reach V1 via MT, V3 and PO. Comparison of the input and output fields suggested a larger specificity for the 3a/1-VIP-MT/V3-V1 pathways among the alternative routes. A reinforcement learning algorithm was used to evaluate the importance of the aforementioned pathways. The results suggest a higher role for V3 in relaying more direct sensorimotor information to V1. Analysing cliques, which identify areas with the strongest coupling in the network, supported the role of VIP, MT and V3 in visuo-tactile integration. These findings indicate that areas 3a, 1, VIP, MT and V3 play a major role in shaping the tactile information reaching V1 in both sighted and blind subjects. Our observations greatly support the findings of the experimental studies and provide a deeper insight into the network architecture underlying visuo-tactile integration in the primate cerebral cortex." }, { "pmid": "15801609", "title": "Identifying the role that animals play in their social networks.", "abstract": "Techniques recently developed for the analysis of human social networks are applied to the social network of bottlenose dolphins living in Doubtful Sound, New Zealand. We identify communities and subcommunities within the dolphin population and present evidence that sex- and age-related homophily play a role in the formation of clusters of preferred companionship. We also identify brokers who act as links between sub-communities and who appear to be crucial to the social cohesion of the population as a whole. The network is found to be similar to human social networks in some respects but different in some others, such as the level of assortative mixing by degree within the population. This difference elucidates some of the means by which the network forms and evolves." } ]
PLoS Computational Biology
30216352
PMC6157905
10.1371/journal.pcbi.1006376
Modeling and prediction of clinical symptom trajectories in Alzheimer’s disease using longitudinal data
Computational models predicting symptomatic progression at the individual level can be highly beneficial for early intervention and treatment planning for Alzheimer’s disease (AD). Individual prognosis is complicated by many factors, including the definition of the prediction objective itself. In this work, we present a computational framework comprising machine-learning techniques for 1) modeling symptom trajectories and 2) prediction of symptom trajectories using multimodal and longitudinal data. We perform primary analyses on three cohorts from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), and a replication analysis using subjects from the Australian Imaging, Biomarker & Lifestyle Flagship Study of Ageing (AIBL). We model the prototypical symptom trajectory classes using clinical assessment scores from the mini-mental state exam (MMSE) and the Alzheimer’s Disease Assessment Scale (ADAS-13) at nine timepoints spanning six years, based on a hierarchical clustering approach. Subsequently, we predict these trajectory classes for a given subject using magnetic resonance (MR) imaging, genetic, and clinical variables from two timepoints (baseline + follow-up). For prediction, we present a longitudinal Siamese neural network (LSN) with novel architectural modules for combining multimodal data from two timepoints. The trajectory modeling yields two (stable and decline) and three (stable, slow-decline, fast-decline) trajectory classes for the MMSE and ADAS-13 assessments, respectively. For the predictive tasks, LSN offers highly accurate performance, with 0.900 accuracy and 0.968 AUC for the binary MMSE task and 0.760 accuracy for the 3-way ADAS-13 task on the ADNI datasets, as well as 0.724 accuracy and 0.883 AUC for the binary MMSE task on the replication AIBL dataset.
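As a rough illustration of the trajectory-modeling step described above (and not the authors' actual pipeline), the following sketch groups synthetic nine-timepoint MMSE trajectories into prototype classes with agglomerative hierarchical clustering. The synthetic cohort, Ward linkage, and the two-cluster cut are assumptions made purely for demonstration.

```python
# Minimal sketch: hierarchical clustering of synthetic 9-timepoint MMSE trajectories
# into "stable" vs. "decline" prototype classes.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
timepoints = np.arange(9)                      # e.g., nine visits spanning six years

# Synthetic cohort: "stable" subjects stay near MMSE 29; "decline" subjects lose points.
stable = 29 - 0.1 * timepoints + rng.normal(0, 0.5, size=(60, 9))
decline = 28 - 1.5 * timepoints + rng.normal(0, 1.0, size=(40, 9))
trajectories = np.vstack([stable, decline])    # one row of scores per subject

Z = linkage(trajectories, method="ward")       # hierarchical clustering of whole trajectories
labels = fcluster(Z, t=2, criterion="maxclust")

# Prototype trajectory of each class = mean curve over its members.
for c in np.unique(labels):
    proto = trajectories[labels == c].mean(axis=0)
    print(f"class {c}: n={np.sum(labels == c)}, prototype={np.round(proto, 1)}")
```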
Comparison with related work

Past studies [14,15] propose different methods for trajectory modeling and report varying numbers of trajectories. The authors of [14] construct a two-group quadratic growth mixture model that characterizes MMSE scores over a six-year period. The authors of [15] used a latent class mixture model with quadratic trajectories to model cognitive and psychotic symptoms over 13.5 years; their results indicated the presence of six trajectory courses. The more popular trajectory models based on diagnostic change usually define only two classes, such as AD converters vs. non-converters or, more specifically in the context of the MCI population, stable vs. progressive MCI groups [8,10,42]. The two-group model is computationally convenient; hence, the nonparametric modeling approach presented here starts with two trajectories, but it can easily be extended to incorporate more trajectories, as shown by the 3-class ADAS-13 trajectory definitions.

Pertaining to the prediction tasks, it is not trivial to compare LSN performance with previous studies because of differences in task definitions and input data types. The authors of [8,10,42] have provided a comparative overview of studies tackling AD conversion tasks. In these studies, the conversion times under consideration range from 18 to 54 months, and the reported AUCs range from 0.70 to 0.902, with higher performance obtained by studies using a combination of MR and cognitive features. The best-performing study among these [8] proposes a two-step approach in which MR features are first learned via a semi-supervised low-density separation technique and then combined with cognitive measures using a random forest classifier. The authors report an AUC of 0.882 with cognitive measures, 0.799 with MR features, and 0.902 with the aggregate data on 264 ADNI1 subjects. Another recent study [9] presents a probabilistic multiple kernel learning (pMKL) classifier whose input features comprise a variety of risk factors, cognitive and functional assessments, structural MR imaging data, and plasma proteomic data. The authors report an AUC of 0.83 with cognitive assessments, 0.76 with MR data, and 0.87 with multi-source data on 259 ADNI1 subjects.

A few studies have also explored longitudinal data for predictive tasks. The authors of [6] use a sparse linear regression model on imaging data and clinical scores (ADAS and MMSE), with group regularization for longitudinal feature extraction, and feed the extracted features into an SVM classifier. They report an AUC of 0.670 with cognitive scores, 0.697 with MR features, and 0.768 with the proposed longitudinal multimodal classifier on 88 ADNI1 subjects; with baseline multimodal data alone, the reported AUC is 0.745. Another recent study [10] uses a hierarchical classification framework to select longitudinal features and reports an AUC of 0.754 with baseline features and 0.812 with longitudinal features derived solely from MR data on 131 ADNI1 subjects. Despite the differences in task definitions, sample sizes, and other factors, we note two trends. First, as mentioned earlier, owing to the implicit dependency between the task definition and the cognitive assessments, clinical scores contribute strongly to predictive performance in the larger cohorts. Second, the longitudinal studies show promising results, with performance gains from the added timepoint, further motivating models that can handle both multimodal and longitudinal data.
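To make the second trend concrete — the gain reported when a follow-up timepoint is added to baseline features — here is a minimal, self-contained sketch on synthetic data. The feature construction, the signal strengths, and the logistic-regression classifier are all assumptions for illustration only; the resulting numbers do not correspond to any of the studies cited above.

```python
# Toy comparison of baseline-only vs. baseline+follow-up features via
# cross-validated AUC; all data and effect sizes are fabricated for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 300
y = rng.integers(0, 2, size=n)                 # hypothetical 0 = stable, 1 = decline

# Baseline features carry a weak signal; the follow-up visit adds change-related signal.
baseline = rng.normal(0.0, 1.0, (n, 5)) + 0.4 * y[:, None]
followup = baseline + rng.normal(0.0, 1.0, (n, 5)) - 0.6 * y[:, None]

X_baseline_only = baseline
X_two_timepoints = np.hstack([baseline, followup - baseline])  # append change features

clf = LogisticRegression(max_iter=1000)
auc_base = cross_val_score(clf, X_baseline_only, y, cv=5, scoring="roc_auc").mean()
auc_long = cross_val_score(clf, X_two_timepoints, y, cv=5, scoring="roc_auc").mean()
print(f"baseline-only AUC: {auc_base:.3f}   baseline+follow-up AUC: {auc_long:.3f}")
```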
[ "18390347", "19689234", "22731740", "22457741", "25260851", "25312773", "26901338", "19027862", "19371783", "27500865", "17941341", "19781112", "21825241", "23302773", "27697430", "23332364", "20139996", "19439419", "23108250", "23358601", "17353226", "23079557", "21802369", "25344382", "26923371", "28295799", "27697430", "22189451", "20378467", "21945694", "8126267", "15588607", "15896981", "10944416", "11771995", "24885474", "20541286", "28079104" ]
[ { "pmid": "18390347", "title": "MRI-based automated computer classification of probable AD versus normal controls.", "abstract": "Automated computer classification (ACC) techniques are needed to facilitate physician's diagnosis of complex diseases in individual patients. We provide an example of ACC using computational techniques within the context of cross-sectional analysis of magnetic resonance images (MRI) in neurodegenerative diseases, namely Alzheimer's dementia (AD). In this paper, the accuracy of our ACC methodology is assessed when presented with real life, imperfect data, i.e., cohorts of MRI with varying acquisition parameters and imaging quality. The comparative methodology uses the Jacobian determinants derived from dense deformation fields and scaled grey-level intensity from a selected volume of interest centered on the medial temporal lobe. The ACC performance is assessed in a series of leave-one-out experiments aimed at separating 75 probable AD and 75 age-matched normal controls. The resulting accuracy is 92% using a support vector machine classifier based on least squares optimization. Finally, it is shown in the Appendix that determinants and scaled grey-level intensity are appreciably more robust to varying parameters in validation studies using simulated data, when compared to raw intensities or grey/white matter volumes. The ability of cross-sectional MRI at detecting probable AD with high accuracy could have profound implications in the management of suspected AD candidates." }, { "pmid": "19689234", "title": "Baseline MRI predictors of conversion from MCI to probable AD in the ADNI cohort.", "abstract": "The Alzheimer's Disease Neuroimaging Initiative (ADNI) is a multi-center study assessing neuroimaging in diagnosis and longitudinal monitoring. Amnestic Mild Cognitive Impairment (MCI) often represents a prodromal form of dementia, conferring a 10-15% annual risk of converting to probable AD. We analyzed baseline 1.5T MRI scans in 693 participants from the ADNI cohort divided into four groups by baseline diagnosis and one year MCI to probable AD conversion status to identify neuroimaging phenotypes associated with MCI and AD and potential predictive markers of imminent conversion. MP-RAGE scans were analyzed using publicly available voxel-based morphometry (VBM) and automated parcellation methods. Measures included global and hippocampal grey matter (GM) density, hippocampal and amygdalar volumes, and cortical thickness values from entorhinal cortex and other temporal and parietal lobe regions. The overall pattern of structural MRI changes in MCI (n=339) and AD (n=148) compared to healthy controls (HC, n=206) was similar to prior findings in smaller samples. MCI-Converters (n=62) demonstrated a very similar pattern of atrophic changes to the AD group up to a year before meeting clinical criteria for AD. Finally, a comparison of effect sizes for contrasts between the MCI-Converters and MCI-Stable (n=277) groups on MRI metrics indicated that degree of neurodegeneration of medial temporal structures was the best antecedent MRI marker of imminent conversion, with decreased hippocampal volume (left > right) being the most robust. Validation of imaging biomarkers is important as they can help enrich clinical trials of disease modifying agents by identifying individuals at highest risk for progression to AD." 
}, { "pmid": "22731740", "title": "Sparse learning and stability selection for predicting MCI to AD conversion using baseline ADNI data.", "abstract": "BACKGROUND\nPatients with Mild Cognitive Impairment (MCI) are at high risk of progression to Alzheimer's dementia. Identifying MCI individuals with high likelihood of conversion to dementia and the associated biosignatures has recently received increasing attention in AD research. Different biosignatures for AD (neuroimaging, demographic, genetic and cognitive measures) may contain complementary information for diagnosis and prognosis of AD.\n\n\nMETHODS\nWe have conducted a comprehensive study using a large number of samples from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to test the power of integrating various baseline data for predicting the conversion from MCI to probable AD and identifying a small subset of biosignatures for the prediction and assess the relative importance of different modalities in predicting MCI to AD conversion. We have employed sparse logistic regression with stability selection for the integration and selection of potential predictors. Our study differs from many of the other ones in three important respects: (1) we use a large cohort of MCI samples that are unbiased with respect to age or education status between case and controls (2) we integrate and test various types of baseline data available in ADNI including MRI, demographic, genetic and cognitive measures and (3) we apply sparse logistic regression with stability selection to ADNI data for robust feature selection.\n\n\nRESULTS\nWe have used 319 MCI subjects from ADNI that had MRI measurements at the baseline and passed quality control, including 177 MCI Non-converters and 142 MCI Converters. Conversion was considered over the course of a 4-year follow-up period. A combination of 15 features (predictors) including those from MRI scans, APOE genotyping, and cognitive measures achieves the best prediction with an AUC score of 0.8587.\n\n\nCONCLUSIONS\nOur results demonstrate the power of integrating various baseline data for prediction of the conversion from MCI to probable AD. Our results also demonstrate the effectiveness of stability selection for feature selection in the context of sparse logistic regression." }, { "pmid": "22457741", "title": "Predicting future clinical changes of MCI patients using longitudinal and multimodal biomarkers.", "abstract": "Accurate prediction of clinical changes of mild cognitive impairment (MCI) patients, including both qualitative change (i.e., conversion to Alzheimer's disease (AD)) and quantitative change (i.e., cognitive scores) at future time points, is important for early diagnosis of AD and for monitoring the disease progression. In this paper, we propose to predict future clinical changes of MCI patients by using both baseline and longitudinal multimodality data. To do this, we first develop a longitudinal feature selection method to jointly select brain regions across multiple time points for each modality. Specifically, for each time point, we train a sparse linear regression model by using the imaging data and the corresponding clinical scores, with an extra 'group regularization' to group the weights corresponding to the same brain region across multiple time points together and to allow for selection of brain regions based on the strength of multiple time points jointly. 
Then, to further reflect the longitudinal changes on the selected brain regions, we extract a set of longitudinal features from the original baseline and longitudinal data. Finally, we combine all features on the selected brain regions, from different modalities, for prediction by using our previously proposed multi-kernel SVM. We validate our method on 88 ADNI MCI subjects, with both MRI and FDG-PET data and the corresponding clinical scores (i.e., MMSE and ADAS-Cog) at 5 different time points. We first predict the clinical scores (MMSE and ADAS-Cog) at 24-month by using the multimodality data at previous time points, and then predict the conversion of MCI to AD by using the multimodality data at time points which are at least 6-month ahead of the conversion. The results on both sets of experiments show that our proposed method can achieve better performance in predicting future clinical changes of MCI patients than the conventional methods." }, { "pmid": "25260851", "title": "Structural imaging biomarkers of Alzheimer's disease: predicting disease progression.", "abstract": "Optimized magnetic resonance imaging (MRI)-based biomarkers of Alzheimer's disease (AD) may allow earlier detection and refined prediction of the disease. In addition, they could serve as valuable tools when designing therapeutic studies of individuals at risk of AD. In this study, we combine (1) a novel method for grading medial temporal lobe structures with (2) robust cortical thickness measurements to predict AD among subjects with mild cognitive impairment (MCI) from a single T1-weighted MRI scan. Using AD and cognitively normal individuals, we generate a set of features potentially discriminating between MCI subjects who convert to AD and those who remain stable over a period of 3 years. Using mutual information-based feature selection, we identify 5 key features optimizing the classification of MCI converters. These features are the left and right hippocampi gradings and cortical thicknesses of the left precuneus, left superior temporal sulcus, and right anterior part of the parahippocampal gyrus. We show that these features are highly stable in cross-validation and enable a prediction accuracy of 72% using a simple linear discriminant classifier, the highest prediction accuracy obtained on the baseline Alzheimer's Disease Neuroimaging Initiative first phase cohort to date. The proposed structural features are consistent with Braak stages and previously reported atrophic patterns in AD and are easy to transfer to new cohorts and to clinical practice." }, { "pmid": "25312773", "title": "Machine learning framework for early MRI-based Alzheimer's conversion prediction in MCI subjects.", "abstract": "Mild cognitive impairment (MCI) is a transitional stage between age-related cognitive decline and Alzheimer's disease (AD). For the effective treatment of AD, it would be important to identify MCI patients at high risk for conversion to AD. In this study, we present a novel magnetic resonance imaging (MRI)-based method for predicting the MCI-to-AD conversion from one to three years before the clinical diagnosis. First, we developed a novel MRI biomarker of MCI-to-AD conversion using semi-supervised learning and then integrated it with age and cognitive measures about the subjects using a supervised learning algorithm resulting in what we call the aggregate biomarker. 
The novel characteristics of the methods for learning the biomarkers are as follows: 1) We used a semi-supervised learning method (low density separation) for the construction of MRI biomarker as opposed to more typical supervised methods; 2) We performed a feature selection on MRI data from AD subjects and normal controls without using data from MCI subjects via regularized logistic regression; 3) We removed the aging effects from the MRI data before the classifier training to prevent possible confounding between AD and age related atrophies; and 4) We constructed the aggregate biomarker by first learning a separate MRI biomarker and then combining it with age and cognitive measures about the MCI subjects at the baseline by applying a random forest classifier. We experimentally demonstrated the added value of these novel characteristics in predicting the MCI-to-AD conversion on data obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. With the ADNI data, the MRI biomarker achieved a 10-fold cross-validated area under the receiver operating characteristic curve (AUC) of 0.7661 in discriminating progressive MCI patients (pMCI) from stable MCI patients (sMCI). Our aggregate biomarker based on MRI data together with baseline cognitive measurements and age achieved a 10-fold cross-validated AUC score of 0.9020 in discriminating pMCI from sMCI. The results presented in this study demonstrate the potential of the suggested approach for early AD diagnosis and an important role of MRI in the MCI-to-AD conversion prediction. However, it is evident based on our results that combining MRI data with cognitive test results improved the accuracy of the MCI-to-AD conversion prediction." }, { "pmid": "26901338", "title": "Predicting Progression from Mild Cognitive Impairment to Alzheimer's Dementia Using Clinical, MRI, and Plasma Biomarkers via Probabilistic Pattern Classification.", "abstract": "BACKGROUND\nIndividuals with mild cognitive impairment (MCI) have a substantially increased risk of developing dementia due to Alzheimer's disease (AD). In this study, we developed a multivariate prognostic model for predicting MCI-to-dementia progression at the individual patient level.\n\n\nMETHODS\nUsing baseline data from 259 MCI patients and a probabilistic, kernel-based pattern classification approach, we trained a classifier to distinguish between patients who progressed to AD-type dementia (n = 139) and those who did not (n = 120) during a three-year follow-up period. More than 750 variables across four data sources were considered as potential predictors of progression. These data sources included risk factors, cognitive and functional assessments, structural magnetic resonance imaging (MRI) data, and plasma proteomic data. Predictive utility was assessed using a rigorous cross-validation framework.\n\n\nRESULTS\nCognitive and functional markers were most predictive of progression, while plasma proteomic markers had limited predictive utility. The best performing model incorporated a combination of cognitive/functional markers and morphometric MRI measures and predicted progression with 80% accuracy (83% sensitivity, 76% specificity, AUC = 0.87). Predictors of progression included scores on the Alzheimer's Disease Assessment Scale, Rey Auditory Verbal Learning Test, and Functional Activities Questionnaire, as well as volume/cortical thickness of three brain regions (left hippocampus, middle temporal gyrus, and inferior parietal cortex). 
Calibration analysis revealed that the model is capable of generating probabilistic predictions that reliably reflect the actual risk of progression. Finally, we found that the predictive accuracy of the model varied with patient demographic, genetic, and clinical characteristics and could be further improved by taking into account the confidence of the predictions.\n\n\nCONCLUSIONS\nWe developed an accurate prognostic model for predicting MCI-to-dementia progression over a three-year period. The model utilizes widely available, cost-effective, non-invasive markers and can be used to improve patient selection in clinical trials and identify high-risk MCI patients for early treatment." }, { "pmid": "19027862", "title": "Baseline and longitudinal patterns of brain atrophy in MCI patients, and their use in prediction of short-term conversion to AD: results from ADNI.", "abstract": "High-dimensional pattern classification was applied to baseline and multiple follow-up MRI scans of the Alzheimer's Disease Neuroimaging Initiative (ADNI) participants with mild cognitive impairment (MCI), in order to investigate the potential of predicting short-term conversion to Alzheimer's Disease (AD) on an individual basis. MCI participants that converted to AD (average follow-up 15 months) displayed significantly lower volumes in a number of grey matter (GM) regions, as well as in the white matter (WM). They also displayed more pronounced periventricular small-vessel pathology, as well as an increased rate of increase of such pathology. Individual person analysis was performed using a pattern classifier previously constructed from AD patients and cognitively normal (CN) individuals to yield an abnormality score that is positive for AD-like brains and negative otherwise. The abnormality scores measured from MCI non-converters (MCI-NC) followed a bimodal distribution, reflecting the heterogeneity of this group, whereas they were positive in almost all MCI converters (MCI-C), indicating extensive patterns of AD-like brain atrophy in almost all MCI-C. Both MCI subgroups had similar MMSE scores at baseline. A more specialized classifier constructed to differentiate converters from non-converters based on their baseline scans provided good classification accuracy reaching 81.5%, evaluated via cross-validation. These pattern classification schemes, which distill spatial patterns of atrophy to a single abnormality score, offer promise as biomarkers of AD and as predictors of subsequent clinical progression, on an individual patient basis." }, { "pmid": "19371783", "title": "Relating one-year cognitive change in mild cognitive impairment to baseline MRI features.", "abstract": "BACKGROUND\nWe propose a completely automated methodology to investigate the relationship between magnetic resonance image (MRI) features and changes in cognitive estimates, applied to the study of Mini-Mental State Examination (MMSE) changes in mild cognitive impairment (MCI).\n\n\nSUBJECTS\nA reference group composed of 75 patients with clinically probable Alzheimer's Disease (AD) and 75 age-matched controls; and a study group composed of 49 MCI, 20 having progressed to clinically probable AD and 29 having remained stable after a 48 month follow-up.\n\n\nMETHODS\nWe created a pathology-specific reference space using principal component analysis of MRI-based features (intensity, local volume changes) within the medial temporal lobe of T1-weighted baseline images for the reference group. 
We projected similar data from the study group and identified a restricted set of image features highly correlated with one-year change in MMSE, using a bootstrap sampling estimation. We used robust linear regression models to predict one-year MMSE changes from baseline MRI, baseline MMSE, age, gender, and years of education.\n\n\nRESULTS\nAll experiments were performed using a leave-one-out paradigm. We found multiple image-based features highly correlated with one-year MMSE changes (/r/>0.425). The model for all N=49 MCI subjects had a correlation of r=0.31 between actual and predicted one-year MMSE change values. A second model only for MCI subjects with MMSE loss larger than 1 U had a pairwise correlation r=0.80 with an adjusted coefficient of determination r(2)=0.61.\n\n\nFINDINGS\nOur automated MRI-based technique revealed a strong relationship between baseline MRI features and one-year cognitive changes in a sub-group of MCI subjects. This technique should be generalized to other aspects of cognitive evaluation and to a wider scope of dementias." }, { "pmid": "27500865", "title": "Longitudinal clinical score prediction in Alzheimer's disease with soft-split sparse regression based random forest.", "abstract": "Alzheimer's disease (AD) is an irreversible neurodegenerative disease and affects a large population in the world. Cognitive scores at multiple time points can be reliably used to evaluate the progression of the disease clinically. In recent studies, machine learning techniques have shown promising results on the prediction of AD clinical scores. However, there are multiple limitations in the current models such as linearity assumption and missing data exclusion. Here, we present a nonlinear supervised sparse regression-based random forest (RF) framework to predict a variety of longitudinal AD clinical scores. Furthermore, we propose a soft-split technique to assign probabilistic paths to a test sample in RF for more accurate predictions. In order to benefit from the longitudinal scores in the study, unlike the previous studies that often removed the subjects with missing scores, we first estimate those missing scores with our proposed soft-split sparse regression-based RF and then utilize those estimated longitudinal scores at all the previous time points to predict the scores at the next time point. The experiment results demonstrate that our proposed method is superior to the traditional RF and outperforms other state-of-art regression models. Our method can also be extended to be a general regression framework to predict other disease scores." }, { "pmid": "17941341", "title": "Longitudinal trajectories of cognitive change in preclinical Alzheimer's disease: a growth mixture modeling analysis.", "abstract": "Preclinical Alzheimer's disease (AD) refers to a period of time prior to diagnosis during which cognitive deficits among individuals who will go on to receive a diagnosis of AD are present. There is great interest in describing the nature of cognitive change during the preclinical period, in terms of whether persons decline in a linear fashion to diagnosis, or exhibit some stability of functioning, followed by rapid losses in performance. In the current study we apply Growth Mixture Modeling to data from The Kungsholmen Project to evaluate whether decline in Mini Mental State Examination (MMSE) scores during the preclinical period of AD follows a linear or quadratic function. 
At the end of a 7-year follow-up period, some individuals would be diagnosed with AD (n=71), whereas others would remain free of dementia (n=457). The results indicated that a two-group quadratic model of decline provided the best statistical fit measures, as well as the greatest estimates of sensitivity (67%) and specificity (86%). Differences in MMSE scores were apparent at baseline, but the preclinical AD group began to experience precipitous declines three years prior to diagnosis. Finally, persons who were misclassified as preclinical AD had fewer years of education and poorer MMSE scores at baseline." }, { "pmid": "19781112", "title": "Trajectories of cognitive decline in Alzheimer's disease.", "abstract": "BACKGROUND\nLate-onset Alzheimer disease (LOAD) is a clinically heterogeneous complex disease defined by progressively disabling cognitive impairment. Psychotic symptoms which affect approximately one-half of LOAD subjects have been associated with more rapid cognitive decline. However, the variety of cognitive trajectories in LOAD, and their correlates, have not been well defined. We therefore used latent class modeling to characterize trajectories of cognitive and behavioral decline in a cohort of AD subjects.\n\n\nMETHODS\n201 Caucasian subjects with possible or probable Alzheimer's disease (AD) were evaluated for cognitive and psychotic symptoms at regular intervals for up to 13.5 years. Cognitive symptoms were evaluated serially with the Mini-mental State Examination (MMSE), and psychotic symptoms were rated using the CERAD behavioral rating scale (CBRS). Analyses undertaken were latent class mixture models of quadratic trajectories including a random intercept with initial MMSE score, age, gender, education, and APOE 4 count modeled as concomitant variables. In a secondary analysis, psychosis status was also included.\n\n\nRESULTS\nAD subjects showed six trajectories with significantly different courses and rates of cognitive decline. The concomitant variables included in the best latent class trajectory model were initial MMSE and age. Greater burden of psychotic symptoms increased the probability of following a trajectory of more rapid cognitive decline in all age and initial MMSE groups. APOE 4 was not associated with any trajectory.\n\n\nCONCLUSION\nTrajectory modeling of longitudinal cognitive and behavioral data may provide enhanced resolution of phenotypic variation in AD." }, { "pmid": "21825241", "title": "The dynamics of cortical and hippocampal atrophy in Alzheimer disease.", "abstract": "OBJECTIVE\nTo characterize rates of regional Alzheimer disease (AD)-specific brain atrophy across the presymptomatic, mild cognitive impairment, and dementia stages.\n\n\nDESIGN\nMulticenter case-control study of neuroimaging, cerebrospinal fluid, and cognitive test score data from the Alzheimer's Disease Neuroimaging Initiative.\n\n\nSETTING\nResearch centers across the United States and Canada.\n\n\nPATIENTS\nWe examined a total of 317 participants with baseline cerebrospinal fluid biomarker measurements and 3 T1-weighted magnetic resonance images obtained within 1 year.\n\n\nMAIN OUTCOME MEASURES\nWe used automated tools to compute annual longitudinal atrophy in the hippocampus and cortical regions targeted in AD. We used Mini-Mental State Examination scores as a measure of cognitive performance. 
We performed a cross-subject analysis of atrophy rates and acceleration on individuals with an AD-like cerebrospinal fluid molecular profile.\n\n\nRESULTS\nIn presymptomatic individuals harboring indicators of AD, baseline thickness in AD-vulnerable cortical regions was significantly reduced compared with that of healthy control individuals, but baseline hippocampal volume was not. Across the clinical spectrum, rates of AD-specific cortical thinning increased with decreasing cognitive performance before peaking at approximately the Mini-Mental State Examination score of 21, beyond which rates of thinning started to decline. Annual rates of hippocampal volume loss showed a continuously increasing pattern with decreasing cognitive performance as low as the Mini-Mental State Examination score of 15. Analysis of the second derivative of imaging measurements revealed that AD-specific cortical thinning exhibited early acceleration followed by deceleration. Conversely, hippocampal volume loss exhibited positive acceleration across all study participants.\n\n\nCONCLUSIONS\nAlzheimer disease-specific cortical thinning and hippocampal volume loss are consistent with a sigmoidal pattern, with an acceleration phase during the early stages of the disease. Clinical trials should carefully consider the nonlinear behavior of these AD biomarkers." }, { "pmid": "23302773", "title": "Clinical, imaging, and pathological heterogeneity of the Alzheimer's disease syndrome.", "abstract": "With increasing knowledge of clinical in vivo biomarkers and the pathological intricacies of Alzheimer's disease (AD), nosology is evolving. Harmonized consensus criteria that emphasize prototypic illness continue to develop to achieve diagnostic clarity for treatment decisions and clinical trials. However, it is clear that AD is clinically heterogeneous in presentation and progression, demonstrating variable topographic distributions of atrophy and hypometabolism/hypoperfusion. AD furthermore often keeps company with other conditions that may further nuance clinical expression, such as synucleinopathy exacerbating executive and visuospatial dysfunction and vascular pathologies (particularly small vessel disease that is increasingly ubiquitous with human aging) accentuating frontal-dysexecutive symptomatology. That some of these atypical clinical patterns recur may imply the existence of distinct AD variants. For example, focal temporal lobe dysfunction is associated with a pure amnestic syndrome, very slow decline, with atrophy and neurofibrillary tangles limited largely to the medial temporal region including the entorhinal cortex. Left parietal atrophy and/or hypometabolism/hypoperfusion are associated with language symptoms, younger age of onset, and faster rate of decline - a potential 'language variant' of AD. Conversely, the same pattern but predominantly affecting the right parietal lobe is associated with a similar syndrome but with visuospatial symptoms replacing impaired language function. Finally, the extremely rare frontal variant is associated with executive dysfunction out of keeping with degree of memory decline and may have prominent behavioural symptoms. Genotypic differences may underlie some of these subtypes; for example, absence of apolipoprotein E e4 is often associated with atypicality in younger onset AD. 
Understanding the mechanisms behind this variability merits further investigation, informed by recent advances in imaging techniques, biomarker assays, and quantitative pathological methods, in conjunction with standardized clinical, functional, neuropsychological and neurobehavioral evaluations. Such an understanding is needed to facilitate 'personalized AD medicine', and eventually allow for clinical trials targeting specific AD subtypes. Although the focus legitimately remains on prototypic illness, continuing efforts to develop disease-modifying therapies should not exclude the rarer AD subtypes and common comorbid presentations, as is currently often the case. Only by treating them as well can we address the full burden of this devastating dementia syndrome." }, { "pmid": "27697430", "title": "Defining imaging biomarker cut points for brain aging and Alzheimer's disease.", "abstract": "INTRODUCTION\nOur goal was to develop cut points for amyloid positron emission tomography (PET), tau PET, flouro-deoxyglucose (FDG) PET, and MRI cortical thickness.\n\n\nMETHODS\nWe examined five methods for determining cut points.\n\n\nRESULTS\nThe reliable worsening method produced a cut point only for amyloid PET. The specificity, sensitivity, and accuracy of cognitively impaired versus young clinically normal (CN) methods labeled the most people abnormal and all gave similar cut points for tau PET, FDG PET, and cortical thickness. Cut points defined using the accuracy of cognitively impaired versus age-matched CN method labeled fewer people abnormal.\n\n\nDISCUSSION\nIn the future, we will use a single cut point for amyloid PET (standardized uptake value ratio, 1.42; centiloid, 19) based on the reliable worsening cut point method. We will base lenient cut points for tau PET, FDG PET, and cortical thickness on the accuracy of cognitively impaired versus young CN method and base conservative cut points on the accuracy of cognitively impaired versus age-matched CN method." }, { "pmid": "23332364", "title": "Tracking pathophysiological processes in Alzheimer's disease: an updated hypothetical model of dynamic biomarkers.", "abstract": "In 2010, we put forward a hypothetical model of the major biomarkers of Alzheimer's disease (AD). The model was received with interest because we described the temporal evolution of AD biomarkers in relation to each other and to the onset and progression of clinical symptoms. Since then, evidence has accumulated that supports the major assumptions of this model. Evidence has also appeared that challenges some of our assumptions, which has allowed us to modify our original model. Refinements to our model include indexing of individuals by time rather than clinical symptom severity; incorporation of interindividual variability in cognitive impairment associated with progression of AD pathophysiology; modifications of the specific temporal ordering of some biomarkers; and recognition that the two major proteinopathies underlying AD biomarker changes, amyloid β (Aβ) and tau, might be initiated independently in sporadic AD, in which we hypothesise that an incident Aβ pathophysiology can accelerate antecedent limbic and brainstem tauopathy." }, { "pmid": "20139996", "title": "The clinical use of structural MRI in Alzheimer disease.", "abstract": "Structural imaging based on magnetic resonance is an integral part of the clinical assessment of patients with suspected Alzheimer dementia. 
Prospective data on the natural history of change in structural markers from preclinical to overt stages of Alzheimer disease are radically changing how the disease is conceptualized, and will influence its future diagnosis and treatment. Atrophy of medial temporal structures is now considered to be a valid diagnostic marker at the mild cognitive impairment stage. Structural imaging is also included in diagnostic criteria for the most prevalent non-Alzheimer dementias, reflecting its value in differential diagnosis. In addition, rates of whole-brain and hippocampal atrophy are sensitive markers of neurodegeneration, and are increasingly used as outcome measures in trials of potentially disease-modifying therapies. Large multicenter studies are currently investigating the value of other imaging and nonimaging markers as adjuncts to clinical assessment in diagnosis and monitoring of progression. The utility of structural imaging and other markers will be increased by standardization of acquisition and analysis methods, and by development of robust algorithms for automated assessment." }, { "pmid": "19439419", "title": "Early diagnosis of Alzheimer's disease using cortical thickness: impact of cognitive reserve.", "abstract": "Brain atrophy measured by magnetic resonance structural imaging has been proposed as a surrogate marker for the early diagnosis of Alzheimer's disease. Studies on large samples are still required to determine its practical interest at the individual level, especially with regards to the capacity of anatomical magnetic resonance imaging to disentangle the confounding role of the cognitive reserve in the early diagnosis of Alzheimer's disease. One hundred and thirty healthy controls, 122 subjects with mild cognitive impairment of the amnestic type and 130 Alzheimer's disease patients were included from the ADNI database and followed up for 24 months. After 24 months, 72 amnestic mild cognitive impairment had converted to Alzheimer's disease (referred to as progressive mild cognitive impairment, as opposed to stable mild cognitive impairment). For each subject, cortical thickness was measured on the baseline magnetic resonance imaging volume. The resulting cortical thickness map was parcellated into 22 regions and a normalized thickness index was computed using the subset of regions (right medial temporal, left lateral temporal, right posterior cingulate) that optimally distinguished stable mild cognitive impairment from progressive mild cognitive impairment. We tested the ability of baseline normalized thickness index to predict evolution from amnestic mild cognitive impairment to Alzheimer's disease and compared it to the predictive values of the main cognitive scores at baseline. In addition, we studied the relationship between the normalized thickness index, the education level and the timeline of conversion to Alzheimer's disease. Normalized thickness index at baseline differed significantly among all the four diagnosis groups (P < 0.001) and correctly distinguished Alzheimer's disease patients from healthy controls with an 85% cross-validated accuracy. Normalized thickness index also correctly predicted evolution to Alzheimer's disease for 76% of amnestic mild cognitive impairment subjects after cross-validation, thus showing an advantage over cognitive scores (range 63-72%). 
Moreover, progressive mild cognitive impairment subjects, who converted later than 1 year after baseline, showed a significantly higher education level than those who converted earlier than 1 year after baseline. Using a normalized thickness index-based criterion may help with early diagnosis of Alzheimer's disease at the individual level, especially for highly educated subjects, up to 24 months before clinical criteria for Alzheimer's disease diagnosis are met." }, { "pmid": "23108250", "title": "Neuroimaging insights into network-based neurodegeneration.", "abstract": "PURPOSE OF REVIEW\nConvergent evidence from a number of neuroscience disciplines supports the hypothesis that Alzheimer's disease and other neurodegenerative disorders progress along brain networks. This review considers the role of neuroimaging in strengthening the case for network-based neurodegeneration and elucidating potential mechanisms.\n\n\nRECENT FINDINGS\nAdvances in functional and structural MRI have recently enabled the delineation of multiple large-scale distributed brain networks. The application of these network-imaging modalities to neurodegenerative disease has shown that specific disorders appear to progress along specific networks. Recent work applying theoretical measures of network efficiency to in-vivo network imaging has allowed for the development and evaluation of models of disease spread along networks. Novel MRI acquisition and analysis methods are paving the way for in-vivo assessment of the layer-specific microcircuits first targeted by neurodegenerative diseases. These methodological advances coupled with large, longitudinal studies of subjects progressing from healthy aging into dementia will enable a detailed understanding of the seeding and spread of these disorders.\n\n\nSUMMARY\nNeuroimaging has provided ample evidence that neurodegenerative disorders progress along brain networks, and is now beginning to elucidate how they do so." }, { "pmid": "23358601", "title": "Diverging patterns of amyloid deposition and hypometabolism in clinical variants of probable Alzheimer's disease.", "abstract": "The factors driving clinical heterogeneity in Alzheimer's disease are not well understood. This study assessed the relationship between amyloid deposition, glucose metabolism and clinical phenotype in Alzheimer's disease, and investigated how these relate to the involvement of functional networks. The study included 17 patients with early-onset Alzheimer's disease (age at onset <65 years), 12 patients with logopenic variant primary progressive aphasia and 13 patients with posterior cortical atrophy [whole Alzheimer's disease group: age = 61.5 years (standard deviation 6.5 years), 55% male]. Thirty healthy control subjects [age = 70.8 (3.3) years, 47% male] were also included. Subjects underwent positron emission tomography with (11)C-labelled Pittsburgh compound B and (18)F-labelled fluorodeoxyglucose. All patients met National Institute on Ageing-Alzheimer's Association criteria for probable Alzheimer's disease and showed evidence of amyloid deposition on (11)C-labelled Pittsburgh compound B positron emission tomography. We hypothesized that hypometabolism patterns would differ across variants, reflecting involvement of specific functional networks, whereas amyloid patterns would be diffuse and similar across variants. 
We tested these hypotheses using three complimentary approaches: (i) mass-univariate voxel-wise group comparison of (18)F-labelled fluorodeoxyglucose and (11)C-labelled Pittsburgh compound B; (ii) generation of covariance maps across all subjects with Alzheimer's disease from seed regions of interest specifically atrophied in each variant, and comparison of these maps to functional network templates; and (iii) extraction of (11)C-labelled Pittsburgh compound B and (18)F-labelled fluorodeoxyglucose values from functional network templates. Alzheimer's disease clinical groups showed syndrome-specific (18)F-labelled fluorodeoxyglucose patterns, with greater parieto-occipital involvement in posterior cortical atrophy, and asymmetric involvement of left temporoparietal regions in logopenic variant primary progressive aphasia. In contrast, all Alzheimer's disease variants showed diffuse patterns of (11)C-labelled Pittsburgh compound B binding, with posterior cortical atrophy additionally showing elevated uptake in occipital cortex compared with early-onset Alzheimer's disease. The seed region of interest covariance analysis revealed distinct (18)F-labelled fluorodeoxyglucose correlation patterns that greatly overlapped with the right executive-control network for the early-onset Alzheimer's disease region of interest, the left language network for the logopenic variant primary progressive aphasia region of interest, and the higher visual network for the posterior cortical atrophy region of interest. In contrast, (11)C-labelled Pittsburgh compound B covariance maps for each region of interest were diffuse. Finally, (18)F-labelled fluorodeoxyglucose was similarly reduced in all Alzheimer's disease variants in the dorsal and left ventral default mode network, whereas significant differences were found in the right ventral default mode, right executive-control (both lower in early-onset Alzheimer's disease and posterior cortical atrophy than logopenic variant primary progressive aphasia) and higher-order visual network (lower in posterior cortical atrophy than in early-onset Alzheimer's disease and logopenic variant primary progressive aphasia), with a trend towards lower (18)F-labelled fluorodeoxyglucose also found in the left language network in logopenic variant primary progressive aphasia. There were no differences in (11)C-labelled Pittsburgh compound B binding between syndromes in any of the networks. Our data suggest that Alzheimer's disease syndromes are associated with degeneration of specific functional networks, and that fibrillar amyloid-β deposition explains at most a small amount of the clinico-anatomic heterogeneity in Alzheimer's disease." }, { "pmid": "17353226", "title": "Different regional patterns of cortical thinning in Alzheimer's disease and frontotemporal dementia.", "abstract": "Alzheimer's disease and frontotemporal dementia (FTD) can be difficult to differentiate clinically because of overlapping symptoms. Distinguishing the two dementias based on volumetric measurements of brain atrophy with MRI has been only partially successful. Whether MRI measurements of cortical thinning improve the differentiation between Alzheimer's disease and FTD is unclear. In this study, we measured cortical thickness using a set of automated tools (Freesurfer) to reconstruct the brain's cortical surface from T1-weighted structural MRI data in 22 patients with Alzheimer's disease, 19 patients with FTD and 23 cognitively normal subjects. 
The goals were to detect the characteristic patterns of cortical thinning in these two types of dementia, to test the relationship between cortical thickness and cognitive impairment, to determine if measurement of cortical thickness is better than that of cortical volume for differentiating between these dementias and normal ageing and improving the classification of Alzheimer's disease and FTD based on neuropsychological scores alone. Compared to cognitively normal subjects, Alzheimer's disease patients had a thinner cortex primarily in bilateral, frontal, parietal, temporal and occipital lobes (P < 0.001), while FTD patients had a thinner cortex in bilateral, frontal and temporal regions and some thinning in inferior parietal regions and the posterior cingulate (P < 0.001). Compared to FTD patients, Alzheimer's disease patients had a thinner cortex (P < 0.001) in parts of bilateral parietal and precuneus regions. Cognitive impairment was negatively correlated with cortical thickness of frontal, parietal and temporal lobes in Alzheimer's disease, while similar correlations were not significant in FTD. Measurement of cortical thickness was similar to that of cortical volume in differentiating between normal ageing, Alzheimer's disease and FTD. Furthermore, cortical thickness measurements significantly improved the classification between Alzheimer's disease and FTD based on neuropsychological scores alone, including the Mini-Mental State Examination and a modified version of the Trail-Making Test. In conclusion, the characteristic patterns of cortical thinning in Alzheimer's disease and FTD suggest that cortical thickness may be a useful surrogate marker for these types of dementia." }, { "pmid": "23079557", "title": "Cognitive reserve in ageing and Alzheimer's disease.", "abstract": "The concept of cognitive reserve provides an explanation for differences between individuals in susceptibility to age-related brain changes or pathology related to Alzheimer's disease, whereby some people can tolerate more of these changes than others and maintain function. Epidemiological studies suggest that lifelong experiences, including educational and occupational attainment, and leisure activities in later life, can increase this reserve. For example, the risk of developing Alzheimer's disease is reduced in individuals with higher educational or occupational attainment. Reserve can conveniently be divided into two types: brain reserve, which refers to differences in the brain structure that may increase tolerance to pathology, and cognitive reserve, which refers to differences between individuals in how tasks are performed that might enable some people to be more resilient to brain changes than others. Greater understanding of the concept of cognitive reserve could lead to interventions to slow cognitive ageing or reduce the risk of dementia." }, { "pmid": "21802369", "title": "Neuropathologically defined subtypes of Alzheimer's disease with distinct clinical characteristics: a retrospective study.", "abstract": "BACKGROUND\nNeurofibrillary pathology has a stereotypical progression in Alzheimer's disease (AD) that is encapsulated in the Braak staging scheme; however, some AD cases are atypical and do not fit into this scheme. We aimed to compare clinical and neuropathological features between typical and atypical AD cases.\n\n\nMETHODS\nAD cases with a Braak neurofibrillary tangle stage of more than IV were identified from a brain bank database. 
By use of thioflavin-S fluorescence microscopy, we assessed the density and the distribution of neurofibrillary tangles in three cortical regions and two hippocampal sectors. These data were used to construct an algorithm to classify AD cases into typical, hippocampal sparing, or limbic predominant. Classified cases were then compared for clinical, demographic, pathological, and genetic characteristics. An independent cohort of AD cases was assessed to validate findings from the initial cohort.\n\n\nFINDINGS\n889 cases of AD, 398 men and 491 women with age at death of 37-103 years, were classified with the algorithm as hippocampal sparing (97 cases [11%]), typical (665 [75%]), or limbic predominant (127 [14%]). By comparison with typical AD, neurofibrillary tangle counts per 0.125 mm(2) in hippocampal sparing cases were higher in cortical areas (median 13, IQR 11-16) and lower in the hippocampus (7.5, 5.2-9.5), whereas counts in limbic-predominant cases were lower in cortical areas (4.3, 3.0-5.7) and higher in the hippocampus (27, 22-35). Hippocampal sparing cases had less hippocampal atrophy than did typical and limbic-predominant cases. Patients with hippocampal sparing AD were younger at death (mean 72 years [SD 10]) and a higher proportion of them were men (61 [63%]), whereas those with limbic-predominant AD were older (mean 86 years [SD 6]) and a higher proportion of them were women (87 [69%]). Microtubule-associated protein tau (MAPT) H1H1 genotype was more common in limbic-predominant AD (54 [70%]) than in hippocampal sparing AD (24 [46%]; p=0.011), but did not differ significantly between limbic-predominant and typical AD (204 [59%]; p=0.11). Apolipoprotein E (APOE) ɛ4 allele status differed between AD subtypes only when data were stratified by age at onset. Clinical presentation, age at onset, disease duration, and rate of cognitive decline differed between the AD subtypes. These findings were confirmed in a validation cohort of 113 patients with AD.\n\n\nINTERPRETATION\nThese data support the hypothesis that AD has distinct clinicopathological subtypes. Hippocampal sparing and limbic-predominant AD subtypes might account for about 25% of cases, and hence should be considered when designing clinical, genetic, biomarker, and treatment studies in patients with AD.\n\n\nFUNDING\nUS National Institutes of Health via Mayo Alzheimer's Disease Research Center, Mayo Clinic Study on Aging, Florida Alzheimer's Disease Research Center, and Einstein Aging Study; and State of Florida Alzheimer's Disease Initiative." }, { "pmid": "25344382", "title": "Anatomical heterogeneity of Alzheimer disease: based on cortical thickness on MRIs.", "abstract": "OBJECTIVE\nBecause the signs associated with dementia due to Alzheimer disease (AD) can be heterogeneous, the goal of this study was to use 3-dimensional MRI to examine the various patterns of cortical atrophy that can be associated with dementia of AD type, and to investigate whether AD dementia can be categorized into anatomical subtypes.\n\n\nMETHODS\nHigh-resolution T1-weighted volumetric MRIs were taken of 152 patients in their earlier stages of AD dementia. The images were processed to measure cortical thickness, and hierarchical agglomerative cluster analysis was performed using Ward's clustering linkage. 
The identified clusters of patients were compared with an age- and sex-matched control group using a general linear model.\n\n\nRESULTS\nThere were several distinct patterns of cortical atrophy and the number of patterns varied according to the level of cluster analyses. At the 3-cluster level, patients were divided into (1) bilateral medial temporal-dominant atrophy subtype (n = 52, ∼ 34.2%), (2) parietal-dominant subtype (n = 28, ∼ 18.4%) in which the bilateral parietal lobes, the precuneus, along with bilateral dorsolateral frontal lobes, were atrophic, and (3) diffuse atrophy subtype (n = 72, ∼ 47.4%) in which nearly all association cortices revealed atrophy. These 3 subtypes also differed in their demographic and clinical features.\n\n\nCONCLUSIONS\nThis cluster analysis of cortical thickness of the entire brain showed that AD dementia in the earlier stages can be categorized into various anatomical subtypes, with distinct clinical features." }, { "pmid": "26923371", "title": "HYDRA: Revealing heterogeneity of imaging and genetic patterns through a multiple max-margin discriminative analysis framework.", "abstract": "Multivariate pattern analysis techniques have been increasingly used over the past decade to derive highly sensitive and specific biomarkers of diseases on an individual basis. The driving assumption behind the vast majority of the existing methodologies is that a single imaging pattern can distinguish between healthy and diseased populations, or between two subgroups of patients (e.g., progressors vs. non-progressors). This assumption effectively ignores the ample evidence for the heterogeneous nature of brain diseases. Neurodegenerative, neuropsychiatric and neurodevelopmental disorders are largely characterized by high clinical heterogeneity, which likely stems in part from underlying neuroanatomical heterogeneity of various pathologies. Detecting and characterizing heterogeneity may deepen our understanding of disease mechanisms and lead to patient-specific treatments. However, few approaches tackle disease subtype discovery in a principled machine learning framework. To address this challenge, we present a novel non-linear learning algorithm for simultaneous binary classification and subtype identification, termed HYDRA (Heterogeneity through Discriminative Analysis). Neuroanatomical subtypes are effectively captured by multiple linear hyperplanes, which form a convex polytope that separates two groups (e.g., healthy controls from pathologic samples); each face of this polytope effectively defines a disease subtype. We validated HYDRA on simulated and clinical data. In the latter case, we applied the proposed method independently to the imaging and genetic datasets of the Alzheimer's Disease Neuroimaging Initiative (ADNI 1) study. The imaging dataset consisted of T1-weighted volumetric magnetic resonance images of 123 AD patients and 177 controls. The genetic dataset consisted of single nucleotide polymorphism information of 103 AD patients and 139 controls. We identified 3 reproducible subtypes of atrophy in AD relative to controls: (1) diffuse and extensive atrophy, (2) precuneus and extensive temporal lobe atrophy, as well some prefrontal atrophy, (3) atrophy pattern very much confined to the hippocampus and the medial temporal lobe. 
The genetics dataset yielded two subtypes of AD characterized mainly by the presence/absence of the apolipoprotein E (APOE) ε4 genotype, but also involving differential presence of risk alleles of CD2AP, SPON1 and LOC39095 SNPs that were associated with differences in the respective patterns of brain atrophy, especially in the precuneus. The results demonstrate the potential of the proposed approach to map disease heterogeneity in neuroimaging and genetic studies." }, { "pmid": "28295799", "title": "Your algorithm might think the hippocampus grows in Alzheimer's disease: Caveats of longitudinal automated hippocampal volumetry.", "abstract": "Hippocampal atrophy rate-measured using automated techniques applied to structural MRI scans-is considered a sensitive marker of disease progression in Alzheimer's disease, frequently used as an outcome measure in clinical trials. Using publicly accessible data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), we examined 1-year hippocampal atrophy rates generated by each of five automated or semiautomated hippocampal segmentation algorithms in patients with Alzheimer's disease, subjects with mild cognitive impairment, or elderly controls. We analyzed MRI data from 398 and 62 subjects available at baseline and at 1 year at MRI field strengths of 1.5 T and 3 T, respectively. We observed a high rate of hippocampal segmentation failures across all algorithms and diagnostic categories, with only 50.8% of subjects at 1.5 T and 58.1% of subjects at 3 T passing stringent segmentation quality control. We also found that all algorithms identified several subjects (between 2.94% and 48.68%) across all diagnostic categories showing increases in hippocampal volume over 1 year. For any given algorithm, hippocampal \"growth\" could not entirely be explained by excluding patients with flawed hippocampal segmentations, scan-rescan variability, or MRI field strength. Furthermore, different algorithms did not uniformly identify the same subjects as hippocampal \"growers,\" and showed very poor concordance in estimates of magnitude of hippocampal volume change over time (intraclass correlation coefficient 0.319 at 1.5 T and 0.149 at 3 T). This precluded a meaningful analysis of whether hippocampal \"growth\" represents a true biological phenomenon. Taken together, our findings suggest that longitudinal hippocampal volume change should be interpreted with considerable caution as a biomarker. Hum Brain Mapp 38:2875-2896, 2017. © 2017 Wiley Periodicals, Inc." }, { "pmid": "27697430", "title": "Defining imaging biomarker cut points for brain aging and Alzheimer's disease.", "abstract": "INTRODUCTION\nOur goal was to develop cut points for amyloid positron emission tomography (PET), tau PET, flouro-deoxyglucose (FDG) PET, and MRI cortical thickness.\n\n\nMETHODS\nWe examined five methods for determining cut points.\n\n\nRESULTS\nThe reliable worsening method produced a cut point only for amyloid PET. The specificity, sensitivity, and accuracy of cognitively impaired versus young clinically normal (CN) methods labeled the most people abnormal and all gave similar cut points for tau PET, FDG PET, and cortical thickness. Cut points defined using the accuracy of cognitively impaired versus age-matched CN method labeled fewer people abnormal.\n\n\nDISCUSSION\nIn the future, we will use a single cut point for amyloid PET (standardized uptake value ratio, 1.42; centiloid, 19) based on the reliable worsening cut point method. 
We will base lenient cut points for tau PET, FDG PET, and cortical thickness on the accuracy of cognitively impaired versus young CN method and base conservative cut points on the accuracy of cognitively impaired versus age-matched CN method." }, { "pmid": "22189451", "title": "MRI cortical thickness biomarker predicts AD-like CSF and cognitive decline in normal adults.", "abstract": "OBJECTIVE\nNew preclinical Alzheimer disease (AD) diagnostic criteria have been developed using biomarkers in cognitively normal (CN) adults. We implemented these criteria using an MRI biomarker previously associated with AD dementia, testing the hypothesis that individuals at high risk for preclinical AD would be at elevated risk for cognitive decline.\n\n\nMETHODS\nThe Alzheimer's Disease Neuroimaging Initiative database was interrogated for CN individuals. MRI data were processed using a published set of a priori regions of interest to derive a single measure known as the AD signature (ADsig). Each individual was classified as ADsig-low (≥ 1 SD below the mean: high risk for preclinical AD), ADsig-average (within 1 SD of mean), or ADsig-high (≥ 1 SD above mean). A 3-year cognitive decline outcome was defined a priori using change in Clinical Dementia Rating sum of boxes and selected neuropsychological measures.\n\n\nRESULTS\nIndividuals at high risk for preclinical AD were more likely to experience cognitive decline, which developed in 21% compared with 7% of ADsig-average and 0% of ADsig-high groups (p = 0.03). Logistic regression demonstrated that every 1 SD of cortical thinning was associated with a nearly tripled risk of cognitive decline (p = 0.02). Of those for whom baseline CSF data were available, 60% of the high risk for preclinical AD group had CSF characteristics consistent with AD while 36% of the ADsig-average and 19% of the ADsig-high groups had such CSF characteristics (p = 0.1).\n\n\nCONCLUSIONS\nThis approach to the detection of individuals at high risk for preclinical AD-identified in single CN individuals using this quantitative ADsig MRI biomarker-may provide investigators with a population enriched for AD pathobiology and with a relatively high likelihood of imminent cognitive decline consistent with prodromal AD." }, { "pmid": "20378467", "title": "N4ITK: improved N3 bias correction.", "abstract": "A variant of the popular nonparametric nonuniform intensity normalization (N3) algorithm is proposed for bias field correction. Given the superb performance of N3 and its public availability, it has been the subject of several evaluation studies. These studies have demonstrated the importance of certain parameters associated with the B-spline least-squares fitting. We propose the substitution of a recently developed fast and robust B-spline approximation routine and a modified hierarchical optimization scheme for improved bias field correction over the original N3 algorithm. Similar to the N3 algorithm, we also make the source code, testing, and technical documentation of our contribution, which we denote as \"N4ITK,\" available to the public through the Insight Toolkit of the National Institutes of Health. Performance assessment is demonstrated using simulated data from the publicly available Brainweb database, hyperpolarized (3)He lung image data, and 9.4T postmortem hippocampus data." }, { "pmid": "21945694", "title": "BEaST: brain extraction based on nonlocal segmentation technique.", "abstract": "Brain extraction is an important step in the analysis of brain images. 
The variability in brain morphology and the difference in intensity characteristics due to imaging sequences make the development of a general purpose brain extraction algorithm challenging. To address this issue, we propose a new robust method (BEaST) dedicated to produce consistent and accurate brain extraction. This method is based on nonlocal segmentation embedded in a multi-resolution framework. A library of 80 priors is semi-automatically constructed from the NIH-sponsored MRI study of normal brain development, the International Consortium for Brain Mapping, and the Alzheimer's Disease Neuroimaging Initiative databases. In testing, a mean Dice similarity coefficient of 0.9834±0.0053 was obtained when performing leave-one-out cross validation selecting only 20 priors from the library. Validation using the online Segmentation Validation Engine resulted in a top ranking position with a mean Dice coefficient of 0.9781±0.0047. Robustness of BEaST is demonstrated on all baseline ADNI data, resulting in a very low failure rate. The segmentation accuracy of the method is better than two widely used publicly available methods and recent state-of-the-art hybrid approaches. BEaST provides results comparable to a recent label fusion approach, while being 40 times faster and requiring a much smaller library of priors." }, { "pmid": "8126267", "title": "Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space.", "abstract": "OBJECTIVE\nIn both diagnostic and research applications, the interpretation of MR images of the human brain is facilitated when different data sets can be compared by visual inspection of equivalent anatomical planes. Quantitative analysis with predefined atlas templates often requires the initial alignment of atlas and image planes. Unfortunately, the axial planes acquired during separate scanning sessions are often different in their relative position and orientation, and these slices are not coplanar with those in the atlas. We have developed a completely automatic method to register a given volumetric data set with Talairach stereotaxic coordinate system.\n\n\nMATERIALS AND METHODS\nThe registration method is based on multi-scale, three-dimensional (3D) cross-correlation with an average (n > 300) MR brain image volume aligned with the Talariach stereotaxic space. Once the data set is re-sampled by the transformation recovered by the algorithm, atlas slices can be directly superimposed on the corresponding slices of the re-sampled volume. the use of such a standardized space also allows the direct comparison, voxel to voxel, of two or more data sets brought into stereotaxic space.\n\n\nRESULTS\nWith use of a two-tailed Student t test for paired samples, there was no significant difference in the transformation parameters recovered by the automatic algorithm when compared with two manual landmark-based methods (p > 0.1 for all parameters except y-scale, where p > 0.05). Using root-mean-square difference between normalized voxel intensities as an unbiased measure of registration, we show that when estimated and averaged over 60 volumetric MR images in standard space, this measure was 30% lower for the automatic technique than the manual method, indicating better registrations. Likewise, the automatic method showed a 57% reduction in standard deviation, implying a more stable technique. 
The algorithm is able to recover the transformation even when data are missing from the top or bottom of the volume.\n\n\nCONCLUSION\nWe present a fully automatic registration method to map volumetric data into stereotaxic space that yields results comparable with those of manually based techniques. The method requires no manual identification of points or contours and therefore does not suffer the drawbacks involved in user intervention such as reproducibility and interobserver variability." }, { "pmid": "15588607", "title": "Cortical thickness analysis examined through power analysis and a population simulation.", "abstract": "We have previously developed a procedure for measuring the thickness of cerebral cortex over the whole brain using 3-D MRI data and a fully automated surface-extraction (ASP) algorithm. This paper examines the precision of this algorithm, its optimal performance parameters, and the sensitivity of the method to subtle, focal changes in cortical thickness. The precision of cortical thickness measurements was studied using a simulated population study and single subject reproducibility metrics. Cortical thickness was shown to be a reliable method, reaching a sensitivity (probability of a true-positive) of 0.93. Six different cortical thickness metrics were compared. The simplest and most precise method measures the distance between corresponding vertices from the white matter to the gray matter surface. Given two groups of 25 subjects, a 0.6-mm (15%) change in thickness can be recovered after blurring with a 3-D Gaussian kernel (full-width half max = 30 mm). Smoothing across the 2-D surface manifold also improves precision; in this experiment, the optimal kernel size was 30 mm." }, { "pmid": "15896981", "title": "Automated 3-D extraction and evaluation of the inner and outer cortical surfaces using a Laplacian map and partial volume effect classification.", "abstract": "Accurate reconstruction of the inner and outer cortical surfaces of the human cerebrum is a critical objective for a wide variety of neuroimaging analysis purposes, including visualization, morphometry, and brain mapping. The Anatomic Segmentation using Proximity (ASP) algorithm, previously developed by our group, provides a topology-preserving cortical surface deformation method that has been extensively used for the aforementioned purposes. However, constraints in the algorithm to ensure topology preservation occasionally produce incorrect thickness measurements due to a restriction in the range of allowable distances between the gray and white matter surfaces. This problem is particularly prominent in pediatric brain images with tightly folded gyri. This paper presents a novel method for improving the conventional ASP algorithm by making use of partial volume information through probabilistic classification in order to allow for topology preservation across a less restricted range of cortical thickness values. The new algorithm also corrects the classification of the insular cortex by masking out subcortical tissues. For 70 pediatric brains, validation experiments for the modified algorithm, Constrained Laplacian ASP (CLASP), were performed by three methods: (i) volume matching between surface-masked gray matter (GM) and conventional tissue-classified GM, (ii) surface matching between simulated and CLASP-extracted surfaces, and (iii) repeatability of the surface reconstruction among 16 MRI scans of the same subject. 
In the volume-based evaluation, the volume enclosed by the CLASP WM and GM surfaces matched the classified GM volume 13% more accurately than using conventional ASP. In the surface-based evaluation, using synthesized thick cortex, the average difference between simulated and extracted surfaces was 4.6 +/- 1.4 mm for conventional ASP and 0.5 +/- 0.4 mm for CLASP. In a repeatability study, CLASP produced a 30% lower RMS error for the GM surface and a 8% lower RMS error for the WM surface compared with ASP." }, { "pmid": "10944416", "title": "Automated 3-D extraction of inner and outer surfaces of cerebral cortex from MRI.", "abstract": "Automatic computer processing of large multidimensional images such as those produced by magnetic resonance imaging (MRI) is greatly aided by deformable models, which are used to extract, identify, and quantify specific neuroanatomic structures. A general method of deforming polyhedra is presented here, with two novel features. First, explicit prevention of self-intersecting surface geometries is provided, unlike conventional deformable models, which use regularization constraints to discourage but not necessarily prevent such behavior. Second, deformation of multiple surfaces with intersurface proximity constraints allows each surface to help guide other surfaces into place using model-based constraints such as expected thickness of an anatomic surface. These two features are used advantageously to identify automatically the total surface of the outer and inner boundaries of cerebral cortical gray matter from normal human MR images, accurately locating the depths of the sulci, even where noise and partial volume artifacts in the image obscure the visibility of sulci. The extracted surfaces are enforced to be simple two-dimensional manifolds (having the topology of a sphere), even though the data may have topological holes. This automatic 3-D cortex segmentation technique has been applied to 150 normal subjects, simultaneously extracting both the gray/white and gray/cerebrospinal fluid interface from each individual. The collection of surfaces has been used to create a spatial map of the mean and standard deviation for the location and the thickness of cortical gray matter. Three alternative criteria for defining cortical thickness at each cortical location were developed and compared. These results are shown to corroborate published postmortem and in vivo measurements of cortical thickness." }, { "pmid": "11771995", "title": "Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain.", "abstract": "An anatomical parcellation of the spatially normalized single-subject high-resolution T1 volume provided by the Montreal Neurological Institute (MNI) (D. L. Collins et al., 1998, Trans. Med. Imag. 17, 463-468) was performed. The MNI single-subject main sulci were first delineated and further used as landmarks for the 3D definition of 45 anatomical volumes of interest (AVOI) in each hemisphere. This procedure was performed using a dedicated software which allowed a 3D following of the sulci course on the edited brain. Regions of interest were then drawn manually with the same software every 2 mm on the axial slices of the high-resolution MNI single subject. The 90 AVOI were reconstructed and assigned a label. 
Using this parcellation method, three procedures to perform the automated anatomical labeling of functional studies are proposed: (1) labeling of an extremum defined by a set of coordinates, (2) percentage of voxels belonging to each of the AVOI intersected by a sphere centered by a set of coordinates, and (3) percentage of voxels belonging to each of the AVOI intersected by an activated cluster. An interface with the Statistical Parametric Mapping package (SPM, J. Ashburner and K. J. Friston, 1999, Hum. Brain Mapp. 7, 254-266) is provided as a freeware to researchers of the neuroimaging community. We believe that this tool is an improvement for the macroscopical labeling of activated area compared to labeling assessed using the Talairach atlas brain in which deformations are well known. However, this tool does not alleviate the need for more sophisticated labeling strategies based on anatomical or cytoarchitectonic probabilistic maps." }, { "pmid": "24885474", "title": "Early intervention in Alzheimer's disease: a health economic study of the effects of diagnostic timing.", "abstract": "BACKGROUND\nIntervention and treatment in Alzheimer's disease dementia (AD-dementia) can be cost effective but the majority of patients are not diagnosed in a timely manner. Technology is now available that can enable the earlier detection of cognitive loss associated with incipient dementia, offering the potential for earlier intervention in the UK health care system. This study aimed to determine to what extent the timing of an intervention affects its cost-effectiveness.\n\n\nMETHODS\nUsing published data describing cognitive decline in the years prior to an AD diagnosis, we modelled the effects on healthcare costs and quality-adjusted life years of hypothetical symptomatic and disease-modifying interventions. Early and standard interventions were assumed to have equal clinical effects, but the early intervention could be applied up to eight years prior to standard diagnosis.\n\n\nRESULTS\nA symptomatic treatment which immediately improved cognition by one MMSE point and reduced in efficacy over three years, would produce a maximum net benefit when applied at the earliest timepoint considered, i.e. eight years prior to standard diagnosis. In this scenario, the net benefit was reduced by around 17% for every year that intervention was delayed. In contrast, for a disease-modifying intervention which halted cognitive decline for one year, economic benefits would peak when treatment effects were applied two years prior to standard diagnosis. In these models, the maximum net benefit of the disease modifying intervention was fifteen times larger than that of the symptomatic treatment.\n\n\nCONCLUSION\nTimeliness of intervention is likely to have an important impact on the cost-effectiveness of both current and future treatments. Healthcare policy should aim to optimise the timing of AD-dementia diagnosis, which is likely to necessitate detecting and treating patients several years prior to current clinical practice." }, { "pmid": "20541286", "title": "Boosting power for clinical trials using classifiers based on multiple biomarkers.", "abstract": "Machine learning methods pool diverse information to perform computer-assisted diagnosis and predict future clinical decline. We introduce a machine learning method to boost power in clinical trials. 
We created a Support Vector Machine algorithm that combines brain imaging and other biomarkers to classify 737 Alzheimer's disease Neuroimaging initiative (ADNI) subjects as having Alzheimer's disease (AD), mild cognitive impairment (MCI), or normal controls. We trained our classifiers based on example data including: MRI measures of hippocampal, ventricular, and temporal lobe volumes, a PET-FDG numerical summary, CSF biomarkers (t-tau, p-tau, and Abeta(42)), ApoE genotype, age, sex, and body mass index. MRI measures contributed most to Alzheimer's disease (AD) classification; PET-FDG and CSF biomarkers, particularly Abeta(42), contributed more to MCI classification. Using all biomarkers jointly, we used our classifier to select the one-third of the subjects most likely to decline. In this subsample, fewer than 40 AD and MCI subjects would be needed to detect a 25% slowing in temporal lobe atrophy rates with 80% power--a substantial boosting of power relative to standard imaging measures." }, { "pmid": "28079104", "title": "Longitudinal measurement and hierarchical classification framework for the prediction of Alzheimer's disease.", "abstract": "Accurate prediction of Alzheimer's disease (AD) is important for the early diagnosis and treatment of this condition. Mild cognitive impairment (MCI) is an early stage of AD. Therefore, patients with MCI who are at high risk of fully developing AD should be identified to accurately predict AD. However, the relationship between brain images and AD is difficult to construct because of the complex characteristics of neuroimaging data. To address this problem, we present a longitudinal measurement of MCI brain images and a hierarchical classification method for AD prediction. Longitudinal images obtained from individuals with MCI were investigated to acquire important information on the longitudinal changes, which can be used to classify MCI subjects as either MCI conversion (MCIc) or MCI non-conversion (MCInc) individuals. Moreover, a hierarchical framework was introduced to the classifier to manage high feature dimensionality issues and incorporate spatial information for improving the prediction accuracy. The proposed method was evaluated using 131 patients with MCI (70 MCIc and 61 MCInc) based on MRI scans taken at different time points. Results showed that the proposed method achieved 79.4% accuracy for the classification of MCIc versus MCInc, thereby demonstrating very promising performance for AD prediction." } ]
Frontiers in Neuroscience
30294250
PMC6158311
10.3389/fnins.2018.00600
Spatio-Temporal Dynamics of Intrinsic Networks in Functional Magnetic Imaging Data Using Recurrent Neural Networks
We introduce a novel recurrent neural network (RNN) approach to account for temporal dynamics and dependencies in brain networks observed via functional magnetic resonance imaging (fMRI). Our approach directly parameterizes temporal dynamics through recurrent connections, which can be used to formulate blind source separation with a conditional (rather than marginal) independence assumption, an approach we call RNN-ICA. This formulation enables us to visualize the temporal dynamics of both first-order (activity) and second-order (directed connectivity) information in brain networks that are widely studied in a static sense but not well characterized dynamically. RNN-ICA predicts dynamics directly from the recurrent states of the RNN in both task and resting-state fMRI. Our results show both task-related and group-differentiating directed connectivity.
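To make the conditional (rather than marginal) independence assumption concrete, here is a minimal sketch of the kind of log-likelihood such a model could maximize. The notation (unmixing matrix W, sources s_t = W x_t, RNN state h_t) is introduced purely for illustration and may not match the paper's exact parameterization.

```latex
% Illustrative only: conditional-independence ICA likelihood with an RNN prior.
\mathbf{s}_t = W\,\mathbf{x}_t, \qquad
\mathbf{h}_t = f_{\mathrm{RNN}}\!\left(\mathbf{h}_{t-1}, \mathbf{s}_{t-1}\right), \qquad
\log p(\mathbf{x}_{1:T}) \;=\; \sum_{t=1}^{T}\Big(\log\lvert\det W\rvert
 \;+\; \sum_{i=1}^{d}\log p\big(s_{t,i}\mid\mathbf{h}_t\big)\Big).
```

Marginal-independence ICA is recovered by dropping the dependence on h_t, i.e., assuming p(s_t) = ∏_i p(s_{t,i}) at every time step.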
5.2. Related work

Our method introduces deep and non-linear computations in time to maximum-likelihood independent component analysis (MLE ICA) without sacrificing the simplicity of a linear relationship between sources and observations. MLE ICA has a learning objective equivalent to that of infomax ICA, widely used in fMRI studies, in which the sources are drawn from a factorized logistic distribution (Hyvärinen et al., 2004). While the model learns a linear transformation between data and sources through the unmixing matrix, the source dynamics are encoded by a deep non-linear transformation with recurrent structure, represented by an RNN. Alternative non-linear parameterizations of the ICA transformation that use deep neural networks have been shown to work with fMRI data (Castro et al., 2016). Such approaches allow for deep and non-linear static spatial maps and are compatible with our learning objective. Temporal ICA as used in group ICA (Calhoun et al., 2009), like spatial ICA, does capture some temporal dynamics, but only as summaries produced by a one- to two-stage PCA preprocessing step. These temporal summaries can be analyzed, but they are not learned as part of an end-to-end learning objective. Overall, the strength of RNN-ICA compared to these methods is that the dynamics are learned directly as model parameters, which allows for richer and higher-order temporal analyses, as we showed in the previous section.

Recurrent neural networks do not typically incorporate latent variables, as this requires expensive inference. Versions that incorporate stochastic latent variables exist and are trainable via variational methods, and working approaches for sequential data have been demonstrated (Chung et al., 2015). However, these require complex inference, which introduces variance into learning and may make training with fMRI data challenging. Our method instead incorporates concepts from noiseless ICA, which reduces inference to the inverse of a generative transformation. The consequence is that the temporal analyses are relatively simple, relying only on the tractable computation of the Jacobian of the component conditional densities given the activations.
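As a rough, hypothetical illustration of the points above (noiseless ICA, a linear unmixing matrix, logistic sources, and an RNN carrying the temporal dynamics), the sketch below shows how such an objective could be computed in PyTorch. The class name RnnIca, the choice of a GRU, the unit-scale logistic conditional density, and all hyperparameters are assumptions made for this example, not the paper's implementation.

```python
# Illustrative sketch of an RNN-ICA-style objective (not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RnnIca(nn.Module):
    """Linear unmixing (noiseless ICA) plus an RNN prior over the source dynamics."""
    def __init__(self, n_components, hidden_size=64):
        super().__init__()
        # Unmixing matrix W (sources s_t = W x_t); initialized near the identity.
        self.W = nn.Parameter(torch.eye(n_components)
                              + 0.01 * torch.randn(n_components, n_components))
        # RNN that carries the temporal dynamics of the sources.
        self.rnn = nn.GRU(n_components, hidden_size, batch_first=True)
        # Maps the hidden state to a per-component location of a logistic density.
        self.to_loc = nn.Linear(hidden_size, n_components)

    def log_likelihood(self, x):
        # x: (batch, time, n_components) preprocessed fMRI time courses.
        b, t, d = x.shape
        s = x @ self.W.t()                                   # s_t = W x_t
        # Condition each s_t on the past sources s_{<t} through the RNN state h_t.
        s_prev = torch.cat([torch.zeros_like(s[:, :1]), s[:, :-1]], dim=1)
        h, _ = self.rnn(s_prev)
        z = s - self.to_loc(h)
        # Unit-scale logistic log-density: log p(z) = -z - 2 * softplus(-z).
        log_p_s = (-z - 2.0 * F.softplus(-z)).sum(dim=(1, 2))
        # Change of variables contributes log|det W| once per time step.
        _, logabsdet = torch.linalg.slogdet(self.W)
        return log_p_s + t * logabsdet                       # shape: (batch,)

# Toy usage: 8 "subjects", 150 time points, 20 components.
model = RnnIca(n_components=20)
x = torch.randn(8, 150, 20)
loss = -model.log_likelihood(x).mean()   # maximize likelihood by minimizing this
loss.backward()
```

Because the sources are a deterministic, invertible function of the data (s_t = W x_t), inference reduces to this linear map; this is the sense in which a noiseless-ICA formulation avoids the variational machinery needed by stochastic latent-variable RNNs.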
[ "22019879", "7584893", "8524021", "23231989", "11284046", "11559959", "18438867", "19059344", "26891483", "8812068", "25161896", "16945915", "22178299", "28232797", "24680869", "9377276", "10946390", "18082428", "18602482", "28578129", "25364771", "25191215", "23747961", "19620724", "21391254", "22743197", "29075569", "19896537" ]
[ { "pmid": "22019879", "title": "Capturing inter-subject variability with group independent component analysis of fMRI data: a simulation study.", "abstract": "A key challenge in functional neuroimaging is the meaningful combination of results across subjects. Even in a sample of healthy participants, brain morphology and functional organization exhibit considerable variability, such that no two individuals have the same neural activation at the same location in response to the same stimulus. This inter-subject variability limits inferences at the group-level as average activation patterns may fail to represent the patterns seen in individuals. A promising approach to multi-subject analysis is group independent component analysis (GICA), which identifies group components and reconstructs activations at the individual level. GICA has gained considerable popularity, particularly in studies where temporal response models cannot be specified. However, a comprehensive understanding of the performance of GICA under realistic conditions of inter-subject variability is lacking. In this study we use simulated functional magnetic resonance imaging (fMRI) data to determine the capabilities and limitations of GICA under conditions of spatial, temporal, and amplitude variability. Simulations, generated with the SimTB toolbox, address questions that commonly arise in GICA studies, such as: (1) How well can individual subject activations be estimated and when will spatial variability preclude estimation? (2) Why does component splitting occur and how is it affected by model order? (3) How should we analyze component features to maximize sensitivity to intersubject differences? Overall, our results indicate an excellent capability of GICA to capture between-subject differences and we make a number of recommendations regarding analytic choices for application to functional imaging data." }, { "pmid": "7584893", "title": "An information-maximization approach to blind separation and blind deconvolution.", "abstract": "We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). The nonlinearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalization of principal components analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to 10 speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximization provides a unifying framework for problems in \"blind\" signal processing." 
}, { "pmid": "8524021", "title": "Functional connectivity in the motor cortex of resting human brain using echo-planar MRI.", "abstract": "An MRI time course of 512 echo-planar images (EPI) in resting human brain obtained every 250 ms reveals fluctuations in signal intensity in each pixel that have a physiologic origin. Regions of the sensorimotor cortex that were activated secondary to hand movement were identified using functional MRI methodology (FMRI). Time courses of low frequency (< 0.1 Hz) fluctuations in resting brain were observed to have a high degree of temporal correlation (P < 10(-3)) within these regions and also with time courses in several other regions that can be associated with motor function. It is concluded that correlation of low frequency fluctuations, which may arise from fluctuations in blood oxygenation or flow, is a manifestation of functional connectivity of the brain." }, { "pmid": "23231989", "title": "Multisubject independent component analysis of fMRI: a decade of intrinsic networks, default mode, and neurodiagnostic discovery.", "abstract": "Since the discovery of functional connectivity in fMRI data (i.e., temporal correlations between spatially distinct regions of the brain) there has been a considerable amount of work in this field. One important focus has been on the analysis of brain connectivity using the concept of networks instead of regions. Approximately ten years ago, two important research areas grew out of this concept. First, a network proposed to be \"a default mode of brain function\" since dubbed the default mode network was proposed by Raichle. Secondly, multisubject or group independent component analysis (ICA) provided a data-driven approach to study properties of brain networks, including the default mode network. In this paper, we provide a focused review of how ICA has contributed to the study of intrinsic networks. We discuss some methodological considerations for group ICA and highlight multiple analytic approaches for studying brain networks. We also show examples of some of the differences observed in the default mode and resting networks in the diseased brain. In summary, we are in exciting times and still just beginning to reap the benefits of the richness of functional brain networks as well as available analytic approaches." }, { "pmid": "11284046", "title": "Spatial and temporal independent component analysis of functional MRI data containing a pair of task-related waveforms.", "abstract": "Independent component analysis (ICA) is a technique that attempts to separate data into maximally independent groups. Achieving maximal independence in space or time yields two varieties of ICA meaningful for functional MRI (fMRI) applications: spatial ICA (SICA) and temporal ICA (TICA). SICA has so far dominated the application of ICA to fMRI. The objective of these experiments was to study ICA with two predictable components present and evaluate the importance of the underlying independence assumption in the application of ICA. Four novel visual activation paradigms were designed, each consisting of two spatiotemporal components that were either spatially dependent, temporally dependent, both spatially and temporally dependent, or spatially and temporally uncorrelated, respectively. Simulated data were generated and fMRI data from six subjects were acquired using these paradigms. Data from each paradigm were analyzed with regression analysis in order to determine if the signal was occurring as expected. 
Spatial and temporal ICA were then applied to these data, with the general result that ICA found components only where expected, e.g., S(T)ICA \"failed\" (i.e., yielded independent components unrelated to the \"self-evident\" components) for paradigms that were spatially (temporally) dependent, and \"worked\" otherwise. Regression analysis proved a useful \"check\" for these data, however strong hypotheses will not always be available, and a strength of ICA is that it can characterize data without making specific modeling assumptions. We report a careful examination of some of the assumptions behind ICA methodologies, provide examples of when applying ICA would provide difficult-to-interpret results, and offer suggestions for applying ICA to fMRI data especially when more than one task-related component is present in the data." }, { "pmid": "11559959", "title": "A method for making group inferences from functional MRI data using independent component analysis.", "abstract": "Independent component analysis (ICA) is a promising analysis method that is being increasingly applied to fMRI data. A principal advantage of this approach is its applicability to cognitive paradigms for which detailed models of brain activity are not available. Independent component analysis has been successfully utilized to analyze single-subject fMRI data sets, and an extension of this work would be to provide for group inferences. However, unlike univariate methods (e.g., regression analysis, Kolmogorov-Smirnov statistics), ICA does not naturally generalize to a method suitable for drawing inferences about groups of subjects. We introduce a novel approach for drawing group inferences using ICA of fMRI data, and present its application to a simple visual paradigm that alternately stimulates the left or right visual field. Our group ICA analysis revealed task-related components in left and right visual cortex, a transiently task-related component in bilateral occipital/parietal cortex, and a non-task-related component in bilateral visual association cortex. We address issues involved in the use of ICA as an fMRI analysis method such as: (1) How many components should be calculated? (2) How are these components to be combined across subjects? (3) How should the final results be thresholded and/or presented? We show that the methodology we present provides answers to these questions and lay out a process for making group inferences from fMRI data using independent component analysis." }, { "pmid": "18438867", "title": "Modulation of temporally coherent brain networks estimated using ICA at rest and during cognitive tasks.", "abstract": "Brain regions which exhibit temporally coherent fluctuations, have been increasingly studied using functional magnetic resonance imaging (fMRI). Such networks are often identified in the context of an fMRI scan collected during rest (and thus are called \"resting state networks\"); however, they are also present during (and modulated by) the performance of a cognitive task. In this article, we will refer to such networks as temporally coherent networks (TCNs). Although there is still some debate over the physiological source of these fluctuations, TCNs are being studied in a variety of ways. Recent studies have examined ways TCNs can be used to identify patterns associated with various brain disorders (e.g. schizophrenia, autism or Alzheimer's disease). Independent component analysis (ICA) is one method being used to identify TCNs. 
ICA is a data driven approach which is especially useful for decomposing activation during complex cognitive tasks where multiple operations occur simultaneously. In this article we review recent TCN studies with emphasis on those that use ICA. We also present new results showing that TCNs are robust, and can be consistently identified at rest and during performance of a cognitive task in healthy individuals and in patients with schizophrenia. In addition, multiple TCNs show temporal and spatial modulation during the cognitive task versus rest. In summary, TCNs show considerable promise as potential imaging biological markers of brain diseases, though each network needs to be studied in more detail." }, { "pmid": "19059344", "title": "A review of group ICA for fMRI data and ICA for joint inference of imaging, genetic, and ERP data.", "abstract": "Independent component analysis (ICA) has become an increasingly utilized approach for analyzing brain imaging data. In contrast to the widely used general linear model (GLM) that requires the user to parameterize the data (e.g. the brain's response to stimuli), ICA, by relying upon a general assumption of independence, allows the user to be agnostic regarding the exact form of the response. In addition, ICA is intrinsically a multivariate approach, and hence each component provides a grouping of brain activity into regions that share the same response pattern thus providing a natural measure of functional connectivity. There are a wide variety of ICA approaches that have been proposed, in this paper we focus upon two distinct methods. The first part of this paper reviews the use of ICA for making group inferences from fMRI data. We provide an overview of current approaches for utilizing ICA to make group inferences with a focus upon the group ICA approach implemented in the GIFT software. In the next part of this paper, we provide an overview of the use of ICA to combine or fuse multimodal data. ICA has proven particularly useful for data fusion of multiple tasks or data modalities such as single nucleotide polymorphism (SNP) data or event-related potentials. As demonstrated by a number of examples in this paper, ICA is a powerful and versatile data-driven approach for studying the brain." }, { "pmid": "26891483", "title": "Deep Independence Network Analysis of Structural Brain Imaging: Application to Schizophrenia.", "abstract": "Linear independent component analysis (ICA) is a standard signal processing technique that has been extensively used on neuroimaging data to detect brain networks with coherent brain activity (functional MRI) or covarying structural patterns (structural MRI). However, its formulation assumes that the measured brain signals are generated by a linear mixture of the underlying brain networks and this assumption limits its ability to detect the inherent nonlinear nature of brain interactions. In this paper, we introduce nonlinear independent component estimation (NICE) to structural MRI data to detect abnormal patterns of gray matter concentration in schizophrenia patients. For this biomedical application, we further addressed the issue of model regularization of nonlinear ICA by performing dimensionality reduction prior to NICE, together with an appropriate control of the complexity of the model and the usage of a proper approximation of the probability distribution functions of the estimated components. 
We show that our results are consistent with previous findings in the literature, but we also demonstrate that the incorporation of nonlinear associations in the data enables the detection of spatial patterns that are not identified by linear ICA. Specifically, we show networks including basal ganglia, cerebellum and thalamus that show significant differences in patients versus controls, some of which show distinct nonlinear patterns." }, { "pmid": "8812068", "title": "AFNI: software for analysis and visualization of functional magnetic resonance neuroimages.", "abstract": "A package of computer programs for analysis and visualization of three-dimensional human brain functional magnetic resonance imaging (FMRI) results is described. The software can color overlay neural activation maps onto higher resolution anatomical scans. Slices in each cardinal plane can be viewed simultaneously. Manual placement of markers on anatomical landmarks allows transformation of anatomical and functional scans into stereotaxic (Talairach-Tournoux) coordinates. The techniques for automatically generating transformed functional data sets from manually labeled anatomical data sets are described. Facilities are provided for several types of statistical analyses of multiple 3D functional data sets. The programs are written in ANSI C and Motif 1.2 to run on Unix workstations." }, { "pmid": "25161896", "title": "Dynamic functional connectivity analysis reveals transient states of dysconnectivity in schizophrenia.", "abstract": "Schizophrenia is a psychotic disorder characterized by functional dysconnectivity or abnormal integration between distant brain regions. Recent functional imaging studies have implicated large-scale thalamo-cortical connectivity as being disrupted in patients. However, observed connectivity differences in schizophrenia have been inconsistent between studies, with reports of hyperconnectivity and hypoconnectivity between the same brain regions. Using resting state eyes-closed functional imaging and independent component analysis on a multi-site data that included 151 schizophrenia patients and 163 age- and gender matched healthy controls, we decomposed the functional brain data into 100 components and identified 47 as functionally relevant intrinsic connectivity networks. We subsequently evaluated group differences in functional network connectivity, both in a static sense, computed as the pairwise Pearson correlations between the full network time courses (5.4 minutes in length), and a dynamic sense, computed using sliding windows (44 s in length) and k-means clustering to characterize five discrete functional connectivity states. Static connectivity analysis revealed that compared to healthy controls, patients show significantly stronger connectivity, i.e., hyperconnectivity, between the thalamus and sensory networks (auditory, motor and visual), as well as reduced connectivity (hypoconnectivity) between sensory networks from all modalities. Dynamic analysis suggests that (1), on average, schizophrenia patients spend much less time than healthy controls in states typified by strong, large-scale connectivity, and (2), that abnormal connectivity patterns are more pronounced during these connectivity states. In particular, states exhibiting cortical-subcortical antagonism (anti-correlations) and strong positive connectivity between sensory networks are those that show the group differences of thalamic hyperconnectivity and sensory hypoconnectivity. 
Group differences are weak or absent during other connectivity states. Dynamic analysis also revealed hypoconnectivity between the putamen and sensory networks during the same states of thalamic hyperconnectivity; notably, this finding cannot be observed in the static connectivity analysis. Finally, in post-hoc analyses we observed that the relationships between sub-cortical low frequency power and connectivity with sensory networks is altered in patients, suggesting different functional interactions between sub-cortical nuclei and sensorimotor cortex during specific connectivity states. While important differences between patients with schizophrenia and healthy controls have been identified, one should interpret the results with caution given the history of medication in patients. Taken together, our results support and expand current knowledge regarding dysconnectivity in schizophrenia, and strongly advocate the use of dynamic analyses to better account for and understand functional connectivity differences." }, { "pmid": "16945915", "title": "Consistent resting-state networks across healthy subjects.", "abstract": "Functional MRI (fMRI) can be applied to study the functional connectivity of the human brain. It has been suggested that fluctuations in the blood oxygenation level-dependent (BOLD) signal during rest reflect the neuronal baseline activity of the brain, representing the state of the human brain in the absence of goal-directed neuronal action and external input, and that these slow fluctuations correspond to functionally relevant resting-state networks. Several studies on resting fMRI have been conducted, reporting an apparent similarity between the identified patterns. The spatial consistency of these resting patterns, however, has not yet been evaluated and quantified. In this study, we apply a data analysis approach called tensor probabilistic independent component analysis to resting-state fMRI data to find coherencies that are consistent across subjects and sessions. We characterize and quantify the consistency of these effects by using a bootstrapping approach, and we estimate the BOLD amplitude modulation as well as the voxel-wise cross-subject variation. The analysis found 10 patterns with potential functional relevance, consisting of regions known to be involved in motor function, visual processing, executive functioning, auditory processing, memory, and the so-called default-mode network, each with BOLD signal changes up to 3%. In general, areas with a high mean percentage BOLD signal are consistent and show the least variation around the mean. These findings show that the baseline activity of the brain is consistent across subjects exhibiting significant temporal dynamics, with percentage BOLD signal change comparable with the signal changes found in task-related experiments." }, { "pmid": "22178299", "title": "SimTB, a simulation toolbox for fMRI data under a model of spatiotemporal separability.", "abstract": "We introduce SimTB, a MATLAB toolbox designed to simulate functional magnetic resonance imaging (fMRI) datasets under a model of spatiotemporal separability. The toolbox meets the increasing need of the fMRI community to more comprehensively understand the effects of complex processing strategies by providing a ground truth that estimation methods may be compared against. SimTB captures the fundamental structure of real data, but data generation is fully parameterized and fully controlled by the user, allowing for accurate and precise comparisons. 
The toolbox offers a wealth of options regarding the number and configuration of spatial sources, implementation of experimental paradigms, inclusion of tissue-specific properties, addition of noise and head movement, and much more. A straightforward data generation method and short computation time (3-10 seconds for each dataset) allow a practitioner to simulate and analyze many datasets to potentially understand a problem from many angles. Beginning MATLAB users can use the SimTB graphical user interface (GUI) to design and execute simulations while experienced users can write batch scripts to automate and customize this process. The toolbox is freely available at http://mialab.mrn.org/software together with sample scripts and tutorials." }, { "pmid": "28232797", "title": "Modeling the Dynamics of Human Brain Activity with Recurrent Neural Networks.", "abstract": "Encoding models are used for predicting brain activity in response to sensory stimuli with the objective of elucidating how sensory information is represented in the brain. Encoding models typically comprise a nonlinear transformation of stimuli to features (feature model) and a linear convolution of features to responses (response model). While there has been extensive work on developing better feature models, the work on developing better response models has been rather limited. Here, we investigate the extent to which recurrent neural network models can use their internal memories for nonlinear processing of arbitrary feature sequences to predict feature-evoked response sequences as measured by functional magnetic resonance imaging. We show that the proposed recurrent neural network models can significantly outperform established response models by accurately estimating long-term dependencies that drive hemodynamic responses. The results open a new window into modeling the dynamics of brain activity in response to sensory stimuli." }, { "pmid": "24680869", "title": "Restricted Boltzmann machines for neuroimaging: an application in identifying intrinsic networks.", "abstract": "Matrix factorization models are the current dominant approach for resolving meaningful data-driven features in neuroimaging data. Among them, independent component analysis (ICA) is arguably the most widely used for identifying functional networks, and its success has led to a number of versatile extensions to group and multimodal data. However there are indications that ICA may have reached a limit in flexibility and representational capacity, as the majority of such extensions are case-driven, custom-made solutions that are still contained within the class of mixture models. In this work, we seek out a principled and naturally extensible approach and consider a probabilistic model known as a restricted Boltzmann machine (RBM). An RBM separates linear factors from functional brain imaging data by fitting a probability distribution model to the data. Importantly, the solution can be used as a building block for more complex (deep) models, making it naturally suitable for hierarchical and multimodal extensions that are not easily captured when using linear factorizations alone. We investigate the capability of RBMs to identify intrinsic networks and compare its performance to that of well-known linear mixture models, in particular ICA. Using synthetic and real task fMRI data, we show that RBMs can be used to identify networks and their temporal activations with accuracy that is equal or greater than that of factorization models. 
The demonstrated effectiveness of RBMs supports its use as a building block for deeper models, a significant prospect for future neuroimaging research." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." }, { "pmid": "10946390", "title": "Independent component analysis: algorithms and applications.", "abstract": "A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject." }, { "pmid": "18082428", "title": "A method for functional network connectivity among spatially independent resting-state components in schizophrenia.", "abstract": "Functional connectivity of the brain has been studied by analyzing correlation differences in time courses among seed voxels or regions with other voxels of the brain in healthy individuals as well as in patients with brain disorders. The spatial extent of strongly temporally coherent brain regions co-activated during rest has also been examined using independent component analysis (ICA). However, the weaker temporal relationships among ICA component time courses, which we operationally define as a measure of functional network connectivity (FNC), have not yet been studied. In this study, we propose an approach for evaluating FNC and apply it to functional magnetic resonance imaging (fMRI) data collected from persons with schizophrenia and healthy controls. 
We examined the connectivity and latency among ICA component time courses to test the hypothesis that patients with schizophrenia would show increased functional connectivity and increased lag among resting state networks compared to controls. Resting state fMRI data were collected and the inter-relationships among seven selected resting state networks (identified using group ICA) were evaluated by correlating each subject's ICA time courses with one another. Patients showed higher correlation than controls among most of the dominant resting state networks. Patients also had slightly more variability in functional connectivity than controls. We present a novel approach for quantifying functional connectivity among brain networks identified with spatial ICA. Significant differences between patient and control connectivity in different networks were revealed possibly reflecting deficiencies in cortical processing in patients." }, { "pmid": "18602482", "title": "Hybrid ICA-Bayesian network approach reveals distinct effective connectivity differences in schizophrenia.", "abstract": "We utilized a discrete dynamic Bayesian network (dDBN) approach (Burge, J., Lane, T., Link, H., Qiu, S., Clark, V.P., 2007. Discrete dynamic Bayesian network analysis of fMRI data. Hum Brain Mapp.) to determine differences in brain regions between patients with schizophrenia and healthy controls on a measure of effective connectivity, termed the approximate conditional likelihood score (ACL) (Burge, J., Lane, T., 2005. Learning Class-Discriminative Dynamic Bayesian Networks. Proceedings of the International Conference on Machine Learning, Bonn, Germany, pp. 97-104.). The ACL score represents a class-discriminative measure of effective connectivity by measuring the relative likelihood of the correlation between brain regions in one group versus another. The algorithm is capable of finding non-linear relationships between brain regions because it uses discrete rather than continuous values and attempts to model temporal relationships with a first-order Markov and stationary assumption constraint (Papoulis, A., 1991. Probability, random variables, and stochastic processes. McGraw-Hill, New York.). Since Bayesian networks are overly sensitive to noisy data, we introduced an independent component analysis (ICA) filtering approach that attempted to reduce the noise found in fMRI data by unmixing the raw datasets into a set of independent spatial component maps. Components that represented noise were removed and the remaining components reconstructed into the dimensions of the original fMRI datasets. We applied the dDBN algorithm to a group of 35 patients with schizophrenia and 35 matched healthy controls using an ICA filtered and unfiltered approach. We determined that filtering the data significantly improved the magnitude of the ACL score. Patients showed the greatest ACL scores in several regions, most markedly the cerebellar vermis and hemispheres. Our findings suggest that schizophrenia patients exhibit weaker connectivity than healthy controls in multiple regions, including bilateral temporal, frontal, and cerebellar regions during an auditory paradigm." }, { "pmid": "28578129", "title": "Assessing dynamic functional connectivity in heterogeneous samples.", "abstract": "Several methods have been developed to measure dynamic functional connectivity (dFC) in fMRI data. 
These methods are often based on a sliding-window analysis, which aims to capture how the brain's functional organization varies over the course of a scan. The aim of many studies is to compare dFC across groups, such as younger versus older people. However, spurious group differences in measured dFC may be caused by other sources of heterogeneity between people. For example, the shape of the haemodynamic response function (HRF) and levels of measurement noise have been found to vary with age. We use a generic simulation framework for fMRI data to investigate the effect of such heterogeneity on estimates of dFC. Our findings show that, despite no differences in true dFC, individual differences in measured dFC can result from other (non-dynamic) features of the data, such as differences in neural autocorrelation, HRF shape, connectivity strength and measurement noise. We also find that common dFC methods such as k-means and multilayer modularity approaches can detect spurious group differences in dynamic connectivity due to inappropriate setting of their hyperparameters. fMRI studies therefore need to consider alternative sources of heterogeneity across individuals before concluding differences in dFC." }, { "pmid": "25364771", "title": "BioImage Suite: An integrated medical image analysis suite: An update.", "abstract": "BioImage Suite is an NIH-supported medical image analysis software suite developed at Yale. It leverages both the Visualization Toolkit (VTK) and the Insight Toolkit (ITK) and it includes many additional algorithms for image analysis especially in the areas of segmentation, registration, diffusion weighted image processing and fMRI analysis. BioImage Suite has a user-friendly user interface developed in the Tcl scripting language. A final beta version is freely available for download." }, { "pmid": "25191215", "title": "Deep learning for neuroimaging: a validation study.", "abstract": "Deep learning methods have recently made notable advances in the tasks of classification and representation learning. These tasks are important for brain imaging and neuroscience discovery, making the methods attractive for porting to a neuroimager's toolbox. Success of these methods is, in part, explained by the flexibility of deep learning models. However, this flexibility makes the process of porting to new areas a difficult parameter optimization problem. In this work we demonstrate our results (and feasible parameter ranges) in application of deep learning methods to structural and functional brain imaging data. These methods include deep belief networks and their building block the restricted Boltzmann machine. We also describe a novel constraint-based approach to visualizing high dimensional data. We use it to analyze the effect of parameter choices on data transformations. Our results show that deep learning methods are able to learn physiologically important representations and detect latent relations in neuroimaging data." }, { "pmid": "23747961", "title": "Groupwise whole-brain parcellation from resting-state fMRI data for network node identification.", "abstract": "In this paper, we present a groupwise graph-theory-based parcellation approach to define nodes for network analysis. The application of network-theory-based analysis to extend the utility of functional MRI has recently received increased attention. Such analyses require first and foremost a reasonable definition of a set of nodes as input to the network analysis. 
To date many applications have used existing atlases based on cytoarchitecture, task-based fMRI activations, or anatomic delineations. A potential pitfall in using such atlases is that the mean timecourse of a node may not represent any of the constituent timecourses if different functional areas are included within a single node. The proposed approach involves a groupwise optimization that ensures functional homogeneity within each subunit and that these definitions are consistent at the group level. Parcellation reproducibility of each subunit is computed across multiple groups of healthy volunteers and is demonstrated to be high. Issues related to the selection of appropriate number of nodes in the brain are considered. Within typical parameters of fMRI resolution, parcellation results are shown for a total of 100, 200, and 300 subunits. Such parcellations may ultimately serve as a functional atlas for fMRI and as such three atlases at the 100-, 200- and 300-parcellation levels derived from 79 healthy normal volunteers are made freely available online along with tools to interface this atlas with SPM, BioImage Suite and other analysis packages." }, { "pmid": "19620724", "title": "Correspondence of the brain's functional architecture during activation and rest.", "abstract": "Neural connections, providing the substrate for functional networks, exist whether or not they are functionally active at any given moment. However, it is not known to what extent brain regions are continuously interacting when the brain is \"at rest.\" In this work, we identify the major explicit activation networks by carrying out an image-based activation network analysis of thousands of separate activation maps derived from the BrainMap database of functional imaging studies, involving nearly 30,000 human subjects. Independently, we extract the major covarying networks in the resting brain, as imaged with functional magnetic resonance imaging in 36 subjects at rest. The sets of major brain networks, and their decompositions into subnetworks, show close correspondence between the independent analyses of resting and activation brain dynamics. We conclude that the full repertoire of functional networks utilized by the brain in action is continuously and dynamically \"active\" even when at \"rest.\"" }, { "pmid": "21391254", "title": "Lateral differences in the default mode network in healthy controls and patients with schizophrenia.", "abstract": "We investigate lateral differences in the intrinsic fluctuations comprising the default mode network (DMN) for healthy controls (HCs) and patients with schizophrenia (SZ), both during rest and during an auditory oddball (AOD) task. Our motivation for this study comes from multiple prior hypotheses of disturbed hemispheric asymmetry in SZ and more recently observed lateral abnormalities in the DMN for SZ during AOD. We hypothesized that significant lateral differences would be found in HCs during both rest and AOD, and SZ would show differences from HCs due to hemispheric dysfunction. Our study examined 28 HCs and 28 outpatients with schizophrenia. The scans were conducted on a Siemens Allegra 3T dedicated head scanner. There were numerous crossgroup lateral fluctuations that were found in both AOD and rest. During the resting state, within-group results showed the largest functional asymmetries in the inferior parietal lobule for HCs, whereas functional asymmetries were seen in posterior cingulate gyrus for SZ. 
Comparing asymmetries between groups, in resting state and/or performing AOD, areas showing significant differences between group (HC > SZ) included inferior parietal lobule and posterior cingulate. Our results support the hypothesis that schizophrenia is characterized by abnormal hemispheric asymmetry. Secondly, the number of similarities in crossgroup AOD and rest data suggests that neurological disruptions in SZ that may cause evoked symptoms are also detectable in SZ during resting conditions. Furthermore, the results suggest a reduction in activity in language-related areas for SZ compared to HCs during rest." }, { "pmid": "22743197", "title": "Automatic sleep staging using fMRI functional connectivity data.", "abstract": "Recent EEG-fMRI studies have shown that different stages of sleep are associated with changes in both brain activity and functional connectivity. These results raise the concern that lack of vigilance measures in resting state experiments may introduce confounds and contamination due to subjects falling asleep inside the scanner. In this study we present a method to perform automatic sleep staging using only fMRI functional connectivity data, thus providing vigilance information while circumventing the technical demands of simultaneous recording of EEG, the gold standard for sleep scoring. The features to classify are the linear correlation values between 20 cortical regions identified using independent component analysis and two regions in the bilateral thalamus. The method is based on the construction of binary support vector machine classifiers discriminating between all pairs of sleep stages and the subsequent combination of them into multiclass classifiers. Different multiclass schemes and kernels are explored. After parameter optimization through 5-fold cross validation we achieve accuracies over 0.8 in the binary problem with functional connectivities obtained for epochs as short as 60s. The multiclass classifier generalizes well to two independent datasets (accuracies over 0.8 in both sets) and can be efficiently applied to any dataset using a sliding window procedure. Modeling vigilance states in resting state analysis will avoid confounded inferences and facilitate the study of vigilance states themselves. We thus consider the method introduced in this study a novel and practical contribution for monitoring vigilance levels inside an MRI scanner without the need of extra recordings other than fMRI BOLD signals." }, { "pmid": "29075569", "title": "The effect of preprocessing in dynamic functional network connectivity used to classify mild traumatic brain injury.", "abstract": "INTRODUCTION\nDynamic functional network connectivity (dFNC), derived from magnetic resonance imaging (fMRI), is an important technique in the search for biomarkers of brain diseases such as mild traumatic brain injury (mTBI). At the individual level, mTBI can affect cognitive functions and change personality traits. Previous research aimed at detecting significant changes in the dFNC of mTBI subjects. However, one of the main concerns in dFNC analysis is the appropriateness of methods used to correct for subject movement. In this work, we focus on the effect that rearranging movement correction at different points of the processing pipeline has in dFNC analysis utilizing mTBI data.\n\n\nMETHODS\nThe sample cohort consists of 50 mTBI patients and matched healthy controls. A 5-min resting-state run was completed by each participant. 
Data were preprocessed using different pipeline alternatives varying with the place where motion-related variance was removed. In all pipelines, group-independent component analysis (gICA) followed by dFNC analysis was performed. Additional tests were performed varying the detection of temporal spikes, the number of gICA components, and the sliding-window size. A linear support vector machine was used to test how each pipeline affects classification accuracy.\n\n\nRESULTS\nResults suggest that correction for motion variance before spatial smoothing, but leaving correction for spiky time courses after gICA produced the best mean classification performance. The number of gICA components and the sliding-window size were also important in determining classification performance. Variance in spikes correction affected some pipelines more than others with fewer significant differences than the other parameters.\n\n\nCONCLUSION\nThe sequence of preprocessing steps motion regression, smoothing, gICA, and despiking produced data most suitable for differentiating mTBI from healthy subjects. However, the selection of optimal preprocessing parameters strongly affected the final results." }, { "pmid": "19896537", "title": "Reliable intrinsic connectivity networks: test-retest evaluation using ICA and dual regression approach.", "abstract": "Functional connectivity analyses of resting-state fMRI data are rapidly emerging as highly efficient and powerful tools for in vivo mapping of functional networks in the brain, referred to as intrinsic connectivity networks (ICNs). Despite a burgeoning literature, researchers continue to struggle with the challenge of defining computationally efficient and reliable approaches for identifying and characterizing ICNs. Independent component analysis (ICA) has emerged as a powerful tool for exploring ICNs in both healthy and clinical populations. In particular, temporal concatenation group ICA (TC-GICA) coupled with a back-reconstruction step produces participant-level resting state functional connectivity maps for each group-level component. The present work systematically evaluated the test-retest reliability of TC-GICA derived RSFC measures over the short-term (<45 min) and long-term (5-16 months). Additionally, to investigate the degree to which the components revealed by TC-GICA are detectable via single-session ICA, we investigated the reproducibility of TC-GICA findings. First, we found moderate-to-high short- and long-term test-retest reliability for ICNs derived by combining TC-GICA and dual regression. Exceptions to this finding were limited to physiological- and imaging-related artifacts. Second, our reproducibility analyses revealed notable limitations for template matching procedures to accurately detect TC-GICA based components at the individual scan level. Third, we found that TC-GICA component's reliability and reproducibility ranks are highly consistent. In summary, TC-GICA combined with dual regression is an effective and reliable approach to exploratory analyses of resting state fMRI data." } ]
IEEE Journal of Translational Engineering in Health and Medicine
30310761
PMC6168182
10.1109/JTEHM.2018.2863366
Improved Detection of Lung Fluid With Standardized Acoustic Stimulation of the Chest
Accumulation of excess air and water in the lungs leads to breakdown of respiratory function and is a common cause of patient hospitalization. Compact and non-invasive methods to detect the changes in lung fluid accumulation can allow physicians to assess patients’ respiratory conditions. In this paper, an acoustic transducer and a digital stethoscope system are proposed as a targeted solution for this clinical need. Alterations in the structure of the lungs lead to measurable changes which can be used to assess lung pathology. We standardize this procedure by sending a controlled signal through the lungs of six healthy subjects and six patients with lung disease. We extract mel-frequency cepstral coefficients and spectroid audio features, commonly used in classification for music retrieval, to characterize subjects as healthy or diseased. Using the K-nearest neighbors algorithm, we demonstrate 91.7% accuracy in distinguishing between healthy subjects and patients with lung pathology.
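The abstract above describes driving the chest with a controlled acoustic stimulus, and the related-work text that follows places the excitation roughly in the 50–1000 Hz band. The following sketch is purely illustrative and is not the authors' implementation: it generates a swept-sine (chirp) stimulus over that band with SciPy; the sampling rate, duration, amplitude scaling, and output file name are assumptions.

```python
# Illustrative sketch only: a 50-1000 Hz swept-sine (chirp) stimulus of the kind
# described in the text. FS, DURATION, the amplitude scaling, and the file name
# are assumptions, not values reported by the authors.
import numpy as np
from scipy.io import wavfile
from scipy.signal import chirp

FS = 8000          # sampling rate in Hz (assumed)
DURATION = 5.0     # stimulus length in seconds (assumed)

t = np.linspace(0.0, DURATION, int(FS * DURATION), endpoint=False)
stimulus = chirp(t, f0=50.0, f1=1000.0, t1=DURATION, method="linear")

# Write a 16-bit WAV file that could be played through a transducer on the chest.
wavfile.write("chest_chirp.wav", FS, (0.8 * stimulus * 32767).astype(np.int16))
```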
A.Related WorkThe development of computerized lung sound analysis has led to several studies investigating classification of breath sounds as healthy or pathological [10]–[14]. Lung sounds are heard over the chest during inspiration and expiration. They are non-stationary and non-linear signals, requiring a combined time and frequency approach for accurate analysis [15]. Processing typically involves recording the breath sounds with an acoustic sensor, extracting audio features from the recordings, and feeding these features into a classifier. Lung sounds are typically recorded using contact microphones, such as an electronic stethoscope. Classification features are commonly based on autoregressive (AR) modelling, mel-frequency cepstral coefficients (MFCC), spectral energy, and the wavelet transform [16]. For classification, artificial neural networks (ANN) and K-nearest neighbors (KNN) are commonly used. Previous work utilizing KNN and ANN classification to distinguish between healthy and pathological lung sounds reported classification accuracies ranging from 69.7% to 92.4% [12], [14], [17], [18]. Although breath sound analysis shows potential for accurate classification, the large range in accuracy reported in prior work motivates the need for a standardized approach. Changes observed in recorded breath sounds could result either from differences in the structure of the respiratory system or from intersubject and intrasubject variability between breath cycles.Compared to lung sound analysis, the study of how fixed external sounds travel through the lungs offers much room for development. A 2014 study reported sending a chirp in the range of 50–400 Hz into the chest using a transducer, and demonstrated measurable changes in sound transmission for air accumulation in the chest [5]. Previous work on this device investigated changes in sound transmission during lung fluid accumulation due to pneumonia, sending a chirp into the chest in the range of 50–500 Hz using a surface exciter transducer [19]. The present study extends the upper limit of the frequency range from 500 Hz to 1000 Hz, as our signal-to-noise ratio in this range is sufficiently high for chirp detection and analysis. To our knowledge, no work has yet been done in classifying patients with pulmonary pathology based on a fixed percussive input acoustic signal.
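As a concrete illustration of the generic pipeline surveyed above (record sounds, extract MFCC and spectral features, classify with KNN), the sketch below shows one minimal Python version using librosa and scikit-learn. The file names, sampling rate, number of MFCCs, and choice of k are assumptions for illustration and do not reproduce the configuration used in this or the cited studies.

```python
# Minimal sketch of a feature-extraction-plus-KNN pipeline (not the authors' code).
# File names, sampling rate, n_mfcc, and k are illustrative assumptions.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def extract_features(wav_path, sr=4000, n_mfcc=13):
    """Return a fixed-length vector: mean MFCCs plus mean spectral centroid."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # shape (n_mfcc, frames)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # shape (1, frames)
    return np.concatenate([mfcc.mean(axis=1), centroid.mean(axis=1)])

# Hypothetical recordings: 0 = healthy, 1 = lung pathology.
paths = ["healthy_01.wav", "healthy_02.wav", "healthy_03.wav",
         "patient_01.wav", "patient_02.wav", "patient_03.wav"]
labels = np.array([0, 0, 0, 1, 1, 1])
X = np.vstack([extract_features(p) for p in paths])

knn = KNeighborsClassifier(n_neighbors=3)
acc = cross_val_score(knn, X, labels, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```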
[ "25001497", "10335685", "20946643", "25592455", "21712152", "15265722", "12578066", "16723034", "25234130", "6826415", "2066121", "7204204", "19631934" ]
[ { "pmid": "25001497", "title": "Sound transmission in the chest under surface excitation: an experimental and computational study with diagnostic applications.", "abstract": "Chest physical examination often includes performing chest percussion, which involves introducing sound stimulus to the chest wall and detecting an audible change. This approach relies on observations that underlying acoustic transmission, coupling, and resonance patterns can be altered by chest structure changes due to pathologies. More accurate detection and quantification of these acoustic alterations may provide further useful diagnostic information. To elucidate the physical processes involved, a realistic computer model of sound transmission in the chest is helpful. In the present study, a computational model was developed and validated by comparing its predictions with results from animal and human experiments which involved applying acoustic excitation to the anterior chest, while detecting skin vibrations at the posterior chest. To investigate the effect of pathology on sound transmission, the computational model was used to simulate the effects of pneumothorax on sounds introduced at the anterior chest and detected at the posterior. Model predictions and experimental results showed similar trends. The model also predicted wave patterns inside the chest, which may be used to assess results of elastography measurements. Future animal and human tests may expand the predictive power of the model to include acoustic behavior for a wider range of pulmonary conditions." }, { "pmid": "10335685", "title": "Diagnosing pneumonia by physical examination: relevant or relic?", "abstract": "BACKGROUND\nThe reliability of chest physical examination and the degree of agreement among examiners in diagnosing pneumonia based on these findings are largely unknown.\n\n\nOBJECTIVES\nTo determine the accuracy of various physical examination maneuvers in diagnosing pneumonia and to compare the interobserver reliability of the maneuvers among 3 examiners.\n\n\nMETHODS\nFifty-two male patients presenting to the emergency department of a university-affiliated Veterans Affairs medical center with symptoms of lower respiratory tract infection (cough and change in sputum) were prospectively examined. A comprehensive lung physical examination was performed sequentially by 3 physicians who were blind to clinical history, laboratory findings, and x-ray results. Examination findings by lung site and whether the examiner diagnosed pneumonia were recorded on a standard form. Chest x-ray films were read by a radiologist.\n\n\nRESULTS\nTwenty-four patients had pneumonia confirmed by chest x-ray films. Twenty-eight patients did not have pneumonia. Abnormal lung sounds were common in both groups; the most frequently detected were rales in the upright seated position and bronchial breath sounds. Relatively high agreement among examiners (kappa approximately 0.5) occurred for rales in the lateral decubitus position and for wheezes. The 3 examiners' clinical diagnosis of pneumonia had a sensitivity of 47% to 69% and specificity of 58% to 75%.\n\n\nCONCLUSIONS\nThe degree of interobserver agreement was highly variable for different physical examination findings. The most valuable examination maneuvers in detecting pneumonia were unilateral rales and rales in the lateral decubitus position. The traditional chest physical examination is not sufficiently accurate on its own to confirm or exclude the diagnosis of pneumonia." 
}, { "pmid": "20946643", "title": "Turning a blind eye: the mobilization of radiology services in resource-poor regions.", "abstract": "While primary care, obstetrical, and surgical services have started to expand in the world's poorest regions, there is only sparse literature on the essential support systems that are required to make these operations function. Diagnostic imaging is critical to effective rural healthcare delivery, yet it has been severely neglected by the academic, public, and private sectors. Currently, a large portion of the world's population lacks access to any form of diagnostic imaging. In this paper we argue that two primary imaging modalities--diagnostic ultrasound and X-Ray--are ideal for rural healthcare services and should be scaled-up in a rapid and standardized manner. Such machines, if designed for resource-poor settings, should a) be robust in harsh environmental conditions, b) function reliably in environments with unstable electricity, c) minimize radiation dangers to staff and patients, d) be operable by non-specialist providers, and e) produce high-quality images required for accurate diagnosis. Few manufacturers are producing ultrasound and X-Ray machines that meet the specifications needed for rural healthcare delivery in resource-poor regions. A coordinated effort is required to create demand sufficient for manufacturers to produce the desired machines and to ensure that the programs operating them are safe, effective, and financially feasible." }, { "pmid": "25592455", "title": "Ultrasound of the pleurae and lungs.", "abstract": "The value of ultrasound techniques in examination of the pleurae and lungs has been underestimated over recent decades. One explanation for this is the assumption that the ventilated lungs and the bones of the rib cage constitute impermeable obstacles to ultrasound. However, a variety of pathologies of the chest wall, pleurae and lungs result in altered tissue composition, providing substantially increased access and visibility for ultrasound examination. It is a great benefit that the pleurae and lungs can be non-invasively imaged repeatedly without discomfort or radiation exposure for the patient. Ultrasound is thus particularly valuable in follow-up of disease, differential diagnosis and detection of complications. Diagnostic and therapeutic interventions in patients with pathologic pleural and pulmonary findings can tolerably be performed under real-time ultrasound guidance. In this article, an updated overview is given presenting not only the benefits and indications, but also the limitations of pleural and pulmonary ultrasound." }, { "pmid": "21712152", "title": "Adventitious sounds identification and extraction using temporal-spectral dominance-based features.", "abstract": "Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system by the presence of adventitious sounds (ASs). Although many studies have addressed the problem of pathological RS classification, only a limited number of scientific works have focused on the analysis of the evolution of symptom-related signal components in joint time-frequency (TF) plane. This paper proposes a new signal identification and extraction method for various ASs based on instantaneous frequency (IF) analysis. 
The presented TF decomposition method produces a noise-resistant high definition TF representation of RS signals as compared to the conventional linear TF analysis methods, yet preserving the low computational complexity as compared to those quadratic TF analysis methods. The discarded phase information in conventional spectrogram has been adopted for the estimation of IF and group delay, and a temporal-spectral dominance spectrogram has subsequently been constructed by investigating the TF spreads of the computed time-corrected IF components. The proposed dominance measure enables the extraction of signal components correspond to ASs from noisy RS signal at high noise level. A new set of TF features has also been proposed to quantify the shapes of the obtained TF contours, and therefore strongly, enhances the identification of multicomponents signals such as polyphonic wheezes. An overall accuracy of 92.4±2.9% for the classification of real RS recordings shows the promising performance of the presented method." }, { "pmid": "15265722", "title": "Neural classification of lung sounds using wavelet coefficients.", "abstract": "Electronic auscultation is an efficient technique to evaluate the condition of respiratory system using lung sounds. As lung sound signals are non-stationary, the conventional method of frequency analysis is not highly successful in diagnostic classification. This paper deals with a novel method of analysis of lung sound signals using wavelet transform, and classification using artificial neural network (ANN). Lung sound signals were decomposed into the frequency subbands using wavelet transform and a set of statistical features was extracted from the subbands to represent the distribution of wavelet coefficients. An ANN based system, trained using the resilient back propagation algorithm, was implemented to classify the lung sounds to one of the six categories: normal, wheeze, crackle, squawk, stridor, or rhonchus." }, { "pmid": "12578066", "title": "Representation and classification of breath sounds recorded in an intensive care setting using neural networks.", "abstract": "OBJECTIVE\nDevelop and test methods for representing and classifying breath sounds in an intensive care setting.\n\n\nMETHODS\nBreath sounds were recorded over the bronchial regions of the chest. The breath sounds were represented by their averaged power spectral density, summed into feature vectors across the frequency spectrum from 0 to 800 Hertz. The sounds were segmented by individual breath and each breath was divided into inspiratory and expiratory segments. Sounds were classified as normal or abnormal. Different back-propagation neural network configurations were evaluated. The number of input features, hidden units, and hidden layers were varied.\n\n\nRESULTS\n2127 individual breath sounds from the ICU patients and 321 breaths from training tapes were obtained. Best overall classification rate for the ICU breath sounds was 73% with 62% sensitivity and 85% specificity. Best overall classification rate for the training tapes was 91% with 87% sensitivity and 95% specificity.\n\n\nCONCLUSIONS\nLong term monitoring of lung sounds is not feasible unless several barriers can be overcome. Several choices in signal representation and neural network design greatly improved the classification rates of breath sounds. The analysis of transmitted sounds from the trachea to the lung is suggested as an area for future study." 
}, { "pmid": "16723034", "title": "Acute respiratory failure in the elderly: etiology, emergency diagnosis and prognosis.", "abstract": "INTRODUCTION\nOur objectives were to determine the causes of acute respiratory failure (ARF) in elderly patients and to assess the accuracy of the initial diagnosis by the emergency physician, and that of the prognosis.\n\n\nMETHOD\nIn this prospective observational study, patients were included if they were admitted to our emergency department, aged 65 years or more with dyspnea, and fulfilled at least one of the following criteria of ARF: respiratory rate at least 25 minute-1; arterial partial pressure of oxygen (PaO2) 70 mmHg or less, or peripheral oxygen saturation 92% or less in breathing room air; arterial partial pressure of CO2 (PaCO2) > or = 45 mmHg, with pH < or = 7.35. The final diagnoses were determined by an expert panel from the completed medical chart.\n\n\nRESULTS\nA total of 514 patients (aged (mean +/- standard deviation) 80 +/- 9 years) were included. The main causes of ARF were cardiogenic pulmonary edema (43%), community-acquired pneumonia (35%), acute exacerbation of chronic respiratory disease (32%), pulmonary embolism (18%), and acute asthma (3%); 47% had more than two diagnoses. In-hospital mortality was 16%. A missed diagnosis in the emergency department was noted in 101 (20%) patients. The accuracy of the diagnosis of the emergency physician ranged from 0.76 for cardiogenic pulmonary edema to 0.96 for asthma. An inappropriate treatment occurred in 162 (32%) patients, and lead to a higher mortality (25% versus 11%; p < 0.001). In a multivariate analysis, inappropriate initial treatment (odds ratio 2.83, p < 0.002), hypercapnia > 45 mmHg (odds ratio 2.79, p < 0.004), clearance of creatinine < 50 ml minute-1 (odds ratio 2.37, p < 0.013), elevated NT-pro-B-type natriuretic peptide or B-type natriuretic peptide (odds ratio 2.06, p < 0.046), and clinical signs of acute ventilatory failure (odds ratio 1.98, p < 0.047) were predictive of death.\n\n\nCONCLUSION\nInappropriate initial treatment in the emergency room was associated with increased mortality in elderly patients with ARF." }, { "pmid": "25234130", "title": "Physiological acoustic sensing based on accelerometers: a survey for mobile healthcare.", "abstract": "This paper reviews the applications of accelerometers on the detection of physiological acoustic signals such as heart sounds, respiratory sounds, and gastrointestinal sounds. These acoustic signals contain a rich reservoir of vital physiological and pathological information. Accelerometer-based systems enable continuous, mobile, low-cost, and unobtrusive monitoring of physiological acoustic signals and thus can play significant roles in the emerging mobile healthcare. In this review, we first briefly explain the operation principle of accelerometers and specifications that are important for mobile healthcare. Applications of accelerometer-based monitoring systems are then presented. Next, we review a variety of accelerometers which have been reported in literatures for physiological acoustic sensing, including both commercial products and research prototypes. Finally, we discuss some challenges and our vision for future development." }, { "pmid": "6826415", "title": "Sound speed in pulmonary parenchyma.", "abstract": "The time it takes audible sound waves to travel across a lobe of excised horse lung was measured. 
Sound speed, which is the slope in the relationship between transit time and distance across the lobe, was estimated by linear regression analysis. Sound-speed estimates for air-filled lungs varied between 25 and 70 m/s, depending on lung volume. These speeds are less than 5% of sound speed in tissue and less than 20% of sound speed in air. Filling the lung with helium or sulfur hexafluoride, whose free-field sound speeds are 970 and 140 m/s, respectively, changed sound speed +/- 10% relative to air filling. Reducing the ambient pressure to 0.1 atm reduced sound speed to 30% of its 1-atm value. Increasing pressure to 7 atm increased sound speed by a factor of 2.6. These results suggest that 1) translobar sound travels through the bulk of the parenchyma and not along airways or blood vessels, and 2) the parenchyma acts as an elastic continuum to audible sound. The speed of sound is given by c = (B/rho)1/2, where B is composite volumetric stiffness of the medium and rho is average density. In the physiologic state B is affected by ambient pressure and percent gas phase. The average density includes both the tissue and gas phases of the parenchyma, so it is dependent on lung volume. These results may be helpful in the quantification of clinical observations of lung sounds." }, { "pmid": "2066121", "title": "Acoustic transmission of the respiratory system using speech stimulation.", "abstract": "Two methods for the analysis of the acoustic transmission of the respiratory system are presented. Continuous speech utterance is used as acoustic stimulation. The transmitted acoustic signal is recorded from various sites over the chest wall. The AR method analyzes the power spectral density function of the transmitted sound, which heavily depends on the microphone assembly and the utterance. The method was applied to a screening problem and was tested on a small database that consisted of 19 normal and five abnormal patients. Using the first five AR coefficients and the prediction error of an AR(10) model, as discriminating features, the system screened all abnormals. An ARMA method is suggested, which eliminates the dependence on microphone and utterance. In this method, the generalized least squares identification algorithm is used to estimate the ARMA transfer function of the respiratory system. The normal transfer function demonstrates a peak at the range of 130-250 Hz and sharp decrease in gain for higher frequencies. A pulmonary fibrotic patient demonstrated a peak at the same frequency range, a much higher gain in the high frequency range with an additional peak at about 700 Hz." }, { "pmid": "7204204", "title": "Spectral characteristics of normal breath sounds.", "abstract": "An objective and accurate measurement and characterization of breath sounds was carried out by a fast-Fourier-transform frequency-domain analysis. Normal vesicular breath sounds, picked up over the chest wall of 10 healthy subjects showed a characteristic pattern: the power of the signal decreased exponentially as frequency increased. Since the log amplitude vs. log frequency relationships were linear, they could be characterized by the values of the slope and the maximal frequency. The average slope of the power spectrum curves was found to be (in dB/oct +/- SD) 13.0 +/- 1.4 over the base of the right lung, 12.6 +/- 2.4 over the base of the left lung, 9.8 +/- 1.4 over the interscapular region, and 14.4 +/- 4.3 over the right anterior chest. 
The maximal frequencies of inspiratory and expiratory breath sounds, picked up over the base of the right lung, were (in Hz +/- SD) 446 +/- 143 and 286 +/- 53 (P less than 0.01), over the base of the left lung 475 +/- 115 and 284 +/- 47 (P less than 0.01), over the interscapular region 434 +/- 130 and 338 +/- 77 (P less than 0.05), and over the right anterior chest 604 +/- 302 and 406 +/- 205 (P less than 0.05). Breath sounds picked up over the trachea were characterized by power spectra typical to a broad spectrum sound with a sharp decrease of power at a cut-off frequency that varied between 850 and 1,600 Hz among the 10 healthy subjects studied." }, { "pmid": "19631934", "title": "Pattern recognition methods applied to respiratory sounds classification into normal and wheeze classes.", "abstract": "In this paper, we present the pattern recognition methods proposed to classify respiratory sounds into normal and wheeze classes. We evaluate and compare the feature extraction techniques based on Fourier transform, linear predictive coding, wavelet transform and Mel-frequency cepstral coefficients (MFCC) in combination with the classification methods based on vector quantization, Gaussian mixture models (GMM) and artificial neural networks, using receiver operating characteristic curves. We propose the use of an optimized threshold to discriminate the wheezing class from the normal one. Also, post-processing filter is employed to considerably improve the classification accuracy. Experimental results show that our approach based on MFCC coefficients combined to GMM is well adapted to classify respiratory sounds in normal and wheeze classes. McNemar's test demonstrated significant difference between results obtained by the presented classifiers (p<0.05)." } ]
Frontiers in Psychology
30319496
PMC6168679
10.3389/fpsyg.2018.01792
Exploring the Potential of Concept Associations for the Creative Generation of Linguistic Artifacts: A Case Study With Riddles and Rhetorical Figures
Automatic generation of linguistic artifacts is a problem that has been sporadically tackled over the years. The main goal of this paper is to explore how concept associations can be useful from a computational creativity point of view to generate some of these artifacts. We present an approach where finding associations between concepts that would not usually be considered as related (for example life and politics or diamond and concrete) could be the seed for the generation of creative and surprising linguistic artifacts such as rhetorical figures (life is like politics) and riddles (what is as hard as concrete?). Human volunteers evaluated the quality and appropriateness of the generated figures and riddles, and the results show that the concept associations obtained are useful for producing these kinds of creative artifacts.
2. Related workThe work presented in this paper draws from the idea that concept associations based on similarity can be useful to generate linguistic artifacts, such as riddles and rhetorical figures. We have surveyed the areas of conceptual similarity, riddle generation and rhetorical figure generation, and the most relevant works in these areas are described in the following subsections.2.1. Conceptual similarity and its relation to creativityConcepts share structures and properties which make them ideal candidates for creative operations such as analogy generation, conceptual blending or design.Analogy is a cognitive process that transfers information or meaning from one concept (the source) to another concept (the target), or a linguistic expression corresponding to such a process. Analogy is based on the mapping of the properties of source and target. This mapping takes place not only between objects, but also between relations of objects. Analogy plays a significant role in problem solving, decision making, generalization, creativity, invention and prediction, and lies behind basic tasks such as the identification of places, objects and people. It has been argued that analogy is “the core of cognition” (Gentner et al., 2001). Specific analogical language comprises comparisons, metaphors and similes. The Contemporary Theory of Metaphor (Lakoff, 1993) suggests that commonly used metaphorical expressions are surface realizations of an underlying conceptual metaphor and are understood via a cross-domain conceptual mapping between two concepts. The Structure Mapping Model (Gentner and Wolff, 1997) proposes that metaphors act to set up correspondences between conceptual structures of the concepts involved. More recently, Feldman (2008) has elaborated the Neural Theory of Language. This theory treats language as a biological human ability and suggests ways in which language and thought may be realized in the brain, putting forward that many basic conceptual metaphors arise from embodied experiences, even before we learn to speak, as they map concepts in our brain rather than words in a sentence (Lakoff, 2012).Conceptual blending (Fauconnier and Turner, 2003) is a basic mental operation that leads to new meaning. It plays a fundamental role in the construction of meaning in everyday life, arts and sciences. The same idea of mapping between source and target used in analogy is used by conceptual blending. The essence of conceptual blending (Fauconnier and Turner, 1998) is to match the mental spaces of two concepts and project them to a separate blended mental space, giving rise to a new concept. Mental spaces are small conceptual packets constructed as we think and talk, for purposes of local understanding and action.The role of concept associations has also been studied in design. In order to generate a product, designers must select a source, discern properties of this source, and transfer these properties to the product they are designing. The selection of a source is affected by the extent to which it represents the meaning the designer intends to convey (its salience) and the strength of its association with the product (their relatedness).Therefore, if we intend to emulate creative operations such as those mentioned above, it is vital to have a way of mapping concepts and finding similarity between them. The structured resource WordNet (Miller, 1995) has a taxonomic organization of nouns and verbs, in which very general categories are successively divided into sub-categories.
This structure allows us to measure the mapping information of two lexical concepts. Therefore, we can identify the deepest point in the taxonomy at which this content starts to diverge, which is called the Least Common Subsumer (LCS) of two concepts (Pedersen et al., 2004). Leacock et al. (1998) use the length of the shortest path between two concepts as a proxy for the conceptual distance between them. To connect two ideas in a hierarchical system, one must vertically ascend the hierarchy from one concept, change direction at a potential LCS, and then descend the hierarchy to reach the second concept. This way of understanding conceptual similarity is called vertical thinking (De Bono, 1970).Vertical thinking reduces the similarity of two concepts to a single number which is poorly suited to creative comparisons (Veale and Li, 2013). When creativity comes into play, as when we look for similarities between concepts to create rhetorical figures, the obvious similarities are not enough: it is necessary to look for new ways of seeing a concept and to find other, less obvious similarities. For example, if we look for the most similar concepts to lawyer we would get concepts like defender or judge, but if we intend to obtain shark as a lawyer-like concept we need to go further. To find novel and non-obvious similar concepts, one must use what De Bono (1970) calls lateral thinking. De Bono states that the best solution for creativity purposes is the combination of lateral and vertical thinking. Lateral thinking can be used to create a group of similar concepts from which one will then be selected by vertical thinking.Thesaurus Rex1 is a system for the exploration of concepts that returns lateral views of a concept which are obtained from the web (Veale and Li, 2013). For example, to highlight the potential toxicity of coffee, Thesaurus Rex suggests, as concepts similar to coffee, alcohol, tobacco or pesticide because they are all categorized as toxic substances on the web.2.2. Generation of rhetorical figuresMetaphors play an important role in communication, occurring as often as every third sentence (Shutova et al., 2012). However, although metaphors have been widely studied in Natural Language Analysis, this has not been the case in Natural Language Generation. There is a lot of work related to metaphor detection (Wilks et al., 2013), identification (Shutova et al., 2010), meaning (Glucksberg and McGlone, 2001; Vega Moreno, 2004; Terai and Nakagawa, 2008; Xiao et al., 2016), extraction and annotation (Wallington et al., 2003), but little related to metaphor generation. The reason can be that metaphor generation is as challenging as human creativity will allow.In the field of Natural Language Generation, there have been a number of attempts to establish procedures for constructing rhetorical figures as important ingredients of generated spans of text. This has been attempted both in general terms (Hervás et al., 2006b) for different types of rhetorical figures, and for specific cases like analogies (Hervás et al., 2006a) or metaphors (Hervás et al., 2007). These attempts considered the problem of using rhetorical figures during text generation in general theoretical terms but lacked a sufficient volume of explicit knowledge on the underlying semantics of words to be capable of practical generation.An interesting feature of rhetorical figures is that they usually work independently of language.
For example, the metaphor “an argument is a war” is used in different languages to express that arguments, like wars, are to be won. This has led to the hypothesis that the mapping between conceptual domains corresponds to neural mappings in the brain (Feldman and Narayanan, 2004; du Castel, 2015). This hypothesis, together with the recent development of sources of knowledge that allow easy mining of large corpora of text for significant word associations, has led to the emergence of a number of systems that rely on these for constructing rhetorical figures of different types.Jigsaw Bard (Veale and Hao, 2011) is a web service that exploits linguistic idioms to generate similes on demand. For any given property (or blend of properties), Jigsaw Bard presents a range of similes. To get these similes it scans Google n-grams to index potential idioms which are then re-purposed as similes. For example, for the property wet Jigsaw Bard returns the idiom “a lake of tears,” which can be used to create a simile like “wet like a lake of tears,” a melancholic way to accentuate the property wet.Metaphor Magnet (Veale and Li, 2012) is a web service that allows users to enter queries such as “Life is a +mystery,” “Google is -Microsoft” or “Steve Jobs is Tony Stark”2. Each of the concepts of the query is expanded using the set of stereotypes that are commonly used to describe it. Then, its properties and those of its stereotypes are associated with the concept. The properties highlighted in the resulting metaphor are those that are at the intersection of the two concepts' properties. For example, in the case of “Life is a +mystery,” the properties entertaining, thrilling, intriguing, and alluring are all highlighted as being in the intersection of life and mystery.Metaphor Eyes (Veale, 2014a) metaphorizes one concept as another concept. Given scientist and explorer it generates metaphors such as “scientists make discoveries like explorers.” It employs a propositional model of the world that reasons with subject-relation-object triples rather than subject-attribute pairs (as Metaphor Magnet does). Metaphor Eyes acquires its world-model from a variety of sources and it views metaphor as a representational lever, allowing it to fill the holes in its weak understanding of one concept by importing relevant knowledge from a neighboring concept.Figure 8 (Harmon, 2015) is a system that contains an underlying model for what defines creative and figurative comparisons, and evaluates its own output based on these rules. The system is provided with a model of the current world and an entity in the world to be described. A suitable noun is selected from the knowledge base, and the comparison between the two nouns is clarified by obtaining an understanding, via corpus search, of what these nouns can do and how they can be described. Sentence completion occurs by intelligent adaptation of a case library of valid grammar constructions. Finally, the comparison is ranked by the system based on semantic, prosodic, and knowledge-based qualities. Figure 8 simulates the human-authoring process of revision by generating many figure variations for a single concept, and choosing the best among them.2.3.
Generation of riddlesAlthough the generation of riddles may seem a difficult task from a computational point of view, there have been several attempts at the automatic generation of riddles, which are presented in this section.De Palma and Weiner (1992) propose a model of a knowledge representation that contains the data to generate or solve riddles. Its knowledge-base contains Concepts and RoleSets. The Concept is the primary representational entity. For example, MOUTH, RIVER-MOUTH, or PERSON-MOUTH. Concepts are connected to one another by links which indicate that the subordinate Concept (subConcept) stands in an inheritance and subsumption relationship with the superordinate Concept (superConcept). For example, PERSON-MOUTH is an ANIMAL-MOUTH and a MOUTH. RoleSets represent predicates of a Concept. For example, PERSON-MOUTH has the RoleSet EAT, meaning that a function of a person's mouth is to eat. They developed an algorithm that generated a riddle based on homophonous concepts in the following way: first, two homophonous concepts are searched for in the knowledge-base (for example, PERSON and RIVER, which share the fact that both have a MOUTH). Secondly, they look for a property that is not shared by both concepts (in the case of PERSON and RIVER we have that PERSON-MOUTH has EAT as a property but RIVER-MOUTH does not have this property). The result is a riddle of this type: “What has a mouth and cannot eat?”JAPE (Joke Analysis and Production Engine) (Binsted and Ritchie, 1997; Ritchie, 2003) is a question-answer riddle generation system. To create riddles, JAPE uses templates with slots where words or phrases are inserted. To determine which words must be incorporated into the final riddle, the system makes use of predefined schemas (manually built from previously known jokes), which establish the relationships that words must hold to build a joke. The program was tested by 120 children who rated generated riddles, human-generated texts, and non-joke texts for “jokiness” and “funniness.” The evaluation confirmed that the generated riddles were jokes, and that there is no significant difference in “funniness” or “jokiness” between punning riddles generated by their system and published human-generated jokes.Some of the authors of JAPE have also developed STANDUP (Waller et al., 2009), a large-scale pun generator to allow children with communication disabilities to improve their linguistic skills. The pun generation followed the same steps used in JAPE, but several improvements had to be introduced in order to adapt the generated puns to the target audience, i.e. children with communication disabilities: speech output, picture support, restricted topics or use of familiar words, etc. The system was evaluated with real users over a short period, and although no positive effects could be observed in the long term, the authors report a change in the attitude of the children toward communication.Guerrero et al. (2015) present a Twitter bot that generates riddles about celebrities. The model selects a celebrity, retrieves relevant attributes to describe her, generates analogies between her attributes and converts such descriptions into utterances, and, finally, tweets the generated riddle and interacts with users by evaluating their answers. The attributes of the celebrities are retrieved from well-structured sources, such as the Non-Official Characterization (NOC) list (Veale, 2015), and from poorly-structured sources, such as Wikipedia.
All the attributes obtained are filtered; only a subset of unique and interesting attributes is considered. A subset of features is considered unique if it describes only one celebrity, and considered interesting when it describes a character with attributes that together represent relevant traits but do not provide so much information that the riddle becomes easy to guess. To evaluate the riddle generation, they asked 86 people to evaluate five riddles. They first asked the participants to guess the answer to the riddle. Then, they presented the correct answer and asked if they knew the person in question. The participants indicated whether they considered the quality of the riddle satisfactory and, if not, gave the reason why it was not good. The percentage of known celebrities once the answer was presented (54.19%) indicates that the process for the selection of celebrities should be improved. The low number of correct answers (15.58%) suggests that the complexity of the generated riddles was high.
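To make the "vertical thinking" measures discussed in Section 2.1 concrete, the sketch below uses NLTK's WordNet interface to compute the least common subsumer and two path-based similarity scores for the lawyer/shark pair mentioned above. It is only an illustration of the taxonomic measures cited (Pedersen et al., 2004; Leacock et al., 1998), not an implementation of Thesaurus Rex or of any of the generation systems surveyed here; the specific synset choices are assumptions.

```python
# Illustrative sketch of WordNet-based "vertical" similarity (LCS and path measures).
# Requires the WordNet data: python -m nltk.downloader wordnet
from nltk.corpus import wordnet as wn

# Assumed sense choices for the example pair discussed in the text.
lawyer = wn.synset("lawyer.n.01")
shark = wn.synset("shark.n.01")

# Deepest shared ancestor in the noun taxonomy (the Least Common Subsumer).
print("LCS:", lawyer.lowest_common_hypernyms(shark))

# Path-based scores: higher means the concepts sit closer in the taxonomy.
print("path similarity:", lawyer.path_similarity(shark))
print("Leacock-Chodorow similarity:", lawyer.lch_similarity(shark))
```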
[ "15631593", "26236228", "15068922", "22961950", "21702791" ]
[ { "pmid": "15631593", "title": "The career of metaphor.", "abstract": "A central question in metaphor research is how metaphors establish mappings between concepts from different domains. The authors propose an evolutionary path based on structure-mapping theory. This hypothesis--the career of metaphor--postulates a shift in mode of mapping from comparison to categorization as metaphors are conventionalized. Moreover, as demonstrated by 3 experiments, this processing shift is reflected in the very language that people use to make figurative assertions. The career of metaphor hypothesis offers a unified theoretical framework that can resolve the debate between comparison and categorization models of metaphor. This account further suggests that whether metaphors are processed directly or indirectly, and whether they operate at the level of individual concepts or entire conceptual domains, will depend both on their degree of conventionality and on their linguistic form." }, { "pmid": "26236228", "title": "Pattern activation/recognition theory of mind.", "abstract": "In his 2012 book How to Create a Mind, Ray Kurzweil defines a \"Pattern Recognition Theory of Mind\" that states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I further the theory to go beyond pattern recognition and include also pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns instead of separate modules, therefore handling them the same as patterns in general. Henceforth I put forward a unified theory I call \"Pattern Activation/Recognition Theory of Mind.\" While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars allows for unifying pattern activation, recognition, organization, consistency checking, metaphor, and learning, into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation." }, { "pmid": "22961950", "title": "Explaining embodied cognition results.", "abstract": "From the late 1950s until 1975, cognition was understood mainly as disembodied symbol manipulation in cognitive psychology, linguistics, artificial intelligence, and the nascent field of Cognitive Science. The idea of embodied cognition entered the field of Cognitive Linguistics at its beginning in 1975. Since then, cognitive linguists, working with neuroscientists, computer scientists, and experimental psychologists, have been developing a neural theory of thought and language (NTTL). Central to NTTL are the following ideas: (a) we think with our brains, that is, thought is physical and is carried out by functional neural circuitry; (b) what makes thought meaningful are the ways those neural circuits are connected to the body and characterize embodied experience; (c) so-called abstract ideas are embodied in this way as well, as is language. 
Experimental results in embodied cognition are seen not only as confirming NTTL but also explained via NTTL, mostly via the neural theory of conceptual metaphor. Left behind more than three decades ago is the old idea that cognition uses the abstract manipulation of disembodied symbols that are meaningless in themselves but that somehow constitute internal \"representations of external reality\" without serious mediation by the body and brain. This article uniquely explains the connections between embodied cognition results since that time and results from cognitive linguistics, experimental psychology, computational modeling, and neuroscience." }, { "pmid": "21702791", "title": "Content differences for abstract and concrete concepts.", "abstract": "Concept properties are an integral part of theories of conceptual representation and processing. To date, little is known about conceptual properties of abstract concepts, such as idea. This experiment systematically compared the content of 18 abstract and 18 concrete concepts, using a feature generation task. Thirty-one participants listed characteristics of the concepts (i.e., item properties) or their relevant context (i.e., context properties). Abstract concepts had significantly fewer intrinsic item properties and more properties expressing subjective experiences than concrete concepts. Situation components generated for abstract and concrete concepts differed in kind, but not in number. Abstract concepts were predominantly related to social aspects of situations. Properties were significantly less specific for abstract than for concrete concepts. Thus, abstractness emerged as a function of several, both qualitative and quantitative, factors." } ]
IEEE Journal of Translational Engineering in Health and Medicine
30310758
PMC6170138
10.1109/JTEHM.2018.2853553
Design and Study of a Smart Cup for Monitoring the Arm and Hand Activity of Stroke Patients
This paper presents a new platform to monitor the arm and hand activity of stroke patients during rehabilitation exercises in the hospital and at home during their daily living activities. The platform provides relevant data to the therapist in order to assess the patient's physical state and adapt the rehabilitation program if necessary. The platform consists of a self-contained smart cup that can be used to perform exercises that are similar to everyday tasks such as drinking. The first smart cup prototype, the design of which was based on interviews regarding the needs of therapists, contains various sensors that collect information about its orientation, the liquid level, its position compared to a reference target and tremors. The prototype also includes audio and visual displays that provide feedback to patients about their movements. Two studies were carried out in conjunction with healthcare professionals and patients. The first study focused on collecting feedback from healthcare professionals to assess the functionalities of the cup and to improve the prototype. Based on this first study, we designed an improved prototype and created a visualization tool for therapists. Finally, we carried out a preliminary study involving nine patients who had experienced an ischemic or hemorrhagic stroke in the previous 24 months. This preliminary study focused on assessing the usability and acceptability of the cup to the patients. The results showed that the cup was very well accepted by eight of the nine patients for monitoring their activity within a rehabilitation center or at home. Moreover, these eight patients had almost no concerns about the design of the cup and its usability.
II.Related WorkAfter a stroke, patients may face motor disorders such as hemiparesis, spasticity [12], visual impairments (vision loss, double vision, depth and distance perception problems or color detection problems) [4] or tremors [13], [14]. These motor and sensory disabilities have a direct impact on daily activities. Indeed, previous studies have shown that stroke patients experience problems in manipulating everyday objects (cups, forks, pens, etc.) [11], [15]. Patients tend to use compensatory strategies to reach a cup and move it to their mouth; for example, they may move their chest forward rather than extending their arm fully to reach and grasp an object. Gialanella et al. [9] have shown that ADLs are good outcome predictors in stroke patients, since they reflect the real motor activity of the patient by highlighting motor and sensory weaknesses. This section presents the state of the art in motor assessment tools for upper limbs after a stroke and new platforms for stroke monitoring.A.Motor Assessment Tools for Upper LimbsAlthough various methods have emerged for evaluating the recovery of patients after a stroke, these are empirical and are based on visual estimations. Pandian and Arya [16] surveyed the existing motor assessment tools for evaluating upper extremity (UE) motor recovery. Most post-stroke hemiparetic patients recover their motor abilities in stages. After carrying out a series of longitudinal observations, Brunnstrom defined the stages of arm and hand motor recovery, called the Brunnstrom Recovery Stages (BRS). The BRS is divided into a part A for the arm and a part H for the hand [17]. BRS-A has seven stages that involve basic and complex arm controls, such as bending, extension or moving forward without moving the trunk. BRS-H has six stages that describe the recovery of grasping, and lateral and palmar prehension; the higher the stage, the better the recovery. However, the BRS relies on subjective observations made by the therapists. The Fugl-Meyer assessment (FMA) was created based on the BRS [18], and is the first stroke-specific assessment tool which follows the natural recovery process of a post-stroke hemiparetic patient [19]. Five domains are assessed: motor function, sensory function, balance, joint range of motion and joint pain. Each domain is divided into two parts, the UE and lower extremity (LE). However, the UE aspect of FMA (FMA-UE) is more widely used than FMA-LE [20]. Like the BRS, FMA is based on subjective observations. Following this, the Wolf motor function test (WMFT) was developed exclusively for patients receiving constraint-induced movement therapy. The WMFT is used to estimate a patient's UE motor deficits, and involves movements from shoulder flexion to fine finger flexion [21]. However, the WMFT requires specific equipment, such as a dynamometer to measure the prehension force, and can therefore be expensive. Finally, the most interesting scale is the action research arm test (ARAT), which is divided into four categories: grasp, grip, pinch and gross movements [22]. The ARAT includes 19 items that are given scores between zero (cannot perform any part of the test) and three (performs the test normally). The patient is asked to complete the most difficult task in the subscale; if the patient achieves the maximum score in this task (score = 3), this score is assigned for the whole sub-scale and the patient then tries the next sub-scale.
It should be noted that the ARAT grip and gross movement categories are more widely used than the grasp and pinch categories [23]. An interesting aspect of the ARAT is that the test must be performed under standard conditions (with a specific chair and table set), meaning that the test is easily reproducible. Moreover, the objects used during the test can easily be instrumented with sensors (a cube or a cup, for example). All of the above estimation methods evaluate arm and hand function using different tasks and scales, although the most common tasks are hand grasp and elbow flexion. In addition, these empirical methods are based on visual estimations and subjective observations, which require the presence of a therapist.

B. New Platforms for Stroke Monitoring

In the past decade, several authors have investigated new information and communication technologies for health applications, and more particularly for providing new approaches to the monitoring of stroke patients during rehabilitation. Iosa et al. found that smart objects and wearable devices will be able to enhance the monitoring of stroke patients in the near future [24]. For example, Patel et al. [25] used accelerometers to accurately estimate the scores assigned by clinicians using the functional ability scale. Other studies have tried to characterize tremors during daily activities using wearables, by comparing the number of movement units (NMUs); these are based on an analysis of the smoothness and efficiency of the movements made by healthy, mild-stroke and moderate-stroke subjects [11], [26]. The results show that the more a patient is affected by stroke, the greater the number of NMUs that can be found in the movement. Some studies of wearables focus on monitoring the body kinematics of stroke patients in everyday life. Tognetti et al. proposed an innovative garment that is able to detect the position and movement of the upper limbs [27], while Laudanski et al. [28] used inertial sensors to estimate lower limb joint kinematics during stair ambulation in healthy older adults and stroke survivors. However, the patients were required to wear devices or sensors at specific locations on their body. Other works have investigated self-monitoring for rehabilitation and have found that this is effective in improving physical function and quality of life, particularly for early post-stroke patients [29]. For example, MagicMirror is a self-monitoring platform that is based on motion tracking using a Kinect [7]. Rehabilitation exercises are performed with a therapist and recorded with a camera, and the patient then uses this video recording as a reference exercise while exercising at home. However, these exercises are stroke-specific and are not based on ADLs. Moreover, they also require the therapist to spend an entire session at the hospital recording the reference exercise.

As described above, current evaluation methods for motor activity recovery after a stroke are empirical and are based on visual estimations made by the therapists during rehabilitation sessions. Other solutions have emerged for monitoring stroke patients during everyday activities in order to assess their independence and recovery. However, wearables require the patient to wear sensors, and self-monitoring can be expensive and is based on specific exercises that do not match those performed in daily life.
Using objects with embedded sensors to perform exercises based on ADLs is an interesting alternative that can provide relevant information to a therapist about a patient's level of independence in everyday life. The manipulation of a cup (filling, drinking, etc.) is a task that can provide consistent information on hand and arm motor functions, as it involves reaching, grasping, filling and manipulation.
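As an illustration of the smoothness analysis mentioned in Section B, the Python sketch below counts movement units as prominent peaks in a smoothed speed profile, which is a common way of operationalising NMUs; fewer peaks indicate a smoother movement. The sampling rate, the thresholds, and the idea of applying this to the cup's motion data are assumptions made for the example, not details of the platform described here.

# Hedged sketch: count "movement units" as prominent speed peaks (Python).
import numpy as np
from scipy.signal import find_peaks, savgol_filter

def count_movement_units(positions, fs=100.0, min_speed=0.02, min_separation_s=0.15):
    # positions: (N, 3) array of hand/cup positions in metres, sampled at fs Hz.
    velocity = np.gradient(positions, 1.0 / fs, axis=0)           # m/s per axis
    speed = np.linalg.norm(velocity, axis=1)                      # scalar speed profile
    speed = savgol_filter(speed, window_length=21, polyorder=3)   # light smoothing
    peaks, _ = find_peaks(speed,
                          height=min_speed,                       # ignore sensor noise
                          distance=int(min_separation_s * fs))    # merge nearby peaks
    return len(peaks)

# A single smooth reach (half-cosine position profile) should yield about one movement unit.
t = np.linspace(0.0, 1.0, 101)
reach = np.stack([0.15 * (1.0 - np.cos(np.pi * t)),
                  np.zeros_like(t), np.zeros_like(t)], axis=1)
print(count_movement_units(reach))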
[ "12194622", "19029069", "10797161", "19246706", "22641250", "26900258", "20829411", "25100036", "8248993", "1561662", "1135616", "17553941", "21051765", "20616302", "11482350", "22647879", "15743530", "22791198", "2592969" ]
[ { "pmid": "12194622", "title": "Prevalence of spasticity post stroke.", "abstract": "OBJECTIVES\nTo establish the prevalence of spasticity 12 months after stroke and examine its relationship with functional ability.\n\n\nDESIGN\nA cohort study of prevalence of spasticity at 12 months post stroke.\n\n\nSETTING\nInitially hospitalized but subsequently community-dwelling stroke survivors in Liverpool, UK.\n\n\nSUBJECTS\nOne hundred and six consecutively presenting stroke patients surviving to 12 months.\n\n\nMAIN OUTCOME MEASURES\nMuscle tone measured at the elbow using the Modified Ashworth Scale and at several joints, in the arms and legs, using the Tone Assessment Scale; functional ability using the modified Barthel Index.\n\n\nRESULTS\nIncreased muscle tone (spasticity) was present in 29 (27%) and 38 (36%) of the 106 patients when measured using the Modified Ashworth Scale and Tone Assessment Scale respectively. Combining the results from both scales produced a prevalence of 40 (38%). Those with spasticity had significantly lower Barthel scores at 12 months (p < 0.0001).\n\n\nCONCLUSION\nWhen estimating the prevalence of spasticity it is essential to assess both arms and legs, using both scales. Despite measuring tone at several joints, spasticity was demonstrated in only 40 (38%) patients, lower than previous estimates." }, { "pmid": "19029069", "title": "Visual impairment following stroke: do stroke patients require vision assessment?", "abstract": "BACKGROUND\nthe types of visual impairment followings stroke are wide ranging and encompass low vision, eye movement and visual field abnormalities, and visual perceptual difficulties.\n\n\nOBJECTIVE\nthe purpose of this paper is to present a 1-year data set and identify the types of visual impairment occurring following stroke and their prevalence.\n\n\nMETHODS\na multi-centre prospective observation study was undertaken in 14 acute trust hospitals. Stroke survivors with a suspected visual difficulty were recruited. Standardised screening/referral and investigation forms were employed to document data on visual impairment specifically assessment of visual acuity, ocular pathology, eye alignment and movement, visual perception (including inattention) and visual field defects.\n\n\nRESULTS\nthree hundred and twenty-three patients were recruited with a mean age of 69 years [standard deviation (SD) 15]. Sixty-eight per cent had eye alignment/movement impairment, 49% had visual field impairment, 26.5% had low vision and 20.5% had perceptual difficulties.\n\n\nCONCLUSIONS\nof patients referred with a suspected visual difficulty, only 8% had normal vision status confirmed on examination. Ninety-two per cent had visual impairment of some form confirmed which is considerably higher than previous publications and probably relates to the prospective, standardised investigation offered by specialist orthoptists. However, under-ascertainment of visual problems cannot be ruled out." }, { "pmid": "10797161", "title": "Home or hospital for stroke rehabilitation? results of a randomized controlled trial : I: health outcomes at 6 months.", "abstract": "BACKGROUND AND PURPOSE\nWe wished to examine the effectiveness of an early hospital discharge and home-based rehabilitation scheme for patients with acute stroke.\n\n\nMETHODS\nThis was a randomized, controlled trial comparing early hospital discharge and home-based rehabilitation with usual inpatient rehabilitation and follow-up care. 
The trial was carried out in 2 affiliated teaching hospitals in Adelaide, South Australia. Participants were 86 patients with acute stroke (mean age, 75 years) who were admitted to hospital and required rehabilitation. Forty-two patients received early hospital discharge and home-based rehabilitation (median duration, 5 weeks), and 44 patients continued with conventional rehabilitation care after randomization. The primary end point was self-reported general health status (SF-36) at 6 months after randomization. A variety of secondary outcome measures were also assessed.\n\n\nRESULTS\nOverall, clinical outcomes for patients did not differ significantly between the groups at 6 months after randomization, but the total duration of hospital stay in the experimental group was significantly reduced (15 versus 30 days; P<0.001). Caregivers among the home-based rehabilitation group had significantly lower mental health SF-36 scores (mean difference, 7 points).\n\n\nCONCLUSIONS\nA policy of early hospital discharge and home-based rehabilitation for patients with stroke can reduce the use of hospital rehabilitation beds without compromising clinical patient outcomes. However, there is a potential risk of poorer mental health on the part of caregivers. The choice of this management strategy may therefore depend on convenience and costs but also on further evaluations of the impact of stroke on caregivers." }, { "pmid": "19246706", "title": "Future demographic trends decrease the proportion of ischemic stroke patients receiving thrombolytic therapy: a call to set-up therapeutic studies in the very old.", "abstract": "BACKGROUND AND PURPOSE\nThrombolytic therapy with tissue plasminogen activator (tPA) is rarely applied to ischemic stroke patients aged 80 years and above. As future demographic trends will increase the proportion of older stroke patients, the overall tPA treatment rate may decrease. The aim of the present analysis was to provide an estimate of the future number of ischemic stroke patients and the fraction thereof receiving tPA.\n\n\nMETHODS\nIn 2005, n=12 906 hospitalized ischemic stroke patients were included into a large registry covering the Federal State of Hesse, Germany. Age- and gender-specific frequency rates for ischemic stroke and tPA therapy were calculated based on the registry and the respective population data. Population projections until 2050 were derived from the Hessian Bureau of Statistics.\n\n\nRESULTS\nAssuming constant age- and gender-specific stroke incidence rates and treatment strategies, the total number of ischemic stroke patients will rise by approximately 68% until 2050, whereas the proportion of tPA-treated ischemic stroke patients will decrease from 4.5% to 3.8% in the same time frame (relative decrease 16%; chi(2) P<0.001).\n\n\nCONCLUSIONS\nFuture demographic changes will reduce tPA treatment rates. Therapeutic studies focusing on very old stroke patients are necessary to counteract this trend." 
}, { "pmid": "22641250", "title": "Predicting outcome after stroke: the role of basic activities of daily living predicting outcome after stroke.", "abstract": "BACKGROUND\nVery few studies have investigated the influence of single activities of daily living (ADL) at admission as possible predictors of functional outcome after rehabilitation.\n\n\nAIM\nThe aim of the current study was to investigate admission functional status and performance of basic ADLs as assessed by Functional Independence Measure (FIM) scale as possible predictors of motor and functional outcome after stroke during inpatient rehabilitation.\n\n\nDESIGN\nThis is a prospective and observational study.\n\n\nSETTING\nInpatients of our Department of Physical Medicine and Rehabilitation.\n\n\nPOPULATION\nTwo hundred sixty consecutive patients with primary diagnosis of stroke were enrolled and 241 patients were used in the final analyses.\n\n\nMETHODS\nTwo backward stepwise regression analyses were applied to predict outcome. The first backward stepwise regression had age, gender, stroke type, stroke-lesion size, aphasia, neglect, onset to admission interval, Cumulative Illness Rating Scale, National Institute of Health Stroke Scale (NIHSS), Fugl-Meyer Scale, Trunk Control Test, and FIM (total, motor and cognitive scores) as independent variables. The second analyses included the above variables plus FIM items as an independent variable. The dependent variables were the discharge scores and effectiveness in total and motor-FIM, and discharge destination.\n\n\nRESULTS\nThe first multivariate analysis showed that admission Fugl-Meyer, neglect, total, motor and cognitive FIM scores were the most important predictors of FIM outcomes, while admission NIHSS score was the only predictor of discharge destination. Conversely, when admission single FIM items were included in the statistical model, admission Fugl-Meyer, neglect, grooming, dressing upper body, and social interaction scores were the most important predictors of FIM outcomes, while admission memory and bowel control scores were the only predictors of discharge destination.\n\n\nCONCLUSION\nOur study indicates that performances of basic ADLs are important stroke outcome predictors and among which social interaction, grooming, upper body dressing, and bowel control are the most important.\n\n\nCLINICAL REHABILITATION IMPACT\nThe results of this study suggests that, when designing other studies on stroke outcome predictions, researchers should also include tests which assess performances of basic ADLs as independent variables, because this may allow identification of new prognostic indicators that can be helpful for the physician for managing stroke patients at the end of the rehabilitation period." }, { "pmid": "26900258", "title": "Smart Cup: A Minimally-Instrumented, Smartphone-Based Point-of-Care Molecular Diagnostic Device.", "abstract": "Nucleic acid amplification-based diagnostics offer rapid, sensitive, and specific means for detecting and monitoring the progression of infectious diseases. However, this method typically requires extensive sample preparation, expensive instruments, and trained personnel. All of which hinder its use in resource-limited settings, where many infectious diseases are endemic. Here, we report on a simple, inexpensive, minimally-instrumented, smart cup platform for rapid, quantitative molecular diagnostics of pathogens at the point of care. 
Our smart cup takes advantage of water-triggered, exothermic chemical reaction to supply heat for the nucleic acid-based, isothermal amplification. The amplification temperature is regulated with a phase-change material (PCM). The PCM maintains the amplification reactor at a constant temperature, typically, 60-65°C, when ambient temperatures range from 12 to 35°C. To eliminate the need for an optical detector and minimize cost, we use the smartphone's flashlight to excite the fluorescent dye and the phone camera to record real-time fluorescence emission during the amplification process. The smartphone can concurrently monitor multiple amplification reactors and analyze the recorded data. Our smart cup's utility was demonstrated by amplifying and quantifying herpes simplex virus type 2 (HSV-2) with LAMP assay in our custom-made microfluidic diagnostic chip. We have consistently detected as few as 100 copies of HSV-2 viral DNA per sample. Our system does not require any lab facilities and is suitable for use at home, in the field, and in the clinic, as well as in resource-poor settings, where access to sophisticated laboratories is impractical, unaffordable, or nonexistent." }, { "pmid": "20829411", "title": "Kinematic variables quantifying upper-extremity performance after stroke during reaching and drinking from a glass.", "abstract": "BACKGROUND\nThree-dimensional kinematic analysis provides quantitative and qualitative assessment of upper-limb motion and is used as an outcome measure to evaluate impaired movement after stroke. The number of kinematic variables used, however, is diverse, and models for upper-extremity motion analysis vary.\n\n\nOBJECTIVE\nThe authors aim to identify a set of clinically useful and sensitive kinematic variables to quantify upper-extremity motor control during a purposeful daily activity, that is, drinking from a glass.\n\n\nMETHODS\nFor this purpose, 19 participants with chronic stroke and 19 healthy controls reached for a glass of water, took a sip, and placed it back on a table in a standardized way. An optoelectronic system captured 3-dimensional kinematics. Kinematical parameters describing movement time, velocity, strategy and smoothness, interjoint coordination, and compensatory movements were analyzed between groups.\n\n\nRESULTS\nThe majority of kinematic variables showed significant differences between study groups. The number of movement units, total movement time, and peak angular velocity of elbow discriminated best between healthy participants and those with stroke as well as between those with moderate (Fugl-Meyer scores of 39-57) versus mild (Fugl-Meyer scores of 58-64) arm impairment. In addition, the measures of compensatory trunk and arm movements discriminated between those with moderate and mild stroke impairment.\n\n\nCONCLUSION\nKinematic analysis in this study identified a set of movement variables during a functional task that may serve as an objective assessment of upper-extremity motor performance in persons who can complete a task, such as reaching and drinking, after stroke." }, { "pmid": "25100036", "title": "Automated assessment of upper extremity movement impairment due to stroke.", "abstract": "Current diagnosis and treatment of movement impairment post-stroke is based on the subjective assessment of select movements by a trained clinical specialist. However, modern low-cost motion capture technology allows for the development of automated quantitative assessment of motor impairment. 
Such outcome measures are crucial for advancing post-stroke treatment methods. We sought to develop an automated method of measuring the quality of movement in clinically-relevant terms from low-cost motion capture. Unconstrained movements of upper extremity were performed by people with chronic hemiparesis and recorded by standard and low-cost motion capture systems. Quantitative scores derived from motion capture were compared to qualitative clinical scores produced by trained human raters. A strong linear relationship was found between qualitative scores and quantitative scores derived from both standard and low-cost motion capture. Performance of the automated scoring algorithm was matched by averaged qualitative scores of three human raters. We conclude that low-cost motion capture combined with an automated scoring algorithm is a feasible method to assess objectively upper-arm impairment post stroke. The application of this technology may not only reduce the cost of assessment of post-stroke movement impairment, but also promote the acceptance of objective impairment measures into routine medical practice." }, { "pmid": "8248993", "title": "Hemibody tremor related to stroke.", "abstract": "BACKGROUND\nHemibody tremor is an uncommon manifestation of stroke. We describe a case investigated by both brain magnetic resonance imaging and positron emission tomography using [18F]fluorodeoxyglucose.\n\n\nCASE DESCRIPTION\nThree months after a pure motor stroke, a 65-year-old man developed a right arm and leg tremor. The tremor was of large amplitude, intermittent at rest; its frequency was 5 to 6 Hz. Neither rigidity nor akinesia was detected, and administration of L-dopa was ineffective. Brain magnetic resonance imaging revealed an ischemic lesion in the left centrum semiovale and a left caudate lacunar infarction. We suspected that the resting unilateral tremor was related to this lacunar lesion. Positron emission tomography demonstrated glucose hypermetabolism in the left sensorimotor cortex.\n\n\nCONCLUSIONS\nThis case suggests that unilateral tremor may be related to a lacunar stroke in the caudate nucleus and may be accompanied by an increased glucose metabolism in the contralateral sensorimotor cortex." }, { "pmid": "1561662", "title": "Delayed onset hand tremor caused by cerebral infarction.", "abstract": "BACKGROUND AND PURPOSE\nHand tremor has been rarely described as a manifestation of stroke.\n\n\nCASE DESCRIPTIONS\nThree patients developed delayed onset hand tremor due to cerebral infarctions on the contralateral side, two in the caudate nucleus and the third in the thalamus. The tremors of all three patients were similar to parkinsonian tremor, but less monotonous.\n\n\nCONCLUSIONS\nDelayed-onset hand tremor should be considered as one of the movement disorders caused by cerebral infarction." }, { "pmid": "1135616", "title": "The post-stroke hemiplegic patient. 1. a method for evaluation of physical performance.", "abstract": "A system for evaluation of motor function, balance, some sensation qualities and joint function in hemiplegic patients is described in detail. The system applies a cumulative numerical score. A series of hemiplegic patients has been followed from within one week post-stroke and throughout one year. When initially nearly flaccid hemiparalysis prevails, the motor recovery, if any occur, follows a definable course. 
The findings in this study substantiate the validity of ontogenetic principles as applicable to the assessment of motor behaviour in hemiplegic patients, and foocus the importance of early therapeutic measures against contractures." }, { "pmid": "17553941", "title": "Changing motor synergies in chronic stroke.", "abstract": "Synergies are thought to be the building blocks of vertebrate movements. The inability to execute synergies in properly timed and graded fashion precludes adequate functional motor performance. In humans with stroke, abnormal synergies are a sign of persistent neurological deficit and result in loss of independent joint control, which disrupts the kinematics of voluntary movements. This study aimed at characterizing training-related changes in synergies apparent from movement kinematics and, specifically, at assessing: 1) the extent to which they characterize recovery and 2) whether they follow a pattern of augmentation of existing abnormal synergies or, conversely, are characterized by a process of extinction of the abnormal synergies. We used a robotic therapy device to train and analyze paretic arm movements of 117 persons with chronic stroke. In a task for which they received no training, subjects were better able to draw circles by discharge. Comparison with performance at admission on kinematic robot-derived metrics showed that subjects were able to execute shoulder and elbow joint movements with significantly greater independence or, using the clinical description, with more isolated control. We argue that the changes we observed in the proposed metrics reflect changes in synergies. We show that they capture a significant portion of the recovery process, as measured by the clinical Fugl-Meyer scale. A process of \"tuning\" or augmentation of existing abnormal synergies, not extinction of the abnormal synergies, appears to underlie recovery." }, { "pmid": "21051765", "title": "Motor recovery and cortical reorganization after mirror therapy in chronic stroke patients: a phase II randomized controlled trial.", "abstract": "OBJECTIVE\nTo evaluate for any clinical effects of home-based mirror therapy and subsequent cortical reorganization in patients with chronic stroke with moderate upper extremity paresis.\n\n\nMETHODS\nA total of 40 chronic stroke patients (mean time post .onset, 3.9 years) were randomly assigned to the mirror group (n = 20) or the control group (n = 20) and then joined a 6-week training program. Both groups trained once a week under supervision of a physiotherapist at the rehabilitation center and practiced at home 1 hour daily, 5 times a week. The primary outcome measure was the Fugl-Meyer motor assessment (FMA). The grip force, spasticity, pain, dexterity, hand-use in daily life, and quality of life at baseline-posttreatment and at 6 months-were all measured by a blinded assessor. Changes in neural activation patterns were assessed with functional magnetic resonance imaging (fMRI) at baseline and posttreatment in an available subgroup (mirror, 12; control, 9).\n\n\nRESULTS\nPosttreatment, the FMA improved more in the mirror than in the control group (3.6 ± 1.5, P < .05), but this improvement did not persist at follow-up. No changes were found on the other outcome measures (all Ps >.05). 
fMRI results showed a shift in activation balance within the primary motor cortex toward the affected hemisphere in the mirror group only (weighted laterality index difference 0.40 ± 0.39, P < .05).\n\n\nCONCLUSION\nThis phase II trial showed some effectiveness for mirror therapy in chronic stroke patients and is the first to associate mirror therapy with cortical reorganization. Future research has to determine the optimum practice intensity and duration for improvements to persist and generalize to other functional domains." }, { "pmid": "20616302", "title": "Measurement structure of the Wolf Motor Function Test: implications for motor control theory.", "abstract": "BACKGROUND\nTools chosen to measure poststroke upper-extremity rehabilitation outcomes must match contemporary theoretical expectations of motor deficit and recovery because an assessment's theoretical underpinning forms the conceptual basis for interpreting its score.\n\n\nOBJECTIVE\nThe purpose of this study was to investigate the theoretical framework of the Wolf Motor Function Test (WMFT) by (1) determining whether all items measured a single underlying trait and (2) examining the congruency between the hypothesized and the empirically determined item difficulty orders.\n\n\nMETHODS\nConfirmatory factor analysis (CFA) and Rasch analysis were applied to existing WMFT Functional Ability Rating Scale data from 189 participants in the EXCITE (Extremity Constraint-Induced Therapy Evaluation) trial. Fit of a 1-factor CFA model (all items) was compared with the fit of a 2-factor CFA model (factors defined according to item object-grasp requirements) with fit indices, model comparison test, and interfactor correlations.\n\n\nRESULTS\nOne item was missing sufficient data and therefore removed from analysis. CFA fit indices and the model-comparison test suggested that both models fit equally well. The 2-factor model yielded a strong interfactor correlation, and 13 of 14 items fit the Rasch model. The Rasch item difficulty order was consistent with the hypothesized item difficulty order.\n\n\nCONCLUSION\nThe results suggest that WMFT items measure a single construct. Furthermore, the results depict an item difficulty hierarchy that may advance the theoretical discussion of the person ability versus task difficulty interaction during stroke recovery." }, { "pmid": "11482350", "title": "The responsiveness of the Action Research Arm test and the Fugl-Meyer Assessment scale in chronic stroke patients.", "abstract": "The responsiveness of the Action Research Arm (ARA) test and the upper extremity motor section of the Fugl-Meyer Assessment (FMA) scale were compared in a cohort of 22 chronic stroke patients undergoing intensive forced use treatment aimed at improvement of upper extremity function. The cohort consisted of 13 men and 9 women, median age 58.5 years, median time since stroke 3.6 years. Responsiveness was defined as the sensitivity of an instrument to real change. Two baseline measurements were performed with a 2-week interval before the intervention, and a follow-up measurement after 2 weeks of intensive forced use treatment. The limits of agreement, according to the Bland-Altman method, were computed as a measure of the test-retest reliability. Two different measures of responsiveness were compared: (i) the number of patients who improved more than the upper limit of agreement during the intervention; (ii) the responsiveness ratio. 
The limits of agreement, designating the interval comprising 95% of the differences between two measurements in a stable individual, were -5.7 to 6.2 and -5.0 to 6.6 for the ARA test and the FMA scale, respectively. The possible sum scores range from 0 to 57 (ARA) and from 0 to 66 (FMA). The number of patients who improved more than the upper limit were 12 (54.5%) and 2 (9.1%); and the responsiveness ratios were 2.03 and 0.41 for the ARA test and the FMA scale, respectively. These results strongly suggest that the ARA test is more responsive to improvement in upper extremity function than the FMA scale in chronic stroke patients undergoing forced use treatment." }, { "pmid": "22647879", "title": "Movement kinematics during a drinking task are associated with the activity capacity level after stroke.", "abstract": "BACKGROUND\nKinematic analysis is a powerful method for an objective assessment of movements and is increasingly used as an outcome measure after stroke. Little is known about how the actual movement performance measured with kinematics is related to the common traditional assessment scales. The aim of this study was to determine the relationships between movement kinematics from a drinking task and the impairment or activity limitation level after stroke.\n\n\nMETHODS\nKinematic analysis of movement performance in a drinking task was used to measure movement time, smoothness, and angular velocity of elbow and trunk displacement (TD) in 30 individuals with stroke. Sensorimotor impairment was assessed with the Fugl-Meyer Assessment (FMA), activity capacity limitation with the Action Research Arm Test (ARAT), and self-perceived activity difficulties with the ABILHAND questionnaire.\n\n\nRESULTS\nBackward multiple regression revealed that the movement smoothness (similarly to movement time) and TD together explain 67% of the total variance in ARAT. Both variables uniquely contributed 37% and 11%, respectively. The TD alone explained 20% of the variance in the FMA, and movement smoothness explained 6% of the variance in the ABILHAND.\n\n\nCONCLUSIONS\nThe kinematic movement performance measures obtained during a drinking task are more strongly associated with activity capacity than with impairment. The movement smoothness and time, possibly together with compensatory movement of the trunk, are valid measures of activity capacity and can be considered as key variables in the evaluation of upper-extremity function after stroke. This increased knowledge is of great value for better interpretation and application of kinematic data in clinical studies." }, { "pmid": "15743530", "title": "Wearable kinesthetic system for capturing and classifying upper limb gesture in post-stroke rehabilitation.", "abstract": "BACKGROUND: Monitoring body kinematics has fundamental relevance in several biological and technical disciplines. In particular the possibility to exactly know the posture may furnish a main aid in rehabilitation topics. In the present work an innovative and unobtrusive garment able to detect the posture and the movement of the upper limb has been introduced, with particular care to its application in post stroke rehabilitation field by describing the integration of the prototype in a healthcare service. 
METHODS: This paper deals with the design, the development and implementation of a sensing garment, from the characterization of innovative comfortable and diffuse sensors we used to the methodologies employed to gather information on the posture and movement which derive from the entire garments. Several new algorithms devoted to the signal acquisition, the treatment and posture and gesture reconstruction are introduced and tested. RESULTS: Data obtained by means of the sensing garment are analyzed and compared with the ones recorded using a traditional movement tracking system. CONCLUSION: The main results treated in this work are summarized and remarked. The system was compared with a commercial movement tracking system (a set of electrogoniometers) and it performed the same accuracy in detecting upper limb postures and movements." }, { "pmid": "22791198", "title": "Unraveling the interaction between pathological upper limb synergies and compensatory trunk movements during reach-to-grasp after stroke: a cross-sectional study.", "abstract": "The aim of the present study was to identify how pathological limb synergies between shoulder and elbow movements interact with compensatory trunk movements during a functional movement with the paretic upper limb after stroke. 3D kinematic joint and trunk angles were measured during a reach-to-grasp movement in 46 patients with stroke and 12 healthy individuals. We used principal component analyses (PCA) to identify components representing linear relations between the degrees of freedom of the upper limb and trunk across patients with stroke and healthy participants. Using multivariate logistic regression analysis, we investigated whether component scores were related to the presence or absence of basic limb synergies as indicated by the arm section of the Fugl-Meyer motor assessment (FMA). Four and three principal components were extracted in patients with stroke and healthy individuals, respectively. Visual inspection revealed that the contribution of joint and trunk angles to each component differed substantially between groups. The presence of the flexion synergy (Shoulder Abduction and Elbow Flexion) was reflected by component 1, whereas the compensatory role of trunk movements for lack of shoulder and elbow movements was reflected by components 2 and 3 respectively. The presence or absence of basic limb synergies as determined by means of the FMA was significantly related to components 2 (p = 0.014) and 3 (p = 0.003) in patients with stroke. These significant relations indicate that PCA is a useful tool to identify clinically meaningful interactions between compensatory trunk movements and pathological synergies in the elbow and shoulder during reach-to-grasp after stroke." }, { "pmid": "2592969", "title": "Arm function after stroke. An evaluation of grip strength as a measure of recovery and a prognostic indicator.", "abstract": "The value of strength of voluntary grip as an indicator of recovery of arm function was assessed by testing 38 recent stroke patients using a sensitive electronic dynamometer, and comparing the results with those from five other arm movement and function tests (Motricity Index, Motor Club Assessment, Nine Hole Peg Test, and Frenchay Arm Test). This procedure allowed measurement of grip in a large proportion of patients, and strength correlated highly with performance on the other tests. Measuring grip over a six month follow up period was a sensitive method of charting intrinsic neurological recovery. 
The presence of voluntary grip at one month indicates that there will be some functional recovery at six months." } ]
Royal Society Open Science
30839667
PMC6170552
10.1098/rsos.180529
Taking advantage of hybrid bioinspired intelligent algorithm with decoupled extended Kalman filter for optimizing growing and pruning radial basis function network
The growing and pruning radial basis function (GAP-RBF) network is a promising sequential learning algorithm for prediction analysis, but the parameter selection of such a network is usually a non-convex problem and therefore difficult to handle. In this paper, a hybrid bioinspired intelligent algorithm is proposed to optimize GAP-RBF. Specifically, the excellent local convergence of particle swarm optimization (PSO) and the extensive search ability of the genetic algorithm (GA) are both exploited to optimize the weights and bias term of GAP-RBF. Meanwhile, a competitive mechanism is proposed to make the hybrid algorithm choose the appropriate individuals for effective search and to further improve its optimization ability. Moreover, a decoupled extended Kalman filter (DEKF) method is introduced in this study to reduce the size of the error covariance matrix and decrease the computational complexity of performing real-time predictions. In the experiments, three classic forecasting problems (abalone age, Boston house price and auto MPG) are adopted for extensive testing, and the experimental results show that our method performs better than the two single bioinspired optimization algorithms, PSO and GA. What is more, our method with DEKF achieves better results than state-of-the-art sequential learning algorithms, such as GAP-RBF, the minimal resource allocation network, the resource allocation network using an extended Kalman filter and the resource allocation network.
2. Related work

2.1. Growing and pruning radial basis function network

GAP-RBF is a promising feed-forward neural network proposed by Huang et al. [14]. Such a network introduces the concept of significance with respect to each neuron and uses it for growing and pruning hidden neurons. The neuron significance is defined as the contribution made by that neuron to the network output averaged over all the knowledge of input data received so far. The output of an RBF network with respect to an input vector x ∈ R^l is given by

f(x) = \alpha_0 + \sum_{k=1}^{K} \alpha_k \phi_k(x),    (2.1)

where K is the number of neurons, and α_0 and α_k are the bias term and the connecting weight vector to the output neurons, respectively. φ_k(x) is the Gaussian response of the kth hidden neuron:

\phi_k(x) = \exp\left( -\frac{\| x - \mu_k \|^2}{\sigma_k^2} \right),    (2.2)

where μ_k ∈ R^l and σ_k are the centre and width of the kth hidden neuron, respectively.

The learning steps of GAP-RBF comprise the allocation of new hidden neurons, the adaptation of network parameters, and the pruning of neurons that contribute little. The network begins with no hidden neurons. When a new observation (x_n, y_n) is received during training, a new hidden neuron is allocated if the growing condition is satisfied:

\| x_n - \mu_{nr} \| > \varepsilon_n, \quad \| e_n \| > e_{\min}, \quad E_{sig}(K+1) > e_{\min},    (2.3)

where e_n = y_n − f(x_n) denotes the error between the real output and the expected output of the network, and μ_nr is the centre nearest to x_n. The value e_min represents the desired approximation accuracy of the network output, and the distance ε_n is the scale of resolution in the input space. The algorithm begins with ε_n = ε_max, where ε_max is chosen as the largest scale of interest in the input space, typically the entire input space of non-zero probability. The distance ε_n is decayed exponentially as ε_n = max{ε_max γ^n, ε_min}, where 0 < γ < 1 is a decay constant [13]; the value of ε_n decays until it reaches ε_min [6].

To reduce the computational effort, GAP-RBF derives the significance using a piecewise linear approximation to the Gaussian functions. The significance of the kth hidden neuron, E_sig(k), is then estimated as

E_{sig}(k) = \left| \frac{(1.8 \sigma_k)^l \, \alpha_k}{S(X)} \right|,    (2.4)

where S(X) is the size of the input space and l is the dimension of the input space. Thus, when all three criteria in formula (2.3) are satisfied, a new neuron K + 1 is added to the network with

\alpha_{K+1} = e_n, \quad \mu_{K+1} = x_n, \quad \sigma_{K+1} = \kappa \| x_n - \mu_{nr} \|,    (2.5)

where κ is an overlap factor determining the amount of overlap between the responses of the hidden units in the input space. If an observation (x_n, y_n) does not meet all three criteria in formula (2.3), only the parameters of the neuron nearest to the current input are adapted, using the extended Kalman filter (EKF) algorithm, to fit that observation. Finally, to keep the network compact, the nearest neuron is checked for pruning according to its significance E_sig(nr), given by

E_{sig}(nr) = \frac{| \alpha_{nr} | \, (1.8 \sigma_{nr})^l}{S(X)} < e_{\min}.    (2.6)

If the average contribution made by neuron nr over the whole range X is less than the expected accuracy e_min, that is, if neuron nr is insignificant, this neuron is removed [21].
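To make the growing and pruning criteria in (2.3)-(2.6) concrete, the Python sketch below checks them for a single observation with a scalar output. It is a minimal illustration of the decision logic only, with placeholder hyper-parameter values and a simplified treatment of the first neuron; the EKF adaptation of the nearest neuron is deliberately omitted.

# Hedged sketch of the GAP-RBF growing/pruning decisions for one sample (Python).
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-np.linalg.norm(x - mu) ** 2 / sigma ** 2)

def network_output(x, net):
    return net["alpha0"] + sum(a * gaussian(x, m, s)
                               for a, m, s in zip(net["alphas"], net["mus"], net["sigmas"]))

def gap_rbf_step(x, y, net, eps_n, e_min, kappa, S_X, l):
    # net: dict with keys alpha0 (scalar), alphas, mus, sigmas (lists); scalar output assumed.
    e_n = y - network_output(x, net)
    if net["mus"]:
        dists = [np.linalg.norm(x - m) for m in net["mus"]]
        nearest, dist = int(np.argmin(dists)), float(min(dists))
        sigma_new = kappa * dist                        # width a candidate neuron would get, eq. (2.5)
    else:
        nearest, dist, sigma_new = None, np.inf, kappa  # simplified handling of the very first neuron
    # Significance the candidate neuron would have, eq. (2.4), with alpha_{K+1} = e_n.
    e_sig_new = abs(e_n) * (1.8 * sigma_new) ** l / S_X
    if dist > eps_n and abs(e_n) > e_min and e_sig_new > e_min:     # growing condition, eq. (2.3)
        net["alphas"].append(e_n)
        net["mus"].append(np.array(x, dtype=float))
        net["sigmas"].append(sigma_new)
    elif nearest is not None:
        # The full algorithm adapts the nearest neuron with an EKF here (omitted in this sketch),
        # then checks it for pruning by its significance, eq. (2.6).
        e_sig_nr = abs(net["alphas"][nearest]) * (1.8 * net["sigmas"][nearest]) ** l / S_X
        if e_sig_nr < e_min:
            for key in ("alphas", "mus", "sigmas"):
                net[key].pop(nearest)
    return net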
In comparison with previous RANs, GAP-RBF has very few thresholds to define, but when GAP-RBF is used to deal with real-world problems it does not always achieve good accuracy, because the significance condition is difficult to estimate precisely, especially for non-uniformly distributed input data.

2.2. Bioinspired intelligent algorithms

As is well known, optimizing a network to tackle real-world problems is usually a complex and non-convex issue. In such a situation, it is difficult to obtain the global optimum, or a near-global optimum, using traditional gradient-based optimization algorithms. Inspired by the rules of biological intelligence, swarm sociality and natural phenomena, bioinspired intelligent algorithms have received extensive attention since they can effectively address this issue.

As one of the typical bioinspired intelligent algorithms, PSO was proposed by Kennedy & Eberhart [15,22] based on the concepts of social models and swarm theories. The swarm consists of individual particles, and it is assumed that each particle in the swarm flies over the search space looking for promising regions of the landscape [23]. In each iteration, each particle moves towards the personal best solution and the global best solution concurrently to explore the optimum solution in the search space [24]. PSO is established with few or even no assumptions about the search space. This feature enables PSO to search for the optimum solution in a wide search space. By using the best information of both the individual and the swarm, PSO has a relatively comprehensive search ability for an optimization problem. However, it performs poorly on problems that have many potential optima and may become trapped at local optima [25].

GA was introduced by Holland [16] as a powerful bioinspired intelligent algorithm for global search and optimization. It is a random search algorithm based on computational models simulating the evolutionary mechanisms of nature, and it can solve nonlinear problems by searching the whole space [26]. GA has become popular because of its relative simplicity and robustness [27]. In the population, each chromosome evolves in parallel, repeatedly modifying individual solutions. Although GA shows strong global search ability, the blindness of its random crossover and mutation operators makes it difficult to perform a detailed search around local optima. Thus, combining GA with other powerful search algorithms to evolve the global optimum solution has become a recent research hotspot. Note that PSO uses the individual and social best information and has excellent local convergence. In this study, the extensive search ability of GA and the excellent local convergence of PSO are both used to optimize the weights and bias term between the hidden layer and the output layer, which are difficult to optimize precisely in the basic GAP-RBF.
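For reference, the canonical PSO update described above (each particle pulled towards its personal best and the swarm's global best) can be sketched as below. This is the textbook formulation only: the competitive PSO/GA hybrid proposed in the paper adds crossover, mutation and a competition step that are not reproduced here, and the coefficient values are conventional defaults rather than the paper's settings.

# Hedged sketch of canonical PSO (not the paper's hybrid), in Python.
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))            # positions
    v = np.zeros_like(x)                                        # velocities
    pbest = x.copy()                                            # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                        # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                              # position update
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Example: minimise the sphere function in 5 dimensions.
best_x, best_f = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=5)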
[ "23811384", "18252454", "9117909", "15619929" ]
[ { "pmid": "23811384", "title": "A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation.", "abstract": "Radial basis function (RBF) neural network is constructed of certain number of RBF neurons, and these networks are among the most used neural networks for modeling of various nonlinear problems in engineering. Conventional RBF neuron is usually based on Gaussian type of activation function with single width for each activation function. This feature restricts neuron performance for modeling the complex nonlinear problems. To accommodate limitation of a single scale, this paper presents neural network with similar but yet different activation function-hyper basis function (HBF). The HBF allows different scaling of input dimensions to provide better generalization property when dealing with complex nonlinear problems in engineering practice. The HBF is based on generalization of Gaussian type of neuron that applies Mahalanobis-like distance as a distance metrics between input training sample and prototype vector. Compared to the RBF, the HBF neuron has more parameters to optimize, but HBF neural network needs less number of HBF neurons to memorize relationship between input and output sets in order to achieve good generalization property. However, recent research results of HBF neural network performance have shown that optimal way of constructing this type of neural network is needed; this paper addresses this issue and modifies sequential learning algorithm for HBF neural network that exploits the concept of neuron's significance and allows growing and pruning of HBF neuron during learning process. Extensive experimental study shows that HBF neural network, trained with developed learning algorithm, achieves lower prediction error and more compact neural network." }, { "pmid": "18252454", "title": "Performance evaluation of a sequential minimal radial basis function (RBF) neural network learning algorithm.", "abstract": "This paper presents a detailed performance analysis of the minimal resource allocation network (M-RAN) learning algorithm, M-RAN is a sequential learning radial basis function neural network which combines the growth criterion of the resource allocating network (RAN) of Platt (1991) with a pruning strategy based on the relative contribution of each hidden unit to the overall network output. The resulting network leads toward a minimal topology for the RAN. The performance of this algorithm is compared with the multilayer feedforward networks (MFNs) trained with 1) a variant of the standard backpropagation algorithm, known as RPROP and 2) the dependence identification (DI) algorithm of Moody and Antsaklis on several benchmark problems in the function approximation and pattern classification areas. For all these problems, the M-RAN algorithm is shown to realize networks with far fewer hidden neurons with better or same approximation/classification accuracy. Further, the time taken for learning (training) is also considerably shorter as M-RAN does not require repeated presentation of the training data." }, { "pmid": "9117909", "title": "A sequential learning scheme for function approximation using minimal radial basis function neural networks.", "abstract": "This article presents a sequential learning algorithm for function approximation and time-series prediction using a minimal radial basis function neural network (RBFNN). 
The algorithm combines the growth criterion of the resource-allocating network (RAN) of Platt (1991) with a pruning strategy based on the relative contribution of each hidden unit to the overall network output. The resulting network leads toward a minimal topology for the RBFNN. The performance of the algorithm is compared with RAN and the enhanced RAN algorithm of Kadirkamanathan and Niranjan (1993) for the following benchmark problems: (1) hearta from the benchmark problems database PROBEN1, (2) Hermite polynomial, and (3) Mackey-Glass chaotic time series. For these problems, the proposed algorithm is shown to realize RBFNNs with far fewer hidden neurons with better or same accuracy." }, { "pmid": "15619929", "title": "An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks.", "abstract": "This paper presents a simple sequential growing and pruning algorithm for radial basis function (RBF) networks. The algorithm referred to as growing and pruning (GAP)-RBF uses the concept of \"Significance\" of a neuron and links it to the learning accuracy. \"Significance\" of a neuron is defined as its contribution to the network output averaged over all the input data received so far. Using a piecewise-linear approximation for the Gaussian function, a simple and efficient way of computing this significance has been derived for uniformly distributed input data. In the GAP-RBF algorithm, the growing and pruning are based on the significance of the \"nearest\" neuron. In this paper, the performance of the GAP-RBF learning algorithm is compared with other well-known sequential learning algorithms like RAN, RANEKF, and MRAN on an artificial problem with uniform input distribution and three real-world nonuniform, higher dimensional benchmark problems. The results indicate that the GAP-RBF algorithm can provide comparable generalization performance with a considerably reduced network size and training time." } ]
Frontiers in Neurorobotics
30319389
PMC6170616
10.3389/fnbot.2018.00061
Cooperative and Competitive Reinforcement and Imitation Learning for a Mixture of Heterogeneous Learning Modules
This paper proposes Cooperative and competitive Reinforcement And Imitation Learning (CRAIL) for selecting an appropriate policy from a set of multiple heterogeneous modules and training all of them in parallel. Each learning module has its own network architecture and improves the policy based on an off-policy reinforcement learning algorithm and behavior cloning from samples collected by a behavior policy that is constructed by a combination of all the policies. Since the mixing weights are determined by the performance of the module, a better policy is automatically selected based on the learning progress. Experimental results on a benchmark control task show that CRAIL successfully achieves fast learning by allowing modules with complicated network structures to exploit task-relevant samples for training.
2. Related work

Several reinforcement learning methods with multiple modules have been proposed. Compositional Q-learning (Singh, 1992) selects the learning module with the least TD-error, and the Selected Expert Reinforcement Learner (Ring and Schaul, 2011) extends the value function to select a module with better performance. Doya et al. (2002) proposed Multiple Model-based Reinforcement Learning (MMRL), in which each module comprises a state prediction model and a reinforcement learning controller, and the module with the least prediction error is selected and trained. These approaches can be interpreted as instances of the "mixture of experts" concept. In these approaches, every module has the same structure and uses the same learning algorithm, whereas CRAIL enables the use of heterogeneous learning modules that can be trained concurrently. One interpretation is that these methods distribute the modules spatially, because they change the module based on the current environmental state, whereas CRAIL distributes the modules temporally, because it switches them according to the learning progress.

Some researchers have integrated an RL algorithm with hand-coded policies to improve the learning progress in its initial stage. Smart and Kaelbling (2002) proposed an architecture comprising a supplied control policy and Q-learning. In the first learning phase, the robot is controlled by the supplied control policy developed by a designer. The second learning phase, in which the learned policy controls the robot, begins once the value function has been sufficiently approximated. Xie et al. (2018) proposed a similar approach to incorporate prior knowledge, in which Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2016) and a PID controller are used as the off-policy learner and the hand-coded policy, respectively. However, a limitation of their approach is that it uses only one learning module. CRAIL is a more general architecture for incorporating multiple sources of prior knowledge. Sutton et al. (1999) described the advantages of off-policy learning and proposed a novel framework to accelerate learning by representing policies at multiple levels of temporal abstraction. Although their method assumed a semi-Markov decision problem and AVRL, CRAIL can use different learning algorithms.

Our framework can also be interpreted as learning from demonstrations. Many previous studies can be found in this field, and some recent ones such as (Gao et al., 2018; Hester et al., 2018; Nair et al., 2018) integrated reinforcement learning with learning from demonstrations by augmenting the objective function. Our framework resembles those methods from the viewpoint of the design of the objective function. The role of the demonstrator is different, however, because our framework's demonstrator is selected from multiple heterogeneous policies based on the learning progress, whereas previous studies assumed that it is stationary and used it to generate a training dataset. Since CRAIL explicitly represents the behavior policy, actions can easily be sampled from it to evaluate the behavior cloning loss.

The most closely related study is Mix & Match (Czarnecki et al., 2018), in which multiple heterogeneous modules are trained in parallel. Mix & Match's basic idea resembles CRAIL, but it does not consider multiple reinforcement learning algorithms; CRAIL adopts three learning algorithms for every module.
In addition, Mix & Match uses a mixture of policies and optimizes the mixing weights by a kind of evolutionary computation. Since Mix & Match needs multiple simulators, it is sample-inefficient. In CRAIL, by contrast, the mixing weights are determined automatically.
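The Python sketch below illustrates the general idea of performance-based mixing discussed here: each module keeps a running estimate of its recent return, and the behaviour policy samples a module with softmax weights over those estimates before acting. The softmax form, temperature and moving-average scheme are illustrative assumptions (this excerpt does not specify how CRAIL computes its mixing weights), so the sketch shows the concept rather than the published algorithm.

# Hedged sketch: mix heterogeneous policy modules by their recent performance (Python).
import numpy as np

class PerformanceMixedPolicy:
    def __init__(self, modules, temperature=1.0, smoothing=0.05, seed=0):
        self.modules = modules                      # each module exposes .act(state)
        self.temperature = temperature
        self.smoothing = smoothing
        self.avg_return = np.zeros(len(modules))    # running performance estimate per module
        self.rng = np.random.default_rng(seed)

    def mixing_weights(self):
        z = self.avg_return / self.temperature
        z = z - z.max()                             # numerical stability
        w = np.exp(z)
        return w / w.sum()

    def act(self, state):
        # Sample a module according to its weight, then act with it.
        k = int(self.rng.choice(len(self.modules), p=self.mixing_weights()))
        return k, self.modules[k].act(state)

    def update_performance(self, module_index, episode_return):
        # Exponential moving average of the return obtained when this module acted.
        a = self.smoothing
        self.avg_return[module_index] = (1 - a) * self.avg_return[module_index] + a * episode_return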
[ "12020450", "29395652", "18555958", "25719670", "26819042", "29052630" ]
[ { "pmid": "12020450", "title": "Multiple model-based reinforcement learning.", "abstract": "We propose a modular reinforcement learning architecture for nonlinear, nonstationary control tasks, which we call multiple model-based reinforcement learning (MMRL). The basic idea is to decompose a complex task into multiple domains in space and time based on the predictability of the environmental dynamics. The system is composed of multiple modules, each of which consists of a state prediction model and a reinforcement learning controller. The \"responsibility signal,\" which is given by the softmax function of the prediction errors, is used to weight the outputs of multiple modules, as well as to gate the learning of the prediction models and the reinforcement learning controllers. We formulate MMRL for both discrete-time, finite-state case and continuous-time, continuous-state case. The performance of MMRL was demonstrated for discrete case in a nonstationary hunting task in a grid world and for continuous case in a nonlinear, nonstationary control task of swinging up a pendulum with variable physical parameters." }, { "pmid": "29395652", "title": "Sigmoid-weighted linear units for neural network function approximation in reinforcement learning.", "abstract": "In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10 × 10 board, using TD(λ) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa(λ) agent with SiLU and dSiLU hidden units." }, { "pmid": "18555958", "title": "Central pattern generators for locomotion control in animals and robots: a review.", "abstract": "The problem of controlling locomotion is an area in which neuroscience and robotics can fruitfully interact. In this article, I will review research carried out on locomotor central pattern generators (CPGs), i.e. neural circuits capable of producing coordinated patterns of high-dimensional rhythmic output signals while receiving only simple, low-dimensional, input signals. The review will first cover neurobiological observations concerning locomotor CPGs and their numerical modelling, with a special focus on vertebrates. It will then cover how CPG models implemented as neural networks or systems of coupled oscillators can be used in robotics for controlling the locomotion of articulated robots. The review also presents how robots can be used as scientific tools to obtain a better understanding of the functioning of biological CPGs. 
Finally, various methods for designing CPGs to control specific modes of locomotion will be briefly reviewed. In this process, I will discuss different types of CPG models, the pros and cons of using CPGs with robots, and the pros and cons of using robots as scientific tools. Open research topics both in biology and in robotics will also be discussed." }, { "pmid": "25719670", "title": "Human-level control through deep reinforcement learning.", "abstract": "The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks." }, { "pmid": "26819042", "title": "Mastering the game of Go with deep neural networks and tree search.", "abstract": "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. 
This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away." }, { "pmid": "29052630", "title": "Mastering the game of Go without human knowledge.", "abstract": "A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo." } ]
Research Synthesis Methods
29956486
PMC6175382
10.1002/jrsm.1311
Prioritising references for systematic reviews with RobotAnalyst: A user study
Screening references is a time‐consuming step necessary for systematic reviews and guideline development. Previous studies have shown that human effort can be reduced by using machine learning software to prioritise large reference collections such that most of the relevant references are identified before screening is completed. We describe and evaluate RobotAnalyst, a Web‐based software system that combines text‐mining and machine learning algorithms for organising references by their content and actively prioritising them based on a relevancy classification model trained and updated throughout the process. We report an evaluation over 22 reference collections (most are related to public health topics) screened using RobotAnalyst with a total of 43 610 abstract‐level decisions. The number of references that needed to be screened to identify 95% of the abstract‐level inclusions for the evidence review was reduced on 19 of the 22 collections. Significant gains over random sampling were achieved for all reviews conducted with active prioritisation, as compared with only two of five when prioritisation was not used. RobotAnalyst's descriptive clustering and topic modelling functionalities were also evaluated by public health analysts. Descriptive clustering provided more coherent organisation than topic modelling, and the content of the clusters was apparent to the users across a varying number of clusters. This is the first large‐scale study using technology‐assisted screening to perform new reviews, and the positive results provide empirical evidence that RobotAnalyst can accelerate the identification of relevant studies. The results also highlight the issue of user complacency and the need for a stopping criterion to realise the work savings.
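The evaluation above is summarised as the number of references that must be screened, in prioritised order, before 95% of the abstract‐level inclusions are found, compared against what random ordering would be expected to require. A minimal sketch of that calculation is given below; the toy labels and function names are illustrative assumptions, not RobotAnalyst code.

```python
# Illustrative sketch (not RobotAnalyst code): screening burden at 95% recall
# and the corresponding saving over random-order screening (WSS@95%-style).
import math

def burden_at_recall(ranked_labels, target_recall=0.95):
    """References screened, in ranked order, until target_recall of all
    inclusions (label 1) have been identified."""
    needed = math.ceil(target_recall * sum(ranked_labels))
    found = 0
    for screened, label in enumerate(ranked_labels, start=1):
        found += label
        if found >= needed:
            return screened
    return len(ranked_labels)

def work_saved(ranked_labels, target_recall=0.95):
    # Random ordering is expected to need roughly target_recall of the whole
    # collection, so the saving is the gap between that and the ranked burden.
    return target_recall - burden_at_recall(ranked_labels, target_recall) / len(ranked_labels)

# Toy example: 2 inclusions among 10 references, ranked so both surface early.
labels = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
print(burden_at_recall(labels))  # 3 of 10 references screened
print(work_saved(labels))        # 0.65
```

This mirrors the work‐saved‐over‐sampling style of measure used in several of the studies cited in the related work below.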
2 RELATED WORK
The earliest study of machine learning to emulate the inclusion decisions for systematic reviews was the work of Cohen et al.9 Previously, machine learning had been demonstrated50, 51 to be as effective as hand‐tuned Boolean queries52 for retrieving general categories (therapy, diagnosis, aetiology, and prognosis) of high‐quality studies for the evidence‐based medicine literature. Subsequent studies15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 32, 33, 36 have explored different feature spaces (words or multiword patterns, MeSH terms from PubMed metadata, Unified Medical Language System (UMLS) nomenclature, and topical or thematic features) and machine learning models and techniques, such as naive Bayes, support vector machines (SVMs), and logistic regression. Others have incorporated out‐of‐topic inclusions53 and unscreened references.30, 31
Most previous studies of machine learning for systematic reviews use a fixed training and test set of references. In these cases, a user screens a portion of the references (either a random sample or, for a review update, those up to a given publication year); the machine learns from the screened portion and predicts the relevant references within the remainder (the test set). Finally, only the references predicted as relevant are screened manually by a human reviewer. In this scenario, two reviewers (human and machine) have screened every inclusion. A similar scenario involves two human reviewers who each screen half of the collection to train two independent classification models; each model provides relevancy predictions on the other half, and discrepancies are resolved by the humans.18 A key issue with these scenarios is the low specificity within the training set. This poses a problem because identifying all or nearly all of the inclusions is essential for systematic reviews, yet off‐the‐shelf machine learning algorithms make predictions under the assumption that all misclassifications (eg, predicting an inclusion instead of an exclusion or vice versa) are equally unwelcome. This is not the case in systematic reviews, where unnecessary inclusions during abstract‐level screening (from being overly inclusive) can later be discarded, whereas missing relevant references violates the purpose of systematic reviews.1 Without adjustment, a classification model may perform poorly on such imbalanced samples. To overcome this, principled adjustments and various ad hoc techniques, such as subsampling or reweighting, have been explored.
Active learning54, 55, 56 is the process of using a classification model's predictions to iteratively select its training data. It provides an alternative scenario that prioritises the screening process from beginning to end, which naturally ameliorates the imbalanced sample problem. After training on a small set of references screened by a human, active learning proceeds by prioritising references based on their predicted relevancy and the confidence of that prediction. One objective of active learning is to select training examples that improve the model as quickly as possible, so that it can eventually be applied to the remaining references; in this case, the references with the lowest prediction confidence are screened first.22, 23, 26, 30, 31, 54 Another approach is relevancy‐based prioritisation, where the references with the highest predicted probability of being relevant are screened first24, 26, 29, 31, 33 (a process known as relevancy feedback57 or certainty‐based screening26).
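As a concrete illustration of the relevancy‐based (certainty‐based) prioritisation just described, the sketch below repeatedly retrains a classifier on the decisions made so far and surfaces the unscreened references with the highest predicted probability of inclusion. It is a generic outline under assumed names (load_titles_and_abstracts, initial_sample_ids, and get_label stand in for the reference collection and the human reviewer), not the implementation of any of the systems discussed in this section.

```python
# Generic sketch of relevancy-based active learning for citation screening.
# The helper functions are assumptions standing in for the reference collection
# and the human reviewer; the feature and model choices are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = load_titles_and_abstracts()                      # assumed helper
X = TfidfVectorizer(stop_words="english").fit_transform(texts)

labels = {}                                              # reference id -> 0/1 decision
for ref_id in initial_sample_ids():                      # assumed seed sample containing
    labels[ref_id] = get_label(ref_id)                   # at least one inclusion/exclusion

batch_size = 25
while len(labels) < X.shape[0]:
    screened = sorted(labels)
    # class_weight="balanced" is one of the reweighting adjustments noted above.
    model = LogisticRegression(max_iter=1000, class_weight="balanced")
    model.fit(X[screened], [labels[i] for i in screened])

    unscreened = [i for i in range(X.shape[0]) if i not in labels]
    probs = model.predict_proba(X[unscreened])[:, 1]     # P(include) per reference
    # Relevancy feedback: screen the most probably relevant references next.
    for j in np.argsort(-probs)[:batch_size]:
        ref_id = unscreened[j]
        labels[ref_id] = get_label(ref_id)               # new human decision
```

In practice the loop stops well before the whole collection has been screened, which is exactly where the stopping‐criterion issue raised in the abstract becomes important.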
Essentially, active learning uses new screening decisions made by the user to improve the prioritisation throughout the process. Furthermore, active learning naturally handles the imbalanced sample problem by including references with a substantial chance of being relevant.
In the rest of this section, we review screening systems currently used in applications for systematic reviewing. Prioritisation performance has been measured for some of these systems, but the nature of the evaluation settings varies.
EPPI‐reviewer34 is a tool for reference screening available through a Web‐based interface for a subscription fee (https://eppi.ioe.ac.uk/cms/er4/). It provides automatic term recognition using several methods, including TerMine from the National Centre for Text Mining,58 which, as described in the EPPI‐reviewer user manual (https://eppi.ioe.ac.uk/CMS/Portals/35/Manuals/ER4.7.0%20user%20manual.pdf), can be used to find relevant references based on terms found in previous inclusions. References can also be clustered using the Lingo3G software (https://carrotsearch.com/lingo3g/). Reference prioritisation is not generally available to all users, but it has already been tested for scoping reviews,59 which differ from systematic reviews in taking into account much larger sets of possibly eligible references and in having eligibility criteria that are developed iteratively during the process. EPPI‐reviewer was used in two scoping reviews, containing over 800 000 and 1 million references, and provided substantial workload reductions (around 90%). One should note, though, that because of the collection sizes not all references were manually screened, so recall was estimated using random samples from the whole reference set.
Specifically designed for facilitating screening based on active learning, Abstrackr33 is a free online open‐source tool (http://abstrackr.cebm.brown.edu/) that uses the dual supervision paradigm, where classification rules are not only learned automatically from screening decisions but also provided explicitly by users as lists of words whose occurrence in text is indicative of reference inclusion. Another interesting extension is collaborative screening, which takes into account the different levels of experience and costs of reviewers working on the same study in an active learning scenario.60, 61 The underlying classifier is an SVM over n‐grams (word sequences). A prospective evaluation using relevancy‐based prioritisation was performed by an assistant, who used decisions by a domain expert to resolve dubious cases. An independent evaluation62 was performed on four previous reviews (containing 517, 1042, 1415, and 1735 references). In this case, only the inclusions were evaluated by a reviewer, while exclusions were judged by verifying whether they were present in the published reviews (ie, the references were included after full‐text screening). The reported work saved was 40%, 9%, 56%, and 57%, respectively.
SWIFT‐Active Screener (https://www.sciome.com/swift-activescreener/) is a Web‐based interface for systematic reviews with active learning prioritisation.36 Similar to RobotAnalyst, it uses bag‐of‐words features (the counts of distinct words within the title and abstract) and the topic distributions estimated by latent Dirichlet allocation,43, 63 and prioritises references using a logistic regression model. The differences are SWIFT's inclusion of MeSH terms and RobotAnalyst's use of a linear SVM, rather than logistic regression, for the classification model.
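To make the shared feature design in the preceding paragraph concrete, the sketch below combines title/abstract word counts with latent Dirichlet allocation topic proportions and ranks references with a linear SVM. It is a rough approximation under assumed parameter values and helper names; neither SWIFT‐Active Screener's nor RobotAnalyst's actual preprocessing or hyperparameters are reproduced here.

```python
# Sketch of the bag-of-words + LDA-topic representation scored by a linear model.
# Topic count, regularisation, and the helper functions are illustrative assumptions.
from scipy.sparse import csr_matrix, hstack
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

texts = load_titles_and_abstracts()                       # assumed helper (as above)
counts = CountVectorizer(min_df=2, stop_words="english").fit_transform(texts)
topics = LatentDirichletAllocation(n_components=50,       # topic count is assumed
                                   random_state=0).fit_transform(counts)
features = hstack([counts, csr_matrix(topics)]).tocsr()   # word counts + topic mixture

labelled_ids, y = screened_ids_and_labels()               # assumed helper: ids, 0/1 labels
model = LinearSVC(C=1.0).fit(features[labelled_ids], y)
scores = model.decision_function(features)                # higher score = screen sooner
```

Substituting logistic regression for the linear SVM in the last two lines reflects the modelling difference between the two systems noted above.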
SWIFT‐Active Screener implements a separate model to predict the number of inclusions remaining to be screened, which can be used as a signal for the reviewer to stop screening. The system is interoperable with a related desktop application, SWIFT‐Review (https://www.sciome.com/swift-review/), which is freely available. A cross‐validation evaluation36 across 20 previously completed reviews, including 15 from Cohen et al,9 has shown consistent work saved over sampling.
Rayyan35 is a free Web application (https://rayyan.qcri.org/) for systematic review screening. The machine learning model28 is an SVM‐trained classifier that uses unigrams, bigrams, and MeSH terms and suggests relevancy using a 5‐star system. It was evaluated on data from Cohen et al,9 and a pilot user study on two previously completed Cochrane reviews (273 and 1030 references) was undertaken for qualitative evaluation. The interface provides a simple tool for noting exclusion reasons and supports visualisation of a similarity graph of references.
Another Web‐based system for screening references is Colandr (http://www.colandrcommunity.com/). The system has an open‐source code base and uses a linear model applied to vector representations of references based on word vectors.64, 65
Besides screening prioritisation, there are other text‐mining tools to assist study selection for systematic reviews.66 For some systematic reviews, the inclusion criteria dictate that the reference describes a randomised controlled study or that the study uses certain methodologies (eg, double‐blinding) to ensure quality. Tools that automatically recognise these67, 68, 69, 70, 71 can be used to generate tags to filter references. Study selection can also benefit from fine‐grained information extraction from the full article text, eg, to find sentences corresponding to PICO criteria elements,72 or from efforts to automatically summarise included studies.66
In summary, numerous studies have evaluated automatic classification for systematic reviews, and some of these methods have been implemented within end‐user systems, but their evaluations have been limited either to simulations involving previously completed reviews or to partial reviews that have not been verified by complete manual screening. To the best of our knowledge, our work is the first large‐scale user‐based evaluation that performs new screening tasks from start to finish.
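Returning to the stopping‐criterion point raised above and in the abstract: SWIFT‐Active Screener's own estimator of the remaining inclusions is not described here, so the sketch below shows only one naive way such a signal could be produced, by summing calibrated inclusion probabilities over the unscreened references (an assumption for illustration, not the published method). It reuses the feature matrix and labels from the previous sketch.

```python
# Naive stopping-signal sketch (not SWIFT-Active Screener's published method):
# estimate the inclusions remaining among unscreened references by summing
# calibrated predicted inclusion probabilities. Reuses names from the sketch above.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC

calibrated = CalibratedClassifierCV(LinearSVC(), cv=5)     # Platt-style calibration
calibrated.fit(features[labelled_ids], y)

labelled = set(labelled_ids)
unscreened_ids = [i for i in range(features.shape[0]) if i not in labelled]
p_include = calibrated.predict_proba(features[unscreened_ids])[:, 1]
expected_remaining = float(p_include.sum())

# One possible rule of thumb: stop once the expected remaining inclusions fall
# below ~5% of those already found, i.e. roughly 95% estimated recall.
if expected_remaining < 0.05 * sum(y):
    print("estimated recall is about 95% or better; consider stopping screening")
```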
[ "20877712", "12111924", "28096751", "16357352", "24558353", "22515596", "22481134", "22677493", "24475099", "20595313", "21084178", "23428470", "22961102", "24954015", "27293211", "28648605", "14872004", "15561789", "7850570", "19567792", "26104742", "25656516", "26659355", "28806999", "27443385" ]
[ { "pmid": "20877712", "title": "Seventy-five trials and eleven systematic reviews a day: how will we ever keep up?", "abstract": "When Archie Cochrane reproached the medical profession for not having critical summaries of all randomised controlled trials, about 14 reports of trials were being published per day. There are now 75 trials, and 11 systematic reviews of trials, per day and a plateau in growth has not yet been reached. Although trials, reviews, and health technology assessments have undoubtedly had major impacts, the staple of medical literature synthesis remains the non-systematic narrative review. Only a small minority of trial reports are being analysed in up-to-date systematic reviews. Given the constraints, Archie Cochrane's vision will not be achieved without some serious changes in course. To meet the needs of patients, clinicians, and policymakers, unnecessary trials need to be reduced, and systematic reviews need to be prioritised. Streamlining and innovation in methods of systematic reviewing are necessary to enable valid answers to be found for most patient questions. Finally, clinicians and patients require open access to these important resources." }, { "pmid": "12111924", "title": "Identification of randomized controlled trials in systematic reviews: accuracy and reliability of screening records.", "abstract": "A study was conducted to estimate the accuracy and reliability of reviewers when screening records for relevant trials for a systematic review. A sensitive search of ten electronic bibliographic databases yielded 22 571 records of potentially relevant trials. Records were allocated to four reviewers such that two reviewers examined each record and so that identification of trials by each reviewer could be compared with those identified by each of the other reviewers. Agreement between reviewers was assessed using Cohen's kappa statistic. Ascertainment intersection methods were used to estimate the likely number of trials missed by reviewers. Full copies of reports were obtained and assessed independently by two researchers for eligibility for the review. Eligible reports formed the 'gold standard' against which an assessment was made about the accuracy of screening by reviewers. After screening, 301 of 22 571 records were identified by at least one reviewer as potentially relevant. Agreement was 'almost perfect' (kappa>0.8) within two pairs, 'substantial' (kappa>0.6) within three pairs and 'moderate' (kappa>0.4) within one pair. Of the 301 records selected, 273 complete reports were available. When pairs of reviewers agreed on the potential relevance of records, 81 per cent were eligible (range 69 to 91 per cent). If reviewers disagreed, 22 per cent were eligible (range 12 to 45 per cent). Single reviewers missed on average 8 per cent of eligible reports (range 0 to 24 per cent), whereas pairs of reviewers did not miss any (range 0 to 1 per cent). The use of two reviewers to screen records increased the number of randomized trials identified by an average of 9 per cent (range 0 to 32 per cent). Reviewers can reliably identify potentially relevant records when screening thousands of records for eligibility. Two reviewers should screen records for eligibility, whenever possible, in order to maximize ascertainment of relevant trials." 
}, { "pmid": "16357352", "title": "Reducing workload in systematic review preparation using automated citation classification.", "abstract": "OBJECTIVE\nTo determine whether automated classification of document citations can be useful in reducing the time spent by experts reviewing journal articles for inclusion in updating systematic reviews of drug class efficacy for treatment of disease.\n\n\nDESIGN\nA test collection was built using the annotated reference files from 15 systematic drug class reviews. A voting perceptron-based automated citation classification system was constructed to classify each article as containing high-quality, drug class-specific evidence or not. Cross-validation experiments were performed to evaluate performance.\n\n\nMEASUREMENTS\nPrecision, recall, and F-measure were evaluated at a range of sample weightings. Work saved over sampling at 95% recall was used as the measure of value to the review process.\n\n\nRESULTS\nA reduction in the number of articles needing manual review was found for 11 of the 15 drug review topics studied. For three of the topics, the reduction was 50% or greater.\n\n\nCONCLUSION\nAutomated document citation classification could be a useful tool in maintaining systematic reviews of the efficacy of drug therapy. Further work is needed to refine the classification system and determine the best manner to integrate the system into the production of systematic reviews." }, { "pmid": "24558353", "title": "Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap.", "abstract": "The current difficulties in keeping systematic reviews up to date leads to considerable inaccuracy, hampering the translation of knowledge into action. Incremental advances in conventional review updating are unlikely to lead to substantial improvements in review currency. A new approach is needed. We propose living systematic review as a contribution to evidence synthesis that combines currency with rigour to enhance the accuracy and utility of health evidence. Living systematic reviews are high quality, up-to-date online summaries of health research, updated as new research becomes available, and enabled by improved production efficiency and adherence to the norms of scholarly communication. Together with innovations in primary research reporting and the creation and use of evidence in health systems, living systematic review contributes to an emerging evidence ecosystem." }, { "pmid": "22515596", "title": "Studying the potential impact of automated document classification on scheduling a systematic review update.", "abstract": "BACKGROUND\nSystematic Reviews (SRs) are an essential part of evidence-based medicine, providing support for clinical practice and policy on a wide range of medical topics. However, producing SRs is resource-intensive, and progress in the research they review leads to SRs becoming outdated, requiring updates. Although the question of how and when to update SRs has been studied, the best method for determining when to update is still unclear, necessitating further research.\n\n\nMETHODS\nIn this work we study the potential impact of a machine learning-based automated system for providing alerts when new publications become available within an SR topic. Some of these new publications are especially important, as they report findings that are more likely to initiate a review update. 
To this end, we have designed a classification algorithm to identify articles that are likely to be included in an SR update, along with an annotation scheme designed to identify the most important publications in a topic area. Using an SR database containing over 70,000 articles, we annotated articles from 9 topics that had received an update during the study period. The algorithm was then evaluated in terms of the overall correct and incorrect alert rate for publications meeting the topic inclusion criteria, as well as in terms of its ability to identify important, update-motivating publications in a topic area.\n\n\nRESULTS\nOur initial approach, based on our previous work in topic-specific SR publication classification, identifies over 70% of the most important new publications, while maintaining a low overall alert rate.\n\n\nCONCLUSIONS\nWe performed an initial analysis of the opportunities and challenges in aiding the SR update planning process with an informatics-based machine learning approach. Alerts could be a useful tool in the planning, scheduling, and allocation of resources for SR updates, providing an improvement in timeliness and coverage for the large number of medical topics needing SRs. While the performance of this initial method is not perfect, it could be a useful supplement to current approaches to scheduling an SR update. Approaches specifically targeting the types of important publications identified by this work are likely to improve results." }, { "pmid": "22481134", "title": "Toward modernizing the systematic review pipeline in genetics: efficient updating via data mining.", "abstract": "PURPOSE\nThe aim of this study was to demonstrate that modern data mining tools can be used as one step in reducing the labor necessary to produce and maintain systematic reviews.\n\n\nMETHODS\nWe used four continuously updated, manually curated resources that summarize MEDLINE-indexed articles in entire fields using systematic review methods (PDGene, AlzGene, and SzGene for genetic determinants of Parkinson disease, Alzheimer disease, and schizophrenia, respectively; and the Tufts Cost-Effectiveness Analysis (CEA) Registry for cost-effectiveness analyses). In each data set, we trained a classification model on citations screened up until 2009. We then evaluated the ability of the model to classify citations published in 2010 as \"relevant\" or \"irrelevant\" using human screening as the gold standard.\n\n\nRESULTS\nClassification models did not miss any of the 104, 65, and 179 eligible citations in PDGene, AlzGene, and SzGene, respectively, and missed only 1 of 79 in the CEA Registry (100% sensitivity for the first three and 99% for the fourth). The respective specificities were 90, 93, 90, and 73%. Had the semiautomated system been used in 2010, a human would have needed to read only 605/5,616 citations to update the PDGene registry (11%) and 555/7,298 (8%), 717/5,381 (13%), and 334/1,015 (33%) for the other three databases.\n\n\nCONCLUSION\nData mining methodologies can reduce the burden of updating systematic reviews, without missing more papers than humans." 
}, { "pmid": "22677493", "title": "Screening nonrandomized studies for medical systematic reviews: a comparative study of classifiers.", "abstract": "OBJECTIVES\nTo investigate whether (1) machine learning classifiers can help identify nonrandomized studies eligible for full-text screening by systematic reviewers; (2) classifier performance varies with optimization; and (3) the number of citations to screen can be reduced.\n\n\nMETHODS\nWe used an open-source, data-mining suite to process and classify biomedical citations that point to mostly nonrandomized studies from 2 systematic reviews. We built training and test sets for citation portions and compared classifier performance by considering the value of indexing, various feature sets, and optimization. We conducted our experiments in 2 phases. The design of phase I with no optimization was: 4 classifiers × 3 feature sets × 3 citation portions. Classifiers included k-nearest neighbor, naïve Bayes, complement naïve Bayes, and evolutionary support vector machine. Feature sets included bag of words, and 2- and 3-term n-grams. Citation portions included titles, titles and abstracts, and full citations with metadata. Phase II with optimization involved a subset of the classifiers, as well as features extracted from full citations, and full citations with overweighted titles. We optimized features and classifier parameters by manually setting information gain thresholds outside of a process for iterative grid optimization with 10-fold cross-validations. We independently tested models on data reserved for that purpose and statistically compared classifier performance on 2 types of feature sets. We estimated the number of citations needed to screen by reviewers during a second pass through a reduced set of citations.\n\n\nRESULTS\nIn phase I, the evolutionary support vector machine returned the best recall for bag of words extracted from full citations; the best classifier with respect to overall performance was k-nearest neighbor. No classifier attained good enough recall for this task without optimization. In phase II, we boosted performance with optimization for evolutionary support vector machine and complement naïve Bayes classifiers. Generalization performance was better for the latter in the independent tests. For evolutionary support vector machine and complement naïve Bayes classifiers, the initial retrieval set was reduced by 46% and 35%, respectively.\n\n\nCONCLUSIONS\nMachine learning classifiers can help identify nonrandomized studies eligible for full-text screening by systematic reviewers. Optimization can markedly improve performance of classifiers. However, generalizability varies with the classifier. The number of citations to screen during a second independent pass through the citations can be substantially reduced." }, { "pmid": "24475099", "title": "Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence.", "abstract": "OBJECTIVES\nEvidence-based medicine depends on the timely synthesis of research findings. An important source of synthesized evidence resides in systematic reviews. However, a bottleneck in review production involves dual screening of citations with titles and abstracts to find eligible studies. For this research, we tested the effect of various kinds of textual information (features) on performance of a machine learning classifier. 
Based on our findings, we propose an automated system to reduce screeing burden, as well as offer quality assurance.\n\n\nMETHODS\nWe built a database of citations from 5 systematic reviews that varied with respect to domain, topic, and sponsor. Consensus judgments regarding eligibility were inferred from published reports. We extracted 5 feature sets from citations: alphabetic, alphanumeric(+), indexing, features mapped to concepts in systematic reviews, and topic models. To simulate a two-person team, we divided the data into random halves. We optimized the parameters of a Bayesian classifier, then trained and tested models on alternate data halves. Overall, we conducted 50 independent tests.\n\n\nRESULTS\nAll tests of summary performance (mean F3) surpassed the corresponding baseline, P<0.0001. The ranks for mean F3, precision, and classification error were statistically different across feature sets averaged over reviews; P-values for Friedman's test were .045, .002, and .002, respectively. Differences in ranks for mean recall were not statistically significant. Alphanumeric(+) features were associated with best performance; mean reduction in screening burden for this feature type ranged from 88% to 98% for the second pass through citations and from 38% to 48% overall.\n\n\nCONCLUSIONS\nA computer-assisted, decision support system based on our methods could substantially reduce the burden of screening citations for systematic review teams and solo reviewers. Additionally, such a system could deliver quality assurance both by confirming concordant decisions and by naming studies associated with discordant decisions for further consideration." }, { "pmid": "20595313", "title": "A new algorithm for reducing the workload of experts in performing systematic reviews.", "abstract": "OBJECTIVE\nTo determine whether a factorized version of the complement naïve Bayes (FCNB) classifier can reduce the time spent by experts reviewing journal articles for inclusion in systematic reviews of drug class efficacy for disease treatment.\n\n\nDESIGN\nThe proposed classifier was evaluated on a test collection built from 15 systematic drug class reviews used in previous work. The FCNB classifier was constructed to classify each article as containing high-quality, drug class-specific evidence or not. Weight engineering (WE) techniques were added to reduce underestimation for Medical Subject Headings (MeSH)-based and Publication Type (PubType)-based features. Cross-validation experiments were performed to evaluate the classifier's parameters and performance.\n\n\nMEASUREMENTS\nWork saved over sampling (WSS) at no less than a 95% recall was used as the main measure of performance.\n\n\nRESULTS\nThe minimum workload reduction for a systematic review for one topic, achieved with a FCNB/WE classifier, was 8.5%; the maximum was 62.2% and the average over the 15 topics was 33.5%. This is 15.0% higher than the average workload reduction obtained using a voting perceptron-based automated citation classification system.\n\n\nCONCLUSION\nThe FCNB/WE classifier is simple, easy to implement, and produces significantly better results in reducing the workload than previously achieved. The results support it being a useful algorithm for machine-learning-based automation of systematic reviews of drug class efficacy for disease treatment." 
}, { "pmid": "21084178", "title": "Exploiting the systematic review protocol for classification of medical abstracts.", "abstract": "OBJECTIVE\nTo determine whether the automatic classification of documents can be useful in systematic reviews on medical topics, and specifically if the performance of the automatic classification can be enhanced by using the particular protocol of questions employed by the human reviewers to create multiple classifiers.\n\n\nMETHODS AND MATERIALS\nThe test collection is the data used in large-scale systematic review on the topic of the dissemination strategy of health care services for elderly people. From a group of 47,274 abstracts marked by human reviewers to be included in or excluded from further screening, we randomly selected 20,000 as a training set, with the remaining 27,274 becoming a separate test set. As a machine learning algorithm we used complement naïve Bayes. We tested both a global classification method, where a single classifier is trained on instances of abstracts and their classification (i.e., included or excluded), and a novel per-question classification method that trains multiple classifiers for each abstract, exploiting the specific protocol (questions) of the systematic review. For the per-question method we tested four ways of combining the results of the classifiers trained for the individual questions. As evaluation measures, we calculated precision and recall for several settings of the two methods. It is most important not to exclude any relevant documents (i.e., to attain high recall for the class of interest) but also desirable to exclude most of the non-relevant documents (i.e., to attain high precision on the class of interest) in order to reduce human workload.\n\n\nRESULTS\nFor the global method, the highest recall was 67.8% and the highest precision was 37.9%. For the per-question method, the highest recall was 99.2%, and the highest precision was 63%. The human-machine workflow proposed in this paper achieved a recall value of 99.6%, and a precision value of 17.8%.\n\n\nCONCLUSION\nThe per-question method that combines classifiers following the specific protocol of the review leads to better results than the global method in terms of recall. Because neither method is efficient enough to classify abstracts reliably by itself, the technology should be applied in a semi-automatic way, with a human expert still involved. When the workflow includes one human expert and the trained automatic classifier, recall improves to an acceptable level, showing that automatic classification techniques can reduce the human workload in the process of building a systematic review." }, { "pmid": "23428470", "title": "A new iterative method to reduce workload in systematic review process.", "abstract": "High cost for systematic review of biomedical literature has generated interest in decreasing overall workload. This can be done by applying natural language processing techniques to 'automate' the classification of publications that are potentially relevant for a given question. Existing solutions need training using a specific supervised machine-learning algorithm and feature-extraction system separately for each systematic review. We propose a system that only uses the input and feedback of human reviewers during the course of review. As the reviewers classify articles, the query is modified using a simple relevance feedback algorithm, and the semantically closest document to the query is presented. 
An evaluation of our approach was performed using a set of 15 published drug systematic reviews. The number of articles that needed to be reviewed was substantially reduced (ranging from 6% to 30% for a 95% recall)." }, { "pmid": "22961102", "title": "A pilot study using machine learning and domain knowledge to facilitate comparative effectiveness review updating.", "abstract": "BACKGROUND\nComparative effectiveness and systematic reviews require frequent and time-consuming updating.\n\n\nRESULTS\nof earlier screening should be useful in reducing the effort needed to screen relevant articles.\n\n\nMETHODS\nWe collected 16,707 PubMed citation classification decisions from 2 comparative effectiveness reviews: interventions to prevent fractures in low bone density (LBD) and off-label uses of atypical antipsychotic drugs (AAP). We used previously written search strategies to guide extraction of a limited number of explanatory variables pertaining to the intervention, outcome, and\n\n\nSTUDY DESIGN\nWe empirically derived statistical models (based on a sparse generalized linear model with convex penalties [GLMnet] and a gradient boosting machine [GBM]) that predicted article relevance. We evaluated model sensitivity, positive predictive value (PPV), and screening workload reductions using 11,003 PubMed citations retrieved for the LBD and AAP updates. Results. GLMnet-based models performed slightly better than GBM-based models. When attempting to maximize sensitivity for all relevant articles, GLMnet-based models achieved high sensitivities (0.99 and 1.0 for AAP and LBD, respectively) while reducing projected screening by 55.4% and 63.2%. The GLMnet-based model yielded sensitivities of 0.921 and 0.905 and PPVs of 0.185 and 0.102 when predicting articles relevant to the AAP and LBD efficacy/effectiveness analyses, respectively (using a threshold of P ≥ 0.02). GLMnet performed better when identifying adverse effect relevant articles for the AAP review (sensitivity = 0.981) than for the LBD review (0.685). The system currently requires MEDLINE-indexed articles.\n\n\nCONCLUSIONS\nWe evaluated statistical classifiers that used previous classification decisions and explanatory variables derived from MEDLINE indexing terms to predict inclusion decisions. This pilot system reduced workload associated with screening 2 simulated comparative effectiveness review updates by more than 50% with minimal loss of relevant articles." }, { "pmid": "24954015", "title": "Reducing systematic review workload through certainty-based screening.", "abstract": "In systematic reviews, the growing number of published studies imposes a significant screening workload on reviewers. Active learning is a promising approach to reduce the workload by automating some of the screening decisions, but it has been evaluated for a limited number of disciplines. The suitability of applying active learning to complex topics in disciplines such as social science has not been studied, and the selection of useful criteria and enhancements to address the data imbalance problem in systematic reviews remains an open problem. We applied active learning with two criteria (certainty and uncertainty) and several enhancements in both clinical medicine and social science (specifically, public health) areas, and compared the results in both. The results show that the certainty criterion is useful for finding relevant documents, and weighting positive instances is promising to overcome the data imbalance problem in both data sets. 
Latent dirichlet allocation (LDA) is also shown to be promising when little manually-assigned information is available. Active learning is effective in complex topics, although its efficiency is limited due to the difficulties in text classification. The most promising criterion and weighting method are the same regardless of the review topic, and unsupervised techniques like LDA have a possibility to boost the performance of active learning without manual annotation." }, { "pmid": "27293211", "title": "Topic detection using paragraph vectors to support active learning in systematic reviews.", "abstract": "Systematic reviews require expert reviewers to manually screen thousands of citations in order to identify all relevant articles to the review. Active learning text classification is a supervised machine learning approach that has been shown to significantly reduce the manual annotation workload by semi-automating the citation screening process of systematic reviews. In this paper, we present a new topic detection method that induces an informative representation of studies, to improve the performance of the underlying active learner. Our proposed topic detection method uses a neural network-based vector space model to capture semantic similarities between documents. We firstly represent documents within the vector space, and cluster the documents into a predefined number of clusters. The centroids of the clusters are treated as latent topics. We then represent each document as a mixture of latent topics. For evaluation purposes, we employ the active learning strategy using both our novel topic detection method and a baseline topic model (i.e., Latent Dirichlet Allocation). Results obtained demonstrate that our method is able to achieve a high sensitivity of eligible studies and a significantly reduced manual annotation cost when compared to the baseline method. This observation is consistent across two clinical and three public health reviews. The tool introduced in this work is available from https://nactem.ac.uk/pvtopic/." }, { "pmid": "28648605", "title": "A semi-supervised approach using label propagation to support citation screening.", "abstract": "Citation screening, an integral process within systematic reviews that identifies citations relevant to the underlying research question, is a time-consuming and resource-intensive task. During the screening task, analysts manually assign a label to each citation, to designate whether a citation is eligible for inclusion in the review. Recently, several studies have explored the use of active learning in text classification to reduce the human workload involved in the screening task. However, existing approaches require a significant amount of manually labelled citations for the text classification to achieve a robust performance. In this paper, we propose a semi-supervised method that identifies relevant citations as early as possible in the screening process by exploiting the pairwise similarities between labelled and unlabelled citations to improve the classification performance without additional manual labelling effort. Our approach is based on the hypothesis that similar citations share the same label (e.g., if one citation should be included, then other similar citations should be included also). To calculate the similarity between labelled and unlabelled citations we investigate two different feature spaces, namely a bag-of-words and a spectral embedding based on the bag-of-words. 
The semi-supervised method propagates the classification codes of manually labelled citations to neighbouring unlabelled citations in the feature space. The automatically labelled citations are combined with the manually labelled citations to form an augmented training set. For evaluation purposes, we apply our method to reviews from clinical and public health. The results show that our semi-supervised method with label propagation achieves statistically significant improvements over two state-of-the-art active learning approaches across both clinical and public health reviews." }, { "pmid": "14872004", "title": "Finding scientific topics.", "abstract": "A first step in identifying the content of a document is determining which topics that document addresses. We describe a generative model for documents, introduced by Blei, Ng, and Jordan [Blei, D. M., Ng, A. Y. & Jordan, M. I. (2003) J. Machine Learn. Res. 3, 993-1022], in which each document is generated by choosing a distribution over topics and then choosing each word in the document from a topic selected according to this distribution. We then present a Markov chain Monte Carlo algorithm for inference in this model. We use this algorithm to analyze abstracts from PNAS by using Bayesian model selection to establish the number of topics. We show that the extracted topics capture meaningful structure in the data, consistent with the class designations provided by the authors of the articles, and outline further applications of this analysis, including identifying \"hot topics\" by examining temporal dynamics and tagging abstracts to illustrate semantic content." }, { "pmid": "15561789", "title": "Text categorization models for high-quality article retrieval in internal medicine.", "abstract": "OBJECTIVE Finding the best scientific evidence that applies to a patient problem is becoming exceedingly difficult due to the exponential growth of medical publications. The objective of this study was to apply machine learning techniques to automatically identify high-quality, content-specific articles for one time period in internal medicine and compare their performance with previous Boolean-based PubMed clinical query filters of Haynes et al. DESIGN The selection criteria of the ACP Journal Club for articles in internal medicine were the basis for identifying high-quality articles in the areas of etiology, prognosis, diagnosis, and treatment. Naive Bayes, a specialized AdaBoost algorithm, and linear and polynomial support vector machines were applied to identify these articles. MEASUREMENTS The machine learning models were compared in each category with each other and with the clinical query filters using area under the receiver operating characteristic curves, 11-point average recall precision, and a sensitivity/specificity match method. RESULTS In most categories, the data-induced models have better or comparable sensitivity, specificity, and precision than the clinical query filters. The polynomial support vector machine models perform the best among all learning methods in ranking the articles as evaluated by area under the receiver operating curve and 11-point average recall precision. CONCLUSION This research shows that, using machine learning methods, it is possible to automatically build models for retrieving high-quality, content-specific articles using inclusion or citation by the ACP Journal Club as a gold standard in a given time period in internal medicine that perform better than the 1994 PubMed clinical query filters." 
}, { "pmid": "7850570", "title": "Developing optimal search strategies for detecting clinically sound studies in MEDLINE.", "abstract": "OBJECTIVE\nTo develop optimal MEDLINE search strategies for retrieving sound clinical studies of the etiology, prognosis, diagnosis, prevention, or treatment of disorders in adult general medicine.\n\n\nDESIGN\nAnalytic survey of operating characteristics of search strategies developed by computerized combinations of terms selected to detect studies meeting basic methodologic criteria for direct clinical use in adult general medicine.\n\n\nMEASURES\nThe sensitivities, specificities, precision, and accuracy of 134,264 unique combinations of search terms were determined by comparison with a manual review of all articles (the \"gold standard\") in ten internal medicine and general medicine journals for 1986 and 1991.\n\n\nRESULTS\nLess than half of the studies of the topics of interest met basic criteria for scientific merit for testing clinical applications. Combinations of search terms reached peak sensitivities of 82% for sound studies of etiology, 92% for prognosis, 92% for diagnosis, and 99% for therapy in 1991. Compared with the best single terms, multiple terms increased sensitivity for sound studies by over 30% (absolute increase), but with some loss of specificity when sensitivity was maximized. For 1986, combinations reached peak sensitivities of 72% for etiology, 95% for prognosis, 86% for diagnosis, and 98% for therapy. When search terms were combined to maximize specificity, over 93% specificity was achieved for all purpose categories in both years. Compared with individual terms, combined terms achieved near-perfect specificity that was maintained with modest increases in sensitivity in all purpose categories except therapy. Increases in accuracy were achieved by combining terms for all purpose categories, with peak accuracies reaching over 90% for therapy in 1986 and 1991.\n\n\nCONCLUSIONS\nThe retrieval of studies of important clinical topics cited in MEDLINE can be substantially enhanced by selected combinations of indexing terms and textwords." }, { "pmid": "19567792", "title": "Cross-topic learning for work prioritization in systematic review creation and update.", "abstract": "OBJECTIVE\nMachine learning systems can be an aid to experts performing systematic reviews (SRs) by automatically ranking journal articles for work-prioritization. This work investigates whether a topic-specific automated document ranking system for SRs can be improved using a hybrid approach, combining topic-specific training data with data from other SR topics.\n\n\nDESIGN\nA test collection was built using annotated reference files from 24 systematic drug class reviews. A support vector machine learning algorithm was evaluated with cross-validation, using seven different fractions of topic-specific training data in combination with samples from the other 23 topics. This approach was compared to both a baseline system, which used only topic-specific training data, and to a system using only the nontopic data sampled from the remaining topics.\n\n\nMEASUREMENTS\nMean area under the receiver-operating curve (AUC) was used as the measure of comparison.\n\n\nRESULTS\nOn average, the hybrid system improved mean AUC over the baseline system by 20%, when topic-specific training data were scarce. The system performed significantly better than the baseline system at all levels of topic-specific training data. 
In addition, the system performed better than the nontopic system at all but the two smallest fractions of topic specific training data, and no worse than the nontopic system with these smallest amounts of topic specific training data.\n\n\nCONCLUSIONS\nAutomated literature prioritization could be helpful in assisting experts to organize their time when performing systematic reviews. Future work will focus on extending the algorithm to use additional sources of topic-specific data, and on embedding the algorithm in an interactive system available to systematic reviewers during the literature review process." }, { "pmid": "26104742", "title": "RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials.", "abstract": "OBJECTIVE\nTo develop and evaluate RobotReviewer, a machine learning (ML) system that automatically assesses bias in clinical trials. From a (PDF-formatted) trial report, the system should determine risks of bias for the domains defined by the Cochrane Risk of Bias (RoB) tool, and extract supporting text for these judgments.\n\n\nMETHODS\nWe algorithmically annotated 12,808 trial PDFs using data from the Cochrane Database of Systematic Reviews (CDSR). Trials were labeled as being at low or high/unclear risk of bias for each domain, and sentences were labeled as being informative or not. This dataset was used to train a multi-task ML model. We estimated the accuracy of ML judgments versus humans by comparing trials with two or more independent RoB assessments in the CDSR. Twenty blinded experienced reviewers rated the relevance of supporting text, comparing ML output with equivalent (human-extracted) text from the CDSR.\n\n\nRESULTS\nBy retrieving the top 3 candidate sentences per document (top3 recall), the best ML text was rated more relevant than text from the CDSR, but not significantly (60.4% ML text rated 'highly relevant' v 56.5% of text from reviews; difference +3.9%, [-3.2% to +10.9%]). Model RoB judgments were less accurate than those from published reviews, though the difference was <10% (overall accuracy 71.0% with ML v 78.3% with CDSR).\n\n\nCONCLUSION\nRisk of bias assessment may be automated with reasonable accuracy. Automatically identified text supporting bias assessment is of equal quality to the manually identified text in the CDSR. This technology could substantially reduce reviewer workload and expedite evidence syntheses." }, { "pmid": "25656516", "title": "Automated confidence ranked classification of randomized controlled trial articles: an aid to evidence-based medicine.", "abstract": "OBJECTIVE\nFor many literature review tasks, including systematic review (SR) and other aspects of evidence-based medicine, it is important to know whether an article describes a randomized controlled trial (RCT). Current manual annotation is not complete or flexible enough for the SR process. In this work, highly accurate machine learning predictive models were built that include confidence predictions of whether an article is an RCT.\n\n\nMATERIALS AND METHODS\nThe LibSVM classifier was used with forward selection of potential feature sets on a large human-related subset of MEDLINE to create a classification model requiring only the citation, abstract, and MeSH terms for each article.\n\n\nRESULTS\nThe model achieved an area under the receiver operating characteristic curve of 0.973 and mean squared error of 0.013 on the held out year 2011 data. Accurate confidence estimates were confirmed on a manually reviewed set of test articles. 
A second model not requiring MeSH terms was also created, and performs almost as well.\n\n\nDISCUSSION\nBoth models accurately rank and predict article RCT confidence. Using the model and the manually reviewed samples, it is estimated that about 8000 (3%) additional RCTs can be identified in MEDLINE, and that 5% of articles tagged as RCTs in Medline may not be identified.\n\n\nCONCLUSION\nRetagging human-related studies with a continuously valued RCT confidence is potentially more useful for article ranking and review than a simple yes/no prediction. The automated RCT tagging tool should offer significant savings of time and effort during the process of writing SRs, and is a key component of a multistep text mining pipeline that we are building to streamline SR workflow. In addition, the model may be useful for identifying errors in MEDLINE publication types. The RCT confidence predictions described here have been made available to users as a web service with a user query form front end at: http://arrowsmith.psych.uic.edu/cgi-bin/arrowsmith_uic/RCT_Tagger.cgi." }, { "pmid": "26659355", "title": "Machine learning to assist risk-of-bias assessments in systematic reviews.", "abstract": "BACKGROUND\nRisk-of-bias assessments are now a standard component of systematic reviews. At present, reviewers need to manually identify relevant parts of research articles for a set of methodological elements that affect the risk of bias, in order to make a risk-of-bias judgement for each of these elements. We investigate the use of text mining methods to automate risk-of-bias assessments in systematic reviews. We aim to identify relevant sentences within the text of included articles, to rank articles by risk of bias and to reduce the number of risk-of-bias assessments that the reviewers need to perform by hand.\n\n\nMETHODS\nWe use supervised machine learning to train two types of models, for each of the three risk-of-bias properties of sequence generation, allocation concealment and blinding. The first model predicts whether a sentence in a research article contains relevant information. The second model predicts a risk-of-bias value for each research article. We use logistic regression, where each independent variable is the frequency of a word in a sentence or article, respectively.\n\n\nRESULTS\nWe found that sentences can be successfully ranked by relevance with area under the receiver operating characteristic (ROC) curve (AUC) > 0.98. Articles can be ranked by risk of bias with AUC > 0.72. We estimate that more than 33% of articles can be assessed by just one reviewer, where two reviewers are normally required.\n\n\nCONCLUSIONS\nWe show that text mining can be used to assist risk-of-bias assessments." }, { "pmid": "28806999", "title": "Trading certainty for speed - how much uncertainty are decisionmakers and guideline developers willing to accept when using rapid reviews: an international survey.", "abstract": "BACKGROUND\nDecisionmakers and guideline developers demand rapid syntheses of the evidence when time sensitive evidence-informed decisions are required. A potential trade-off of such rapid reviews is that their results can have less reliability than results of systematic reviews that can lead to an increased risk of making incorrect decisions or recommendations. 
We sought to determine how much incremental uncertainty about the correctness of an answer guideline developers and health policy decisionmakers are willing to accept in exchange for a rapid evidence-synthesis.\n\n\nMETHODS\nEmploying a purposive sample, we conducted an international web-based, anonymous survey of decisionmakers and guideline developers. Based on a clinical treatment, a public health, and a clinical prevention scenario, participants indicated the maximum risk of getting an incorrect answer from a rapid review that they would be willing to accept. We carefully reviewed data and performed descriptive statistical analyses.\n\n\nRESULTS\nIn total, 325 (58.5%) of 556 participants completed our survey and were eligible for analysis. The median acceptable incremental risk for getting an incorrect answer from a rapid review across all three scenarios was 10.0% (interquartile range [IQR] 5.0-15.0). Acceptable risks were similar for the clinical treatment (n = 313, median 10.0% [IQR 5.0-15.0]) and the public health scenarios (n = 320, median 10.0% [IQR 5.0-15.0]) and lower for the clinical prevention scenario (n = 312, median 6.5% [IQR 5.0-10.5]).\n\n\nCONCLUSIONS\nFindings suggest that decisionmakers are willing to accept some trade-off in validity in exchange for a rapid review. Nevertheless, they expect the validity of rapid reviews to come close to that of systematic reviews." } ]